## Abstract

Toward the goal of achieving broadband and omnidirectional invisibility, we propose a method for practical invisibility cloaking. We call this “digital cloaking,” where space, angle, spectrum, and phase are discretized. Experimentally, we demonstrate a two-dimensional (2D) planar, ray optics, digital cloak by using lenticular lenses, similar to “integral imaging” for three-dimensional (3D) displays. Theoretically, this can be extended to a good approximation of an “ideal” 3D cloak. With continuing improvements in commercial digital technology, the resolution limitations of a digital cloak can be minimized.

© 2016 Optical Society of America

## 1. INTRODUCTION

An “ideal” invisibility cloak can be considered to be broadband, omnidirectional, 3D, macroscopic, and operational in the visible spectrum, and have phase matching for the full field of light [1]. Scientific research into invisibility cloaking gained momentum with the initial omnidirectional cloaking designs that used artificial materials (“metamaterials”) [2,3]. These guide electromagnetic waves around a hidden object, using metamaterials that are engineered with coordinate transformations, so they are called “transformation optics” cloaks. Many interesting designs have resulted from transformation optics, but due to their narrow bandwidth, anisotropy, and manufacturing difficulties, practical cloaks have been challenging to build [4].

Broad bandwidth and omnidirectionality appear to be the main competing elements for ideal invisibility cloaking, as both seem unachievable simultaneously [5,6]. Thus, to demonstrate cloaking, researchers have relaxed these or other ideal characteristics. Some of these efforts include broadband “carpet cloaks” for visible light on reflective surfaces [7], unidirectional phase-matching cloaks [8], macroscopic ray optics cloaking [9,10], a cylindrical cloak for light through a diffusive medium [11], and a cloak that achieves all of these in the small-angle regime [6].

In this work we propose “digital cloaking,” by discretizing space, angle, spectrum, and phase, as an approximation to ideal cloaking. Since detectors, including imaging systems such as our eyes, are limited in resolution (spatially and temporally), digital cloaking can appear to be omnidirectional for a broad spectrum when observed. In fact, discretization of space is inherent in nature, with atoms and molecules making up matter. Even metamaterial cloaking relies on discrete structures that are subwavelength in scale, to generate an averaging effect for the operational wavelength(s) [1]. Della Giovampaola and Engheta went further and proposed digitizing metamaterials, by using just two types of “metamaterial bits” to make exotic lenses and designs [12]. For our digital cloak, we simply propose that the discretization be larger than atomic or wavelength scales, on the order of the resolution limits of the observer, be it biological or a machine/device. For human visual acuity, resolution finer than about 30 arcsec is sufficient [13].

The digital cloak we demonstrate is an “active” device that requires external power input. However, passive discretized cloaking is also possible (see Supplement 1 for this, along with a lensless version of a digital cloak). Active cloaks have been proposed before, where the incoming signals are known *a priori* or detected quickly, so that outgoing signals from antennas cancel the incoming wave(s) [14]. Other active cloaks, which compensate for absorption and increase bandwidth, include using active metamaterial surfaces for dominant scattering cancellation or using electronic circuits for acoustic cloaks [5]. These rely on custom-engineered material, whereas our digital cloaks can use commercially available technology that is improving independently of any cloaking efforts. We believe this will be an advantage for scaling and implementation.

## 2. DIGITAL INTEGRAL CLOAKING THEORY

Invisibility cloaking makes the cloaked object appear transparent, as if the light fields exited the cloaked space *without* anything in it. It is a form of illusion, where light bends around the cloaked space, but re-forms afterward to appear as if it had never bent. This allows both the cloaked object and the cloaking device to not only be hidden but appear transparent [3,10].

We first make a “ray optics” approximation, where the full phase of the electromagnetic field of light is not necessarily matched (we later discuss how to remove this approximation). For imaging, whether by camera or by the human eye, the phase is typically not detectable, which is why ray tracing is usually sufficient for designing imaging devices. Ray optics cloaking can be considered a discretization of spectrum and phase for a given ray, since its phase (modulo $2\pi $) will match for one or more discrete frequencies, or discrete phase values can be matched for a given frequency. Ray optics alone significantly reduces the complexities of cloaking, such that isotropic, off-the-shelf materials can be used to build macroscopic cloaks [10].

To build large-field-of-view (FOV) cloaks that are practical, we then discretize space and momentum (or angle). We call this method of cloaking “discretized cloaking.” Since detectors including the human eye have finite resolution, discretization can be unnoticeable. Figure 1(a) shows an example, where each discretization in space is called a “superpixel.” Each spatial superpixel can contain separate pixels that detect and/or display discrete ray angles. Additional “pixels” may also be necessary for other ray characteristics. Discretized cloaking allows digital imaging and display technologies to be placed on the surface of the cloak. Utilizing such digital technology for cloaking is what we call “digital cloaking.” Strictly speaking, digital cloaking may discretize the spectrum of frequencies further than just the ray optics approximation. For example, some digital displays might only show red, green, and blue (RGB “subpixels”), so additional pixels/subpixels for discrete color may be required.

Implementing a cloak requires us to propagate the rays from input to output correctly. This can be done using the “paraxial cloaking” matrix (Eq. (1) of Ref. [10]), since the final ABCD matrix is still valid outside of the “paraxial” (small-angle) regime. Given a transverse position ${x}_{i}$, angle ${\theta}_{i}$, and longitudinal position ${z}_{i}$ of the input ray [see Fig. 1(b)], the output ray is given by (with the same variable names, but with the subscript “$f$”)

$$\begin{pmatrix} {x}_{f} \\ n\tan {\theta}_{f} \end{pmatrix} = \begin{pmatrix} 1 & L/n \\ 0 & 1 \end{pmatrix} \begin{pmatrix} {x}_{i} \\ n\tan {\theta}_{i} \end{pmatrix}. \tag{1}$$

We have assumed rotational symmetry about the center axis ($\mathbf{z}$) and that the ambient medium has refractive index $n$. Note that $L=({z}_{f}-{z}_{i})$ is constant for the planar cloak in Fig. 1(b).
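To make the ray bookkeeping concrete, the propagation of Eq. (1) reduces to free-space translation of each ray across the cloak, and can be sketched in a few lines of Python (the function name and the sample ray are our illustration, not code from the demonstration):

```python
import math

def propagate_ray(x_i, theta_i_deg, L, n=1.0):
    """Propagate one ray across the cloak per Eq. (1): the cloak acts as
    free space of length L, so the angle is unchanged and the transverse
    position shifts by (L/n) * tan(theta)."""
    x_f = x_i + (L / n) * math.tan(math.radians(theta_i_deg))
    theta_f_deg = theta_i_deg  # output angle equals input angle
    return x_f, theta_f_deg

# A ray entering on-axis (x = 0 cm) at 10 degrees, with L = 13 cm:
x_f, theta_f = propagate_ray(0.0, 10.0, 13.0)
```

Because the matrix is written in terms of $n\tan\theta$ rather than $n\theta$, this mapping remains exact outside the small-angle regime.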

To detect and reproduce proper ray positions and angles, we can utilize Shack–Hartmann wavefront sensors, or “fly’s eye” lens arrays. These use arrays of small lenses to spatially separate rays of different angles [see Fig. 1(a)]. Remarkably, Lippmann proposed using this concept in 1908, and attempted to demonstrate this “integral photography” (or “integral imaging”) with limited technology [15]. Resolution, depth of field, and limited viewing angles are typically drawbacks for these “integral” 3D displays, but improvements are being made [16]. With current commercial efforts to increase the pixel density of displays, we anticipate resolution will improve continually. For cloaking, we can use lens arrays on a digital display panel to generate the desired ray output pattern according to Eq. (1).

Microlenslet arrays have been suggested previously for transformation optics by the Courtial group [17]. They use two pairs of lenslet arrays in a confocal setting (both focal planes overlapping), as a “window” that can refract light passing through. They have suggested using these pairs of arrays as the building blocks for a passive cloaking device, where the object inside appears shrunk. So far, they have only simulated such effects [18].

We use the term “integral cloaking” for cloaking that uses integral imaging techniques, and “digital integral cloaking” for integral cloaking using digital technology. An example of its implementation is shown in Fig. 1(b). For purposes of demonstration, we simplified with only two parallel plates and two lenslet arrays, where we captured rays with one plate and displayed rays with the other. To simplify the required equipment, we also limited our cloak to 2D, where observers are at a fixed height, and move only in the plane horizontal to the floor [$\mathbf{x}$–$\mathbf{z}$ plane in Fig. 1(b)]. Since both eyes of an observer typically lie on the same horizontal plane, stereoscopic depth can still be perceived with the 2D version. Integral cloaking in the vertical plane follows the same principles, just rotated, so that in algorithm and in theory, 2D cloaking extends to 3D in a relatively straightforward manner.

## 3. DIGITAL INTEGRAL CLOAK DEMONSTRATION

Figures 2(a) and 2(b) show the setup for our 2D digital integral cloak. The input plane (input camera sensor on a slider) and display screen were separated by $L = 13\ \mathrm{cm}$. The cloakable volume behind the active display screen was then $\sim 2500\ \mathrm{cm}^{3}$. The background objects consisted of four sets of colored blocks, the total depth of the object space (from the input plane) being 90 cm (see Supplement 1 for details of the setup). Rays from the input camera are “propagated” by a computer to the output.

For the image capture (input) plane, we used a digital camera (Sony DSC-RX10), mounted on a mechanical slider that scans horizontally at a fixed speed (see Visualization 2 and Visualization 3). Each camera frame represented a single lenslet and superpixel [of the input surface in Fig. 1(b)] located at the instantaneous camera position (${x}_{i},{y}_{i}$). The camera image pixels then corresponded to the detector pixels, as shown in Fig. 1(a). From the camera FOV, we could then calculate the input ray angles (${\theta}_{i}$) for these pixels. Knowing the input ray position and angle, a computer then propagated the ray to the correct output pixel using Eq. (1).
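The per-ray computation just described (camera position and pixel angle in, output pixel out) can be sketched as follows. The parameter values loosely echo our setup, but the function, the superpixel pitch convention, and the rounding choices are our illustration, not the demonstration code:

```python
import math

# Illustrative parameters, loosely based on the demonstration values.
L = 13.0              # cm, input-to-output plane separation
FOV_L = 29.0          # deg, lenslet FOV limiting the system
N_VIEWS = 51          # discrete views spread across FOV_L
SUPERPIXEL_W = 0.134  # cm, output superpixel (lenslet) pitch

def output_pixel(x_i_cm, theta_i_deg, n=1.0):
    """Map an input ray (camera position, pixel angle) to an output
    (superpixel index, discrete view index) pair via Eq. (1)."""
    if abs(theta_i_deg) > FOV_L / 2:
        return None  # ray falls outside the lenslet FOV
    # Propagate the ray across the cloak: angle unchanged, position shifted.
    x_f = x_i_cm + (L / n) * math.tan(math.radians(theta_i_deg))
    superpixel = round(x_f / SUPERPIXEL_W)
    # Quantize the (unchanged) angle onto the discrete output views.
    view = round((theta_i_deg + FOV_L / 2) / FOV_L * (N_VIEWS - 1))
    return superpixel, view
```

An on-axis ray at normal incidence, `output_pixel(0.0, 0.0)`, lands on the central superpixel and the middle view; rays steeper than the lenslet FOV are discarded.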

For 2D, not only was a scanning camera easier to obtain than a combination of lenslet and detector arrays [input surface of Fig. 1(b)], but it had improved performance. This is because a continuous scan gave a horizontal spatial resolution of 0.106 mm in camera positions. This was about 10 times better than the horizontal spatial resolution of our final system (1.34 mm), which was set by the output lenslet array (see Supplement 1). In addition, commercial cameras are highly aberration-corrected, whereas lenslet arrays usually have little, if any, correction; so the former have sharp images, for both input and output.

The benefits of our horizontal scanning method came at the cost of a delay in time. For our setup (Fig. 2), the input scan required 29 s, and the computational processing required 22 s on the laptop that ran our code. We required additional time to test and transfer data, but with proper hardware interfacing, this can be automated with little delay. Both scan and processing times increase with the dimensions of the cloakable volume. For example, the horizontal scan distance required is (${W}_{s}+2L\text{\hspace{0.17em}}\mathrm{tan}({\mathrm{FOV}}_{l}/2)$). Here, ${W}_{s}$ is the active screen width of the output display, and ${\mathrm{FOV}}_{l}$ is the FOV of the output lenslet array. Subjective quality requirements of the cloak can dictate the speed as well. A 3D version would require raster scanning over a 2D ($\mathbf{x}\u2013\mathbf{y}$) plane, which can be difficult and time consuming if using a single camera. Thus, for real-time or 3D digital cloaking, using a 2D array of detectors combined with a fly’s eye lenslet array [Fig. 1(b)] for the input surface would be a practical, though likely costly, approach.
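The scan-distance formula can be checked numerically. Here the $L$ and lenslet FOV are the demonstration's values, while the $\sim$12 cm active screen width is an assumed figure for illustration only:

```python
import math

def scan_distance(W_s_cm, L_cm, fov_l_deg):
    """Horizontal scan distance needed to capture every ray the output
    screen must display: W_s + 2 * L * tan(FOV_l / 2)."""
    return W_s_cm + 2.0 * L_cm * math.tan(math.radians(fov_l_deg) / 2.0)

# L = 13 cm and a 29 degree lenslet FOV, with an assumed 12 cm screen width:
d = scan_distance(12.0, 13.0, 29.0)  # ~18.7 cm of camera travel
```

The extra $2L\tan(\mathrm{FOV}_l/2)$ term accounts for oblique rays that exit near the screen edges but enter the input plane well outside the screen's footprint.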

We now describe the display (output) plane of our cloak. For the output display, we used a 20 cm (diagonal) LCD monitor (Apple iPad mini 4). Our output lenslet array was a 2D cylindrical lenslet array (20-lenses-per-inch array from Micro Lens Technology). Both display monitor and lenslet array were commercially available. For a 3D integral cloak, a fly’s eye lens array should replace the cylindrical lenslet array. By slanting the cylindrical lenses, we utilized the 3 RGB subpixels to gain 3 times the horizontal angular resolution (in number of “views”), at the sacrifice of vertical resolution [16]. Our output system generated 51.5 discrete “views” over 29° of viewing angles (FOV), horizontally. This 29° was the FOV of the lenslet array (${\mathrm{FOV}}_{l}$), and limited the cone of angles for both the output and input of our cloaking system, since the input camera FOV was larger ($\sim 60^\circ$). Each “view” corresponds to a discrete ray angle/momentum [one pixel in Fig. 1(a)] that is displayed for our system. This determined the output angular resolution of our cloaking system, giving 0.56° between neighboring views. Note that this output angular resolution of the digital integral cloak is how much an observer must move to see a change in image (corresponding to the subsequent “view”). So smaller angular resolution values provide more continuous viewing, and allow farther observation distances, than larger values.
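The quoted 0.56° follows directly from dividing the lenslet FOV among the discrete views. The observer-distance estimate below is our own rough extrapolation, additionally assuming a $\sim$6 cm eye separation; it is not a figure from the demonstration:

```python
import math

# Output angular resolution: the lenslet FOV shared among discrete views.
fov_l_deg = 29.0   # lenslet array FOV
n_views = 51.5     # discrete views (3x horizontal gain from slanted RGB subpixels)
ang_res_deg = fov_l_deg / n_views  # ~0.56 deg between neighboring views

# Rough distance out to which two eyes (~6 cm apart, an assumed value)
# still subtend more than one view, i.e., still perceive parallax:
eye_sep_cm = 6.0
max_obs_dist_cm = eye_sep_cm / math.tan(math.radians(ang_res_deg))
```

By this estimate, finer angular resolution pushes the useful observation range out to several meters, consistent with the qualitative claim that smaller angular steps allow farther observation distances.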

Figures 2(c)–2(f) show a horizontal ($\mathbf{x}$) demonstration of this 2D digital integral cloak. An “observer” camera at a fixed height ($y$) near the center of the cloak, and fixed distance $z$ from the cloak, was placed on a slider to scan horizontally ($x$). This camera was 260 cm from the display screen (cloak). Figures 2(c)–2(f) show 10.8° of the total 13.4° viewing range of Visualization 1. The objects behind the cloak match in horizontal alignment, size (magnification), and parallax motion for varying object depths (from the cloak). As expected for real 3D scenery, the objects that are farther from the screen move across the cloaking screen quicker than those closer to the screen.

The vertical magnification was matched for a particular observer distance and object depth combination, since this was a 2D cloak with cylindrical lenses. In our case, from the observation distances used in Figs. 2(c)–2(f), the vertical sizes of objects near the farthest blocks (dark green) and red blocks were roughly matched. If spherical fly’s eye lenslet arrays are used for a full 3D integral cloak, the vertical alignment and magnification can match for all object and observer distances, in theory.

Figures 3(a)–3(d) show a longitudinal ($\mathbf{z}$) demonstration of our digital integral cloak, by varying observation distances away from the cloaking screen. The horizontal FOVs occupied by the cloaking screen, from the observer camera, were 2.53°, 2.93°, 3.38°, and 4.59°, for Figs. 3(a)–3(d), respectively (assumptions in Supplement 1). This is the range of angles (“views”) of the light rays that the observer camera captures. As an observer moves closer to the cloak [from Fig. 3(a) to Fig. 3(d)], a larger range of angles is seen. This corresponds to a larger spatial amount of the background scene being shown by the cloak (horizontally). For a cloaking system, which should appear as if absent (transparent), this is as expected.

Finally, we characterize our digital integral cloak with additional quality metrics (details and additional factors in Supplement 1). Since ours was a 2D demonstration, we limited our analysis to the horizontal ($\mathbf{x}$) and longitudinal ($\mathbf{z}$) dimensions. The horizontal input angular resolution for our system was 0.031°, which corresponds to the uncertainty in the input ray angles. (Recall that the output angular resolution was 0.56°.) To provide sufficient depth of field, we stopped down our input camera to an $f$-number of $f/10$. The resulting input aperture diameter was then 0.88 mm [the effective lenslet diameter in Fig. 1(a)]. This corresponds to the range of transverse spatial positions of the objects that are captured for each detector pixel of the input camera. Comparatively, the output aperture was 1.34 mm.
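The aperture figure is consistent with the standard $f$-number relation $D = f/N$. The 8.8 mm focal length below is inferred by us from the quoted numbers, not stated in the text:

```python
# Aperture diameter from the f-number relation D = f / N.
focal_length_mm = 8.8  # assumed: inferred from the quoted 0.88 mm at f/10
f_number = 10.0
aperture_mm = focal_length_mm / f_number  # effective input lenslet diameter
```

Stopping down (raising $N$) shrinks this diameter, extending depth of field at the cost of collected light.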

As shown in Visualization 3 and Supplement 1, our demonstrated depth of field was over 60 cm, such that all the objects we demonstrated for the cloak (Figs. 2 and 3) were at least in good focus when collected for input. The input camera was not the limiting factor here, as we could achieve a depth of field of several meters, but the display (output) surface limited the resolution required to display object depths clearly. The spatial sensitivity of our slanted lenslet array to misalignment on the display is such that a 0.026 mm shift in position will change the “view” seen. The angular sensitivity of the lenslet array alignment with respect to the display screen pixels was $(8.8\times 10^{-3})^\circ$.

## 4. EXTENDING TO A DISCRETIZED IDEAL CLOAK

Our digital integral cloak can be extended to approximate an ideal cloak, by making it omnidirectional and phase matching. Figure 4(a) shows an ideal, spherically symmetric cloak and some rays that enter and exit it. We assume rotational symmetry (about $\mathbf{z}$), so only the cross section of the spherical cloak is shown. For simplicity, only rays with one angle are shown, but spherical symmetry implies that the cloak will work for all angles (omnidirectional). The dashed arrows show how the rays should *appear* to have traveled inside the cloak, which is to exit as if each ray propagated through the cloak in a straight line. In reality, the rays within the cloak should curve *around* an object or space that is intended to be invisible.

Building an omnidirectional cloak has been elusive to demonstrate, even for ray optics. However, with discretized cloaking, we can approximate omnidirectionality, as shown in Fig. 4(b). Whereas $L$ was constant for our demonstration (Fig. 1), in Fig. 4(b) each ray has its own longitudinal distance $L=({z}_{f}-{z}_{i})$, which is now dependent on its input and output planes for the cloak. Although Fig. 4(b) shows a cloak that is circular in 2D, or spherical in 3D, arbitrarily shaped discretized cloaks are possible. For cloaks with general shapes, Eq. (1) as given can be applied for each 2D plane containing the $\mathbf{z}$ axis.
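For the circular cross section of Fig. 4(b), each ray's longitudinal distance $L$ is simply the chord it would traverse if traveling straight through. A minimal sketch (function name ours) for rays parallel to the $\mathbf{z}$ axis; rays at other angles follow by rotating coordinates so the ray defines the new $\mathbf{z}$ axis:

```python
import math

def ray_span(x, R):
    """Longitudinal span L = z_f - z_i of a z-parallel ray crossing a
    circular (2D cross-section) cloak of radius R at transverse offset x,
    with |x| < R. The entry and exit points sit at z = -/+ sqrt(R^2 - x^2),
    so the chord length is twice that half-span."""
    return 2.0 * math.sqrt(R * R - x * x)
```

A central ray through a radius-5 cloak sees the full diameter (`ray_span(0, 5)` gives 10), while off-center rays see shorter chords, so $L$ varies ray by ray exactly as the figure indicates.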

The phase of the light fields can be matched by including properly engineered materials for a fixed-shape cloak, or spatial light modulator arrays for a cloak with dynamic shapes. If we assume each pixel (or subpixel) corresponds to a single ray position, angle, and frequency, it is straightforward to trace an input pixel to its output pixel [Eq. (1)]. Each pair is then a unidirectional propagation from input pixel to output pixel (dashed lines in Figs. 1 and 4), with respect to a new $\mathbf{z}$ axis. This allows the paraxial full-field cloaking theory to be used for each pixel pair, to calculate the phase and dispersion necessary for phase matching of light fields [6]. This assumption/approximation becomes increasingly accurate as the cloak pixel size decreases.

## 5. DISCUSSION

Our digital cloak demonstration was dynamic so that a changing background could be displayed properly, after a finite lag time for scanning and processing. Work to make a real-time cloak is under way. Depending on the length scales for how the cloak is to be observed, the requirements for detection and output can change. For example, if the observer is far away from the cloak, then large screens with low resolution can be sufficient. Lastly, with increased computational power and refined resolution, digital cloaking can be adapted to be wearable. Sensors can determine the position and orientation for each pixel, and a processor can calculate the correct ray propagation [Eq. (1)]. This will allow for cloaks that are dynamic in shape.

In conclusion, to approximate an ideal cloak for practical observation, we have proposed discretized cloaking. In particular, we have demonstrated a 2D digital integral cloak for ray optics, by using commercially available technologies. Our demonstration had 0.56° angular resolution over 29° FOV, and spatial resolution of 1.34 mm, limited by the output system. The principles for generating a 3D integral cloak follow easily, and we have suggested how to match the phase of the light fields. Digital cloaking has potential for implementation as a wearable cloak, since the technology required continues to improve commercially.

## Funding

Army Research Office (ARO) (W911NF-12-1-0263); Defense Sciences Office, DARPA (DSO, DARPA) (W31P4Q-12-1-0015).

## Acknowledgment

The authors thank Aaron Bauer and Greg Schmidt for discussions on resolution measurements.

See Supplement 1 for supporting content.

## REFERENCES

**1. **G. Gbur, “Invisibility physics: past, present, and future,” Prog. Opt. **58**, 65–114 (2013).

**2. **J. B. Pendry, D. Schurig, and D. R. Smith, “Controlling electromagnetic fields,” Science **312**, 1780–1782 (2006).

**3. **U. Leonhardt, “Optical conformal mapping,” Science **312**, 1777–1780 (2006).

**4. **M. McCall, “Transformation optics and cloaking,” Contemp. Phys. **54**, 273–286 (2013).

**5. **R. Fleury, F. Monticone, and A. Alu, “Invisibility and cloaking: origins, present, and future perspectives,” Phys. Rev. Appl. **4**, 037001 (2015).

**6. **J. S. Choi and J. C. Howell, “Paraxial full-field cloaking,” Opt. Express **23**, 15857–15862 (2015).

**7. **J. S. Li and J. B. Pendry, “Hiding under the carpet: a new strategy for cloaking,” Phys. Rev. Lett. **101**, 203901 (2008).

**8. **N. Landy and D. R. Smith, “A full-parameter unidirectional metamaterial cloak for microwaves,” Nat. Mater. **12**, 25–28 (2013).

**9. **J. C. Howell, J. B. Howell, and J. S. Choi, “Amplitude-only, passive, broadband, optical spatial cloaking of very large objects,” Appl. Opt. **53**, 1958–1963 (2014).

**10. **J. S. Choi and J. C. Howell, “Paraxial ray optics cloaking,” Opt. Express **22**, 29465–29478 (2014).

**11. **R. Schittny, M. Kadic, T. Bueckmann, and M. Wegener, “Invisibility cloaking in a diffusive light scattering medium,” Science **345**, 427–429 (2014).

**12. **C. Della Giovampaola and N. Engheta, “Digital metamaterials,” Nat. Mater. **13**, 1115–1121 (2014).

**13. **M. Bass, J. M. Enoch, and V. Lakshminarayanan, “Vision and vision optics,” in *Handbook of Optics*, 3rd ed. (McGraw-Hill, 2010), Vol. 3.

**14. **F. G. Vasquez, G. W. Milton, and D. Onofrei, “Active exterior cloaking for the 2D Laplace and Helmholtz equations,” Phys. Rev. Lett. **103**, 073901 (2009).

**15. **G. Lippmann, “Epreuves reversibles. Photographies integrales,” C. R. Acad. Sci. **146**, 446–451 (1908).

**16. **J. Geng, “Three-dimensional display technologies,” Adv. Opt. Photon. **5**, 456–535 (2013).

**17. **A. C. Hamilton and J. Courtial, “Generalized refraction using lenslet arrays,” J. Opt. A **11**, 065502 (2009).

**18. **S. Oxburgh, C. D. White, G. Antoniou, E. Orife, and J. Courtial, “Transformation optics with windows,” Proc. SPIE **9193**, 91931E (2014).