We study the ray optics of generalized lenses (glenses), which are ideal thin lenses generalized to have different object- and image-sided focal lengths, and the most general light-ray-direction-changing surfaces that stigmatically image any point in object space to a corresponding point in image space. Gabor superlenses [UK patent 541,753 (1940); J. Opt. A 1, 94 (1999)] can be seen as pixelated realizations of glenses. Our analysis is centered on the nodal point. Whereas the nodal point of a thin lens always resides in the lens plane, that of a glens can reside anywhere on the optical axis. Utilizing the nodal point, we derive simple equations that describe the mapping between object and image space and the light-ray-direction change. We demonstrate our findings with the help of ray-tracing simulations. Glenses allow novel optical instruments to be realized, at least theoretically, and our results facilitate the design and analysis of such devices.
Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
The thin-lens approximation is an idealized description of ray-optical imaging with lenses. The ideal thin lens is considered planar and, as the name suggests, of zero thickness, and any two or more light rays that intersect at a point (the object) before transmission through the lens ("in object space") intersect again at a point (the image) after transmission through the lens ("in image space"). In each case, the definition of light ray includes not only the straight-line segment light travels before or after intersecting the lens but also the straight-line continuation on either side.
Previously, a generalization of ideal thin lenses was considered: the most general planar surface that redirects light rays, without offsetting them, such that any point in object space is stigmatically imaged into a corresponding point in image space (and vice versa) is an ideal thin lens with different object- and image-sided focal lengths [Fig. 1(a)]. Note that there is no nonplanar surface that can do this; thus, an ideal thin lens with two focal lengths is the most general surface of any shape that images in this way.
Thin lenses with different object- and image-sided focal lengths can be realized experimentally, with certain shortcomings, in the form of Gabor superlenses (Section 2), which comprise two confocal microlens arrays with different pitch. Gabor superlenses do not perform stigmatic imaging, in which the individual rays pass through the object and image position, but integral imaging, in which the axes of bundles of light rays intersect. Gabor superlenses have been experimentally demonstrated and their imaging properties investigated [4].
Here, we define an ideal thin lens with two focal lengths as a glens, which, according to taste, stands for generalized lens or Gabor superlens. The singular term glens should not be confused with the plural of “glen,” meaning a glacial valley in Scotland [Fig. 1(b)]. Unlike earlier work, we study the nodal points of a glens, two of the cardinal points that describe the characteristics of an imaging system, and show that their positions coincide in a single nodal point (Section 3). Utilizing this nodal point, we derive vector expressions for the mapping between object and image space (Section 4) and for the light-ray-direction change on transmission (Section 5). We visualize the view through a few glenses using ray-tracing simulations (Section 6).
2. PIXELATED REALIZATIONS OF GLENSES
For wave-optical reasons, a glens (or indeed an idealized thin lens) operating in air cannot be realized such that it correctly transforms all incident light-ray fields. This follows from considerations of the optical path length or by showing that the required light-ray-direction change can result in light-ray fields without wave-optical analog. The latter argument relies on assuming that a corresponding phase front exists, and then showing that this cannot be the case, as it would have to be discontinuous at every point.
It is possible to gather these phase discontinuities, concentrating them along lines that separate areas where the phase front is continuous. These areas can be chosen to be so small that they cannot be resolved, just like the pixels of a computer monitor. Miniaturizing the pixels increases the effects of diffraction, but, in certain cases, the combined effect of pixel visibility and diffraction can be invisibly small. The resulting light field can then be realized in terms of rays, but, strictly speaking, it is wave-optically forbidden.
The required redirection of small phase-front pieces can be achieved by transmission through a suitable array of small telescopes, whereby each telescope redirects one phase-front piece and therefore acts like a pixel of a pixelated glens. The famous Gabor superlens [3,4] is precisely such an array of telescopes.
As already mentioned in the Introduction, such a pixelated glens does not produce stigmatic images, in which every light ray that passes through an object position passes through the corresponding image position, exactly. Instead, it produces integral images , in which the axes of (thin) bundles of light rays intersect. Integral imaging is inferior in quality to stigmatic imaging, but, for the reasons outlined above, is less restricted and offers additional possibilities.
The term integral image has its origin in integral photography, a technique invented more than a century ago [9], in which an image of a scene is taken not through a single lens, as in a normal camera, but through a planar lenslet (or microlens) array. In the simplest case, the film (or image detector) is positioned in the focal plane of the lenslet array. A light-field camera (or plenoptic camera) [10,11] is a modern version of just such a device, and algorithms have been invented that allow the captured light fields to be viewed in novel ways. As each lenslet is in a different position, the resulting integral photograph represents the scene from a range of perspectives and, thus, contains depth information about the scene. If the captured image is placed in the focal plane of an identical lenslet array and viewed through this lenslet array, a 3D integral image of the scene appears.
The same integral image can be obtained without recording and developing the integral photograph but by directly viewing the light distribution created by the first lenslet array in its focal plane with a second, identical, lenslet array that shares this focal plane, that is, by simply viewing the scene through two identical lenslet arrays, separated by the sum of their focal lengths. Such a pair of confocal lenslet arrays can, thus, be seen as an integral camera (the first lenslet array) and its viewer (the second lenslet array), with the integral photograph being formed in the common focal plane but never recorded and developed and instead viewed directly.
The individual telescopes can be generalized in a number of ways: for example, by making the focal lengths of the two lenslet arrays different or by displacing the lenslet arrays relative to each other such that they remain confocal but such that the optical axes of the lenslets in one array no longer align with those of the lenslets in the other, resulting in generalized confocal lenslet arrays (GCLAs). The mapping between object space and (integral) image space can be so general that homogeneous GCLAs (in which the telescopes are all identical) can form pixelated transformation-optics (PTO) devices [16,17].
A Gabor superlens comprises inhomogeneous GCLAs (in which the telescopes are not identical): the two lenslet arrays are confocal, but the separation between neighboring lenslets is different in the two arrays. (Note that they are not rotated with respect to each other, which would result in a Moiré magnifier [18].)
It is important to understand that GCLAs (including Gabor superlenses) suffer from a number of imperfections in addition to those associated with pixel visibility and diffraction mentioned above. Perhaps the most noticeable of these imperfections results from light incident from certain directions entering through the first lens of one telescope and exiting through the second lens of another telescope. Such light forms additional images. In principle, it can be absorbed with baffles separating the individual telescopes but at the price of reduced transmission. The associated field-of-view limitation and reduction in transmission coefficient have been quantified for simple geometries. Other imperfections are related to the limited imaging quality of the simple lenses used in designs that can be easily produced, each lens simply consisting of a (in the simplest case spherical) bump on the surface of a plastic sheet. These imperfections are currently the topic of ongoing optical-engineering efforts [21,22].
3. NODAL POINT
The behavior of a thin lens is fully determined by the positions of the plane of the lens, its optical axis, and the focal points. As a thin lens does not offset light rays upon transmission, any light ray intersecting the thin lens enters the lens and leaves it from the same position in the lens plane. The optical axis, which is perpendicular to the lens plane, is the axis of cylindrical symmetry of the lens. Using imaging language (before transmission through the lens, light rays are in object space; afterward they are in image space), the focal points can be defined as those points where light rays that are parallel to the optical axis in object space intersect in image space (at the image-sided focal point), and where light rays that are parallel to the optical axis in image space intersect in object space (at the object-sided focal point). They are located on the optical axis on both sides of the lens, a distance (the focal length) from the lens plane. The lens center, where the optical axis intersects the lens plane, has the property that any light ray incident on it passes straight through it. This property is useful, for example, when drawing a principal-ray diagram to construct the position of the image of a given object position.
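The principal-ray construction just described for an ordinary thin lens can be sketched numerically. The following is our own illustration (not code from the paper), using a lens in the plane z = 0 with light travelling towards positive z, a convention similar to the one adopted later in the paper:

```python
# A minimal sketch (our own) of the principal-ray construction for a thin lens
# in the plane z = 0 with focal length f: the ray through the lens center goes
# straight; the ray parallel to the axis is redirected through the image-sided
# focal point; their intersection is the image.

f = 0.5
x_o, z_o = 1.0, -1.5        # object: height x_o, a distance 1.5 in front of the lens

# ray 1: through the lens center (0, 0), undeviated:        x = (x_o / z_o) z
# ray 2: parallel to the axis, hits the lens at height x_o,
#        then passes through the focal point (0, f):         x = x_o (1 - z / f)
# intersecting the two rays reproduces the lens equation 1/o + 1/i = 1/f:
z_i = 1.0 / (1.0 / f + 1.0 / z_o)   # z coordinate of the image
x_i = (x_o / z_o) * z_i             # height of the image
print(z_i, x_i)   # approx. 0.75 and -0.5: a real, inverted image
```

With o = 1.5 and f = 0.5, the lens equation indeed gives i = 0.75, and the negative image height reflects the inversion familiar from principal-ray diagrams.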
The behavior of a glens is also fully determined by the positions of the glens plane, the optical axis, and the focal points. It is similar to a lens in the following ways:
- 1. A glens also does not offset light rays upon transmission, and so any light ray intersecting a glens is incident on, and leaves from, the same position in the glens plane.
- 2. A glens also has an optical axis, which is perpendicular to the glens plane and which is the axis of cylindrical symmetry.
- 3. A glens also has focal points where light rays that are parallel to the optical axis in object (image) space intersect in image (object) space, and these are located on the optical axis.
A glens also differs from a lens in a number of ways:
- 1. The focal points are, in general, located at different distances from the glens, and might even reside on the same side of the glens.
- 2. Light rays through the center of a glens in general change direction.
The ray diagram shown in Fig. 1(a) illustrates a few of these properties.
Figure 1(a) also introduces a number of conventions for glenses. First, we place on the optical axis one of the axes of a Cartesian coordinate system. We call the associated coordinate z; the origin is chosen such that the glens resides in the plane z = 0. This allows us to distinguish between light rays that approach the glens from negative space, i.e., from the direction of negative z values, and those that approach from positive space, i.e., from the direction of positive z. This, in turn, allows us to distinguish between the focal lengths in negative space and in positive space, which is required here, as these are, in general, different (unlike in the case of a lens). The focal lengths in negative and positive space, f⁻ and f⁺, are defined as the z coordinates of the corresponding focal points, F⁻ and F⁺. (Throughout this paper, we continue to label positions and distances that refer to negative space with a superscript "−" and those that refer to positive space with a superscript "+".) The glens plane is indicated by a line with triangles at the ends. Unlike the triangles at the end of the lines indicating a thin-lens plane, the triangles indicating a glens are drawn asymmetrically, on the positive side of the glens, representing the broken symmetry between the spaces on either side. The negative and positive sides of a glens can also be indicated by placing a "−" and a "+" on the relevant sides of the glens. Note that either space can be object or image space; for example, if light is incident from negative space, then negative space is object space and positive space is image space.
We stress that our definition of the focal lengths is different from the standard definition; if the focal lengths according to the standard definition are f₊ on the positive side and f₋ on the negative side, then f⁺ = f₊ and f⁻ = −f₋. The reason behind this redefinition is that it uses the same coordinate system on both sides of the glens, which is convenient for our purposes in that any distance that simultaneously refers to negative space and positive space can be defined unambiguously. We will see below that the nodal distance n, the z coordinate of the nodal point, is an example of such a distance.
All rays shown in Fig. 1(a) are involved in the imaging between two conjugate positions, Q and Q′. As a glens images, by definition, each object-space point into a corresponding image-space position, any light ray that intersects the object position in object space intersects the image position in image space. To construct the image position, it is sufficient to construct the intersection of two such rays, and in Fig. 1(a) this was done with the ray that passes through the object-sided focal point and the ray that is incident parallel to the optical axis and which passes through the image-sided focal point. The trajectories of additional rays involved in the imaging between Q and Q′ can be constructed as two straight-line segments, one from Q to any point in the glens plane, the second from the same point in the glens plane to Q′. A glance at the figure reveals that the ray through the glens center, P, does not pass through the glens undeviated, reducing the usefulness of P in ray diagrams. But Fig. 1(a) also shows that there exists another ray that passes through both Q and Q′ and which passes through the glens plane undeviated; this ray does not pass through the glens center but, instead, intersects the optical axis at a different position, N.
With the help of Fig. 2, we can demonstrate that any light ray incident on N—from any direction—passes through the glens undeviated. For reasons that will become clear, we call N the nodal point of the glens. (It is, of course, also possible to demonstrate the existence of the nodal point from the mapping equations derived in .) Consider a light ray, marked "1," incident from negative space with an arbitrary direction through the focal point F⁻. After transmission through the glens, its direction is parallel to the optical axis. We call the point where it intersects the positive focal plane I. Now consider a second light ray, marked "2," incident with the same direction and aiming at the point I. As the two rays are parallel in negative space, they intersect in a point in the positive-space focal plane. This point is I, and the second light ray already intersects it in negative space, i.e., before transmission through the glens. In order to intersect it also after transmission through the glens, its direction cannot be altered by the glens.
It can easily be seen that the second, undeviated, ray always intersects the optical axis at the same position, N, which is, as the name choice suggests, the position of the nodal point. Because of the congruence of the triangles in Fig. 2, N resides a distance f⁺ from F⁻ and a distance f⁻ from F⁺, i.e., at the nodal distance

n = f⁻ + f⁺. (1)

Equation (1) shows that the position of N is independent of the direction of the incident rays, which justifies our earlier assertion that any ray incident on N with any direction passes through the glens undeviated.
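The construction of Fig. 2 can be checked numerically. In the sketch below (our own illustration, not code from the paper), all rays parallel to a direction d are redirected towards the point I where the ray through F⁻, which exits parallel to the axis, meets the image-sided focal plane; the ray aimed at the nodal point then emerges with an unchanged direction:

```python
# Numerical check (our own, not the paper's code) that a ray aimed at the
# nodal point N = (0, 0, f_m + f_p) leaves a glens undeviated, using the
# focal-plane construction of Fig. 2. Light travels towards positive z.

def refract_parallel_bundle(d, s, f_m, f_p):
    """Outgoing direction for a ray with direction d hitting the glens plane
    z = 0 at point s; f_m, f_p are the negative- and positive-space focal
    lengths. The ray through F- = (0, 0, f_m) parallel to d exits parallel to
    the axis, so the whole parallel bundle converges at a point i in the
    image-sided focal plane z = f_p."""
    # where the ray through F- parallel to d crosses the glens plane z = 0:
    s1 = (-(f_m / d[2]) * d[0], -(f_m / d[2]) * d[1], 0.0)
    # after the glens it continues parallel to the axis, reaching z = f_p at:
    i = (s1[0], s1[1], f_p)
    # any ray of the bundle that hits the glens at s is redirected towards i:
    return (i[0] - s[0], i[1] - s[1], i[2] - s[2])

f_m, f_p = -0.3, 0.5          # example focal lengths; nodal point at z = f_m + f_p
n = f_m + f_p
d = (0.2, -0.1, 1.0)          # incident direction (from negative space)
# the ray aimed at N = (0, 0, n) hits the glens plane at:
s_n = (-(n / d[2]) * d[0], -(n / d[2]) * d[1], 0.0)
d_out = refract_parallel_bundle(d, s_n, f_m, f_p)
# d_out should be parallel to d: the cross product vanishes
cross = (d[1]*d_out[2] - d[2]*d_out[1],
         d[2]*d_out[0] - d[0]*d_out[2],
         d[0]*d_out[1] - d[1]*d_out[0])
print(max(abs(c) for c in cross) < 1e-12)  # True: the nodal ray is undeviated
```

For the values chosen here the outgoing direction is simply 0.5 d, i.e., the same direction with a different (irrelevant) length.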
Finally, we put our results in the context of the standard theory of ideal imaging systems, which allows the behavior of an ideal imaging system to be determined by the locations of a number of cardinal points.
First, above we reviewed the property of a glens that it does not offset light rays upon transmission; thus, any light ray passing through the glens leaves it from the same point it is incident on. This is specifically the case for any light ray parallel to the optical axis, which means that the glens plane is a principal plane. There are actually two principal planes, each corresponding to light rays incident from one side of the imaging system (here the glens), and it is immediately clear that both those principal planes coincide in the glens plane. The principal points, which are defined as the points where the two principal planes intersect the optical axis, coincide with the glens center, which is therefore the glens's principal point, which we had presciently named P.
Second, the nodal points of an ideal imaging system are defined as the two points on the optical axis with the property that any light ray incident on one emerges from the optical system through the other, with the same light-ray direction. In the case of a glens, the two nodal points coincide in the point N; thus, it is indeed the glens’s nodal point.
4. MAPPING BETWEEN OBJECT AND IMAGE SPACE
We now use the properties of the nodal point to derive a simple equation for the mapping between object and image space. Figure 3 shows the geometry. It shows two light rays involved in the imaging between a pair of conjugate points, Q (object space) and Q′ (image space). Using standard notation, we write quantities that refer to object space as unprimed and those that refer to image space as primed. Both light rays pass through Q and Q′; additionally, light ray 1 passes through the nodal point, N, and light ray 2 passes through the object-sided focal point, F.
To construct the mapping between Q and Q′, we first consider light ray 1, whose trajectory, by virtue of passing through N, is a straight line. The vector Q′ − N, where Q′ and Q denote the position vectors of the points Q′ and Q, respectively, is then proportional to the vector Q − N, where N is the position vector of the point N.
The ratio of the lengths of these vectors can be found by considering light ray 2. The two shaded triangles shown in Fig. 3 are similar, and their relative size is that of the two vectors. Expressed in terms of the lengths of the bottom sides of these triangles, this ratio is f⁻/(f⁻ − z), where z is the z coordinate of Q, and so

Q′ − N = [f⁻/(f⁻ − z)] (Q − N). (3)
It is instructive to write the mapping described in Eq. (3) in terms of Cartesian components. We have already defined the z coordinate, and make this one of the coordinates of a Cartesian coordinate system, the others being x and y. For an object position Q = (x, y, z), Eq. (3) returns the image position Q′ = (x′, y′, z′), where

x′ = f⁻ x / (f⁻ − z),  y′ = f⁻ y / (f⁻ − z),  z′ = f⁺ z / (z − f⁻).
We observe the following:
- 1. Reassuringly, the glens plane and the focal planes are mapped as expected. Any point in the glens plane, i.e., any point with z = 0, is mapped to itself. Any point at z = ±∞ is imaged to z′ = f⁺, i.e., into the image-sided focal plane. Similarly, any point in the object-sided focal plane (z = f⁻) is imaged to z′ = ±∞.
- 2. In analogy to its lens equivalent, we call this equation the glens equation. Note that it has retained the form of the third equation in Eqs. (10) in , despite the fact that we have redefined the sign of the z component of negative-space positions (in the term referring to negative space, both numerator and denominator change sign).
- 3. When the change in the definition of the z coordinate is taken into account, the x and y components of the mapping are the same as the first two equations in Eqs. (10) in .
- 6. From the z component of Eq. (4), it is immediately clear that any object in the plane with z = n is imaged again into the same plane. We call this plane the nodal plane. The point where the nodal plane intersects the optical axis is, of course, the nodal point N. Substitution of z = n into Eqs. (8) and (9) reveals that points in the nodal plane are imaged with the same transverse and longitudinal magnification, namely,

M = −f⁻/f⁺. (10)
- 7. From the z component of Eq. (4), it can be seen that exchanging the order of two parallel glenses, one immediately behind the other and sharing an optical axis, changes the mapping of the combination, unless both have the same nodal distance. If the object-sided focal lengths of the first and second glens are f₁⁻ and f₂⁻, and their nodal distances are n₁ and n₂, then the combination images an object position with longitudinal coordinate z to an image position with longitudinal coordinate

z′ = (n₁ − f₁⁻)(n₂ − f₂⁻) z / [f₁⁻ f₂⁻ − (f₁⁻ + f₂⁻) z + n₁ z].
Exchanging the order of the glenses leaves the numerator and the first two terms of the denominator unchanged, but the third term of the denominator changes from n₁ z to n₂ z. If n₁ = n₂, this does not actually change the third term of the denominator. A calculation of the transverse coordinates of the image reveals that, provided n₁ = n₂, reversing the order of the glenses leaves all coordinates of the image unchanged.
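The mapping and the observations above can be sketched in a few lines of Python. This is our own illustration (not code from the paper or from Dr TIM); `glens_map` implements Q′ = N + [f⁻/(f⁻ − z)](Q − N) for a glens in the plane z = 0, and the checks mirror observations 1, 6, and 7:

```python
# Our own illustrative sketch (not the paper's code): the glens mapping
# q = (x, y, z) -> q' for a glens in the plane z = 0 with focal lengths
# f_m (negative space) and f_p (positive space).

def glens_map(q, f_m, f_p):
    x, y, z = q
    m = f_m / (f_m - z)     # ratio obtained from the nodal-point construction
    n = f_m + f_p           # nodal distance
    return (m * x, m * y, n + m * (z - n))   # q' = N + m (q - N), N = (0, 0, n)

f_m, f_p = -0.3, 0.5
n = f_m + f_p

# observation 1: the glens plane z = 0 is mapped to itself
print(glens_map((1.0, 2.0, 0.0), f_m, f_p))          # (1.0, 2.0, 0.0)

# observation 6: the nodal plane z = n is imaged into itself,
# magnified by -f_m / f_p
xp, yp, zp = glens_map((1.0, 2.0, n), f_m, f_p)
print(abs(zp - n) < 1e-12, abs(xp - (-f_m / f_p)) < 1e-12)   # True True

# observation 7: two glenses in the same plane commute if they have
# the same nodal distance
def compose(q, g1, g2):
    return glens_map(glens_map(q, *g1), *g2)

g1, g2 = (-0.3, 0.5), (-0.1, 0.3)    # both have nodal distance 0.2
q = (0.7, -0.2, 1.1)
a, b = compose(q, g1, g2), compose(q, g2, g1)
print(all(abs(u - v) < 1e-9 for u, v in zip(a, b)))   # True
```

Setting f_m = −f and f_p = f reduces `glens_map` to the ordinary thin-lens mapping, as a further sanity check.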
It is also instructive to investigate the mapping in the limit in which glenses become homogeneous, the imaging sheets studied in . This can be visualized as picking a point on the surface of a glens and then isotropically and infinitely magnifying the glens with the point on the surface as the center of the magnification. The focal lengths then become infinite, and the nodal point N becomes infinitely distant. In this limit, f⁻/(f⁻ − z) → 1 and f⁺ z/(z − f⁻) → −(f⁺/f⁻) z, and so the mapping equation, Eq. (3), becomes

(x′, y′, z′) = (x, y, −(f⁺/f⁻) z).
Comparison with Eq. (11) in  reveals that

η = −f⁻/f⁺.
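The homogeneous limit can be checked numerically. In this sketch (our own), the focal lengths are made very large with their ratio fixed, and the glens mapping approaches (x, y, −(f⁺/f⁻) z):

```python
# Our own sketch: the homogeneous limit of the glens mapping. As f_m and f_p
# grow with their ratio fixed, q -> q' approaches (x, y, -(f_p/f_m) z).

def glens_map(q, f_m, f_p):
    x, y, z = q
    m = f_m / (f_m - z)
    n = f_m + f_p
    return (m * x, m * y, n + m * (z - n))

q = (0.4, -0.7, 1.3)
f_m = 1.0e8                 # "almost infinite" focal length
f_p = -2.0 * f_m            # fixed ratio f_p / f_m = -2
qh = glens_map(q, f_m, f_p)
print(qh)   # close to (0.4, -0.7, 2.6), i.e., (x, y, -(f_p/f_m) z)
```

The transverse coordinates survive unchanged, while the longitudinal coordinate is scaled by the fixed ratio, exactly as in the limit above.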
Finally, we point out that Eq. (5) is in the form of a collineation (also known as a projective transform), the most general bijective (i.e., one-to-one and onto) transformation that takes lines to lines and planes to planes. It is, of course, necessary that this transformation takes lines to lines, as all object positions on any incident light ray must be imaged to corresponding image positions on the corresponding outgoing light ray, and the trajectories of both the incident and outgoing rays are straight lines.
Translating two of the properties of glenses into the language of collineations allows us to identify the mapping more precisely. The first of these properties is that light rays exit a glens from the same positions where they entered it but on the other side. This means that any point in the plane of the glens is imaged to itself and is, therefore, a fixed point of the collineation. In 2D, the straight line on which the fixed points reside would be called the axis of the collineation. The second of these properties is the existence of the nodal point, which resides on the straight line through any pair of object and image positions. In the language of collineations, the nodal point would be called the center of the collineation, and, by virtue of possessing a center, the glens mapping is a central collineation. The relationship between the axis and the center of the collineation defines the subclasses of collineations that describe the mapping due to thin lenses and glenses: In thin lenses, the center resides on the axis of collineation, which makes the mapping an elation. In glenses, the center does not in general reside on the axis of collineation, which makes the mapping a homology.
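The collineation structure can be made explicit by writing the glens mapping as a 4×4 projective matrix acting on homogeneous coordinates. The sketch below is our own (with our own matrix convention, not taken from the paper); it checks that points of the glens plane are fixed and that collinear object points map to collinear image points:

```python
# Our own sketch: the glens mapping as a 4x4 homogeneous (projective) matrix.
# With q' read off as (X/W, Y/W, Z/W), one matrix that reproduces
# x' = f_m x/(f_m - z), y' = f_m y/(f_m - z), z' = -f_p z/(f_m - z) is:

def glens_matrix(f_m, f_p):
    return ((f_m, 0, 0, 0),
            (0, f_m, 0, 0),
            (0, 0, -f_p, 0),
            (0, 0, -1, f_m))   # last row: W = f_m - z

def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(4)) for i in range(4))

def apply(M, q):
    X, Y, Z, W = matvec(M, (q[0], q[1], q[2], 1.0))
    return (X / W, Y / W, Z / W)

M = glens_matrix(-0.3, 0.5)

# fixed points: the glens plane z = 0 (the "axis" of the collineation)
print(apply(M, (1.0, 2.0, 0.0)))   # (1.0, 2.0, 0.0)

# collinearity is preserved: three collinear object points map to
# collinear image points (the cross product of the differences vanishes)
p0, d = (0.1, 0.2, 0.7), (0.3, -0.2, 0.4)
imgs = [apply(M, (p0[0] + t*d[0], p0[1] + t*d[1], p0[2] + t*d[2]))
        for t in (0.0, 1.0, 2.0)]
u = tuple(imgs[1][i] - imgs[0][i] for i in range(3))
v = tuple(imgs[2][i] - imgs[0][i] for i in range(3))
cross = (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
print(max(abs(c) for c in cross) < 1e-10)  # True
```

That a single linear (projective) matrix reproduces the mapping is precisely the statement that the glens mapping is a collineation; its fixed plane z = 0 and its center N correspond to the axis and center discussed above.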
5. LIGHT-RAY-DIRECTION CHANGE UPON TRANSMISSION THROUGH A GLENS AND ITS IMPLEMENTATION IN DR TIM
We now derive equations describing the light-ray-direction change upon transmission through a glens. We write these equations in vector form to facilitate implementation in our ray-tracing software, Dr TIM .
Figure 4 sketches the trajectory of a light ray that is incident, with direction d, which is arbitrary except that its axial component d_z must be nonzero, on an arbitrary point S on a glens. As before, we refer to the space from which the ray is incident as object space (unprimed), the other space as image space (primed).
The direction of the light ray after transmission through the glens can be constructed using the fact that parallel incident light rays, after transmission through the glens, intersect in the image-sided focal plane, at a point we call I. The direction of the redirected light ray is, then,

d′ = I − S, (16)

where I and S also denote the position vectors of the corresponding points.
To calculate the point I (position vector I) where this bundle of parallel rays intersects the image-sided focal plane, we consider a principal ray that intersects N (position vector N) and that is parallel to the incident ray. As this principal ray passes through the glens undeviated, it reaches the image-sided focal plane at I, and we can then write I in the form

I = N − (f⁻/d_z) d. (17)
Substitution into Eq. (16) gives

d′ = N − S − (f⁻/d_z) d, (18)

where N and S are the position vectors of the nodal point and the intersection point, and d_z is the axial component of d.
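As a consistency check, the direction-change rule d′ ∝ N − S − (f⁻/d_z) d should send every ray from an object point Q through its image Q′ = N + [f⁻/(f⁻ − z)](Q − N). The following sketch is our own (not the paper's or Dr TIM's code):

```python
# Our own numerical check of the direction-change law: a ray from an object
# point q, redirected at an arbitrary point s in the glens plane z = 0,
# passes through the image of q.

def glens_deflect(d, s, f_m, f_p):
    """d' = N - s - (f_m / d_z) d, with N = (0, 0, f_m + f_p)."""
    n_vec = (0.0, 0.0, f_m + f_p)
    k = f_m / d[2]
    return tuple(n_vec[i] - s[i] - k * d[i] for i in range(3))

def glens_map(q, f_m, f_p):
    x, y, z = q
    m = f_m / (f_m - z)
    n = f_m + f_p
    return (m * x, m * y, n + m * (z - n))

f_m, f_p = -0.3, 0.5
q = (0.4, 0.1, -1.0)                      # object point in negative space
qi = glens_map(q, f_m, f_p)               # its image

# a ray from q through an arbitrary point s in the glens plane...
s = (0.25, -0.15, 0.0)
d = tuple(s[i] - q[i] for i in range(3))  # incident direction
dp = glens_deflect(d, s, f_m, f_p)

# ...passes, after redirection at s, through the image qi:
t = (qi[2] - s[2]) / dp[2]
hit = tuple(s[i] + t * dp[i] for i in range(3))
print(max(abs(hit[i] - qi[i]) for i in range(3)) < 1e-12)  # True
```

Repeating this for different points s in the glens plane traces out the full pencil of rays through the image point, which is the stigmatic-imaging property used throughout the paper.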
6. RAY-TRACING SIMULATIONS
We have used the equation describing the change of light-ray direction upon transmission through a glens, Eq. (18), to add the capability to simulate transmission through a glens to our custom ray tracer Dr TIM. (More precisely, we have used Eqs. (A3) and (A4) (see Appendix A), which are versions of Eq. (18) that also work in the case of infinite focal lengths and nodal distance.) We have used this new capability to simulate the view through glenses, concentrating on phenomena associated with pseudoscopic imaging and the location of the nodal point outside of the plane of the glens, which do not occur for lenses.
Figure 5 simulates the view through a glens of objects in the glens's nodal plane. Frames (b) and (d) in Fig. 5 show a collection of objects located close to the nodal plane of a glens, seen through that glens from different directions. The objects touch the nodal plane from behind; the vertex of the cone is at the nodal point. Comparison with frames (a) and (c), which show the same scene without the glens, confirms that the nodal point is seen in the same position irrespective of the presence of the glens, which is consistent with the glens imaging its nodal point to itself. In all cases, the virtual camera is focused on the nodal plane, and the fact that the objects remain in focus when seen through the glens is consistent with the nodal plane being imaged into itself. Finally, it can be seen that the nodal plane appears magnified by a factor −f⁻/f⁺, which is expected from Eq. (10) and the choice of focal lengths for which the figure was calculated.
Figure 6 demonstrates what happens when the camera is located at the nodal point. If the camera is a pinhole camera, every light ray recorded by the camera is a light ray through the glens’s nodal point, and such light rays do not change direction when passing through the glens. This is why the view in frame (a), which shows a scene with the glens absent, looks the same as that in frame (b), in which the glens is present. Each image produced by the glens is, therefore, seen in the same direction as the corresponding object. Frames (c) and (d) illustrate that these images are formed at distances different from those of the objects. In frame (c), in which the virtual camera is focused on the (front of the) objects in the scene, the objects are therefore blurred, but in frame (d), in which the camera is focused on the plane into which the (front of the) objects are being imaged by the glens according to the glens equation [Eq. (7)], they are in focus.
Finally, we simulate the view through a telescope formed by glenses that share a common focal point (the positive focal point of the first glens coincides with the negative focal point of the second) and a common nodal point (Fig. 7). Such a telescope changes the beam size without changing its propagation direction. This is forbidden by Liouville's theorem, as phase-space volume is not preserved, reflecting the fact that glenses are wave-optically forbidden. (Note that, conversely, a suitably chosen single homogeneous glens changes the beam angle without changing its size.) The mapping between object and image space created by such a telescope is an isotropic stretching, centered on N. When seen from N, any object is seen in the "true" direction, but a constant factor further away. This is demonstrated in Fig. 8.
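The glens telescope can also be checked numerically. In the sketch below (our own, not the paper's code), glens 1 sits in the plane z = 0 and glens 2 in the plane z = D; the shared-focal-point and shared-nodal-point conditions then read D = f₁⁺ − f₂⁻ and f₂⁺ = f₁⁻ (our own derivation from n = f⁻ + f⁺), and the composed mapping is a uniform stretch about N:

```python
# Our own sketch of a "glens telescope": two glenses sharing a focal point and
# the nodal point N. The combination stretches space isotropically about N.

def glens_map(q, f_m, f_p, z0=0.0):
    """Glens in the plane z = z0; f_m, f_p measured from that plane."""
    x, y, z = q
    zeta = z - z0                 # longitudinal coordinate relative to the glens
    m = f_m / (f_m - zeta)
    n_loc = f_m + f_p             # nodal distance, measured from the glens plane
    return (m * x, m * y, z0 + n_loc + m * (zeta - n_loc))

f1_m, f1_p = -0.3, 0.5
f2_m = -0.2
f2_p = f1_m              # shared nodal point forces f2_p = f1_m (our derivation)
D = f1_p - f2_m          # shared focal point: glens 2's negative focal point at z = f1_p
n = f1_m + f1_p          # z coordinate of the common nodal point N
N = (0.0, 0.0, n)

def telescope(q):
    return glens_map(glens_map(q, f1_m, f1_p), f2_m, f2_p, D)

# the combined mapping is q' = N + k (q - N), with the same k for every point:
for q in [(0.3, 0.1, -0.8), (-0.5, 0.2, 1.4), (0.0, 0.7, 2.5)]:
    qp = telescope(q)
    ks = [(qp[i] - N[i]) / (q[i] - N[i])
          for i in range(3) if abs(q[i] - N[i]) > 1e-9]
    print(ks)   # each list contains the same stretch factor, here f2_m / f1_p
```

For these (hypothetical) parameters the stretch factor is −0.4; a negative factor indicates that the stretched image is additionally inverted through N.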
In this paper, we have defined glenses as idealized thin lenses, generalized such that the focal lengths on both sides can be different. Glenses can be approximately realized in the form of Gabor superlenses. We have shown that a glens possesses a nodal point that can be located anywhere on the optical axis. When formulated in terms of this nodal point, the equations describing the mapping between object and image space and the light-ray-direction change on transmission through a glens become particularly simple.
We intend to apply our findings to the construction of novel optical instruments, starting with a generalization of the pixelated transformation-optics devices described in [16,17].
APPENDIX A: REPRESENTATION OF GLENS PARAMETERS IN DR TIM
It can happen that both focal lengths become infinite, namely, in the limit of homogeneous glenses, in which case the equations for the mapping, Eq. (3), and for the light-ray-direction change, Eq. (18), do not work. It would be possible to switch to alternative equations, but we took a different approach when programming glenses into Dr TIM, namely, to describe glenses in terms of parameters that work irrespective of whether the focal lengths are finite or infinite.
The solution is to divide the quantities that go to infinity by a length, g, such that they become finite, and to describe the glens in terms of the resulting dimensionless parameters, namely, f⁻/g, f⁺/g, and n/g. The length g itself can become infinite. In the case of glenses with finite focal lengths, we simply choose g = 1 (Dr TIM uses dimensionless units). In the case of glenses with infinite focal lengths, we choose g = n, the nodal distance.
Formulated in terms of f⁻/g, f⁺/g, and z/g, the mapping equation, Eq. (3), becomes, in Cartesian components,

x′ = (f⁻/g) x / [(f⁻/g) − z/g],  y′ = (f⁻/g) y / [(f⁻/g) − z/g],  z′ = −(f⁺/g) z / [(f⁻/g) − z/g]. (A1)

In the limit g → ∞, z/g → 0 for any finite value of z, i.e., for any finite object position, and so Eq. (A1) becomes

(x′, y′, z′) = (x, y, −[(f⁺/g)/(f⁻/g)] z). (A2)
The equation describing the light-ray-direction change, Eq. (18), can also be formulated in terms of these parameters. Dividing by g (which leaves the direction of d′ unchanged), it becomes

d′ = (0, 0, n/g) − S/g − [(f⁻/g)/d_z] d, (A3)

where S is the position vector of the intersection point and d_z is the axial component of d. For a finite intersection point S, the term S/g vanishes in the limit g → ∞, and Eq. (A3) simplifies to

d′ = (0, 0, n/g) − [(f⁻/g)/d_z] d. (A4)
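The g-parameterized direction change can be sketched as follows. This is our own illustration (not Dr TIM's Java implementation); the parameter names are our own, and the homogeneous-glens value f⁻/n = −0.6 is an arbitrary example:

```python
# Our own sketch: the direction change written in terms of the dimensionless
# parameters f_m/g and n/g and the inverse length 1/g, which stay finite when
# the focal lengths and nodal distance become infinite (homogeneous glenses).

def deflect_param(d, s, fm_over_g, n_over_g, inv_g):
    """Outgoing direction (up to scale): (0, 0, n/g) - s/g - (f_m/g) d / d_z."""
    k = fm_over_g / d[2]
    return (-inv_g * s[0] - k * d[0],
            -inv_g * s[1] - k * d[1],
            n_over_g - inv_g * s[2] - k * d[2])

# finite glens, g = 1: identical to d' = N - s - (f_m / d_z) d
f_m, f_p = -0.3, 0.5
d, s = (0.2, -0.1, 1.0), (0.25, -0.15, 0.0)
dp = deflect_param(d, s, f_m, f_m + f_p, 1.0)
print(dp)   # same direction as Eq. (18) would give

# homogeneous glens (f_m, f_p -> infinity): choose g = n, i.e., n/g = 1, 1/g = 0
dp_hom = deflect_param(d, s, -0.6, 1.0, 0.0)   # f_m/n = -0.6, an example value
print(dp_hom)   # a finite, well-defined outgoing direction
```

In the homogeneous case the dependence on the intersection point s disappears, as it must: a homogeneous glens redirects all parallel rays identically, wherever they strike it.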
Engineering and Physical Sciences Research Council (EPSRC) (EP/M010724/1); Grantová Agentura České Republiky (P201/12/G028).
1. J. Courtial, “Geometric limits to geometric optical imaging with infinite, planar, non-absorbing sheets,” Opt. Commun. 282, 2480–2483 (2009). [CrossRef]
2. J. Courtial, S. Oxburgh, and T. Tyc, “Direct, stigmatic, imaging with curved surfaces,” J. Opt. Soc. Am. A 32, 478–481 (2015). [CrossRef]
3. D. Gabor, “Improvements in or relating to optical systems composed of lenticules,” UK patent 541,753 (December 10, 1941).
4. C. Hembd-Sölner, R. F. Stevens, and M. C. Hutley, “Imaging properties of the Gabor superlens,” J. Opt. A 1, 94–102 (1999). [CrossRef]
5. D. S. Goodman, “General principles of geometric optics,” in Handbook of Optics, M. Bass, E. W. V. Stryland, D. R. Williams, and W. L. Wolfe, eds., 2nd ed., Fundamentals, Techniques, and Design (McGraw-Hill, 1995), Chap. 1.15, Vol. I, pp. 1.60–1.68.
6. J. Courtial and T. Tyc, “Generalised laws of refraction that can lead to wave-optically forbidden light-ray fields,” J. Opt. Soc. Am. A 29, 1407–1411 (2012). [CrossRef]
7. E. N. Cowie, C. Bourgenot, D. Robertson, and J. Courtial, “Resolution limit of pixellated optical components” (in preparation).
8. A. C. Hamilton and J. Courtial, “Metamaterials for light rays: ray optics without wave-optical analog in the ray-optics limit,” New J. Phys. 11, 013042 (2009). [CrossRef]
9. G. Lippmann, “La photographie intégrale,” C. R. Hebd. Acad. Sci. 146, 446–451 (1908).
10. T. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 99–106 (1992). [CrossRef]
11. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford Technical Report CTSR 2005-02 (2005).
12. R. Ng, “Fourier slice photography,” ACM Trans. Graph. 24, 735–744 (2005). [CrossRef]
13. R. F. Stevens, “Integral images formed by lens arrays,” poster presented at Conference on Microlens Arrays, Teddington, London, May 11–12, 1995, Vol. 5.
14. J. Courtial, “Ray-optical refraction with confocal lenslet arrays,” New J. Phys. 10, 083033 (2008). [CrossRef]
15. A. C. Hamilton and J. Courtial, “Generalized refraction using lenslet arrays,” J. Opt. A 11, 065502 (2009). [CrossRef]
16. S. Oxburgh, C. D. White, G. Antoniou, E. Orife, and J. Courtial, “Transformation optics with windows,” Proc. SPIE 9193, 91931E (2014). [CrossRef]
17. S. Oxburgh, C. D. White, G. Antoniou, E. Orife, T. Sharpe, and J. Courtial, “Large-scale, white-light, transformation optics using integral imaging,” J. Opt. (in press).
18. M. C. Hutley, R. Hunt, R. F. Stevens, and P. Savander, “The moiré magnifier,” Pure Appl. Opt. 3, 133–142 (1994). [CrossRef]
19. J. Courtial, “Standard and non-standard metarefraction with confocal lenslet arrays,” Opt. Commun. 282, 2634–2641 (2009). [CrossRef]
20. T. Maceina, G. Juzeliūnas, and J. Courtial, “Quantifying metarefraction with confocal lenslet arrays,” Opt. Commun. 284, 5008–5019 (2011). [CrossRef]
21. E. N. Cowie, C. Bourgenot, D. Robertson, and J. Courtial, “Optical design of generalised confocal lenslet arrays” (in preparation).
22. E. N. Cowie and J. Courtial, “Engineering the field of view of generalised confocal lenslet arrays (GCLAs)” (in preparation).
23. W. J. Smith, Modern Optical Engineering, 3rd ed. (McGraw-Hill, 2000), Chap. 2.2.
24. S. Oxburgh and J. Courtial, “Perfect imaging with planar interfaces,” J. Opt. Soc. Am. A 30, 2334–2338 (2013). [CrossRef]
25. J. C. R. Wylie, Introduction to Projective Geometry (Dover, 2008), pp. 186–190.
26. A. Heyting, N. G. D. Bruijn, J. D. Groot, and A. C. Zaanen, Axiomatic Projective Geometry, 2nd ed. (North-Holland, 1980), Chap. 2, p. 41.
27. H. S. M. Coxeter, Projective Geometry, 2nd ed. (University of Toronto, 1974), pp. 49–57.
28. S. Oxburgh, T. Tyc, and J. Courtial, “Dr TIM: ray-tracer TIM, with additional specialist capabilities,” Comp. Phys. Commun. 185, 1027–1037 (2014). [CrossRef]