Abstract
We construct combinations of three skew ideal lenses whose mapping between object and image space corresponds to a rotation of the object space around a common intersection line of all included lenses. The angle of image rotation Δθ can be set arbitrarily within the range (0, 2π) by tuning the parameters of the lenses. The resulting skew-lens image rotator could form the basis of novel applications, e.g. simulating curved spaces.
Published by Optica Publishing Group under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
1. Introduction
Although geometrical optics is an ancient and well-developed field of research [1,2], new ideas keep emerging. Transformation optics [3–6], the science of designing inhomogeneous material structures that distort light rays according to the eponymous coordinate transformation, is the source of many highly original devices. Unfortunately, it is usually very challenging, sometimes even impossible, to fabricate the material structures prescribed by transformation optics [7,8]. This difficulty has motivated a new direction of research: realising transformation-optics ideas using readily available materials, such as crystals available in bulk [9,10], and standard optical elements, for example lenses [11,12].
We recently constructed, theoretically, an omnidirectional transformation-optics device purely from ideal thin lenses [12]. The device can be interpreted as an omnidirectional lens [12] and as various types of invisibility cloak [13]. The ideal lenses forming the device are in general not parallel, a situation in which standard physical lenses do not work particularly well. However, freeform lenses or phase holograms can be designed such that they redirect specific ray bundles (such as the rays that contribute to the image seen from a particular viewing position) exactly like ideal lenses, and the development of metalenses, lenses realised in the form of metasurfaces, is progressing rapidly [14–17].
Studying the imaging properties of our lens-based transformation-optics device motivated the development of a simplified description of imaging with pairs of skew ideal lenses [18]. In this description, the concept of transverse planes is generalised to two conjugate sets of parallel planes, one in object space, the other in image space. In general, the planes in one set are rotated relative to those in the other. The longitudinal direction in object space is the same as that in image space, indicating that the two spaces are sheared with respect to each other.
Here we show that the addition of another lens can result in the relationship between object and image space being a pure rotation around the line where the planes of all three lenses intersect, by an arbitrary angle in the range $(0,2\pi )$. The direction of the rotation axis is therefore close to perpendicular to the line of sight through the three lenses, but we speculate that replacing the three lenses with appropriate omnidirectional lenses [12] could result in image rotation around an axis at an arbitrary angle to the line of sight, including parallel to it. We thus present, for the first time to our knowledge, an optical system that employs tilted lenses for image rotation. We also discuss the physical realisability of tilted-lens image rotators.
This paper is structured as follows. In section 2 we review the description of pairs of skew ideal lenses developed in Ref. [18]. We describe the idea for achieving image rotation with three ideal lenses in section 3 and work out the mathematical details in section 4. Sections 5 and 6 discuss the physical realisability and potential applications of three-lens image rotation. In section 7 we discuss the conditions on the positions of the principal points in combinations of ideal lenses that satisfy the loop-imaging condition, which is important in the construction of ideal-lens transformation-optics devices. We present a concluding discussion in section 8.
2. Two skew-lens imaging
We define an imaging device in the standard way as a device that changes the directions of light rays passing through it such that all light rays emerging from a point $\mathbf {Q}$ intersect again at a point $\mathbf {Q'}$. Point $\mathbf {Q}$ — an object — is said to lie in object space, and point $\mathbf {Q'}$ — an image — is said to lie in image space. A real image is created at an actual light-ray intersection, whereas a virtual image is created at the intersection of the backward extensions of light rays that do not otherwise intersect.
Our design of a skew-lens image rotator arises from the theory of imaging due to a system of two skew lenses, presented in Ref. [18]. We therefore first provide an overview of this theory; we then extend it to design a system of ideal lenses that provides a mapping from object space to image space corresponding to a rotation of the object space, an image rotation. Reference [18] constructs skew coordinate systems for object space and image space in which the imaging equations take the form of the ideal-lens mapping. The corresponding coordinates are called lens-imaging coordinates. The object-space coordinate system is defined by the set of basis vectors $(\mathbf {u},\hat {\mathbf {v}},\hat {\mathbf {w}})$, the image-space coordinate system by the basis-vector set $(\mathbf {u}',\hat {\mathbf {v}}',\hat {\mathbf {w}}')$ (Fig. 1). Throughout this paper, we use the notation that hats denote unit vectors (note that $\mathbf {u}$ and $\mathbf {u}'$ are not unit vectors), and that unprimed entities refer to object space and primed entities to image space. The basis vectors $\hat {\mathbf {w}}$ and $\hat {\mathbf {w}}'$ are identical and point along the two-lens optical axis, the straight line passing through the principal points of both lenses, P$_1$ and P$_2$. In this paper, we will assume that the vector $\mathbf {P}_2-\mathbf {P}_1$ is perpendicular to the intersection line V of the included lenses. The projected focal lengths are then defined with respect to this two-lens optical axis as $g_i=f_i/\cos {\varphi _i}$, where $\varphi _i$ is the angle between the normal of the $i$th lens and the two-lens optical axis. In terms of these projected focal lengths, the two-lens system has an effective focal length
$$f_\mathrm {D}=\frac {g_1 g_2}{g_1+g_2-d}, \tag{1}$$
where $d$ is the distance between the principal points of the individual lenses (P$_1$ and P$_2$ in Fig. 1). We will call $d$ the scaling parameter in this paper. The remaining basis vectors $\mathbf {u},\hat {\mathbf {v}}$ span the object-sided transverse planes; similarly, $\mathbf {u}',\hat {\mathbf {v}}'$ span the image-sided transverse planes. As defined, the object-sided transverse planes form a set of parallel planes which is imaged by the two-lens system to another set of parallel planes, the image-sided transverse planes. In general, the two sets of planes are not parallel to each other. In the global Cartesian $x,y,z$ coordinate system defined in Fig. 1 (chosen with its origin at P$_1$; the $z$ axis points towards P$_2$, that is, along the two-lens optical axis; the $y$ axis is parallel to the intersection line V of the included lenses; and the $x$ axis is perpendicular to both $y$ and $z$), the position vectors of the principal points P$_1$ and P$_2$ are $\mathbf {P}_1=(0,0,0)$ and $\mathbf {P}_2=(0,0,d)$, the object-sided transverse planes form an angle $\theta$ with the $x$ axis, and the image-sided transverse planes an angle $\theta '$. The difference $|\theta '-\theta |$ can be either bigger or smaller than the difference $|\varphi _2-\varphi _1|$; Fig. 1 shows the case $|\theta '-\theta |>|\varphi _2-\varphi _1|$. We will specify the allowed ranges of all angular parameters in detail in section 4, where the relevant formulas are provided.

Now we briefly review the derivation of the two-lens imaging equation. Let us denote by $P_z=df_\mathrm {D}/g_2$ and $P'_z=d-df_\mathrm {D}/g_1$ the $z$ coordinates of the two-lens object-sided principal point, P, and of the image-sided principal point, P$'$, respectively [18]. Then, the equations for the transverse planes can be written in the form
It can be seen that the object-sided transverse plane given by Eq. (2) intersects the $z$ axis at the point $z=P_z+w$; similarly, the image-sided transverse plane intersects the $z$ axis at $z=P'_z+w'$. The lengths of the basis vectors $\mathbf {u}$ and $\mathbf {u}'$ are chosen such that
and the basis vectors $\hat {\mathbf {v}}$ and $\hat {\mathbf {v}}'$ are chosen such that

Equations (2), (3) and (4) provide coordinate transformations $(x,y,z)\rightarrow (u,v,w)$ and $(x',y',z')\rightarrow (u',v',w')$, which can be written in the following matrix form:
These transformations correspond to those presented in Eqs. (44) and (45) in Ref. [18] for $\alpha =0$. Transformations (6) and (7) ensure that the origin of the object-sided lens-imaging coordinate system coincides with the object-sided two-lens principal point, P, and that the origin of the image-sided lens-imaging coordinate system coincides with the image-sided two-lens principal point, P$'$. As mentioned at the beginning of this section and shown in Ref. [18], when expressed in lens-imaging coordinates, the imaging equation due to two skew lenses is of the standard form
The image position can be expressed in the global $(x', y', z')$ coordinate system by applying the inverse coordinate transformation $(u',v',w')\rightarrow (x',y',z')$,
Below, we employ this equation as the basis for our derivation of an imaging equation due to three skew lenses, and we show that such a combination (under certain circumstances) can provide an image rotation.
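As a concreteness check, the two-lens quantities defined above can be evaluated numerically. The following sketch (in Python, with illustrative parameter values that are not taken from the paper) computes the projected focal lengths $g_i=f_i/\cos\varphi_i$, the effective focal length $f_\mathrm{D}$ (assumed here to be given by the standard two-element combination formula of Ref. [18], $f_\mathrm{D}=g_1g_2/(g_1+g_2-d)$) and the principal-point coordinates $P_z$ and $P'_z$:

```python
import math

# Illustrative parameters (arbitrary values, not taken from the paper):
# focal lengths f1, f2; angles phi1, phi2 between the lens normals and
# the two-lens optical axis; principal-point separation d.
f1, f2 = 0.5, 0.8
phi1, phi2 = math.radians(10.0), math.radians(25.0)
d = 1.0

# Projected focal lengths, g_i = f_i / cos(phi_i).
g1 = f1 / math.cos(phi1)
g2 = f2 / math.cos(phi2)

# Effective focal length of the two-lens system D.  We assume the
# standard two-element combination formula of Ref. [18], written in
# terms of the projected focal lengths.
f_D = g1 * g2 / (g1 + g2 - d)

# z coordinates of the two-lens principal points P and P', as defined
# in the text: P_z = d f_D / g2 and P'_z = d - d f_D / g1.
P_z = d * f_D / g2
Pp_z = d - d * f_D / g1

print(g1, g2, f_D, P_z, Pp_z)
```

Note that for these parameter values the principal planes lie outside the space between the lenses, which is one of the configurations discussed in section 3.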
3. How to perform image rotation with three skew lenses
The idea of how to obtain image rotation with a combination of three lenses is shown in Fig. 2(a). Consider a system D of two skew lenses with principal planes $\mathcal {P}$, $\mathcal {P}'$ (which are placed before and after the two-lens system in the particular case shown in Fig. 2(a); for a different set of parameters, one or both of the principal planes $\mathcal {P}$ and $\mathcal {P}'$ can also lie between the lenses) and effective focal length $f_\mathrm {D}$ (which can be either positive or negative; Fig. 2(a) shows the case $f_\mathrm {D}<0$). A set of planes parallel to $\mathcal {P}$ — the object-sided transverse planes — is imaged by D to a set of planes parallel to $\mathcal {P}'$ — the image-sided transverse planes.
Since the object- and image-sided transverse planes form an angle $\Delta \theta =\theta '-\theta$, the mapping due to the system D corresponds to that due to a single lens, but with sheared object and image spaces. If, however, another shearing is implemented such that it exactly cancels the original one, the resulting mapping can correspond to a rotation of the object space, an image rotation. The additional shearing can be performed by inserting a lens L$_3$ into the imaging system such that the plane of L$_3$ coincides with $\mathcal {P}'$ and the object-sided focal plane of L$_3$ coincides with the image-sided focal plane $\mathcal {F}'$ of the system D. By doing so, all the object-sided transverse planes are imaged to image space by this new system $\mathrm {T}=\mathrm {D}+\mathrm {L}_3$ with a magnification equal to $+1$ (i.e. non-inverted). To cancel the coordinate shearing due to D, the principal point P$_3$ must be placed such that all lines perpendicular to the object-sided transverse planes are imaged to lines that are also perpendicular to the transverse planes in image space. To find the desired position of P$_3$, we employ a collimated ray bundle perpendicular to the object-sided principal plane $\mathcal {P}$. This bundle is focused by the system D to a point X in the image-sided focal plane $\mathcal {F}'$. The principal point P$_3$ of lens L$_3$ should then be placed within the image-sided principal plane $\mathcal {P}'$ of D such that the object-sided focal point of L$_3$ coincides with point X. By doing so, the ray bundle becomes collimated again, and perpendicular to the image-sided principal plane $\mathcal {P}'$, after transmission through the lens L$_3$ (see Fig. 2(a)).
One can see that the projected focal length $g_3$ of lens L$_3$ is exactly opposite to the effective focal length $f_\mathrm {D}$ of the two-lens system D, i.e. $g_3=-f_\mathrm {D}$. There is no shearing of the image due to the three-lens system $\mathrm {T}$: the parallel ray bundle considered above was chosen to be perpendicular to the transverse planes in both object and image space; as object-space points on one such ray are imaged to image-space points on the same ray, planes that are perpendicular to the object-sided transverse planes are imaged to planes that are again perpendicular to the image-sided transverse planes. This means that the mapping due to the three-lens system $\mathrm {T}=\mathrm {D}+\mathrm {L}_3$ is a pure rotation by an angle $\Delta \theta =\theta '-\theta$. Figure 2(c) demonstrates this image rotation: point O$'''$, the image of object point O due to the system $\mathrm {T}$, corresponds to point O rotated by an angle $\Delta \theta$ around the intersection line V.
4. Mathematical treatment
In this section, we will formulate the ideas from the previous section mathematically. We will see below that it is convenient to work in the lens-imaging coordinates of the two-lens system D. In these lens-imaging coordinates, the three-lens system $\mathrm {T}$ is equivalent to a system of two coplanar lenses like the one presented in Fig. 2(b): the two-lens system D acts like a lens L$_\mathrm {D}$ of focal length $f_\mathrm {D}$ and with its principal points coinciding (i.e. P=P$'$) at the origin of the coordinate system. The other lens, L$_3$, is of focal length $g_3=-f_\mathrm {D}$ and its principal point P$_3$ is offset in the (transverse) $u$-direction. The offset has to be chosen such that the image shearing due to the combination L$_\mathrm {D}$ + L$_3$ exactly cancels the shearing of the lens-imaging coordinate system, so that the mapping in the global Cartesian coordinates is an image rotation.
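The claim that this coplanar pair, L$_\mathrm{D}$ of focal length $f_\mathrm{D}$ and L$_3$ of focal length $-f_\mathrm{D}$ offset by $h$, produces a pure unit-magnification shear can be checked numerically. In the sketch below (Python), the point-mapping form $\mathbf{Q}'=\mathbf{P}+\frac{f}{f+a}(\mathbf{Q}-\mathbf{P})$, with $a$ the longitudinal distance of $\mathbf{Q}$ from the lens plane, is our assumed form of the ideal-thin-lens mapping, consistent with the standard description; the composite is found to map $(u,v,w)$ to $(u-(h/f_\mathrm{D})\,w,\,v,\,w)$:

```python
def ideal_lens(q, p, f):
    """Ideal-thin-lens point mapping (assumed form): for a lens in the
    plane w = p[2] with principal point p and focal length f, the image
    of q is p + f/(f + a) * (q - p), where a = q[2] - p[2] is the
    longitudinal distance of q from the lens plane."""
    a = q[2] - p[2]
    m = f / (f + a)
    return tuple(p[i] + m * (q[i] - p[i]) for i in range(3))

f_D, h = 0.7, 0.3            # illustrative values, not from the paper
for q in [(0.2, -0.1, 0.4), (-1.0, 0.5, -0.25), (0.0, 0.0, 1.1)]:
    q1 = ideal_lens(q, (0.0, 0.0, 0.0), f_D)     # lens L_D at the origin
    q2 = ideal_lens(q1, (h, 0.0, 0.0), -f_D)     # lens L_3, offset h, focal -f_D
    # The composite is a pure shear of magnification +1:
    # (u, v, w) -> (u - (h/f_D) w, v, w).
    assert abs(q2[0] - (q[0] - h / f_D * q[2])) < 1e-12
    assert abs(q2[1] - q[1]) < 1e-12
    assert abs(q2[2] - q[2]) < 1e-12
```

The shear in the $u$ direction is proportional to $h/f_\mathrm{D}$, which is why the offset $h$ can be tuned to cancel the shearing of the lens-imaging coordinate system.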
This formulation enables a simple description of the imaging due to the system T in lens-imaging coordinates: first, an intermediate image $(u',v',w')$ due to lens L$_\mathrm {D}$ is created, in accordance with Eq. (7). Then, the intermediate image $(u',v',w')$ is re-imaged by lens L$_3$ (with principal-point position $(h,0,0)$ and focal length $g_3=-f_\mathrm {D}$) as follows:
Combining this equation with Eq. (7) yields the imaging equation due to the three-lens system T in lens-imaging coordinates, which can be written in the matrix form
To transform this imaging equation into global Cartesian coordinates, we simply replace the factor describing the effect of a thin lens in Eq. (9) by the matrix describing the effect of the three-lens system T (Eq. (11)). This gives the equation
Performing the matrix multiplication yields
For Eq. (13) to describe a rotation, a necessary (but not sufficient) requirement is that the off-diagonal elements of the $3\times 3$ matrix on the right-hand side be anti-symmetric. That yields the condition
We are required to set $\theta = -\theta '$ in order that the denominator equals 1 and hence that the matrix in Eq. (13) can be a pure rotation matrix. With our earlier definition $\Delta \theta \equiv \theta ' - \theta$, the condition on the fraction $h/f_\mathrm {D}$ simplifies to
and it is easy to show that the diagonal terms in the $3\times 3$ matrix in Eq. (13) are both equal to $\cos {\Delta \theta }$ if $\theta = -\theta '$. Therefore, Eq. (13) can be written in the form

From Fig. 2(a) one can see that the point P$'$ is the point P rotated around line V by an angle $\Delta \theta = \theta ' - \theta$, and therefore satisfies the equation
Here, $V_x$ and $V_z$ are the $x$ and $z$ coordinates of the intersection of the line V with the plane $y=0$, and $V_y\in \mathbb {R}$. Summing Eqs. (18) and (19) gives
This equation states that the mapping between object space and image space is a rotation by an angle $\Delta \theta =\theta '-\theta =2\theta '$ around V. Therefore, we will call the three-lens system T a skew-lens image rotator. The formulas for the parameters of the skew-lens image rotator, namely the coordinates of the intersection line V, the focal lengths $f_1$, $f_2$, $f_3$ and the principal-point position $\mathbf {P}_3=(-f_\mathrm {D}\sin {\Delta \theta },0,2f_\mathrm {D}\sin ^2\frac {\Delta \theta }{2})$ of lens L$_3$, are provided in Appendix A.
The formulas derived in Appendix A show that crucial parameters of the skew-lens image rotator are the angle of rotation $\Delta \theta$, the angle $\varphi _{13}=\Delta \theta /2-\varphi _1$ between lenses L$_1$ and L$_3$, and the angle $\varphi _{12}=\varphi _2-\varphi _1$ between lenses L$_1$ and L$_2$, depicted in Fig. 2(a). To specify the allowed ranges of these angular parameters, we will employ the following formulas for the focal lengths $f_1$, $f_2$, and $f_3$ of the lenses included in the rotator (the full derivation is provided in Appendix A):
To avoid the non-physical cases in which $f_i=0$ or $f_i=\infty$ (where $i\in \{1,2,3\}$), and to ensure that the order of lenses L$_1$, L$_2$ and L$_3$ is preserved, the angles $\Delta \theta$, $\varphi _{13}$ and $\varphi _{12}$ must satisfy the following conditions: $\varphi _{12}\neq N\pi$ ($N\in \mathbb {N}$); $\mathrm {Sgn}(\varphi _{12})=\mathrm {Sgn}(\varphi _{13})$; $|\varphi _{12}| <|\varphi _{13}|$ (if $|\varphi _{13}|>\pi$, then the condition $|\varphi _{12}|<\pi$ needs to be satisfied for light rays to pass through both lenses L$_1$ and L$_2$; similarly, $|\varphi _{13}-\varphi _{12}|<\pi$); $\Delta \theta \neq \varphi _{13}$; $\Delta \theta \neq 0$; and $\Delta \theta \neq \pm 2\pi$. All other combinations $(\Delta \theta,\varphi _{13},\varphi _{12})$ are allowed, and we can therefore specify the ranges of these angles: $\Delta \theta \in (0,\pm 2\pi )$, $\varphi _{13}\in (0,\pm (\pi +|\varphi _{12}|))$ and $\varphi _{12}\in (0,\mathrm {Sgn}(\varphi _{13})\,\mathrm {min}(\pi,|\varphi _{13}|))$.
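These angular-range conditions can be collected into a small validity check. The following sketch (Python; `rotator_angles_allowed` is a hypothetical helper name, and the tolerance handling is our own choice) simply encodes the conditions listed above:

```python
import math

def rotator_angles_allowed(d_theta, phi13, phi12, tol=1e-9):
    """Hypothetical helper encoding the angular-parameter conditions for
    a skew-lens image rotator stated in the text (angles in radians)."""
    # phi12 must not be a multiple of pi (L1 and L2 must not be parallel).
    if abs(math.remainder(phi12, math.pi)) < tol:
        return False
    # phi12 and phi13 must have the same sign, with |phi12| < |phi13|.
    if math.copysign(1.0, phi12) != math.copysign(1.0, phi13):
        return False
    if not abs(phi12) < abs(phi13):
        return False
    # Light rays must be able to pass through consecutive lenses:
    # |phi12| < pi and |phi13 - phi12| < pi.
    if abs(phi12) >= math.pi or abs(phi13 - phi12) >= math.pi:
        return False
    # Degenerate rotation angles are excluded.
    if abs(d_theta) < tol or abs(abs(d_theta) - 2 * math.pi) < tol:
        return False
    if abs(d_theta - phi13) < tol:
        return False
    return True
```

For example, $\Delta\theta=\pi$, $\varphi_{13}=1$, $\varphi_{12}=0.4$ passes, while swapping $\varphi_{13}$ and $\varphi_{12}$ violates $|\varphi_{12}|<|\varphi_{13}|$ and fails.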
The image rotation due to the three-lens system discussed above is demonstrated by the ray-tracing simulations shown in Fig. 3. Figures 3(a) and 3(b) show views through two different three-lens combinations designed to rotate the image seen through all three lenses by an angle $\Delta \theta = -15^\circ$ around an axis V; Fig. 3(c) shows the same scene without the lenses, but with the camera instead rotated around V by $-\Delta \theta$. The part of the image in which the scene is seen through all three lenses in (a,b) is identical to the corresponding part of the image shown in (c), as expected. A comparison of Figs. 3(a) and 3(b) shows that the size of the field of view, that is, the angular size of the image seen through all three lenses, depends on the parameters of the three lenses. Specifically, the field of view can be increased if the inclination $\varphi _2-\varphi _1$ between the first two lenses is reduced.
5. Physical realisability of skew-lens image rotators
Ideal thin lenses are often used as a first step in the design of optical systems, which are then realised by replacing the ideal thin lenses with physical, usually refractive, lenses. The resulting device works well if the lenses are used paraxially, but this is not the case in our skew-lens image rotators, and raytracing simulations with more realistic optical components (specifically, simple phase holograms of lenses; Fig. 4(b)) indeed show that the simulated skew-lens image rotator does not work. It is therefore important to discuss the limits of applicability of our devices.
Firstly, research on metalenses is still progressing rapidly and is in many ways making metalenses more like ideal thin lenses, for example by making them flatter and making them work at higher numerical apertures (and therefore less paraxially) [21]. It appears unrealistic to expect metalenses to become virtually ideal, but there is a possibility that they will become “ideal enough” for theoretical ideal-lens devices such as skew-lens image rotators and omnidirectional lenses [12] to become practical, at least over a limited parameter range.
Secondly, a phase hologram can be optimised to image a pair of positions into each other, stigmatically and, in the simplest case, for a single wavelength, simply by setting the phase shift introduced by the phase hologram to compensate for the difference in phase due to the difference in geometrical path length between light rays that have passed through different points on the hologram. Specifically, a phase hologram can be designed to image between the position of the pinhole of a pinhole camera and the image of this pinhole position due to an ideal thin lens. In a device comprising several ideal thin lenses, the lens closest to the pinhole camera produces an image of the pinhole position, the next-closest lens produces an image of that image, and so on; if the ideal thin lenses are replaced by phase holograms optimised for these image positions, then the view through the resulting device is identical to that through the ideal-lens device in a photo taken by the pinhole camera. This is shown in Fig. 4(c). If the pinhole position differs from the position for which the holograms are optimised (Fig. 4(d)), or if the photo is taken with a finite-size aperture (Fig. 4(e)), the view through the holographic device differs from that through the ideal-lens device. In this way, it might be possible to create physical realisations of ideal-lens devices that work for a particular observer position.
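The phase-hologram optimisation just described can be sketched numerically. In the following (Python; the function name and geometry are illustrative assumptions, for a flat hologram in the plane $z=0$ and a single wavelength), the hologram phase is chosen to cancel the variation of the geometric path from the object point A through the hologram to the image point B, making the total optical path the same for every hologram point:

```python
import math

def hologram_phase(x, y, A, B, wavelength):
    """Phase (in radians) a flat hologram in the plane z = 0 must add at
    the point (x, y, 0) so that it images point A stigmatically to point
    B at the given wavelength: the added phase compensates the geometric
    path A -> (x, y, 0) -> B so that the total optical path is constant.
    (Illustrative sketch of the procedure described in the text.)"""
    k = 2 * math.pi / wavelength
    path = math.dist(A, (x, y, 0.0)) + math.dist((x, y, 0.0), B)
    # The reference path through the hologram centre fixes the constant.
    ref = math.dist(A, (0.0, 0.0, 0.0)) + math.dist((0.0, 0.0, 0.0), B)
    return k * (ref - path)

# Check: phase + k * geometric path is the same at every hologram point.
A, B, lam = (0.0, 0.0, -1.0), (0.0, 0.0, 2.0), 633e-9
k = 2 * math.pi / lam
totals = []
for (x, y) in [(0.0, 0.0), (0.1, 0.0), (-0.05, 0.2)]:
    geom = math.dist(A, (x, y, 0.0)) + math.dist((x, y, 0.0), B)
    totals.append(hologram_phase(x, y, A, B, lam) + k * geom)
assert max(totals) - min(totals) < 1e-6
```

As in the text, such a hologram is stigmatic only for the chosen pair of points A and B; for other object positions the imaging is in general aberrated.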
Thirdly, it is — at least in principle — possible to devise transformation-optics devices in which the light-ray trajectories are identical to those in our ideal-lens structures. The permittivity and permeability tensors of the material that forms such a device can be calculated from the mapping, via the Jacobian and the metric tensor [22]. However, it should be noted that the metric tensor diverges at the focal planes (for a single lens; in lens combinations, at any images of electromagnetic-space positions at infinity); this can be avoided by ensuring that physical space does not contain such positions. Finally, it should also be noted that there are simpler ways to design rotating transformation-optics devices [23], but perhaps the general idea of realising, in the form of transformation-optics devices, ideas developed in the context of ideal thin lenses will find applications.
6. Application of a skew-lens image rotator for simulations of curved spaces
Provided physical realisations of tilted-lens image rotators can be made to work well enough, these devices might well find use in existing applications of optical image rotation, for example in optical derotation [24].
Here we discuss an additional potential application, one that makes good use of the fact that our tilted-lens image rotators rotate around the axis along which the lenses intersect, and that this rotation axis is usually approximately perpendicular to the direction along which light travels through the lenses. In Ref. [25], a novel approach to the optical simulation of curved spaces was presented. The strategy is based on the idea of unfolding manifolds into a flat space, so that light rays propagate inside these flattened manifolds as they would in flat space. When the curved space is unfolded, wedges of “missing space” appear [25]. To preserve the topology of the manifold even after such an unfolding and flattening, the faces of the wedges of “missing space” need to be equipped with “gluing instructions”, which ensure that these faces are identified with each other mathematically. In the corresponding optical simulation, the faces are identified optically, by a device that transfers the optical field between the two faces of the “missing space”. Such a device is called a space-cancelling wedge.
For purely ray-optical simulations of curved spaces (i.e. neglecting wave-optical aspects), a symmetric skew-lens image rotator (i.e. $\varphi _{12}=\varphi _{13}/2$) is an appropriate candidate for a space-cancelling wedge. That this is indeed the case for the surface of a tetrahedron can be seen in Fig. 5: panel (a) shows an unfolded net of the surface of a regular tetrahedron, surrounded by three ideal-lens space-cancelling wedges, each providing an image rotation by an angle $\Delta \theta =\pi$. Panels (b) and (c) present raytracing simulations of the view of a white sphere located on the surface of a tetrahedron as seen from within the surface, with an added third, Euclidean, dimension perpendicular to the surface. Panel (b) shows the simulation with space-cancelling wedges realised using skew-lens image rotators, whereas (c) presents the same scene with perfect space-cancelling wedges. In the simulation with ideal-lens space-cancelling wedges, white vertical lines can be seen, caused by light rays missing the lenses completely: this appears to be a fundamental defect, which cannot be removed completely, but it can be significantly suppressed by shrinking the angles between the lenses, by decreasing the scaling parameter $d$, or by combining both strategies.
7. Application of the skew-lens image rotator to resolving an open question regarding the loop-imaging condition
Here we use the three-lens rotator construction to resolve a previously unsolved problem arising from the authors’ previous work on ideal-lens transformation optics (TO) devices [12,13,26]. When designing these devices, we derived an essential condition that must be satisfied for a lens structure to be a TO device: the loop-imaging condition requires the combination of all ideal lenses (or, more generally, optical components/materials) encountered along any closed loop to image every object-space point back to itself [12]. This poses the general question: which combinations of ideal lenses image every point back to itself?
In Ref. [26], this question was asked for glenses [27], a generalisation of ideal thin lenses. A number of conditions on the positions of the nodal points (in ideal thin lenses, the nodal point coincides with the principal point) were derived, and it was noted that the conditions on the nodal-point positions become less restrictive as the number of glenses increases. Specifically, for a combination of 3 glenses to image every point to itself, the nodal points of all three glenses must coincide; for a combination of 4 glenses, the nodal points must lie on a straight line; and for a combination of some minimum number of glenses, the nodal points no longer have to lie on a straight line. That minimum number of glenses was shown to be $\leq 6$, so it is either 5 or 6.
Applying our findings on image rotation with three lenses, we can show that the minimum number is in fact 5. We show this by constructing an example of a combination of 5 lenses (which are special cases of glenses) that images every point to itself, as follows. Consider two skew-lens image rotators, T and T$'$, each rotating the image by an angle $\Delta \theta =\pi$, positioned such that their intersection lines (and also rotation axes) V and V$'$ coincide. As the rotation axes are the same, the combination of T and T$'$ effects a single $2 \pi$ rotation, and so maps every point to itself, and therefore satisfies the loop-imaging condition.
Furthermore, if T and T$'$ are positioned such that the third lens L$_3$ of T is coplanar with the first lens L$'_1$ of T$'$, then lenses L$_3$ and L$'_1$ can be combined into a single lens, L$_{13}$ (see Fig. 6). The focal length of the combined lens L$_{13}$ is $f_{13}=f'_{1}f_{3}/(f'_{1}+f_{3})$, where $f'_{1}$ and $f_3$ are the focal lengths of lenses L$'_1$ and L$_3$, respectively. The combination of T and T$'$, which satisfies the loop-imaging condition, therefore consists of 5 lenses.
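The combination of the coplanar lenses L$_3$ and L$'_1$ into a single lens can be checked numerically. The sketch below (Python; the point-mapping form $\mathbf{Q}'=\mathbf{P}+\frac{f}{f+a}(\mathbf{Q}-\mathbf{P})$ is our assumed ideal-thin-lens mapping, with $a$ the longitudinal distance of $\mathbf{Q}$ from the lens plane) verifies that two coplanar ideal lenses sharing a principal point act as a single lens with $f_{13}=f'_1 f_3/(f'_1+f_3)$:

```python
def ideal_lens_image(q, p, f):
    """Ideal-thin-lens point mapping (assumed form): for a lens in the
    plane z = p[2] with principal point p and focal length f, the image
    of q is p + f/(f + a) * (q - p) with a = q[2] - p[2]."""
    a = q[2] - p[2]
    m = f / (f + a)
    return tuple(p[i] + m * (q[i] - p[i]) for i in range(3))

f3, f1p = 0.6, -0.9          # illustrative focal lengths of L_3 and L'_1
f13 = f1p * f3 / (f1p + f3)  # combined focal length from the text
p = (0.2, 0.0, 0.5)          # shared principal point (arbitrary)

for q in [(1.0, 0.3, -0.7), (-0.4, 0.1, 1.9)]:
    # Imaging successively through L_3 and L'_1 ...
    via_two = ideal_lens_image(ideal_lens_image(q, p, f3), p, f1p)
    # ... gives the same image as a single lens of focal length f13.
    via_one = ideal_lens_image(q, p, f13)
    assert all(abs(via_two[i] - via_one[i]) < 1e-9 for i in range(3))
```

This is the familiar formula for two thin lenses in contact, here applied to the coplanar pair L$_3$ and L$'_1$ with coinciding principal points.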
What about the locations of the principal points? The parameters for each rotator can be chosen such that the principal points of all 6 lenses (before combining L$_3$ and L$'_1$ into a single lens) in the two rotators lie in the same plane and at the same (finite) distance from the line V (which is the case for the “regular rotator” described by Eq. (35) in Appendix A) – they lie on a circle. As the principal points of lenses L$_3$ and L$'_1$ coincide, this common principal point is also the principal point of the combined lens L$_{13}$, and so the principal points of all 5 lenses in the $2 \pi$ rotator lie on a circle of finite radius. Specifically, the principal points do not lie on a straight line. This proves that a combination of five (g)lenses can image every point to itself even if the principal points of the intersecting lenses do not lie on a straight line.
8. Conclusions
We have constructed, to our knowledge for the first time, an optical image rotator from three tilted ideal lenses.
Our work contributes to previous findings related to tilted lenses: our description of imaging with tilted lenses is closely related to the Scheimpflug theorem [28], which is the basis of perspective-control (or tilt-shift) lenses. Tilted lenses have also been considered in optometry, to adjust for cylindrical power [29–31]. Furthermore, the analysis and use of tilted optics has been considered in modern optical instruments and in modifying laser-beam parameters [32,33]. Strategies for optical map transformations have been considered for some time, both theoretically and experimentally, including rotations about the optical axis [34–37].
Unfortunately, our tilted-lens image rotators do not work very well if the ideal lenses are replaced by more realistic lenses. Perhaps in the longer term research on metalenses will lead to physical realisations, but for the moment our work is mostly of theoretical interest.
Appendix A. Parameters of the three-lens image-rotation system
In this paper, we showed that an appropriate combination of three skew lenses provides an image rotation of the entire object space. However, the relationship between the parameters of the system T and the rotation angle $\Delta \theta =\theta '-\theta$ remains to be specified; we do so in this appendix. First, we will find the intersection line $\mathbf {V}=(V_x,V_y,V_z)$ of all lenses included in the rotator (which is also the line around which the rotation is performed). Then, we will derive formulas for the focal lengths $f_1$, $f_2$ and $f_3$ of all included lenses in terms of the angles $\varphi _1$ and $\varphi _2$ (formed by lenses L$_1$ and L$_2$, respectively, with the $x$ axis), the scaling parameter $d$, and the angle of image rotation $\Delta \theta$. After that, we will re-express the obtained formulas for $f_1$, $f_2$ and $f_3$ using more “user-friendly” parameters: the scaling parameter $d$, the angle of rotation $\Delta \theta$, the angle $\varphi _{13}$ between lenses L$_1$ and L$_3$, and the angle $\varphi _{12}$ between lenses L$_1$ and L$_2$. Finally, we will express the principal-point positions $\mathbf {P}_1$, $\mathbf {P}_2$ and $\mathbf {P}_3$ in a Cartesian coordinate system $(\xi,\eta,\zeta )$ whose origin coincides with the point $\mathbf {V_0}=(V_x,0,V_z)$, whose $\eta$ axis coincides with line V, and whose $\xi$ axis is parallel to lens L$_2$.
Now, let us find the intersection line $\mathbf {V}$ of all lenses included in the rotator. It can be found easily as the intersection of lenses L$_1$ and L$_2$, of focal lengths $f_1$ and $f_2$, which form angles $\varphi _1$ and $\varphi _2$ respectively with the $x$ axis (as indicated in Fig. 1). The plane of lens L$_1$ can be expressed as $x=-z\cot {\varphi _1}$; analogously, the plane of lens L$_2$ can be expressed as $x=-(z-d)\cot {\varphi _2}$. Recall that the scaling parameter $d$ is the distance between the principal points $\mathbf {P}_1=(0,0,0)$ and $\mathbf {P}_2=(0,0,d)$ of lenses L$_1$ and L$_2$. The intersection line $\mathbf {V}$ of L$_1$ and L$_2$ is then found to be
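As a numerical check of this construction (Python sketch with illustrative angles; the parameter values are not taken from the paper), solving the two plane equations simultaneously gives the coordinates $V_x$ and $V_z$ of the intersection line:

```python
import math

# Illustrative parameters (not from the paper).
d = 1.0
phi1, phi2 = math.radians(20.0), math.radians(50.0)

# Lens planes: L1 is x = -z cot(phi1), L2 is x = -(z - d) cot(phi2).
c1, c2 = 1 / math.tan(phi1), 1 / math.tan(phi2)

# Equating the two plane equations, -z c1 = -(z - d) c2, and solving
# for z gives V_z; V_x then follows from the plane of L1.
V_z = d * c2 / (c2 - c1)
V_x = -V_z * c1

# The intersection point lies in both lens planes.
assert abs(V_x - (-V_z * c1)) < 1e-12
assert abs(V_x - (-(V_z - d) * c2)) < 1e-12
```

(The $y$ coordinate is free, $V_y\in\mathbb{R}$, since both lens planes contain the direction of the $y$ axis.)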
In the following, we derive the formulas for the focal lengths $f_1$, $f_2$ and $f_3$ and the principal-point position $\mathbf {P}_3$ of lens L$_3$. According to the theory of imaging with two skew lenses presented in Ref. [18], the object-sided transverse planes make an angle $\theta$ with the $x$-axis, which is related to the parameters of the system D = L$_1$ + L$_2$ by Eq. (35) in Ref. [18]
where $g_i=f_i/\cos {\varphi _i}$ are the projected focal lengths of the included lenses and the scaling parameter $d$ is the separation between the principal points of the lenses. A similar formula can be derived for the angle $\theta '$ between the $x$-axis and the image-sided transverse planes (Eq. (36) in Ref. [18]).

As shown in section 4, for the system D to be part of an image rotator, the angles $\theta$ and $\theta '$ must satisfy the condition $\theta +\theta '=0$. This implies a constraint on the parameters of the system D. Specifically, summing Eqs. (22) and (23) with $\theta =-\theta '$ yields the following relation:
If Eq. (24) is satisfied, the following simple expressions can be derived for $\cot {\theta }$ and $\cot {\theta '}$ respectively:
For a given rotation angle $\Delta \theta =\theta '-\theta =2\theta '$, Eq. (26) can be solved for the focal length $f_1=g_1\cos {\varphi _1}$ to give
where we have used $\theta '=\Delta \theta /2$, so that $\cot {\theta '}=\cot \frac {\Delta \theta }{2}$. A formula for the focal length $f_2=g_2\cos {\varphi _2}$ of lens L$_2$ can be obtained from Eq. (24).

Finally, the third lens, L$_3$, needs to be added to the system D to complete the image rotator. The plane of lens L$_3$ coincides with the image-sided principal plane $\mathcal {P}'$ of the system D. From Eq. (16), the position of the principal point of lens L$_3$, expressed in lens-imaging coordinates, is $(u,v,w)=(u',v',w')=(-f_\mathrm {D}\sin {\Delta \theta },0,0)$, where $f_\mathrm {D}$ denotes the effective focal length of the system D, given by Eq. (1). Since the plane of lens L$_3$ coincides with the image-sided principal plane $\mathcal {P}'$ of the two-lens system formed by lenses L$_1$ and L$_2$, the position $\mathbf {P}_3=(P_{3x}, P_{3y},P_{3z})$ of its principal point, expressed in global Cartesian coordinates, can be found using Eq. (8) to be
The projected focal length $g_3$ of lens L$_3$ equals $-f_\mathrm {D}$ and thus the actual focal length $f_3$ is
It is convenient to express the formulas for the focal lengths $f_1$, $f_2$ and $f_3$ in terms of the following parameters: the scaling parameter $d$, the angle of rotation $\Delta \theta$, the angle $\varphi _{13}$ between lenses L$_1$ and L$_3$, and the angle $\varphi _{12}\in (0,\varphi _{13})$ between lenses L$_1$ and L$_2$. From Figs. 1 and 2(a), one can deduce that $\varphi _{13}=\Delta \theta /2-\varphi _1$ and $\varphi _{12}=\varphi _2-\varphi _1$ (provided that $\theta =-\Delta \theta /2$ and $\theta '=\Delta \theta /2$). Inserting these parameters into Eqs. (27), (28) and (30) yields the following formulas for $f_1$, $f_2$ and $f_3$:
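The relations $\varphi _{13}=\Delta \theta /2-\varphi _1$ and $\varphi _{12}=\varphi _2-\varphi _1$ are easily inverted to recover the lens orientations from the “user-friendly” parameters. A minimal sketch of this conversion (the function name is ours):

```python
import math

def lens_angles(delta_theta, phi13, phi12):
    """Invert phi13 = delta_theta/2 - phi1 and phi12 = phi2 - phi1
    to recover the angles (phi1, phi2) that lenses L1 and L2 make
    with the x-axis."""
    phi1 = delta_theta / 2.0 - phi13
    phi2 = phi1 + phi12
    return phi1, phi2
```

For instance, $\Delta \theta =\pi /2$ and $\varphi _{13}=\pi /4$ give $\varphi _1=0$, and $\varphi _{12}=\pi /8$ then gives $\varphi _2=\pi /8$.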
Finally, we express the principal-point positions $\mathbf {P}_1$, $\mathbf {P}_2$ and $\mathbf {P}_3$ in a coordinate system $(\xi,\eta,\zeta )$ whose origin coincides with the point $\mathbf {V_0}=(V_x,0,V_z)$ on the line $\mathbf {V}$, whose $\eta$-axis coincides with the line $\mathbf {V}$, and whose $\xi$-axis is parallel to lens L$_2$. In this coordinate system, the vectors $\mathbf {P}_1$, $\mathbf {P}_2$ and $\mathbf {P}_3$ take the form
It is worth mentioning that the formulas presented in Eqs. (31) and (33) simplify under additional assumptions. Here, we provide two examples:
1. Symmetric rotator: $\varphi _{12}=\varphi _{13}/2$. In such a case, $$\begin{aligned}f_1=f_3 &= \frac{d}{2\sin{\frac{\Delta\theta}{2}}}\sin{\left(\Delta\theta-\varphi_{13}\right)}, \\ f_2 &= \frac{d}{2\sin{\frac{\Delta\theta}{2}}}\sin{\frac{\varphi_{13}}{2}}, \\ R_1=R_3 &= -\frac{d}{\sin{\frac{\varphi_{13}}{2}}}\cos{\left(\frac{\Delta\theta-\varphi_{13}}{2}\right)}, \\ R_2 &= -\frac{d}{\sin{\frac{\varphi_{13}}{2}}}\cos{\left(\varphi_{13}-\frac{\Delta\theta}{2}\right)}. \end{aligned}$$
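The symmetric-rotator formulas above can be evaluated directly. The following is a minimal sketch (the function name is ours; $R_1$, $R_2$ and $R_3$ are the quantities appearing in Eq. (33)):

```python
import math

def symmetric_rotator(d, delta_theta, phi13):
    """Focal lengths f1 = f3 and f2, and the quantities R1 = R3 and R2
    of Eq. (33), for the symmetric rotator (phi12 = phi13/2)."""
    s = d / (2.0 * math.sin(delta_theta / 2.0))
    f1 = f3 = s * math.sin(delta_theta - phi13)
    f2 = s * math.sin(phi13 / 2.0)
    r = -d / math.sin(phi13 / 2.0)
    R1 = R3 = r * math.cos((delta_theta - phi13) / 2.0)
    R2 = r * math.cos(phi13 - delta_theta / 2.0)
    return f1, f2, f3, R1, R2, R3
```

For example, $d=1$, $\Delta \theta =\pi /2$ and $\varphi _{13}=\pi /4$ give $f_1=f_3=1/2$, $R_1=R_3=-\cot (\pi /8)=-(1+\sqrt {2})$ and $R_2=-1/\sin (\pi /8)$.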
Funding
Engineering and Physical Sciences Research Council (EP/M010724/1).
Disclosures
The authors declare no conflicts of interest.
Data availability
Simulations were performed with our custom raytracer Dr TIM [19] (which has been significantly extended since the publication of Ref. [19]). Dr TIM’s up-to-date source code is available at [38]. Runnable Java Archives (.jar) of the software, plus data files listing the parameters used in specific simulations, are available in Ref. [39], together with Mathematica notebooks containing calculations.
References
1. M. Born and E. Wolf, Principles of Optics (Cambridge University Press, 1999), 7th ed.
2. W. J. Smith, Modern Optical Engineering (McGraw-Hill, 2000), 3rd ed.
3. J. B. Pendry, D. Schurig, and D. R. Smith, “Controlling electromagnetic fields,” Science 312(5781), 1780–1782 (2006). [CrossRef]
4. U. Leonhardt, “Optical conformal mapping,” Science 312(5781), 1777–1780 (2006). [CrossRef]
5. J. Pendry, “Optics: All smoke and metamaterials,” Nature 460(7255), 579–580 (2009). [CrossRef]
6. M. Šarbort and T. Tyc, “Spherical media and geodesic lenses in geometrical optics,” J. Opt. 14(7), 075705 (2012). [CrossRef]
7. D. Schurig, J. Mock, B. Justice, S. A. Cummer, J. B. Pendry, A. Starr, and D. Smith, “Metamaterial electromagnetic cloak at microwave frequencies,” Science 314(5801), 977–980 (2006). [CrossRef]
8. F. Monticone and A. Alù, “Invisibility exposed: physical bounds on passive cloaking,” Optica 3(7), 718–724 (2016). [CrossRef]
9. X. Chen, Y. Luo, J. Zhang, K. Jiang, J. B. Pendry, and S. Zhang, “Macroscopic invisibility cloaking of visible light,” Nat. Commun. 2(1), 176 (2011). [CrossRef]
10. H. Chen, B. Zheng, L. Shen, H. Wang, X. Zhang, N. Zheludev, and B. Zhang, “Ray-optics cloaking devices for large objects in incoherent natural light,” Nat. Commun. 4(1), 2652 (2013). [CrossRef]
11. J. S. Choi and J. C. Howell, “Paraxial ray optics cloaking,” Opt. Express 22(24), 29465–29478 (2014). [CrossRef]
12. J. Courtial, T. Tyc, J. Bělín, S. Oxburgh, G. Ferenczi, E. N. Cowie, and C. D. White, “Ray-optical transformation optics with ideal thin lenses makes omnidirectional lenses,” Opt. Express 26(14), 17872–17888 (2018). [CrossRef]
13. J. Bělín, T. Tyc, M. Grunwald, S. Oxburgh, E. N. Cowie, C. D. White, and J. Courtial, “Ideal-lens cloaks and new cloaking strategies,” Opt. Express 27(26), 37327–37336 (2019). [CrossRef]
14. M. Khorasaninejad, F. Aieta, P. Kanhaiya, M. A. Kats, P. Genevet, D. Rousso, and F. Capasso, “Achromatic metasurface lens at telecommunication wavelengths,” Nano Lett. 15(8), 5358–5362 (2015). [CrossRef]
15. M. Khorasaninejad, W. T. Chen, R. C. Devlin, J. Oh, A. Y. Zhu, and F. Capasso, “Metalenses at visible wavelengths: Diffraction-limited focusing and subwavelength resolution imaging,” Science 352(6290), 1190–1194 (2016). [CrossRef]
16. E. Arbabi, A. Arbabi, S. M. Kamali, Y. Horie, and A. Faraon, “Multiwavelength polarization-insensitive lenses based on dielectric metasurfaces with metamolecules,” Optica 3(6), 628 (2016). [CrossRef]
17. W. T. Chen, A. Y. Zhu, V. Sanjeev, M. Khorasaninejad, Z. Shi, E. Lee, and F. Capasso, “A broadband achromatic metalens for focusing and imaging in the visible,” Nat. Nanotechnol. 13(3), 220–226 (2018). [CrossRef]
18. J. Bělín and J. Courtial, “Imaging with two skew ideal lenses,” J. Opt. Soc. Am. A 36(1), 132–141 (2019). [CrossRef]
19. S. Oxburgh, T. Tyc, and J. Courtial, “Dr TIM: Ray-tracer TIM, with additional specialist scientific capabilities,” Comput. Phys. Commun. 185(3), 1027–1037 (2014). [CrossRef]
20. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, New York, 1996), chap. 5.1.3, 2nd ed.
21. J. Engelberg and U. Levy, “The advantages of metalenses over diffractive lenses,” Nat. Commun. 11(1), 1991 (2020). [CrossRef]
22. U. Leonhardt and T. Philbin, Geometry and light: the science of invisibility (Courier Corporation, 2010).
23. H. Chen and C. Chan, “Transformation media that rotate electromagnetic fields,” Appl. Phys. Lett. 90(24), 241105 (2007). [CrossRef]
24. W. F. Fagan and P. Waddell, “Industrial Applications Of Image Derotation,” in Industrial Applications of Laser Technology, vol. 0398, W. F. Fagan, ed. (SPIE, 1983), pp. 193–199.
25. D. G. Garcia, G. J. Chaplain, J. Bělín, T. Tyc, C. Englert, and J. Courtial, “Optical triangulations of curved spaces,” Optica 7(2), 142–147 (2020). [CrossRef]
26. T. Tyc, J. Bělín, S. Oxburgh, C. D. White, E. N. Cowie, and J. Courtial, “Combinations of generalized lenses that satisfy the edge-imaging condition of transformation optics,” J. Opt. Soc. Am. A 37(2), 305–315 (2020). [CrossRef]
27. G. J. Chaplain, G. Macauley, J. Bělín, T. Tyc, E. N. Cowie, and J. Courtial, “Ray optics of generalized lenses,” J. Opt. Soc. Am. A 33(5), 962–969 (2016). [CrossRef]
28. T. Scheimpflug, “Improved method and apparatus for the systematic alteration or distortion of plane pictures and images by means of lenses and mirrors for photography and for other purposes,” GB Patent No. 1196 (1904).
29. E. C. Pickering and C. H. Williams, “Foci of lenses placed obliquely,” in Proceedings of the American Academy of Arts and Sciences, vol. 10 (JSTOR, 1874), pp. 300–307.
30. W. Harris, “Tilted power of thin lenses,” Optom. Vis. Sci. 79(8), 512–515 (2002). [CrossRef]
31. W. F. Harris, “Effect of tilt on the tilted power vector of a thin lens,” Optom. Vis. Sci. 83(9), E693–E696 (2006). [CrossRef]
32. J. M. Sasian, “Image plane tilt in optical systems,” Opt. Eng. 31(3), 527–532 (1992). [CrossRef]
33. G. Massey and A. Siegman, “Reflection and refraction of gaussian light beams at tilted ellipsoidal surfaces,” Appl. Opt. 8(5), 975–978 (1969). [CrossRef]
34. G. Nemes and A. Kostenbauder, “Optical systems for rotating a beam,” in Laser Beam Characterization (1993), pp. 99–109.
35. G. Sudarshan and R. G. Hulet, “Image rotation device,” US Patent 5,430,575 (1995).
36. E. Sudarshan, N. Mukunda, and R. Simon, “Realization of first order optical systems using thin lenses,” Opt. Acta 32(8), 855–872 (1985). [CrossRef]
37. D. Swift, “Image rotation devices—a comparative survey,” Opt. Laser Technol. 4(4), 175–188 (1972). [CrossRef]
38. “Dr TIM, a highly scientific raytracer,” https://github.com/jkcuk/Dr-TIM.
39. J. Bělín, G. Ferenczi, and J. Courtial, “Software and data files related to image rotation with ideal thin lenses,” figshare (2022), https://doi.org/10.6084/m9.figshare.19330394.