Abstract

We discuss new effects related to relativistic aberration, which is the apparent distortion of objects moving at relativistic speeds relative to an idealized camera. Our analysis assumes that the camera lens is capable of stigmatic imaging of objects at rest with respect to the camera, and that each point on the shutter surface is transparent for one instant, but different points are not necessarily transparent synchronously. We pay special attention to the placement of the shutter. First, we find that a wide aperture requires the shutter to be placed in the detector plane to enable stigmatic images. Second, a Lorentz-transformation window [Proc. SPIE 9193, 91931K (2014)] can correct for relativistic distortion. We illustrate our results, which are significant for future spaceships, with raytracing simulations.

Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. INTRODUCTION

In his famous 1905 paper [1], Einstein introduced special relativity and discussed the Lorentz–FitzGerald contraction of a fast-moving sphere into an ellipsoid. This contraction had been considered before: FitzGerald suggested it [2] to explain the Michelson–Morley experiment [3]. In 1924, Lampa [4] took into account the times when the relevant light rays leave the fast-moving object such that they arrive simultaneously at the observer, thereby correctly describing the visual appearance of objects. Taking into account these time-of-flight effects alters the apparent relativistic distortion of fast-moving objects. Lampa’s work was largely ignored until, in 1959, Penrose [5] and Terrell [6] rediscovered the role of ray timing. They arrived at the conclusion that fast-moving objects would not actually appear contracted but, in the cases investigated, would appear rotated, an effect now known as Penrose–Terrell rotation. However, it was pointed out shortly afterwards that straight lines could appear curved [7], showing that the apparent distortion of fast-moving objects is not always a rotation and thus demonstrating the limited applicability of Penrose–Terrell rotation [8]. A few of these effects have been observed experimentally [9].

In 1995, researchers first used raytracing to create photorealistic images of fast-moving objects [10]. This was discussed in terms of images taken with an idealized camera moving at relativistic speeds. The timing of the relevant light rays is determined by the shutter model: the placement of the shutter within the camera, which is assumed to open for one instant. In their nomenclature, the shutter placement considered by earlier authors, in which all light rays that contribute to the photo arrive simultaneously at the camera’s pinhole, is called the pinhole shutter model. Figure 1(b) shows an example of a raytracing simulation for the pinhole shutter model. This approach has been developed in various directions, enabling raytracing of independently moving objects [13] and interactive relativistic raytracing. Examples of the latter include a museum exhibit (where the observer controls his/her speed and direction through a bicycle interface) [14], and the free programs Real Time Relativity [15] and A Slower Speed of Light [16], which allow real-time, interactive simulation of movement at relativistic speeds.

Fig. 1. Relativistic distortion, and comparison with the effect of time-of-flight effects only. The images show raytracing simulations of photos of a scene (a) taken with a camera moving at relativistic speed through the scene (b) and with an (unphysical) camera in which only time-of-flight effects are taken into account, but those due to special relativity are ignored (c). The image shown in (c) was simulated by using the Galilean transformation instead of the Lorentz transformation when transforming between reference frames [11]. All simulations were performed for a pinhole camera using the pinhole shutter model. In (b) and (c), the camera moves with velocity ${\boldsymbol\beta}c$, where ${\boldsymbol\beta}=(0.1,0,0.99)^{\intercal}$ in the left-handed coordinate system used by our raytracing software, in which the $x$, $y$, and $z$ directions point right, up, and into the page, respectively. The simulations were performed using the scientific raytracer Dr TIM [12].

The studies outlined above considered the effect of taking a picture with a pinhole camera. In other work, this has been generalized in two ways that are relevant to this paper. The first is a generalization of the pinhole to a wide aperture [17–19], which, in the aperture-plane shutter model (the natural generalization of the pinhole shutter model), was shown mathematically (though without photorealistic images) to lead to a global comatic aberration. The second is a generalization of cameras with a single aperture to multi-aperture cameras, of which a particularly simple example is a pair of cameras for the creation of stereo pairs simulating binocular vision. The result from this second generalization that is most relevant to our considerations is the finding that binocular vision fails at relativistic speeds [20]. In the same study, photorealistic stereo pairs of objects moving at relativistic speeds and in curved space–times were calculated, including a stereo movie that simulated falling into a black hole.

In order to investigate whether or not light-ray-direction-changing optical components called telescope windows (also known as generalized confocal lenslet arrays, GCLAs) [21] can simulate relativistic distortion (they cannot [11]), we extended the capabilities of our own custom scientific raytracer TIM [22] to include relativistic raytracing that fully simulates the distortion of objects moving with respect to the observer, but neglects other effects such as the headlight effect (the concentration of light in the direction of the observer’s forward movement) and wave-optical effects such as the color change associated with the Doppler shift. The resulting new version of TIM, Dr TIM [12], has a range of other capabilities, including creating stereo pairs of images for viewing in a number of ways, e.g., on 3D monitors or as red–blue anaglyphs, enabling a stereo view of the scene; rendering photos taken with a virtual camera with a wide aperture (non-zero-size aperture plane, i.e., not a pinhole camera); and simulating the visual effect of transmission through windows that perform generalized refraction. The combination of Dr TIM’s new relativistic-raytracing capability with the last two of these other capabilities is unique, and when experimenting with these combinations, we noticed a number of effects, which we describe and study here. Note that time-of-flight effects on their own result in a distortion that shares many of the qualities of relativistic distortion [Fig. 1(c)], and that the effects discussed in this paper are largely time-of-flight effects, modified by special relativity.

This paper is structured as follows. We establish the basis of what follows in Section 2. In Section 3, we consider the effect of a wide aperture in relativistic photography, answering the question of whether a camera moving relative to the scene can take stigmatic (i.e., ray-optically perfect) images. In Section 4, we consider the effect of taking a photo of a relativistic scene through a Lorentz-transformation window [11], which changes the direction of transmitted light rays in the same way in which light rays change direction upon change of an inertial frame. We discuss our results in Section 5, before concluding in Section 6.

2. RAY TRAJECTORY IN THE SCENE FRAME AND IN THE CAMERA FRAME

Consider a camera moving with velocity ${\boldsymbol\beta}c$ through a scene of stationary objects. We define two reference frames: in the camera frame, the camera is at rest; in the scene frame, the scene is at rest.

Both the camera’s imaging system and shutter are highly idealized. The imaging system is treated as a planar thin lens that images, stigmatically, an arbitrary transverse plane at rest in the camera frame into the detector plane. The shutter is a surface—not necessarily a plane—each point on which is opaque at all times apart from one instant (finite but arbitrarily short), when it is transparent. Note that different points on the shutter are not necessarily transparent simultaneously in the camera frame (nor indeed the scene frame).

The planar thin lens is assumed to be capable of delaying transmission of different light rays by different delay times, without affecting them in any other way; this will be important later. Note that the single plane of the camera lens could also represent the two principal planes of an idealized imaging system [23] that are imaged into each other with transverse magnification $+1$, where the space between them is omitted for simplicity, but the time taken by light rays to travel between conjugate positions could be accounted for as a transmission delay.

For our purposes, it is instructive to discuss how relativistic raytracing software, including Dr TIM, traces the trajectories of the light rays that contribute to a photo. In order to contribute to the photo, a light ray eventually has to hit the camera’s detector, which means its trajectory has to pass through any apertures in the system, and its timing has to be such that it passes through the shutter surface at the precise time when the position where it intersects the shutter is transparent. We call such a light ray a photo ray. Like all other light rays, photo rays start at a light source, interact with any scene objects, and end at the detector inside the camera, but Dr TIM (like almost all rendering raytracing software) traces photo rays backwards, from the camera, through the scene, to a light source. What matters for relativistic raytracing are the events during a ray’s existence, specifically the positions on the ray trajectory and the times when the ray passes through those positions.

We first consider the situation in the camera frame (Fig. 2), in which we use unprimed coordinates throughout this paper. The ray trajectory ends at a position ${\textbf{D}}$ in the detector plane, which corresponds to a particular pixel in the simulated photo that is to be calculated. The ray arrives from the direction of a position ${\textbf{L}}$ on the camera’s lens. As the lens is a thin lens that images every point stigmatically, the ray arrives at the point ${\textbf{L}}$ on the lens from the direction of the point ${\textbf{P}}$ that is conjugate to ${\textbf{D}}$.

Fig. 2. (a) Trajectory of a photo ray (a light ray contributing to a photo), viewed in the camera frame. The trajectory is shown as a red line with an arrow tip at its end. The camera is moving with (relativistic) velocity ${\boldsymbol\beta}c$ through a stationary scene. The camera’s lens stigmatically images the position ${\textbf{P}}$ to the position ${\textbf{D}}$ on the detector, which means it re-directs any ray from ${\textbf{P}}$ such that it subsequently passes through ${\textbf{D}}$. ${\textbf{L}}$ is the point where the ray intersects the idealized thin lens (vertical double-sided arrow). In the camera frame, the light ray passes ${\textbf{P}}$ at time ${t_{\text{P}}}$, enters ${\textbf{L}}$ at ${t_{\text{L}}}$, and reaches ${\textbf{D}}$ at ${t_{\text{D}}}$. (b) Several photo rays from ${\textbf{P}}$, shown in the camera frame. Due to the imaging properties of the lens, these rays intersect the same image position ${\textbf{D}}$ on the detector (not shown). Different rays intersect the camera lens at different positions, labeled ${{\textbf{L}}_1}$ to ${{\textbf{L}}_5}$. Each ray can be Lorentz-transformed into the scene frame by considering two events: for ray $i$, these are the times and positions of the ray passing through ${\textbf{P}}$ and ${{\textbf{L}}_i}$. (c) The same rays, shown in the scene frame. Ray $i$ ($i=1$ to 5) passes through the scene-frame positions ${\textbf{P}}_i^\prime$ and ${\textbf{L}}_i^\prime$, the positions of the events described above, Lorentz-transformed into the scene frame. These positions depend on the choice of shutter model and camera velocity ${\boldsymbol\beta}c$. In the scene frame, the rays do not necessarily intersect in a single point; in the example shown, they do not.

Before hitting the lens, the ray intersects objects and light sources at rest in the scene frame (the last one of which is in the direction of ${\textbf{P}}$).

Somewhere along the way, the light ray passes through the shutter, whose location and opening time are determined by the shutter model. This event determines the times of all the events mentioned above.

For raytracing purposes, the key event is the photo ray hitting the lens, because this is when the raytracing algorithm switches between the camera frame and the scene frame. The time of this event is, of course, determined by the shutter model. We will refer below to the set of positions where the algorithm switches between frames as the Lorentz surface; this transition could happen anywhere on the last segment of a photo ray’s trajectory, but it is conceptually convenient to identify the transition as happening on a surface, usually the lens plane, on the grounds that this is where the camera frame “begins.” The camera-frame time of the ray hitting the lens (at position ${\textbf{L}}$) determines the position ${{\textbf{L}}^\prime}$ of that same event Lorentz-transformed into the scene frame (as the camera is moving), which is then the starting point of standard (reverse) raytracing in the scene frame. This standard raytracing also requires the ray direction in the scene frame, and so it is necessary to Lorentz-transform the ray direction from the camera frame to the scene frame, a transformation that does not depend on time.

For the arguments in this paper it is sometimes helpful to consider not the event of the photo ray passing through the lens and the direction of the incident ray, but instead the events of the photo ray passing through points ${\textbf{P}}$ and ${\textbf{L}}$ in the camera frame [Fig. 2(b)]. The starting point of raytracing in the scene frame is then again the point ${{\textbf{L}}^\prime}$, and the direction of the ray in the scene frame is the direction of the straight line through ${{\textbf{P}}^\prime}$ and ${{\textbf{L}}^\prime}$ [Fig. 2(c)].

After the light ray has been Lorentz-transformed into the scene frame, where all the objects are stationary, timing no longer matters. For this reason, only the positions of events Lorentz-transformed into the scene frame need to be calculated. If, in the camera frame, an event happens at position ${\boldsymbol x}$ and at time $t$, then the corresponding, “Lorentz-shifted” position in the scene frame is [10]

$${{\boldsymbol x}^\prime}={\boldsymbol x}+(\gamma-1)\frac{{({\boldsymbol\beta}\cdot {\boldsymbol x}){\boldsymbol\beta}}}{{{\beta^2}}}+\gamma {\boldsymbol\beta}ct,$$
where $\gamma=1/\sqrt {1-{\beta^2}}$. Note that the Lorentz shift, described by the second and third terms on the right-hand side of Eq. (1), is always parallel to ${\boldsymbol\beta}$. (Also note that the velocity of the scene frame in the camera frame is $-{\boldsymbol\beta}$, and the “$-$” sign in front of ${\boldsymbol\beta}$ leads to the “$+$” sign in front of the final term.) If the normalized direction of the ray in the camera frame is $\hat{\boldsymbol d}$, then the (not necessarily normalized) ray direction in the scene frame is [11]
$${\boldsymbol d^\prime}=\hat{\boldsymbol d}+(\gamma-1)(\hat{\boldsymbol\beta}\cdot \hat{\boldsymbol d})\hat{\boldsymbol\beta}+\gamma {\boldsymbol\beta},$$
where $\hat{\boldsymbol\beta}={\boldsymbol\beta}/\beta$ is a unit vector in the direction of ${\boldsymbol\beta}$.
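
To make Eqs. (1) and (2) concrete, the following minimal Java sketch implements them. This is not Dr TIM’s actual code; the class and method names are our own. Vectors are represented as double[3] arrays, ${\boldsymbol\beta}$ is the camera velocity in units of $c$, and we set $c=1$:

  public final class LorentzTransform {
      // Eq. (1): scene-frame position of the camera-frame event (x, t); c = 1
      public static double[] lorentzShiftedPosition(double[] x, double t, double[] beta) {
          double beta2 = dot(beta, beta);
          double gamma = 1.0 / Math.sqrt(1.0 - beta2);
          double f = (gamma - 1.0) * dot(beta, x) / beta2 + gamma * t;
          return new double[] { x[0] + f * beta[0], x[1] + f * beta[1], x[2] + f * beta[2] };
      }

      // Eq. (2): (non-normalized) scene-frame direction corresponding to the
      // normalized camera-frame ray direction d
      public static double[] sceneFrameDirection(double[] d, double[] beta) {
          double beta2 = dot(beta, beta);
          double gamma = 1.0 / Math.sqrt(1.0 - beta2);
          double f = (gamma - 1.0) * dot(beta, d) / beta2 + gamma;
          return new double[] { d[0] + f * beta[0], d[1] + f * beta[1], d[2] + f * beta[2] };
      }

      static double dot(double[] a, double[] b) {
          return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
      }
  }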

All relativistic effects discussed in this paper are due to the change in the trajectory of the photo rays upon Lorentz transformation.

3. TAKING PHOTOS WITH A WIDE APERTURE

Consider an idealized camera with a wide aperture that can be focused on any surface in the camera frame, which the camera lens images, stigmatically, to the detector plane. Our analysis is primarily concerned with the question of whether or not it is possible for such a camera to image, stigmatically, objects in a scene if the camera is moving relative to the scene. Other aspects, such as the apparent distortion of the scene, are of secondary concern.

We consider the subset of photo rays that intersect the detector plane at the same position ${\textbf{D}}$, and establish the conditions under which these photo rays intersect in a single point in the scene frame. That scene-frame point is then stigmatically imaged onto the camera’s detector. We are specifically discussing a camera with a “wide aperture,” by which we mean that photo rays can pass through different points ${\textbf{L}}$ on the lens (Fig. 2); this is what differentiates the camera from a pinhole camera. For a scene-frame point to be imaged stigmatically onto the detector, all photo rays emitted from that scene-frame point must intersect the detector at the same position, irrespective of the position where they pass through the lens. Note that this condition for stigmatic imaging is always satisfied for a pinhole camera.

First, we consider the photo rays that intersect in the same detector position ${\textbf{D}}$ in the camera frame. Because of the (idealized) assumed imaging capabilities of the camera lens, ${\textbf{D}}$ is the image of an object point ${\textbf{P}}$ where these photo rays intersect before passing through the lens. Different photo rays intersect the lens at different positions; we call the position where the $i$th photo ray intersects the lens ${{\textbf{L}}_i}$. Figure 2(b) shows the trajectories of five photo rays between ${\textbf{P}}$ and the lens. The point ${\textbf{P}}$ is drawn as a real object, i.e., a point where the actual light-ray trajectories intersect, but note that our analysis is also valid for the situation where ${\textbf{P}}$ is a virtual object, i.e., a point where not the actual trajectories, but their straight-line continuations, intersect.

Second, we consider the same set of photo rays in the scene frame [Fig. 2(c)]. As discussed in Section 2, a photo ray that passes through the camera-frame positions ${\textbf{P}}$ and ${\textbf{L}}$ passes through the scene-frame positions ${{\textbf{P}}^\prime}$ and ${{\textbf{L}}^\prime}$, the positions of the camera-frame events of the ray passing through ${\textbf{P}}$ and ${\textbf{L}}$, Lorentz-transformed into the scene frame. ${{\textbf{L}}^\prime}$ is given by Eq. (1) with ${\boldsymbol x}={\textbf{L}}$ and $t$ being the time the ray passes through ${\textbf{L}}$ in the camera frame; similarly, ${{\textbf{P}}^\prime}$ is given by Eq. (1) with ${\boldsymbol x}={\textbf{P}}$ and $t$ being the time the ray passes through ${\textbf{P}}$ in the camera frame. For the $i$th photo ray, the position ${\textbf{L}}={{\textbf{L}}_i}$, and so for the different photo rays under consideration, the positions ${\textbf{L}}$ are all different. The times $t$ when they intersect these positions might or might not be different, depending on the shutter model. This means that it is not possible to say very much about the Lorentz-shifted positions ${\textbf{L}}_i^\prime$ of the different photo rays in the general case, other than that they are shifted from the positions ${{\textbf{L}}_i}$ in a direction parallel to ${\boldsymbol\beta}$.

In contrast, the position ${\textbf{P}}$ is the same for all photo rays under consideration. This means that the corresponding scene-frame position ${\textbf{P}}_i^\prime$ of the $i$th ray depends only on the time when that ray passes through ${\textbf{P}}$ in the camera frame. Furthermore, if this time is different for the $i$th and $j$th photo rays, then the positions ${\textbf{P}}_i^\prime$ and ${\textbf{P}}_j^\prime$ are different. Conversely, whenever this time is the same for two or more photo rays, i.e., when they pass through the camera-frame position ${\textbf{P}}$ simultaneously, they pass through the same corresponding scene-frame position ${{\textbf{P}}^\prime}$, which means that these photo rays intersect at that position ${{\textbf{P}}^\prime}$; rays 2 and 4 in Fig. 2(c) are examples. (In general, they have different directions, as the positions ${\textbf{L}}_i^\prime$ in general lie in different directions from ${{\textbf{P}}^\prime}$.) If another set of photo rays simultaneously passes through ${\textbf{P}}$ at a different time $t$ [e.g., rays 1 and 5 in Fig. 2(c)], then those rays intersect at a different scene-frame position, which means that the photo rays do not all intersect in a single point in the scene frame. In general, if the photo rays under consideration do not all pass through ${\textbf{P}}$ simultaneously, then they can be divided up into sets that do. In all physically relevant situations, this means that whenever the photo rays through ${\boldsymbol D}$ pass through ${\boldsymbol P}$ at different times, they do not all intersect in the same point in the scene frame.

Below, we see this happening in different shutter models. Of the infinitely many different possible shutter models, we consider here four. As already mentioned above, we restrict ourselves to instantaneous shutter models, in which each position on the shutter surface becomes transparent for one instant and is completely absorbing at all other times. Three out of the four shutter models we consider are synchronous in the sense that all positions on the shutter surface become transparent simultaneously in the camera frame, namely, at time $t={t_S}$.

A. Aperture-Plane Shutter Model

We start with the simplest generalization of the pinhole shutter model to wide apertures, the synchronous aperture-plane shutter model, in which the shutter is located at the aperture containing the lens. In standard cameras, a shutter placed in this way is an example of a central shutter, a shutter placed somewhere within the lens assembly.

As always in our analysis, in the camera frame, all photo rays that eventually intersect the detector at the same position ${\textbf{D}}$ intersect at the same object position ${\textbf{P}}$ before entering the camera lens. In the aperture-plane shutter model, all photo rays pass through the lens at the same time, which implies that photo rays that pass through different points on the lens in general pass through the position ${\textbf{P}}$ at different times. The events of the rays passing through ${\textbf{P}}$ are therefore in general Lorentz-transformed to different scene-frame positions, which means that in the aperture-plane shutter model (and in contrast to the other shutter models discussed below), there are no positions that are stigmatically imaged by the camera.
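
To make this concrete, the following fragment (hypothetical code building on the LorentzTransform sketch from Section 2, with $c=1$) computes the scene-frame positions ${\textbf{P}}_i^\prime$ for a handful of lens positions: all photo rays pass the lens at ${t_{\text{S}}}$, so the ray through lens position ${{\textbf{L}}_i}$ passed ${\textbf{P}}$ at ${t_{\text{S}}}-|{{\textbf{L}}_i}-{\textbf{P}}|/c$, and the computed positions are spread out along ${\boldsymbol\beta}$ because these distances differ:

  public class AperturePlaneShutterDemo {
      public static void main(String[] args) {
          double tS = 0;                            // shutter-opening time
          double[] P = { 0, 0, 10 };                // camera-frame object point
          double[] beta = { 0.1, 0, 0.99 };         // camera velocity in units of c
          double[][] lensPoints = { { 0, 0, 0 }, { 0.05, 0, 0 }, { -0.05, 0, 0 }, { 0, 0.05, 0 } };
          for (double[] L : lensPoints) {
              double dx = L[0] - P[0], dy = L[1] - P[1], dz = L[2] - P[2];
              // all photo rays pass the lens at tS, so this ray passed P at (c = 1)
              double tP = tS - Math.sqrt(dx * dx + dy * dy + dz * dz);
              double[] PPrime = LorentzTransform.lorentzShiftedPosition(P, tP, beta);
              // the positions printed below differ from ray to ray, so no
              // scene-frame point is imaged stigmatically in this shutter model
              System.out.println(java.util.Arrays.toString(PPrime));
          }
      }
  }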

Figure 3 illustrates this result. It shows raytracing simulations of photos taken with a camera at rest in the scene frame (in which case the timing of the photo rays, and therefore the shutter model, is irrelevant) and with a camera that uses the aperture-plane shutter model, moving at $\beta\approx 99.5\%$ of the speed of light in the scene frame. The scene contains an array of white spheres, each placed (as explained below) such that it should be in focus when the camera is moving, provided the shutter model allows stigmatic imaging. It can be seen that almost all of the spheres appear blurred in the photo taken with the moving camera.

Fig. 3. Simulated photos taken with a camera that uses the aperture-plane shutter model. In (a), the camera is at rest; in (b), it is moving at $\beta\approx 99.5\%$ of the speed of light. The scene contains a $9\times 9$ array of small white spheres centered on the scene-frame surface on which the camera is focused when moving. The figure is calculated for ${\boldsymbol\beta}{=(0.1,0,0.99)^{\intercal}}$. The camera was focused on a plane a distance 10 (in units of floor-tile lengths) in the camera frame, which transforms into a curved surface in the scene frame. The horizontal angle of view is 120° in (a) and 20° in (b). In both cases, the simulated aperture radius is 0.05 (Dr TIM’s interactive version refers to this aperture size as “medium”).

Fig. 4. Simulated photo taken with a camera that uses the aperture-plane shutter model. The simulation parameters differ from those used to create Fig. 3(b) only in the aperture radius, which is 0.2 (“huge”).

The white spheres were positioned as follows. Each position in a square grid in the camera-frame plane on which the camera is focused was considered in turn. For each such camera-frame position, ${\textbf{P}}$, the camera-frame time ${t_{\text{P}}}$ at which the photo ray through ${\textbf{P}}$ and the aperture center passed through ${\textbf{P}}$ was calculated. The event of this photo ray passing through ${\textbf{P}}$, at time ${t_{\text{P}}}$, was then Lorentz-transformed into the scene frame, according to Eq. (1), and a white sphere was centered there. In a photo taken with a (moving) pinhole camera, in which all photo rays pass through the center of the aperture, these spheres would form a square array in the image.

It is interesting to study the aberration shown in Fig. 3(b) in more detail. The blurring of the spheres due to this aberration is visible more clearly in Fig. 4, which is a raytracing simulation calculated for parameters identical to those used for Fig. 3(b), apart from an increased aperture size. As discussed in Section 2, the blurring in Fig. 4, like all other effects discussed in this paper, is due to the shift in position of two events on the last ray-trajectory segment. From Eq. (1), it is clear that this shift is always in the direction of ${\boldsymbol\beta}$. A single point light source is therefore seen through different points on the aperture as different point light sources positioned on a line through the original point light source with direction ${\boldsymbol\beta}$. This at first seems to suggest that any point (or small sphere) should appear elongated into a straight line (or cylinder) with direction ${\boldsymbol\beta}$, but the actual appearance of a point light source is complicated by the fact that different point light sources on that straight line are seen only from the direction of corresponding points on the aperture. In the example shown in Fig. 4, the effect is that each sphere appears as a curved line.

Figure 3 shows another interesting characteristic. In the standard example of comatic aberration, the parabolic mirror, bundles of parallel rays are focused to a point only if they are parallel to the optical axis. A distant object seen in one direction, namely that of the optical axis, is therefore imaged sharply; objects seen in other directions are not (and they often appear to have a tail like that of a comet, which is why this aberration is known as “coma”). In Fig. 4, however, the spheres seen in two directions appear to be in sharper focus. The blurred shapes of the different spheres shown in Fig. 3 suggest that this happens for different reasons in the two cases:

  • 1. The central sphere appears to be sharp, as all photo rays pass through this position at approximately the same time. The reason is the following. In the aperture-plane shutter model, all photo rays pass through the aperture plane simultaneously, so the difference in the times different photo rays pass through the position ${\textbf{P}}$ is determined purely by the difference in the optical path length between ${\textbf{P}}$ and different points ${\textbf{L}}$ on the aperture. For any point ${\textbf{P}}$ on the aperture-plane normal that passes through the aperture center, this optical path length is of the form
    $$l(r)=l(0)+O({r^2}),$$
    where $l(r)$ is the optical path length between ${\textbf{P}}$ and a position ${\textbf{L}}$ on the aperture that is a distance $r$ from the aperture center, and $l(0)$ is the optical path length between ${\textbf{P}}$ and the aperture center [explicitly, for such a point, $l(r)=\sqrt{l(0)^2+r^2}=l(0)+r^2/(2l(0))+O(r^4)$]. This means that the variation $\Delta l$ in optical path lengths between ${\textbf{P}}$ and different positions ${\textbf{L}}$ on the aperture is relatively small in this case. The different photo rays therefore pass through such a position ${\textbf{P}}$ approximately simultaneously, which implies that the corresponding scene-frame positions ${\textbf{P}}_i^\prime$ lie relatively close together—in the case of the central sphere in Fig. 3, so close that it appears in focus.
  • 2. The rightmost sphere half-way down the image appears relatively sharp because the line of point light sources into which a single point light source in that direction is stretched points along the direction in which the sphere is seen from the center of the camera: the center of that sphere, ${(1,0,10)^{\intercal}}$, lies almost precisely in the direction ${\boldsymbol\beta}=(0.1,0,0.99)^{\intercal}$ from the center of the aperture, which is positioned at the origin.

B. Detector-Plane Shutter Model

In the detector-plane shutter model, the shutter is placed in the detector plane. Such a shutter is usually called a focal-plane shutter but, as the detector plane coincides with the focal plane only if the camera is focused on an infinitely distant plane, we call the corresponding shutter model the detector-plane shutter model. In this shutter model, all photo rays reach the detector simultaneously.

An ideal imaging system has the property that all light rays that pass between a pair of conjugate positions via the imaging system take the same time to do so. This follows from the principle of equal optical path [24], and it implies that light rays that intersect a point ${\textbf{D}}$ on the detector simultaneously also intersect the conjugate position ${\textbf{P}}$ simultaneously. It then follows that all photo rays that eventually intersect at ${\textbf{D}}$ have previously intersected ${\textbf{P}}$ simultaneously, in what is a single event (same place, same time) in any frame. This, in turn, implies that the photo rays that reach the same position ${\textbf{D}}$ on the detector all previously intersected in the scene frame, at the position ${{\textbf{P}}^\prime}$. In other words, the scene-frame position ${{\textbf{P}}^\prime}$ is stigmatically imaged to the camera-frame position ${\textbf{D}}$.

The raytracing simulation shown in Fig. 5 demonstrates this. The opening time of the shutter was set up such that the timing of those rays that traveled along the optical axis was identical to that in the aperture-plane-shutter-model setup in Fig. 3(b). (Light rays that are inclined with respect to the optical axis take longer to travel from the aperture to the detector; the timing of these rays is therefore different, which is why the distortions in Figs. 3(b) and 5 differ.) An array of spheres was again placed into the scene frame such that their centers lay in the scene-frame surface imaged into the camera-frame detector plane by paraxial photo rays (in this case, all photo rays). The spheres can be seen to be in focus, consistent with our result that any scene-frame position can be stigmatically imaged onto the detector.

Fig. 5. Simulated photo taken with a moving camera that uses the detector-plane shutter model and an ideal thin lens as the imaging element. As in Fig. 3, the scene contains a $9\times 9$ array of small white spheres centered on the scene-frame surface on which the camera is focused when moving; note that this scene-frame surface is different from that on which the camera in Fig. 3 is focused. The spheres can be seen to be in sharp focus. The remainder of the scene and the camera velocity are identical to those used to calculate Fig. 3.

It is interesting to consider cameras that use the detector-plane shutter model in combination with imaging elements that do not respect the principle of equal optical path, i.e., for which it is not the case that all light rays that pass between a pair of conjugate positions via the lens take the same time to do so. Examples of such elements include phase holograms of lenses, Fresnel lenses, and Fresnel zone plates. In the case of the lens, the thickness of high-refractive-index material (e.g., glass) changes across the lens such that the light-ray-direction change due to the local phase gradient and the time delay due to slower propagation in the high-refractive-index medium are just right (to a very good approximation) for all light rays to take the same time to travel between conjugate positions. Phase holograms and Fresnel lenses replicate the light-ray-direction change, but not the time delay; Fresnel zone plates (and other holograms of lenses) ensure that all light from the object position interferes constructively at the image position, but the time delay is again not replicated. This means that photo rays that intersect in a detector position ${\textbf{D}}$ simultaneously do not intersect in the conjugate position ${\textbf{P}}$ simultaneously. The transmission of the photo rays through ${\textbf{P}}$ is therefore not a single event, and the events that correspond to different photo rays passing through ${\textbf{P}}$ therefore get Lorentz-transformed to different positions ${{\textbf{P}}^\prime}$. [Inspection of Eq. (1) reveals that these positions are spread out in the direction of ${\boldsymbol\beta}$]. There is therefore no scene-frame position that is stigmatically imaged to the camera-frame position ${\textbf{D}}$.

Figure 6 illustrates this. It consists of two parts. Part (a) is the same as Fig. 5, but taken with a significantly larger aperture, thereby dramatically increasing the magnitude of any blurring present. The simulated focusing element is an ideal thin lens in which all light rays that travel between conjugate positions take the same time to do so. The array of spheres can be seen to be still in focus. Part (b) is calculated in precisely the same way, but for a camera in which the focusing element is a phase hologram of an ideal thin lens, i.e., an imaging element through which different light rays take different times to travel between conjugate positions. The spheres in the array shown in the picture are in the same positions as before. They are in the scene-frame surface that is stigmatically imaged into the camera-frame detector plane by an ideal thin lens, but now that the imaging element is a phase hologram of an ideal thin lens, this scene-frame surface is imaged into the camera-frame detector plane only by paraxial rays. The array of spheres is now clearly out of focus.

Fig. 6. Simulated photos taken with a camera that uses the detector-plane shutter model in combination with an ideal thin lens (a) and a phase hologram of a thin lens (b). The simulated aperture radius is 0.2 (called “huge” in Dr TIM’s interactive version) to make the blurring visible. In (a), which is calculated for parameters that differ from those used to calculate Fig. 5 only in the increased aperture size, the spheres are still in focus. In (b), which is calculated for parameters that differ from those used to calculate (a) only in the time delay introduced by the imaging element, the spheres are clearly blurred.

C. Focus-Surface Shutter Model

In the focus-surface shutter model, the “shutter” is conceptually located in the focus surface, the camera-frame surface (normally a plane) onto which the camera is focused. There is a practical problem with this shutter placement: if the shutter were some physical device in the camera frame, somehow attached to the camera, then any object in the scene frame that would be in focus would—at the moment of being in focus—collide with the shutter. However, the focus-surface shutter model also describes a scene comprising one or more point light sources that flash for one instant, as follows. A point light source flashing for an instant is a single event, in any frame. If, in the camera frame, that flash event happens in the focus surface at the time ${t_{\text{S}}}$, then the rays from this flash are identical in trajectory and timing to the photo rays, in the focus-surface shutter model, from a point light source at the flash event’s scene-frame position. This model can therefore be completely analyzed by considering the flash event in the camera frame. This, in turn, means that the camera can be focused on a flash’s position in the camera frame, and so each flash can be imaged stigmatically.

This answers our primary question about imaging of individual point light sources in the scene frame. However, a further discussion is in order to dispel the impression that a camera that somehow manages to operate a focus-surface shutter model takes identical pictures to a camera operating a detector-plane shutter model. Figure 7 shows a simulated photo of a scene that is taken with a focus-surface shutter model. As before, it contains an array of spheres centered in the focus surface. The spheres can be seen to be in focus, but in a few cases strongly distorted, an aspect that is currently not well understood. The distortion of the scene is clearly different from that in the corresponding image taken with a camera that uses the detector-plane shutter model in combination with a perfect lens, shown in Fig. 6(a).

Fig. 7. Simulated photo taken with a camera that uses the focus-surface shutter model. The simulated aperture radius is 0.05 (“medium”).

The reason is that the timing of the rays is different in the camera frame. The focus-surface shutter model used to create Fig. 7 was set up such that the timing of the rays that contributed to the central pixels in the two images was identical. But this means that the timing of all other rays was different: on one hand, a photo ray that contributes to any other pixel in the focus-surface shutter model passes through the focus surface at the same time as the rays that contribute to the central pixel, but it then has to travel further from its intersection point with the focus surface to the corresponding detector pixel, and so it arrives there later than the rays that arrive at the central pixel; on the other hand, a photo ray contributing to that same other pixel in the detector-plane shutter model arrives at that pixel at the same time as all other photo rays, specifically those that arrive at the central pixel. As the timing of the photo rays is therefore different in the two shutter models, and as the camera is moving, photo rays whose trajectories are identical in the camera frame correspond to different trajectories in the scene frame, and the distortion of the scene looks different.

D. Fixed-Point Surface Shutter Model

Above, it was found that the focus-surface shutter model and the detector-plane shutter model in combination with an ideal lens are related in that all photo rays that pass through a position in the focus surface simultaneously also pass through the conjugate position in the detector plane simultaneously. The difference between the two shutter models was the time when the photo rays passed through different camera-frame focus-surface positions, which in turn altered the corresponding rays in the scene frame (leading to a different apparent distortion) and changed the scene-frame positions of the events of the photo rays passing through the camera-frame focus surface—it distorted the scene-frame focus surface.

We are free to consider non-synchronous shutters in which the photo rays pass through different camera-frame focus-surface positions at arbitrary times. Here, we choose those times such that the focus surface looks identical in a photo taken with a moving camera and in a photo taken with the same camera at rest. This works as follows.

We note that in the equation for the spatial part of the Lorentz transformation, Eq. (1), there is for any arbitrary position ${\boldsymbol x}$ a time $t$ for which ${\boldsymbol x}$ is a spatial fixed point of the Lorentz transformation, i.e., ${{\boldsymbol x}^\prime}={\boldsymbol x}$. This time is easily found by substituting ${{\boldsymbol x}^\prime}={\boldsymbol x}$ into Eq. (1) and solving for $t$, which is

$$t=\frac{{(1-\gamma)({\boldsymbol\beta}\cdot {\boldsymbol x})}}{{c\gamma {\beta^2}}}.$$
We can imagine the scene to contain a collection of point flash lamps, each flashing at the time that makes its position a spatial fixed point of the Lorentz transformation. In two photos, one taken with a moving camera with a continuously open shutter, the other taken with a similar camera at rest, these point flash lamps would then look identical. Alternatively, a non-synchronous shutter in the detector plane can achieve the same result.
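
In code, Eq. (4) might be implemented as follows (a hypothetical helper in the style of the LorentzTransform sketch of Section 2, with $c=1$); substituting the returned time into the lorentzShiftedPosition sketch above makes the ${\boldsymbol\beta}$-parallel shift vanish, returning ${\boldsymbol x}$ itself:

  // Camera-frame time at which an event at position x is a spatial fixed point
  // of the Lorentz transformation into the scene frame, Eq. (4); c = 1
  static double fixedPointTime(double[] x, double[] beta) {
      double beta2 = LorentzTransform.dot(beta, beta);
      double gamma = 1.0 / Math.sqrt(1.0 - beta2);
      return (1.0 - gamma) * LorentzTransform.dot(beta, x) / (gamma * beta2);
  }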

Figure 8 demonstrates that this idea works. Part (a) shows a simulated image taken with a camera at rest that is focused on a depth plane containing the axes of a number of horizontal and vertical cylinders. Part (b) shows a simulated image taken with a moving camera. The vertical and horizontal cylinders that are in focus in part (a) are still in focus, and they have remained in the same position as in (a). In places, they have changed thickness because most of each cylinder’s mantle lies outside the focus plane, which contains only the cylinder axes.

Fig. 8. Raytracing simulations illustrating the fixed-point-surface shutter model. In (a), the camera is at rest; in (b), it is moving with velocity ${\boldsymbol\beta}c=(0.1c,0,0.99c)^{\intercal}$ in the scene frame. The camera is focused on the plane $z=8$ in the camera frame. Because of the timing of the photo rays, objects in this plane appear identical in both photos. Objects outside this plane appear distorted; those close to the focusing plane, like the mantle of the cylinders centered in the focusing plane, are distorted only slightly.

4. CORRECTING RELATIVISTIC ABERRATION WITH LORENTZ WINDOWS

During relativistic raytracing of a light ray that contributes to a photo, the light ray is transformed from the camera frame to the scene frame at the “Lorentz surface,” using the procedure outlined in Section 2. The remaining calculations, for a particular ray, are the usual, static, raytracing ones.

If, colocated with the Lorentz surface, we place an optical element that changes the direction of a ray by the amount specified by relativistic aberration, then this will exactly cancel the direction change (or, rather, the apparent direction change due to the change of frame) that results from the Lorentz transformation. Such interfaces have previously been defined as Lorentz-transformation windows [11]. This means that a photo taken with the moving camera will look the same as a photo taken with the camera at rest, except for a change in apparent position due to the Lorentz transformation.
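
One way to express this direction change in code (a sketch under our assumptions, not Dr TIM’s actual surface-property implementation): during backwards raytracing, the window must map the camera-frame ray direction to a direction whose subsequent Lorentz transformation into the scene frame, Eq. (2), is again parallel to the original direction; this inverse of Eq. (2) is Eq. (2) itself with ${\boldsymbol\beta}$ replaced by $-{\boldsymbol\beta}$:

  // Direction change of a Lorentz window during backwards raytracing (sketch).
  // Applying Eq. (2) with -beta inverts the aberration described by Eq. (2),
  // up to the overall length of the direction vector; normalize the result
  // before re-using it as a ray direction.
  static double[] directionAfterLorentzWindow(double[] d, double[] beta) {
      double[] minusBeta = { -beta[0], -beta[1], -beta[2] };
      return LorentzTransform.sceneFrameDirection(d, minusBeta);
  }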

If we return to Eq. (1), and set ${{\boldsymbol x}^\prime}={\boldsymbol x}$, then we find

$$(\gamma-1)\frac{{({\boldsymbol\beta}\cdot {\boldsymbol x})}}{{{\beta^2}}}+\gamma ct=0.$$
For a given position ${\boldsymbol x}$ in the camera frame, this gives the time $t$ at which an event at ${\boldsymbol x}$ is mapped to the same spatial coordinates in the scene frame, ${{\boldsymbol x}^\prime}={\boldsymbol x}$. We refer to these points ${\boldsymbol x}$ as spatial fixed points of the Lorentz transformation for time $t$. A photo taken at this time through a canceling window, as described above, will therefore look identical to a photo taken with the camera at rest.

In particular, consider placing the shutter at the Lorentz surface, stationary in the frame of the camera, in a surface of spatial fixed points of the Lorentz transformation for time $t={t_{\text{S}}}$. This surface is a plane perpendicular to ${\boldsymbol\beta}$ that satisfies the condition

$$-{\boldsymbol\beta}\cdot {\boldsymbol x}=\frac{{\gamma {\beta^2}c{t_{\text{S}}}}}{{\gamma-1}}=\frac{{\gamma+1}}{\gamma}c{t_{\text{S}}}.$$
Note the appearance of the minus sign in front of the scalar product of ${\boldsymbol\beta}$ and ${\boldsymbol x}$, which is due to the fact that the relevant Lorentz transformation is from the camera frame into the scene frame, and therefore the relevant ${\boldsymbol\beta}$ is the velocity of the scene in the camera frame, which is the negative of the velocity of the camera in the scene frame, which we defined as ${\boldsymbol\beta}$. As every position in the shutter plane is a spatial fixed point of the Lorentz transformation for time ${t_{\text{S}}}$, light rays that pass through the shutter do not change position upon change of reference frame. We therefore hypothesize that a Lorentz-transformation window, placed in the plane of spatial fixed points of the Lorentz transformation for time ${t_{\text{S}}}$, can make a photo taken with a camera moving at relativistic velocity look identical to one taken with a camera at rest.
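
As a quick numerical check of Eq. (6), consider the parameters used in Fig. 9 below: for ${\boldsymbol\beta}=(0.1,0,0.99)^{\intercal}$, we have $\beta^2=0.9901$ and $\gamma\approx 10.05$, so with $c=1$ and ${t_{\text{S}}}=-1$, the right-hand side of Eq. (6) is $(\gamma+1)c{t_{\text{S}}}/\gamma\approx -1.0995$; for the quoted shutter-plane point ${\boldsymbol x}=(0.111,0,1.1)^{\intercal}$, the left-hand side is $-{\boldsymbol\beta}\cdot{\boldsymbol x}=-(0.0111+1.089)\approx -1.1001$, which agrees to within the rounding of the quoted coordinates.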

Fig. 9. Relativistic blurring and distortion and its cancellation with a Lorentz window. The frames show simulated photos of a portrait scene, with the camera focused on the same plane in front of the camera. (a) The camera is stationary in the scene frame. The subject’s eyes are in the camera’s focusing plane. (b) The camera is moving with relativistic velocity ${\boldsymbol\beta}c$ with respect to the scene frame, where ${\boldsymbol\beta}=(0.1,0,0.99)^{\intercal}$. The camera’s shutter was placed in a plane perpendicular to ${\boldsymbol\beta}$ that passed through the position ${(0.111,0,1.1)^{\intercal}}$; the shutter opening time was ${t_{\text{S}}}=-1$. The scene appears distorted and out-of-focus. (c) Like (b), but with a Lorentz window placed in the shutter plane, which makes the photo look identical to that taken with the camera at rest. Throughout, the speed of light was set to $c=1$. All simulations were performed with an extended version of Dr TIM [12].

Fig. 10. Simulated photos taken with a camera that uses the detector-plane shutter model, (a) without and (b) with a Lorentz window. In (b), a slight distortion of the scene can be seen (floor tiles). The shutter-opening time was calculated such that the photo ray through the aperture center in the forward direction has the same timing as in the arbitrary-plane shutter model.

To test our hypothesis, we performed raytracing simulations using our raytracer Dr TIM [12], extended to allow raytracing through a scene that includes objects at rest with respect to the camera (see Appendix A) and to allow the shutter to be placed in an arbitrary plane in front of the camera (see Appendix B). Figure 9 shows simulated photos of a scene taken with a camera at rest, with the same camera moving at a specific relativistic velocity with respect to the scene, and with the same moving camera with a suitable Lorentz window placed in the plane of spatial fixed points of the Lorentz transformation for a specific shutter opening time ${t_{\text{S}}}$. Consistent with our hypothesis, all aspects—including distortion and blurring—of the photo taken with the moving camera through the Lorentz window look identical to those of the photo taken with the camera at rest. We also simulated such photos for other combinations of the camera velocity and shutter opening time, with the same result. Simulations in which the Lorentz window is not placed in a plane of spatial fixed points of the Lorentz transformation (see Fig. 10) show that the relativistic distortion is then undone only imperfectly by the Lorentz window.

We also tested the effect on a stereo camera of a Lorentz window placed in a plane of spatial fixed points of the Lorentz transformation for the shutter-opening time ${t_{\text{S}}}$. As mentioned in the Introduction, a stereo pair taken with a stereo camera moving at relativistic speed is distorted in a way that means it will not be perceived as a 3D scene when presented in “raw” form to a binocular observer [20]. The reason is that the relativistic distortion places the image of a point taken by the left camera and that taken by the right camera at different vertical positions. The parallax shift is then no longer exclusively sideways, but also has a vertical component, which means that an observer would not perceive the two images as pertaining to the same point. An example of a relativistic stereo pair, in the form of an anaglyph for viewing with red–blue glasses, is shown in Fig. 11(b). Figure 11(c) is calculated in precisely the same way, but with a Lorentz window placed in the plane of spatial fixed points for the shutter-opening time ${t_{\text{S}}}$. The anaglyph is identical to the one shown in Fig. 11(a), which has been calculated for a camera at rest and in the absence of a Lorentz window. Note that the left and right cameras in the stereo camera need a common shutter plane and shutter time for this effect to work.

Fig. 11. Anaglyphs made from simulated stereo pairs taken with a camera at rest (a), a camera moving at relativistic velocity ${\boldsymbol\beta}c=(0.1,0,0.99)^{\intercal}c$ relative to the scene (b), and the same relativistically moving camera but with a Lorentz window placed in the plane of spatial fixed points of the Lorentz transformation for the shutter-opening time (c). In (b) and (c), the camera’s shutter is located in a plane perpendicular to ${\boldsymbol\beta}$ through the position $(0.111,0,1.1)^{\intercal}$, and the shutter opening time is ${t_{\text{S}}}=-1$, parameters chosen such that the shutter plane is a plane of spatial fixed points of the Lorentz transformation for ${t_{\text{S}}}$. Eye separation 0.4 in the horizontal direction; camera directions are chosen such that a plane a distance 5 in front of the camera appears in the paper/monitor plane.

5. DISCUSSION

It is possible to spot a number of well-known effects in our simulations. First, the objects in the simulated photos are all illuminated from the same direction, but as they have all undergone different Penrose–Terrell rotations, they appear to be illuminated from a range of directions. This can clearly be seen in the array of spheres shown in Fig. 5 and in the vertical cylinders shown in Fig. 8. Second, the coma-type aberration predicted in Ref. [19] for the aperture-plane shutter model can be seen in Figs. 3 and 4: spheres not seen in the straight-ahead or boost direction appear blurred, consistent with a magnification that varies with aperture position, the defining property of coma [25]. A coma-type aberration can also be seen in the detector-plane shutter model in combination with the phase hologram of a lens, Fig. 6(b).

It is important to ask under which circumstances the blurring discussed above is significant. We address this question only briefly, as a full discussion would be lengthy and outside the scope of this paper. A pragmatic answer is “when it can be resolved by the camera,” i.e., when it is comparable to or greater than the blur introduced by the lens and when the image of a single point light source becomes blurred across two or more detector pixels. To estimate the angular size of the relativistic blur, we consider different photo rays, all from the same point light source at position ${{\boldsymbol P}^\prime}$ in the scene frame, passing through points ${\boldsymbol L}_i^\prime$ on the lens, and specifically the events of these photo rays passing through ${{\boldsymbol P}^\prime}$ [Fig. 12(a)]. In the camera frame [Figs. 12(b) and 12(c)], the positions of these events, ${{\boldsymbol P}_i}$, are spread out on a line with direction ${\boldsymbol\beta}$ and length $\gamma\beta c\Delta {t^\prime}$, where $\Delta {t^\prime}$ is the time in the scene frame between the first and the last photo rays passing through ${{\boldsymbol P}^\prime}$. Of course, $\Delta {t^\prime}$ depends on the shutter model, the velocity ${\boldsymbol\beta}$, and the location of the object in relation to the camera. We define the vector ${\boldsymbol \Delta}{\boldsymbol P}$ as the difference between the two outermost positions ${{\boldsymbol P}_i}$.

Fig. 12. (a) Photo rays from a point light source at position ${{\boldsymbol P}^\prime}$ in the scene frame pass through different positions ${\boldsymbol L}_i^\prime$ on the lens, indicated as a thick double-sided arrow. In general, the events of different photo rays passing through ${{\boldsymbol P}^\prime}$ occur at different times. (b) In the camera frame, the positions ${{\boldsymbol P}_i}$ of these events are spread out along a line parallel to ${\boldsymbol\beta}$. ${\boldsymbol \Delta}{\boldsymbol P}$ points between two of the outermost positions ${{\boldsymbol P}_i}$. In the small-aperture limit, from the perspective of the (small) aperture, the positions ${{\boldsymbol P}_i}$ are distributed over an angular range of size $\alpha$. (c) The dotted line indicates the surface that is best imaged into the detector plane.

The situation is easiest to tackle in the small-aperture limit, in which $\parallel {\boldsymbol \Delta}{\boldsymbol P}\parallel$, the length of ${\boldsymbol \Delta}{\boldsymbol P}$, is much larger than the aperture size. Basic trigonometry in Fig. 12(b) then shows that the blur angle $\alpha$ can be approximated as

$$\alpha\approx\frac{{\Delta P\sin\nu}}{d},$$
where $\nu$ is the angle between the direction of ${\boldsymbol \Delta}{\boldsymbol P}$ and the “average” direction in which the points ${{\boldsymbol P}_i}$ are seen from the lens, and where $d$ is the average distance of these points from the lens. All else being equal, the blur angle $\alpha$ is clearly bigger for small distances $d$ and tends towards zero in the limit $d\to\infty$. This means that it is possible to take sharp images of sufficiently distant objects. The figures in this paper do not show this, as the camera is focused on nearby objects.
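
As a rough numerical illustration (with made-up values, not taken from any of the figures): a Lorentz shift of length $\Delta P=0.2$ seen at $\nu=90^\circ$ from an average distance $d=10$ gives $\alpha\approx 0.2/10=0.02\,\text{rad}\approx 1.1^\circ$, which any realistic camera can resolve, whereas the same shift seen from $d=1000$ gives $\alpha\approx 2\times {10^{-4}}\,\text{rad}$, which may well fall below the camera’s resolution.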

If the small-aperture limit is not applicable, i.e., if $\parallel {\boldsymbol \Delta}{\boldsymbol P}\parallel$ is comparable to, or smaller than, the aperture size, it matters which positions ${{\boldsymbol L}_i}$ on the lens the different photo rays pass through. This can be seen in Fig. 12(c), which shows the trajectories of different photo rays for each of three different point light sources that have undergone the same Lorentz shifts. One effect, clearly visible in Fig. 12(c), is that the distance at which the (imperfect) image is formed, i.e., the distance at which the bundle of photo rays from the same object position that passed through different points in the aperture has the smallest cross section, depends on the direction of the object position. Specifically, the image is formed at the distance of the “average” Lorentz-shifted position only if the object is in the direction ${\boldsymbol\beta}$ from the center of the camera.

6. CONCLUSION

Extending our custom raytracer has enabled us to visualize old and new effects related to relativistic distortion.

It is easy to identify avenues for further study. For example, it would be interesting to study shutter models that are less idealized and more closely represent physical shutters. In real shutters, each point of the shutter surface is open for the same, very brief, length of time, but different points open at different times. This happens, for example, in shutters in which a thin slit is quickly moved across the shutter surface, as in the “two-curtain shutters” that can be found in single-lens reflex (SLR) cameras. Another potential avenue for further study is the visualization of wave-optical effects associated with relativistic coma [18] and with relativistic photography with a wide aperture in general.

APPENDIX A: RELATIVISTIC RAYTRACING OF SCENES INCLUDING OBJECTS MOVING WITH THE CAMERA

Dr TIM’s relativistic raytracing procedure allows the simulation of objects moving with the camera, which are therefore at rest in the camera frame. This enables the simulation of the light-ray-direction-changing “filter” (the Lorentz window) attached to the camera in Section 4.

Dr TIM assumes that the objects moving with the camera are close to the camera; if a ray intersects these objects, this is assumed to happen immediately before the ray reaches the camera. When tracing a ray backwards, Dr TIM therefore first traces, using standard raytracing, through the scene consisting of all objects at rest in the camera frame until no more objects in this camera-frame scene are intersected; it then transforms the ray into the scene frame and continues raytracing through the (scene-frame) scene.

In Dr TIM, the ray is set up (with the timing determined by the shutter model) in the getRay method of the RelativisticAnyFocusSurfaceCamera class. This method is called from the calculatePixelColour method, which then initiates tracing of the ray through the camera-frame scene. Once no more camera-frame objects are encountered, the getColourOfRayFromNowhere method is called; it Lorentz-transforms the ray into the scene frame and continues tracing the ray through the scene-frame scene.
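
The following self-contained sketch mirrors this two-phase control flow. The class and method names are illustrative stand-ins, not Dr TIM’s actual API (whose entry points are the methods named above); the direction change shown is the standard relativistic aberration formula, $\hat{\textbf{d}}^\prime\propto\hat{\textbf{d}}+(\gamma-1)(\hat{\boldsymbol\beta}\cdot\hat{\textbf{d}})\hat{\boldsymbol\beta}+\gamma{\boldsymbol\beta}$ (normalized), and the transformation of the ray’s position and time, as well as the sign convention appropriate for backwards tracing, are omitted for brevity.

import java.util.Optional;

/** Illustrative sketch of two-phase backwards raytracing; not actual Dr TIM code. */
public class TwoPhaseTraceSketch {

    /** A backwards-traced ray: start position, normalized direction, start time. */
    static class Ray {
        final double[] position, direction;
        final double time;
        Ray(double[] position, double[] direction, double time) {
            this.position = position; this.direction = direction; this.time = time;
        }
    }

    /** Minimal scene abstraction: returns the ray's continuation if an object is hit. */
    interface Scene {
        Optional<Ray> interactWithClosestObject(Ray ray);
    }

    static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

    /** Aberration of a normalized ray direction into a frame moving with velocity beta*c. */
    static double[] aberrate(double[] dHat, double[] beta) {
        double b2 = dot(beta, beta);
        if (b2 == 0.0) return dHat.clone(); // camera at rest: no aberration
        double gamma = 1.0 / Math.sqrt(1.0 - b2);
        double bd = dot(beta, dHat); // β·d̂, so (β̂·d̂)β̂ = (bd/β²)β
        double[] d = new double[3];
        for (int i = 0; i < 3; i++)
            d[i] = dHat[i] + (gamma - 1.0) * (bd / b2) * beta[i] + gamma * beta[i];
        double norm = Math.sqrt(dot(d, d));
        for (int i = 0; i < 3; i++) d[i] /= norm;
        return d;
    }

    static int traceColour(Ray ray, Scene cameraFrameScene, Scene sceneFrameScene, double[] beta) {
        // phase 1: standard raytracing through the objects at rest in the camera frame
        Optional<Ray> next;
        while ((next = cameraFrameScene.interactWithClosestObject(ray)).isPresent())
            ray = next.get();
        // phase 2: no camera-frame object left on the ray's path, so transform the ray
        // into the scene frame (direction shown; position/time transform omitted)
        Ray sceneFrameRay = new Ray(ray.position, aberrate(ray.direction, beta), ray.time);
        return traceThroughScene(sceneFrameRay, sceneFrameScene);
    }

    static int traceThroughScene(Ray ray, Scene scene) {
        return 0; // placeholder for standard raytracing through the scene-frame scene
    }
}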

APPENDIX B: ARBITRARY-PLANE SHUTTER MODEL

The shutter model determines the timing of light rays that pass through the shutter. This timing is important, as it is required when transforming between the scene frame and the camera frame. Dr TIM determines the times when a light ray intersects different surfaces from the time ${t_{{\text{L},\text{in}}}}$ when it passes through the aperture plane. Here, we derive the equations that allow calculation of this time for the arbitrary-plane shutter model, which we recently added to Dr TIM.

In Dr TIM’s arbitrary-plane shutter model, the shutter opens (like the shutters in all the other shutter models) at time ${t_{\text{S}}}$ and is positioned in a plane defined by a point ${\textbf{P}}$ in the plane and a vector ${\textbf{n}}$ normal to the plane. Dr TIM traces light rays backwards, starting from a position ${\textbf{L}}$ on the aperture plane at time ${t_{{\text{L},\text{in}}}}$.

The calculation of the time ${t_{{\text{L},\text{in}}}}$ happens in the getRay method of the RelativisticAnyFocusSurfaceCamera class, which receives as input the position ${\textbf{L}}$ and other parameters from which the normalized physical direction (i.e., the forward direction) $\hat{\textbf{d}}$ of the ray can be calculated straightforwardly. The method proceeds by calculating the physical distance $a$ the light ray has to travel from the point ${\textbf{S}}$, where it intersects the shutter plane, to ${\textbf{L}}$. As the ray passes ${\textbf{S}}$ at time ${t_{\text{S}}}$, the time at which it passes ${\textbf{L}}$ is

$${t_{{\text{L},\text{in}}}}={t_{\text{S}}}+\frac{a}{c},\tag{B1}$$
where $c$ is the speed of light.

The point ${\textbf{S}}$ where the ray intersects the shutter plane satisfies two conditions. First, it lies on the ray, which means it can be expressed in the form

$${\textbf{S}}={\textbf{L}}-a\hat{\textbf{d}}.\tag{B2}$$
Second, it lies in the shutter plane, which means it satisfies the equation
$$({\textbf{S}}-{\textbf{P}})\cdot {\textbf{n}}=0.\tag{B3}$$
Substitution of Eq. (B2) into Eq. (B3) and solving for $a$ gives
$$a=\frac{({\textbf{L}}-{\textbf{P}})\cdot {\textbf{n}}}{\hat{\textbf{d}}\cdot {\textbf{n}}}.\tag{B4}$$
Substitution into Eq. (B1) gives
$${t_{{\text{L},\text{in}}}}={t_{\text{S}}}+\frac{({\textbf{L}}-{\textbf{P}})\cdot {\textbf{n}}}{c\,\hat{\textbf{d}}\cdot {\textbf{n}}}.\tag{B5}$$
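
For concreteness, the following minimal sketch evaluates Eq. (B5). The class, method, and variable names are illustrative and are not those of Dr TIM’s actual getRay implementation; the example values in main are loosely based on the shutter-plane parameters quoted for Figs. 9 and 11, with $c=1$ and an assumed head-on ray direction.

public class ArbitraryPlaneShutterTiming {

    static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

    /**
     * Time t_{L,in} at which a backwards-traced ray passes the aperture-plane position L,
     * according to Eq. (B5): the ray passes the shutter plane (point P, normal n) at the
     * shutter-opening time tS, then travels a distance a = ((L-P)·n)/(dHat·n) to reach L.
     */
    static double apertureCrossingTime(double[] L, double[] dHat,
                                       double[] P, double[] n, double tS, double c) {
        double dn = dot(dHat, n);
        // dn == 0 would mean the ray runs parallel to the shutter plane and never crosses it
        double a = dot(new double[] {L[0]-P[0], L[1]-P[1], L[2]-P[2]}, n) / dn; // Eq. (B4)
        return tS + a / c; // Eq. (B1)
    }

    public static void main(String[] args) {
        // illustrative values, in the units of the simulations (c = 1)
        double[] L = {0, 0, 0};        // position on the aperture plane
        double[] dHat = {0, 0, -1};    // forward direction of a ray arriving head-on
        double[] P = {0.111, 0, 1.1};  // point in the shutter plane (as in Figs. 9 and 11)
        double[] n = {0.1, 0, 0.99};   // shutter-plane normal, parallel to beta
        System.out.println(apertureCrossingTime(L, dHat, P, n, -1.0, 1.0));
    }
}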

APPENDIX C: REPEATING THE SIMULATIONS IN THIS PAPER

The code for producing all simulated photos shown in this paper can be found in the various classes of the optics.raytrace.research.RelativisticPhotography package, which can be downloaded from Ref. [26], together with the rest of the code. The values of a number of static final variables near the top of each class determine which of these photos is produced.

The simulated photos shown in Figs. 3, 4, 5, 6, and 7 were calculated using the ShutterModelFocussing class. The images shown in Fig. 8 were produced by the FixedPointSurfaceShutterModelTest class, those shown in Figs. 9 and 10 were calculated by the LorentzWindowDistortionCancellation class, and those in Fig. 11 by the LorentzWindowAnaglyph class.

Funding

Engineering and Physical Sciences Research Council (EP/M010724/1).

Acknowledgment

Many thanks to Martin Hendry for helpful discussions.

Disclosures

The authors declare no conflicts of interest.

REFERENCES

1. A. Einstein, “Zur Elektrodynamik bewegter Körper,” Ann. Phys. 17, 891–921 (1905). [CrossRef]  

2. G. F. FitzGerald, “The ether and the Earth’s atmosphere,” Science 13, 390 (1889).

3. A. A. Michelson and E. Morley, “On the relative motion of the Earth and the luminiferous ether,” Am. J. Sci. s3-34, 333–345 (1887). [CrossRef]  

4. A. Lampa, “Wie erscheint nach der Relativitätstheorie ein bewegter Stab einem ruhenden Beobachter?” Z. Phys. 27, 138–148 (1924). [CrossRef]  

5. R. Penrose, “The apparent shape of a relativistically moving sphere,” Proc. Cambridge Philos. Soc. 55, 137–139 (1959). [CrossRef]  

6. J. Terrell, “Invisibility of the Lorentz contraction,” Phys. Rev. 116, 1041–1045 (1959). [CrossRef]  

7. M. L. Boas, “Apparent shape of large objects at relativistic speeds,” Am. J. Phys. 29, 283–286 (1961). [CrossRef]  

8. G. D. Scott and M. R. Viner, “The geometrical appearance of large objects moving at relativistic speeds,” Am. J. Phys. 33, 534–536 (1965). [CrossRef]  

9. M. A. Duguay and A. T. Mattick, “Ultrahigh speed photography of picosecond light pulses and echoes,” Appl. Opt. 10, 2162–2170 (1971). [CrossRef]  

10. A. Howard, S. Dance, and L. Kitchen, “Relativistic ray-tracing: Simulating the visual appearance of rapidly moving objects,” Technical report (University of Melbourne, 1995).

11. S. Oxburgh, N. Gray, M. Hendry, and J. Courtial, “Lorentz-transformation and Galileo-transformation windows,” Proc. SPIE 9193, 91931K (2014). [CrossRef]  

12. S. Oxburgh, T. Tyc, and J. Courtial, “Dr TIM: ray-tracer TIM, with additional specialist capabilities,” Comp. Phys. Commun. 185, 1027–1037 (2014). [CrossRef]  

13. T. Müller, S. Grottel, and D. Weiskopf, “Special relativistic visualization by local ray tracing,” IEEE Trans. Vis. Comput. Graphics 16, 1243–1250 (2010). [CrossRef]  

14. D. Weiskopf, M. Borchers, T. Ertl, M. Falk, O. Fechtig, R. Frank, F. Grave, A. King, U. Kraus, T. Müller, H.-P. Nollert, I. R. Mendez, H. Ruder, T. Schafhitzel, S. Schär, C. Zahn, and M. Zatloukal, “Explanatory and illustrative visualization of special and general relativity,” IEEE Trans. Vis. Comput. Graphics 12, 522–534 (2006). [CrossRef]  

15. C. M. Savage, A. Searle, and L. McCalman, “Real time relativity: exploratory learning of special relativity,” Am. J. Phys. 75, 791–798 (2007). [CrossRef]  

16. G. Kortemeyer, P. Tan, and S. Schirra, “A slower speed of light: developing intuition about special relativity with games,” in Proceedings of the International Conference on the Foundations of Digital Games (FDG 2013) (ACM, 2013), pp. 400–402.

17. N. M. Atakishiyev, W. Lassner, and K. B. Wolf, “The relativistic coma aberration. I. Geometrical optics,” J. Math. Phys. 30, 2457–2462 (1989). [CrossRef]  

18. N. M. Atakishiyev, W. Lassner, and K. B. Wolf, “The relativistic coma aberration. II. Helmholtz wave optics,” J. Math. Phys. 30, 2463–2468 (1989). [CrossRef]  

19. K. B. Wolf, “Relativistic aberration of optical phase space,” J. Opt. Soc. Am. A 10, 1925–1934 (1993). [CrossRef]  

20. A. J. S. Hamilton and G. Polhemus, “Stereoscopic visualization in curved spacetime: seeing deep inside a black hole,” New J. Phys. 12, 123027 (2010). [CrossRef]  

21. A. C. Hamilton and J. Courtial, “Generalized refraction using lenslet arrays,” J. Opt. A 11, 065502 (2009). [CrossRef]  

22. D. Lambert, A. C. Hamilton, G. Constable, H. Snehanshu, S. Talati, and J. Courtial, “TIM, a ray-tracing program for METATOY research and its dissemination,” Comp. Phys. Commun. 183, 711–732 (2012). [CrossRef]  

23. W. J. Smith, Modern Optical Engineering, 3rd ed. (McGraw-Hill, 2000), Chap. 2.2.

24. M. Born and E. Wolf, Principles of Optics (Pergamon, 1980), Chap. 3.3.3.

25. W. J. Smith, Modern Optical Engineering, 3rd ed. (McGraw-Hill, 2000), Chap. 3.2, p. 67ff.

26. “Dr TIM, a highly scientific raytracer,” https://github.com/jkcuk/Dr-TIM.
