## Abstract

Imaging technologies such as dynamic viewpoint generation are engineered for incoherent radiation using the traditional light field, and for coherent radiation using electromagnetic field theory. We present a model of coherent image formation that strikes a balance between the utility of the light field and the comprehensive predictive power of Maxwell’s equations. We synthesize research in optics and signal processing to formulate, capture, and form images from *quasi light fields*, which extend the light field from incoherent to coherent radiation. Our coherent cameras generalize the classic beamforming algorithm in sensor array processing and invite further research on alternative notions of image formation.

© 2009 Optical Society of America

## 1. INTRODUCTION

The light field represents radiance as a function of position and direction, thereby decomposing optical power flow along rays. The light field is an important tool used in many imaging applications in different disciplines, but is traditionally limited to incoherent light. In computer graphics, a rendering pipeline can compute new views at arbitrary camera positions from the light field [1]. In computational photography, a camera can measure the light field and later generate images focused at different depths, after the picture is taken [2]. In electronic displays, an array of projectors can present multiple viewpoints encoded in the light field, enabling 3D television [3]. Many recent incoherent imaging innovations have been made possible by expressing image pixel values as appropriate integrals over light field rays.

For coherent imaging applications, the value of decomposing power by position and direction has long been recognized without the aid of a light field, since the complex-valued scalar field encodes direction in its phase. A hologram encodes multiple viewpoints, but in a different way than the light field [4]. An ultrasound machine generates images focused at different depths, but from air pressure instead of light field measurements [5]. A Wigner distribution function models the operation of optical systems in simple ways, by conveniently inferring direction from the scalar field instead of computing nonnegative light field values [6]. Comparing these applications, coherent imaging uses the scalar field to achieve results similar to those that incoherent imaging obtains with the light field.

Our goal is to provide a model of coherent image formation that combines the utility of the light field with the comprehensive predictive power of the scalar field. The similarities between coherent and incoherent imaging motivate exploring how the scalar field and light field are related, which we address by synthesizing research across three different communities. Each community is concerned with a particular Fourier transform pair and has its own name for the light field. In optics, the pair is position and direction, and Walther discovered the first generalized radiance function by matching power predictions made with radiometry and scalar field theory [7]. In quantum physics, the pair is position and momentum, and Wigner discovered the first quasi-probability distribution, or phase-space distribution, as an aid to computing the expectation value of a quantum operator [8]. In signal processing, the pair is time and frequency, and while instantaneous spectra were used as early as 1890 by Sommerfeld, Ville is generally credited with discovering the first nontrivial quadratic time–frequency distribution by considering how to distribute the energy of a signal over time and frequency [9]. Walther, Wigner, and Ville independently arrived at essentially the same function, which is one of the ways to express a light field for coherent radiation in terms of the scalar field.

The light field has its roots in radiometry, a phenomenological theory of radiative power transport that began with Herschel’s observations of the sun [10], developed through the work of astrophysicists such as Chandrasekhar [11], and culminated with its grounding in electromagnetic field theory by Friberg *et al.* [12]. The light field represents radiance, which is the fundamental quantity in radiometry, defined as power per unit projected area per unit solid angle. Illuminating engineers would integrate radiance to compute power quantities, although no one could validate these calculations with the electromagnetic field theory formulated by Maxwell. Gershun was one of many physicists who attempted to physically justify radiometry, and who introduced the phrase *light field* to represent a three-dimensional vector field analogous to the electric and magnetic fields [13]. Gershun’s light field is a degenerate version of the one we discuss, and more closely resembles the time-averaged Poynting vector that appears in a rigorous derivation of geometric optics [14]. Subsequently, Walther generalized radiometry to coherent radiation in two different ways [7, 15], and Wolf connected Walther’s work to quantum physics [16], ultimately leading to the discovery of many more generalized radiance functions [17] and a firm foundation for radiometry [12].

Meanwhile, machine vision researchers desired a representation for all the possible pictures a pinhole camera might take in space–time, which led to the current formulation of the light field. Inspired by Leonardo da Vinci, Adelson and Bergen defined a plenoptic function to describe “everything that can be seen” as the intensity recorded by a pinhole camera parametrized by position, direction, time, and wavelength [18]. Levoy and Hanrahan tied the plenoptic function more firmly to radiometry by redefining Gershun’s phrase light field to mean radiance parametrized by position and direction [1]. Gortler *et al.* introduced the same construct, but instead called it the lumigraph [19]. Light field is now the dominant terminology used in incoherent imaging contexts.

Our contribution is to describe and characterize all the ways to extend the light field to coherent radiation, and to interpret coherent image formation using the resulting extended light fields. We call our extended light fields *quasi light fields*, which are analogous to the generalized radiance functions of optics, the quasi-probability and phase-space distributions of quantum physics, and the quadratic class of time–frequency distributions of signal processing. Agarwal *et al.* have already extended the light field to coherent radiation [17], and the signal processing community has already classified all of the ways to distribute power over time and frequency [20]. Both have traced their roots to quantum physics. But to our knowledge, no one has connected the research to show (i) that the quasi light fields represent *all* the ways to extend the light field to coherent radiation, and (ii) that the signal processing classification informs which quasi light field to use for a specific application. We further contextualize the references, making any unfamiliar literature more accessible to specialists in other areas.

Our paper is organized as follows. We describe the traditional light field in Section 2. We formulate quasi light fields in Section 3 by reviewing and relating the relevant research in optics, quantum physics, and signal processing. In Section 4, we describe how to capture quasi light fields, discuss practical sampling issues, and illustrate the impact of light field choice on energy localization. In Section 5, we describe how to form images with quasi light fields. We derive a light-field camera, demonstrate and compensate for diffraction limitations in the near zone, and generalize the classic beamforming algorithm in sensor array processing. We conclude the paper in Section 6, where we remark on the utility of quasi light fields and future perspectives on image formation.

## 2. TRADITIONAL LIGHT FIELD

The light field is a useful tool for incoherent imaging because it acts as an intermediary between the camera and the picture, decoupling information capture and image production: the camera measures the light field, from which many different traditional pictures can be computed. We define a pixel in the image of a scene by a surface patch *σ* and a virtual aperture (Fig. 1). Specifically, we define the pixel value as the power *P* radiated by *σ* toward the aperture, just as an ideal single-lens camera would measure. According to radiometry, *P* is an integral over a bundle of light field rays [21]:

$$P = {\int}_{\sigma} {\int}_{{\Omega}_{\mathbf{r}}} L(\mathbf{r}, \mathbf{s}) \cos \psi \, \mathrm{d}\Omega \, \mathrm{d}A, \qquad (1)$$

where $L(\mathbf{r},\mathbf{s})$ is the radiance along the ray at position **r** and in unit direction **s**, *ψ* is the angle that **s** makes with the surface normal at **r**, and ${\Omega}_{\mathbf{r}}$ is the solid angle subtended by the virtual aperture at **r**. The images produced by many different conventional cameras can be computed from the light field using Eq. (1) [22].

The light field has an important property that allows us to measure it remotely: the light field is constant along rays in a lossless medium [21]. To measure the light field on the surface of a scene, we follow the rays for the images we are interested in, and intercept those rays with our camera hardware (Fig. 1). However, our hardware must be capable of measuring the radiance at a point and in a specific direction; a conventional camera that simply measures the irradiance at a point is insufficient. We can discern directional power flow using a lens array, as is done in a plenoptic camera [2].

In order to generate coherent images using the same framework described above, we must overcome three challenges. First, we must determine how to measure power flow by position and direction to formulate a coherent light field. Second, we must capture the coherent light field remotely and be able to infer behavior at the scene surface. Third, we must be able to use integral (1) to produce correct power values, so that we can form images by integrating over the coherent light field. We address each challenge in a subsequent section.

## 3. FORMULATING QUASI LIGHT FIELDS

We motivate, systematically generate, and characterize quasi light fields by relating existing research. We begin in Subsection 3A with research in optics that frames the challenge of extending the light field to coherent radiation in terms of satisfying a power constraint required for radiometry to make power predictions consistent with scalar field theory. While useful in developing an intuition for quasi light fields, the power constraint does not allow us to easily determine the quasi light fields. We therefore proceed in Subsection 3B to describe research in quantum physics that systematically generates quasi light fields satisfying the power constraint and that shows how the quasi light fields are true extensions that reduce to the traditional light field under certain conditions. While useful for generating quasi light fields, the quantum physics approach does not allow us to easily characterize them. Therefore, in Subsection 3C we map the generated quasi light fields to the quadratic class of time–frequency distributions, which has been extensively characterized and classified by the signal processing community. By relating research in optics, quantum physics, and signal processing, we express all the ways to extend the light field to coherent radiation, and provide insight on how to select an appropriate quasi light field for a particular application.

We assume a perfectly coherent complex scalar field $U\left(\mathbf{r}\right)$ at a fixed frequency *ν* for simplicity, although we comment in Section 6 on how to extend the results to broadband, partially coherent radiation. The radiometric theory we discuss assumes a planar source at $z=0$. Consequently, although the light field is defined in three-dimensional space, much of our analysis is confined to planes $z={z}_{0}$ parallel to the source. Therefore, for convenience, we use $\mathbf{r}=(x,y,z)$ and $\mathbf{s}=({s}_{x},{s}_{y},{s}_{z})$ to indicate three-dimensional vectors and ${\mathbf{r}}_{\perp}=(x,y)$ and ${\mathbf{s}}_{\perp}=({s}_{x},{s}_{y})$ to indicate two-dimensional projected versions.

#### 3A. Intuition from Optics

An extended light field must produce accurate power transport predictions consistent with rigorous theory; thus the power computed from the scalar field using wave optics determines the allowable light fields via the laws of radiometry. One way to find extended light fields is to guess a light field equation that satisfies this power constraint, which is how Walther identified the first extended light field [7]. The scenario involves a planar source at $z=0$ described by $U\left(\mathbf{r}\right)$, and a sphere of large radius *ρ* centered at the origin. We use scalar field theory to compute the flux through part of the sphere, and then use the definition of radiance to determine the light field from the flux.

According to scalar field theory, the differential flux $\mathrm{d}\Phi $ through a portion of the sphere subtending differential solid angle $\mathrm{d}\Omega $ is given by integrating the radial component of the energy flux density vector **F**. From diffraction theory, the scalar field in the far zone is

$$U(\rho\mathbf{s}) \approx -\frac{\mathrm{i}}{\lambda} {s}_{z} a(\mathbf{s}) \frac{\exp(\mathrm{i}k\rho)}{\rho}, \qquad (2)$$

where $k = 2\pi/\lambda$, *λ* is the wavelength, and

$$a(\mathbf{s}) = \int U({\mathbf{r}}_{\perp}^{\prime}, 0) \exp(-\mathrm{i}k {\mathbf{s}}_{\perp} \cdot {\mathbf{r}}_{\perp}^{\prime}) \, {\mathrm{d}}^{2}{r}_{\perp}^{\prime} \qquad (3)$$

is the plane wave component of the scalar field in the direction **s** [23]. Now, choosing units so that the radial component of **F** in the far zone is ${\left|U\right|}^{2}$,

$$\mathrm{d}\Phi = {\left|U(\rho\mathbf{s})\right|}^{2} {\rho}^{2} \, \mathrm{d}\Omega \qquad (4)$$

$$= {\left(\frac{{s}_{z}}{\lambda}\right)}^{2} {\left|a(\mathbf{s})\right|}^{2} \, \mathrm{d}\Omega. \qquad (5)$$

According to radiometry, radiant intensity is flux per unit solid angle, so that

$$J(\mathbf{s}) = \frac{\mathrm{d}\Phi}{\mathrm{d}\Omega} = {\left(\frac{{s}_{z}}{\lambda}\right)}^{2} {\left|a(\mathbf{s})\right|}^{2}. \qquad (6)$$

Radiometry also computes radiant intensity by integrating the radiance over the source plane,

$$J(\mathbf{s}) = {s}_{z} \int L(\mathbf{r}, \mathbf{s}) \, {\mathrm{d}}^{2}{r}_{\perp}, \qquad (7)$$

so that equating Eqs. (6, 7) constrains the allowable light fields. Walther guessed the symmetric factorization

$$L(\mathbf{r}, \mathbf{s}) = \frac{{s}_{z}}{{\lambda}^{2}} \int U\left({\mathbf{r}}_{\perp} + \frac{{\mathbf{r}}_{\perp}^{\prime}}{2}\right) {U}^{*}\left({\mathbf{r}}_{\perp} - \frac{{\mathbf{r}}_{\perp}^{\prime}}{2}\right) \exp(-\mathrm{i}k {\mathbf{s}}_{\perp} \cdot {\mathbf{r}}_{\perp}^{\prime}) \, {\mathrm{d}}^{2}{r}_{\perp}^{\prime}, \qquad (8)$$

which satisfies this constraint and is thereby the first extended light field [7]. Walther found his light field in an *ad hoc* manner, but it is hard to find and verify the properties of all extended light fields this way, and we would have to individually analyze each light field that we do manage to find. So instead, we pursue a systematic approach to exhaustively identify and characterize the extended light fields that guarantee the correct radiant intensity in Eq. (6).
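The radiant intensity constraint has a clean discrete analogue that is easy to verify numerically: on a periodic grid of odd length, summing the discrete Wigner distribution over position recovers the power spectrum $|a|^2$ exactly. The grid length and random field below are illustrative assumptions, not part of the derivation above.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 63                      # odd length, so half-sample lags map to integers mod N
U = rng.normal(size=N) + 1j * rng.normal(size=N)   # arbitrary field samples

# Circular discrete Wigner distribution:
# W[n, j] = sum_m U[(n+m) % N] * conj(U[(n-m) % N]) * exp(-4j*pi*j*m/N)
m = np.arange(N)
W = np.empty((N, N), dtype=complex)
for n in range(N):
    prod = U[(n + m) % N] * np.conj(U[(n - m) % N])
    W[n] = [np.sum(prod * np.exp(-4j * np.pi * j * m / N)) for j in range(N)]

# Discrete power constraint: summing over position recovers the power
# spectrum |a|^2, the discrete stand-in for radiant intensity.
marginal = W.sum(axis=0)
spectrum = np.abs(np.fft.fft(U)) ** 2
print(np.allclose(marginal, spectrum))
```

The identity is exact for odd $N$ because the change of variables between $(n, m)$ and the two field indices is then a bijection modulo $N$.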

#### 3B. Explicit Extensions from Quantum Physics

The mathematics of quantum physics provides us with a systematic extended light field generator that factors the radiant intensity in Eq. (6) in a structured way. Walther’s extended light field in Eq. (8) provides the hint for this connection between radiometry and quantum physics. Specifically, Wolf recognized the similarity between Walther’s light field and the Wigner phase-space distribution [8] from quantum physics [16]. Subsequently, Agarwal *et al.* repurposed the mathematics behind phase-space representation theory to generate new light fields instead of distributions [17]. We summarize their approach, define the class of quasi light fields, describe how quasi light fields extend traditional radiometry, and show how quasi light fields can be conveniently expressed as filtered Wigner distributions.

The key insight of Agarwal *et al.* was to introduce a position operator ${\widehat{\mathbf{r}}}_{\perp}$ and a direction operator ${\widehat{\mathbf{s}}}_{\perp}$, defined by their action on the scalar field in the source plane,

$${\widehat{\mathbf{r}}}_{\perp} U({\mathbf{r}}_{\perp}) = {\mathbf{r}}_{\perp} U({\mathbf{r}}_{\perp}), \qquad {\widehat{\mathbf{s}}}_{\perp} U({\mathbf{r}}_{\perp}) = \frac{\lambda}{2\pi \mathrm{i}} {\nabla}_{\perp} U({\mathbf{r}}_{\perp}), \qquad (9)$$

that obey the commutation relations [26]

$$\left[\widehat{x}, {\widehat{s}}_{x}\right] = \left[\widehat{y}, {\widehat{s}}_{y}\right] = \frac{\mathrm{i}\lambda}{2\pi}, \qquad \left[\widehat{x}, {\widehat{s}}_{y}\right] = \left[\widehat{y}, {\widehat{s}}_{x}\right] = 0, \qquad (10)$$

in analogy with the position and momentum operators of quantum mechanics, with $\lambda/2\pi$ playing the role of Planck's constant $\hbar$. For $\lambda > 0$ the operators do not commute, so that different orderings *Ω* of products of ${\widehat{\mathbf{r}}}_{\perp}$ and ${\widehat{\mathbf{s}}}_{\perp}$ yield different functions. The approach of Agarwal *et al.* not only provides us with different ways of expressing the light field for coherent radiation, but also explains how these differences arise as the wavelength becomes nonnegligible.

We now summarize the phase-space representation calculus that Agarwal and Wolf invented [27] to map operator orderings to functions, which Agarwal *et al.* later applied to radiometry [17], culminating in a formula for extended light fields. The phase-space representation theory generates a function ${\stackrel{\u0303}{L}}^{\Omega}$ from any operator $\widehat{L}$ for each distinct way *Ω* of ordering collections of ${\widehat{\mathbf{r}}}_{\perp}$ and ${\widehat{\mathbf{s}}}_{\perp}$. So by choosing a specific $\widehat{L}$ defined by its matrix elements using the Dirac notation [26],

$$\langle {\mathbf{r}}_{\perp}^{\prime} | \widehat{L} | {\mathbf{r}}_{\perp}^{\prime\prime} \rangle = \frac{{s}_{z}}{{\lambda}^{2}} U\left({\mathbf{r}}_{\perp}^{\prime}\right) {U}^{*}\left({\mathbf{r}}_{\perp}^{\prime\prime}\right), \qquad (11)$$

the calculus produces a function ${L}^{\Omega}$ satisfying

$$J(\mathbf{s}) = {s}_{z} \int {L}^{\Omega}(\mathbf{r}, \mathbf{s}) \, {\mathrm{d}}^{2}{r}_{\perp} \qquad (12)$$

for each distinct ordering *Ω*, so that ${L}^{\Omega}$ can be factored from Eq. (6). Finally, there is an explicit formula for ${L}^{\Omega}$ [27], which in the form of Friberg *et al.* [12] reads

$${L}^{\Omega}(\mathbf{r}, \mathbf{s}) = \frac{{s}_{z}}{{\lambda}^{2}} {\left(\frac{k}{2\pi}\right)}^{2} \iiint \stackrel{\u0303}{\Omega}(\mathbf{u}, {\mathbf{r}}_{\perp}^{\prime}) U\left(\mathbf{a} + \frac{{\mathbf{r}}_{\perp}^{\prime}}{2}\right) {U}^{*}\left(\mathbf{a} - \frac{{\mathbf{r}}_{\perp}^{\prime}}{2}\right) \exp\left\{\mathrm{i}k\left[\mathbf{u} \cdot (\mathbf{a} - {\mathbf{r}}_{\perp}) - {\mathbf{s}}_{\perp} \cdot {\mathbf{r}}_{\perp}^{\prime}\right]\right\} {\mathrm{d}}^{2}a \, {\mathrm{d}}^{2}{r}_{\perp}^{\prime} \, {\mathrm{d}}^{2}u, \qquad (13)$$

where the filter $\stackrel{\u0303}{\Omega}$ encodes the ordering *Ω*.

Previous research has related the extended light fields ${L}^{\Omega}$ to the traditional light field by examining how the ${L}^{\Omega}$ behave for globally incoherent light of a small wavelength, an environment technically modeled by a quasi-homogeneous source in the geometric optics limit where $\lambda \to 0$. As $\lambda \to 0$, ${\widehat{\mathbf{r}}}_{\perp}$ and ${\widehat{\mathbf{s}}}_{\perp}$ commute per relations (10), so that all orderings *Ω* are equivalent and all of the extended light fields ${L}^{\Omega}$ collapse to the same function. Since, in the source plane, Foley and Wolf showed that one of those light fields behaves like traditional radiance [28] for globally incoherent light of a small wavelength, all of the ${L}^{\Omega}$ behave like traditional radiance for globally incoherent light of a small wavelength. Furthermore, Friberg *et al.* showed that many of the ${L}^{\Omega}$ are constant along rays for globally incoherent light of a small wavelength [12]. The ${L}^{\Omega}$ thereby subsume the traditional light field, and *globally incoherent light of a small wavelength* is the environment in which traditional radiometry holds.

To more easily relate ${L}^{\Omega}$ to the signal processing literature, we conveniently express ${L}^{\Omega}$ as a filtered Wigner distribution. We introduce a function *Π* and substitute

$$\Pi({\mathbf{r}}_{\perp}, {\mathbf{s}}_{\perp}) = {\left(\frac{k}{2\pi}\right)}^{4} \iint \stackrel{\u0303}{\Omega}(\mathbf{u}, {\mathbf{r}}_{\perp}^{\prime}) \exp\left[-\mathrm{i}k\left(\mathbf{u} \cdot {\mathbf{r}}_{\perp} + {\mathbf{s}}_{\perp} \cdot {\mathbf{r}}_{\perp}^{\prime}\right)\right] {\mathrm{d}}^{2}u \, {\mathrm{d}}^{2}{r}_{\perp}^{\prime} \qquad (14)$$

into Eq. (13). We integrate over **u**, then over **a**, and finally substitute $\mathbf{b}={\mathbf{s}}_{\perp}^{\prime}-{\mathbf{s}}_{\perp}$:

$${L}^{\Omega}({\mathbf{r}}_{\perp}, {\mathbf{s}}_{\perp}) = \iint \Pi({\mathbf{r}}_{\perp} - {\mathbf{r}}_{\perp}^{\prime}, {\mathbf{s}}_{\perp} - {\mathbf{s}}_{\perp}^{\prime}) \, {L}^{\mathrm{W}}({\mathbf{r}}_{\perp}^{\prime}, {\mathbf{s}}_{\perp}^{\prime}) \, {\mathrm{d}}^{2}{r}_{\perp}^{\prime} \, {\mathrm{d}}^{2}{s}_{\perp}^{\prime}, \qquad (15)$$

where ${L}^{\mathrm{W}}$ denotes the Wigner light field in Eq. (8). Each choice of *Π* yields a different light field. There are only minor restrictions on *Π*, or equivalently on $\stackrel{\u0303}{\Omega}$. Specifically, Agarwal and Wolf's calculus requires that [27] $\stackrel{\u0303}{\Omega}$ be an entire analytic function satisfying

$$\stackrel{\u0303}{\Omega}(\mathbf{u}, {\mathbf{r}}_{\perp}^{\prime}) \ne 0 \quad \text{for all real } \mathbf{u}, {\mathbf{r}}_{\perp}^{\prime}, \qquad (16)$$

$$\stackrel{\u0303}{\Omega}(\mathbf{0}, {\mathbf{r}}_{\perp}^{\prime}) = 1 \quad \text{for all } {\mathbf{r}}_{\perp}^{\prime}. \qquad (17)$$

We call the functions ${L}^{\Omega}$, the restricted class of extended light fields that we have systematically generated, *quasi light fields*, in recognition of their connection with the quasi-probability distributions of quantum physics.

#### 3C. Characterization from Signal Processing

Although we have identified quasi light fields and justified how they extend the traditional light field, we must still show that we have found all possible ways to extend the light field to coherent radiation, and we must indicate how to select a quasi light field for a specific application. We address both concerns by relating quasi light fields to bilinear forms of *U* and ${U}^{*}$ that are parameterized by position and direction. First, such bilinear forms reflect all the different ways to represent the energy distribution of a complex signal in signal processing, and therefore contain all possible extended light fields, allowing us to identify any that are unaccounted for by quasi light fields. Second, we may use the signal processing classification of bilinear forms to characterize quasi light fields and guide the selection of one for an application.

To relate quasi light fields to bilinear forms, we must express the filtered Wigner distribution in Eq. (15) as a bilinear form. To this end, we first express the filter kernel *Π* in terms of another function *K*:

$$\Pi({\mathbf{r}}_{\perp}, {\mathbf{s}}_{\perp}) = {\left(\frac{k}{2\pi}\right)}^{2} \int K\left(\frac{\mathbf{v}}{2} - {\mathbf{r}}_{\perp}, -\frac{\mathbf{v}}{2} - {\mathbf{r}}_{\perp}\right) \exp(-\mathrm{i}k {\mathbf{s}}_{\perp} \cdot \mathbf{v}) \, {\mathrm{d}}^{2}v. \qquad (18)$$

We then substitute Eq. (18) into Eq. (15), integrate over ${\mathbf{s}}_{\perp}^{\prime}$ to obtain

$${L}({\mathbf{r}}_{\perp}, {\mathbf{s}}_{\perp}) = \iint K\left(\frac{\mathbf{v}}{2} - {\mathbf{r}}_{\perp} + {\mathbf{r}}_{\perp}^{\prime}, -\frac{\mathbf{v}}{2} - {\mathbf{r}}_{\perp} + {\mathbf{r}}_{\perp}^{\prime}\right) U\left({\mathbf{r}}_{\perp}^{\prime} + \frac{\mathbf{v}}{2}\right) {U}^{*}\left({\mathbf{r}}_{\perp}^{\prime} - \frac{\mathbf{v}}{2}\right) \exp(-\mathrm{i}k {\mathbf{s}}_{\perp} \cdot \mathbf{v}) \, {\mathrm{d}}^{2}{r}_{\perp}^{\prime} \, {\mathrm{d}}^{2}v, \qquad (19)$$

and finally substitute new position variables for ${\mathbf{r}}_{\perp}^{\prime} \pm \mathbf{v}/2$ in terms of **v**:

$$L({\mathbf{r}}_{\perp}, {\mathbf{s}}_{\perp}) = \iint \left\{K\left({\mathbf{r}}_{\perp}^{\prime} - {\mathbf{r}}_{\perp}, {\mathbf{r}}_{\perp}^{\prime\prime} - {\mathbf{r}}_{\perp}\right)\right\} U\left({\mathbf{r}}_{\perp}^{\prime}\right) {U}^{*}\left({\mathbf{r}}_{\perp}^{\prime\prime}\right) \exp\left[-\mathrm{i}k {\mathbf{s}}_{\perp} \cdot \left({\mathbf{r}}_{\perp}^{\prime} - {\mathbf{r}}_{\perp}^{\prime\prime}\right)\right] {\mathrm{d}}^{2}{r}_{\perp}^{\prime} \, {\mathrm{d}}^{2}{r}_{\perp}^{\prime\prime}, \qquad (20)$$

which is a bilinear form in *U* and ${U}^{*}$, with kernel indicated by the braces (the constant ${s}_{z}/{\lambda}^{2}$ is absorbed into *K*).

The structure of the kernel of the bilinear form in Eq. (20) limits *L* to a shift-invariant energy distribution. Specifically, translating the scalar field in Eq. (20) in position and direction orthogonal to the *z*-axis according to

$$U({\mathbf{r}}_{\perp}) \to U({\mathbf{r}}_{\perp} - {\mathbf{r}}_{\perp}^{0}) \exp(\mathrm{i}k {\mathbf{s}}_{\perp}^{0} \cdot {\mathbf{r}}_{\perp}) \qquad (21)$$

translates the light field correspondingly:

$$L({\mathbf{r}}_{\perp}, {\mathbf{s}}_{\perp}) \to L({\mathbf{r}}_{\perp} - {\mathbf{r}}_{\perp}^{0}, {\mathbf{s}}_{\perp} - {\mathbf{s}}_{\perp}^{0}). \qquad (22)$$

Quadratic forms of the signal that are shift-invariant in this sense are precisely the *quadratic class* of time–frequency distributions, which is sometimes misleadingly referred to as Cohen's class [20].
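This shift invariance is easy to check numerically for one member of the quadratic class, the spectrogram, implemented here on a periodic grid. The Gaussian window and the shift amounts are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
U = rng.normal(size=N) + 1j * rng.normal(size=N)        # scalar field samples
w = np.exp(-0.5 * ((np.arange(N) - N // 2) / 6.0) ** 2) # Gaussian window

def spectrogram(field):
    """Circular discrete spectrogram, a nonnegative quadratic distribution."""
    S = np.empty((N, N))
    for pos in range(N):
        seg = np.roll(w, pos - N // 2) * field   # window centered at `pos`
        S[pos] = np.abs(np.fft.fft(seg)) ** 2
    return S

# Translate the field by n0 samples in position, and modulate it to shift
# its direction content by j0 DFT bins.
n0, j0 = 5, 9
U2 = np.roll(U, n0) * np.exp(2j * np.pi * j0 * np.arange(N) / N)

S, S2 = spectrogram(U), spectrogram(U2)
# The distribution translates by the same amounts along both axes.
print(np.allclose(S2, np.roll(S, (n0, j0), axis=(0, 1))))
```

On the circular grid the covariance is exact, since the modulation only contributes a unit-magnitude phase inside the squared magnitude.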

The quasi light fields represent *all* possible ways of extending the light field to coherent radiation. This is because any reasonably defined extended light field must be shift-invariant in position and direction, as translating and rotating coordinates should modify the scalar field and light field representations in corresponding ways. Thus, on the one hand, an extended light field must be a quadratic time–frequency distribution. On the other hand, Eq. (20) implies that quasi light fields span the entire class of quadratic time–frequency distributions, apart from the constraints on *Π* described at the end of Subsection 3B. Constraint (17) is necessary to satisfy the power constraint implied by Eq. (6), which any extended light field must satisfy. In contrast, constraint (16) is a technical detail concerning analyticity and the location of zeros; extended light fields strictly need not satisfy this mild constraint, but the light fields that are ruled out are well-approximated by light fields that satisfy it.

We obtain a concrete sensor array processing interpretation of quasi light fields by grouping the exponentials in Eq. (20) with *U* instead of *K*:

$$L({\mathbf{r}}_{\perp}, {\mathbf{s}}_{\perp}) = \iint K\left({\mathbf{r}}_{\perp}^{\prime} - {\mathbf{r}}_{\perp}, {\mathbf{r}}_{\perp}^{\prime\prime} - {\mathbf{r}}_{\perp}\right) \left[U\left({\mathbf{r}}_{\perp}^{\prime}\right) \exp\left(-\mathrm{i}k {\mathbf{s}}_{\perp} \cdot {\mathbf{r}}_{\perp}^{\prime}\right)\right] {\left[U\left({\mathbf{r}}_{\perp}^{\prime\prime}\right) \exp\left(-\mathrm{i}k {\mathbf{s}}_{\perp} \cdot {\mathbf{r}}_{\perp}^{\prime\prime}\right)\right]}^{*} {\mathrm{d}}^{2}{r}_{\perp}^{\prime} \, {\mathrm{d}}^{2}{r}_{\perp}^{\prime\prime}. \qquad (23)$$

Each bracketed factor phases a scalar field measurement toward the direction ${\mathbf{s}}_{\perp}$, exactly as a beamformer focuses an array on a plane wave. In effect, the quasi light field uses *K* to estimate the correlation $E\left[U\left({\mathbf{r}}^{\mathrm{R}}\right){U}^{*}\left({\mathbf{r}}^{\mathrm{C}}\right)\right]$ by

$$K\left({\mathbf{r}}^{\mathrm{R}} - {\mathbf{r}}_{\perp}, {\mathbf{r}}^{\mathrm{C}} - {\mathbf{r}}_{\perp}\right) U\left({\mathbf{r}}^{\mathrm{R}}\right) {U}^{*}\left({\mathbf{r}}^{\mathrm{C}}\right), \qquad (24)$$

so that *K* serves a similar role as the covariance matrix taper that gives rise to design features such as diagonal loading [29]. But for our purposes, the sensor array processing interpretation in Eq. (23) allows us to cleanly separate the choice of quasi light field in *K* from the plane wave focusing in the exponentials.
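To make the beamforming connection concrete, here is a minimal sketch: with a constant kernel *K*, the quadratic form in Eq. (23) collapses to the classic delay-and-sum beamformer's power output, which peaks at a plane wave's arrival direction. The array geometry, wavelength, and arrival direction are invented for illustration.

```python
import numpy as np

lam = 3e-3                      # wavelength (m); millimeter-wave assumption
k = 2 * np.pi / lam
y = np.arange(33) * lam / 4     # hypothetical sensor positions along the array
s_true = 0.3                    # direction component s_y of an incoming plane wave
U = np.exp(1j * k * s_true * y) # noiseless scalar field samples at the sensors

# Delay-and-sum beamformer: phase each measurement toward a candidate
# direction s (the focusing exponentials) and sum; with a constant kernel K
# the quadratic form reduces to this squared magnitude.
s_scan = np.linspace(-1.0, 1.0, 2001)
B = np.abs(np.exp(-1j * k * np.outer(s_scan, y)) @ U) ** 2

print(s_scan[np.argmax(B)])     # peaks at the arrival direction
```

Different choices of *K* reweight the sensor-pair correlations before this focusing step, which is exactly where the various quasi light fields differ.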

Several signal processing texts meticulously classify the quadratic class of time–frequency distributions by their properties and discuss distribution design and use for various applications [20, 25]. We can use these resources to design quasi light fields for specific applications. For example, if we desire a light field with fine directional localization, we may first try the Wigner quasi light field in Eq. (8), which is a popular starting choice. We may then discover that we have too many artifacts from interfering spatial frequencies, called *cross terms*, and therefore wish to consider a reduced interference quasi light field. We might try the modified B-distribution, which is a particular reduced interference quasi light field that has a tunable parameter to suppress interference. Or, we may decide to design our own quasi light field in a transformed domain using ambiguity functions. The resulting tradeoffs can be tailored to specific application requirements.

## 4. CAPTURING QUASI LIGHT FIELDS

To capture an arbitrary quasi light field, we sample and process the scalar field. In incoherent imaging, the traditional light field is typically captured by instead making intensity measurements at a discrete set of positions and directions, as is done in the plenoptic camera [2]. While it is possible to apply the same technique to coherent imaging, only a small subset of quasi light fields that exhibit poor localization properties can be captured this way. In comparison, all quasi light fields can be computed from the scalar field, as in Eq. (15). We therefore sample the scalar field with a discrete set of sensors placed at different positions in space and subsequently process the scalar field measurements to compute the desired quasi light field. We describe the capture process for three specific quasi light fields in Subsection 4A and demonstrate the different localization properties of these quasi light fields via simulation in Subsection 4B.

#### 4A. Sampling the Scalar Field

To make the capture process concrete, we capture three different quasi light fields. For simplicity, we consider a two-dimensional scene and sample the scalar field with a linear array of sensors regularly spaced along the *y*-axis (Fig. 2). With this geometry, the scalar field *U* is parameterized by a single position variable *y*, and the discrete light field $\mathcal{l}$ is parameterized by *y* and the direction component ${s}_{y}$. The sensor spacing is $d/2$, which we assume is fine enough to ignore aliasing effects. This assumption is practical for long-wavelength applications such as millimeter-wave radar. For other applications, aliasing can be avoided by applying an appropriate prefilter. From the sensor measurements, we compute three different quasi light fields, including the spectrogram and the Wigner.

Although the spectrogram quasi light field is attractive because it can be captured like the traditional light field by making intensity measurements, it exhibits poor localization properties. Zhang and Levoy explain [30] how to capture the spectrogram by placing an aperture stop specified by a transmission function *T* over the desired position *y* before computing a Fourier transform to extract the plane wave component in the desired direction ${s}_{y}$. Previously, Ziegler *et al.* used the spectrogram as a coherent light field to represent a hologram [4]. The spectrogram is an important quasi light field because it is the building block for the quasi light fields that can be directly captured by making intensity measurements, since all nonnegative quadratic time–frequency distributions, and therefore all nonnegative quasi light fields, are sums of spectrograms [20]. Ignoring constants and ${s}_{z}$, we compute the discrete spectrogram from the scalar field samples by

$${\mathcal{l}}^{\mathrm{S}}(y, {s}_{y}) = {\left|\sum_{n} T\left(\frac{nd}{2} - y\right) U\left(\frac{nd}{2}\right) \exp\left(-\mathrm{i}k {s}_{y} \frac{nd}{2}\right)\right|}^{2}. \qquad (25)$$

The Wigner quasi light field is a popular choice that exhibits good energy localization in position and direction [20]. We already identified the Wigner quasi light field in Eq. (8); the discrete version is

$${\mathcal{l}}^{\mathrm{W}}(y, {s}_{y}) = \sum_{n} U\left(y + \frac{nd}{2}\right) {U}^{*}\left(y - \frac{nd}{2}\right) \exp\left(-\mathrm{i}k {s}_{y} n d\right). \qquad (26)$$

We now introduce a third quasi light field for capture, in order to help us understand the implications of requiring quasi light fields to exhibit traditional light field properties. Specifically, the traditional light field has real nonnegative values that are zero where the scalar field is zero, whereas no quasi light field behaves this way [24]. Although the spectrogram has nonnegative values, the support of both the spectrogram and Wigner spills over into regions where the scalar field is zero. In contrast, the conjugate Rihaczek quasi light field, which can be obtained by substituting Eq. (3) for ${a}^{*}\left(\mathbf{s}\right)$ in Eq. (6) and factoring, is identically zero at all positions where the scalar field is zero and for all directions in which the plane wave component is zero:

$${L}^{\mathrm{R}}(\mathbf{r}, \mathbf{s}) = \frac{{s}_{z}}{{\lambda}^{2}} a(\mathbf{s}) {U}^{*}({\mathbf{r}}_{\perp}) \exp(\mathrm{i}k {\mathbf{s}}_{\perp} \cdot {\mathbf{r}}_{\perp}), \qquad (27)$$

whose discrete version, again ignoring constants and ${s}_{z}$, is

$${\mathcal{l}}^{\mathrm{R}}(y, {s}_{y}) = {U}^{*}(y) \exp(\mathrm{i}k {s}_{y} y) \sum_{n} U\left(\frac{nd}{2}\right) \exp\left(-\mathrm{i}k {s}_{y} \frac{nd}{2}\right). \qquad (28)$$
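A small numerical sketch illustrates this support property (illustrative sensor geometry; constants and ${s}_{z}$ dropped, as in the text): at a sensor where the scalar field is null, the conjugate Rihaczek field vanishes exactly, while the Wigner field does not.

```python
import numpy as np

lam = 3e-3                     # wavelength (m); illustrative choice
k = 2 * np.pi / lam
d = lam / 10
yn = np.arange(-64, 65) * d / 2          # sensor positions, spacing d/2
U = np.ones(len(yn), dtype=complex)
i0 = len(yn) // 2
U[i0] = 0.0                    # a downward plane wave with a null at y = 0

def a_hat(sy):
    """Discrete plane wave component in direction s_y (constants dropped)."""
    return np.sum(U * np.exp(-1j * k * sy * yn)) * d / 2

def wigner(i, sy):
    """Discrete Wigner quasi light field at sensor index i (constants dropped)."""
    mmax = min(i, len(yn) - 1 - i)
    m = np.arange(-mmax, mmax + 1)
    return np.sum(U[i + m] * np.conj(U[i - m]) * np.exp(-1j * k * sy * m * d))

def rihaczek(i, sy):
    """Discrete conjugate Rihaczek quasi light field at sensor index i."""
    return np.conj(U[i]) * np.exp(1j * k * sy * yn[i]) * a_hat(sy)

# At the null, the Rihaczek field vanishes exactly; the Wigner does not.
print(abs(rihaczek(i0, 0.0)), abs(wigner(i0, 0.0)))
```

The Rihaczek value is exactly zero because it carries a factor of ${U}^{*}$ at the evaluation point, whereas the Wigner sums products of field values at symmetric pairs of positions, almost all of which miss the null.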

#### 4B. Localization Tradeoffs

Different quasi light fields localize energy in position and direction in different ways, so that the choice of quasi light field affects the potential resolution achieved in an imaging application. We illustrate the diversity of behavior by simulating a plane wave propagating past a screen edge and computing the spectrogram, Wigner, and Rihaczek quasi light fields from scalar field samples (Fig. 3). This simple scenario stresses the main tension between localization in position and direction: each quasi light field must encode the position of the screen edge as well as the downward direction of the plane wave. The quasi light fields serve as intermediate representations used to jointly estimate the position of the screen edge and the orientation of the plane wave.

Our simulation accurately models diffraction using our implementation of the angular spectrum propagation method, which is the same technique used in commercial optics software to accurately simulate wave propagation [33]. We propagate a plane wave with wavelength $\lambda = 3\ \mathrm{mm}$ a distance $R = 50\ \mathrm{m}$ past the screen edge, where we measure the scalar field and compute the three discrete light fields using Eqs. (25, 26, 28). To compute the light fields, we set $d = \lambda/10$, run the summations over $\left|n\right| \leqslant 10/\lambda$, and use a rectangular window function of width $10\ \mathrm{cm}$ for *T*. We plot ${\mathcal{l}}^{\mathrm{S}}$, $\left|{\mathcal{l}}^{\mathrm{W}}\right|$, and $\left|{\mathcal{l}}^{\mathrm{R}}\right|$ in terms of the two-plane parameterization of the light field [1], so that each ray is directed from a point *u* in the plane of the screen toward a point *y* in the measurement plane, and so that ${s}_{y} = (y-u)/{\left[{R}^{2}+{(y-u)}^{2}\right]}^{1/2}$.
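The angular spectrum method can be sketched in a few lines of 1-D Python; the grid parameters are illustrative, and this is a generic implementation of the technique rather than the authors' code.

```python
import numpy as np

lam = 3e-3                       # wavelength (m), matching the simulation
k = 2 * np.pi / lam
N, dy = 4096, lam / 10           # illustrative 1-D sampling grid along y

def angular_spectrum_propagate(U0, z):
    """Propagate a sampled 1-D scalar field a distance z via the angular spectrum."""
    ky = 2 * np.pi * np.fft.fftfreq(len(U0), d=dy)    # transverse wavenumbers
    kz = np.sqrt(np.maximum(k**2 - ky**2, 0.0) + 0j)  # axial wavenumbers
    H = np.exp(1j * kz * z)                           # free-space transfer function
    H[ky**2 > k**2] = 0.0                             # discard evanescent waves
    return np.fft.ifft(np.fft.fft(U0) * H)

# Sanity check: a single tilted plane wave on an exact DFT bin only
# acquires a phase under propagation, so its magnitude is unchanged.
j = 64
U0 = np.exp(2j * np.pi * j * np.arange(N) / N)
U1 = angular_spectrum_propagate(U0, 0.5)
print(np.allclose(np.abs(U1), 1.0))
```

Each plane wave component is multiplied by a unit-magnitude phase factor, so the method is exact for propagating components and conserves their power.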

We compare each light field’s ability to estimate the position of the screen edge and the orientation of the plane wave (Fig. 3). Geometric optics provides an ideal estimate: we should ideally see only rays pointing straight down $(u=y)$ past the screen edge, corresponding to a diagonal line in the upper-right quadrant of the light field plots. Instead, we see blurred lines with ringing. The ringing is physically accurate and indicates the diffraction fringes formed on the measurement plane. The blurring indicates localization limitations. While the spectrogram’s window *T* can be chosen to narrowly localize energy in either position or direction, the Wigner narrowly localizes energy in both, depicting instantaneous frequency without being limited by the classical Fourier uncertainty principle [20].

It may seem that the Wigner light field is preferable to the others and the clear choice for all applications. While the Wigner light field possesses excellent localization properties, it exhibits cross-term artifacts due to interference from different plane wave components. An alternative quasi light field such as the Rihaczek can strike a balance between localization and cross-term artifacts, and therefore may be a more appropriate choice, as discussed at the end of Subsection 3C. If our goal were only to estimate the position of the screen edge, we might prefer the spectrogram; to jointly estimate both position and plane wave orientation, we prefer the Wigner, and if there were two plane waves instead of one, we might prefer the Rihaczek. One thing is certain, however: we must abandon nonnegative quasi light fields to achieve better localization tradeoffs, as all nonnegative quadratic time–frequency distributions are sums of spectrograms and hence exhibit poor localization tradeoffs [20].
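The Wigner cross terms can be reproduced with a toy discrete example, with two plane wave components at arbitrary DFT bins $p$ and $q$ (an illustration, not the paper's simulation): at the direction midway between the two true components, the artifact is twice as large as each auto term.

```python
import numpy as np

N = 63                           # odd grid keeps the discrete Wigner exact
p, q = 10, 20                    # two plane wave components (DFT bins), p + q even
nn = np.arange(N)
U = np.exp(2j * np.pi * p * nn / N) + np.exp(2j * np.pi * q * nn / N)

def wigner(n, j):
    """Circular discrete Wigner distribution W[n, j]."""
    m = np.arange(N)
    prod = U[(n + m) % N] * np.conj(U[(n - m) % N])
    return np.sum(prod * np.exp(-4j * np.pi * j * m / N))

auto = abs(wigner(0, p))              # energy at a true component
cross = abs(wigner(0, (p + q) // 2))  # artifact midway between the components
print(auto, cross)
```

The cross term oscillates with position at the difference frequency, which is why reduced interference distributions can suppress it by smoothing without destroying the auto terms.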

## 5. IMAGE FORMATION

We wish to form images from quasi light fields for coherent applications similarly to how we form images from the traditional light field for incoherent applications, by using Eq. (1) to integrate bundles of light field rays to compute pixel values (Fig. 1). However, simply selecting a particular captured quasi light field *L* and evaluating Eq. (1) raises three questions about the validity of the resulting image. First, is it meaningful to distribute coherent energy over surface area by factoring radiant intensity in Eq. (6)? Second, does the far-zone assumption implicit in radiometry and formalized in Eq. (2) limit the applicability of quasi light fields? And third, how do we capture quasi light field rays remotely if, unlike the traditional light field, quasi light fields need not be constant along rays?

The first question is a semantic one. For incoherent light of small wavelength, we *define* an image in terms of the power radiating from a scene surface toward an aperture, and physics tells us that this uniquely specifies the image (Section 3), which may be expressed in terms of the traditional light field. If we attempt to generalize the same definition of an image to partially coherent, broadband light, and specifically to coherent light at a nonzero wavelength, we must ask how to isolate the power from a surface patch toward the aperture, according to classical wave optics. But there is no unique answer; different isolation techniques correspond to different quasi light fields. Therefore, to be well-defined, we must extend the definition of an image for coherent light to include a particular choice of quasi light field that corresponds to a particular factorization of radiant intensity.

The second and third questions concern assumptions in the formulation of quasi light fields, and in image formation from them, that can lead to coherent imaging inaccuracies when those assumptions do not hold. Specifically, unless the scene surface and aperture are far apart, the far-zone assumption in Eq. (2) does not hold, so that quasi light fields are incapable of modeling near-zone behavior. Also, unless we choose a quasi light field that is constant along rays, such as an angle-impact Wigner function [34], remote measurements might not accurately reflect the light field at the scene surface [35], resulting in imaging inaccuracies. Therefore, in general, integrating bundles of remotely captured quasi light field rays produces an approximation of the image we have defined. We assess this approximation by building an accurate near-zone model in Subsection 5A, simulating imaging performance of several coherent cameras in Subsection 5B, and showing how our image formation procedure generalizes the classic beamforming algorithm in Subsection 5C.

#### 5A. Near-Zone Radiometry

We take a new approach to formulating light fields for coherent radiation that avoids making the assumptions that (i) the measurement plane is far from the scene surface and (ii) light fields are constant along rays. The resulting light fields are accurate in the near zone, and may be compared with quasi light fields to understand the latter’s limitations. The key idea is to express a near-zone light field $L(\mathbf{r},\mathbf{s})$ on the measurement plane in terms of the infinitesimal flux at the point where the line containing the ray $(\mathbf{r},\mathbf{s})$ intersects the scene surface (Fig. 4). First we compute the scalar field at the scene surface, next we compute the infinitesimal flux, and then we identify a light field that predicts the same flux using the laws of radiometry. In contrast to Walther’s approach (Subsection 3A), (i) we do not make the far-zone approximation as in Eq. (2), and (ii) we formulate the light field in the measurement plane instead of in the source plane at the scene surface. Therefore, in forming an image from a near-zone light field, we are not limited to the far zone and we need not relate the light field at the measurement plane to the light field at the scene surface.

The first step in deriving a near-zone light field *L* for the ray $(\mathbf{r},\mathbf{s})$ is to use the scalar field on the measurement plane to compute the scalar field at the point ${\mathbf{r}}^{\mathrm{P}}$ where the line containing the ray intersects the scene surface. We choose coordinates so that the measurement plane is the $xy$-plane, the scene lies many wavelengths away in the half-space $z<0$, and **r** is at the origin. We denote the distance between the source ${\mathbf{r}}^{\mathrm{P}}$ on the scene surface and the point of observation **r** by *ρ*. Under a reasonable bandwidth assumption, the inverse diffraction formula expresses the scalar field at ${\mathbf{r}}^{\mathrm{P}}$ in terms of the scalar field on the measurement plane [36]:

Next, we compute the differential flux $\mathrm{d}\Phi $ through a portion of a sphere at ${\mathbf{r}}^{\mathrm{P}}$ subtending differential solid angle $\mathrm{d}\Omega $. We obtain $\mathrm{d}\Phi $ by integrating the radial component of the energy flux density vector

Finally, we factor out ${s}_{z}$ and an outer integral over surface area from $\mathrm{d}\Phi/\mathrm{d}\Omega$ to determine a near-zone light field. Unlike in Subsection 3A, the nonlinear exponential argument in $\tilde{a}$ complicates the factoring. Nonetheless, we obtain a near-zone light field that generalizes the Rihaczek quasi light field by substituting Eq. (34) for ${\tilde{a}}^{*}$ in Eq. (35). After factoring and freeing **r** from the origin by substituting $\mathbf{r}-\rho \mathbf{s}$ for $-\rho \mathbf{s}$, we obtain

The subscript *ρ* reminds us of this near-zone light field’s dependence on distance.

${L}_{\rho}^{\mathrm{R}}$ is evidently neither the traditional light field nor a quasi light field, as it depends directly on the scene geometry through an additional distance parameter. This distance parameter *ρ* is a function of **r**, **s**, and the geometry of the scene; it is the distance along **s** between the scene surface and **r**. We may integrate ${L}_{\rho}^{\mathrm{R}}$ over a bundle of rays to compute the image pixel values just like any other light field, as long as we supply the right value of *ρ* for each ray. In contrast, quasi light fields are incapable of modeling optical propagation in the near zone, as it is insufficient to specify power flow along rays: we must also know the distance between the source and point of measurement along each ray.
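For a scene surface that is a plane parallel to the measurement plane, the distance parameter is determined by the ray direction and the scene depth alone; a minimal sketch (the function name and geometry are our own, not from the text):

```python
import numpy as np

def rho_planar(r, s, z_scene=-1.0):
    """Distance rho along -s from a point r on the measurement plane
    (z = 0) to a planar scene surface at z = z_scene < 0.

    For a plane parallel to the measurement plane, rho depends only on
    the direction s and the scene depth, not on r itself.
    """
    s = np.asarray(s, dtype=float)
    s = s / np.linalg.norm(s)       # ensure a unit direction
    if s[2] <= 0:
        raise ValueError("s must point from the scene toward z = 0")
    # Traveling a distance rho along -s changes z by -rho * s_z,
    # so rho * s_z = |z_scene|.
    return abs(z_scene) / s[2]

print(rho_planar((0, 0, 0), (0, 0, 1)))      # 1.0 (head-on ray)
print(rho_planar((0, 0, 0), (0.6, 0, 0.8)))  # ~1.25 (oblique ray)
```

An oblique ray travels a factor of $1/{s}_{z}$ farther than a head-on ray to reach the same scene plane.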

We can obtain near-zone generalizations of all quasi light fields through the sensor array processing interpretation in Subsection 3C. Recall that each quasi light field corresponds to a particular choice of the function *K* in Eq. (23). For example, setting $K(\mathbf{a},\mathbf{b})=\delta \left(\mathbf{b}\right)$, where *δ* is the Dirac delta function, yields the Rihaczek quasi light field ${L}^{\mathrm{R}}$ in Eq. (27). To generalize quasi light fields to the near zone, we focus at a point instead of a plane wave component by using a spatial filter with impulse response $\mathrm{exp}(-ik|\mathbf{r}-\rho \mathbf{s}|)$ instead of $\mathrm{exp}(ik\mathbf{s}\cdot \mathbf{r})$ in Eq. (23). Then, choosing $K(\mathbf{a},\mathbf{b})=\delta \left(\mathbf{b}\right)$ yields ${L}_{\rho}^{\mathrm{R}}$, the near-zone generalization of the Rihaczek in Eq. (36), and choosing other functions *K* yields near-zone generalizations of the other quasi light fields.
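As a sanity check on this substitution, the near-zone focusing response $\mathrm{exp}(-ik|\mathbf{r}-\rho \mathbf{s}|)$ should approach the plane-wave response $\mathrm{exp}(ik\mathbf{s}\cdot \mathbf{r})$, up to the constant phase offset $-k\rho$, as *ρ* grows. A short numerical check, with all parameter values chosen by us:

```python
import numpy as np

k = 2 * np.pi / 3e-3                  # wavenumber for a 3 mm wavelength
s = np.array([0.6, 0.0, 0.8])         # unit ray direction
r = np.array([0.01, -0.02, 0.0])      # point on the measurement plane (m)

def near_response(rho):               # focus at the point r - rho*s
    return np.exp(-1j * k * np.linalg.norm(r - rho * s))

def far_response(rho):                # plane-wave filter, with the
    # constant -k*rho phase offset included for comparison
    return np.exp(-1j * k * rho) * np.exp(1j * k * np.dot(s, r))

errs = [abs(near_response(p) - far_response(p)) for p in (1.0, 10.0, 100.0)]
print(errs[0] > errs[1] > errs[2])    # True: agreement improves with rho
```

The residual phase error decays like $1/\rho$, consistent with the near-zone filter reducing to the far-zone one at large distance.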

#### 5B. Near-Zone Diffraction Limitations

We compute and compare image pixel values using the Rihaczek quasi light field ${L}^{\mathrm{R}}$ and its near-zone generalization ${L}_{\rho}^{\mathrm{R}}$, demonstrating how all quasi light fields implicitly make the Fraunhofer diffraction approximation that limits accurate imaging to the far zone. First, we construct coherent cameras from ${L}^{\mathrm{R}}$ and ${L}_{\rho}^{\mathrm{R}}$. For simplicity, we consider a two-dimensional scene and sample the light fields, approximating the integral over a bundle of rays (Fig. 1) by the summation of discrete rays directed from the center ${\mathbf{r}}^{\mathrm{P}}$ of the scene surface patch to each sensor on a virtual aperture of diameter *A*, equally spaced every distance *d* in the measurement plane [Fig. 5a]. Ignoring constants and ${s}_{z}$, we compute the pixel values ${P}^{\mathrm{R}}$ for a far-zone camera from the Rihaczek quasi light field in Eq. (27) [Eq. (37)], and the pixel values ${P}_{\rho}^{\mathrm{R}}$ for a near-zone camera from ${L}_{\rho}^{\mathrm{R}}$ in Eq. (36) [Eq. (38)].

By comparing the exponentials in Eq. (37) with those in Eq. (38), we see that the near-zone camera aligns the sensor measurements along spherical wavefronts diverging from the point of focus ${\mathbf{r}}^{\mathrm{P}}$, while the far-zone camera aligns measurements along plane wavefront approximations [Fig. 5b]. Spherical wavefront alignment makes physical sense in accordance with the Huygens–Fresnel principle of diffraction, while approximating spherical wavefronts with plane wavefronts is reminiscent of Fraunhofer diffraction. In fact, the far-zone approximation in Eq. (2) used to derive quasi light fields follows directly from the Rayleigh–Sommerfeld diffraction integral by linearizing the exponentials, which is precisely Fraunhofer diffraction. Therefore, all quasi light fields are valid only for small Fresnel numbers, when the source and point of measurement are sufficiently far away from each other.
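The wavefront-alignment difference is easy to exhibit numerically: for a point source at finite depth, spherical alignment adds the sensor measurements in phase, while the broadside plane-wave alignment leaves a residual phase that partially cancels the sum. A sketch with illustrative parameters of our own choosing:

```python
import numpy as np

# Point source at depth R on a 2D scene; sensors along the measurement
# line. All parameter values are illustrative choices of our own.
lam = 3e-3                        # wavelength (m)
k = 2 * np.pi / lam               # wavenumber
R = 1.0                           # depth of the point of focus (m)
x = np.arange(-50, 51) * 2e-2     # sensor positions, 2 cm pitch (m)

rho = np.hypot(x, R)              # exact source-to-sensor distances
U = np.exp(1j * k * rho) / rho    # measured diverging spherical wave

# Near-zone camera: align along the spherical wavefronts from the focus.
near = U * np.exp(-1j * k * rho)
# Far-zone camera, broadside focus s = (0, 1): the plane-wave phase
# k * s . r_m vanishes, so the measurements are summed unaligned.
far = U

print(abs(near.sum()) > abs(far.sum()))   # True: coherent gain
```

With spherical alignment every term is real and positive, so the sum is maximal; the plane-wave approximation wraps through many cycles of residual phase across the array.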

We expect the near-zone camera to outperform the far-zone camera in near-zone imaging applications, which we demonstrate by comparing their ability to resolve small targets moving past their field of view. As a baseline, we introduce a third camera with nonnegative pixel values ${P}_{\rho}^{\mathrm{B}}$ by restricting the summation over *m* in Eq. (38) to $|md|<A/2$, which results in the beamformer camera used in sensor array processing [5, 37]. Alternatively, we could extend the summation over *n* in Eq. (38) to the entire array, but this would average anisotropic responses over a wider aperture diameter, resulting in a different image. We simulate an opaque screen containing a pinhole that is backlit with a coherent plane wave (Fig. 6). The sensor array is $D=2\ \mathrm{m}$ wide and just $R=1\ \mathrm{m}$ away from the screen. The virtual aperture is $A=10\ \mathrm{cm}$ wide, and the camera is focused on a fixed $1\ \mathrm{mm}$ pixel straight ahead on the screen. The pinhole has width $1\ \mathrm{mm}$, which is smaller than the wavelength $\lambda =3\ \mathrm{mm}$, so the plane wavefronts bend into slightly spherical shapes via diffraction. We move the pinhole to the right, recording pixel values $|{P}^{\mathrm{R}}|$, $|{P}_{\rho}^{\mathrm{R}}|$, and ${P}_{\rho}^{\mathrm{B}}$ for each camera at each pinhole position. Because the pixel values are produced by coherently combining the sensor measurements, each camera records a multilobed response. The width of the main lobe indicates the near-zone resolution of the camera.

The near-zone camera is able to resolve the pinhole down to its actual size of $1\ \mathrm{mm}$, greatly outperforming the far-zone camera, which records a blur $66\ \mathrm{cm}$ wide, and even outperforming the beamformer camera. Neither comparison is surprising. First, with a Fresnel number of ${D}^{2}/R\lambda \approx 1333$, the Fraunhofer approximation implicitly made by quasi light fields does not hold for this scenario, so we expect the far-zone camera to exhibit poor resolution. Second, the near-zone camera uses the entire $D=2\ \mathrm{m}$ array instead of just the sensors on the virtual aperture that the beamformer camera is restricted to, and the extra sensors lead to improved resolution.
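The resolution comparison can be reproduced qualitatively with a toy version of this simulation; here the pinhole is idealized as a point source, the sensor pitch is our own choice, and the far-zone camera is omitted since its phases depend on the details of Eq. (37):

```python
import numpy as np

lam = 3e-3; k = 2 * np.pi / lam      # wavelength 3 mm
R, D, A = 1.0, 2.0, 0.10             # screen depth, array width, aperture (m)
d = 5e-3                             # sensor pitch: our choice, not the text's
x = np.arange(-D / 2, D / 2 + d / 2, d)   # sensor positions
ap = np.abs(x) < A / 2               # sensors on the virtual aperture

def dist(xp, xs):                    # pinhole at (xp, -R) to sensor at (xs, 0)
    return np.hypot(xs - xp, R)

steer = np.exp(-1j * k * dist(0.0, x))    # focus at the fixed pixel

def pixel(xp):
    U = np.exp(1j * k * dist(xp, x)) / dist(xp, x)  # point-source pinhole
    g1 = np.sum((U * steer)[ap])     # beamformer windowed to the aperture
    g2 = np.sum(U * steer)           # beamformer over the entire array
    return abs(np.conj(g1) * g2), abs(g1) ** 2      # |P_rho^R|, P_rho^B

xs = np.linspace(-0.05, 0.05, 401)   # pinhole positions
near, beam = np.array([pixel(xp) for xp in xs]).T

def fwhm(p):                         # full width at half maximum (m)
    above = xs[p > p.max() / 2]
    return above[-1] - above[0]

print(round(D**2 / (R * lam)))       # Fresnel number: 1333
print(fwhm(near) < fwhm(beam))       # True: near-zone camera resolves finer
```

The main lobe of the near-zone camera is set by the full array width *D*, while the beamformer camera's is set by the aperture *A*, reproducing the ordering reported in the text.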

#### 5C. Generalized Beamforming

We compare image formation from light fields with traditional perspectives on coherent image formation by relating quasi light fields and our coherent cameras to the classic beamforming algorithm used in many coherent imaging applications, including ultrasound [5] and radar [37]. The beamforming algorithm estimates a spherical wave diverging from a point of focus ${\mathbf{r}}^{\mathrm{P}}$ by delaying and averaging sensor measurements. When the radiation is narrowband, the delays are approximated by phase shifts. With the sensor array geometry from Subsection 5B, the beamformer output *g* is given by Eq. (39),

where the $T\left(md\right)$ are amplitude weights used to adjust the beamformer’s performance. As ${\mathbf{r}}^{\mathrm{P}}$ moves into the far zone, the phase shifts become linear in the sensor positions, so that apart from a constant phase offset, Eq. (39) becomes a short-time Fourier transform ${g}^{\infty}$ [Eq. (41)]. Evidently, ${\left|{g}^{\infty}\right|}^{2}$ is a spectrogram quasi light field, and we may select *T* to be a narrow window about a point **r** to capture ${L}^{\mathrm{S}}(\mathbf{r},{\mathbf{s}}^{0})$. We have already seen how quasi light fields generalize the spectrogram.
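A small numerical illustration of this limit (all parameters are our own choices): with linear phase shifts the beamformer output is a windowed Fourier transform of the sensor measurements, and the resulting spectrogram peaks when the steering direction matches the source:

```python
import numpy as np

k = 2 * np.pi / 3e-3              # wavenumber for a 3 mm wavelength
d = 1.5e-3                        # half-wavelength sensor pitch (ours)
x = np.arange(-64, 64) * d        # sensor positions
s0 = np.sin(np.deg2rad(10.0))     # direction cosine of the source

U = np.exp(1j * k * s0 * x)       # plane wave from the far-zone source
T = np.hanning(x.size)            # amplitude-weight window

def g_far(sx):
    """Far-zone beamformer: a windowed Fourier transform of U."""
    return np.sum(T * U * np.exp(-1j * k * sx * x))

# The spectrogram |g|^2 over steering directions peaks at the source.
spec = [abs(g_far(sx)) ** 2 for sx in np.linspace(-1, 1, 201)]
print(max(spec) <= abs(g_far(s0)) ** 2 + 1e-9)   # True
```

Steering exactly at the source makes every summand real and positive, so no other direction can exceed that output power.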

Beamformer applications instead typically select *T* to be a wide window to match the desired virtual aperture and assign the corresponding pixel value to the output power ${\left|g\right|}^{2}$. We can decompose the three cameras in Subsection 5B into such beamformers. First, we write ${P}_{\rho}^{\mathrm{R}}$ in Eq. (38) as the Hermitian product of two different beamformers, ${P}_{\rho}^{\mathrm{R}}={g}_{1}^{*}{g}_{2}$ [Eq. (42)], where ${g}_{1}$ and ${g}_{2}$ are given by Eq. (39) with windows *T* [Eqs. (43, 44)] that restrict the summation to the virtual aperture *A* and sensor array *D*, respectively. Next, by construction, the beamformer camera’s pixel value is the output power of the aperture beamformer, ${P}_{\rho}^{\mathrm{B}}={\left|{g}_{1}\right|}^{2}$. Finally, in the far zone, ${\mathbf{s}}^{n}\to {\mathbf{s}}^{0}$ in Eq. (37), so that ${P}^{\mathrm{R}}={\left({g}_{1}^{\infty}\right)}^{*}{g}_{2}^{\infty}$, where ${g}_{1}^{\infty}$ and ${g}_{2}^{\infty}$ are given by Eq. (41) with the windows *T* used in Eqs. (43, 44). In other words, the near-zone camera is the Hermitian product of two different beamformers and is equivalent to the far-zone camera in the far zone.

We interpret the role of each component beamformer from the derivation of Eq. (38). Beamformer ${g}_{1}^{*}$ aggregates power contributions across the aperture using measurements of the conjugate field ${U}^{*}$ on the aperture, while beamformer ${g}_{2}$ isolates power from the point of focus using all available measurements of the field *U*. In this manner, the tasks of aggregating and isolating power contributions are cleanly divided between the two beamformers, and each beamformer uses the measurements from those sensors appropriate to its task. In contrast, the beamformer camera uses the same set of sensors for both the power aggregation and isolation tasks, thereby limiting its ability to optimize over both tasks.

The near-zone camera achieves a new tradeoff between resolution and anisotropic sensitivity. We noted that the near-zone camera exhibits better resolution than the beamformer for the same virtual aperture (Fig. 6). This is not an entirely fair comparison because the near-zone camera is using sensor measurements outside the aperture, and indeed, a beamformer using the entire array would achieve comparable resolution. However, extending the aperture to the entire array results in a different image, as anisotropic responses are averaged over a wider aperture diameter. We interpret the near-zone camera’s behavior by computing the magnitude of its pixel values, $|{P}_{\rho}^{\mathrm{R}}|=|{g}_{1}||{g}_{2}|$: the full-array beamformer ${g}_{2}$ provides fine resolution, while the aperture-limited beamformer ${g}_{1}$ retains the anisotropic sensitivity of the virtual aperture.
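The factorization of the magnitude is immediate, since $|{g}_{1}^{*}{g}_{2}|=|{g}_{1}||{g}_{2}|$ for any two beamformer outputs; a toy check with synthetic measurements (all names ours):

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.standard_normal(64) + 1j * rng.standard_normal(64)  # measurements
w = np.exp(-1j * rng.uniform(0, 2 * np.pi, 64))             # steering phases

g1 = np.sum((U * w)[:16])   # beamformer restricted to a small "aperture"
g2 = np.sum(U * w)          # beamformer over the whole "array"

P = np.conj(g1) * g2        # pixel value as a Hermitian product
print(np.isclose(abs(P), abs(g1) * abs(g2)))   # True
```

Each factor can thus be optimized for its own task: the window of ${g}_{1}$ for anisotropic sensitivity and the window of ${g}_{2}$ for resolution.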

Image formation with alternative light fields uses the conjugate field and field measurements to aggregate and isolate power in different ways. In general, image pixel values do not neatly factor into the product of beamformers, as they do with the Rihaczek.

## 6. CONCLUDING REMARKS

We enable the use of existing incoherent imaging tools for coherent imaging applications by extending the light field to coherent radiation. We explain how to formulate, capture, and form images from quasi light fields. By synthesizing existing research in optics, quantum physics, and signal processing, we motivate quasi light fields, show how quasi light fields extend the traditional light field, and characterize the properties of different quasi light fields. We explain why capturing quasi light fields directly with intensity measurements is inherently limiting, and demonstrate via simulation how processing scalar field measurements in different ways leads to a rich set of energy localization tradeoffs. We show how coherent image formation using quasi light fields is complicated by an implicit far-zone (Fraunhofer) assumption and the fact that not all quasi light fields are constant along rays. We demonstrate via simulation that a pure light field representation is incapable of modeling near-zone diffraction effects, but that quasi light fields can be augmented with a distance parameter for greater near-zone imaging accuracy. We show how image formation using light fields generalizes the classic beamforming algorithm, allowing for new tradeoffs between resolution and anisotropic sensitivity.

Although we have assumed perfectly coherent radiation, tools from partial coherence theory (i) allow us to generalize our results and (ii) provide an alternative perspective on image formation. First, our results extend to broadband radiation of any state of partial coherence by replacing $U\left({\mathbf{r}}^{\mathrm{R}}\right){U}^{*}\left({\mathbf{r}}^{\mathrm{C}}\right)$ with the cross-spectral density $W({\mathbf{r}}^{\mathrm{R}},{\mathbf{r}}^{\mathrm{C}},\nu )$. *W* provides a statistical description of the radiation, indicating how light at two different positions ${\mathbf{r}}^{\mathrm{R}}$ and ${\mathbf{r}}^{\mathrm{C}}$ is correlated at each frequency *ν* [38]. Second, *W* itself may be propagated along rays in an approximate asymptotic sense [39, 40], which forms the basis of an entirely different framework for using rays for image formation, using the cross-spectral density instead of the light field as the core representation.

We present a model of coherent image formation that strikes a balance between utility and comprehensive predictive power. On the one hand, quasi light fields offer more options and tradeoffs than their traditional, incoherent counterpart. In this manner, the connection between quasi light fields and quasi-probability distributions in quantum physics reminds us of the potential benefits of forgoing a single familiar tool in favor of a multitude of useful yet less familiar ones. On the other hand, compared with Maxwell’s equations, quasi light fields are less versatile. Therefore, quasi light fields are attractive to researchers who desire more versatility than traditional energy-based methods, yet a more specialized model of image formation than Maxwell’s equations.

Quasi light fields illustrate the limitations of the simple definition of image formation that is ubiquitous in incoherent imaging. An image is the visualization of some underlying physical reality, and the energy emitted from a portion of a scene surface toward a virtual aperture is not a physically precise quantity when the radiation is coherent, according to classical electromagnetic wave theory. Perhaps a different image definition may prove more fundamental for coherent imaging, or perhaps a quantum optics viewpoint is required for precision. Although we have borrowed the mathematics from quantum physics, our entire discussion has been classical. Yet if we introduce quantum optics and the particle nature of light, we may unambiguously speak of the probability that a photon emitted from a portion of a scene surface is intercepted by a virtual aperture.

## ACKNOWLEDGMENTS

This work was supported, in part, by Microsoft Research, MIT Lincoln Laboratory, and the Focus Center Research Program (FCRP) Center for Circuit & System Solutions (C2S2) of Semiconductor Research Corporation.

**1. **M. Levoy and P. Hanrahan, “Light field rendering,” in Proceedings of ACM SIGGRAPH 96 (ACM, 1996), pp. 31–42. [CrossRef]

**2. **R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Tech. Rep. CTSR 2005-02, Stanford University, Calif. (2005).

**3. **W. Chun and O. S. Cossairt, “Data processing for three-dimensional displays,” United States Patent 7,525,541 (April 28, 2009).

**4. **R. Ziegler, S. Bucheli, L. Ahrenberg, M. Magnor, and M. Gross, “A bidirectional light field-hologram transform,” Comput. Graph. Forum **26**, 435–446 (2007). [CrossRef]

**5. **T. L. Szabo, *Diagnostic Ultrasound Imaging: Inside Out* (Elsevier, 2004).

**6. **M. J. Bastiaans, “Application of the Wigner distribution function in optics,” in *The Wigner Distribution—Theory and Applications in Signal Processing*, W. Mecklenbräuker and F. Hlawatsch, eds. (Elsevier Science B.V., 1997), pp. 375–426.

**7. **A. Walther, “Radiometry and coherence,” J. Opt. Soc. Am. **58**, 1256–1259 (1968). [CrossRef]

**8. **E. Wigner, “On the quantum correction for thermodynamic equilibrium,” Phys. Rev. **40**, 749–759 (1932). [CrossRef]

**9. **J. Ville, “Théorie et applications de la notion de signal analytique,” Cables Transm. **2A**, 61–74 (1948).

**10. **K. D. Stephan, “Radiometry before World War II: Measuring infrared and millimeter-wave radiation 1800–1925,” IEEE Antennas Propag. Mag. **47**, 28–37 (2005). [CrossRef]

**11. **S. Chandrasekhar, *Radiative Transfer* (Dover, 1960).

**12. **A. T. Friberg, G. S. Agarwal, J. T. Foley, and E. Wolf, “Statistical wave-theoretical derivation of the free-space transport equation of radiometry,” J. Opt. Soc. Am. B **9**, 1386–1393 (1992). [CrossRef]

**13. **P. Moon and G. Timoshenko, “The light field,” J. Math. Phys. **18**, 51–151 (1939). [Translation of A. Gershun, *The Light Field* (Moscow, 1936)].

**14. **M. Born and E. Wolf, *Principles of Optics*, 7th ed. (Cambridge Univ. Press, 1999).

**15. **A. Walther, “Radiometry and coherence,” J. Opt. Soc. Am. **63**, 1622–1623 (1973). [CrossRef]

**16. **E. Wolf, “Coherence and radiometry,” J. Opt. Soc. Am. **68**, 6–17 (1978). [CrossRef]

**17. **G. S. Agarwal, J. T. Foley, and E. Wolf, “The radiance and phase-space representations of the cross-spectral density operator,” Opt. Commun. **62**, 67–72 (1987). [CrossRef]

**18. **E. H. Adelson and J. R. Bergen, “The plenoptic function and the elements of early vision,” in *Computational Models of Visual Processing*, M. S. Landy and J. A. Movshon, eds. (MIT Press, 1991), pp. 3–20.

**19. **S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen, “The lumigraph,” in Proceedings of ACM SIGGRAPH 96 (ACM, 1996), pp. 43–54. [CrossRef]

**20. **B. Boashash, ed. *Time Frequency Signal Analysis and Processing* (Elsevier, 2003).

**21. **R. W. Boyd, *Radiometry and the Detection of Optical Radiation* (Wiley, 1983).

**22. **A. Adams and M. Levoy, “General linear cameras with finite aperture,” in Proc. Eurographics Symposium on Rendering (Eurographics, 2007).

**23. **J. W. Goodman, *Introduction to Fourier Optics* (McGraw-Hill, 1968).

**24. **A. T. Friberg, “On the existence of a radiance function for finite planar sources of arbitrary states of coherence,” J. Opt. Soc. Am. **69**, 192–198 (1979). [CrossRef]

**25. **P. Flandrin, *Time-Frequency/Time-Scale Analysis* (Academic, 1999).

**26. **D. J. Griffiths, *Introduction to Quantum Mechanics* (Pearson Education, 2005).

**27. **G. S. Agarwal and E. Wolf, “Calculus for functions of noncommuting operators and general phase-space methods in quantum mechanics. I. Mapping theorems and ordering of functions of noncommuting operators,” Phys. Rev. D **2**, 2161–2186 (1970). [CrossRef]

**28. **J. T. Foley and E. Wolf, “Radiometry as a short-wavelength limit of statistical wave theory with globally incoherent sources,” Opt. Commun. **55**, 236–241 (1985). [CrossRef]

**29. **J. R. Guerci, “Theory and application of covariance matrix tapers for robust adaptive beamforming,” IEEE Trans. Signal Process. **47**, 977–985 (1999). [CrossRef]

**30. **Z. Zhang and M. Levoy, “Wigner distributions and how they relate to the light field,” in Proceedings of ICCP 09 (IEEE, 2009).

**31. **J. G. Kirkwood, “Quantum statistics of almost classical assemblies,” Phys. Rev. **44**, 31–37 (1933). [CrossRef]

**32. **A. Rihaczek, “Signal energy distribution in time and frequency,” IEEE Trans. Inf. Theory **14**, 369–374 (1968). [CrossRef]

**33. **ZEMAX Development Corporation, Bellevue, Wash., Optical Design Program User’s Guide (2006).

**34. **M. A. Alonso, “Radiometry and wide-angle wave fields. I. Coherent fields in two dimensions,” J. Opt. Soc. Am. A **18**, 902–909 (2001). [CrossRef]

**35. **R. G. Littlejohn and R. Winston, “Corrections to classical radiometry,” J. Opt. Soc. Am. A **10**, 2024–2037 (1993). [CrossRef]

**36. **J. R. Shewell and E. Wolf, “Inverse diffraction and a new reciprocity theorem,” J. Opt. Soc. Am. **58**, 1596–1603 (1968). [CrossRef]

**37. **H. L. Van Trees, *Optimum Array Processing* (Wiley, 2002). [CrossRef]

**38. **L. Mandel and E. Wolf, *Optical Coherence and Quantum Optics* (Cambridge Univ. Press, 1995).

**39. **A. M. Zysk, P. S. Carney, and J. C. Schotland, “Eikonal method for calculation of coherence functions,” Phys. Rev. Lett. **95**, 043904 (2005). [CrossRef] [PubMed]

**40. **R. W. Schoonover, A. M. Zysk, P. S. Carney, J. C. Schotland, and E. Wolf, “Geometrical optics limit of stochastic electromagnetic fields,” Phys. Rev. A **77**, 043831 (2008). [CrossRef]