Optica Publishing Group

Imaging dark objects with intensity interferometry

Open Access

Abstract

We have developed a technique for imaging dark, i.e. non-radiating, objects via intensity interferometry measurements using a thermal light source in the background. The technique is based on encoding the dark object's profile into the spatial coherence of such light. We demonstrate image recovery using an adaptive, error-minimizing Gerchberg-Saxton algorithm in the case of a completely opaque object, and outline the steps for imaging purely refractive objects.

© 2014 Optical Society of America

1. Introduction

Intensity interferometry is a powerful technique for achieving high imaging resolution with limited optical apertures. It involves measuring the intensity fluctuations of light emitted by a luminous object of interest and constructing their correlation function. By virtue of the van Cittert-Zernike theorem, this correlation function encodes the Fourier transform of the source luminosity, whose reconstruction recovers an image of the source.

The angular resolution achieved in this type of imaging obeys the Rayleigh criterion. The advantage comes from the fact that a large array of detectors is much easier to implement than an equally large optical lens or mirror. In this respect, an intensity interferometry array may be compared to a synthetic-aperture field interferometer or telescope. However, whereas a synthetic-aperture instrument requires positioning of its elements with a precision of a fraction of the wavelength, for an intensity interferometer it is only a fraction of the coherence length.

The pioneering application of intensity-interferometric measurements in astronomy was demonstrated by Hanbury Brown and Twiss in the 1960s [1,2]. These measurements involved only a pair of detectors and were aimed at determining a star's angular size. Later, the technique evolved into actual imaging and has been considered for observing solar spots [3], tidal and rotational distortions, limb darkening [4-6] and other stellar phenomena [7-10].

The cost of these benefits is that only the magnitude of the Fourier transform of the source luminosity is captured, while the phase is lost. This denies astronomers the possibility of directly reconstructing the image by an inverse Fourier transform. This situation is fairly common in physics [11]. The two main techniques developed for phase recovery are the Cauchy-Riemann approach [6, 12] and the family of Gerchberg-Saxton algorithms [3, 13, 14]. Both techniques have proven efficient for reconstructing a luminous object from intensity interferometer data.

In this work we extend the error-reduction Gerchberg-Saxton method to imaging light-absorbing objects, and discuss the possibility of its application to imaging purely refractive (phase) objects. We will assume that such an object partially shadows a thermal light source, but that the shadow cannot be used for inferring the object's shape [15]. By imaging a dark absorbing object we mean reconstruction of the object's projection (the optical column density) along the line of sight; for a completely opaque object this amounts to finding its contour.

2. Imaging absorbing objects

Earlier [15] we considered a dark object placed between a thermal light source and an array of detectors. Relying on that analysis, we will assume that the object is characterized by an absorption coefficient 0 ≤ A(ρ⃗o) ≤ 1 and the source by a local intensity (luminosity) profile Is(ρ⃗), where ρ⃗ and ρ⃗o are transverse coordinates in the source and object planes, respectively. Defining the observable as the correlation function between the fluctuations of optical intensities I1 and I2 at two detector locations ρ⃗1 and ρ⃗2 in the detection plane,

$$C(\vec{\rho}_1,\vec{\rho}_2) \equiv \frac{1}{T}\int_{-T/2}^{T/2} \Delta I_1(t)\,\Delta I_2(t)\,dt, \tag{1}$$
we have shown that for a pair of detectors symmetric about the line of sight, i.e. such that ρ⃗1 + ρ⃗2 = 0 and ρ⃗d ≡ ρ⃗1 − ρ⃗2,
$$C(\vec{\rho}_d) \approx K\left|\,\mathcal{T}\!\left(\frac{k\vec{\rho}_d}{L+L_s}\right) - \frac{I_s(0)}{\beta^2}\,\mathcal{A}\!\left(\frac{k\vec{\rho}_d}{L}\right)\right|^2. \tag{2}$$
The constant K and the approximations underlying (2) have been discussed in [15]. For simplicity we will assume K = 1 in the following. In (2), k = 2π/λ, where λ is the optical wavelength; Ls and L are the distances from the source to the object and from the object to the detection plane, respectively; β ≡ 1 + Ls/L; and
$$\mathcal{A}(\vec{q}) \equiv \int d^2\rho\, A(\vec{\rho})\, e^{i\vec{q}\cdot\vec{\rho}}, \qquad \mathcal{T}(\vec{q}) \equiv \int d^2\rho\, I_s(\vec{\rho})\, e^{i\vec{q}\cdot\vec{\rho}}. \tag{3}$$

We see that observable (2) captures both the source and the object's images. Assuming that the former is well known (perhaps from intensity-interferometric imaging without the object), we would like to reconstruct the latter. This problem falls into the general category of phase recovery from an absolute-square Fourier transform measurement. To approach it numerically, we introduce three reciprocal pairs of grids: a and ã in the object plane, b and b̃ in the detection plane, and c and c̃ in the source plane. Each of these grids is related to its reciprocal by a discrete Fourier transform of an N0 × N0 array, e.g. a = 2π/(N0ã), etc. Furthermore, discretizing (2) leads to the following relations:

$$\tilde{a} = kb/L \quad\text{and}\quad \tilde{c} = kb/(L+L_s). \tag{4}$$

First, we use Eq. (2) to generate correlation arrays C(ρi,j) and C0(ρi,j) of size N0 × N0 with and without the object, respectively. By virtue of Eqs. (4), N0 = λL/(ab). These arrays represent a measurement with a detector array of spacing b/2. They are then truncated to the actual size of the detector array, Nd × Nd, which determines the new object grid a′ = λL/(Ndb) and sets the resolution limit for our image reconstruction.
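For concreteness, this grid bookkeeping can be sketched as follows. The wavelength, distances and detection-plane step follow the numerical example of this section; the object-plane step a is an assumed illustrative value:

```python
import math

# Discrete-grid bookkeeping for Eqs. (4). lam, L, L_s and b follow the
# numerical example of this section; the object-plane step a is assumed.
lam = 532e-9                  # optical wavelength [m]
L = 0.36                      # object-to-detector distance [m]
L_s = 0.36                    # source-to-object distance [m]
b = 9.3e-6                    # detection-plane grid step [m]
a = 2e-6                      # object-plane grid step [m] (assumed)

k = 2 * math.pi / lam
a_tilde = k * b / L           # Eq. (4): reciprocal object-plane step
c_tilde = k * b / (L + L_s)   # Eq. (4): reciprocal source-plane step

# A grid and its reciprocal are related through an N0 x N0 discrete
# Fourier transform, a = 2*pi/(N0*a_tilde), hence N0 = lam*L/(a*b).
N0 = 2 * math.pi / (a * a_tilde)
```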

Next, the Gerchberg-Saxton reconstruction takes place. Our algorithm follows the general guidelines of [13, 14], with a few important modifications. To more closely simulate an experimental procedure, we abandon our a priori analytical expression for 𝒯(q⃗) and restore it from the C0(ρi,j) “measurement”. As an initial guess for the object's shape we take a Gaussian function whose width is consistent with the total optical power absorbed by the object:

$$A(\vec{\rho}) = \exp\left\{-\frac{\pi\rho^2}{C_0(0)-C(0)}\right\}. \tag{5}$$
Following the Gerchberg-Saxton procedure, we then compute 𝒜 and substitute its amplitude with one derived from the “measurement” C(ρi,j), while retaining its phase. After the inverse Fourier transform this yields a new estimate for A, which we constrain based on sensible assumptions regarding the object. These assumptions are the following:
  1. A(ρ⃗) is real;
  2. A(ρ⃗) = 0 for ρ > ρMax, which means that the object is not too large;
  3. 0 ≤ A(ρ⃗) ≤ 1, which means that the object cannot absorb more than all, or less than none, of the incident light;
  4. A(ρ⃗) = 0 or 1, if the object is completely opaque.
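One Gerchberg-Saxton iteration with the Fourier-magnitude substitution and constraints 1-3 can be sketched as follows. This is a minimal illustration, not the authors' code; for simplicity it assumes the large-source limit (9), in which the measured correlation is proportional to |𝒜|², so that its square root serves directly as the Fourier magnitude:

```python
import numpy as np

def gs_step(A, C_meas, support, use_abs=False):
    """One Gerchberg-Saxton iteration for a dark (absorbing) object.

    A       : current real-valued estimate of the absorption profile
    C_meas  : measured correlation array, assumed proportional to
              |FT(A)|^2 as in Eq. (9), so sqrt(C_meas) is the magnitude
    support : boolean mask implementing constraint 2 (rho < rho_max)
    use_abs : constraint 1 via absolute value instead of real part
    """
    F = np.fft.fft2(A)
    # Fourier-domain step: impose the measured amplitude, keep the phase.
    F = np.sqrt(C_meas) * np.exp(1j * np.angle(F))
    A_new = np.fft.ifft2(F)
    # Constraint 1: the object is real (the text alternates these two).
    A_new = np.abs(A_new) if use_abs else A_new.real
    # Constraint 2: finite support.
    A_new = np.where(support, A_new, 0.0)
    # Constraint 3: the object absorbs between none and all of the light.
    return np.clip(A_new, 0.0, 1.0)
```

A consistent object is a fixed point of this update: feeding in the true profile together with its own correlation data returns the profile unchanged.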

The first constraint can be enforced by taking either the absolute value or the real part of A. Both methods work, as does their alternation, converging to the same result. We prefer the alternation method because it provides an indication of successful image reconstruction, as we will see in the following. It also sometimes leads to slightly faster convergence.

In the second constraint, the limit ρMax can be determined e.g. from low-resolution observations [14], from additional knowledge such as the object mass, etc. In the absence of such data, we set ρMax to six times the initial Gaussian width (5).

In the third constraint, setting an upper limit on the reconstructed function is a new requirement, specific to a dark object. Indeed, when reconstructing a light source one cannot be sure that it does not have very bright spots, so an upper limit is not applicable.

The last constraint is specific to completely opaque objects, in which case we are limited to reconstructing only the object's contour. When applied, it supersedes constraint 3. We have found that applying this constraint directly often disrupts the iteration process instead of helping it, especially for complex objects. A more subtle way of injecting the large amount of information implied by the last constraint into the reconstruction process is needed. This can be facilitated by modifying the input-output transfer function An−1(ρ⃗) → An(ρ⃗), which relates the previous and the next images after all other constraints have been applied. We have empirically studied several types of such transfer functions. One particularly successful example is shown in Fig. 1. As the reconstruction iterations continue, darker pixels are gradually driven towards unity (opaque), and lighter pixels towards zero (transparent).
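A piecewise-linear parametrization consistent with this description (an assumption on our part, not necessarily the exact curve of Fig. 1) is a line of adjustable slope through the fixed point (1/2, 1/2), clipped to [0, 1]; unit slope reproduces the identity:

```python
def opaque_transfer(a_prev, slope):
    """Transfer function A_{n-1} -> A_n for the 'opaque object' constraint.

    slope = 1 is the identity (the dashed line in Fig. 1); slope > 1
    drives pixel values above 1/2 toward 1 (opaque) and values below 1/2
    toward 0 (transparent). This piecewise-linear form is an assumed
    parametrization of the curve described in the text.
    """
    return min(1.0, max(0.0, 0.5 + slope * (a_prev - 0.5)))
```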


Fig. 1 A transfer function (solid line) applied to the object’s absorption profile between the n − 1th and nth Gerchberg-Saxton iterations gradually drives it towards a black-and-white solution. The dashed line represents the identity transfer function An = An−1.


The dynamics of the reconstruction process can be studied by monitoring the normalized variances

$$\sigma_n \equiv \frac{\sum_{i,j}\bigl|A_n(\vec{\rho}_{i,j})-A_{n-1}(\vec{\rho}_{i,j})\bigr|^2}{\sum_{i,j}\bigl|A_{n-1}(\vec{\rho}_{i,j})\bigr|^2}, \qquad \tilde{\sigma}_n \equiv \frac{\sum_{i,j}\bigl|\mathcal{A}_n(\vec{\rho}_{i,j})-\mathcal{A}_{n-1}(\vec{\rho}_{i,j})\bigr|^2}{\sum_{i,j}\bigl|\mathcal{A}_{n-1}(\vec{\rho}_{i,j})\bigr|^2}, \tag{6}$$
whose square roots give the fractional change of the object and of its Fourier transform at the n-th step. It is useful to separately calculate the part σn(o) of σn which is due to the “opaque object” constraint alone.
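In code, the normalized variance of Eq. (6) is a one-liner (a sketch; the same function applies to the image An and to its Fourier transform 𝒜n):

```python
import numpy as np

def normalized_variance(x_new, x_old):
    """Normalized variance of Eq. (6) between successive iterates.

    Its square root is the fractional change of the image (or of its
    Fourier transform) at the current Gerchberg-Saxton step.
    """
    return np.sum(np.abs(x_new - x_old) ** 2) / np.sum(np.abs(x_old) ** 2)
```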

Following a long-standing tradition in the field of ghost imaging, we demonstrate the performance of our modified Gerchberg-Saxton algorithm using the initials of our institution, JPL, as a test object. This numerical simulation is carried out in a typical optical lab setting. The object, shown in the inset of Fig. 2, was assumed to be 2 mm in length. It is placed midway between the source and the detector array, with L = Ls = 36 cm. The array has 1280×1024 pixels with 4.65 μm spacing (i.e., b = 9.3 μm), to match the parameters of a real CCD camera. The source is assumed to have a Gaussian distribution with Rs = 2 mm, radiating at λ = 532 nm. The geometrical shadow of the object can be found as

$$I(\vec{\rho}) = 1 - \frac{1}{\beta^2\pi R_s^2}\int d^2\xi\, I_s\!\left(\beta\vec{\xi} - (\beta-1)\vec{\rho}\right) A(\vec{\xi}). \tag{7}$$
This shadow, only some 14% deep, is completely featureless and is not useful for the image reconstruction.


Fig. 2 A logarithmic plot of the correlation function C(ρi,j) reveals the structure which encodes the shape of the object of interest. The solid contour lines correspond to C(ρi,j)/C0(0) = 2−12 (red) and C(ρi,j)/C0(0) = 2−13 (green). The object is shown on the inset.


The correlation function, on the other hand, has a rich structure which encodes the object image. In Fig. 2 we plot C(ρi,j) on a logarithmic scale in order to highlight the weak features. The outer green and inner red contour lines correspond to C(ρi,j)/C0(0) = 2−13 and C(ρi,j)/C0(0) = 2−12, respectively.

As a first step, we implemented a simple version of the Gerchberg-Saxton algorithm for reconstructing our test object. Constraint 4 was disabled by permanently setting the slope of the transfer function in Fig. 1 to unity. The real part of the object function was taken when applying constraint 1. Representative images from a 200-iteration cycle are shown in Fig. 3. These images have been rotated by 180° for convenience; the actual results of this particular run turned out upside down, which is a normal situation resulting from the ambiguity of the Gerchberg-Saxton algorithm.


Fig. 3 The results of a simple image recovery algorithm. The step numbers are given in the lower left corner of each frame.


The variances (6) for this process are shown in Fig. 4 (left). Their general behavior is similar to that observed [13] for luminous objects: alternating periods of rapid improvement and of relative stagnation. Convergence can be improved by alternating between taking the real part of the image and its absolute value as the first image constraint. The behavior of the variances and representative images for this scenario are shown in Fig. 4 (right). Here the same simulated input data have been used; however, the second quality transition occurs sooner, and the final image is better. Notice that the steps where the “absolute value” method was used in constraint 1 have larger variances than those where the “real part” method was used.


Fig. 4 Left: fractional change (given by square root of variances (6)) of the test object’s image and its Fourier transform in a simple image recovery corresponding to Fig. 3. Right: the same in the alternating algorithm; shown are images 40, 130 and 200.


So far we have not used our knowledge that the test object is in fact an opaque mask. To take advantage of this information, we increment the transfer-function slope (see Fig. 1) in small steps as soon as the evolution of both the image and its Fourier transform becomes stagnant, but decrement it if either one of the variances (6) starts to grow. The results of such an adaptive Gerchberg-Saxton algorithm with the same input data are shown in Fig. 5. It leads to even faster convergence with an even better result. In fact, the image in Fig. 5 stops changing appreciably after some 125 iterations, at which point it is practically indistinguishable from the original.
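The slope schedule just described can be sketched as follows; the step size and the stagnation threshold are illustrative values, not taken from the paper:

```python
def update_slope(slope, sigma, sigma_t, prev, step=0.05, stagnation=1e-6):
    """Adaptive transfer-function slope schedule, as described in the text.

    slope      : current transfer-function slope (1 = identity)
    sigma      : image variance at this iteration, Eq. (6)
    sigma_t    : Fourier-transform variance at this iteration, Eq. (6)
    prev       : (sigma, sigma_t) from the previous iteration, or None
    step, stagnation : illustrative tuning parameters (assumed values)

    Increment the slope when both variances stagnate; decrement it
    (never below the identity slope of 1) if either variance grows.
    """
    if prev is not None and (sigma > prev[0] or sigma_t > prev[1]):
        slope = max(1.0, slope - step)
    elif sigma < stagnation and sigma_t < stagnation:
        slope += step
    return slope
```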


Fig. 5 Left: fractional change of the test object’s image and its Fourier transform in the alternating adaptive algorithm; shown are images 10, 75, 125 and 200. Right: the transfer function slope.


Several interesting observations can be made regarding this result. First, the variance oscillations due to the alternating method of handling the complex-to-real conversion disappear as the image improves. This indicates that the image becomes purely real and suggests that with the adaptive algorithm, alternating the methods may be redundant. We have confirmed this in a separate run using only the “real part” method, which produced an equally good result. Second, the image and Fourier-transform variances become equal when a high-quality image has been obtained and no further progress is achieved. This may indicate that the process has entered a loop where the object- and Fourier-space constraints repeatedly reverse each other's effect. Third, most of the object variance is due to the “opaque” constraint, which is consistent with the image solutions being predominantly real and well confined in space.

An important question for practical intensity interferometry imaging is the algorithm's tolerance to additive and multiplicative noise. Additive noise is most important in the dark areas of Fig. 2, where the correlation observable C is small. This type of noise can be suppressed by applying a threshold; its detrimental effect is therefore limited by the effect of the threshold itself. To study this effect we repeated the reconstruction with the same correlation data thresholded at C(ρi,j)/C0(0) = 2−13 and C(ρi,j)/C0(0) = 2−12, which corresponds to discarding the data outside the green (outer) and red (inner) contour lines in Fig. 2, respectively. The restored images are shown in Fig. 6. These images cannot be improved with more iterations.
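The thresholding operation itself is straightforward (a sketch; `level` plays the role of the cuts 2⁻¹³ and 2⁻¹² quoted above):

```python
import numpy as np

def apply_threshold(C, C0_peak, level):
    """Suppress additive noise by discarding correlation values below a
    fixed fraction of C0(0). Setting level = 2**-13 or 2**-12 reproduces
    the cuts along the green and red contours of Fig. 2."""
    return np.where(C >= level * C0_peak, C, 0.0)
```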


Fig. 6 The image reconstruction results with the threshold set at 2−13 (left) and 2−12 (right).


Multiplicative noise was introduced by multiplying each pixel value of C(ρi,j) and C0(ρi,j) by a Gaussian random variable with unit mean and a variable width σnoise. The reconstruction process failed at σnoise = 0.01C0(0) but converged to a practically ideal image at σnoise = 0.001C0(0).

3. Speckles of darkness

The following relations need to be satisfied for successful image reconstruction:

$$b \ll \frac{\lambda L}{R_o} \ll bN_d, \tag{8}$$
where Ro is the object's mean radius. These requirements can be understood in terms of the lowest and highest spatial frequencies sampled by a given array of detectors. The lowest frequency is determined by the detector spacing b, and the highest by the full array size bNd. If the former is too high, the imaging field a′Nd is smaller than the object; if the latter is too low, the imaging pixel size a′ is greater than the object. The double strong inequality (8) may lead to an impractically large total number of detectors Nd². However, this difficulty may be circumvented by using sparse, non-uniformly distributed arrays such as those discussed in [5].
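The double inequality (8) is easy to check numerically. In the sketch below, the factor `margin` is an arbitrary stand-in for "much less than", and the object radius used in the check is an assumed value of order half the 2 mm test-object length:

```python
def sampling_ok(b, lam, L, R_o, N_d, margin=10.0):
    """Check the double inequality (8): b << lam*L/R_o << b*N_d.

    The characteristic size lam*L/R_o that the object casts in the
    detection plane must be comfortably larger than the detector
    spacing b and comfortably smaller than the array size b*N_d.
    The factor `margin` is an arbitrary stand-in for '<<'.
    """
    speckle = lam * L / R_o
    return speckle >= margin * b and b * N_d >= margin * speckle
```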

The middle part of inequality (8) can be interpreted as the speckle size the object would produce had it been emitting light. This interpretation becomes especially fruitful in the limit when the source angular size greatly exceeds the object angular size. In this case the detector grid b can be chosen large enough to under-sample the source speckle, but still small enough to satisfy the inequalities (8). Then the source term in (2) vanishes and we are left with

$$C(\vec{\rho}_d) \approx K'\left|\mathcal{A}\!\left(\frac{k\vec{\rho}_d}{L}\right)\right|^2, \tag{9}$$
where all constants have been absorbed into K′.

Correlation observable (9) is fully analogous to (2) in the absence of the object, both expressing the van Cittert-Zernike theorem. However, if the latter describes speckles of light emitted by the source, (9) might be said to describe the “speckles of darkness” cast by the object. It can also be viewed as a higher-order realization of Babinet's principle.

The large-source limit (9) has great practical advantages over the general case (2). One advantage is that the source does not need to be carefully characterized and its possible shape and power fluctuations are irrelevant. Furthermore, it becomes easy to switch from distance to angular variables in the object description, which eliminates the need to know L and Ls. On the other hand, information regarding displacement of the object from the observer’s line of sight to the source becomes inaccessible in this limit.

4. Imaging phase objects

Phase, or purely refractive, objects absorb no light from the source; their main optical effect is refraction. Such objects provide a useful model for many astrophysical phenomena related to gravitational lensing and microlensing [16, 17] and to interstellar phase screens due to cold gas clouds [18, 19]. They may also be useful for investigating remote atmospheric phenomena and for other applications. There is therefore a strong motivation for achieving high-resolution optical imaging of phase objects by means of intensity interferometry. However, to the best of our knowledge, this problem has not been discussed yet.

We approach this problem within the general framework discussed in [15]. However, the approximation we had adopted for an amplitude object's transfer function T(ρ⃗′),

$$T^*(\vec{\rho}\,')\,T(\vec{\rho}\,'') \approx |T(\vec{\rho}_s)|^2, \tag{10}$$
is not useful for a phase object. We have to retain one more term in the power expansion series, which yields
$$T^*(\vec{\rho}\,')\,T(\vec{\rho}\,'') = e^{i[\phi(\vec{\rho}\,'') - \phi(\vec{\rho}\,')]} \approx e^{i\vec{\rho}_d\cdot\vec{\nabla}\phi(\vec{\rho}_s)}. \tag{11}$$
In (10) and (11) we make our usual approximation that the speckle size the source casts on the object is much smaller than the object's features, and define ρ⃗d ≡ ρ⃗″ − ρ⃗′ and ρ⃗s ≡ (ρ⃗″ + ρ⃗′)/2. For the correlation observable this leads to
$$C(\vec{\rho}_d) \approx K\left|\int d^2\rho_s\, I_s\!\left[\beta\vec{\rho}_s + \frac{L_s}{k}\vec{\nabla}\phi(\vec{\rho}_s)\right] e^{i\frac{k}{L}\vec{\rho}_d\cdot\vec{\rho}_s}\right|^2. \tag{12}$$

We recognize (12) as a Fourier-transform of the source intensity distribution with a modified argument, which can be viewed as a generalized van Cittert - Zernike theorem. For a sufficiently smooth phase object it is reasonable to treat ϕ(ρ⃗s) as a polynomial with decreasing weights of higher-order terms. Remarkably, the zeroth-order term is removed by the gradient: the overall phase is not observable in this type of measurement, which is physically reasonable. The linear phase term (such as may be produced by an optical wedge) yields a constant argument shift in Is(ρ⃗) which amounts to an apparent shift of the object from the line of sight, or equivalently, a phase shift of its Fourier transform. Since our observable is phase-insensitive, this effect vanishes as well.

The next-order, quadratic, contribution can be exemplified by a thin lens with a focal distance f. For such a lens ∇⃗ϕ(ρ⃗s) = −ρ⃗sk/f, so it has the effect of a linear scaling of the intensity argument, βρ⃗s → (β − Ls/f)ρ⃗s, which can be viewed as a change of the effective propagation distance: Ls + L → Ls + L − LsL/f. In particular, when 1/L + 1/Ls = 1/f, this distance becomes zero. This of course corresponds to imaging of the source onto the detection plane, in which case we recover our initial assumption of δ-correlation of the thermal source field.
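As a quick numerical sanity check of this effective-distance argument (a sketch based on the thin-lens expression above):

```python
def effective_distance(L, L_s, f):
    """Effective propagation distance L_s + L - L_s*L/f produced by a
    thin lens of focal distance f between the source and the detectors."""
    return L_s + L - L_s * L / f

# At the imaging condition 1/L + 1/L_s = 1/f the effective distance
# vanishes and the source is imaged onto the detection plane.
L = L_s = 0.5                    # [m]
f_img = L * L_s / (L + L_s)      # = 0.25 m for L = L_s = 0.5 m
```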

Analysis of the higher-order phase terms is less straightforward. However, we can see that the problem of reconstructing a phase object reduces to recovering a source intensity distribution from the absolute square of its Fourier transform, and then inverting the result to retrieve the object signature. The following example is designed to demonstrate the first step, while the second step is intentionally made trivial.

Consider a thin lens with a focal distance f = 11.36 cm centered on the line of sight between the source and the detector array, so that L = Ls = 50 cm. Let the source have a Gaussian intensity distribution Is(ρ) = exp(−ρ²/Rs²) with Rs = 0.1 cm. As a phase object, we take a rectangular 0.118 cm by 0.25 cm piece of this lens centered at Δx = 0.293 cm from the line of sight, and remove the rest of the lens. Then, according to (12), the source argument is linearly scaled within the rectangle and unperturbed elsewhere. The resulting modified intensity is shown in Fig. 7 on the left.


Fig. 7 Left: numerically modeled intensity distribution of a Gaussian source modified by a rectangular phase object. Right: the same reconstructed by Gerchberg-Saxton algorithm after 5000 iterations. Side of each image is 2 cm, array size 680 × 680 pixels.


Our phase-object reconstruction algorithm is similar to the one we used for absorbing objects, except that now we abandon constraints 2 and 4, and in constraint 3 set the upper limit to the unperturbed intensity function rather than unity. This considerably weakens our set of object constraints, and the image reconstruction takes significantly longer. The reconstruction result obtained after 5000 iterations is shown in Fig. 7 on the right. While the image turned out inverted, it has adequate fidelity.

To characterize the phase object we use the reconstructed source function IG(ρ⃗) in the area G of the phase object (the dark rectangle in Fig. 7). In this example we assume the a priori knowledge that the object is a piece of a thin positive lens. Then we know that IG(ρ) = exp(−[ρ(β − Ls/f)]²/Rs²), and we can fit it for f. The fit yields fout = 11.37 cm with a standard deviation σf = 0.19 cm, in remarkable agreement with the input value f = 11.36 cm.
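The fitting step can be illustrated on ideal synthetic data. This sketch generates IG(ρ) = exp(−[ρ(β − Ls/f)]²/Rs²) directly rather than from a reconstructed image, then linearizes the Gaussian and solves for f; of the two roots of |β − Ls/f|, the one with Ls/f > β is chosen, matching this example:

```python
import numpy as np

# Section-4 example parameters.
L = L_s = 0.50            # [m]
beta = 1 + L_s / L        # = 2
R_s = 1e-3                # source radius, 0.1 cm [m]
f_true = 0.1136           # input focal distance [m]

# Ideal "reconstructed" intensity inside the object area G (synthetic,
# noise-free stand-in for the output of the phase-retrieval step).
rho = np.linspace(1e-4, 1.5e-3, 50)
I_G = np.exp(-(rho * (beta - L_s / f_true)) ** 2 / R_s ** 2)

# Linearize: -ln(I_G) = rho^2 (beta - L_s/f)^2 / R_s^2, fit the slope.
slope = np.polyfit(rho ** 2, -np.log(I_G), 1)[0]
scale = np.sqrt(slope) * R_s      # = |beta - L_s/f|
f_fit = L_s / (beta + scale)      # root with L_s/f > beta, as in the example
```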

5. Summary

A dark object back-lit by a thermal light source is a frequent situation in astronomy as well as in ground-based observations. Besides casting a shadow, the object modifies the spatial coherence of the source light in a way that encodes the object's optical opacity distribution. Considering geometries in which the shadow is uninformative, we have developed a modified, adaptive Gerchberg-Saxton algorithm that allows us to reconstruct this distribution, which amounts to high-resolution imaging of the object's projection along the line of sight. This technique can be implemented particularly efficiently in the large-source limit, which eliminates the need to know the source properties and the object-to-source and object-to-observer distances.

Furthermore, we considered purely refractive, or phase, objects that do not absorb light. We have shown that such objects also modify the source coherence function, according to a generalized van Cittert-Zernike theorem. The object signature in this case is expressed in the phase gradient which the object imparts on the transmitted light. Reconstructing this gradient requires, first, recovery of the underlying source intensity profile with a modified argument, and then inversion of this function to retrieve the object's phase gradient from the argument. Because the phase gradient is a vector, and because inverting the source function has multiple solutions, imaging a generic phase object is a much more complicated problem than imaging an amplitude object. We have demonstrated its solution with a special example designed to circumvent the second step, and were able to accurately recover an input parameter describing the phase object. A more general analysis of this problem requires further study and will be given elsewhere.

6. Acknowledgments

This work was funded by the NIAC program and carried out at the Jet Propulsion Laboratory, California Institute of Technology under a contract with the National Aeronautics and Space Administration.

References and links

1. R. Hanbury Brown and R. Q. Twiss, “Interferometry of the intensity fluctuations in light I. Basic theory: the correlation between photons in coherent beams of radiation,” Proc. R. Soc. London A 242, 300–324 (1957). [CrossRef]

2. R. Hanbury Brown and R. Q. Twiss, “Interferometry of the intensity fluctuations in light II. An experimental test of the theory for partially coherent light,” Proc. R. Soc. London A 243, 291–319 (1958). [CrossRef]

3. J. R. Fienup, “Reconstruction of an object from the modulus of its Fourier transform,” Opt. Lett. 3, 27–29 (1978). [CrossRef]   [PubMed]  

4. D. Dravins, S. LeBohec, H. Jensen, and P. D. Nunez, “Stellar intensity interferometry: Prospects for sub-milliarcsecond optical imaging,” New Astronomy Rev. 56, 143–167 (2012). [CrossRef]  

5. D. Dravins, S. LeBohec, H. Jensen, and P. D. Nunez, “Optical intensity interferometry with the Cherenkov Telescope Array,” Astroparticle Phys. 43, 331–347 (2013). [CrossRef]  

6. P. D. Nunez, R. Holmes, D. Kieda, J. Rou, and S. LeBohec, “Imaging submilliarcsecond stellar features with intensity interferometry using air Cherenkov telescope arrays,” Mon. Not. R. Astron. Soc. 424, 1006–1011 (2012). [CrossRef]  

7. I. Klein, M. Guelman, and S. G. Lipson, “Space-based intensity interferometer,” Appl. Opt. 46, 4237 (2007). [CrossRef]   [PubMed]  

8. D. Dravins and S. LeBohec, “Toward a diffraction-limited square-kilometer optical telescope: Digital revival of intensity interferometry,” Proc. SPIE 6986, 698609 (2008). [CrossRef]  

9. S. LeBohec, et al., “Stellar intensity interferometry: Experimental steps toward long-baseline observations,” Proc. SPIE 7734, 77341D (2010). [CrossRef]  

10. R. Holmes, B. Calef, D. Gerwe, and P. Crabtree, “Cramer-Rao bounds for intensity interferometry measurements,” Appl. Opt. 52, 5235–5246 (2013). [CrossRef]   [PubMed]  

11. M. V. Klibanov, P. E. Sacks, and A. V. Tikhonravov, “The phase retrieval problem,” Inverse Problems 11, 1–28 (1995). [CrossRef]  

12. R. B. Holmes and M. S. Belenkii, “Investigation of the Cauchy-Riemann equations for one-dimensional image recovery in intensity interferometry,” J. Opt. Soc. Am. A 21, 697–706 (2004). [CrossRef]

13. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982).

14. J. R. Fienup and A. M. Kowalczyk, “Phase retrieval for a complex-valued object by using a low-resolution image,” J. Opt. Soc. Am. A. 7, 450–458 (1990). [CrossRef]  

15. D. V. Strekalov, B. I. Erkmen, and N. Yu, “Intensity interferometry for observation of dark objects,” Phys. Rev. A 88, 053837 (2013). [CrossRef]  

16. J. Wambsganss, “Gravitational Lensing in Astronomy,” Living Rev. Relativity 1, 12 (1998). [CrossRef]  

17. M. Moniez, “Microlensing as a probe of the Galactic structure: 20 years of microlensing optical depth studies,” Gen. Relativ. Gravit. 42, 2047–2074 (2010).

18. M. Moniez, “Does transparent hidden matter generate optical scintillation?” Astron. Astrophys. 412, 105–120 (2003). [CrossRef]  

19. F. Habibi, M. Moniez, R. Ansari, and S. Rahvar, “Searching for Galactic hidden gas through interstellar scintillation: results from a test with the NTT-SOFI detector,” Astron. Astrophys. 525, A108 (2011). [CrossRef]  
