
Approximate Fourier phase information in the phase retrieval problem: what it gives and how to use it

Open Access

Abstract

This work evaluates the importance of approximate Fourier phase information in the phase retrieval problem. The main discovery is that a rough phase estimate (up to π/2 rad) allows the development of very efficient algorithms whose reconstruction time is an order of magnitude shorter than that of the current method of choice—the hybrid input–output (HIO) algorithm. Moreover, a heuristic explanation is provided of why continuous optimization methods like gradient descent or Newton-type algorithms fail when applied to the phase retrieval problem and how the approximate phase information can remedy this situation. Numerical simulations are presented to demonstrate the validity of our analysis and the success of our reconstruction method even in cases where the HIO algorithm fails, namely, complex-valued signals without tight support information.

© 2011 Optical Society of America

1. INTRODUCTION

The reconstruction of a signal from the magnitude of its Fourier transform (FT), also known as the phase retrieval problem, has long played a central role in certain branches of astronomy, crystallography, and optics [1]. The recent advent of nanotechnology has created a demand for extremely high-resolution imaging capable of visualizing objects at the nanometer scale. One of the most promising techniques for such visualization is coherent diffraction imaging, which relies directly on our ability to perform successful phase retrieval [2].

The prevailing iterative computational method for phase retrieval is the HIO algorithm introduced by Fienup [3]. This method can be viewed as an ingenious modification of the alternating projection technique due to Gerchberg and Saxton (GS) [4]. Despite its overwhelming success, HIO has several limitations that leave the quest for a new method open. For successful reconstruction the HIO algorithm requires a good estimate of the sought object's support, i.e., the locus of nonvanishing signal parts. Support information is especially crucial in the case of complex-valued signals [5]. Two more important drawbacks of HIO are a relatively slow convergence rate and a lack of flexibility that makes it difficult to incorporate additional information into the computational scheme. The speed of convergence is important in many applications; for example, in microscopy a real-time algorithm would have a clear advantage. The ability to incorporate additional information may have an even greater effect, ranging from a vast improvement in reconstruction speed to the very feasibility of successful reconstruction. The sources and nature of this additional information can vary: in some experiments a low-resolution image may be readily available; in others, we may know that the image is piecewise smooth, so we can add a regularization term like total variation (TV) [6] to improve the reconstruction quality from noisy measurements [7, 8]. Incorporating the TV minimization requirement into algorithms like GS and HIO seems to be impossible.

The present work revolves around the situation where a rough estimate of the Fourier phase is available. Such information can be obtained, for example, from rough interferometric measurements. Alternatively, it may be available from other sources (e.g., see the Discussion section in [8]). More examples of phase retrieval enhanced by additional information can be found in [9, 10].

Probably the most compelling approach to the above problems is to use continuous optimization techniques, such as Newton-type methods. These methods are known for their fast convergence and well-developed machinery that allows a wide gamut of prior information to be introduced into the computational scheme. Unfortunately, these optimization methods fail miserably when applied to the phase retrieval problem. For example, in [11] the author investigates the applicability of one of the most successful Newton-type methods—the Levenberg–Marquardt algorithm. According to the study: “… It is seen that the nonlinearity and number of local minima of the cost function increases dramatically with the size of the object array, making these methods of little practical use for sizes greater than 6×6”. Despite numerous observations of such failure, no analysis has been done of why these methods fail and under what conditions they may be applied successfully. In this work we give a simple heuristic explanation of why a very wide family of optimization methods, namely monotone line-search algorithms, fails when applied to the phase retrieval problem. Notwithstanding this failure in the general case, we argue that a rough phase estimate leads to an important property: local minima of a functional associated with the phase retrieval problem are likely to be global minima. Hence, chances are that any algorithm capable of finding a local minimum will successfully reconstruct the image. We also present numerical results that support our theoretical derivations. Using the new algorithms we were able to reconstruct signals that cannot be successfully reconstructed by the current method of choice—the HIO algorithm.

The rest of the paper is organized as follows. In Section 2 we provide a short review of current reconstruction methods. In Section 3 we explain why continuous optimization methods fail and how approximate phase information can change the situation. Based on this theoretical development we propose a reconstruction algorithm whose efficacy is confirmed by the simulation results in Section 4. Concluding remarks are given in Section 5.

2. PREVIOUS WORK

Let us start with the notation used throughout this paper. The signals (two-dimensional complex-valued images) will be denoted by lowercase letters, such as $x$ and $y$. Their counterparts in the Fourier domain will be denoted by the same letter with a hat, e.g., $\hat{x}$ and $\hat{y}$. The (unitary) Fourier transform operator is denoted by $F$, and we use the matrix–vector product notation to represent an application of the transform,

\[ \hat{x} = Fx . \]
The letter $z$ is reserved to represent the true (unknown) signal, with $\hat{z}$ being its Fourier transform. In iterative methods we use a superscript to designate the iteration number, e.g., $x^k$ is the approximation obtained after $k$ iterations. An element (in the object or Fourier domain) is enumerated by a subscript, e.g., $x_i$ or $\hat{x}_i$. Finally, to designate the phase of a complex number $x$, we use the angle symbol: $\angle(x)$. Using this notation, the classical phase retrieval problem reads: find $x$ such that $|Fx| = |\hat{z}|$ subject to certain constraints in the object domain. In this work we address a slightly different setup where Fourier phase estimates are available. Hence, the problem we solve here is: find $x$ such that $|Fx| = |\hat{z}|$ and $\alpha \le \angle(Fx) \le \beta$ subject to certain constraints in the object domain. Note that such an $x$ is not necessarily equal to $z$ because, even when the Fourier domain data is sufficiently “oversampled”, the reconstruction in the classical phase retrieval problem is not unique [12]. Moreover, it may not be unique even when Fourier phase estimates are available.

We next review current reconstruction methods. Fienup’s HIO algorithm [3] is based on alternating projections. Given a current estimate $x^k$, it is transformed into the Fourier domain (giving $\hat{x}^k$), where the correct magnitude is then imposed, resulting in $\hat{y}^k$ [see the discussion after Eq. (4) explaining the way $\hat{y}^k$ is obtained from $\hat{x}^k$]. Then, $\hat{y}^k$ is converted back to the object domain by the inverse Fourier transform, where a new estimate is obtained by the following rule:

\[ x_t^{k+1} = \begin{cases} y_t^k, & t \notin \mathcal{V}, \\ x_t^k - \gamma\, y_t^k, & t \in \mathcal{V}. \end{cases} \]
Here, $\mathcal{V}$ denotes the set of coordinates where $y^k$ violates the object domain constraints, and $\gamma$ is a damping factor. According to Fienup (the inventor of HIO), $\gamma$ is usually chosen anywhere between 0.5 and 1.0, as the performance of the algorithm is not highly sensitive to this constant [5]. Hence, we set $\gamma$ to 0.75 in all our experiments. A schematic description of one HIO step in the Fourier domain is shown in Fig. 1a. As already mentioned, the HIO algorithm is currently the most widely used method for phase retrieval. However, a new quasi-Newton optimization algorithm was presented in [7, 8] for the situation where the Fourier phase is known to lie within an interval $[\alpha, \beta]$. In this case, the true value $\hat{z}$ must lie on the arc $\hat{A}\hat{B}$, shown in bold in Fig. 1b. Instead of trying to impose that constraint directly, the authors suggested a two-stage approach. In the first stage the Fourier domain requirement was replaced with a convex one: find $x$ such that $Fx \in C$, where $C$ is the convex hull of the arc $\hat{A}\hat{B}$ [see Fig. 1b]. At the second stage the original Fourier magnitude constraint was reintroduced and the new problem formulation became: find $x$ such that $|Fx| = |\hat{z}|$ and $Fx \in C$. The authors demonstrated that this approach leads to a new reconstruction method whose convergence rate is orders of magnitude faster than that of the classical HIO method. These results were obtained on real-valued (nonnegative) signals, and the authors claimed that the reconstruction succeeded in virtually all cases where the phase uncertainty interval was below π rad. In Section 3 we try to explain this success and refine the previous claim: global convergence is expected when the phase uncertainty interval is limited by π/2 rad. The mismatch stems from the experimental setup in [7, 8], where the sought signals were real-valued and the phase bounds were drawn independently for each frequency, discarding the conjugate symmetry of the FT of real-valued signals. Hence, the effective phase uncertainty was, on average, half of the declared one.
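To fix ideas, one HIO iteration can be sketched in a few lines of NumPy. This is a minimal illustration of the correction and update described above [Eqs. (1)–(2)], not the code used in our experiments; the function violates, which marks the set $\mathcal{V}$ of constraint-violating coordinates, is an assumed user-supplied routine.

```python
# Minimal sketch of one HIO iteration (illustrative, not the authors' code).
import numpy as np

def hio_step(x, z_mag, violates, gamma=0.75):
    x_hat = np.fft.fft2(x)
    y_hat = z_mag * np.exp(1j * np.angle(x_hat))  # impose the measured magnitude, keep the phase
    y = np.fft.ifft2(y_hat)
    v = violates(y)                               # boolean mask of the set V
    x_new = y.copy()
    x_new[v] = x[v] - gamma * y[v]                # hybrid input-output update, Eq. (2)
    return x_new
```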

3. OPTIMIZATION METHODS: THE PROBLEM AND THE REMEDY

We next explain why classical Newton-type and gradient descent methods fail on the phase retrieval problem. In fact, we can address a very wide family of optimization methods—monotone line-search algorithms. All these algorithms share a common three-step template: find a descent direction, find a step length along that direction that sufficiently decreases the objective function value, and move to the new location. A descent direction is defined as a vector whose inner product with the gradient is negative; this guarantees that there exists a step along that direction that decreases the objective function value.

To be specific, we choose the most popular objective function for the discrepancy minimization in the Fourier domain:

\[ f(x) = \tfrac{1}{2} \big\| \, |Fx| - |\hat{z}| \, \big\|^2 . \]
The gradient of f(x) is given by
\[ \nabla f(x) = x - F^{-1}\big( |\hat{z}| \odot Fx \oslash |Fx| \big), \]
where $a \odot b$ and $a \oslash b$ denote the Hadamard (elementwise) product and quotient, respectively. It is interesting to note that the signal $F^{-1}\big( |\hat{z}| \odot Fx \oslash |Fx| \big)$ has a clear physical meaning: it is obtained from $x$ by substituting the (wrong) Fourier magnitude $|Fx|$ with the correct one, $|\hat{z}|$. Thus, it is nothing but the signal denoted by $y$ in Fig. 1a. However, the main observation about the gradient is that $\nabla f(x) = 0$ if and only if $|Fx| = |\hat{z}|$, i.e., if and only if $x$ is a solution. Of course, this is true only if there are no additional constraints. In practice, however, the optimization is done while respecting certain restrictions on $x$. The constraints are often implemented as penalty functions that augment the original functional. Hence, the augmented gradient may vanish even when $|Fx| \ne |\hat{z}|$. Also, we may find ourselves in a situation where no feasible descent direction exists. Such situations usually indicate a local minimum and make further progress by standard optimization methods impossible. In this discussion we are deliberately vague about the exact nature of the object domain constraints and their enforcement in the optimization process. We only assume that imposing these constraints on a signal estimate brings that estimate closer to the true signal—a natural assumption for monotone optimization.
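For completeness, Eqs. (3) and (4) translate directly into a short NumPy routine. The sketch below assumes a unitary FFT (norm="ortho"), adds a small constant to avoid division by zero where $|Fx|$ vanishes, and is given only for illustration.

```python
# Sketch of the objective of Eq. (3) and its gradient, Eq. (4) (illustrative).
import numpy as np

def objective_and_gradient(x, z_mag, eps=1e-12):
    x_hat = np.fft.fft2(x, norm="ortho")                       # unitary F
    f = 0.5 * np.sum((np.abs(x_hat) - z_mag) ** 2)             # Eq. (3)
    # y is x with the (wrong) magnitude |Fx| replaced by the correct one |z_hat|.
    y = np.fft.ifft2(z_mag * x_hat / (np.abs(x_hat) + eps), norm="ortho")
    grad = x - y                                               # Eq. (4)
    return f, grad
```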

Let us now consider a single element in the Fourier domain. Using our notation, the true value is $\hat{z}_i$, whose magnitude $|\hat{z}_i|$ is known and whose phase $\theta_i$ is unknown. We distinguish three possible scenarios. First, the Fourier magnitude of the current estimate is smaller than $|\hat{z}_i|$ and the phase error $\alpha_i$ is greater than π/2 rad. Second, the current Fourier magnitude is greater than $|\hat{z}_i|$ (the phase error is unimportant in this case). And, finally, the third possibility: the current estimated Fourier magnitude is less than $|\hat{z}_i|$ and the phase error $\alpha_i$ is less than π/2 rad. These scenarios are illustrated in Figs. 2a, 2b, 2c, respectively.

Recall also that, by Eq. (4), the gradient descent direction always takes us from $\hat{x}$ toward $\hat{y}$ [see Fig. 2]. Let us consider the first case. In the Fourier domain, when we move from $\hat{x}_i$ toward $\hat{y}_i$ we are actually moving away from the correct value $\hat{z}_i$ [see Fig. 2a]. Therefore, due to the unitarity of $F$, $x$ is pulled farther away from $z$. On the other hand, the object domain constraints pull us toward the correct value, as assumed before. Hence, the two forces may cancel each other, resulting in stagnation of the algorithm. In numerical tests this stagnation is observed in all but very tiny problems, i.e., whenever the number of unknown pixels exceeds a few dozen. More importantly, it happens at very early stages, long before the reconstruction algorithm gets close to the correct value. Hence, the results are usually worthless. In the second case, where the magnitude of the current estimate is greater than the correct magnitude $|\hat{z}_i|$, moving from $\hat{x}_i$ toward $\hat{y}_i$ necessarily brings us closer to the correct value $\hat{z}_i$ [see Fig. 2b]. The last case is more interesting. If the estimated Fourier magnitude is sufficiently smaller than the correct one, and the phase error $\alpha_i$ is less than π/2 rad, then moving along the gradient descent direction, i.e., from $\hat{x}_i$ toward $\hat{y}_i$, gets us closer to $\hat{z}_i$, so long as we do not pass the point $\hat{z}_i'$, which is the projection of $\hat{z}_i$ onto the line segment $[0, \hat{y}_i]$, as shown in Fig. 2c. Note that the fact that moving from $\hat{z}_i'$ toward $\hat{y}_i$ takes us farther away from the correct value $\hat{z}_i$ does not let us claim that any optimization algorithm will necessarily converge to a solution. However, in the discussion that follows we prove that the situation where an optimization algorithm gets stuck at some $x$ such that all $\hat{x}_i$ lie between $\hat{z}_i'$ and $\hat{y}_i$ is impossible. In fact, we argue below that in the situation where the Fourier phase errors are limited by π/2 rad, any local minimum is likely to be a global one. That is, any algorithm capable of finding a (constrained) local minimum can be expected to solve the phase retrieval problem in this case. To justify this claim we must be more specific about the constraints in the object domain. As will be evident from the argument below, our assumptions are very general and fit all commonly encountered cases.
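The three scenarios are easy to verify numerically on a single Fourier coefficient. The toy values below are ours and serve only to illustrate the geometry of Fig. 2: we take a small step from $\hat{x}_i$ toward $\hat{y}_i$ and check whether the distance to $\hat{z}_i$ decreases.

```python
# Toy check of the three scenarios of Fig. 2 (illustrative values only).
import numpy as np

def moves_closer(x_hat, z_hat, step=0.1):
    y_hat = np.abs(z_hat) * np.exp(1j * np.angle(x_hat))  # correct magnitude, current phase
    x_new = x_hat + step * (y_hat - x_hat)                 # small step toward y_hat
    return abs(x_new - z_hat) < abs(x_hat - z_hat)

z_hat = 1.0 + 0.0j                                  # true value: magnitude 1, phase 0
print(moves_closer(0.3 * np.exp(2.0j), z_hat))      # (a) small magnitude, phase error > pi/2: False
print(moves_closer(1.5 * np.exp(0.3j), z_hat))      # (b) magnitude too large: True
print(moves_closer(0.3 * np.exp(0.7j), z_hat))      # (c) small magnitude, phase error < pi/2: True
```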

Let us consider the most frequent object domain constraint: limited support information.

\[ x_o = 0, \quad o \in O, \]
where $O$ denotes the set of off-support locations, i.e., the locations where $z$ is known to be zero. Note that zero-padding is a special case of such support information. In certain situations the sought signal can be assumed to be real and nonnegative, in which case the above constraint can be extended by the nonnegativity requirement
\[ x_s \ge 0, \quad s \notin O . \]
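In code, both constraints amount to a simple elementwise projection. The sketch below is only to fix notation; off_support is an assumed boolean mask of the set $O$.

```python
# Sketch of the object-domain projections of Eqs. (5)-(6) (illustrative).
import numpy as np

def project_object_domain(x, off_support, nonnegative=False):
    x = x.copy()
    x[off_support] = 0                       # Eq. (5): x_o = 0 for o in O
    if nonnegative:
        x = np.maximum(x.real, 0.0)          # Eq. (6): x_s >= 0 on the support
    return x
```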
What is important for our discussion is that in both cases the set of all feasible signals, $Z$, is a convex set that contains a proper cone $K$; that is, if $x$ and $y$ are feasible, then $\lambda x + \gamma y$ is also feasible for any nonnegative scalars $\lambda, \gamma$. $Z$ contains $K$ but does not necessarily equal it; however, this is unimportant in the following discussion. Let us now consider the minimization of the objective function from Eq. (3) subject to the following two conditions: first, the set of object domain constraints is convex and contains the proper cone $K$; second, the phase error (of all elements) in the Fourier domain is bounded by π/2, i.e., $|\angle(\hat{x}) - \angle(\hat{z})| \le \pi/2 - \epsilon$ for some small positive $\epsilon$. The Fourier domain constraints define a convex set, which is a proper cone too (a Cartesian product of proper cones). Hence, the optimization in this case is done over a convex set. Assume now that the algorithm converges to some local minimum $x$ that is not a solution (global minimum). Following a basic theorem from constrained-optimization theory, we conclude that the following inequality must hold for any feasible point $w$ (see, e.g., [13]):
\[ \langle \nabla f(x), \, w - x \rangle \ge 0, \]
where $\langle \cdot, \cdot \rangle$ denotes the usual inner product. In words, this means that no feasible descent direction exists at the point $x$. Let us consider a (small) subset of all feasible points: $w = \lambda z + \gamma x$, where $\lambda, \gamma \ge 0$ (note that $x$ and $z$ are feasible by definition; hence, $x, z \in K$, which implies that $w \in K$). The choice of this subset of feasible directions is stipulated by the fact that the phase error of $w$ at every frequency is less than or equal to the phase error of $x$ (it is strictly smaller when $\lambda \ne 0$). With $w$ as above, Eq. (7) becomes
\[ \langle \nabla f(x), \, w - x \rangle = \lambda \langle \nabla f(x), z \rangle + (\gamma - 1) \langle \nabla f(x), x \rangle . \]
If $\langle \nabla f(x), x \rangle < 0$, we can set $\lambda = 0$, $\gamma = 2$ and obtain a feasible descent direction (this is, in effect, a scaling up of $x$). If $\langle \nabla f(x), x \rangle > 0$, we can set $\lambda = 0$, $\gamma = 0.5$ and, again, obtain a feasible descent direction (this is a scaling down of $x$). Hence, any local minimum must obey $\langle \nabla f(x), x \rangle = 0$ (we shall call such an $x$ optimally scaled). In this situation, the sign of Eq. (7) depends solely on the sign of the inner product $\langle \nabla f(x), z \rangle$. It is more convenient to consider this inner product in the Fourier domain. Because of the unitarity of $F$ we have
\[ \langle \nabla f(x), z \rangle = \langle F \nabla f(x), F z \rangle = \sum_i \big( |\hat{x}_i| - |\hat{z}_i| \big) \, |\hat{z}_i| \cos \alpha_i . \]
It is important to remember that the above formula is considered in the context of an optimally scaled x, that is,
\[ 0 = \langle \nabla f(x), x \rangle = \langle F \nabla f(x), F x \rangle = \sum_i \big( |\hat{x}_i| - |\hat{z}_i| \big) \, |\hat{x}_i| . \]
For the following discussion, it is convenient to consider Eqs. (10) and (12) as weighted sums of $(|\hat{x}_i| - |\hat{z}_i|)$ with weights $|\hat{z}_i| \cos \alpha_i$ [in Eq. (10)] and $|\hat{x}_i|$ [in Eq. (12)]. Note that, as long as $\alpha_i < \pi/2$, the weights are strictly positive in Eq. (10), and they are always nonnegative in Eq. (12). Thus, for example, it becomes obvious that an $x$ that belongs to the convex region $C$ [see Fig. 1b], as was required in the original algorithm of [7, 8], cannot be a local minimum unless $|\hat{x}| = |\hat{z}|$ (which makes it a global one), since this is the only way to obtain zero by summing nonpositive numbers with strictly positive weights. This explains the success of that algorithm. However, even without the restrictions on $|\hat{x}|$ used in the original algorithm, we can expect the sum in Eq. (10) to be negative. To understand why, let us split it into three disjoint sets of indices according to the relation between $|\hat{x}_i|$ and $|\hat{z}_i|$:
\[ \sum_i \big( |\hat{x}_i| - |\hat{z}_i| \big) |\hat{z}_i| \cos \alpha_i = \sum_{i \in B} \big( |\hat{x}_i| - |\hat{z}_i| \big) |\hat{z}_i| \cos \alpha_i + \sum_{i \in S} \big( |\hat{x}_i| - |\hat{z}_i| \big) |\hat{z}_i| \cos \alpha_i + \sum_{i \in A} \big( |\hat{x}_i| - |\hat{z}_i| \big) |\hat{z}_i| \cos \alpha_i , \]
where $B = \{\, i : |\hat{x}_i| > |\hat{z}_i| \,\}$, $S = \{\, i : |\hat{x}_i| \le |\hat{z}_i| \cos \alpha_i \,\}$, and $A = \{\, i : |\hat{z}_i| \cos \alpha_i < |\hat{x}_i| < |\hat{z}_i| \,\}$. Of course, there is also a fourth set of indices, where $|\hat{x}_i| = |\hat{z}_i|$; however, its contribution to the sum is zero. With this subdivision it is easy to compare the sum in Eq. (10) (for which a negative sign indicates the presence of a feasible descent direction) and the sum in Eq. (12) (which is zero). The weight in these weighted sums changes from $|\hat{x}_i|$ in Eq. (12) to $|\hat{z}_i| \cos \alpha_i$ in Eq. (10). Hence, for $i \in B$, $|\hat{x}_i| - |\hat{z}_i|$ is positive and its weight has decreased, thus pulling the total sum toward a negative value. For $i \in S$, $|\hat{x}_i| - |\hat{z}_i|$ is negative and its weight has increased, thus, again, contributing to the negativeness of the result. The only subset of indices that increases the sum is $A$. From our experience it is very unlikely to encounter a situation where the contribution of $i \in A$ outweighs the joint contribution of $i \in B$ and $i \in S$. Hence, it is very unlikely to get stuck in a local minimum with no descent direction. Note also that if the phase error at all frequencies of $x$ is strictly less than π/2 rad, then the sum in Eq. (10) must be zero for $x$ to be a local minimum, because if the direction toward $w$ is an ascent direction, one can simply reverse it to obtain a descent direction.
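The balance between the three index sets can also be probed numerically. The short experiment below uses random magnitudes and phase errors of our own choosing (not data from Section 4) and merely verifies that the three partial sums above add up to the full sum of Eq. (10).

```python
# Numerical check of the decomposition above with random illustrative data.
import numpy as np

rng = np.random.default_rng(0)
n = 1024
z_mag = rng.uniform(0.1, 1.0, n)                 # |z_hat_i|
x_mag = rng.uniform(0.0, 1.5, n)                 # |x_hat_i|
alpha = rng.uniform(0.0, np.pi / 2, n)           # phase errors alpha_i < pi/2

terms = (x_mag - z_mag) * z_mag * np.cos(alpha)  # summands of Eq. (10)
B = x_mag > z_mag
S = x_mag <= z_mag * np.cos(alpha)
A = ~B & ~S & (x_mag != z_mag)
print(np.isclose(terms.sum(), terms[B].sum() + terms[S].sum() + terms[A].sum()))
```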

This discussion provides a heuristic explanation of why a carefully designed optimization method can be expected to converge to a global minimum, i.e., to solve the phase retrieval problem, when the Fourier phase is known up to π/2 rad and the object domain constraints are given in the form of (possibly loose) support information.

4. SIMULATION RESULTS

The method was tested on many images with consistent results. Here, for demonstration purposes, we chose a natural image with many features so as to allow easy perception of the reconstruction quality by the naked eye. We demonstrate our results on two different cases: one with loose support information, that is, part of the image is zero and we do not know that a priori; the other with tight support. Both images are complex-valued and their original size is 128×128 pixels, padded with zeros to a size of 256×256 pixels. The intensity (squared magnitude) of the images (without the zero-padding) is shown in Fig. 3. All our experiments show that the phase distribution in the object domain does not affect the reconstruction. Therefore, the actual phase distribution was chosen to be random, to avoid any possible assumptions of smoothness.

We compared three reconstruction methods. First, a slight modification of the quasi-Newton method from our previous work [8]. The original method uses two stages to solve the problem. At the first stage it solves the convex problem

\[ \min_x \; d(Fx, C) \quad \text{subject to} \quad x_O = 0, \]
where d(u,C) denotes the Euclidean distance between a signal u and the convex set C [see Fig. 1b for the definition of C]; and O is the set of indices where the true signal z is known to be zero (off-support locations) as in Eq. (5). At the second stage the Fourier magnitude discrepancy is reintroduced, and the algorithm solves the following optimization problem:
\[ \min_x \; \big\| \, |Fx| - |\hat{z}| \, \big\|^2 \quad \text{subject to} \quad x_O = 0, \; Fx \in C . \]
The current modification of the above method uses only one stage. That is, we abandoned the first stage described above and solved the following optimization problem:
\[ \min_x \; \big\| \, |Fx| - |\hat{z}| \, \big\|^2 + \mu \, d(Fx, C) \quad \text{subject to} \quad x_O = 0, \; \alpha \le \angle(Fx) \le \beta . \]
This change was made following the observation that the phase bounds in the Fourier domain are the main reason for success. The additional restrictions on $|\hat{x}|$ used in the original method can be viewed as heuristic constraints added (with a smaller weight $\mu$) for the reasons given in the discussion that follows Eq. (12). This also made the choice of the initial $x$ straightforward: $x^0 = 0$. This method is referred to as “our method” in the comparisons and figures that follow. Second, we created the following phase-aware modification of the HIO algorithm (PA-HIO). When the Fourier phase is known to lie in the interval $[\alpha, \beta]$, PA-HIO’s correction in the Fourier domain forces the current estimate $\hat{x}$ to lie on the arc $\hat{A}\hat{B}$ [see Fig. 1b]; hence, $\hat{y}$ [see Fig. 1a] is the point closest to $\hat{x}$ that lies on the arc $\hat{A}\hat{B}$ (a minimal sketch of this correction is given below). The third algorithm is the classical HIO method without any alterations. In fact, we also tested a phase-aware modification of the GS algorithm; however, its results were consistently worse than those of PA-HIO. Hence, we omit them.
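The PA-HIO correction described above is, in essence, a projection of $\hat{x}$ onto the arc $\hat{A}\hat{B}$: the measured magnitude is imposed and the phase is clipped to the known interval. The sketch below assumes per-frequency bounds with $\beta - \alpha \le \pi$; the array names are ours.

```python
# Sketch of the PA-HIO Fourier-domain correction (illustrative).
import numpy as np

def pa_hio_correction(x_hat, z_mag, alpha, beta):
    mid = 0.5 * (alpha + beta)                    # midpoint of the phase interval
    half = 0.5 * (beta - alpha)                   # half-width of the interval
    delta = np.angle(x_hat * np.exp(-1j * mid))   # signed phase distance from the midpoint
    return z_mag * np.exp(1j * (mid + np.clip(delta, -half, half)))
```

In the HIO iteration, this correction simply replaces the magnitude-only correction; the object-domain update of Eq. (2) is left unchanged.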

We first demonstrate how the phase uncertainty interval affects our ability to reconstruct the images. A set of 51 uncertainty intervals was chosen in the range from 0 to 2.5 rad. For each interval of uncertainty, its endpoints were chosen at random so that the true phase was uniformly distributed inside it. We ran our optimization algorithm 100 times (each time generating new phase bounds), checking for each run whether it was successful or not. A run was considered successful if the error in the Fourier domain [as defined by Eq. (3)] was below $0.5 \times 10^{-6}$ after 1000 iterations. As is evident from Fig. 4, the reconstruction always succeeded as long as the phase uncertainty was below π/2 rad, in perfect agreement with our analysis. It is also evident that, for images with tight support information, successful reconstruction can be expected even for significantly rougher phase estimates. Moreover, the algorithm converges very fast: the above threshold is usually reached after 250–300 iterations for the loose-support image and after only 80 iterations for the image with tight support, as is apparent from Figs. 6, 7.

Next, we demonstrate in detail the reconstruction results in the case where the Fourier phase uncertainty interval is 1.57 rad. Note that without the phase bounds the HIO method cannot reconstruct images with loose support. Images with tight support information are usually reconstructed successfully, though they may undergo some trivial transformation, e.g., axes reversal. As is evident from Fig. 5, our methods (quasi-Newton and PA-HIO) produce very good visual quality. HIO, on the other hand, has problems with the loose-support image (as expected), but in the second case it seems to yield acceptable quality. However, visual assessment does not provide full insight into the quality of reconstruction and tells nothing about its speed. Quantitative results are given in Figs. 6, 7, from which it is evident that our method significantly outperforms HIO and PA-HIO in terms of speed.

It is important to point out two things before evaluating the quantitative results presented in Figs. 6, 7. First, all the methods presented are iterative by nature, and each of them uses two Fourier transforms per iteration. Hence, comparing the number of iterations is well justified and gives a good estimate of the reconstruction speed, since the Fourier transforms constitute the major computational burden. Second, it is obvious that images with loose support lead to nonunique solutions (due to possible shifts, for example). Hence, a small value of the objective function does not necessarily mean a small error in the object domain. This explains the results in Fig. 6. Another important observation is that the HIO and PA-HIO methods do not force the off-support areas (padding) to be zero. Hence, these methods may give a large error in the Fourier domain, while the error in the object domain (after we discard the off-support parts) may be small. This phenomenon is also evident in Figs. 6, 7.

5. CONCLUSIONS

A new analysis is introduced explaining why continuous optimization methods fail when applied to the phase retrieval problem. On the basis of this analysis we give a heuristic explanation of why local minima of a functional associated with the phase retrieval problem can be expected to be global ones when the Fourier phase error does not exceed π/2 rad. This, in turn, opens the door for continuous optimization methods whose rate of convergence and ability to incorporate additional information into the computational scheme significantly exceed those of HIO. We also present such an algorithm and demonstrate that its reconstruction speed is significantly faster than that of HIO, even when the phase constraints are also employed in the latter.

Fig. 1 Current reconstruction methods.

Fig. 2 Three possible scenarios (in the Fourier domain).

Fig. 3 Test images (object plane intensity).

Fig. 4 Reconstruction success rate of our method as a function of phase uncertainty interval.

Fig. 5 Reconstruction results after 1000 iterations with phase uncertainty interval of 1.57 rad. Top row, loose support; bottom row, tight support.

Fig. 6 Error behavior in the case of loose support.

Fig. 7 Error behavior in the case of tight support.

REFERENCES

1. R. P. Millane, “Phase retrieval in crystallography and optics,” J. Opt. Soc. Am. A 7, 394–411 (1990).

2. J. Miao, P. Charalambous, J. Kirz, and D. Sayre, “Extending the methodology of x-ray crystallography to allow imaging of micrometre-sized non-crystalline specimens,” Nature 400, 342–344 (1999).

3. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982).

4. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

5. J. R. Fienup, “Reconstruction of a complex-valued object from the modulus of its Fourier transform using a support constraint,” J. Opt. Soc. Am. A 4, 118–123 (1987).

6. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D 60, 259–268 (1992).

7. E. Osherovich, M. Zibulevsky, and I. Yavneh, “Signal reconstruction from the modulus of its Fourier transform,” Tech. Rep. CS-2009-09 (Technion—Israel Institute of Technology, 2008).

8. E. Osherovich, M. Zibulevsky, and I. Yavneh, “Fast reconstruction method for diffraction imaging,” in Advances in Visual Computing, Vol. 5876 of Lecture Notes in Computer Science (Springer, 2009), pp. 1063–1072.

9. E. Osherovich, M. Zibulevsky, and I. Yavneh, “Simultaneous deconvolution and phase retrieval from noisy data,” Tech. Rep. CS-2010-10 (Technion—Israel Institute of Technology, 2010).

10. E. Osherovich, M. Zibulevsky, and I. Yavneh, “Numerical solution of inverse problems in optics: phase retrieval, holography, deblurring, image reconstruction from its defocused versions, and combinations thereof,” Tech. Rep. CS-2010-15 (Technion—Israel Institute of Technology, 2010).

11. M. Nieto-Vesperinas, “A study of the performance of nonlinear least-square optimization methods in the problem of phase retrieval,” J. Mod. Opt. 33, 713–722 (1986).

12. M. Hayes, “The reconstruction of a multidimensional sequence from the phase or magnitude of its Fourier transform,” IEEE Trans. Acoust. Speech Signal Process. 30, 140–154 (1982).

13. D. P. Bertsekas, Nonlinear Programming, 2nd ed. (Athena Scientific, 1999).
