## Abstract

Lens-free microscopy multispectral acquisitions are processed with an inverse-problem approach: a multispectral total variation criterion is defined and minimized with the conjugate gradients method. Reconstruction results show that the method is efficient in recovering the phase image of densely packed cells.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

Lens-free microscopy [1] is a minimalist microscopy technique, which consists in acquiring diffraction patterns of a thin sample and computing (reconstructing) the optical properties of an object whose diffraction pattern would match the measurements. By its simple design, the system is cheap, robust and incubator-proof. It can easily provide videos with a large field of view. Moreover, biological samples can be imaged without staining [2].

Lens-free acquisition is however problematic. While a complete description of light requires complex-valued fields, only their absolute values (intensities) are measured with a lens-free setup featuring a basic sensor. This lack of information creates artifacts in reconstructed images known as twin images [3].

A first strategy to reduce such artifacts consists in using priors on the sample, such as non-negativity [4], or priors related to sparsity [3, 5]. Another strategy consists in enriching the measurement set by acquiring diffraction images at multiple object-to-detector distances [6,7], or at multiple wavelengths [2,8].

This article relies on both strategies, i.e. the use of an enriched measurement set (multispectral acquisitions) combined with physically valid priors on the unknown sample. Hence, we developed an algorithm that minimizes a multispectral total variation (MS-TV) criterion to reconstruct phase images that show shape similarities across the different acquisition wavelengths.

The paper is organized in three parts. The method part describes the models used (i.e. the optical field, its propagation and its measurement), the multispectral total variation criterion, the computation of its gradient and an overview of the reconstruction process. The material part describes the 3-chromatic RGB acquisition system, which sequentially acquires three diffraction images of biological samples. The results part shows an example of a reconstruction obtained from a culture of A549 mammalian cells; the reconstructed time-lapse movie of this cell culture (field of view of 29.4 *mm*^{2}, 3.7 days) is provided in supplementary material.

## 2. Method

Light is described as a complex scalar field and its propagation from one plane to another is obtained using a convolution kernel (*h*). Therefore, if *A*_{0} is the complex field after the sample, the field *A _{Z}* in the detector plane at the distance *Z* from the object is:

${A}_{Z}={A}_{0}*{h}_{Z}$ (1)

with *h* derived from Fresnel theory, but other propagation models such as the angular spectrum approach [9] could be employed with no difficulty. In the following, *Z* is assumed to be known. In practice, the focusing distance has to be estimated with a precision of approximately 10 *μm*; it can be evaluated by inspecting a Z-stack of back-propagated measurements or with an automated method as described in [10].

The propagation kernel is ${h}_{Z}^{\lambda}(\overrightarrow{r})=(1/i\lambda Z)\text{exp}\left(i\pi {\overrightarrow{r}}^{2}/\lambda Z\right)$, *i* being the imaginary unit. Note that the quantities *A _{Z}*, *A*_{0}, *h _{Z}*, as well as the measured intensity *I _{Z}*, the phase in the detector plane *φ _{Z}* and the objective function gradient ∇*∊* (that are introduced in the following), are functions of the wavelength *λ* and the spatial position *r⃗* in a plane. Hence, for example, the convolution of Eq. (1) can be explicitly written as:

${A}_{Z}^{\lambda}(\overrightarrow{r}\prime )=\int d\overrightarrow{r}\hspace{0.17em}{A}_{0}^{\lambda}(\overrightarrow{r}).{h}_{Z}^{\lambda}(\overrightarrow{r}\prime -\overrightarrow{r})$ (2)

where *r⃗* is the coordinate in the sample plane and *r⃗′* is the one in the detector plane. Comments about the spatial discretization of Eq. (1) and Eq. (2) and their computation using Fourier transforms can be found in Appendix A.
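As a concrete illustration, this convolution can be computed in Fourier space as detailed in Appendix A. The sketch below is a Python illustration (not the authors' Matlab code); the grid size, pitch and distance are taken from the text, and the transfer function exp(−*iπλZ*|*μ⃗*|^{2}) is the Fourier transform of the Fresnel kernel.

```python
import numpy as np

# Sketch of Eq. (1) computed in Fourier space (see Appendix A).
N, dx = 256, 1.67e-6        # grid size and detector pixel pitch (m)
lam, Z = 0.53e-6, 1.34e-3   # wavelength and sample-to-detector distance (m)

mu = np.fft.fftfreq(N, d=dx)   # spatial frequencies
H_Z = np.exp(-1j * np.pi * lam * Z * (mu[:, None]**2 + mu[None, :]**2))

def propagate(A, H):
    """Convolution with the Fresnel kernel, computed with FFTs."""
    return np.fft.ifft2(np.fft.fft2(A) * H)

# A pure-phase object (transparent sample) with a smooth phase bump
y, x = np.mgrid[:N, :N]
A0 = np.exp(1j * 0.5 * np.exp(-((x - N/2)**2 + (y - N/2)**2) / (2 * 20**2)))

A_Z = propagate(A0, H_Z)                # field in the detector plane, Eq. (1)
A0_back = propagate(A_Z, np.conj(H_Z))  # back-propagation with h_{-Z}
assert np.allclose(A0_back, A0)         # |H_Z| = 1: propagation is reversible
```

Note that propagation itself loses no information; it is the intensity-only measurement, introduced next, that creates the phase problem.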

The quantity measured by a standard detector is the intensity of the light field: *I _{Z}* = |*A _{Z}*|^{2}. At this stage, we see that the modulus of *A _{Z}* is known but its phase is undetermined. Whatever the phase map *φ _{Z}*, the field ${A}_{Z}=\sqrt{{I}_{Z}}.\text{exp}(i.{\phi}_{Z})$ and the corresponding field *A*_{0} at the sample plane, ${A}_{0}=\left(\sqrt{{I}_{Z}}.\text{exp}(i.{\phi}_{Z})\right)*{h}_{-Z}$, would perfectly match the data. The approach presented here consists in finding the phase field *φ _{Z}* so that *A*_{0}(*φ _{Z}*) minimizes the MS-TV criterion *∊*(*φ _{Z}*):

*∊*(*φ _{Z}*) = $\int d\overrightarrow{r}\hspace{0.17em}\sqrt{\varepsilon +{\sum}_{\lambda}\left({\left|\partial {A}_{0}^{\lambda}/\partial x\right|}^{2}+{\left|\partial {A}_{0}^{\lambda}/\partial y\right|}^{2}\right)}$ (3)

where *ε* is a small-valued coefficient (10^{−4}) used to make Eq. (3) differentiable and prevent division by 0 in the gradient derivation presented afterwards.
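A minimal numerical sketch can help fix ideas. The form below assumes the sum over wavelengths sits under a single square root, consistent with the *L*_{1}-norm description in the text; the forward differences match the discretization of Appendix A.

```python
import numpy as np

# Minimal sketch of the MS-TV criterion of Eq. (3): an L1 norm over the x- and
# y-derivatives of the object fields at all wavelengths, with the sum over
# wavelengths under a single square root and a small epsilon for smoothness.
eps = 1e-4

def ms_tv(A0_fields):
    """A0_fields: list of complex 2-D object fields, one per wavelength."""
    tv2 = eps
    for A0 in A0_fields:
        dAx = A0 - np.roll(A0, -1, axis=1)   # discrete d/dx (see Appendix A)
        dAy = A0 - np.roll(A0, -1, axis=0)   # discrete d/dy
        tv2 = tv2 + np.abs(dAx)**2 + np.abs(dAy)**2
    return np.sum(np.sqrt(tv2))

# A constant field has zero derivatives: the criterion reduces to N_pix*sqrt(eps)
fields = [np.ones((8, 8), complex) for _ in range(3)]
assert np.isclose(ms_tv(fields), 64 * np.sqrt(eps))
```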

MS-TV is an *L*_{1}-norm applied to the 2*N _{λ}* fields ($\partial {A}_{0}^{\lambda}/\partial x$ and $\partial {A}_{0}^{\lambda}/\partial y$), where *N _{λ}* is the number of wavelengths used (here *N _{λ}* = 3). A comment about the discretization of the operators *∂*/*∂x* and *∂*/*∂y* can be found in Appendix A. As illustrated in Fig. 1, the MS-TV criterion is lower when high values of these 2*N _{λ}* fields are colocalized. Combining this observation with the fact that the *L*_{1}-norm tends to promote sparsity, we see that minimization of the MS-TV criterion leads to results with similar sharp edges at all wavelengths. Contrary to [2, 8], which present another way to process multispectral acquisitions, our MS-TV criterion is not designed to produce sample fields *A*_{0} with the same values for different wavelengths (see Appendix D) but favors colocalization of edges across wavelengths.

The optimization is performed using the non-linear conjugate gradients iterative method as described by the following pseudocode:

- 1/ initialization: *k* ← 0, *D*^{(k)} ← 0, ${\phi}_{Z}^{(k)}\leftarrow 0$
- 2/ *k* ← *k* + 1
- 3/ computation of the gradient ∇*∊*^{(k)}, of the advancement direction *D*^{(k)} and of the advancement step *σ*^{(k)}
- 4/ phase update: ${\phi}_{Z}^{(k)}\leftarrow {\phi}_{Z}^{(k-1)}+{\sigma}^{(k)}.{D}^{(k)}$
- 5/ test of convergence; if not reached, go back to 2/

For step 3/, the gradient ∇*∊*^{(k)} is analytically computed as detailed in Appendix C. The advancement direction *D*^{(k)} is obtained by conjugating the gradient ∇*∊*^{(k)} with the previous direction of advancement *D*^{(k−1)}, using Hestenes-Stiefel formula [11]. The advancement step *σ*^{(k)} (a scalar) is obtained by developing the MS-TV criterion at order 2 in the advancement direction (Hessian computation) and by minimizing a second-order polynomial of variable *σ*^{(k)}. For the test of step 5/, we simply stop the algorithm when the number of iterations reaches 100.
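The structure of this loop can be illustrated on a toy quadratic objective, for which the Hestenes-Stiefel conjugation and the second-order step size take simple closed forms. This is a sketch of the optimization scheme only, not of the MS-TV gradient itself.

```python
import numpy as np

# Toy illustration of the nonlinear conjugate gradients loop (steps 1/-5/)
# on f(x) = 0.5 x^T A x - b^T x, whose gradient is A x - b.
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)        # symmetric positive definite Hessian
b = rng.standard_normal(5)

x = np.zeros(5)                    # 1/ initialization
g_prev, d = None, None
for k in range(100):               # 2/ iterate
    g = A @ x - b                  # 3/ gradient
    if np.linalg.norm(g) < 1e-12:
        break                      # 5/ convergence test
    if d is None:
        d = -g
    else:
        y = g - g_prev
        beta = (g @ y) / (d @ y)   # Hestenes-Stiefel conjugation
        d = -g + beta * d
    sigma = -(g @ d) / (d @ A @ d)  # step from the order-2 expansion
    x = x + sigma * d              # 4/ phase update analogue
    g_prev = g

assert np.allclose(x, np.linalg.solve(A, b), atol=1e-8)
```

In the article the Hessian product is not formed explicitly: the MS-TV criterion is expanded to order 2 along *D*^{(k)} and the resulting scalar second-order polynomial in *σ*^{(k)} is minimized.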

## 3. Material

Measurements were carried out with a Cytonote (Iprasense) lens-free video microscope (Fig. 2(b)), whose geometry is illustrated in Fig. 2(a). Workable diffraction measurements must exhibit fringes; therefore, the optical system is designed to provide partial coherence. The source is a 3-chromatic RGB LED, spatially filtered by a pinhole, illuminating a thin sample on a microscope slide at 50 *mm* from the source. A CMOS detector of 6.4 × 4.6 *mm*^{2} with 3840 × 2748 pixels (pixel pitch of 1.67 *μm*) is used to measure diffraction patterns at a distance of ≈ 1 *mm* from the sample. In the configuration used, the optical magnification is close to 1.

To observe living cells over a long period of time without significant disturbance, 1 frame per 5 minutes is the maximum sampling rate achievable on this setup. Indeed, due to the short cell-to-sensor distance, the heat generated by the detector during a frame acquisition is partially transferred to the biological sample. A faster acquisition rate would overheat the sample and could cause cell death. The spatial resolution limitations of the system are discussed in Appendix B, where we show that the limiting physical factor is the temporal coherence of the source.

## 4. Results

To illustrate the capabilities of the system, a video of the diffraction of a biological sample (A549 cells observed during 3.7 days, 1065 frames with a sampling of 1 frame per 5 minutes) was acquired. A typical diffraction measurement is presented on Fig. 3(d) and Fig. 3(g). Since such samples are transparent (non-absorbing), most of the contrast is expected to be due to phase variations.

An optimal distance *Z* (1340 *μm*) was first determined by visual inspection of back-propagations (${A}_{0}=\sqrt{{I}_{Z}}*{h}_{-Z}$) of the measurements for various *Z*. This distance is used for the entire set of 1065 frames. We then performed the reconstruction with the proposed method (100 iterations) using either the tri-chromatic acquisition or only one acquisition channel (red). An example of reconstruction obtained with the proposed method from a tri-chromatic acquisition is presented on Fig. 3(a), Fig. 3(b) and Fig. 3(e). The reconstruction time of one frame is about 40 seconds with a script running on Matlab R2017b and a desktop PC equipped with a Nvidia Titan XP graphics card. The reconstruction results for all 1065 frames of the acquisition set are presented as a movie in supplementary material (see Visualization 1). For comparison, the reconstruction obtained with the proposed method but using only the red-channel acquisition is presented on Fig. 3(c) and Fig. 3(f) (18 seconds runtime).

We note that the algorithm succeeds in retrieving sharp results with the lens-free setup even when the fringes of various cells are intermixed (Fig. 3(g)). Fig. 3(c) and Fig. 3(f) show that the proposed algorithm used with only one wavelength is efficient in recovering a valid phase image of the sample even in the difficult case of densely packed biological cells. Furthermore, the detailed image of the reconstructed phase (Fig. 3(e)) shows the benefit of using wavelength diversity, since results from tri-chromatic acquisitions are less impaired by twin-image residues (see arrows in Fig. 3(e) and (f), as well as the flatter area devoid of cells).

In Appendix D, reconstructions performed from the same set of acquisition data but using the standard multispectral method described in [8] are presented for comparison purposes. We show that the two methods provide results with the same spatial resolution; however, the MS-TV algorithm reduces twin-image residuals and artifacts on this example.

## 5. Conclusion

In this work, we developed an algorithm for processing multispectral lens-free microscopy acquisitions. This technique is well adapted to visualizing transparent samples such as unstained cell cultures since it relies on acquisitions of diffraction images. The algorithm is based on the definition of a novel multispectral total variation criterion. Its minimization is performed through a standard non-linear conjugate gradients method, which uses the analytical computation of the criterion gradient also presented in the paper.

Acquisitions from a densely packed culture of A549 mammalian cells are presented. In such a case, algorithms based on the determination of the support of the object would have failed, since the latter occupies most of the field of view. On the contrary, the proposed algorithm is able to reconstruct valid phase images of the sample over a wide field of view. Detailed inspection of the reconstructions shows that processing multispectral acquisitions is advantageous compared to single-channel acquisitions, since it provides results with fewer artifacts.

## Appendix A: Comments on discretization and the Fourier transform

For the sake of simplifying the text, fields are written as functions of the continuous spatial variable *r⃗*, but, since data acquisitions are performed on discrete grids corresponding to the detector sampling (in the example, a grid of size 3840 × 2748 with a pixel pitch of 1.67 *μm*), actual computations are discrete operations.

Integrals such as the one of Eq. (2) are to be read as discrete sums; for example, the integral of a field *f* is:

$\int d\overrightarrow{r}\hspace{0.17em}f(\overrightarrow{r})={\sum}_{i,j}\hspace{0.17em}f(i,j)$ (4)

*i* and *j* being the indexes of the matrix *f*.

For the same reason, partial derivatives, as appearing in Eq. (3), are discretized. For example, the partial derivative with respect to *x* of a field *f* is:

$\left(\partial f/\partial x\right)(i,j)={\sum}_{k}{S}_{xk}\hspace{0.17em}f(i+k,j)$ (5)

with *S*_{x0} = 1 and *S*_{x1} = −1.
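This stencil amounts to a forward difference, which can be sketched with a circular shift (periodic boundaries, consistent with FFT-based computations):

```python
import numpy as np

# The stencil S_x0 = 1, S_x1 = -1 is a forward difference along one axis,
# here implemented with periodic (circular) boundary conditions.
def ddx(f):
    return f - np.roll(f, -1, axis=1)   # f(i) - f(i+1) along the x axis

f = np.tile(np.arange(6.0), (4, 1))     # linear ramp along x
d = ddx(f)
assert np.all(d[:, :-1] == -1.0)        # constant slope away from the wrap
assert np.all(d[:, -1] == 5.0)          # wrap-around column: f(N-1) - f(0)
```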

For computational performance reasons, operations involving convolution with the Fresnel kernel, as appearing in Eq. (1), are performed in Fourier space (using the Fast Fourier Transform) as described in Eq. (5) of reference [12]. We utilize the fact that the Fourier transform *h̃ _{Z}* of *h _{Z}* is:

${\tilde{h}}_{Z}^{\lambda}(\overrightarrow{\mu})=\text{exp}\left(-i\pi \lambda Z{\overrightarrow{\mu}}^{2}\right)$ (6)

where the Fourier transform of a field *f* is defined by *f̃*(*μ⃗*) = ∫ *dr⃗* *f*(*r⃗*) exp(−2*iπ* *μ⃗*.*r⃗*). This way of computing Eq. (1) is valid as long as the Nyquist theorem is not violated (see Eq. 51 of [13]); this is the case in our experimental conditions since *Z* < *N*Δ*x*^{2}/*λ*, where *Z* is the sample-to-detector distance (1340 *μm*), Δ*x* the detector pitch (1.67 *μm*), *N* the number of pixels in the smallest dimension of the detector (*N* = 2748) and *λ* is around 0.5 *μm*.
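The numbers quoted can be checked directly; the bound *N*Δ*x*^{2}/*λ* evaluates to about 15.3 *mm*, an order of magnitude above the 1.34 *mm* working distance:

```python
# Arithmetic check of the sampling condition Z < N * dx**2 / lambda
# with the values quoted in the text.
Z = 1340e-6     # sample-to-detector distance (m)
dx = 1.67e-6    # detector pixel pitch (m)
N = 2748        # pixels along the smallest detector dimension
lam = 0.5e-6    # wavelength (m)

bound = N * dx**2 / lam
assert Z < bound                  # Nyquist condition satisfied
assert 0.0150 < bound < 0.0156    # bound is about 15.3 mm
```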

## Appendix B: Comments on system resolution due to partial coherence

A coherent wave formulation is used throughout this article, although the light source of the system is an RGB LED, which is only partially coherent. This lack of coherence leads to a decrease of the spatial resolution of the system and may result in specific artifacts due to modeling errors.

The three main factors that limit the spatial resolution are the detector pixel pitch and the temporal and spatial coherence of the source, as discussed in [14]. Using Eq. (4), Eq. (5), Eq. (6) and Eq. (7) of this reference, we compute the half-pitch resolution Δ*x* associated with each of these three limiting factors: we get Δ*x* = 1.67 *μm* due to the detector pixel pitch, Δ*x* = 1.12 *μm* due to the source spatial coherence and Δ*x* = 2.40 *μm* due to the source temporal coherence (results obtained with the values *λ* = 0.5 *μm*, source-sample distance = 50 *mm*, sample-detector distance = 1340 *μm*, source size = 50 *μm*, source spectral width = 20 *nm*). Therefore, the limiting factor of our system is the temporal coherence of the source, and we expect a complete loss of contrast for lines with a width around 2.40 *μm* on phase test targets.

Such a loss of contrast is observed on a standard USAF1951 phase test target (SiO_{2} material, 320 *nm*-height pillars) as shown in Fig. 4. The algorithm described in the present article was applied to a tri-chromatic acquisition of this test target; group 6 is perfectly well contrasted whereas the lines of group 7 are gradually less visible, with a complete loss of contrast for element 4 of group 7. As experimentally evaluated on this phase test target, the system half-pitch resolution is around 2.8 *μm*, which is in good agreement with the theoretical value.

The main artifact of the reconstruction is the non-homogeneity of the recovered phase values visible on large structures (such as the biggest square in Fig. 4), which is caused by the fact that intensity measurements are not sensitive to the low spatial frequencies of phase samples.

## Appendix C: Derivation of the gradient of the problem

The goal of this appendix is to give mathematical details on how the gradient of the objective function *∊* is calculated. As explained in section 2, for any detector phase *φ _{Z}*, a multispectral TV criterion *∊*(*φ _{Z}*) of the reconstructed object field *A*_{0} can be computed. The chain of operations linking *φ _{Z}* and *∊* is the following (Eq. 7–10):

${A}_{Z}^{\lambda}=\sqrt{{I}_{Z}^{\lambda}}.\text{exp}\left(i.{\phi}_{Z}^{\lambda}\right)$ (7)

${A}_{0}^{\lambda}={A}_{Z}^{\lambda}*{h}_{-Z}^{\lambda}$ (8)

${g}^{\lambda}={\left|\partial {A}_{0}^{\lambda}/\partial x\right|}^{2}+{\left|\partial {A}_{0}^{\lambda}/\partial y\right|}^{2}$ (9)

*∊* = $\int d\overrightarrow{r}\hspace{0.17em}\sqrt{\varepsilon +{\sum}_{\lambda}{g}^{\lambda}}$ (10)

To obtain the gradient of *∊*, we use the notion of differential. If we have a differentiable function *f* : *A* → *B*, for every *a* and *h* of *A*, there exists a linear function *D _{a} f*, called the differential of *f* around *a*, such that:

$f(a+h)=f(a)+{D}_{a}f(h)+o\left(\Vert h\Vert \right)$ (11)

In the following, the point *a* is implied, and the "progress" *h* (noted *δa*) of the variable and the differential are both noted with the symbol *δ*:

$\delta f={D}_{a}f(\delta a)$ (12)

Note that this syntax can be used for a scalar *a* or a field such as *φ _{Z}*. Since *∊* is a scalar which depends on *φ _{Z}*, a field of indexes *λ* and spatial coordinates *r⃗*, the differential of *∊* is a linear form that can be written as:

*δ∊* = ${\sum}_{\lambda}\int d\overrightarrow{r}\hspace{0.17em}\nabla {\in}^{\lambda}(\overrightarrow{r}).\delta {\phi}_{Z}^{\lambda}(\overrightarrow{r})$ (13)

where ∇*∊* is called the gradient and has the same dimension as *φ _{Z}*, the variable of *∊*.

To derive its analytical formula, let us differentiate Eq. 7–10 to obtain an equation of the shape of Eq. 13. Knowing that $\delta \sqrt{\epsilon +a}=1/2.\delta a/\sqrt{\epsilon +a}$, that *δ*|*a*|^{2} = 2*Re*(*a*.*δa*^{*}) (*Re* being the "real part" operator) and that *δ∂f*/*∂x* = *∂δf*/*∂x*, Eq. 13 becomes:

*δ∊* = ${\sum}_{\lambda}\int d\overrightarrow{r}\hspace{0.17em}Re\left(\frac{\left(\partial {A}_{0}^{\lambda}/\partial x\right).{\left(\partial \delta {A}_{0}^{\lambda}/\partial x\right)}^{*}+\left(\partial {A}_{0}^{\lambda}/\partial y\right).{\left(\partial \delta {A}_{0}^{\lambda}/\partial y\right)}^{*}}{\sqrt{\varepsilon +{\sum}_{\lambda \prime}{g}^{\lambda \prime}}}\right)$ (14)

with *δA*_{0} = *δA _{Z}* * *h*_{−Z} and *δA _{Z}* = *A _{Z}*.*i*.*δφ _{Z}*. Integrating by parts to move the derivatives off *δA*_{0}, we introduce:

${f}^{\lambda}=-\frac{\partial}{\partial x}\left(\frac{\partial {A}_{0}^{\lambda}/\partial x}{\sqrt{\varepsilon +{\sum}_{\lambda \prime}{g}^{\lambda \prime}}}\right)-\frac{\partial}{\partial y}\left(\frac{\partial {A}_{0}^{\lambda}/\partial y}{\sqrt{\varepsilon +{\sum}_{\lambda \prime}{g}^{\lambda \prime}}}\right)$ (15)

Hence:

*δ∊* = ${\sum}_{\lambda}\int d\overrightarrow{r}\hspace{0.17em}Re\left({f}^{\lambda}.{\left(\left({A}_{Z}^{\lambda}.i.\delta {\phi}_{Z}^{\lambda}\right)*{h}_{-Z}\right)}^{*}\right)$ (16)

For any fields *f* and *g*, $\int d\overrightarrow{r}\hspace{0.17em}f.{g}^{*}=\int d\overrightarrow{\mu}\hspace{0.17em}\tilde{f}.{\tilde{g}}^{*}$ (Parseval theorem). By taking $g=\left({A}_{Z}^{\lambda}.\delta {\phi}_{Z}^{\lambda}\right)*{h}_{-Z}$, we get: $\int d\overrightarrow{r}\hspace{0.17em}f.{\left(\left({A}_{Z}^{\lambda}.\delta {\phi}_{Z}^{\lambda}\right)*{h}_{-Z}\right)}^{*}=\int d\overrightarrow{\mu}.\tilde{f}.{\left(\widetilde{{A}_{Z}^{\lambda}.\delta {\phi}_{Z}^{\lambda}}\right)}^{*}.{\tilde{h}}_{-Z}^{*}=\int d\overrightarrow{\mu}.\tilde{f}.{\tilde{h}}_{Z}.{\left(\widetilde{{A}_{Z}^{\lambda}.\delta {\phi}_{Z}^{\lambda}}\right)}^{*}=\int d\overrightarrow{r}\left(f*{h}_{Z}\right).{\left({A}_{Z}^{\lambda}.\delta {\phi}_{Z}^{\lambda}\right)}^{*}$, where we used the fact that ${\tilde{h}}_{-Z}^{*}={\tilde{h}}_{Z}$ (see Eq. 6). Considering also that *Re*(*i f*) = −*Im*(*f*), *Im* being the "imaginary part" operator, Eq. 16 takes the shape of Eq. 13 with:

∇*∊*^{λ}(*r⃗*) = $Im\left(\left({f}^{\lambda}*{h}_{Z}^{\lambda}\right)(\overrightarrow{r}).{A}_{Z}^{\lambda}{(\overrightarrow{r})}^{*}\right)$ (17)
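A useful sanity check of such an analytical gradient is to compare it against finite differences of the criterion on a small random problem. The sketch below (a Python illustration, not the authors' Matlab code) implements the chain of Eq. 7–10 with FFT-based propagation and circular finite differences, and tests a gradient of the form *Im*((*f^{λ}* * *h _{Z}*).*A _{Z}^{λ*}*) with *f^{λ}* assembled from the adjoint discrete derivatives, consistent with the derivation above. All sizes and values are arbitrary test parameters.

```python
import numpy as np

# Finite-difference check of the analytical MS-TV gradient on a tiny problem.
rng = np.random.default_rng(0)
N, Z, dx, eps = 16, 1.34e-3, 1.67e-6, 1e-4
lams = [0.45e-6, 0.53e-6, 0.62e-6]

mu = np.fft.fftfreq(N, d=dx)
MU2 = mu[:, None]**2 + mu[None, :]**2
H = [np.exp(-1j * np.pi * lam * Z * MU2) for lam in lams]   # FT of h_Z
I_Z = [rng.random((N, N)) + 0.5 for _ in lams]              # fake intensities

conv = lambda f, Hk: np.fft.ifft2(np.fft.fft2(f) * Hk)
d_x = lambda f: f - np.roll(f, -1, axis=1)   # forward difference
d_y = lambda f: f - np.roll(f, -1, axis=0)
d_xT = lambda u: u - np.roll(u, 1, axis=1)   # its adjoint (circular)
d_yT = lambda u: u - np.roll(u, 1, axis=0)

def fields(phi):
    AZ = [np.sqrt(I) * np.exp(1j * p) for I, p in zip(I_Z, phi)]   # Eq. 7
    A0 = [conv(a, np.conj(h)) for a, h in zip(AZ, H)]              # Eq. 8
    s = np.sqrt(eps + sum(np.abs(d_x(a))**2 + np.abs(d_y(a))**2 for a in A0))
    return AZ, A0, s

def criterion(phi):
    return fields(phi)[2].sum()                                    # Eq. 10

def gradient(phi):
    AZ, A0, s = fields(phi)
    # f_lambda is the (adjoint) divergence of the normalized derivatives
    return [np.imag(conv(d_xT(d_x(a) / s) + d_yT(d_y(a) / s), h) * np.conj(az))
            for a, h, az in zip(A0, H, AZ)]

phi = [0.1 * rng.standard_normal((N, N)) for _ in lams]
g = gradient(phi)
h = 1e-5
for l, (i, j) in [(0, (3, 5)), (2, (10, 2))]:
    phip = [p.copy() for p in phi]; phip[l][i, j] += h
    phim = [p.copy() for p in phi]; phim[l][i, j] -= h
    num = (criterion(phip) - criterion(phim)) / (2 * h)
    assert abs(num - g[l][i, j]) < 1e-4 * max(1.0, abs(num))
```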

## Appendix D: Standard multispectral algorithm

A standard algorithm [8] used for dealing with multi-wavelength lens-free acquisitions consists in:

- 1/ back-propagating the field from the detector plane to the object plane,
- 2/ averaging over wavelengths in the object plane,
- 3/ propagating the result to the detector plane,
- 4/ replacing the modulus of the field at the detector plane by the modulus acquired,
- 5/ going back to 1/ until convergence.
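Steps 1/–5/ can be sketched in Python under the FFT-based propagation of Appendix A; the function name, the fixed iteration count and the list-based interface below are illustrative choices, not taken from [8].

```python
import numpy as np

# Sketch of the standard multispectral algorithm (steps 1/-5/ above).
# I_list: measured intensities per wavelength; H_list: Fresnel transfer
# functions per wavelength (propagation from object to detector plane).
def standard_multispectral(I_list, H_list, n_iter=20):
    phases = [np.zeros_like(I) for I in I_list]
    for _ in range(n_iter):
        # 1/ back-propagate each wavelength to the object plane
        A0 = [np.fft.ifft2(np.fft.fft2(np.sqrt(I) * np.exp(1j * p)) * np.conj(H))
              for I, p, H in zip(I_list, phases, H_list)]
        # 2/ average over wavelengths in the object plane
        A0_mean = sum(A0) / len(A0)
        # 3/ propagate to the detector plane; 4/ keep only the phase,
        # the measured modulus sqrt(I) being restored at the next pass
        phases = [np.angle(np.fft.ifft2(np.fft.fft2(A0_mean) * H)) for H in H_list]
    return phases
```

On simulated noiseless data generated from a wavelength-independent object, a few iterations markedly reduce the spread between the per-wavelength object fields, which is the behavior analyzed next.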

We show that such an algorithm can be understood with the tools developed in this article. Let us consider the following criterion:

*∊* = $\frac{1}{{N}_{\lambda}}{\sum}_{\lambda}\int d\overrightarrow{r}\hspace{0.17em}{\left|{A}_{0}^{\lambda}(\overrightarrow{r})-\overline{{A}_{0}}(\overrightarrow{r})\right|}^{2}$ (19)

where $\overline{{A}_{0}}$ is the average of *A*_{0} with respect to the *N _{λ}* wavelengths. Eq. (19) is the integral over the object plane of the variance of the light amplitude values at the different wavelengths. Such a criterion implements a preference for solutions with similar values at each wavelength. By using the same steps as in Appendix C, we get the gradient of the problem:

∇*∊*^{λ}(*r⃗*) = $-\frac{2}{{N}_{\lambda}}Im\left(\left(\overline{{A}_{0}}*{h}_{Z}^{\lambda}\right)(\overrightarrow{r}).{A}_{Z}^{\lambda}{(\overrightarrow{r})}^{*}\right)$ (20)

At the minimum, ∇*∊*^{λ}(*r⃗*) = 0. Therefore, $\left(\overline{{A}_{0}}*{h}_{Z}^{\lambda}\right).{A}_{Z}^{\lambda}{(\overrightarrow{r})}^{*}$ is real. Therefore:

${\phi}_{Z}^{\lambda}(\overrightarrow{r})=\phi \left(\left(\overline{{A}_{0}}*{h}_{Z}^{\lambda}\right)(\overrightarrow{r})\right)$ (21)

where *φ*() is the phase function. By adopting a fixed-point strategy to reach the minimum, we see that a possible update of ${\phi}_{Z}^{\lambda (k)}(\overrightarrow{r})$ at iteration *k* can be chosen as:

${\phi}_{Z}^{\lambda (k)}(\overrightarrow{r})=\phi \left(\left({\overline{{A}_{0}}}^{(k-1)}*{h}_{Z}^{\lambda}\right)(\overrightarrow{r})\right)$ (22)

where ${\overline{{A}_{0}}}^{(k-1)}$ is the wavelength-averaged object field computed from the phases of iteration *k* − 1.
For the sake of comparison, the tri-chromatic acquisitions of the biological sample shown in section 4 and of the phase test target shown in Appendix B have also been processed using the standard multispectral algorithm [8] described in this appendix (33 *s* runtime). The results of these two cases are presented on Fig. 5. On the figure, we see that the resolution limit of the standard method is equivalent to that of the MS-TV algorithm (complete loss of contrast for element 4 of group 7). The main differences come from the nature of the specific artifacts: the standard method tends to produce dark halos surrounding phase elements, and the twin-image residuals are more intense than in the output of our MS-TV method.

## Acknowledgements

We thank Pierre Joly (CEA-LETI) for the design of the phase test target and Jean-Pierre Alcaraz and Yves Usson (TIMC-IMAG, Univ. Grenoble Alpes) for the AFM measurements performed on it.

## Disclosures

L. Hervé, O. Cioni and C. Allier are co-inventors of a patent devoted to the holographic reconstruction of lens-free microscopy acquisitions as described in this manuscript.

## References

**1. **A. Greenbaum, W. Luo, T.-W. Su, Z. Göröcs, L. Xue, S. O. Isikman, A. F. Coskun, O. Mudanyali, and A. Ozcan, “Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy,” Nat. Methods **9**, 889 (2012). [CrossRef] [PubMed]

**2. **C. Allier, S. Morel, R. Vincent, L. Ghenim, F. Navarro, M. Menneteau, T. Bordy, L. Hervé, O. Cioni, and X. Gidrol *et al.*, “Imaging of dense cell cultures by multiwavelength lens-free video microscopy,” Cytom. Part A **91**, 433–442 (2017). [CrossRef]

**3. **J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. **21**, 2758–2769 (1982). [CrossRef]

**4. **T. Latychevskaia and H.-W. Fink, “Solution to the twin image problem in holography,” Phys. Rev. Lett. **98**, 233901 (2007). [CrossRef]

**5. **Y. Rivenson, Y. Wu, H. Wang, Y. Zhang, A. Feizi, and A. Ozcan, “Sparsity-based multi-height phase recovery in holographic microscopy,” Sci. Rep. **6**, 37862 (2016). [CrossRef]

**6. **L. Allen and M. Oxley, “Phase retrieval from series of images obtained by defocus variation,” Opt. Commun. **199**, 65–75 (2001). [CrossRef]

**7. **A. Greenbaum and A. Ozcan, “Maskless imaging of dense samples using pixel super-resolution based multi-height lensfree on-chip microscopy,” Opt. Express **20**, 3129–3143 (2012). [CrossRef] [PubMed]

**8. **M. Sanz, J. A. Picazo-Bueno, J. García, and V. Micó, “Improved quantitative phase imaging in lensless microscopy by single-shot multi-wavelength illumination using a fast convergence algorithm,” Opt. Express **23**, 21352–21365 (2015). [CrossRef] [PubMed]

**9. **J. W. Goodman, *Introduction to Fourier Optics* (Roberts and Company Publishers, 2005).

**10. **Y. Zhang, H. Wang, Y. Wu, M. Tamamitsu, and A. Ozcan, “Edge sparsity criterion for robust holographic autofocusing,” Opt. Lett. **42**, 3824–3827 (2017). [CrossRef]

**11. **M. R. Hestenes and E. Stiefel, *Methods of conjugate gradients for solving linear systems*, vol. 49 (NBS, Washington, DC, 1952).

**12. **S. Grilli, P. Ferraro, S. De Nicola, A. Finizio, G. Pierattini, and R. Meucci, “Whole optical wavefields reconstruction by digital holography,” Opt. Express **9**, 294–302 (2001). [CrossRef] [PubMed]

**13. **D. Claus, D. Iliescu, and P. Bryanston-Cross, “Quantitative space-bandwidth product analysis in digital holography,” Appl. Opt. **50**, H116–H127 (2011). [CrossRef]

**14. **A. Ozcan and E. McLeod, “Lensless imaging and sensing,” Annu. Rev. Biomed. Eng. **18**, 77–102 (2016). [CrossRef]