## Abstract

The paper is devoted to computational super-resolution microscopy. A complex-valued wavefront of a transparent biological cellular specimen is restored from multiple intensity diffraction patterns registered with noise. For this problem, the recently developed lensless super-resolution phase retrieval algorithm [Optica, 4(7), 786 (2017)] is modified and tuned. The algorithm is based on random phase coding of the wavefront and on a sparse complex-domain approximation of the specimen. It is demonstrated in experiments that reliable phase and amplitude imaging of the specimen is achieved at a low signal-to-noise ratio and a low dynamic range of the observations. Filtering in the observation domain and in the specimen variables is a specific feature of the applied algorithm. If this filtering is omitted, the algorithm becomes a super-resolution version of the standard iterative phase retrieval algorithms. In comparison with this simplified algorithm without filtering, our algorithm shows a valuable improvement in imaging with a much smaller number of observations and shorter exposure times. In this way, the presented algorithm demonstrates the ability to work in a low-radiation, photon-limited mode.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

Digital holography (DH) is a universal technique applied to a wide range of object investigation and control tasks, among them shape measurement [1, 2], vibration control [3], label-free analysis [4], quantitative phase imaging (QPI) of biological samples [5], etc. DH techniques have a serious advantage for biological samples: many of them are transparent, and for visualization in conventional microscopy, staining [6] or fluorescent agents [7] are often used, which can be harmful, especially for live objects [8]. DH allows harmless introvision by reconstructing the 3D complex-valued wavefront of transparent objects without additional sample preparation [9].

One of DH’s advantages is the ability to reconstruct the object phase, which carries information about the thickness and refractive index of the object [10]. Estimation of these parameters opens great possibilities for the investigation of live organisms. For example, knowing the sample refractive index and relying on the phase information, it is possible to calculate the dry mass of the samples [11]. Additionally, based on the phase information, DH allows plotting 3D movement maps of objects in the investigated medium, which is a great benefit for ecological purposes [12] and for behavioral descriptions of live objects [13].

Phase retrieval is another technique to reconstruct a complex-valued wavefront of objects based only on intensity measurements, without reference beams. The iterative Gerchberg-Saxton (GS) algorithms based on alternating projections of wavefronts between the object and sensor planes are prominent instruments for phase retrieval [14, 15]. For higher accuracy and fast convergence of iterations, phase retrieval techniques need multiple measurements with significant diversity. Sufficient diversity can be achieved in many different ways, in particular: by varying the propagation distance between the object and the registration plane [16]; by using different wavelengths [17]; by defocusing [18]; by introducing wavefront modulation masks (amplitude [19] or phase [20]); etc. Compared to DH, phase retrieval has an advantage in areas where DH cannot be applied due to the complexity of the imaging systems and, therefore, the compelled absence of a reference wavefront (x-rays [21] or low-energy electron microscopy [22]). Another great advantage of phase retrieval is that it can be realized in a lensless configuration, contrary to state-of-the-art DH and QPI methods [5, 11].

When working with live tissues, one always needs to keep in mind that the radiation used could harm an object or change its behavior. This is especially crucial for high-power X-ray radiation. Therefore, to minimize the influence of radiation on the object, the lowest possible radiation power should be used for measurements. In turn, a low radiation power reduces the intensity of the registered holograms, which results in a low signal-to-noise ratio (SNR) and erroneous, noisy reconstructions of objects. In these cases, intelligent filtering should be applied to suppress the noise without loss of the true signal. Nowadays, advanced noise suppression techniques are successfully embedded in phase reconstruction algorithms both in DH [23] and in phase retrieval [24–26].

One of the state-of-the-art noise filtering algorithms is Block-Matching 3D (BM3D) filtering [27]. It is based on sparse modeling of objects and has been successfully applied in various optical setups: off-axis DH [28], phase-shifting DH [29], and phase retrieval [30]. The recent Super-Resolution Sparse Phase Amplitude Retrieval (SR-SPAR) algorithm was developed for a lensless setup [31]. BM3D filtering of phase and amplitude is an important component of this algorithm, which is promising for microscopic imaging and multiwavelength absolute phase retrieval [32, 33].

The contribution of this paper concerns the application and demonstrated high efficiency of the SR-SPAR algorithm for super-resolution imaging of biological cellular specimens in noisy conditions. The algorithm is specifically modified and tuned to deal with low-power laser beams and, respectively, extremely noisy observations.

The paper is organized as follows. The phase retrieval problem in noisy conditions is posed in section 2. The SR-SPAR algorithm is presented in section 3; this section is focused on the structure and details of the algorithm. The experimental setup is the topic of section 4. Section 5 demonstrates the application of SR-SPAR to imaging of epithelial cells and erythrocytes. These results are given for different exposure times, i.e., for different levels of noise in the observed diffraction patterns. The conclusions are given in section 6.

## 2. Problem formulation

In computational lensless phase microscopy, it is commonly assumed that the whole information about the coherent wave scattered by the object is needed to reconstruct the object image; this implies the reconstruction not only of the amplitude but of the phase as well. Phase retrieval is the problem of reconstructing the complex-valued object wavefront (*u* = *b* · exp(*jφ*), where *b* is an amplitude and *φ* is a phase) provided that only intensities of the light radiation are registered. In our wavefront manipulation modeling, we assume that the object is flat, the planes of the object and the sensor are parallel, and the laser beam is perpendicular to the object plane.

Following [31], we formalize the measured intensities in the form:

$$y_s(x, y, d) = |\mathrm{P}_s\{u(x, y, 0)\}|^2,\quad s = 1, \ldots, L, \qquad (1)$$

where *u*(*x*, *y*, 0) ∈ ℂ^{N×N} is an *N* × *N* complex-valued object wavefront; *x*, *y* are spatial coordinates, and 0 means that the free propagation distance *d* equals zero, which corresponds to the object plane; ${\mathrm{P}}_{s}:{\mathbb{C}}^{N\times N}\mapsto {\mathbb{C}}^{M\times M}$ is a complex-valued forward propagation operator from the object to the sensor plane; ${y}_{s}(x,y,d)\in {\mathbb{R}}_{+}^{M\times M}$ are *M* × *M* intensity measurements at the sensor plane placed at the distance *d* from the object plane; *L* is the number of experiments.

It is useful to note that the object wavefront *u*(*x*, *y*, 0) in this notation is the wavefront just after the object plane. It coincides with the object transfer function provided that the input light beam is coherent, monochromatic, parallel to the optical axis of the system, and of magnitude equal to 1. In our setup, as in [31], the object wavefront is modulated by phase masks, such that *u*(*x*, *y*, 0) is changed for ℳ* _{s}*(*x*, *y*) · *u*(*x*, *y*, 0). Then the forward propagation from the object to the sensor plane can be represented as

$$u_s(x, y, d) = \mathrm{P}\{\mathcal{M}_s(x, y) \cdot u(x, y, 0)\}. \qquad (2)$$

Here *u _{s}*(*x*, *y*, *d*) is a complex-valued wavefront at the sensor plane at the distance *d* from the object plane, and ℳ* _{s}* ∈ ℂ^{N×N} are phase-only modulation masks, ℳ* _{s}*(*x*, *y*) = exp(*jϕ _{s}*(*x*, *y*)).

Then, Eq. (1) takes the form:

$$y_s(x, y, d) = |\mathrm{P}\{\mathcal{M}_s(x, y) \cdot u(x, y, 0)\}|^2,\quad s = 1, \ldots, L. \qquad (3)$$

The phases *ϕ _{s}*(*x*, *y*) in ℳ* _{s}*(*x*, *y*) are generated as random. This results in the observations known as “coded diffraction patterns” (e.g., [34]). The phase modulation is able to change the diffraction pattern of $\mathrm{P}\{u(x,y,0)\}$ vastly, enabling a redistribution of the observed intensities from low to higher frequencies.

For noisy observations, Eq. (3) is changed for

$$z_s(x, y, d) = \mathcal{G}\{y_s(x, y, d)\}, \qquad (4)$$

where $\mathcal{G}$ stands for a generator of random variables (observations). The wavefront intensity registration process in optics amounts to counting the photons hitting the sensor’s elements and is well modeled by independent Poisson random variables (e.g., [35]). The mean and the variance of the Poisson random variable *z _{s}* are equal and given by *y _{s}*, i.e., *E*{*z _{s}*} = var{*z _{s}*} = *y _{s}*, where *E*{} is a mathematical expectation and var{} is a variance. Defining the observation signal-to-noise ratio (SNR) as the ratio between the square of the mean and the variance of *z _{s}*, we have *SNR* = *E*^{2}{*z _{s}*}/var{*z _{s}*} = *y _{s}*. In real experiments the intensities *y _{s}* are scaled by the exposure time *t _{exp}*, which essentially controls the noisiness of the observations. Smaller values of *t _{exp}* lead to a smaller *SNR*, i.e., to noisier observations *z _{s}*.
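The Poissonian observation model and the linear growth of *SNR* with exposure can be illustrated numerically (an illustrative Python sketch with arbitrary numbers, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Noise-free intensity y_s per pixel, scaled by three relative exposure times.
y_s = 40.0                            # mean photon count at the reference exposure
t_exp = np.array([1.0, 5.0, 25.0])    # relative exposure times

# Poissonian observations: mean and variance both equal t_exp * y_s,
# hence SNR = E^2{z_s}/var{z_s} = t_exp * y_s grows linearly with exposure.
means = t_exp * y_s
z = rng.poisson(means, size=(100_000, 3))   # 100k noisy realizations per exposure

empirical_snr = z.mean(axis=0) ** 2 / z.var(axis=0)
print(means)           # theoretical SNR values: 40, 200, 1000
print(empirical_snr)   # close to the theoretical values
```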

For the wavefront propagation $\mathrm{P}$ from the object to the sensor plane, we use the Rayleigh-Sommerfeld model with the transfer function defined in the Fourier domain through the angular spectrum (AS) [31].
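For reference, AS transfer-function propagation can be sketched as follows (a minimal Python implementation of the band-limited angular spectrum; the function name and interface are ours, not taken from [31]):

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, d):
    """Propagate a complex field u0 (N x N, pixel pitch dx) over a distance d
    using the angular-spectrum transfer function of the Rayleigh-Sommerfeld model."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                       # spatial frequencies
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * d) * (arg > 0)                # evanescent waves suppressed
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Sanity checks: d = 0 leaves the field unchanged; for propagating waves
# (|H| = 1) the total energy is conserved.
u = np.exp(1j * np.random.default_rng(1).uniform(0, 2 * np.pi, (64, 64)))
u_d = angular_spectrum_propagate(u, 532e-9, 1.4e-6, 1.44e-3)
```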

## 3. Super-resolution algorithm for phase retrieval

The SR-SPAR algorithm, derived from the variational formulation of the phase retrieval problem for optimal reconstruction of the object from noisy data, is presented in [31]. In this section, we go through the structure of this algorithm in a descriptive manner, following the successive operations step by step. For details of the algorithm, we refer to [31].

The computational wavefront reconstruction goes from the continuous-domain wavefront propagation to the corresponding discrete models based on pixelation of the object wavefront distribution *u*(*x*, *y*, 0). Thus, the discrete model of the system at hand links the discrete values of the sensor output (observations) with the discrete values of the object, where the integral Fourier transform (FT) is replaced by the fast Fourier transform (FFT). This discretization is necessary both for digital data processing and for modeling of wavefront propagation and image formation. Contrary to the sensor pixel size Δ* _{S}*, defined by the corresponding optical-electronic devices, the object pixels Δ* _{o}* are computational, Δ* _{o}* = Δ* _{c}*, and may be taken arbitrarily small.

In the pixel-wise phase retrieval, Δ* _{c}* = Δ* _{S}*; therefore, the resolution of the reconstruction of the object *u*(*x*, *y*, 0) from the observations *z _{s}*(*x*, *y*, *d*) is dictated by the pixel size of the sensor Δ* _{S}*. In super-resolution, Δ* _{c}* < Δ* _{S}*. It is conventional to apply an integer up-sampling factor *r* ≥ 1 such that Δ* _{S}* = *r* · Δ* _{c}*. Thus, a larger *r* means a smaller computational pixel Δ* _{c}* and a higher order of super-resolution. All wavefronts in our algorithm are calculated with the discretization given by Δ* _{c}*.
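The relation Δ* _{S}* = *r* · Δ* _{c}* means that each sensor pixel covers an *r* × *r* block of computational pixels; the corresponding zero-order upsampling of a sensor-grid array onto the computational grid can be sketched as (an illustrative Python sketch):

```python
import numpy as np

def upsample_zero_order(z, r):
    """Replicate every sensor pixel of z into an r x r block of
    computational pixels (piecewise-constant, zero-order upsampling)."""
    return np.kron(z, np.ones((r, r), dtype=z.dtype))

z = np.array([[1, 2],
              [3, 4]])
z_up = upsample_zero_order(z, 2)
print(z_up)
# [[1 1 2 2]
#  [1 1 2 2]
#  [3 3 4 4]
#  [3 3 4 4]]
```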

To avoid aliasing effects in the AS propagation modeling, zero-padding of the wavefronts is used, of the size defined from the inequality [36]:

$$N \geq \frac{\lambda d}{\Delta_c^2}, \qquad (5)$$

which binds the number of pixels *N* in one dimension of the zero-padded object and the propagation distance *d*, for given values of the computational pixel size Δ* _{c}* and the wavelength *λ*.

Figure 1 shows the block-scheme of the SR-SPAR algorithm [31]. The inputs ${\tilde{z}}_{s}$ of this algorithm are the observations *z _{s}* upsampled by the factor *r*, and ℳ* _{s}* are the known phase masks introduced into the setup. We use zero-order upsampling, giving ${\tilde{z}}_{s}$ with piecewise-invariant values for the computational pixels corresponding to each of the larger pixels of the sensor. At the initialization step, the initial estimate of the object wavefront *x*^{0} is created. It is computed as the backward propagation of the wavefronts whose amplitudes are equal to the square root of the upsampled observations ${\tilde{z}}_{s}$ and whose phases are zero. Then, the phases of the masks ℳ* _{s}* are subtracted from the propagated wavefronts, and *x*^{0} is obtained as the average of the estimates obtained for each of the *L* observations; the ’◦’ stands for the entry-wise product of two matrices, and ${\mathcal{M}}_{s}^{*}$ means the complex conjugate of ℳ* _{s}*.

At Step 1 (forward propagation), the object wavefront estimate *x*^{t} is multiplied by the phase mask ℳ* _{s}* and propagated by the operator $\mathrm{P}$ to the sensor plane; the result is denoted as ${v}_{s}^{t}$. These wavefronts are calculated for the computational diffraction area *N* × *N*, denoted as *S _{D}*. At Step 2 (Poissonian noise suppression), the wavefronts ${v}_{s}^{t}$ are updated to the variables ${u}_{s}^{t}$ by filtering the amplitudes of ${v}_{s}^{t}$. The filtered amplitudes ${b}_{s}^{t}$ are obtained according to the rule optimal for the Poissonian observations ${\tilde{z}}_{s}$ (see [30]). All calculations are element-wise; *γ* > 0 is a parameter of the algorithm proportional to the exposure time of the experiments. This update is restricted to the sensor area *S _{S}*, *v _{s}* ∈ *S _{S}*, where the observations ${\tilde{z}}_{s}$ are given. For the area out of the sensor (the zero-padding area), the wavefront values are preserved, *u _{s}* = *v _{s}* for *v _{s}* ∈ *S _{D}* \ *S _{S}*. This is a point of importance, as it defines the interpolation of the wavefronts *v _{s}* for the pixels where there are no observations. At Step 3 (backward propagation), the estimates $\left\{{u}_{s}^{t}\right\}$ are backpropagated to the object plane and update the object wavefront estimate *x*^{t}.

The filtering at Step 4 (sparse phase and amplitude filtering) is one of the key elements of the SR-SPAR algorithm. It follows from the sparse representations of the object phase and amplitude. In this modification of the algorithm, the sparsification of the object is implemented for the wrapped phase. As a result, the BM3D phase filtering is produced for the wrapped phase, which makes a difference with respect to the SR-SPAR algorithm in [31], where the sparsification and BM3D filtering are produced for the absolute phase; the step of phase unwrapping is excluded in the presented version of the SR-SPAR algorithm [31]. At Step 5 (object wavefront update), the estimate for the next iteration is formed. SR-SPAR is an iterative algorithm that needs a number of iterations to produce a reliable result. The number of iterations *T* depends on the object structure and on the level of noise in the observations, but usually *T* = 30 is enough. After *T* iterations, the final reconstructions of phase and amplitude are obtained at the output of the algorithm.
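The loop structure of Steps 1–5 can be summarized in Python (a structural sketch only: the operators `propagate`/`backpropagate` and the filters `filter_obs`/`filter_object` are placeholders for the AS propagation, the Poissonian amplitude update, and the BM3D filtering, whose exact formulas are given in [30, 31]):

```python
import numpy as np

def sr_spar_sketch(z_tilde, masks, propagate, backpropagate,
                   filter_obs, filter_object, n_iter=30):
    """Structural sketch of the SR-SPAR iteration (simplified).

    z_tilde       : list of L upsampled intensity observations
    masks         : list of L phase-only modulation masks M_s
    propagate / backpropagate : forward / inverse propagation operators
    filter_obs    : amplitude update in the observation domain (Step 2)
    filter_object : sparse phase/amplitude filtering of the object (Step 4)
    """
    L = len(z_tilde)
    # Initialization: average of backpropagated sqrt-intensities with masks removed
    x = sum(np.conj(m) * backpropagate(np.sqrt(z))
            for z, m in zip(z_tilde, masks)) / L
    for _ in range(n_iter):
        u_list = []
        for z, m in zip(z_tilde, masks):
            v = propagate(m * x)                          # Step 1: forward propagation
            u = filter_obs(v, z)                          # Step 2: noise-suppressing update
            u_list.append(np.conj(m) * backpropagate(u))  # Step 3: backpropagation
        x = sum(u_list) / L                               # average over L observations
        x = filter_object(x)                              # Step 4: sparse filtering
    return x                                              # Step 5: updated estimate
```

With both filters replaced by identities (for `filter_obs`, the plain replacement of the amplitude by the square root of the observation), this skeleton reduces to the conventional alternating-projection scheme.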

The success of the sparsity modeling and the corresponding filtering depends on how rich and redundant the dictionaries (sets of basis functions) used for complex-valued image analysis and synthesis are. In the SR-SPAR algorithm, the BM3D sparsity is achieved due to a quite artificial procedure, where similar blocks of the images are grouped together and processed jointly [27]. The nonlocal block-wise sparsity imposed on phase and amplitude results in the separate BM3D filtering of phase and amplitude, as shown in Step 4:

$$\widehat{\phi} = BM3D_{phase}(\phi, Th_{\phi}),$$
$$\widehat{b} = BM3D_{ampl}(b, Th_{b}).$$

Here $\widehat{\phi}$ and $\widehat{b}$ are sparse approximations of *φ* and *b*; the indices *phase* and *ampl* of BM3D are used in order to emphasize that the parameters of BM3D can be different for the phase and the amplitude; *Th _{φ}* and *Th _{b}* are thresholding parameters of the algorithms [27].

Let the filtering at Steps 2 and 4 be dropped. Then the SR-SPAR algorithm can be treated as a super-resolution version of the GS algorithms, using the conventional alternating projections of wavefronts between the object and sensor planes and our interpolation at the sensor plane for the pixels in the area *S _{D}* \ *S _{S}*. In order to demonstrate the importance of this filtering, in what follows we will also present results obtained with no filtering. To distinguish the algorithms with and without filtering, we name the latter the Super-Resolution Gerchberg-Saxton (SR-GS) algorithm.

## 4. Experimental setup

For the investigation of biological samples, we modified the experimental setup presented in [31]. The main reason for this modification is a shortening of the propagation distance *d* between the object and the registration plane to fulfill the condition of Eq. (5). Figure 2 demonstrates the modified setup: a laser LASOS GLK 3250 TS with the wavelength *λ* = 532 nm, a digital 8-bit CMOS camera EVS 3264 × 2448 with pixel pitch Δ* _{S}* = 1.4 *µ*m, and a phase-only reflective SLM Holoeye LETO 1920 × 1080 with pixel pitch Δ* _{SLM}* = 6.4 *µ*m and fill factor 93%. The light beam emitted by the laser source is expanded by the two-lens collimator (*L _{1}* and *L _{2}*) and goes through the beamsplitter (BS) to the SLM, where the light wavefront obtains the desired phase retardation corresponding to the phase masks ℳ* _{s}*. Next, the wavefront is transferred to the object plane by a 4f system (lenses *L _{3}* and *L _{4}*), where it interacts with the object, after which the wavefront freely propagates to the CMOS camera for registration. In comparison with the previous configuration [31], we added the 4f system (lenses *L _{3}* and *L _{4}*) for transferring the masks to the object plane, whereby the optical distance between the object plane and the CMOS is shortened to 1.44 mm instead of 21.55 mm. Following [31], we implement the phase modulation masks ℳ* _{s}* on the SLM with large super-pixels of the size of 8 × 8 SLM pixels. The super-resolution up-sampling factor *r* is equal to 4; therefore, the computational pixel size is Δ* _{c}* = 0.35 *µ*m. According to Eq. (5), for the given values of Δ* _{c}*, *d*, and *λ*, the zero-padded object wavefront should have a support of 6200 × 6200 pixels.
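A random phase mask ℳ* _{s}* built from super-pixels, as realized on the SLM, can be generated, e.g., as follows (an illustrative Python sketch; the particular phase statistics used in the experiments are those of [31]):

```python
import numpy as np

rng = np.random.default_rng(42)

def superpixel_phase_mask(n_super, super_size):
    """Random phase-only mask M_s = exp(j*phi): one random phase value per
    super_size x super_size block (cf. the 8 x 8 SLM-pixel super-pixels)."""
    phi = rng.uniform(0, 2 * np.pi, (n_super, n_super))
    phi = np.kron(phi, np.ones((super_size, super_size)))  # replicate into blocks
    return np.exp(1j * phi)

M = superpixel_phase_mask(16, 8)   # 128 x 128 mask from 16 x 16 super-pixels
```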

#### 4.1. Samples preparation

The epithelial cells used in the experiments were taken from the intact oral cavity mucosa of a 33-year-old volunteer having no signs of inflammation. A clean cotton swab was used to scrape the intact oral cavity mucosa of the volunteer’s mouth; then the swab was placed on a coverslip where, after its removal, the oral cavity environment containing epithelial cells was left. After it was dried at room conditions for 30 minutes, it was placed under investigation. No additional preparation was performed on the epithelial cells.

The blood sample with erythrocytes was taken from a peripheral vein of the same volunteer by dropper and placed on a coverslip. It was dried at room conditions for 30 minutes and placed under investigation. No additional preparation was performed on the blood sample.

## 5. Results and discussion

Two biological objects were investigated in the paper, an erythrocyte and a cell of human oral cavity epithelium, under different exposure times (*t _{exp}*) of the camera. For the Poissonian noise distribution, the noise variance is proportional to the exposure time; therefore, variation of the exposure time results in different noise levels of the registered intensities.

For each exposure time, the *SNR* of the observed diffraction patterns (observations) was estimated as $SNR=10{\mathrm{log}}_{10}\frac{\frac{1}{{L}_{1}}{\displaystyle {\sum}_{1}^{{L}_{1}}\text{mean}({I}_{s})}}{\text{mean}({I}_{1})-\frac{1}{{L}_{1}}{\displaystyle {\sum}_{1}^{{L}_{1}}\text{mean}({I}_{s})}}\text{dB}$, where *I _{s}* is a single intensity distribution and *L _{1}* is the number of experiments used for the *SNR* estimation. *L _{1}* should be taken large enough to minimize the noise in the averaging. For our experiments, *L _{1}* is equal to 20, since for larger values of *L _{1}* the mean $\frac{1}{{L}_{1}}{\displaystyle {\sum}_{1}^{{L}_{1}}\text{mean}({I}_{s})}$ takes a steady-state value.

#### 5.1. Epithelial cell

Figure 3 shows fragments of the registered intensity distributions with the corresponding histograms for different exposure times for the epithelial cell sample. Top row: observed raw intensity distributions obtained from a single experiment (*s* = 1), from left to right, *t _{exp}* = 1 ms, 5 ms, and 25 ms (note the colorbar scales, which demonstrate intensities of different levels); bottom row: the corresponding histograms. The case of *t _{exp}* = 25 ms corresponds to the highest *SNR* level in the observations, *SNR _{25 ms}* = 16.8 dB, and the lowest *SNR* is almost two times smaller, with *SNR _{1 ms}* = 8.8 dB for the *t _{exp}* = 1 ms case. The histograms show the used dynamic range of the camera, and, as can be seen from Fig. 3, it is the full 8-bit range, less than 7 bits, and less than 6 bits for the 25 ms, 5 ms, and 1 ms exposure cases, respectively.

Figure 4 illustrates fragmented amplitudes (top row) and phases (bottom row) of the human cheek epithelial cell reconstructed by SR-SPAR from experiments with different exposure times, *t _{exp}* = 1 ms, 5 ms, and 25 ms, from left to right, respectively. For all reconstructions, the borders of the cell and its inner structure with the nucleus are clearly seen both in the amplitudes and in the phases. The phase distribution maps show that the phase delay in the region of the cell nucleus is significantly higher than in other parts of the cell, which corresponds to a higher refractive index of the nucleus, given that the thickness of the cell changes only slightly [10]. The reconstruction results for the amplitudes are similar for all exposure times, but for the phases there are differences between the *t _{exp}* = 1 ms case and the others. This difference is caused by the low value of *SNR _{1 ms}* = 8.8 dB, but even in such a noisy case SR-SPAR produces a reliable result, as can be seen from the middle cross-sections plotted in Fig. 5. These cross-sections illustrate the phase distribution in the reconstructed phases for experiments with different exposures: *t _{exp}* = 1 ms, blue circles line; *t _{exp}* = 5 ms, red diamonds line; and *t _{exp}* = 25 ms, green squares line. For all cases the cross-section behavior is almost the same, and the peak values for the nucleus are nearly identical.

A serious difference in the width of the peaks concerns only the case *t _{exp}* = 1 ms, where the peak width is essentially smaller. This difference can be explained by the data quantization in the intensity registration by the sensor. As can be concluded from the histogram images in Fig. 3, the dynamic range of the sensor is about 6 bits, while it is much larger for the larger exposure times. The smaller dynamic range results in a narrower boundary imaging. The ability of SR-SPAR to enable high-quality imaging for low-bit sensors is an interesting feature of this algorithm. For instance, such small errors in phase retrieval with a lens-based optical setup in paper [26] are obtained only with a camera dynamic range equal to or larger than 12 bits.

Nevertheless, since the peak value for the noisy case of *t _{exp}* = 1 ms coincides with the values for the less noisy cases, it is possible to use the SR-SPAR algorithm even in such noisy conditions for cell visualization and for estimation of the refractive index and/or thickness of the cell.

In order to illustrate the image improvement due to the filtering in the SR-SPAR algorithm, we present the results obtained by the SR-GS algorithm where, as already discussed above, the filtering is omitted. The comparison of the phase reconstructions is produced for the noisiest case, with *t _{exp}* = 1 ms. Figure 6 shows the SR-SPAR and SR-GS phase reconstructions obtained for the numbers of experiments *L* = 6 and *L* = 18, and a plot of the middle cross-sections of these reconstructions. Two images for *L* = 6 (first and third) are obtained by the SR-GS and SR-SPAR algorithms, respectively, where the parameters of the filters for SR-SPAR are optimized for the best results. Comparing these two images, we may realize that the filtering indeed results in a dramatic improvement of the imaging, providing clear information on the general shape and its small variations. The first image in Fig. 6 is given by SR-GS for *L* = 6, without filtering. The quality of this image is quite poor: the inner structure of the cell and the phase variations are corrupted, up to a range higher than 4 radians, while this range in the second image is lower. The second image is obtained by SR-GS but for a much larger number of experiments, *L* = 18. Comparing this image with the third one, made by the SR-SPAR algorithm with *L* = 6, we may note that they are nearly identical, with clear boundaries and the same range of the phase. The corresponding phase cross-sections in Fig. 6 demonstrate the erroneous reconstruction of the SR-GS method with the number of experiments *L* = 6 (red crosses line) in comparison with the other two reconstructions, by SR-GS with *L* = 18 (green diamonds line) and SR-SPAR with *L* = 6 (blue triangles line). Therefore, the filtering in the SR-SPAR algorithm allows decreasing the number of experiments to *L* = 6 without losses in the quality of the reconstructions. Furthermore, a smaller number of experiments decreases the illumination time of the investigated object, thereby reducing the light influence on it.

#### 5.2. Erythrocyte

At microscopic scales, the epithelial cell is quite a big sample, with a size of about 60 *µ*m, and the super-resolution advantage is not obvious. For a more descriptive super-resolution demonstration, we took an 8 times smaller biological object, an erythrocyte, whose typical size is about 8 *µ*m. During the reconstruction a strong halo effect arose, and for its suppression we used a Hilbert transform approach to filter it out [37]. Figure 7 shows fragmented amplitudes (first column) and phase maps (second column) of the erythrocyte reconstructed by SR-SPAR with the pixel resolution Δ* _{c}* = 1.4 *µ*m (top row) and the super-resolution Δ* _{c}* = 0.35 *µ*m (bottom row). At the pixel resolution, it is hard to see the details of the erythrocyte’s inner structure, which could result in errors in distinguishing the types of erythrocytes (e.g., discocyte, spherocyte, or echinocyte) or the pressure conditions (hypotonic and hypertonic) [38]. The presented erythrocyte is a hypertonic discocyte, as it has a disk shape with a central zone of smaller thickness and the corresponding deformations usually observed in hypertonic cases [38]. The super-resolution allows achieving much better imaging.

#### 5.3. Parameters of the SR-SPAR algorithm

The performance of the SR-SPAR algorithm essentially depends on the tuning of its parameters. The following values are selected in our experiments. The image patches in BM3D are square, 8 × 8 pixels. The group size is limited to 39 patches. The step size between neighboring patches is equal to 3 pixels. The discrete cosine (for patches) and Haar (for the group length) transforms are used for 3D group data processing in BM3D [31]. The thresholding parameters in BM3D are taken very large, Th* _{b}* = 100 and Th* _{φ}* = 80, as compared with the standards used in [27]. The parameters defining the iterations of the algorithm are as follows: *γ* is adaptive and proportional to the exposure time; the number of iterations is *T* = 30. For the reconstruction we use MATLAB R2016b and a computer with an Intel Core i7-3770 CPU @ 3.40 GHz processor and 32 Gb RAM. We characterize the complexity of the algorithm by the time required for processing. For the number of experiments *L* = 6, with the super-resolution image of 4896 × 4896 computational pixels zero-padded to 6200 × 6200, this time is equal to 75 sec per iteration. Thus, computationally, the algorithm is quite demanding, mainly due to the high memory size required for the calculations.
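The memory demand is easy to see from the array sizes alone (a back-of-the-envelope estimate assuming double-precision complex arrays; the actual MATLAB implementation may store the data differently):

```python
# One zero-padded complex-valued wavefront of 6200 x 6200 pixels in complex128
n = 6200
bytes_per_sample = 16                      # complex128: 8 bytes re + 8 bytes im
gb = n * n * bytes_per_sample / 2 ** 30
print(round(gb, 2))   # → 0.57 GB per array; several such arrays (one per
                      # observation, plus FFT workspace) quickly fill the RAM
```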

## 6. Conclusion

The application of the SR-SPAR algorithm as a tool for computational super-resolution microscopy is demonstrated. The complex-valued wavefront of a transparent biological specimen can be restored from multiple intensity diffraction patterns. The algorithm is developed for a lensless optical setup with multiple-exposure experiments and random phase modulation of the wavefronts at the object plane.

The filtering in the observation domain and in the specimen variables is an important feature of the algorithm. If this filtering is omitted, the algorithm becomes a super-resolution version of the standard iterative phase retrieval algorithms. In comparison with this simplified algorithm without filtering, our algorithm demonstrates a valuable improvement in imaging with a smaller number of observations and shorter exposure times. In particular, it is shown for the epithelial cell specimen that this filtering allows decreasing the number of experiments from 18 to 6 with the same quality of imaging. It is also demonstrated for epithelial cells that the proposed method provides good reconstruction results even from intensity distributions with a high noise level (*SNR* of 8.8 dB) and a camera dynamic range of less than 6 bits. Therefore, in such conditions, the illumination radiation can be reduced to minimize the influence of the radiation on the biological cellular specimen.

The effects of the super-resolution are demonstrated for erythrocyte phase and amplitude imaging. With the super-resolution up-sampling factor of 4, the computational resolution is Δ* _{c}* = 0.35 *µ*m, which is four times better than the sensor pixel resolution. Comparing with the wavelength *λ* = 532 nm, we may note that the achieved resolution is better than the wavelength.

The SR-SPAR algorithm reconstructed high-quality amplitude and phase images for both the epithelial cell and the erythrocyte. A comparison of the super- and pixel-resolution reconstructions is presented. Hence, the lensless phase retrieval SR-SPAR algorithm was successfully applied to the investigation of biological samples and can be considered an alternative tool to traditional microscopic systems.

## Funding

Academy of Finland (287150, 2015-2019); Russian Ministry of Education and Science (project within the state mission for institutions of higher education, agreement 3.1893.2017/4.6); Horizon 2020 (TWINN-2015, grant 687328 - HOLO).

## Disclosures

The authors declare that there are no conflicts of interest related to this article.

## References

**1. **M. Mikuła, T. Kozacki, M. Józwik, and J. Kostencka, “Accurate shape measurement of focusing microstructures in Fourier digital holographic microscopy,” Appl. Opt. **57**, A197–A204 (2018). [CrossRef]

**2. **V. Cazac, A. Meshalkin, E. Achimova, V. Abashkin, V. Katkovnik, I. Shevkunov, D. Claus, and G. Pedrini, “Surface relief and refractive index gratings patterned in chalcogenide glasses and studied by off-axis digital holography,” Appl. Opt. **57**, 507–513 (2018). [CrossRef] [PubMed]

**3. **Y. Fu, G. Pedrini, and W. Osten, “Vibration measurement by temporal Fourier analyses of a digital hologram sequence,” Appl. Opt. **46**, 5719–5727 (2007). [CrossRef] [PubMed]

**4. **A. V. Belashov, A. A. Zhikhoreva, T. N. Belyaeva, E. S. Kornilova, N. V. Petrov, A. V. Salova, I. V. Semenova, and O. S. Vasyutinskii, “Digital holographic microscopy in label-free analysis of cultured cells’ response to photodynamic treatment,” Opt. Lett. **41**, 5035–5038 (2016). [CrossRef] [PubMed]

**5. **H. Majeed, T. H. Nguyen, M. E. Kandel, A. Kajdacsy-Balla, and G. Popescu, “Label-free quantitative evaluation of breast tissue using Spatial Light Interference Microscopy (SLIM),” Sci. Reports **8**, 6875–6884 (2018). [CrossRef]

**6. **A. J. Herrmann, S. Techritz, N. Jakubowski, A. Haase, A. Luch, U. Panne, and L. Mueller, “A simple metal staining procedure for identification and visualization of single cells by LA-ICP-MS,” The Analyst **142**, 1703–1710 (2017). [CrossRef] [PubMed]

**7. **R. Horák, L. Kvapil, K. Motyka, L. Slaninová, M. Grepl, K. Kořistek, M. Urbášek, P. Hradil, and M. Soural, “Synthesis of 2-alkenyl-3-hydroxyquinolin-4(1 H)-ones as promising antimicrobial and fluorescent agents,” Tetrahedron **74**, 366–374 (2018). [CrossRef]

**8. **V. Magidson and A. Khodjakov, “Circumventing photodamage in live-cell microscopy,” Methods Cell Biol. **114**, 545–560 (2013). [CrossRef]

**9. **L. Kastl, M. Isbach, D. Dirksen, J. Schnekenburger, and B. Kemper, “Quantitative phase imaging for cell culture quality control,” Cytom. Part A **91**, 470–481 (2017). [CrossRef]

**10. **A. V. Belashov, A. A. Zhikhoreva, V. G. Bespalov, V. I. Novik, N. T. Zhilinskaya, I. V. Semenova, and O. S. Vasyutinskii, “Refractive index distributions in dehydrated cells of human oral cavity epithelium,” J. Opt. Soc. Am. B **34**, 2538–2543 (2017). [CrossRef]

**11. **P. Cintora, J. Arikkath, M. Kandel, G. Popescu, and C. Best-Popescu, “Cell density modulates intracellular mass transport in neural networks,” Cytom. Part A **91**, 503–509 (2017). [CrossRef]

**12. **V. V. Dyomin and A. S. Olshukov, “Digital holographic video for studying biological particles,” J. Opt. Technol. **79**, 344–347 (2012). [CrossRef]

**13. **M. U. Daloglu, W. Luo, F. Shabbir, F. Lin, K. Kim, I. Lee, J.-Q. Jiang, W.-J. Cai, V. Ramesh, M.-Y. Yu, and A. Ozcan, “Label-free 3D computational imaging of spermatozoon locomotion, head spin and flagellum beating over a large volume,” Light Sci. Appl. **7**, 17121–32 (2018). [CrossRef]

**14. **R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik **35**, 237–246 (1972).

**15. **J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. **21**, 2758–2769 (1982). [CrossRef] [PubMed]

**16. **I. A. Shevkunov and N. V. Petrov, “Experimental comparison of phase retrieval methods which use intensity distribution at different planes,” J. Phys.: Conf. Ser. **536**, 012028 (2014).

**17. **P. Bao, F. Zhang, G. Pedrini, and W. Osten, “Phase retrieval using multiple illumination wavelengths,” Opt. Lett. **33**, 309–311 (2008). [CrossRef] [PubMed]

**18. **L. Allen and M. Oxley, “Phase retrieval from series of images obtained by defocus variation,” Opt. Commun. **199**, 65–75 (2001). [CrossRef]

**19. **A. Anand, G. Pedrini, W. Osten, and P. Almoro, “Wavefront sensing with random amplitude mask and phase retrieval,” Opt. Lett. **32**, 1584–1586 (2007). [CrossRef] [PubMed]

**20. **J. Huang, H. Jin, Q. Ye, and G. Meng, “Differential phase retrieval based on phase modulating for wavefront detection,” Appl. Phys. B **124**, 58–65 (2018). [CrossRef]

**21. **G. Williams, M. Pfeifer, I. Vartanyants, and I. Robinson, “Effectiveness of iterative algorithms in recovering phase in the presence of noise,” Acta Crystallogr. Sect. A: Foundations Crystallogr. **63**, 36–42 (2007). [CrossRef]

**22. **T. Latychevskaia, P. Formanek, C. Koch, and A. Lubk, “Off-axis and inline electron holography: Experimental comparison,” Ultramicroscopy **110**, 472–482 (2010). [CrossRef]

**23. **V. Bianco, P. Memmolo, M. Paturzo, A. Finizio, and P. Ferraro, “A method for total noise removal in digital holography based on enhanced grouping and sparsity enhancement filtering,” in *Proceedings of SPIE - The International Society for Optical Engineering*, P. Lehmann, W. Osten, and A. Albertazzi Gonçalves, eds. (International Society for Optics and Photonics, 2017), vol. 10329, pp. 103290E–6.

**24. **A. V. Martin, F. Wang, N.-t. D. Loh, T. Ekeberg, F. R. N. C. Maia, M. Hantke, G. van der Schot, C. Y. Hampton, R. G. Sierra, A. Aquila, S. Bajt, M. Barthelmess, C. Bostedt, J. D. Bozek, N. Coppola, S. W. Epp, B. Erk, H. Fleckenstein, L. Foucar, M. Frank, H. Graafsma, L. Gumprecht, A. Hartmann, R. Hartmann, G. Hauser, H. Hirsemann, P. Holl, S. Kassemeyer, N. Kimmel, M. Liang, L. Lomb, S. Marchesini, K. Nass, E. Pedersoli, C. Reich, D. Rolles, B. Rudek, A. Rudenko, J. Schulz, R. L. Shoeman, H. Soltau, D. Starodub, J. Steinbrener, F. Stellato, L. Strüder, J. Ullrich, G. Weidenspointner, T. A. White, C. B. Wunderer, A. Barty, I. Schlichting, M. J. Bogan, and H. N. Chapman, “Noise-robust coherent diffractive imaging with a single diffraction pattern,” Opt. Express **20**, 16650–61 (2012). [CrossRef]

**25. **F. Soulez, A. Thiébaut, A. Schutz, A. Ferrari, F. Courbin, and M. Unser, “Proximity operators for phase retrieval,” Appl. Opt. **55**, 7412–7421 (2016). [CrossRef] [PubMed]

**26. **C. Shen, X. Bao, J. Tan, S. Liu, and Z. Liu, “Two noise-robust axial scanning multi-image phase retrieval algorithms based on Pauta criterion and smoothness constraint,” Opt. Express **25**, 16235–49 (2017). [CrossRef] [PubMed]

**27. **K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering,” IEEE Trans. on Image Process. **16**, 2080–2095 (2007). [CrossRef]

**28. **V. Katkovnik, I. A. Shevkunov, N. V. Petrov, and K. Egiazarian, “Wavefront reconstruction in digital off-axis holography via sparse coding of amplitude and absolute phase,” Opt. Lett. **40**, 2417–2420 (2015). [CrossRef]

**29. **V. Katkovnik, J. Bioucas-Dias, and N. Petrov, “Digital phase-shifting holography based on sparse approximation of phase and amplitude,” in *IEEE 3DTV-Conference* (IEEE, 2014), pp. 1–4.

**30. **V. Katkovnik and K. Egiazarian, “Sparse superresolution phase retrieval from phase-coded noisy intensity patterns,” Opt. Eng. **56**, 56–67 (2017). [CrossRef]

**31. **V. Katkovnik, I. Shevkunov, N. V. Petrov, and K. Egiazarian, “Computational super-resolution phase retrieval from multiple phase-coded diffraction patterns: simulation study and experiments,” Optica **4**, 786–794 (2017). [CrossRef]

**32. **V. Katkovnik, I. Shevkunov, N. V. Petrov, and K. Eguiazarian, “Multiwavelength surface contouring from phase-coded diffraction patterns,” in “*Unconventional Optical Imaging*,” C. Fournier, M. P. Georges, and G. Popescu, eds. (SPIE, 2018), pp. 106711B–106726B.

**33. **V. Katkovnik, I. Shevkunov, N. Petrov, and K. Eguiazarian, “Multiwavelength Absolute Phase Retrieval from Noisy Diffractive Patterns: Wavelength Multiplexing Algorithm,” Appl. Sci. **8**, 719–738 (2018). [CrossRef]

**34. **E. J. Candès, X. Li, and M. Soltanolkotabi, “Phase retrieval from coded diffraction patterns,” Appl. Comput. Harmon. Analysis **39**, 277–299 (2015). [CrossRef]

**35. **Z. T. Harmany, R. F. Marcia, and R. M. Willett, “Sparsity-regularized photon-limited imaging,” in “*2010 IEEE International Symposium on Biomedical Imaging: From Nano to Macro*,” (IEEE, 2010), pp. 772–775.

**36. **N. V. Petrov, M. V. Volkov, and V. G. Bespalov, “Phase retrieval of THz radiation using set of 2D spatial intensity measurements with different wavelengths,” in *Practical Holography XXVI: Materials and Applications* (SPIE, 2012), vol. 8281, pp. 82810J.

**37. **M. E. Kandel, M. Fanous, C. Best-Popescu, and G. Popescu, “Real-time halo correction in phase contrast imaging,” Biomed. Opt. express **9**, 623–635 (2018). [CrossRef] [PubMed]

**38. **P. Memmolo, L. Miccio, F. Merola, O. Gennari, P. A. Netti, and P. Ferraro, “3D morphometry of red blood cells by digital holography,” Cytom. Part A **85**, 1030–1036 (2014). [CrossRef]