## Abstract

Inverse dynamical photon scattering (IDPS), an artificial neural network based algorithm for three-dimensional quantitative imaging in optical microscopy, is introduced. Because the inverse problem entails numerical minimization of an explicit error metric, it becomes possible to freely choose a more robust metric, to introduce regularization of the solution, and to retrieve unknown experimental settings or microscope parameters, while the starting guess is simply set to zero. The regularization is accomplished through an alternating directions augmented Lagrangian approach, implemented on a graphics processing unit. These improvements are demonstrated on open source experimental data, retrieving three-dimensional amplitude and phase for a thick specimen.

© 2016 Optical Society of America

## 1. Introduction

In electron or light microscopy, a general problem is to determine the amplitude and phase changes imparted on the incoming wave by the imaged object from intensity measurements only, e.g. single images, focal series, beam-tilt series or specimen-tilt series. This problem is often called the phase retrieval problem because the phase information is lost upon recording the image intensity. This ill-posed inverse problem can be solved by various algorithms which can be mainly divided into two types [1].

The first type comprises the input-output methods, later developed into subtypes such as basic input-output (BIO) and hybrid input-output (HIO). The best known of this type is the Gerchberg-Saxton (GS) algorithm [2]. Although these iterative algorithms appear in many varieties, all of them propagate the intermediate solution back and forth through the optics, while in some way enforcing the constraints that in measurement space, the wave amplitudes must equal the measured amplitudes and that waves that are back-propagated from the measurement planes to the object exit plane must all be equal. The GS method has been modified and extended to different applications, e.g. ptychography [3] and simultaneous illumination estimation [4].

The majority of these GS-type algorithms retrieves two-dimensional images. The reconstructed phases convey information about the specimen’s local refractive index and thus its composition or thickness in the case of specimens thin enough to be homogeneous in the vertical direction. For thicker specimens, the retrieved phases can be considered vertical projections and therefore be used in linear tomographic reconstruction techniques under the assumption of single scattering [5]. The single scattering assumption was relaxed in [6] and later in [7] by explicitly modeling the wave propagation through thick specimens with the multislice algorithm [8–12], which is known as the beam propagation method [13] in the optics community.

It is noted here that algorithms of the GS-type do not have an explicit error metric at their disposal. However, it has been argued [1, 7] that they end up minimizing the sum of squared differences between simulated and measured intensities. Since the error metric cannot be explicitly chosen, it is not straightforward to include regularization in these schemes, [14] being a notable exception.

The second type of reconstruction algorithm sets out to compute the derivatives of an error metric with respect to the amplitudes and phases in order to retrieve the latter with a gradient-based numerical optimization [15, 16]. In [17–19] the multislice algorithm was recast as an artificial neural network, greatly facilitating the computation of the derivatives of the error function. As a result, the inverse dynamical electron scattering (IDES) framework was proposed, which enabled three-dimensional reconstructions from arbitrary detection geometries in transmission electron microscopy.

In this paper, a framework of inverse dynamical photon scattering (IDPS) is developed in the context of thick specimens, to operate on optical data according to the principles of Refs. [17, 18]. Three-dimensional reconstruction of the phase and the absorption of a test object from experimental measurements is demonstrated. The usefulness of a free choice of error metric is shown by adapting it to the data set's dynamic range, and the direct access to the error metric makes for an easy incorporation of regularization of the solution. Furthermore, since derivatives of the error metric with respect to unknown microscope parameters, for instance defocus or incoherent illumination angle, can be calculated through the chain rule, these nuisance parameters are estimated together with the unknown object, greatly improving the results. An augmented Lagrangian (AL) method enables constrained optimization and thus a total-variation-like regularization of the solution. Following [20], an alternating directions (AD) method [21] is introduced to efficiently solve the AL. The combination of these two techniques is denoted ADAL in the remainder of the paper.

In Section 2.1 the forward model of image formation is introduced and in Section 2.2 its gradient-based inversion is derived. The regularization is explained in Section 2.3, the implementation is commented on in Section 2.4 and the method of evaluation and experimental verification is presented in Section 2.5. The experimental results are presented in Section 3, and the paper ends with a discussion and the conclusions in Section 4.

## 2. Methods

In this section the forward model, its inversion through an artificial neural network formulation, the regularization of the solution and the implementation of the experimental verification are treated.

#### 2.1. Forward model

Wave propagation through thick specimens is modeled with the multislice method [12], assuming no backscattering. Although the notation in this paper closely follows the multislice formulation as is customary in transmission electron microscopy, the reader is informed that multislice is formally equivalent to the beam propagation method [13] known in optics. The specimen is divided into *N* thin slices and the interaction of the wave function impinging on the *j*^{th} slice is computed in two steps. Firstly, the incoming wave, *ψ*_{j}, is multiplied with the transmission function, *t*_{j}, and secondly, the transmitted wave, *t*_{j}*ψ*_{j}, is propagated to the next slice by means of a convolution with the Fresnel propagator *p*:

*ψ*_{j+1} = *p* ⊗ (*t*_{j}*ψ*_{j}),  with  *t*_{j} = exp(i*V*_{j}),

where *V*_{j} is the scattering potential within slice *j*, and *V*_{j} = *v*_{j} + i*w*_{j}. In the case of photons, *v*_{j} is the phase change caused by the refractive index difference between specimen and medium, and in the case of fast electrons, *v*_{j} is proportional to the projected electrostatic potential. Furthermore, *w*_{j} is the attenuation coefficient. The multislice algorithm is illustrated in Fig. 1.
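The slice update above can be sketched in a few lines of NumPy. The transmission function *t*_{j} = exp(i*V*_{j}) and a paraxial Fresnel propagator are the standard multislice ingredients [12]; the function and variable names, and the sampling values in the comments, are illustrative only:

```python
import numpy as np

def multislice(psi, V, dz, wavelength, dxy):
    """Propagate a wave through slices of complex potential V = v + i*w.

    psi : (Ny, Nx) complex incoming wave
    V   : (N, Ny, Nx) complex slice potentials
    dz  : slice spacing, wavelength and dxy in the same length unit
    """
    Ny, Nx = psi.shape
    fy = np.fft.fftfreq(Ny, d=dxy)
    fx = np.fft.fftfreq(Nx, d=dxy)
    nu2 = fy[:, None]**2 + fx[None, :]**2
    # Fresnel propagator over one slice spacing (paraxial approximation)
    p = np.exp(-1j * np.pi * wavelength * dz * nu2)
    for Vj in V:
        t = np.exp(1j * Vj)                           # t_j = exp(i V_j)
        psi = np.fft.ifft2(p * np.fft.fft2(t * psi))  # convolution via FFT
    return psi
```

For a purely real potential (no attenuation, *w* = 0) both the transmission and the propagation steps are unitary, so the total intensity is conserved, which makes a convenient sanity check.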

In this forward model, the image formation through the optical system and the detection are modeled as well. The objective lens converts the exit wave *ψ*_{N+1} into the wave *ψ*_{N+2} at the detector plane where the intensity *I* is recorded:

*I* = |*ψ*_{N+2}|²,

with *I* the image intensity at the detector.

The problem is discretized by sampling the projected potential in intervals of Δ_{z} in the vertical direction, with Δ_{z} equal to the distance between successive transmission functions, indexed by the index *j*. The sampling distance in the horizontal directions is Δ_{xy} and, for notational simplicity, these data points are indexed with a single index *k*.

#### 2.2. Artificial neural network formulation

The inverse problem is approached as a gradient-based optimization where the total error *E* is minimized. Without loss of generality, *E* is defined as,

*E* = Σ_{m} Σ_{k} *f*(*I*_{mk}, *J*_{mk}),   (3)

where *m* runs over the various measurement sets, for example over the beam tilts in case of a beam-tilt series, and where *I* is the modeled intensity and *J* the measured intensity. A common choice for *E* is the sum of squared differences (SSD), with

*f* = (*I*_{mk} − *J*_{mk})²,   (4)

but other expressions are used as well in this paper.
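The double sum over measurement sets *m* and pixels *k*, with a pluggable per-pixel function *f*, translates directly into code; a minimal sketch (names illustrative):

```python
import numpy as np

def total_error(I_model, J_meas, f):
    """E = sum over measurement sets m and pixels k of f(I_mk, J_mk)."""
    return sum(np.sum(f(I, J)) for I, J in zip(I_model, J_meas))

def ssd(I, J):
    """The sum-of-squared-differences choice for f."""
    return (I - J)**2
```

Swapping the error metric then amounts to passing a different `f`, which is the flexibility the paper exploits below with SNAD.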

The derivatives of *E* with respect to the potential *V*_{j} in each voxel of the specimen can be computed efficiently if the multislice algorithm is reinterpreted as an artificial neural network (ANN). The ANN associated with the multislice is depicted in Fig. 2. Following [17, 18], the derivatives *∂E*/*∂*(*t*_{jk}*ψ*_{jk}) follow from the so-called backpropagation algorithm [22] with only one extra pass through the multislice algorithm, and are used to retrieve *∂E*/*∂V*_{jk}. Minimization of *E* is achieved by one's gradient-based method of choice [23]. In this paper the conjugate gradient (CG) and alternating directions augmented Lagrangian (ADAL) methods are used.
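The "one extra pass" can be made concrete on a toy version of the scheme. The sketch below is not the paper's GPU implementation: it hand-codes the adjoint of each forward operation (elementwise transmission, FFT-based Fresnel convolution) for an SSD error on a single measurement without a lens, and all array sizes and constants are illustrative:

```python
import numpy as np

def forward(psi0, v, w, P):
    """Multislice forward pass, keeping every intermediate wave for the
    backward pass. v, w: (N, Ny, Nx) real arrays; P: Fresnel propagator."""
    psis = [psi0]
    for vj, wj in zip(v, w):
        t = np.exp(1j * vj - wj)                  # t_j = exp(i(v_j + i w_j))
        psis.append(np.fft.ifft2(P * np.fft.fft2(t * psis[-1])))
    return psis

def backward(psis, v, w, P, J):
    """One extra (backward) pass yields dE/dv_j and dE/dw_j for
    E = sum((I - J)^2), I = |psi_N|^2, via the adjoint of each step."""
    psiN = psis[-1]
    g = 2.0 * (np.abs(psiN)**2 - J) * psiN        # dE/d(conj psi) at output
    gv = [None] * len(v)
    gw = [None] * len(w)
    for j in reversed(range(len(v))):
        # adjoint of the Fresnel convolution: ifft(conj(P) * fft(.))
        g_phi = np.fft.ifft2(np.conj(P) * np.fft.fft2(g))
        t = np.exp(1j * v[j] - w[j])
        tpsi = t * psis[j]
        gv[j] = -2.0 * np.imag(np.conj(g_phi) * tpsi)
        gw[j] = -2.0 * np.real(np.conj(g_phi) * tpsi)
        g = np.conj(t) * g_phi                    # gradient w.r.t. psi_j
    return gv, gw
```

A central-difference check against these analytic gradients is a quick way to validate the adjoint bookkeeping before scaling up.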

By invoking the chain rule further, the derivatives with respect to other imaging parameters can be computed as well. In this paper, specifically the derivatives of *E* with respect to defocus *C*_{1} and the illumination semi-angle *α* are needed. For the computation of *∂E*/*∂C*_{1} the reader is referred to [18]. The influence of *α* is modeled by an extra convolution of the intensity, *I*, with a kernel *h*(*ν*) [24], with *ν* the radial distance in reciprocal space, leading to an expression for *∂E*/*∂α* in which the factor *∂E*/*∂*[*I*_{m} ⊗ *h*]_{k} is obtained from the backpropagation method and thus is treated as known. The imaging parameters, *C*_{1} and *α*, are now estimated with a gradient search method, just like *V*_{jk}.

_{jk}#### 2.3. Regularization

In inverse problems, regularization is often necessary to select a meaningful result from solution space [25–27]. In recent years, total variation (TV) regularization has been applied in various research fields and has demonstrated success due to its suppression of noise and promotion of signal sparsity [28–30]. In this paper the object is retrieved by solving the following TV optimization problem:

min_{V} *E* + *μ* Σ_{i} ‖*D*_{i}*V*‖_{1},   (10)

with ‖.‖_{1} the *ℓ*_{1} norm operator, *V* the projected potential in long vector format, *D*_{i}*V* the discrete gradient of *V* in pixel *i* in the *x*- and *y*-directions, and *μ* a weighting coefficient balancing the error and the regularization terms. When determining a proper value for *μ*, a potentially large range of values spanning an unknown number of orders of magnitude needs to be considered. Therefore, powers of 10 were tried out for *μ*, and *μ* was fixed to 0.1 as it was the highest value considered that yielded a realistic result.

By rewriting Eq. (10) as the augmented Lagrangian,

*L*_{A}(*V*, *U*, *υ*) = *E* + *μ* Σ_{i} ‖*U*_{i}‖_{1} − Σ_{i} *υ*_{i}^{T}(*U*_{i} − *D*_{i}*V*) + (*β*/2) Σ_{i} ‖*U*_{i} − *D*_{i}*V*‖_{2}^{2},

with *υ*_{i} the vector of Lagrangian multipliers, *β* the weighting factor for the penalty term and ‖.‖_{2} the *ℓ*_{2} norm operator, it is readily derived that

*∂L*_{A}/*∂V* = *∂E*/*∂V* + Σ_{i} *D*_{i}^{T}[*υ*_{i} + *β*(*D*_{i}*V* − *U*_{i})],

where *∂E*/*∂V* is obtained by the ANN backpropagation. The potential *V* and the help variables *U*_{i} are updated separately using the non-monotone alternating directions (AD) approach reported in [20], where in iteration *ℓ* + 1 first the help variables *U*_{i}^{(ℓ)} are updated through the shrinkage function,

*U*_{i}^{(ℓ+1)} = sgn(*x*_{i}) max(|*x*_{i}| − *μ*/*β*, 0),  with  *x*_{i} = *D*_{i}*V*^{(ℓ)} + *υ*_{i}^{(ℓ)}/*β*,

then *V*^{(ℓ)} is updated through a steepest-descent step,

*V*^{(ℓ+1)} = *V*^{(ℓ)} − *α*^{(ℓ+1)} *∂L*_{A}/*∂V*,

with *α*^{(ℓ+1)} the step size determined by the non-monotone line search described in [20]; and finally the Lagrangian multipliers are updated through the standard formula [23],

*υ*_{i}^{(ℓ+1)} = *υ*_{i}^{(ℓ)} − *β*(*U*_{i}^{(ℓ+1)} − *D*_{i}*V*^{(ℓ+1)}),

while *β* is increased steadily between iterations.
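A toy one-dimensional instance makes the update sequence (shrinkage, *V*-update, multiplier update) concrete. Below, *E*(*V*) = ½‖*V* − *b*‖² denoises a signal *b*; purely for brevity the *V*-subproblem is solved exactly with a direct linear solve instead of the non-monotone steepest-descent step of [20], and *μ*, *β* and the iteration count are illustrative values, not those of the paper:

```python
import numpy as np

def shrink(x, kappa):
    """Soft-thresholding: closed-form minimizer of the U-subproblem."""
    return np.sign(x) * np.maximum(np.abs(x) - kappa, 0.0)

def tv_adal(b, mu=0.5, beta=2.0, iters=300):
    """1D TV denoising, min_V 0.5*||V-b||^2 + mu*||D V||_1, via
    alternating directions augmented Lagrangian updates."""
    n = len(b)
    D = np.diff(np.eye(n), axis=0)          # forward-difference operator
    A = np.eye(n) + beta * D.T @ D          # system matrix of the V-update
    V = b.copy()
    ups = np.zeros(n - 1)                   # Lagrangian multipliers
    for _ in range(iters):
        U = shrink(D @ V + ups / beta, mu / beta)            # shrinkage
        V = np.linalg.solve(A, b - D.T @ ups + beta * D.T @ U)
        ups = ups - beta * (U - D @ V)                       # multipliers
    return V
```

At convergence *U*_{i} = *D*_{i}*V* and the multipliers absorb the subgradient of the *ℓ*_{1} term, so the fixed point satisfies the optimality condition of the original TV problem.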

#### 2.4. IDPS implementation

This inversion scheme is implemented as IDPS, inverse dynamical photon scattering, and takes advantage of the massive parallelization a graphics processing unit (GPU) offers. It is programmed in the CUDA programming language [31] with the CUFFT and CUBLAS libraries.

#### 2.5. Evaluation and experimental verification

Besides the aforementioned error function *E* in Eq. (3), another criterion is used as well to evaluate the agreement between the simulated data *I* and the experimental data *J*. This so-called *R*-factor is often used in crystallography and is defined on the amplitudes as

*R* = Σ_{k} |√*I*_{k} − √*J*_{k}| / Σ_{k} √*J*_{k}.
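A minimal sketch of an amplitude-based *R*-factor of the kind used in crystallography (the exact normalization used in the paper may differ):

```python
import numpy as np

def r_factor(I, J):
    """Amplitude R-factor between modeled (I) and measured (J) intensities."""
    return np.sum(np.abs(np.sqrt(I) - np.sqrt(J))) / np.sum(np.sqrt(J))
```

Because it compares amplitudes rather than intensities, this figure of merit weighs many small deviations more heavily than a few large ones, which matters for the behavior discussed in Section 3.1.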

The proposed method is verified using the open source data from [32], one image of which is shown in Fig. 3. Two stacked 1951 US Air Force resolution test charts, 110 *μ*m apart, were adopted as specimen. This dataset was acquired with a microscope with a numerical aperture (NA) of 0.1. An LED array was placed sufficiently far away that each individual LED can be considered to illuminate the specimen with a spatially coherent plane wave. Further, 9 bright field and 284 dark field images were recorded with LEDs up to 0.44 illumination NA, providing an effective NA of 0.54. Since the central wavelength of the LEDs is 643 nm, the 0.54 NA corresponds to a lateral resolution of 1.20 *μ*m. For further details the reader is referred to [7, 32].

Rather than using the incoherent Abbe resolution criterion employed in [7], here the coherent resolution criterion is used which is two times larger [33]. The resolution of the reconstructions is evaluated as the center-to-center distance of the bars in the smallest element in the resolution target that is still discernible; this choice results in resolution numbers twice as large as in [7] for the same resolved element.
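The quoted numbers are easy to verify: with the coherent criterion the smallest resolvable period is λ/NA, twice the incoherent Abbe value λ/(2NA). A quick check with the values from the text:

```python
wavelength_um = 0.643          # central LED wavelength, 643 nm
na_effective = 0.54            # illumination NA 0.44 plus objective NA 0.10

coherent = wavelength_um / na_effective          # coherent criterion
incoherent = wavelength_um / (2 * na_effective)  # incoherent (Abbe) criterion

print(round(coherent, 2))      # 1.19, i.e. the ~1.20 um quoted in the text
```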

The image intensities were rescaled proportionally to the squared distance between the LEDs and the field of view, with the distance to the LED at zero illumination NA taken as 1. Furthermore, the raw images were found to exhibit a noisy background in the dark field measurements due to scattering from within the glass of the resolution targets rather than off the edges of the opaque bars only. Since this is not included in the imaging model, this background was removed by multiplying the intensities *J* with a sigmoid function with inflection point at 0.25 and a width of 0.0375,

*J* ← *J* / (1 + exp[−(*J* − 0.25)/0.0375]).
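Assuming a logistic form for the sigmoid (the text gives only the inflection point and width), the background suppression can be sketched as:

```python
import numpy as np

def background_suppression(J, inflection=0.25, width=0.0375):
    """Multiply intensities by a logistic sigmoid so that values well
    below the inflection point are strongly suppressed."""
    return J / (1.0 + np.exp(-(J - inflection) / width))
```

Intensities well above the inflection point pass through essentially unchanged, while the weak dark-field background is pushed toward zero.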

Because the intensity range for bright and dark field images differs greatly, the functions *f* in the error *E* are chosen as

*f* = √[(*I*_{mk} − *J*_{mk})² + *ε*_{1}] / (*J*_{mk} + *ε*_{2}),   (20)

and the resulting *E* is called the sum of normalized absolute differences (SNAD). The numerator is a differentiable approximation to the absolute difference, |*I*_{mk} − *J*_{mk}|, and the *ε*_{2}-term in the denominator prevents contributions from the background noise in the dark-field images from dominating *E*. Since SNAD deemphasizes large absolute differences, it is more robust than SSD.
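The SNAD per-pixel function translates directly into code; the *ε* values below are illustrative placeholders, not the ones used in the paper:

```python
import numpy as np

def snad(I, J, eps1=1e-6, eps2=1e-3):
    """Sum of normalized absolute differences: a smoothed |I - J| in the
    numerator, and eps2 keeps near-zero dark-field pixels from dominating."""
    return np.sum(np.sqrt((I - J)**2 + eps1) / (J + eps2))
```

For small *ε*_{1} the numerator behaves like |*I* − *J*| yet stays differentiable at *I* = *J*, which is what a gradient-based optimizer requires.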

Two slices, one for each resolution target, are reconstructed. The starting values for *v* and *w* are set to zero. Since the bars in the resolution targets exhibit full absorption, these targets are amplitude objects with the phase below the bars undefined. Therefore, in Sec. 3 mainly reconstructed amplitudes, exp(−*w*), are shown. At the very end the reconstructed phase is also given in order to fully document our results.

## 3. Results

In this section first the influence of the error metric is tested; then it is investigated whether estimation of the nuisance parameters *C*_{1} and *α* improves the results; and finally regularization is introduced. This agenda is accomplished by looking into four different cases:

1. SSD error metric with CG optimization, without estimation of *C*_{1} and *α*;
2. SNAD error metric with CG optimization, without estimation of *C*_{1} and *α*;
3. SNAD error metric with CG optimization, with estimation of *C*_{1} and *α*;
4. SNAD error metric with ADAL optimization and TV regularization, with estimation of *α*.

These settings and their results are summarized in Table 1.

The starting guess is fixed to zero for all cases and the number of iterations for cases 1, 2, 3 and 4 is set to 4096, 2048, 2048 and 1024, respectively.

#### 3.1. Conjugate gradients, without regularization

First the effect of SSD and SNAD—Eqs. (4) and (20), respectively—is investigated on a reconstruction of the region within the ROI in Fig. 3. No regularization was applied and thus a Polak-Ribière conjugate gradient optimization was used.

The SSD reconstruction is shown in Fig. 4 and is characterized by heavy, resolution-limiting, high-frequency noise. This is in line with the observation in [34] where an intensity-based cost function, corresponding to the SSD error metric in this paper, was found to yield similar artifacts. Element 4 of group 8 of the resolution target can be resolved, corresponding to a resolution of 2.76 *μ*m. In Fig. 5(a) it can be observed that although the SSD steadily decreases with iteration number, the *R*-factor behaves erratically but settles at 36.9% in the end.

This erratic behavior is explained by the different importance that the SSD and the *R*-factor attribute to large deviations between the measurements *J* and the model *I*. While the former weighs large deviations heavily and has them reduced at the cost of introducing many small ones, the latter is dominated by the accumulation of these small deviations because of the square root in its expression.

As illustrated in Fig. 6, when reconstructed with SNAD, the noise is considerably reduced and consequently element 4 of group 9 can be resolved, corresponding to a resolution of 1.38 *μ*m. Fig. 5(b) shows a steady decrease of both SNAD and the *R*-factor, and the latter is, at 16.4%, considerably lower than with SSD.

In Fig. 7 the results for SNAD optimization combined with the optimization of *C*_{1} and *α* are shown. The defocus *C*_{1} and illumination angle *α* were estimated in a two-step process. First the object and *C*_{1} were estimated simultaneously while *α* was fixed to 0, yielding *C*_{1} = 22 *μ*m, in correspondence with [7]. Then the object and *α* were reconstructed simultaneously while *C*_{1} was fixed to 22 *μ*m, yielding *α* = 4.3°. Note how the cross-talk between slices is significantly reduced, which is most apparent in the upper left corner. Fig. 5(b) and Table 1 furthermore show that the *R*-factor has improved further to 11.4%.

#### 3.2. Alternating directions augmented Lagrangian, with regularization

Here, a regularized solution is estimated with ADAL using the SNAD error metric; the results are shown in Figs. 8 and 9.

The object and *α* were estimated simultaneously, yielding *α* = 4.7°, while *C*_{1} was fixed to the value retrieved from the previous non-regularized optimization. Although at 14.6% the *R*-factor is higher than the best non-regularized result, the background is smoother and the spurious oscillations apparent in Fig. 7 have diminished greatly. This is especially apparent in group 8. Element 4 of group 9 of the resolution target can be resolved and, correspondingly, a resolution of 1.38 *μ*m is achieved.

In the interest of reporting the results completely, the reconstructed phases are displayed in Fig. 10. It can be seen that in the glass parts of the resolution targets the phases are reconstructed relatively evenly. Underneath the opaque bars, however, the phase often exhibits rapid oscillations and abrupt changes. Since the phase is undefined in those areas, no interpretation of this behavior is attempted here. These phase images are similar to the two-dimensional reconstructions of the same type of object given in [35]. In [36] an early version of IDPS was used to retrieve the two-dimensional phase of a biological specimen from a focal series with a lowest intensity of 6% of the vacuum level, showing that effective phase reconstruction is possible when the attenuation is not too strong.

## 4. Discussion and conclusion

In this paper, an algorithm is proposed, inverse dynamical photon scattering (IDPS), that uses as a forward model the propagation of the optical wave through the sample and the objective lens with the multislice or, equivalently, the beam propagation method. By recasting the forward model as an artificial neural network, an error metric can be chosen freely, and the derivatives of this metric with respect to the unknown values of the discretized object become available at the low computational cost of one extra pass through the network. These gradients are deployed in a derivative-based optimization scheme to retrieve a three-dimensional reconstruction of the specimen; in this paper CG and the alternating directions augmented Lagrangian (ADAL) method were chosen. IDPS was implemented on a graphics processing unit.

IDPS is verified using open source data [32] of bright-field and dark-field images recorded with illumination angles far surpassing the microscope’s NA. The results are obtained from an initial guess of zero and thus no preprocessing steps such as light field refocusing are necessary.

The free choice of error metric was shown to be important as the standard choice of sum of squared differences (SSD) led to a high amount of resolution limiting noise, and to erratic behavior in the *R*-factor. The alternative sum of normalized absolute differences (SNAD) yielded a better-behaved and lower *R*-factor, as well as a better resolution. This poses a problem for Gerchberg-Saxton-type algorithms since they can be thought of as implicitly minimizing the sum of squared differences [1, 7], but do not have explicit access to their error metric.

Estimation of certain nuisance parameters—the focal value and illumination angle in this case—together with the specimen itself, improved the result further. The *R*-factor decreased even more and the cross-talk between layers was diminished.

Finally, a total variation regularization of the object was achieved with an alternating direction augmented Lagrangian approach. Despite having a somewhat higher *R*-factor, the reconstruction had the same resolution as without regularization and was considerably smoother with strongly diminished spurious oscillations. Adopting a more advanced sparsity preserving algorithm, like a dictionary learning based K-SVD method [37], might improve the results further.

## Acknowledgments

Financial support from the Carl Zeiss Foundation is gratefully acknowledged by all authors. C.T. Koch also acknowledges the German Research Foundation (DFG, Grant no. KO 2911/7-1). The authors sincerely acknowledge the helpful discussion and the open source data provided by L. Waller and L. Tian, University of California, Berkeley.

## References and links

**1. **J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. **21**, 2758–2769 (1982). [CrossRef] [PubMed]

**2. **R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik **35**, 237–246 (1972).

**3. **J. M. Rodenburg and H. M. Faulkner, “A phase retrieval algorithm for shifting illumination,” Appl. Phys. Lett. **85**, 4795–4797 (2004). [CrossRef]

**4. **A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy **109**, 1256–1262 (2009). [CrossRef] [PubMed]

**5. **M. Holler, A. Diaz, M. Guizar-Sicairos, P. Karvinen, E. Färm, E. Härkönen, M. Ritala, A. Menzel, J. Raabe, and O. Bunk, “X-ray ptychographic computed tomography at 16 nm isotropic 3D resolution,” Sci. Rep. **4**, 3857 (2014). [CrossRef] [PubMed]

**6. **A. M. Maiden, M. J. Humphry, and J. M. Rodenburg, “Ptychographic transmission microscopy in three dimensions using a multi-slice approach,” J. Opt. Soc. Am. A **29**, 1606–1614 (2012). [CrossRef]

**7. **L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica **2**, 104–111 (2015). [CrossRef]

**8. **J. M. Cowley and A. F. Moodie, “The scattering of electrons by atoms and crystals. I. A new Theoretical approach,” Acta Cryst. **10**, 609–619 (1957). [CrossRef]

**9. **J. G. Allpress, E. A. Hewat, A. F. Moodie, and J. V. Sanders, “n-Beam lattice images. I. experimental and computed images from W_{4}Nb_{26}O_{77},” Acta Cryst. **A28**, 528–536 (1972). [CrossRef]

**10. **D. F. Lynch and M. A. O’Keefe, “n-Beam lattice images. II. methods of calculation,” Acta Cryst. **A28**, 536–548 (1972). [CrossRef]

**11. **P. Goodman and A. F. Moodie, “Numerical evaluation of N-beam wave functions in electron scattering by the multislice method,” Acta Cryst. **A30**, 280–290 (1974). [CrossRef]

**12. **E. J. Kirkland, *Advanced Computing in Electron Microscopy* (Springer, 2010). [CrossRef]

**13. **J. Van Roey, J. van der Donk, and P. E. Lagasse, “Beam-propagation method: analysis and assessment,” J. Opt. Soc. Am. **71**, 803–810 (1981). [CrossRef]

**14. **A. Migukin, V. Katkovnik, and J. Astola, “Advanced multi-plane phase retrieval using graphic processing unit: augmented lagrangian technique with sparse regularization,” Proc. SPIE **8429**, 84291N (2012). [CrossRef]

**15. **J. R. Fienup, “Phase-retrieval algorithms for a complicated optical system,” Appl. Opt. **32**, 1737–1746 (1993). [CrossRef] [PubMed]

**16. **A. J. D’Alfonso, A. J. Morgan, A. W. C. Yan, P. Wang, H. Sawada, A. I. Kirkland, and L. J. Allen, “Deterministic electron ptychography at atomic resolution,” Phys. Rev. B **89**, 064101 (2014). [CrossRef]

**17. **W. Van den Broek and C. T. Koch, “Method for retrieval of the three-dimensional object potential by inversion of dynamical electron scattering,” Phys. Rev. Lett. **109**, 245502 (2012). [CrossRef]

**18. **W. Van den Broek and C. T. Koch, “General framework for quantitative three-dimensional reconstruction from arbitrary detection geometries in tem,” Phys. Rev. B **87**, 184108 (2013). [CrossRef]

**19. **C. T. Koch and W. Van den Broek, “Measuring three-dimensional positions of atoms to the highest accuracy with electrons,” C. R. Phys. **15**, 119–125 (2014). [CrossRef]

**20. **C. Li, W. Yin, H. Jiang, and Y. Zhang, “An efficient augmented lagrangian method with applications to total variation minimization,” Comput. Optim. Appl. **56**, 507–530 (2013). [CrossRef]

**21. **S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends. Mach. Learn. **3**, 1–122 (2011). [CrossRef]

**22. **C. M. Bishop, *Neural Networks for Pattern Recognition* (Oxford University, 1995).

**23. **J. Nocedal and S. Wright, *Numerical Optimization* (Springer, 2010).

**24. **C. T. Koch, “A flux-preserving non-linear inline holography reconstruction algorithm for partially coherent electrons,” Ultramicroscopy **108**, 141–150 (2008). [CrossRef]

**25. **H. W. Engl, M. Hanke, and A. Neubauer, *Regularization of Inverse Problems* (Springer, 1996). [CrossRef]

**26. **A. Neumaier, “Solving ill-conditioned and singular linear systems: A tutorial on regularization,” SIAM Rev. **40**, 636–666 (1998). [CrossRef]

**27. **Q. Xu, A. Sawatzky, E. Roessl, M. A. Anastasio, and C. O. Schirra, “Sparsity-regularized image reconstruction of decomposed K-edge data in spectral CT,” Phys. Med. Biol. **59**, N65–N79 (2014). [CrossRef] [PubMed]

**28. **E. Y. Sidky and X. Pan, “Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization,” Phys. Med. Biol. **53**, 4777 (2008). [CrossRef] [PubMed]

**29. **B. Goris, W. Van den Broek, K. Batenburg, H. H. Mezerji, and S. Bals, “Electron tomography based on a total variation minimization reconstruction technique,” Ultramicroscopy **113**, 120–130 (2012). [CrossRef]

**30. **Q. Xu, E. Y. Sidky, X. Pan, M. Stampanoni, P. Modregger, and M. A. Anastasio, “Investigation of discrete imaging models and iterative image reconstruction in differential x-ray phase-contrast tomography,” Opt. Express **20**, 10724–10749 (2012). [CrossRef] [PubMed]

**31. **NVIDIA, *CUDA Toolkit Documentation*, v6.5 ed. (NVIDIA, 2014). http://docs.nvidia.com/cuda/.

**32. **L. Tian, “3D FPM on LED array microscope,” (2015). https://sites.google.com/site/leitianoptics/open-source.

**33. **X. Ou, R. Horstmeyer, G. Zheng, and C. Yang, “High numerical aperture Fourier ptychography: principle, implementation and characterization,” Opt. Express **23**, 3472–3491 (2015). [CrossRef] [PubMed]

**34. **L.-H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express **23**, 33214–33240 (2015). [CrossRef]

**35. **A. Migukin, M. Agour, and V. Katkovnik, “Phase retrieval in 4f optical system: background compensation and sparse regularization of object with binary amplitude,” Appl. Opt. **52**, A269–A280 (2013). [CrossRef] [PubMed]

**36. **W. Van den Broek, X. Jiang, and C. T. Koch, in *Proceedings of the 2015 Microscopy Conference*, (DGE – German Society for Electron Microscopy e. V., 2015), pp. 467–468.

**37. **M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation,” IEEE Trans. Signal Process. **54**, 4311–4322 (2006). [CrossRef]