## Abstract

Flexible fiber-optic endoscopes provide a solution for imaging at depths beyond the reach of conventional microscopes. Current endoscopes require focusing and/or scanning mechanisms at the distal end, which limit miniaturization, frame rate, and field of view. Alternative wavefront-shaping based lensless solutions are extremely sensitive to fiber bending. We present a lensless, bend-insensitive, single-shot imaging approach based on speckle correlations in fiber bundles that does not require wavefront shaping. Our approach computationally retrieves the target image by analyzing a single camera frame, exploiting phase information that is inherently preserved in propagation through conventional fiber bundles. Unlike conventional fiber-based imaging, planar objects can be imaged at variable working distances, the resulting image is unpixelated and diffraction-limited, and miniaturization is limited only by the fiber diameter.

© 2016 Optical Society of America

## 1. Introduction

Flexible optical endoscopes are an important tool in biomedical investigations and clinical diagnostics. They enable imaging at depths where scattering prevents noninvasive microscopic investigation. An ideal microendoscopic probe should be flexible, allow real-time diffraction-limited imaging at various working distances from its distal end, and maintain a minimal cross-sectional footprint [1, 2].

Single-mode fibers (SMF) can be used as the smallest-diameter light guides for endoscopic imaging. However, in order to obtain two-dimensional (2D) images, a mechanical scanning head [1, 2] or a spectral disperser [3, 4] must be mounted at the distal end of the fiber, complicating the endoscope fabrication and sacrificing frame rate, probe size, or field of view (FOV). For example, mechanical scanning heads possess a diameter of the order of 1 mm [5], and the 2D spectral dispersers of Refs. [3, 4] are of cm dimensions. While GRIN lens solutions with diameters as small as 350 microns have been reported [6], their FOV is usually smaller than their diameter (70 micrometers in the above example). In addition, GRIN lenses suffer from aberrations and a fixed working distance, and when coupled to a fiber bundle, exhibit the typical pixelation artifacts of such bundles. Alternatively, the different modes of a multimode fiber (MMF) can deliver 2D image information, if the complex phase randomization and mode mixing are measured and compensated for, computationally or via wavefront shaping [7–14]. Unfortunately, the extreme sensitivity of the wavefront correction to any movement or bending of the fiber necessitates direct access to the distal end for recalibration, or precise knowledge of the bent shape [13].

A robust, widely used, and commercially available type of imaging endoscope is based on fiber bundles, constructed from thousands of individual cores closely packed together, with each core carrying the information of one image pixel. For example, commercially available bundles pack 3,000 cores in a total outer diameter of 250 microns [15]. Imaging is performed in a straightforward manner if the target object is positioned immediately adjacent to the bundle’s facet (Fig. 1(a)) [1, 16]. While straightforward to implement, conventional fiber bundle endoscopes suffer from limited resolution and pixelation artifacts dictated by the individual core and cladding diameters, and from a fixed working distance, which locates the imaging plane directly at the bundle’s facet unless distal optics are added. When the object is placed away from this fixed image plane, only a blurred, seemingly information-less image appears at the proximal facet (Fig. 1(b)).

The fundamental reason for the limitation to a fixed imaging plane is that spatial phase information is scrambled upon propagation through the bundle, due to the different random core-to-core phase delays. Although these phase distortions can be measured and compensated for using a spatial light modulator (SLM) [17–22], the sensitivity of the phase correction to fiber bending severely limits applicability, in a similar manner to the case of MMF. As a result, most conventional bundle imaging techniques work under the assumption that phase information is lost and thus rely on intensity-only information transmission by each core.

Despite these seemingly fundamental restrictions, here we take advantage of important spatial phase information that is nevertheless retained in the speckle patterns produced by propagation through any fiber bundle. We demonstrate that this information can be used to overcome both the fixed working distance and the bend-sensitivity limitations of current approaches. Moreover, we demonstrate that all that is required to utilize this information is to computationally analyze a single image of the speckle intensity pattern that is transmitted through the fiber. We thus present a simple approach that performs widefield imaging of planar objects at a large range of working distances from a bare fiber bundle, using only a conventional camera, without any phase correction, pre-calibration, or distal optics. Our single-shot, diffraction-limited, and pixelation-free imaging technique is based on exploiting inherent angular speckle correlations, and is inspired by the recent advancements in imaging through opaque scattering barriers [23–25], and by methods used to overcome atmospheric turbulence in astronomy [26].

## 2. Principle

The underlying principle of our technique is presented in Fig. 1. The simplest scenario is that of a bundle composed of single-mode cores (the case of MMF cores is treated below). Light propagation in such a bundle is characterized by the fact that each core, *i*, in the bundle preserves the intensity of the light coupled to it, but randomizes the transmitted phase by adding a different phase delay, *dϕ _{i}*, in each core. Therefore, a point source that is placed at a distance *U* from the bundle input facet (the object plane) will produce a speckle pattern at a distance *V* from the bundle output facet (Fig. 1(b)), due to the random phase pattern added to the otherwise spherical wavefront. A second point source placed at the same object plane, but shifted in transverse position by a distance *δX* relative to the first point source, will produce a nearly identical speckle pattern at the image plane, but shifted by *δY* = *δX* · *V/U*, due to the angular tilt of *δX/U* of the input wavefront (Fig. 1(b)). Thus, within the angular range in which the two speckle patterns are highly correlated, they present a shift-invariant point spread function (PSF) of the fiber bundle. This angular range is analogous to the isoplanatic patch in adaptive optics [27], and to the angular ’memory-effect’ for speckle correlations in scattering media [28, 29]. For an ideal fiber bundle, with randomly positioned single-mode cores and no core-to-core coupling, the angular correlation range is essentially the core’s numerical aperture (NA) (see derivation in Appendix C and discussion below).

As a direct result, when an object that is contained within this angular range is illuminated by spatially incoherent illumination, the light from every point on the object forms correlated, but shifted, speckle patterns at a distance from the output facet (Fig. 1(c)). The image of the light intensity at this plane will be the intensity sum of these identical shifted speckle patterns. Building on the recent results in imaging through opaque barriers [24], the image of the object itself can be computationally recovered from the autocorrelation of the speckle intensity image (Fig. 1(c)). The mathematical justification for this result is straightforward: due to the angular speckle correlations, the image of the light intensity measured far enough from the output facet can be described by a simple convolution between the object’s intensity pattern *O*(*r*) and the single (unknown) speckle pattern *PSF*(*r*) [24]:

$$I(r) = O(r) \ast PSF(r) \qquad (1)$$

Taking the autocorrelation of *I*(*r*) gives:

$$I(r) \star I(r) = \left[O(r) \star O(r)\right] \ast \left[PSF(r) \star PSF(r)\right] \qquad (2)$$

Since the autocorrelation of a random speckle pattern, *PSF*(*r*) ★ *PSF*(*r*), is a sharply-peaked function having a peak with the width of a diffraction-limited spot, the autocorrelation of the raw speckle image, *I*(*r*) ★ *I*(*r*), will approximate the autocorrelation of the object itself (up to a statistical average over the number of captured speckle grains and a constant background term [24], see discussion). Thus, the autocorrelation of a single camera image of the light propagated through the bundle is essentially identical to the target object’s autocorrelation, and one can directly reconstruct the original object from this autocorrelation using a phase-retrieval algorithm [23, 24, 30] (Fig. 1(c)). While the object’s image is reconstructed, its distance and lateral position remain unknown due to the insensitivity of the autocorrelation to lateral shifts, and its orientation remains unknown due to fiber bending.
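The convolution relation above can be checked numerically. The following is a minimal sketch with a synthetic random-phase speckle PSF and a two-point object; all sizes and parameters are illustrative choices, not experimental values:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256

def autocorr(img):
    """Centered autocorrelation via the Wiener-Khinchin theorem (periodic boundaries)."""
    img = img - img.mean()
    return np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(img)) ** 2).real)

# Synthetic speckle intensity PSF: random phases over a circular aperture.
yy, xx = np.mgrid[:N, :N]
aperture = ((yy - N // 2) ** 2 + (xx - N // 2) ** 2 < 20 ** 2).astype(float)
field = np.fft.fft2(np.fft.ifftshift(aperture * np.exp(2j * np.pi * rng.random((N, N)))))
psf = np.abs(field) ** 2

# Incoherent object: two points separated by 16 pixels.
obj = np.zeros((N, N))
obj[128, 120] = obj[128, 136] = 1.0

# Camera image: incoherent sum of shifted speckle patterns, i.e. I = O * PSF
# (circular convolution via the FFT).
I = np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)).real

ac = autocorr(I)
c = N // 2
# The object's autocorrelation has side peaks at +/-16 pixels; they should stand
# out against the speckle-noise background of the image autocorrelation.
peak, background = ac[c, c + 16], ac[c, c + 60]
print(peak > background)
```

The side peak at the known two-point separation rises above the statistical background, which is the property the phase-retrieval step relies on.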

## 3. Results

#### 3.1. Single-shot imaging via speckle correlations

To test the proposed speckle-based single-shot approach we performed several proof-of-concept experiments, whose results are presented in Figs. 2 and 3, using the setup depicted in Fig. 1(c). In these experiments a target object illuminated by a spatially incoherent laser source was placed at various distances (varying between 5–20 mm) from a 530 *μm*-diameter fiber bundle having 4,500 cores (see Methods and Appendix A). The image of the object was reconstructed from a single image of the light pattern measured at a small distance (several millimeters) from the bundle's proximal end.

Figure 2 gives a comparison between our technique and conventional lensless imaging through a fiber bundle. When the object is located at a distance from the bundle’s distal facet, no information is obtainable in the conventional approach (Fig. 2(c)), whereas in our technique the object’s image is retrieved with diffraction-limited resolution (Fig. 2(d)). Several additional experimental examples are presented in Fig. 3, which presents the raw camera speckle images, their autocorrelations, and the images reconstructed from these autocorrelations, side-by-side with the original objects. The technique is not restricted to transmission geometry, and works equally well in reflection geometry, i.e. when the light source is placed adjacent to the bundle end (and in principle can be provided by the fiber itself), as is demonstrated in Appendix B.

The resolution of this speckle-based technique is dictated at far enough working distances (see discussion) by the speckle grain dimensions, which are diffraction-limited [24]. Thus it provides the same diffraction-limited resolution as an ideal aberration-free optical system having the same aperture diameter [26]. In Fig. 4 we experimentally characterize this imaging resolution as a function of the object’s distance from the bundle’s facet, *U* (see Methods), and compare it to the resolution of a conventional bundle-based endoscope with and without distal optics. The speckle grain size, *δx* (and thus the resolution), can be estimated by:

$$\delta x \approx \sqrt{\left(\lambda U / D_{bundle}\right)^2 + \left(\lambda / NA\right)^2}$$

where *λ* is the wavelength, $D_{bundle}$ is the fiber bundle’s diameter, *U* is the distance between the object and the fiber facet, and *NA* is the numerical aperture of a single core. It can be seen that the gradually varying resolution provides a large range of working distances, as we demonstrate in Figs. 4(d)–4(h). Given that the minimum working distance for optimum performance is $U_{min} = D_{bundle}/NA$ (see discussion), the minimal resolution is $\delta {x}_{\mathit{min}}=\sqrt{2}\cdot \lambda /\mathit{NA}$.
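The resolution scaling above can be evaluated numerically. The bundle diameter below is that of the fiber of Figs. 2–4; the single-core NA value is an illustrative assumption, not a specification taken from the text:

```python
import math

# Resolution vs. working distance from the speckle-grain estimate above.
wavelength = 532e-9   # m
D_bundle = 0.53e-3    # m, bundle diameter (fiber of Figs. 2-4)
NA = 0.2              # assumed single-core numerical aperture (placeholder)

def speckle_grain(U):
    """Estimated speckle grain size (imaging resolution) at working distance U."""
    return math.hypot(wavelength * U / D_bundle, wavelength / NA)

U_min = D_bundle / NA                    # minimal optimal working distance
dx_min = math.sqrt(2) * wavelength / NA  # resolution at U_min

print(f"U_min = {U_min * 1e3:.2f} mm, dx_min = {dx_min * 1e6:.2f} um")
```

At `U = U_min` the two terms under the square root are equal, reproducing the $\sqrt{2}\cdot\lambda/NA$ limit quoted above.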

#### 3.2. Applicability to broadband or spatially coherent illumination

To consider the applicability of the approach to broadband illumination, and as a step towards fluorescence imaging, the experiments of Figs. 2 and 3 were repeated with a broadband illumination source (800nm central wavelength, 10nm spectral width). The results are presented in Fig. 5. Broadband illumination can be used without appreciably affecting the performance of the technique as long as the illumination bandwidth is narrower than the fiber bundle's speckle spectral correlation bandwidth [7, 31], since the speckle patterns produced by different wavelengths within this bandwidth stay well correlated. This spectral bandwidth is Fourier-transform related to the time-delay spread induced by modal dispersion and inhomogeneity between different cores, caused by fabrication or bending. Figure 5(d) presents the characterization of the spectral correlation width of the imaging bundle used in this experiment. This straightforward characterization procedure is performed by recording speckle patterns at different narrow illumination wavelengths, and calculating the cross-correlation between these patterns [7]. The broadest spectral correlation width is obtained for an unbent bundle with cores that exhibit no modal dispersion, e.g. SMF cores (rather than MMF cores [31]). Working with an illumination bandwidth that is larger than the spectral correlation width is possible and will not affect the imaging resolution, but will reduce the contrast of the raw camera image and its autocorrelation, as was studied numerically in [24]. The limited optimal spectral bandwidth is an important limitation of the technique when fluorescence imaging is considered. Interestingly, even in the commercially available fibers used in this work, the spectral correlation bandwidth is about 2–3 times smaller than the fluorescence bandwidth of most quantum-dot markers, and similar to some of the narrower fluorescence bandwidths of rare-earth-doped luminescent particles.
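The spectral-correlation characterization described above can be sketched as follows. The speckle frames here are synthetic stand-ins for camera data, with an assumed exponential decorrelation versus wavelength detuning; only the cross-correlation procedure itself follows the text:

```python
import numpy as np

rng = np.random.default_rng(1)

def corr(a, b):
    """Normalized zero-lag cross-correlation between two speckle frames."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

# Synthetic stand-in for frames recorded at successive narrow wavelengths:
# each frame decorrelates smoothly from the first one (assumed decay model).
wavelengths_nm = np.arange(795, 806)
base = rng.random((11, 64, 64))
frames = [base[0]]
for k in range(1, 11):
    mix = np.exp(-k / 2.0)
    frames.append(mix * base[0] + np.sqrt(1 - mix ** 2) * base[k])

c = [corr(frames[0], f) for f in frames]
# Spectral correlation width: detuning at which the correlation drops below 1/2.
width_nm = next(dnm for dnm, ci in zip(wavelengths_nm - 795, c) if ci < 0.5)
print(width_nm)
```

On real data, `frames` would be the recorded speckle images and the extracted width corresponds to the curve of Fig. 5(d).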

In the case of spatially coherent illumination, one can follow a derivation similar to Eqs. (1) and (2) for objects placed in the far field of the bundle, by replacing all of the light-intensity terms with their complex field amplitudes (see derivation in Appendix F). Interestingly, instead of having to measure the complex speckle field and calculate its autocorrelation, one can simply use the intensity image of the bundle's facet, which is related to this coherent autocorrelation by a Fourier transform, due to the Wiener-Khinchin theorem. Thus, a complex coherently-illuminated object can be reconstructed via phase retrieval from a single image of the bundle's facet intensity, which is the object's diffraction pattern, in a manner analogous to x-ray coherent diffraction imaging [32]. Appendix F provides a simple experimental proof of principle for this approach. Objects that are placed closer to the facet may be reconstructed via Fresnel phase retrieval [33], which may also provide depth information and open the possibility of imaging non-planar objects, where spatially coherent imaging is possible. One can also image closer objects with spatially coherent light by averaging over several coherent illumination realizations, as was recently demonstrated by Edrei et al. [34]. Unfortunately, imaging non-planar fluorescent objects is impossible in the presented implementation (see discussion).
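The Wiener-Khinchin relation invoked above can be verified numerically on a synthetic coherently illuminated aperture (the object and its size are arbitrary illustrative choices):

```python
import numpy as np

# Coherent-case sketch: the intensity at the bundle facet is the object's
# far-field diffraction pattern, and its inverse Fourier transform equals the
# complex field's (circular) autocorrelation.
N = 128
obj = np.zeros((N, N), dtype=complex)
obj[60:68, 60:64] = 1.0  # simple coherently illuminated aperture

facet_intensity = np.abs(np.fft.fft2(obj)) ** 2  # what the camera records
field_autocorr = np.fft.ifft2(facet_intensity)   # coherent autocorrelation

# Check one lag against a direct (circular) autocorrelation sum:
# field_autocorr[d] = sum_x conj(f[x]) * f[x + d].
direct_lag = np.sum(np.conj(obj) * np.roll(obj, -1, axis=0))  # lag (1, 0)
print(np.isclose(field_autocorr[1, 0], direct_lag))
```

This is the relation that allows reconstructing a coherently illuminated object via phase retrieval directly from the facet-intensity image.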

## 4. Discussion

We have presented a widefield imaging technique that offers diffraction-limited imaging resolution at a large range of working distances without the use of any distal optics. The simple and calibration-free technique is straightforward to implement, as it utilizes essentially the same setup already used in a conventional fiber bundle endoscope; all that is required is to shift the camera imaging plane away from the bundle's proximal facet. Unlike conventional microendoscopes, our technique is aberration-free and does not suffer from pixelation artifacts. Compared to novel approaches that are based on active correction of the fiber wavefront distortions [11, 13, 17, 19], our technique is insensitive to fiber bending and works inherently with spatially incoherent illumination in a single shot, without the need for scanning. The presented experiments provide a proof of principle using planar, high-contrast test targets, and significant challenges need to be overcome to apply the technique in biomedical imaging applications. These include, most fundamentally, the lack of axial resolution for imaging three-dimensional objects (see discussion below), and the limited spectral acceptance. Incorporation of the illumination source into the fiber is a more straightforward technical challenge.

The computational retrieval of an image from a speckle field dramatically alters the influence of the bundle parameters on the imaging performance compared to conventional direct imaging. The major differences are that: (1) The diameter of each core and the spacing between the cores, which conventionally limit the imaging resolution and induce pixelation artifacts, do not directly affect the resolution in the speckle-based approach, which is determined by the total diameter of the bundle; (2) The number of cores, which conventionally affects only the number of resolution cells, also dictates the number of speckles in a single image that can be used for calculating the autocorrelation function. Therefore, local defects that are naturally present in some bundle cores, and conventionally lead to spatially localized loss of information, translate in our technique only to a slightly reduced number of speckles. A too low number of speckles can however lead to insufficient ensemble averaging, which in turn reduces the signal-to-noise ratio (SNR) of the autocorrelation (the autocorrelation background statistical noise is inversely proportional to $(N_{speckles})^{1/2}$ [24], see Appendix G). This is especially important when imaging large objects whose angular dimensions are comparable to the bundle's NA, since the larger spatial coordinates in the autocorrelation have less spatial averaging. Another challenging scenario is the imaging of objects containing a large number of bright resolution cells. In this scenario both the raw image contrast and the autocorrelation contrast would be low, since they are inversely proportional to the number of bright resolution cells [24]. These potential difficulties may be overcome by averaging the autocorrelation over multiple shots of different uncorrelated speckle patterns, as is done in stellar speckle interferometry [26], which can be easily performed in many ways. The simplest approach is to slightly move or bend the bundle, as is demonstrated experimentally in Fig. 5(c). Alternative approaches for ensemble averaging include using orthogonal polarizations (Appendix D), or different spectral bands (Fig. 5(d)). Still, even with a single shot and only a few thousand cores, we have demonstrated that the spatial averaging is sufficient to perform imaging of simple test targets (Figs. 2 and 3); (3) The FOV of our technique is not fixed by the bundle outer diameter, but is limited at large enough working distances by the periodicity of the speckle patterns generated by the ordered 'grating-like' arrangement of cores found in most fiber bundles (including the ones used in this work). In this case, the FOV angular range is fixed at $\theta_{FOV} = \lambda/2D_{intercore}$, where $D_{intercore}$ is the inter-core distance (7.5 *μm* in the fiber used for Figs. 2–4), and the $FOV = U \cdot \theta_{FOV} = \lambda U/2D_{intercore}$ itself thus scales with the working distance, *U*. The periodicity of the speckle pattern limits the FOV since multiple objects occurring in different periods of the speckle (but still inside the NA of the fiber) will be recovered on top of one another. To demonstrate this limitation we show in Fig. 6 how this periodicity is apparent when the full range of the autocorrelation of the object of Fig. 2 is displayed. The angular spacing between the replicas in the autocorrelation is ∼ 74 *mrad*, which matches $\lambda/D_{intercore} = 0.532\,\mu m / 7.5\,\mu m \approx 71\,mrad$. To avoid overlap between these replicas, the FOV is defined as half of this distance. Figures 6(b)–6(d) show an experimental reconstruction of an object that spans > 85% of this FOV. Interestingly, considering that the imaging resolution at large enough distances is given by $\delta x \approx \lambda U / D_{bundle}$, one obtains for a periodic arrangement of cores that the number of effective resolution cells is proportional to the number of cores: $(FOV/\delta x)^2 \approx (D_{bundle}/2D_{intercore})^2 = N_{cores}/4$. While in the common ordered arrangement of cores analyzed above the FOV is limited by the presence of replicas in the speckle pattern, for a randomly distributed core arrangement [35] the situation will resemble that obtained in imaging through scattering layers [24]. In such a scenario the angular FOV will be given by $\theta_{FOV} \approx \lambda/d_{core}$, where $d_{core}$ is the effective diameter of the light intensity appearing from a single core at the fiber facet [35]. $d_{core}$ thus limits the memory-effect FOV in the same manner that the scattering medium's thickness, *L*, limits the memory-effect range in scattering media: $\theta_{FOV} \approx \lambda/\pi L$. The reason for this analogy between core diameter and scattering-medium thickness is that the core diameter gives the effective transverse spread of light at the output facet of the fiber when a narrow pencil-like beam illuminates the fiber input. This is exactly the analog of the size of the diffusive halo that appears at the output facet of a diffusive medium for a pencil-like input beam at the input facet [7, 28]. Interestingly, for single-mode cores this mode field diameter (MFD) is roughly equal to $d_{core} \approx \lambda/NA$, and thus the FOV is given by the NA of the cores (see Appendix C). In this special case of randomly positioned single-mode cores, all of the light that is guided by the bundle cores is within the memory-effect range and can contribute to the autocorrelation [35]. When multimode cores are considered (as is most commonly the case in commercial imaging bundles), $d_{core} > \lambda/NA$ and thus $\theta_{FOV} \approx \lambda/d_{core} < NA$. In addition, any cross-talk between neighboring cores, which conventionally reduces resolution and contrast, will affect the FOV by reducing the speckle angular correlation width (see derivation in Appendix C).
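The FOV and resolution-cell relations above can be checked numerically with the quoted bundle parameters (the working distance is an arbitrary example):

```python
# Field of view and effective resolution-cell count for an ordered fiber
# bundle, using the parameters of the fiber of Figs. 2-4.
wavelength = 532e-9    # m
D_intercore = 7.5e-6   # m, inter-core distance
D_bundle = 0.53e-3     # m, bundle diameter
U = 10e-3              # m, example working distance

theta_fov = wavelength / (2 * D_intercore)  # angular FOV (rad)
fov = U * theta_fov                         # lateral FOV at distance U
dx = wavelength * U / D_bundle              # far-distance resolution
n_cells = (fov / dx) ** 2                   # -> (D_bundle / (2 * D_intercore))**2

print(f"theta_FOV = {theta_fov * 1e3:.1f} mrad, FOV = {fov * 1e6:.0f} um, "
      f"resolution cells ~ {n_cells:.0f}")
```

The cell count (~1,250 here) is indeed of the order of a quarter of the 4,500 cores, as the relation $(D_{bundle}/2D_{intercore})^2 = N_{cores}/4$ predicts.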

As demonstrated in Fig. 4, our approach allows the imaging of a planar object placed at a large range of working distances. However, an important limitation of the approach is that in its current implementation it does not possess any depth-sectioning capability. Moreover, for the technique to work, all imaged objects should not extend beyond an axial distance of $\delta z = (2\lambda/\pi)(U/D)^2$, which is the axial decorrelation length of the speckled PSF [28]. Parts of the objects that lie beyond this axial range would produce uncorrelated speckle patterns and would not contribute properly to the calculated autocorrelation. As a result, with spatially incoherent illumination, the current technique is effective only for planar objects, and further work is required to find a solution for the case of three-dimensional targets such as thick tissue. A potentially relevant imaging scenario may be imaging inside hollow organs, where there is a free-space distance between the distal fiber end and the relatively planar target. For optimal performance the working distance needs to be large enough to ensure that the light from each point on the object is collected by all of the bundle's cores (as in wavefront-shaping based approaches [17–19]). This optimal minimum working distance, $U_{min}$, is given by $U_{min} = D_{bundle} \cdot d_{mode}/\lambda = D_{bundle}/NA$, where *λ* is the wavelength, $D_{bundle}$ is the total diameter of the bundle, $d_{mode}$ is the mode field diameter of a single core, and NA is a single core's numerical aperture. In the bundle used in Figs. 2–4 this minimal working distance is $U_{min} = 2.6\,mm$. $U_{min}$ can be reduced by using a smaller-diameter bundle with higher-NA cores [36] (which may also increase the FOV), or by adding a thin scattering layer at the distal end to increase the NA. If a shorter working distance is desired one may simply splice a glass rod spacer to the distal end. Another possible alternative is to acquire speckle images from subapertures of the fiber, which will decrease $U_{min}$ at the expense of resolution. Interestingly, the information contained in several sub-aperture speckle images may be used to retrieve depth information [25]. Axial sectioning and increased resolution may also be possible by the use of structured illumination [16, 37], or temporal gating [38].

Additional possible improvements include computational retrieval by exploiting phase information contained in the image bi-spectrum [39], or in speckle images captured under different aperture masks.
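The axial decorrelation constraint above can be tabulated for a few working distances, using the bundle diameter of Figs. 2–4 (the list of distances is illustrative):

```python
import math

# Axial speckle decorrelation length dz = (2*lambda/pi) * (U/D)**2, which
# bounds the axial extent an object may span for the autocorrelation to form.
wavelength = 532e-9  # m
D = 0.53e-3          # m, bundle diameter (fiber of Figs. 2-4)

def axial_range(U):
    """Axial decorrelation length of the speckled PSF at working distance U."""
    return (2 * wavelength / math.pi) * (U / D) ** 2

for U_mm in (2.6, 5.0, 10.0, 20.0):
    print(f"U = {U_mm:5.1f} mm -> dz = {axial_range(U_mm * 1e-3) * 1e6:8.1f} um")
```

At centimeter-scale working distances the tolerated axial extent is on the order of a hundred microns, which illustrates why the technique is restricted to approximately planar targets.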

Besides these possible improvements, the proposed speckle-correlation approach has the advantage of being extremely simple to implement: it requires no distal optics and no wavefront control, and immediately extends the imaging capabilities of any fiber-bundle-based endoscope when planar objects are considered. Quite uniquely, and contrary to many novel endoscopic imaging techniques, fiber movements are not a hurdle but are beneficial, and are exploited rather than fought against to provide better imaging quality.

## 5. Methods

#### Experimental set-up

The complete experimental set-ups for incoherent and coherent imaging are presented in Appendix A. The fiber bundles used were two different commercial fiber bundles by Schott. The first, which was used for the experiments of Figs. 2–4 and Fig. 6, had 4.5k cores with 7.5 *μ*m inter-core distance, 0.53 mm diameter, and a length of 105 cm. The second, which was used for the coherent and broadband experiments, had 18k cores with 8 *μ*m inter-core distance, a diameter of 1.1 mm, and a length of 48.5 cm (Schott part number: 15333385). The imaged objects were taken from a USAF resolution target (Thorlabs R3L3S1N). In the experiments of Figs. 2–4 and Fig. 6, the objects were illuminated by a narrow-bandwidth spatially incoherent pseudothermal source at a wavelength of 532 nm, based on a Coherent Compass 215M-50 cw laser and a rotating diffuser (see Appendix A). In the coherent experiments the same laser was used without a rotating diffuser. In the experiment of Fig. 5 a Ti:Sapphire laser with a bandwidth of 12nm around a central wavelength of 800nm (Spectra-Physics Mai Tai) and a rotating diffuser was used. The camera used in the experiments of Figs. 2–4 and Fig. 6 was a PCO edge 5.5 (2,560×2,160 pixels). Exposures of 10 milliseconds to 2 seconds were used (typically a few hundred milliseconds). The objects were placed at distances of 5 mm–250 mm from the bundle's input facet and the camera was placed at distances of 5–50 mm from the bundle's output facet, or behind an objective that imaged the light close to the bundle's output facet.

#### Image processing

For the incoherent images, the raw camera image was spatially normalized for the slowly varying envelope of the transmitted light pattern by dividing the raw camera image by a low-pass-filtered version of it that estimated its envelope. The autocorrelation of the processed image was calculated by an inverse Fourier transform of its power spectrum (effective periodic boundary conditions). The resulting autocorrelation was cropped to a rectangular window with dimensions ranging between 40×40 pixels and 400×400 pixels (depending on the imaged object dimensions), and the minimum pixel brightness in this window was background-subtracted from the entire autocorrelation trace. In addition, the intensity of the central pixel of the autocorrelation was taken as equal to one of its neighbors, to reduce the effect of camera hot-pixels. A two-dimensional Tukey window applied on the autocorrelation was found to enhance the phase-retrieval reconstruction fidelity in some of the experiments. For the coherent case, the raw image intensity was thresholded to remove background noise and the image was zero-padded before calculating its Fourier transform.
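The pre-processing steps described above can be sketched as follows; the low-pass cutoff and crop size are illustrative choices, not the exact values used in the experiments:

```python
import numpy as np

def fourier_lowpass(img, keep=8):
    """Crude low-pass envelope estimate: keep only the lowest spatial frequencies."""
    F = np.fft.fft2(img)
    mask = np.zeros(img.shape, dtype=bool)
    mask[:keep, :keep] = mask[:keep, -keep:] = True
    mask[-keep:, :keep] = mask[-keep:, -keep:] = True
    return np.fft.ifft2(np.where(mask, F, 0)).real

def speckle_autocorrelation(raw, crop=101):
    """Envelope-normalize a raw speckle image and return its cropped,
    background-subtracted autocorrelation."""
    raw = raw.astype(float)
    envelope = np.maximum(fourier_lowpass(raw), 1e-12 * raw.max())
    norm = raw / envelope  # remove the slowly varying envelope

    # Autocorrelation via inverse FT of the power spectrum (periodic boundaries).
    spectrum = np.abs(np.fft.fft2(norm - norm.mean())) ** 2
    ac = np.fft.fftshift(np.fft.ifft2(spectrum).real)

    # Crop a window around the center and subtract its minimum as background.
    cy, cx = np.array(ac.shape) // 2
    h = crop // 2
    win = ac[cy - h:cy + h + 1, cx - h:cx + h + 1]
    win = win - win.min()

    # Replace the central value with a neighbor's, to reduce hot-pixel bias.
    win[h, h] = win[h, h - 1]
    return win

ac = speckle_autocorrelation(np.random.default_rng(2).random((256, 256)))
print(ac.shape, ac.min() >= 0)
```

On experimental data, `raw` would be the camera frame and the returned window would be fed to the phase-retrieval algorithm (optionally after applying a Tukey window, as noted above).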

#### Phase-retrieval algorithm block diagram

The phase-retrieval algorithm was implemented according to the recipe provided by Bertolotti and co-authors [23] (for details see Appendix E). The object constraints used were that the object be real and nonnegative, or that it belong to only half of the complex plane, with a positive real part. The algorithms were implemented in Matlab. The reconstructed images were median-filtered, and the images of Fig. 5 were also Fourier-interpolated.
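A simplified error-reduction loop in the spirit of this recipe is sketched below. The published recipe combines several algorithm variants and additional constraints; this single loop is an illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

def phase_retrieve(fourier_mag, n_iter=200):
    """Recover a real, nonnegative object from its Fourier magnitude by
    alternating Fourier-magnitude and object-domain projections."""
    g = rng.random(fourier_mag.shape)               # random initial guess
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = fourier_mag * np.exp(1j * np.angle(G))  # impose measured magnitude
        g = np.fft.ifft2(G).real
        g[g < 0] = 0                                # real, nonnegative object
    return g

# Synthetic test: the "measured" magnitude is the square root of the object's
# power spectrum, i.e. the information the speckle autocorrelation provides.
obj = np.zeros((64, 64))
obj[28:36, 30:34] = 1.0
mag = np.abs(np.fft.fft2(obj))

rec = phase_retrieve(mag)
err = np.abs(np.abs(np.fft.fft2(rec)) - mag).mean() / mag.mean()
print(f"relative Fourier-magnitude error: {err:.3f}")
```

Note that, as stated in the Principle section, the reconstruction is insensitive to lateral shifts (and to a 180° flip), so `rec` may be a translated or inverted copy of the object.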

## Appendix A: Experimental setups

The experimental setup is drawn schematically in Fig. 7. The light source in all the experiments except the broadband imaging was a pseudo-thermal spatially-incoherent source, composed of a Coherent Compass 215M-50 532nm CW laser whose beam was expanded approximately 50× by a home-built telescope, passed through a focusing lens and then through a rapidly rotating diffuser. The light that passed through the object was collected by the fiber bundle and imaged by the camera after forming a speckle pattern, with or without an imaging objective (L2). For the resolution characterization (Fig. 4) the object was replaced with a "point source", i.e. a pinhole with a diameter smaller than could be resolved by the fiber's aperture: $d_{pinhole} < U \cdot \lambda/D_{bundle}$. The imaging plane was translated to different distances from the facet and the speckle grain size was taken as the width of the speckle pattern's autocorrelation. In the broadband imaging setup, the light source was based on a Spectra-Physics Mai-Tai femtosecond laser, and no focusing lens (L1) was used. In the spatially-coherent imaging experiment (Appendix E), the rotating diffuser was removed, the camera imaged the bundle's output facet, and the first lens (L1) focused the light onto the bundle's input facet, to shorten the distance required to attain the far-field condition (Fraunhofer diffraction).

## Appendix B: Imaging in reflection geometry

In this configuration, a diffusive reflecting object was placed on an opaque background and illuminated by spatially incoherent light. The reflected light was collected using the fiber bundle's input facet, placed at a distance of about 2.2cm from the object, and the resulting speckle patterns were imaged using an sCMOS camera. The setup for these experiments is depicted in Fig. 8(a). Sample results for imaging diffusive reflective objects are presented in Figs. 8(b) and 8(c). Given the results of this proof-of-concept reflection-mode imaging modality, there is no inherent limitation in incorporating the light source into the bundle itself, e.g. by coupling the laser light into the fiber cores or to its cladding.

## Appendix C: Analysis of the angular correlation width

An ideal fiber bundle with randomly positioned single-mode cores and no core-to-core coupling has a high angular correlation width, dictated by the NA of its cores. This can be derived rigorously by describing the field at the output facet of the fiber bundle, *E _{out}*(**r**):

*E _{out}*(**r**) = [*E _{in}*(**r**) · *comb*(**r**) · *exp*(*iR*(**r**))] ∗ *F*(**r**)

where *E _{in}* is the field entering the bundle, ∗ is a convolution, · is a multiplication, *F* is the function describing the mode of each core, *comb* is a set of delta functions describing the grid of the core centers, and *R* are the random phases describing the bundle’s phase mixing.

Taking the Fourier transform of the last expression yields the field at the far field of the fiber bundle:

*Ẽ _{out}*(**k**) = *F̃*(**k**) · [*Ẽ _{in}*(**k**) ∗ *S*(**k**)], where *S*(**k**) = *FT*[*comb*(**r**) · *exp*(*iR*(**r**))].

To estimate the angular correlation width of the fiber bundle, we take the input field to be a plane wave, *E _{in}*(**r**) = *exp*(*i***k** _{in}·**r**), and calculate the cross-correlation of its speckle pattern with the speckle patterns created by plane waves of neighboring wavenumbers, **k** _{in} + *δ***k**. Since *Ẽ _{in}*(**k**) ∝ *δ*(**k** − **k** _{in}), we find that in the far field:

*I*(**k′**) = |*F̃*(**k′**)|^{2} · |*S*(**k′** − **k** _{in})|^{2}

where **k′** is a dummy variable. The neighboring-wavenumber speckle pattern is taken with a compensation for its geometrical shift, giving the cross-correlation:

*C*(*δ***k**) = (1/*Ī*^{2}) ∫ *I*(**k′**) *I*_{*δ***k**}(**k′** + *δ***k**) *d*^{2}**k′** ≈ (1/*Ī*^{2}) ∫ |*F̃*(**k′**)|^{2} |*F̃*(**k′** + *δ***k**)|^{2} |*S*(**k′** − **k** _{in})|^{4} *d*^{2}**k′**

where *Ī* is the mean speckle intensity, and the last equality is known as the factorization approximation, following Freund’s derivation [28]. Following the same derivation, instead of continuing the calculation with a specific speckle pattern, one introduces an ensemble average over different speckle-pattern realizations (which here can be considered as averaging over all possible phases, *R*). In this manner, the speckle-pattern intensity is replaced by its average, and the randomization source becomes independent of **k**: 〈|*S*(**k**)|^{2}〉 ≡ 〈*S*^{2}〉. This allows it to be factored out of the integrals, yielding a relatively simple expression for the ensemble-averaged cross-correlation:

〈*C*(*δ***k**)〉 ∝ ∫ |*F̃*(**k′**)|^{2} |*F̃*(**k′** + *δ***k**)|^{2} *d*^{2}**k′**, whose angular width is *δθ _{acw}* ≈ *λ/d _{mode}*

where *λ* is the wavelength and *d _{mode}* is the mode field diameter. This result is in essence identical to the estimation of the “memory effect” width in scattering media [7]: if a point source is placed adjacent to the input of the scattering medium, and the size of the light spot at its output is *L*, the “memory effect” width is estimated by *λ/L*. If one takes a fiber bundle as the scattering medium, *L* becomes the core mode field diameter, as in our result above. An extension of this result to multi-mode cores can be made by using only the mode with the largest field diameter as *F*(**r**). When core-to-core coupling exists, the angular correlation width can be derived by essentially replacing *d _{mode}* with the effective transverse spread of the light on the output facet when the light at the input excites a single core. In addition, if the cores in the bundle are arranged on a periodic lattice, the periodic reciprocal lattice will appear in *S*(**k**), which will limit the FOV, as analyzed in Fig. 6.

To experimentally verify the angular correlation width of the fiber bundle used, we used the setup shown in Fig. 7, replacing the object with a “point source”. Our “point source” consisted of a back-illuminated pinhole with a diameter smaller than that resolvable through the fiber’s aperture: *d _{pinhole}* < *U* · *λ/D _{bundle}*. We then acquired images of the resulting speckle patterns while translating the point source transversely over the object plane, and calculated the correlation between the different speckle patterns. An example of such a measurement is shown in Fig. 9. The FWHM of this trace is in good agreement with the expected angular correlation width dictated by the diameter of a single core, *δθ _{acw}* ≈ *λ/d _{core}*, when this diameter is taken as *d _{core}* ≈ 5.7 *μm*.
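The scaling above can be checked numerically. The following is a minimal 1D sketch (not the paper’s code, and all parameter values are illustrative): the bundle is modeled as Gaussian modes at random core positions with random phases, a plane-wave tilt is sampled at the core centers only (the comb term in the derivation), and the shift-compensated correlation between far-field speckle patterns is computed. The correlation decays on a scale set by the mode size, not by the bundle extent.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D grid: positions in microns, transverse wavenumbers in rad/micron
M, dx = 4096, 0.1
x = np.arange(M) * dx
k = 2 * np.pi * np.fft.fftfreq(M, dx)
dk_grid = 2 * np.pi / (M * dx)          # wavenumber spacing of the FFT axis

# Bundle model: random core positions x_j, Gaussian mode F, random phases R_j
n_cores, w = 300, 1.5                   # w ~ mode field radius (d_mode ~ 3 um)
xj = rng.uniform(0, M * dx, n_cores)
Rj = rng.uniform(0, 2 * np.pi, n_cores)

def far_field_intensity(dk_in):
    """|FT[E_out]|^2 for a tilted plane-wave input; the tilt phase is sampled
    only at the core centers (the comb term in the derivation)."""
    E = np.zeros(M, complex)
    for xc, ph in zip(xj, Rj):
        E += np.exp(-((x - xc) / w) ** 2) * np.exp(1j * (ph + dk_in * xc))
    return np.abs(np.fft.fft(E)) ** 2

I0 = far_field_intensity(0.0)
sel = np.abs(k) < 2.5                   # region where the mode envelope is non-negligible

def shifted_correlation(n):
    """Correlation between tilted and untilted speckle patterns, after
    compensating the geometrical (memory-effect) shift of n wavenumber samples."""
    I1 = np.roll(far_field_intensity(n * dk_grid), -n)
    a, b = I0[sel] - I0[sel].mean(), I1[sel] - I1[sel].mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

c_small = shifted_correlation(16)       # tilt well inside the correlation width
c_large = shifted_correlation(200)      # tilt far beyond the correlation width
print(c_small, c_large)
```

Increasing the mode radius `w` narrows the decorrelation range in `δk`, while changing the spread of the core positions `xj` leaves it essentially unchanged, mirroring the *δθ _{acw}* ≈ *λ/d _{mode}* result.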

## Appendix D: Enhanced speckle ensemble averaging

We have focused on single-shot imaging, which is robust, simple, and useful in many scenarios. However, as mentioned in the discussion, increased ensemble averaging can be obtained very simply by averaging the autocorrelation over multiple shots of uncorrelated speckle patterns. This improves the statistical signal-to-noise ratio, as seen in Fig. 5(c). Uncorrelated speckle patterns can be introduced in many ways, including bending the fiber, using two orthogonal polarizations (see Fig. 10), spectral filtering, and more.
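As a toy illustration of this averaging (synthetic 1D data, not the experimental pipeline; the object and all parameters are invented for the example), the sketch below forms camera frames as convolutions of a three-point object with independent speckle patterns, and compares the background fluctuations of a single-shot autocorrelation with those of a 50-shot average:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 512

def speckle_intensity():
    """Fully developed speckle: random phases across a limited 64-pixel aperture."""
    pupil = np.zeros(M, complex)
    pupil[:64] = np.exp(2j * np.pi * rng.random(64))
    return np.abs(np.fft.fft(pupil)) ** 2

def normalized_autocorr(sig):
    """Circular autocorrelation via the Wiener-Khinchin theorem, peak-normalized."""
    sig = sig - sig.mean()
    ac = np.fft.ifft(np.abs(np.fft.fft(sig)) ** 2).real
    return ac / ac[0]

obj = np.zeros(M)
obj[[200, 230, 260]] = [1.0, 0.6, 0.8]      # simple three-point object

def shot_autocorr():
    """One camera frame I = I_obj * S (circular convolution) and its autocorrelation."""
    S = speckle_intensity()
    I_cam = np.fft.ifft(np.fft.fft(obj) * np.fft.fft(S)).real
    return normalized_autocorr(I_cam)

single = shot_autocorr()
avg = np.mean([shot_autocorr() for _ in range(50)], axis=0)

bg = slice(100, 150)                        # lags where the object autocorrelation is zero
rms_single = np.sqrt(np.mean(single[bg] ** 2))
rms_avg = np.sqrt(np.mean(avg[bg] ** 2))
print(rms_single, rms_avg, avg[30])
```

The object’s autocorrelation peaks (e.g. at lag 30) survive in both traces, while the statistical background is visibly suppressed in the 50-shot average.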

## Appendix E: Image retrieval algorithm (phase retrieval)

The object image is retrieved from its autocorrelation, which is calculated from the measured scattered-light camera image. Given the processed (smoothed and envelope-corrected) camera image *I*(*x*, *y*) (see Methods), the scattered-light autocorrelation, *R*(*x*, *y*), is calculated by an inverse two-dimensional Fourier transform of its power spectrum, building on the Wiener-Khinchin theorem:

*R*(*x*, *y*) = *FT*^{−1}{|*FT*[*I*(*x*, *y*)]|^{2}}

According to the Wiener-Khinchin theorem, the object’s power spectrum, *S _{meas}*(*k _{x}*, *k _{y}*), is the Fourier-transform amplitude of its autocorrelation. We therefore calculate the object’s power spectrum by performing a 2D Fourier transform of the central part of this autocorrelation, after windowing with a windowing function *W*(*x*, *y*) (e.g. a Tukey window, see Methods):

*S _{meas}*(*k _{x}*, *k _{y}*) = |*FT*[*W*(*x*, *y*) · *R′*(*x*, *y*)]|

where *R′*(*x*, *y*) is the background-subtracted autocorrelation trace *R*(*x*, *y*) (for more information about the pre-processing of the autocorrelation, see Methods). At this point, the ’only’ missing information required to reconstruct the object’s image is the phase of its 2D Fourier transform, which is found by an iterative Fienup-type phase-retrieval algorithm [30]. The phase-retrieval algorithm was implemented according to the recipe given by Bertolotti et al. [23]. A block diagram of this algorithm is given in Fig. 11. This modified Gerchberg-Saxton algorithm starts with an initial guess for the object pattern, *g*_{1}(*x*, *y*), chosen as a random pattern in our experiments. This initial guess is fed into the algorithm, which performs the following four steps at its *k*^{th} iteration:

1. *G _{k}*(*k _{x}*, *k _{y}*) = *FT*{*g _{k}*(*x*, *y*)}
2. *θ _{k}*(*k _{x}*, *k _{y}*) = *arg*{*G _{k}*(*k _{x}*, *k _{y}*)}
3. *G′ _{k}*(*k _{x}*, *k _{y}*) = (*S _{meas}*(*k _{x}*, *k _{y}*))^{1/2} · *exp*(*iθ _{k}*(*k _{x}*, *k _{y}*))
4. *g′ _{k}*(*x*, *y*) = *FT*^{−1}{*G′ _{k}*(*k _{x}*, *k _{y}*)}

where the measured information on the object’s autocorrelation is used in the third step.

The input for the next (*k* + 1) iteration, *g*_{k+1}(*x*, *y*), is obtained from the output of the *k*^{th} iteration, *g′ _{k}*(*x*, *y*), by imposing physical constraints on the object image, it being either real and non-negative or limited to half of the complex plane in our implementations. Following Bertolotti et al. [23], we used two implementations of these constraints, termed the “hybrid input-output (HIO)” and the “error reduction” algorithms, as pioneered by Fienup [30]. These algorithms are described by:

*g*_{k+1}(*x*, *y*) = *g′ _{k}*(*x*, *y*) for (*x*, *y*) ∉ *γ* (both algorithms)
*g*_{k+1}(*x*, *y*) = 0 for (*x*, *y*) ∈ *γ* (error reduction)
*g*_{k+1}(*x*, *y*) = *g _{k}*(*x*, *y*) − *βg′ _{k}*(*x*, *y*) for (*x*, *y*) ∈ *γ* (HIO)

where *γ* is the set of points (*x*, *y*) at which *g′ _{k}*(*x*, *y*) violates the physical constraints, and *β* is a feedback parameter that controls the convergence properties of the algorithm. Following Bertolotti et al. [23], first a few thousand iterations of the hybrid input-output (HIO) algorithm [30] were run with a β factor decreasing from *β* = 2 to *β* = 0 in steps of 0.04. For each *β* value, 40 iterations of the algorithm were performed (i.e., a total of 2000 iterations). The result of the HIO algorithm was fed as input to an additional 40 iterations of the ’error reduction’ algorithm to obtain the final result. Importantly, to ensure faithful reconstruction of each image with these basic phase-retrieval algorithms, several runs of the algorithm (from 20 up to 400, typically 50) were performed with different random initial conditions, and an error metric was assigned to each reconstruction. The error metric was defined as the mean-square difference between the reconstruction’s Fourier spectrum and the Fourier modulus of the measured autocorrelation, *S _{meas}*(*k _{x}*, *k _{y}*). Ideally, the lowest-error reconstruction should be the optimal one. However, during our studies with these basic algorithms we found that even though these reconstructions were satisfactory (see example in Fig. 12), they were not always the optimal reconstructions when compared to the original known object. In addition, we note that the results of these phase-retrieval algorithms are known to be sensitive to the preprocessing of the autocorrelation (background subtraction, normalization, windowing function, size of support, and smoothing kernel). However, existing superior algorithms are expected to substantially improve reconstruction fidelity and convergence.

## Appendix F: Extension to coherent imaging

When spatially coherently illuminated objects placed in the far field of the bundle are considered, one can follow a similar derivation as for the incoherent case (Eqs. (1)–(2)), replacing all of the light-intensity terms with complex fields:

*E*(*x*) = *E _{obj}*(*x*) ∗ *S*(*x*)

where *E*(*x*) is the field at the imaging plane, *E _{obj}*(*x*) is the light field of the object, and *S*(*x*) is the complex coherent speckle-pattern field of the system. Taking the autocorrelation of the resulting field *E*(*x*) gives:

*E*(*x*) ★ *E*(*x*) = [*E _{obj}*(*x*) ★ *E _{obj}*(*x*)] ∗ [*S*(*x*) ★ *S*(*x*)]

where the speckle-field autocorrelation, *S*(*x*) ★ *S*(*x*), is a sharply-peaked function with the same width as the autocorrelation of the speckle’s intensity, but without its constant background. Once more, this enables us to estimate the object’s complex autocorrelation by calculating the autocorrelation of the output field. Acquiring the output field requires interferometric measurements (e.g. by off-axis holography), which would reduce the robustness and simplicity of the incoherent method. However, building on the Wiener-Khinchin theorem, one can acquire the same information about the field autocorrelation without interferometric detection, since the Fourier transform of the complex output-field autocorrelation is its power spectrum.

Since the bundle’s input facet is located at the far field of the object, and provided no optical cross-talk exists between the bundle cores, the image on the bundle’s output facet will show a pixelated version of the power spectrum of the object itself (i.e. the intensity of its diffraction pattern). A Fourier transform of the facet-image intensity gives the object’s complex autocorrelation. The complex object can then be reconstructed with the same phase-retrieval algorithm used before (exactly as done in x-ray coherent diffractive imaging [32]). To demonstrate this, an image of the output facet was acquired (Fig. 13(a)) when the object was placed at a distance from the bundle’s input facet. We used an object with a size of the order of 0.5 mm and an illumination wavelength of 532 nm. For these parameters the far-field condition is met at a distance of about 1 meter. Instead of expanding the size of our setup, the results shown in Fig. 13 were taken with the object illuminated by spherically converging coherent illumination, obtained with a simple focusing lens (see Appendix A). The lens focused the light onto the bundle’s input facet, thereby creating on it the Fraunhofer diffraction pattern of the object. Thus, as the results of Fig. 13 demonstrate, spatially incoherent illumination is not a strict requirement of speckle-correlation-based imaging.

In scenarios where the bundle’s input facet is located in the near field of the object, the facet intensity pattern is the intensity of the Fresnel diffraction pattern of the object, and the object can be reconstructed from this pattern [33]. However, in order to reconstruct objects from these patterns, the distance between the fiber and the object should be known, or be found by e.g. multiple reconstruction attempts at different distances, in a similar fashion to a computational ’auto-focus’ mechanism.
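The far-field relation used above (facet intensity → complex object autocorrelation) can be verified numerically. This sketch uses a synthetic complex object on a square grid; both the object and its size are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic complex object field on a 64x64 grid
E_obj = np.zeros((64, 64), complex)
E_obj[20:30, 20:30] = np.exp(1j * rng.uniform(0, 2 * np.pi, (10, 10)))

# Intensity of the Fraunhofer diffraction pattern (no phase is measured)
I_far = np.abs(np.fft.fft2(E_obj)) ** 2

# Wiener-Khinchin: the FT of the far-field intensity is the complex autocorrelation
ac = np.fft.ifft2(I_far)

# Direct check of one lag: C(d) = sum_x E(x + d) E*(x), here d = (1, 0)
direct = np.sum(np.roll(E_obj, -1, axis=0) * np.conj(E_obj))
print(np.allclose(ac[1, 0], direct))
```

The zero-lag value `ac[0, 0]` equals the total object intensity (Parseval), and every other lag matches the directly computed circular field autocorrelation, with no interferometric phase measurement involved.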

## Appendix G: Effect of number of cores on imaging resolution

Here we study the effect of a reduced number of cores in the bundle, *N _{cores}*, on the imaging resolution of our technique. The resolution of our technique is determined by the size of a speckle grain (the diffraction limit), which at large enough working distances (*U* > *U _{min}*) is determined only by the diameter of the bundle: *δx* ≈ *U* · *λ/D _{bundle}*. A reduced number of cores will therefore not affect the resolution but will only reduce the number of speckles, *N _{speckles}* (for single-mode cores, *N _{speckles}* = *N _{cores}*). While not affecting the resolution, a lower number of speckles will reduce the contrast and signal-to-noise ratio (SNR) of the measured autocorrelation [24], as can be seen in the numerical results displayed in Fig. 14 below. The reduced autocorrelation SNR will effectively limit the complexity (number of bright resolvable resolution cells) of the imaged objects [24] and/or the field of view of the approach, as analyzed in the manuscript text referring to Fig. 6.
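The claim that the speckle-grain size (and hence the resolution) is set by the aperture diameter and not by the number of cores can be illustrated with a short 1D simulation (pixel units; all parameters are illustrative, not those of the experimental bundle). Cores are random pixels inside an aperture of *D* pixels, and the half-width of the far-field intensity autocorrelation is estimated from an average over realizations:

```python
import numpy as np

rng = np.random.default_rng(4)
M = 4096  # number of far-field samples

def speckle_halfwidth(n_cores, D, realizations=20):
    """Half-width (in pixels, correlation = 1/2) of the far-field speckle
    autocorrelation for n_cores random single-mode cores in a D-pixel aperture."""
    ac_sum = np.zeros(M)
    for _ in range(realizations):
        pupil = np.zeros(M, complex)
        cores = rng.choice(D, size=n_cores, replace=False)
        pupil[cores] = np.exp(2j * np.pi * rng.random(n_cores))
        I = np.abs(np.fft.fft(pupil)) ** 2
        I -= I.mean()
        ac_sum += np.fft.ifft(np.abs(np.fft.fft(I)) ** 2).real
    ac = ac_sum / ac_sum[0]
    return int(np.argmax(ac < 0.5))         # first lag below half

w_few = speckle_halfwidth(64, 256)          # sparse bundle, diameter D = 256
w_many = speckle_halfwidth(256, 256)        # dense bundle, same diameter
w_small = speckle_halfwidth(64, 64)         # smaller-diameter bundle
print(w_few, w_many, w_small)
```

Reducing the number of cores at a fixed diameter leaves the grain size (and hence the resolution) essentially unchanged, while shrinking the diameter coarsens it; with fewer cores, only the statistical quality of the autocorrelation degrades, consistent with Fig. 14.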

## Funding

This work was supported by the LabEx ENS-ICFP: ANR-10-LABX-0010/ANR-10-IDEX-0001-02 PSL; the CNRS/Weizmann NaBi European Associated Laboratory; The Aix-Marseille University/Hebrew University of Jerusalem collaborative-research joint program. The work was funded by the European Research Council (grants no. 278025 and 677909). O.K. was supported by an Azrieli Faculty Fellowship and the Marie Curie Intra-European Fellowship for career development (IEF).

## Acknowledgments

The authors thank Mickael Mounaix and Cathie Ventalon for their valuable help.

## References and links

**1. **B. A. Flusberg, E. D. Cocker, W. Piyawattanametha, J. C. Jung, E. L. M. Cheung, and M. J. Schnitzer, “Fiber-optic fluorescence imaging,” Nat. Methods **2**(12), 941–950 (2005). [CrossRef] [PubMed]

**2. **G. Oh, E. Chung, and S. H. Yun, “Optical fibers for high-resolution in vivo microendoscopic fluorescence imaging,” Optical Fiber Technology **19**(6), 760–771 (2013). [CrossRef]

**3. **S. M. Kolenderska, O. Katz, M. Fink, and S. Gigan, “Scanning-free imaging through a single fiber by random spatio-spectral encoding,” Opt. Lett. **40**(4), 534–537 (2015). [CrossRef] [PubMed]

**4. **R. Barankov and J. Mertz, “High-throughput imaging of self-luminous objects through a single optical fiber,” Nat. Commun. **5**, 5581 (2014). [CrossRef]

**5. **G. Ducourthial, P. Leclerc, T. Mansuryan, M. Fabert, J. Brevier, R. Habert, F. Braud, R. Batrin, C. Vever-Bizet, G. Bourg-Heckly, L. Thiberville, A. Druilhe, A. Kudlinski, and F. Louradour, “Development of a real-time flexible multiphoton microendoscope for label-free imaging in a live animal,” Sci. Rep. **5**, 18303 (2015). [CrossRef] [PubMed]

**6. **J. M. Jabbour, M. A. Saldua, J. N. Bixler, and K. C. Maitland, “Confocal endomicroscopy: instrumentation and medical applications,” Ann. Biomed. Eng. **40**(2), 378–397 (2012). [CrossRef]

**7. **A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics **6**, 283–292 (2012). [CrossRef]

**8. **R. D. Leonardo and S. Bianchi, “Hologram transmission through multi-mode optical fibers,” Opt. Express **19**(1), 247–254 (2011). [CrossRef] [PubMed]

**9. **S. Bianchi and R. D. Leonardo, “A multi-mode fiber probe for holographic micromanipulation and microscopy,” Lab Chip **12**(3), 635–639 (2012). [CrossRef]

**10. **T. Čižmár and K. Dholakia, “Exploiting multimode waveguides for pure fibre-based imaging,” Nat. Commun. **3**, 1027 (2012). [CrossRef] [PubMed]

**11. **I. N. Papadopoulos, S. Farahi, C. Moser, and D. Psaltis, “Focusing and scanning light through a multimode optical fiber using digital phase conjugation,” Opt. Express **20**(10), 10583–10590 (2012). [CrossRef] [PubMed]

**12. **Y. Choi, C. Yoon, M. Kim, T. D. Yang, C. Fang-Yen, R. R. Dasari, K. J. Lee, and W. Choi, “Scanner-free and wide-field endoscopic imaging by using a single multimode optical fiber,” Phys. Rev. Lett. **109**, 203901 (2012). [CrossRef] [PubMed]

**13. **M. Plöschner, T. Tyc, and T. Čižmár, “Seeing through chaos in multimode fibres,” Nat. Photonics **9**, 529–535 (2015). [CrossRef]

**14. **S. Rosen, D. Gilboa, O. Katz, and Y. Silberberg, “Focusing and scanning through flexible multimode fibers without access to the distal end,” arXiv:1506.08586 (2015).

**15. ** Standard specifications for image fibres (Fujikura, 2016), http://www.fujikura.co.uk/media/18438/image%20fibre.PDF

**16. **N. Bozinovic, C. Ventalon, T. Ford, and J. Mertz, “Fluorescence endomicroscopy with structured illumination,” Opt. Express **16**(11), 8016–8025 (2008). [CrossRef] [PubMed]

**17. **A. J. Thompson, C. Paterson, M. A. A. Neil, C. Dunsby, and P. M. W. French, “Adaptive phase compensation for ultracompact laser scanning endomicroscopy,” Opt. Lett. **36**(9), 1707–1709 (2011). [CrossRef] [PubMed]

**18. **E. R. Andresen, G. Bouwmans, S. Monneret, and H. Rigneault, “Toward endoscopes with no distal optics: video-rate scanning microscopy through a fiber bundle,” Opt. Lett. **38**(5), 609–611 (2013). [CrossRef] [PubMed]

**19. **E. R. Andresen, G. Bouwmans, S. Monneret, and H. Rigneault, “Two-photon lensless endoscope,” Opt. Express **21**(18), 20713–20721 (2013).

**20. **D. Kim, J. Moon, M. Kim, T. D. Yang, J. Kim, E. Chung, and W. Choi, “Toward a miniature endomicroscope: pixelation-free and diffraction-limited imaging through a fiber bundle,” Opt. Lett. **39**(7), 1921–1924 (2014). [CrossRef] [PubMed]

**21. **N. Stasio, D. B. Conkey, C. Moser, and D. Psaltis, “Light control in a multicore fiber using the memory effect,” Opt. Express **23**(23), 30532–30544 (2015). [CrossRef] [PubMed]

**22. **N. Stasio, C. Moser, and D. Psaltis, “Calibration-free imaging through a multicore fiber using speckle scanning microscopy,” Opt. Lett. **41**(13), 3078–3081 (2016).

**23. **J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature **491**, 232–234 (2012). [CrossRef] [PubMed]

**24. **O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics **8**, 784–790 (2014). [CrossRef]

**25. **K. T. Takasaki and J. W. Fleischer, “Phase-space measurement for depth-resolved memory-effect imaging,” Opt. Express **22**(25), 31426–31433 (2014). [CrossRef]

**26. **A. Labeyrie, “Attainment of diffraction limited resolution in large telescopes by fourier analysing speckle patterns in star images,” Astron. Astrophys. **6**, 85–87 (1970).

**27. **R. K. Tyson, *Principles of Adaptive Optics*, 3rd ed. (Academic, 2010). [CrossRef]

**28. **I. Freund, “Looking through walls and around corners,” Physica A **168**(1), 49–65 (1990). [CrossRef]

**29. **I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. **61**, 2328 (1988). [CrossRef] [PubMed]

**30. **J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. **21**(15), 2758–2769 (1982). [CrossRef] [PubMed]

**31. **B. Redding and H. Cao, “Using a multimode fiber as a high-resolution, low-loss spectrometer,” Opt. Lett. **37**(16), 3384–3386 (2012). [CrossRef]

**32. **H. N. Chapman and K. A. Nugent, “Coherent lensless x-ray imaging,” Nat. Photonics **4**, 833–839 (2010). [CrossRef]

**33. **T. Pitts and J. F. Greenleaf, “Fresnel transform phase retrieval from magnitude,” IEEE Trans. Ultrason. Ferroelect. Freq. Control **50**(8), 1035–1045 (2003). [CrossRef]

**34. **E. Edrei and G. Scarcelli, “Optical imaging through dynamic turbid media using the Fourier-domain shower-curtain effect,” Optica **3**(1), 71–74 (2016). [CrossRef] [PubMed]

**35. **S. Sivankutty, V. Tsvirkun, G. Bouwmans, D. Kogan, D. Oron, E. R. Andresen, and H. Rigneault, “Extended field-of-view in a lensless endoscope using an aperiodic multicore fiber,” arXiv:1606.08169 (2016).

**36. **S. Heyvaert, H. Ottevaere, I. Kujawa, R. Buczynski, and H. Thienpont, “Numerical characterization of an ultra-high na coherent fiber bundle part ii: point spread function analysis,” Opt. Express **21**(21), 25403–25417 (2013). [CrossRef] [PubMed]

**37. **C. Ventalon, J. Mertz, and V. Emilani, “Depth encoding with lensless structured illumination fluorescence micro-endoscopy,” in Focus On Microscopy (2009).

**38. **E. Beaurepaire, A. C. Boccara, M. Lebec, L. Blanchot, and H. Saint-Jalmes, “Full field optical coherence microscopy,” Opt. Lett. **23**(4), 244–246 (1998).

**39. **J. C. Dainty, *Laser Speckle and Related Phenomena* (Springer, 1984).