Abstract

Flexible fiber-optic endoscopes provide a solution for imaging at depths beyond the reach of conventional microscopes. Current endoscopes require focusing and/or scanning mechanisms at the distal end, which limit miniaturization, frame rate, and field of view. Alternative wavefront-shaping based lensless solutions are extremely sensitive to fiber bending. We present a lensless, bend-insensitive, single-shot imaging approach based on speckle correlations in fiber bundles that does not require wavefront shaping. Our approach computationally retrieves the target image by analyzing a single camera frame, exploiting phase information that is inherently preserved in propagation through conventional fiber bundles. Unlike conventional fiber-based imaging, planar objects can be imaged at variable working distances, the resulting image is unpixelated and diffraction-limited, and miniaturization is limited only by the fiber diameter.

© 2016 Optical Society of America

1. Introduction

Flexible optical endoscopes are an important tool in biomedical investigations and clinical diagnostics. They enable imaging at depths where scattering prevents noninvasive microscopic investigation. An ideal microendoscopic probe should be flexible, allow real-time diffraction-limited imaging at various working distances from its distal end, and maintain a minimal cross-sectional footprint [1, 2].

Single-mode fibers (SMF) can be used as the smallest-diameter light guides for endoscopic imaging. However, in order to obtain two-dimensional (2D) images, a mechanical scanning head [1, 2] or a spectral disperser [3, 4] must be mounted at the distal end of the fiber, complicating endoscope fabrication and sacrificing frame rate, probe size, or field of view (FOV). For example, mechanical scanning heads have a diameter of the order of 1 mm [5], and the 2D spectral dispersers of Refs. [3, 4] have cm-scale dimensions. While GRIN lens solutions with diameters as small as 350 microns have been reported [6], their FOV is usually smaller than their diameter (70 micrometers in the above example). In addition, GRIN lenses suffer from aberrations and a fixed working distance, and when coupled to a fiber bundle they exhibit the typical pixelation artifacts of such bundles. Alternatively, the different modes of a multimode fiber (MMF) can deliver 2D image information, if the complex phase randomization and mode mixing are measured and compensated for, computationally or via wavefront shaping [7–14]. Unfortunately, the extreme sensitivity of the wavefront correction to any movement or bending of the fiber necessitates direct access to the distal end for recalibration, or precise knowledge of the bent shape [13].

A robust, widely used, and commercially available type of imaging endoscope is based on fiber bundles, constructed from thousands of individual cores closely packed together, with each core carrying the information of one image pixel. For example, commercially available bundles pack 3,000 cores in a total outer diameter of 250 microns [15]. Imaging is performed in a straightforward manner if the target object is positioned immediately adjacent to the bundle’s facet (Fig. 1(a)) [1, 16]. While straightforward to implement, conventional fiber bundle endoscopes suffer from limited resolution and pixelation artifacts dictated by the individual core and cladding diameters, and from a fixed working distance, which locates the imaging plane directly at the bundle’s facet unless distal optics are added. When the object is placed away from this fixed image plane, only a blurred, seemingly information-less image appears at the proximal facet (Fig. 1(b)).

Fig. 1 Conventional vs. speckle-correlations based fiber bundle endoscopy: (a), In a conventional fiber bundle endoscope, the intensity image of an object placed adjacent to the input facet is transferred to the output facet. (b), When the object is placed at a distance, U, from the input facet, a blurred and seemingly information-less image is formed at the output facet. However, even though two different point sources at the object plane (red and blue) produce indistinguishable patterns at the output facet, the relative tilt between their wavefronts results in a spatial shift of the resulting speckle patterns when observed at a small distance, V, from the output facet. (c), Speckle-based imaging: For an extended object, the image of the light intensity at the distance V from the output facet is the sum of many shifted speckle patterns, and its autocorrelation provides an estimate to the object’s autocorrelation. The diffraction-limited object image is retrieved from this autocorrelation via a phase-retrieval algorithm.

The fundamental reason for the limitation to a fixed imaging plane is that spatial phase information is scrambled upon propagation through the bundle, due to the different random core-to-core phase delays. Although these phase distortions can be measured and compensated for using a spatial light modulator (SLM) [17–22], the sensitivity of the phase correction to fiber bending severely limits applicability, in a similar manner to the case of MMF. As a result, most conventional bundle imaging techniques work under the assumption that phase information is lost and thus rely on intensity-only information transmission by each core.

Despite these seemingly fundamental restrictions, here we take advantage of important spatial phase information that is nevertheless retained in the speckle patterns produced by propagation through any fiber bundle. We demonstrate that this information can be used to overcome both the fixed working distance and the bend-sensitivity limitations of current approaches. Moreover, we demonstrate that all that is required to utilize this information is to computationally analyze a single image of the speckle intensity pattern that is transmitted through the fiber. We thus present a simple approach that performs widefield imaging of planar objects at a large range of working distances from a bare fiber bundle, using only a conventional camera, without any phase correction or pre-calibration, or any distal optics. Our single-shot, diffraction-limited, and pixelation-free imaging technique is based on exploiting inherent angular speckle correlations, and is inspired by recent advancements in imaging through opaque scattering barriers [23–25], and by methods used to overcome atmospheric turbulence in astronomy [26].

2. Principle

The underlying principle of our technique is presented in Fig. 1. The simplest scenario is that of a bundle composed of single-mode cores (the case of MMF cores is treated below). Light propagation in such a bundle is characterized by the fact that each core, i, preserves the intensity of the light coupled to it, but adds a different, random phase to the transmitted light. Therefore, a point source placed at a distance U from the bundle input facet (the object plane) will produce a speckle pattern at a distance V from the bundle output facet (Fig. 1(b)), due to the random phase pattern added to the otherwise spherical wavefront. A second point source placed at the same object plane, but shifted in transverse position by a distance δX relative to the first point source, will produce a nearly identical speckle pattern at the image plane, but shifted by δY = δX · V/U, due to the angular tilt of δX/U of the input wavefront (Fig. 1(b)). Thus, within the angular range in which the two speckle patterns are highly correlated, they represent a shift-invariant point spread function (PSF) of the fiber bundle. This angular range is analogous to the isoplanatic patch in adaptive optics [27], and to the angular ’memory-effect’ for speckle correlations in scattering media [28, 29]. For an ideal fiber bundle, with randomly positioned single-mode cores and no core-to-core coupling, the angular correlation range is essentially the core’s numerical aperture (NA) (see derivation in Appendix C and discussion below).

As a direct result, when an object that is contained within this angular range is illuminated by spatially incoherent illumination, the light from every point on the object forms correlated, but shifted, speckle patterns at a distance from the output facet (Fig. 1(c)). The image of the light intensity at this plane will be the intensity sum of these identical shifted speckle patterns. Building on the recent results in imaging through opaque barriers [24], the image of the object itself can be computationally recovered from the autocorrelation of the speckle intensity image (Fig. 1(c)). The mathematical justification for this result is straightforward: due to the angular speckle correlations, the image of the light intensity measured far enough from the output facet can be described by a simple convolution between the object’s intensity pattern O(r) and the single (unknown) speckle pattern PSF(r) [24]:

$$I(\mathbf{r}) = O(\mathbf{r}) \ast \mathrm{PSF}(\mathbf{r}) \tag{1}$$
Taking the autocorrelation of this intensity image I(r) gives:
$$I(\mathbf{r}) \star I(\mathbf{r}) = \left[O(\mathbf{r}) \star O(\mathbf{r})\right] \ast \left[\mathrm{PSF}(\mathbf{r}) \star \mathrm{PSF}(\mathbf{r})\right] \tag{2}$$
Since the autocorrelation of a random speckle pattern PSF(r) ★ PSF(r) is a sharply-peaked function having a peak with a width of a diffraction limited spot, the autocorrelation of the raw speckle image, I(r) ★ I(r), will approximate the autocorrelation of the object itself (up to a statistical average over the number of captured speckle grains and a constant background term [24], see discussion). Thus, the autocorrelation of a single camera image of the light propagated through the bundle is essentially identical to the target object’s autocorrelation, and one can directly reconstruct the original object from this autocorrelation using a phase retrieval algorithm [23, 24, 30] (Fig. 1(c)). While the object’s image is reconstructed, its distance and lateral position remain unknown due to the insensitivity of the autocorrelation to lateral shifts, and its orientation remains unknown due to fiber bending.
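
As a quick numerical sanity check of Eqs. (1) and (2), the short Python sketch below convolves a synthetic point-like object with a surrogate random speckle PSF and compares the autocorrelation of the resulting "camera" image against the object's own autocorrelation peaks. The object, the pupil radius, and all array sizes are arbitrary illustrative choices, not the experimental parameters.

```python
# Minimal sketch of Eqs. (1)-(2): an incoherent object convolved with a random
# speckle PSF, and the autocorrelation computed via the Wiener-Khinchin theorem.
import numpy as np

rng = np.random.default_rng(0)
N = 256

# Synthetic incoherent object: three bright points on a dark background
obj = np.zeros((N, N))
obj[100, 120] = obj[130, 140] = obj[110, 160] = 1.0

# Surrogate speckle PSF: random phases over a circular pupil (the bundle aperture)
fx, fy = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N))
pupil = (np.hypot(fx, fy) < 0.1).astype(float)
field = np.fft.ifft2(pupil * np.exp(2j * np.pi * rng.random((N, N))))
psf = np.abs(field) ** 2

def autocorr(img):
    """Autocorrelation via the power spectrum (periodic boundary conditions)."""
    img = img - img.mean()
    return np.fft.fftshift(np.real(np.fft.ifft2(np.abs(np.fft.fft2(img)) ** 2)))

# Eq. (1): camera image = object convolved with the speckled PSF
camera = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)))

# Eq. (2): the image autocorrelation reproduces the object's autocorrelation peaks;
# compare the lag between two object points (30, 20) with an empty background lag.
ac = autocorr(camera)
print(ac[N // 2 + 30, N // 2 + 20], ac[N // 2 + 90, N // 2 + 90])
```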

3. Results

3.1. Single-shot imaging via speckle correlations

To test the proposed speckle-based single-shot approach we performed several proof-of-concept experiments, whose results are presented in Figs. 2 and 3, using the setup depicted in Fig. 1(c). In these experiments a target object illuminated by a spatially incoherent laser source was placed at various distances (5–20 mm) from a 530μm-diameter fiber bundle having 4500 cores (see Methods and Appendix A). The image of the object was reconstructed from a single image of the light pattern measured at a small distance (several millimeters) from the bundle’s proximal end.

Fig. 2 Experimental demonstration and comparison to conventional bundle imaging. (a), conventional imaging through a fiber bundle when the object is placed at U=0mm. (b), The original object. (c), conventional imaging through a fiber bundle when the object is placed at U=8.5mm from the bundle’s input facet. No imaging information is directly obtainable. (d), Same situation as in (c) but using the presented speckle-based approach; All scalebars are 100μm.

Fig. 3 Experimental single-shot imaging of various objects at different working distances, U: (a), Raw camera image. (b), Autocorrelation of (a). (c), Object reconstruction from (b). (d), Original object. (e–l), Same as (a–d) for different objects from the 1951 USAF target. In (a–h) U = 161mm, in (i–l) U = 65mm. Scalebars: (a,e)=10mm, (i)=5mm, (b,c,d,f,g,h)=0.5mm, (j,k,l)=0.25mm.

Figure 2 gives a comparison between our technique and conventional lensless imaging through a fiber bundle. When the object is located at a distance from the bundle’s distal facet, no information is obtainable in the conventional approach (Fig. 2(c)), whereas in our technique the object’s image is retrieved with diffraction-limited resolution (Fig. 2(d)). Several additional experimental examples are presented in Fig. 3, which presents the raw camera speckle images, their autocorrelations, and the images reconstructed from these autocorrelations, side-by-side with the original objects. The technique is not restricted to transmission geometry, and works equally well in reflection geometry, i.e. when the light source is placed adjacent to the bundle end (and in principle can be provided by the fiber itself), as is demonstrated in Appendix B.

The resolution of this speckle-based technique is dictated at far enough working-distances (see discussion) by the speckle grain dimensions, which are diffraction-limited [24]. Thus it provides the same diffraction-limited resolution as an ideal aberration-free optical system having the same aperture diameter [26]. In Figure 4 we experimentally characterize this imaging resolution as a function of the object’s distance from the bundle’s facet, U, (see Methods), and compare it to the resolution of a conventional bundle based endoscope with and without distal optics. The speckle grain size, δx, (and the resolution) can be estimated by:

$$\delta x \approx \left[\left(\frac{\lambda U}{D_{bundle}}\right)^{2} + \left(\frac{\lambda}{NA}\right)^{2}\right]^{1/2} \tag{3}$$
where λ is the wavelength, Dbundle is the fiber bundle’s diameter, U is the distance between the object and the fiber facet, and NA is the numerical aperture of a single core. It can be seen that the gradually varying resolution provides a large range of working distances, as we demonstrate in Figs. 4(d)–4(h). Given that the minimum working distance for optimum performance is Umin = Dbundle/NA (see discussion), the minimal resolution is δxmin = √2·λ/NA.
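
For orientation, the snippet below evaluates this resolution estimate using the bundle parameters quoted in Fig. 4 (Dbundle = 570 μm, NA = 0.22) at λ = 532 nm; it is a back-of-the-envelope sketch, not part of the experimental analysis.

```python
# Evaluate Eq. (3), U_min and dx_min for the Fig. 4 bundle parameters (assumed here).
import numpy as np

lam = 0.532e-6          # wavelength [m]
D_bundle = 570e-6       # bundle diameter [m]
NA = 0.22               # single-core numerical aperture

def speckle_grain(U):
    """Estimated speckle grain size (resolution) at working distance U [m]."""
    return np.hypot(lam * U / D_bundle, lam / NA)

U_min = D_bundle / NA                                     # minimum working distance
print(f"U_min ~ {U_min*1e3:.2f} mm")                      # ~2.6 mm
print(f"dx_min ~ {speckle_grain(U_min)*1e6:.2f} um")      # sqrt(2)*lambda/NA ~ 3.4 um
for U_mm in (5, 10, 20):
    print(f"U = {U_mm} mm -> dx ~ {speckle_grain(U_mm*1e-3)*1e6:.1f} um")
```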

Fig. 4 Imaging resolution at different working distances. (a), Resolution (speckle grain size) as a function of the object’s distance from the bundle facet (U): measured (blue circles); theoretical diffraction limit according to Eq. (3) with Dbundle = 570μm and NA = 0.22 (blue line); calculated resolution of conventional lensless bundle imaging (green line, shown also in the inset in logarithmic scaling); calculated resolution of conventional lens-based bundle imaging assuming a distal objective with a focus at U = 5mm (red line); Umin is the minimum working distance (see discussion). (b), Test object from the 1951 USAF target, group 3, Scalebar=100μm. (c), Conventional bundle imaging of (b) when placed at U = 0. (d–h), Speckle-based single-shot reconstructions of (b) at distances of 5–13mm.

3.2. Applicability to broadband or spatially coherent illumination

To consider the applicability of the approach to broadband illumination, and as a step towards fluorescence imaging, the experiments of Figs. 2 and 3 were repeated with a broadband illumination source (800nm central wavelength, 10nm spectral width). The results are presented in Fig. 5. Broadband illumination can be used without appreciably affecting the performance of the technique as long as the illumination bandwidth is narrower than the fiber bundle’s speckle spectral correlation bandwidth [7, 31], since the speckle patterns produced by different wavelengths within this bandwidth remain well correlated. This spectral bandwidth is Fourier-transform related to the time-delay spread induced by modal dispersion and by inhomogeneity between different cores, caused by fabrication or bending. Figure 5(d) presents the characterization of the spectral correlation width of the imaging bundle used in this experiment. This straightforward characterization procedure is performed by recording speckle patterns at different narrow illumination wavelengths, and calculating the cross-correlation between these patterns [7]. The broadest spectral correlation width is obtained for an unbent bundle with cores that exhibit no modal dispersion, e.g. SMF cores (rather than MMF cores [31]). Working with an illumination bandwidth that is larger than the spectral correlation width is possible and will not affect the imaging resolution, but will reduce the contrast of the raw camera image and its autocorrelation, as was studied numerically in [24]. The limited optimal spectral bandwidth is an important limitation of the technique when fluorescence imaging is considered. Interestingly, even in the commercially available fibers used in this work the spectral correlation bandwidth is about 2–3 times smaller than the fluorescence bandwidth of most quantum dot markers, and similar to some of the narrower fluorescence bandwidths of rare-earth doped luminescent particles.
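
A sketch of this characterization step is given below: speckle frames recorded at a series of narrow illumination wavelengths are correlated against the first frame to trace the spectral correlation curve. The array names (frames, wavelengths_nm) are placeholders for such a measured data set; the random frames in the example only demonstrate the function call.

```python
# Spectral-correlation characterization sketch: correlate speckle frames recorded
# at different narrow illumination wavelengths against the first frame.
import numpy as np

def normalized_corr(a, b):
    """Zero-mean normalized cross-correlation coefficient of two speckle frames."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2)))

def spectral_correlation(frames, wavelengths_nm):
    """Correlation of each frame with the first one, vs. wavelength detuning."""
    ref = frames[0]
    detuning = wavelengths_nm - wavelengths_nm[0]
    corr = np.array([normalized_corr(ref, f) for f in frames])
    return detuning, corr

# Example call with synthetic stand-in data (random frames -> near-zero correlations):
rng = np.random.default_rng(1)
frames = rng.random((5, 128, 128))
detuning, corr = spectral_correlation(frames, np.linspace(800.0, 804.0, 5))
print(list(zip(detuning.round(1), corr.round(3))))
```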

Fig. 5 Speckle imaging with broadband illumination. (a), Object pattern. (b), Reconstruction of (a) from a single speckle image through the bundle with illumination of 10nm bandwidth (800nm central wavelength). (c), Increased reconstruction fidelity obtained by ensemble averaging of the speckle autocorrelation over 20 different camera shots. Each shot provides an independent speckle realization by slight bending of the bundle. (d), Experimentally measured spectral correlations of the 48.5cm long fiber bundle used (Schott 153333385); Scalebar=100μm.

In the case of spatially coherent illumination, one can follow a similar derivation as in Eqs. (1) and (2) for objects that are placed in the far field of the bundle, by replacing all of the light intensity terms with their complex field amplitudes (see derivation in Appendix F). Interestingly, instead of having to measure the complex speckle field and calculate its autocorrelation, one can simply use the intensity image of the bundle’s facet, which is related to this coherent autocorrelation by a Fourier transform, due to the Wiener-Khinchin theorem. Thus, a complex coherently-illuminated object can be reconstructed via phase retrieval from a single image of the bundle’s facet intensity, which is the object’s diffraction pattern, in a manner analogous to x-ray coherent diffraction imaging [32]. Appendix F provides a simple experimental proof-of-principle for this approach. Objects that are placed closer to the facet may be reconstructed via Fresnel phase retrieval [33], which may also provide depth information and open the possibility of imaging non-planar objects, where spatially coherent imaging is possible. One can also image closer objects with spatially coherent light by averaging over several coherent illumination realizations, as was recently demonstrated by Edrei et al. [34]. Unfortunately, imaging non-planar fluorescent objects is impossible in the presented implementation (see discussion).

4. Discussion

We have presented a widefield imaging technique that offers diffraction-limited imaging resolution at a large range of working distances without the use of any distal optics. The simple and calibration-free technique is straightforward to implement as it utilizes essentially the same setup already used in a conventional fiber bundle endoscope, all that is required is to shift the camera imaging plane away from the bundle proximal facet. Unlike conventional microendoscopes, our technique is aberration free and does not suffer from pixelation artifacts. Compared to novel approaches that are based on active correction of the fiber wavefront distortions [11, 13, 17, 19] our technique is insensitive to fiber bending and works inherently with spatially incoherent illumination in a single-shot, without the need for scanning. The presented experiments provide a proof of principle using planar and high contrast test targets, and significant challenges need to be overcome to apply the technique in bio-medical imaging applications. These include most fundamentally the lack of axial resolution for imaging three-dimensional objects (see discussion below), and the limited spectral acceptance. Incorporation of the illumination source into the fiber is a more straightforward technical challenge.

The computational retrieval of an image from a speckle field dramatically alters the influence of the bundle parameters on the imaging performance compared to conventional direct imaging. The major differences are that: (1) The diameter of each core and the spacing between the cores, which conventionally limit the imaging resolution and induce pixelation artifacts, do not directly affect the resolution in the speckle-based approach, which is determined by the total diameter of the bundle; (2) The number of cores, which conventionally affects only the number of resolution cells, also dictates the number of speckles in a single image that can be used for calculating the autocorrelation function. Therefore, local defects that are naturally present in some bundle cores, and that conventionally lead to spatially localized loss of information, translate in our technique only to a slightly reduced number of speckles. A too low number of speckles can however lead to insufficient ensemble averaging, which in turn reduces the signal-to-noise ratio (SNR) of the autocorrelation (the autocorrelation background statistical noise is inversely proportional to √Nspeckles [24], see Appendix G). This is especially important when imaging large objects whose angular dimensions are comparable to the bundle’s NA, since the larger spatial coordinates in the autocorrelation have less spatial averaging. Another challenging scenario is the imaging of objects containing a large number of bright resolution cells. In this scenario both the raw image contrast and the autocorrelation contrast would be low, since they are inversely proportional to the number of bright resolution cells [24]. These potential difficulties may be overcome by averaging the autocorrelation over multiple shots of different uncorrelated speckle patterns, as is done in stellar speckle interferometry [26], which can easily be performed in many ways. The simplest approach is to slightly move or bend the bundle, as is demonstrated experimentally in Fig. 5(c). Alternative approaches for ensemble averaging include using orthogonal polarizations (Appendix D), or different spectral bands (Fig. 5(d)). Still, even with a single shot and only a few thousand cores, we have demonstrated that the spatial averaging is sufficient to perform imaging of simple test targets (Figs. 2 and 3); (3) The FOV of our technique is not fixed by the bundle outer diameter, but is limited at large enough working distances by the periodicity of the speckle patterns generated by the ordered ’grating-like’ arrangement of cores found in most fiber bundles (including the ones used in this work). In this case, the FOV angular range is fixed at θFOV = λ/(2Dintercore), where Dintercore is the inter-core distance (7.5μm in the fiber used for Figs. 2–4), and the FOV = U · θFOV = λU/(2Dintercore) itself thus scales with the working distance, U. The periodicity of the speckle pattern limits the FOV since multiple objects located in different periods of the speckle (but still inside the NA of the fiber) will be recovered on top of one another. To demonstrate this limitation we show in Fig. 6 how this periodicity is apparent when the full range of the autocorrelation of the object of Fig. 2 is displayed. The angular spacing between the replicas in the autocorrelation is ∼ 74mrad, which matches λ/Dintercore = 0.532μm/7.5μm ≈ 71mrad. To avoid overlap between these replicas, the FOV is defined as half of this distance.
Figures 6(b)–6(d) show an experimental reconstruction of an object that spans > 85% of this FOV. Interestingly, considering that the imaging resolution is given at large enough distances by δx ≈ λU/Dbundle, one obtains for a periodic arrangement of cores that the number of effective resolution cells is proportional to the number of cores: (FOV/δx)² ≈ (Dbundle/(2Dintercore))² = Ncores/4. While in the common ordered arrangement of cores analyzed above the FOV is limited by the presence of replicas in the speckle pattern, for a randomly distributed core arrangement [35] the situation will resemble that obtained in imaging through scattering layers [24]. In such a scenario the angular FOV will be given by θFOV ≈ λ/dcore, where dcore is the effective diameter of the light intensity appearing from a single core at the fiber facet [35]. dcore thus limits the memory-effect FOV in the same manner that the scattering medium’s thickness, L, limits the memory-effect range in scattering media: θFOV ≈ λ/(πL). The reason for this analogy between core diameter and scattering-medium thickness is that the core diameter gives the effective transverse spread of light at the output facet of the fiber when a narrow pencil-like beam illuminates the fiber input. This is exactly the analog of the size of the diffusive halo that appears at the output facet of a diffusive medium for a pencil-like input beam at the input facet [7, 28]. Interestingly, for single-mode cores this mode field diameter (MFD) is roughly equal to dcore ≈ λ/NA, and thus the FOV is given by the NA of the cores (see Appendix C). In this special case of randomly positioned single-mode cores, all of the light that is guided by the bundle cores is within the memory-effect range and can contribute to the autocorrelation [35]. When multimode cores are considered (as is most commonly the case in commercial imaging bundles), dcore > λ/NA and thus θFOV ≈ λ/dcore < NA. In addition, any cross-talk between neighboring cores, which conventionally reduces resolution and contrast, will affect the FOV by reducing the speckle angular correlation width (see derivation in Appendix C).
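
The snippet below evaluates these FOV estimates for the ordered core lattice discussed above, assuming the parameters quoted in the text and in Fig. 4 (Dintercore = 7.5 μm, Dbundle = 570 μm, λ = 532 nm); it is a rough numerical check of the scaling, not a general design tool.

```python
# FOV scaling for an ordered core lattice (illustrative parameter values assumed).
lam = 0.532e-6          # wavelength [m]
D_intercore = 7.5e-6    # inter-core distance [m]
D_bundle = 570e-6       # bundle diameter [m]

theta_replicas = lam / D_intercore          # angular spacing between replicas
theta_fov = lam / (2 * D_intercore)         # usable FOV = half of that spacing
print(f"replica spacing ~ {theta_replicas*1e3:.0f} mrad")    # ~71 mrad
print(f"theta_FOV ~ {theta_fov*1e3:.1f} mrad")                # ~35.5 mrad

for U_mm in (8.5, 20):
    U = U_mm * 1e-3
    fov = U * theta_fov                      # lateral FOV at working distance U
    dx = lam * U / D_bundle                  # far-field speckle grain (resolution)
    print(f"U = {U_mm} mm: FOV ~ {fov*1e6:.0f} um, "
          f"resolution cells ~ {(fov/dx)**2:.0f}")   # ~ (Dbundle/2Dintercore)^2
```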

Fig. 6 FOV limitation: (a), a plot of the full autocorrelation of the speckle pattern used to reconstruct the image of Fig. 2(d), showing the periodicity of the speckle pattern generated by the ordered bundle cores. The FOV is limited by half of the angle between the replicas in the autocorrelation: FOV = λ/2Dintercore ≈ 35.5mrad, since it is smaller than the memory effect angle (θmem = λ/dcore ≈ 0.1rad FWHM, see Appendix C). (b), autocorrelation of an object that spans almost the entire available FOV. (c), reconstruction from the autocorrelation of (b), scalebar = 10 mrad; (d), original object imaged in (b–c).

As demonstrated in Fig. 4, our approach allows the imaging of a planar object placed at a large range of working distances. However, an important limitation of the approach is that in its current implementation it does not possess any depth-sectioning capability. Moreover, for the technique to work, the imaged objects should not extend beyond an axial distance of δz = (2λ/π)(U/Dbundle)², which is the axial decorrelation length of the speckled PSF [28]. Parts of the objects that lie beyond this axial range would produce uncorrelated speckle patterns and would not contribute properly to the calculated autocorrelation. As a result, with spatially incoherent illumination, the current technique is effective only for planar objects, and further work is required to find a solution for the case of three-dimensional targets such as thick tissue. A potentially relevant imaging scenario may be imaging inside hollow organs, where there is a free-space distance between the distal fiber end and the relatively planar target. For optimal performance the working distance needs to be large enough to ensure that the light from each point on the object is collected by all of the bundle’s cores (as in wavefront-shaping based approaches [17–19]). This optimal minimum working distance, Umin, is given by: Umin = Dbundle · dmode/λ = Dbundle/NA, where λ is the wavelength, Dbundle is the total diameter of the bundle, dmode is the mode field diameter of a single core, and NA is a single core’s numerical aperture. In the bundle used in Figs. 2–4 this minimal working distance is Umin = 2.6mm. Umin can be reduced by using a smaller-diameter bundle with higher-NA cores [36] (which may also increase the FOV), or by adding a thin scattering layer at the distal end to increase the NA. If a shorter working distance is desired one may simply splice a glass-rod spacer to the distal end. Another possible alternative is to acquire speckle images from subapertures of the fiber, which will decrease Umin at the expense of resolution. Interestingly, the information contained in several sub-aperture speckle images may be used to retrieve depth information [25]. Axial sectioning and increased resolution may also be possible by the use of structured illumination [16, 37], or temporal gating [38].
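
For a rough feel of these constraints, the sketch below evaluates the axial decorrelation length and the minimal working distance using the values quoted in Fig. 4 (Dbundle = 570 μm, NA = 0.22, λ = 532 nm); these numbers are illustrative assumptions rather than a general specification.

```python
# Axial decorrelation length dz = (2*lambda/pi)*(U/Dbundle)^2 and U_min = Dbundle/NA
# for assumed (Fig. 4) bundle parameters.
import numpy as np

lam = 0.532e-6      # wavelength [m]
D_bundle = 570e-6   # bundle diameter [m]
NA = 0.22           # single-core numerical aperture

print(f"U_min ~ {D_bundle / NA * 1e3:.1f} mm")            # ~2.6 mm
for U_mm in (5, 10, 20):
    U = U_mm * 1e-3
    dz = (2 * lam / np.pi) * (U / D_bundle) ** 2
    print(f"U = {U_mm} mm -> axial decorrelation dz ~ {dz*1e6:.0f} um")
```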

Additional possible improvements include computational retrieval by exploiting phase information contained in the image bi-spectrum [39], or in speckle images captured under different aperture masks.

Besides these possible improvements, the proposed speckle-correlation approach has the advantage of being extremely simple to implement: it requires no distal optics and no wavefront control, and it immediately extends the imaging capabilities of any fiber-bundle based endoscope when planar objects are considered. Quite uniquely, and contrary to many novel endoscopic imaging techniques, fiber movements are not a hurdle but are beneficial, and are exploited rather than fought against to provide better imaging quality.

5. Methods

Experimental set-up

The complete experimental set-ups for incoherent and coherent imaging are presented in Appendix A. The fiber bundles used were two different commercial fiber bundles by Schott. The first, which was used for the experiments of Figs. 2–4 and Fig. 6, had 4.5k cores with 7.5 μm inter-core distance, 0.53 mm diameter, and a length of 105 cm. The second, which was used for the coherent and broadband experiments, had 18k cores with 8 μm inter-core distance, a diameter of 1.1 mm, and a length of 48.5 cm (Schott part number: 15333385). The imaged objects were taken from a USAF resolution target (Thorlabs R3L3S1N). In the experiments of Figs. 2–4 and Fig. 6, the objects were illuminated by a narrow-bandwidth spatially incoherent pseudothermal source at a wavelength of 532 nm, based on a Coherent Compass 215M-50 cw laser and a rotating diffuser (see Appendix A). In the coherent experiments the same laser was used without a rotating diffuser. In the experiment of Fig. 5 a Ti:Sapphire laser with a bandwidth of 12nm around a central wavelength of 800nm (Spectra-Physics Mai Tai) and a rotating diffuser was used. The camera used in the experiments of Figs. 2–4 and Fig. 6 was a PCO edge 5.5 (2,560×2,160 pixels). Exposures of 10 milliseconds to 2 seconds were used (typically a few hundred milliseconds). The objects were placed at distances of 5 mm–250 mm from the bundle’s input facet and the camera was placed at distances of 5–50 mm from the bundle’s output facet, or behind an objective that imaged the light close to the bundle’s output facet.

Image processing

For the incoherent images, the raw camera image was spatially normalized for the slowly varying envelope of the transmitted light pattern by dividing the raw camera image by a low-pass-filtered version of it that estimated its envelope. The autocorrelation of the processed image was calculated by an inverse Fourier transform of its power spectrum (effective periodic boundary conditions). The resulting autocorrelation was cropped to a rectangular window with dimensions ranging between 40×40 pixels and 400×400 pixels (depending on the imaged object dimensions), and the minimum pixel brightness in this window was background-subtracted from the entire autocorrelation trace. In addition, the intensity of the central pixel of the autocorrelation was taken as equal to one of its neighbors, to reduce the effect of camera hot-pixels. A two-dimensional Tukey window applied on the autocorrelation was found to enhance the phase-retrieval reconstruction fidelity in some of the experiments. For the coherent case, the raw image intensity was thresholded to remove background noise and the image was zero-padded before calculating its Fourier transform.
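
A condensed sketch of this pre-processing chain is given below (envelope normalization, Wiener-Khinchin autocorrelation, cropping, background subtraction, hot-pixel suppression, Tukey windowing). The filter size, crop size, and window parameter are illustrative placeholders, not the exact values used in the experiments.

```python
# Pre-processing sketch for the incoherent speckle images, following the steps
# described in the Methods; parameter values are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal.windows import tukey

def preprocess_autocorrelation(raw, crop=200, envelope_sigma=50, alpha=0.5):
    # 1. Normalize out the slowly varying illumination envelope
    envelope = gaussian_filter(raw.astype(float), envelope_sigma) + 1e-12
    flat = raw / envelope

    # 2. Autocorrelation via the power spectrum (periodic boundary conditions)
    F = np.fft.fft2(flat - flat.mean())
    ac = np.fft.fftshift(np.real(np.fft.ifft2(np.abs(F) ** 2)))

    # 3. Crop a window around the autocorrelation peak
    cy, cx = np.array(ac.shape) // 2
    h = crop // 2
    ac = ac[cy - h:cy + h, cx - h:cx + h]

    # 4. Subtract the minimum of the cropped window as a constant background
    ac -= ac.min()

    # 5. Replace the central pixel by a neighbor (suppresses hot-pixel artifacts)
    ac[h, h] = ac[h, h + 1]

    # 6. Apply a 2D Tukey window before the phase-retrieval step
    w = np.outer(tukey(crop, alpha), tukey(crop, alpha))
    return ac * w
```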

Phase-retrieval algorithm block diagram

The phase-retrieval algorithm was implemented according to the recipe provided by Bertolotti and co-authors [23] (for details see Appendix E). The object constraints used were that the object is real and non-negative, or that it is confined to half of the complex plane, with a positive real part. The algorithms were implemented in Matlab. The reconstructed images were median-filtered, and the images of Fig. 5 were also Fourier interpolated.

Appendix A: Experimental setups

The experimental setup is drawn schematically in Fig. 7. The light source in all the experiments except the broadband imaging was a pseudo-thermal spatially-incoherent source, composed of a Coherent Compass 215M-50 532nm CW laser whose beam was expanded approximately ×50 by a home-built telescope, passed through a focusing lens and then through a rapidly rotating diffuser. The light that passed through the object was collected by the fiber bundle and imaged by the camera after forming a speckle pattern, with or without an imaging objective (L2). For the resolution characterization (Fig. 4) the object was replaced with a “point source”, i.e. a pinhole with a diameter smaller than could be resolved by the fiber’s aperture: dpinhole < U · λ/Dbundle. The imaging plane was translated to different distances from the facet and the speckle grain size was taken as the width of the speckle pattern’s autocorrelation. In the broadband imaging setup, the light source was based on a Spectra-Physics Mai-Tai femtosecond laser, and no focusing lens (L1) was used. In the spatially-coherent imaging experiment (Appendix F), the rotating diffuser was removed, the camera imaged the bundle’s output facet, and the first lens (L1) focused the light onto the bundle’s input facet, to shorten the distance required to attain the far-field condition (Fraunhofer diffraction).

Fig. 7 Optical setup used for all experiments in transmission geometry. L1,L2 - focusing lenses.

Appendix B: Imaging in reflection geometry

In this configuration, a diffusive reflecting object was placed on an opaque background and illuminated by spatially incoherent light. The reflected light was collected by the fiber bundle’s input facet, placed at a distance of about 2.2cm from the object, and the resulting speckle patterns were imaged using an sCMOS camera. The setup for these experiments is depicted in Fig. 8(a). Sample results for imaging diffusive reflective objects are presented in Figs. 8(b) and 8(c). Given the results of this proof-of-concept reflection-mode imaging modality, there is no inherent limitation to incorporating the light source into the bundle itself, e.g. by coupling the laser light into the fiber cores or into its cladding.

Fig. 8 Imaging in reflection geometry: a, Reflection setup. L1,L2,L3 - focusing lenses. b–d, Diffusive reflecting objects and the images reconstructed from measured speckle patterns. Scalebars: 0.5mm. Both objects were imaged with a distance of U2 ≈ 22mm between the object and the bundle’s input facet.

Appendix C: Analysis of the angular correlation width

An ideal fiber bundle with randomly positioned single-mode cores and no core-to-core coupling has a high angular correlation width, dictated by the NA of its cores. This can be derived rigorously by describing the field at the output facet of the fiber bundle Eout(r):

$$E_{out}(\mathbf{r}) = \Big(\big\{\left[E_{in} \ast F\right]\cdot comb \cdot \exp(iR)\big\} \ast F\Big)(\mathbf{r})$$
Where Ein is the field entering the bundle, ∗ denotes a convolution, · a multiplication, F is the function describing the mode of each core, comb is a set of delta functions describing the grid of the core centers, and R is the set of random phases describing the bundle’s phase mixing.

By taking the Fourier transform of the last expression, one will obtain an expression for the field at the far field of the fiber bundle:

$$\mathrm{FT}\left\{E_{out}\right\}(\mathbf{k}) = \Big(\big\{[\tilde{E}_{in}\cdot \tilde{F}] \ast \mathrm{FT}\left[comb\cdot \exp(iR)\right]\big\}\cdot \tilde{F}\Big)(\mathbf{k})$$
which, due to the random relative phases, will produce a random speckle pattern. We will denote this source of randomization by S(k) = FT [comb(r) · exp(iR(r))]. To estimate the angular correlation width of the fiber bundle, we take the input field to be a plane wave, Ein(r) = exp(i·kin·r), and calculate the cross-correlation of its speckle pattern with the speckle pattern created by a plane wave of a neighboring wavenumber, kin + δk. Since Ẽin(k) ∝ δ(k − kin), we find that in the far field:
$$I(\mathbf{k}) = \left|\mathrm{FT}\left\{E_{out}\right\}\right|^{2} = \Big|\big\{[\delta(\mathbf{k}'-\mathbf{k}_{in})\,\tilde{F}(\mathbf{k}')] \ast S(\mathbf{k})\big\}\cdot \tilde{F}(\mathbf{k})\Big|^{2} = \left|\tilde{F}(\mathbf{k}_{in})\, S(\mathbf{k}-\mathbf{k}_{in})\, \tilde{F}(\mathbf{k})\right|^{2}$$
Where k′ is a dummy variable. The speckle pattern of the neighboring wavenumber is taken with a compensation for its geometrical shift:
$$I'(\mathbf{k},\mathbf{k}_{in}) = I(\mathbf{k}+\delta\mathbf{k},\ \mathbf{k}_{in}+\delta\mathbf{k}) = \left|\tilde{F}(\mathbf{k}_{in}+\delta\mathbf{k})\, S(\mathbf{k}-\mathbf{k}_{in})\, \tilde{F}(\mathbf{k}+\delta\mathbf{k})\right|^{2}$$
Then, the normalized cross correlation will give:
$$C(I,I') = \frac{\int (I-\bar{I})(I'-\bar{I}')\, d\mathbf{k}}{\left(\int (I-\bar{I})^{2}\, d\mathbf{k}\,\int (I'-\bar{I}')^{2}\, d\mathbf{k}\right)^{1/2}} \approx \frac{\int \left|\langle E\,E'^{*}\rangle\right|^{2} d\mathbf{k}}{\left(\int (I-\bar{I})^{2}\, d\mathbf{k}\,\int (I'-\bar{I}')^{2}\, d\mathbf{k}\right)^{1/2}}$$
Where Ī is the mean speckle intensity, and the last (approximate) equality is known as the factorization approximation, following Freund’s derivation [28]. Following the same derivation, instead of continuing the calculation by introducing a specific speckle pattern, one introduces an ensemble average over different speckle-pattern realizations (which here can be considered as averaging over all possible phases, R). In this manner, the speckle pattern intensity turns into the average one, and thus the randomization source becomes independent of k: 〈|S(k)|²〉 ≡ 〈S²〉. This allows factoring it out of the integrals, and obtaining a relatively simple expression for the ensemble-averaged cross-correlation:
$$\langle C(I,I')\rangle \approx \frac{\int \left|\langle E\,E'^{*}\rangle\right|^{2} d\mathbf{k}}{\left(\int \langle(I-\bar{I})^{2}\rangle\, d\mathbf{k}\,\int \langle(I'-\bar{I}')^{2}\rangle\, d\mathbf{k}\right)^{1/2}} =$$
$$= \frac{\left|\tilde{F}(\mathbf{k}_{in})\tilde{F}^{*}(\mathbf{k}_{in}+\delta\mathbf{k})\right|^{2}\langle S^{2}\rangle^{2}\int \left|\tilde{F}(\mathbf{k})\tilde{F}^{*}(\mathbf{k}+\delta\mathbf{k})\right|^{2} d\mathbf{k}}{\left|\tilde{F}(\mathbf{k}_{in})\tilde{F}^{*}(\mathbf{k}_{in}+\delta\mathbf{k})\right|^{2}\langle S^{2}\rangle^{2}\left(\int \left(\left|\tilde{F}(\mathbf{k})\right|^{2}-\overline{\left|\tilde{F}(\mathbf{k})\right|^{2}}\right)^{2} d\mathbf{k}\,\int \left(\left|\tilde{F}(\mathbf{k}+\delta\mathbf{k})\right|^{2}-\overline{\left|\tilde{F}(\mathbf{k}+\delta\mathbf{k})\right|^{2}}\right)^{2} d\mathbf{k}\right)^{1/2}} =$$
$$= \frac{\int \left|\tilde{F}(\mathbf{k})\tilde{F}^{*}(\mathbf{k}+\delta\mathbf{k})\right|^{2} d\mathbf{k}}{\int \left(\left|\tilde{F}(\mathbf{k})\right|^{2}-\overline{\left|\tilde{F}(\mathbf{k})\right|^{2}}\right)^{2} d\mathbf{k}}$$
The end result is that the system’s angular correlation width is on average identical to the collection angle, defined by the mode field diameter of the bundle’s cores. For a bundle of single-mode optical fibers, the width of the cross correlation is equal to the fiber’s NA, which can be easily estimated by:
$$NA = \frac{\lambda}{d_{mode}}$$

Where λ is the wavelength and dmode is the mode field diameter. This result is in essence identical to the estimation of the “memory effect” width in scattering media [7]: if a point source is placed adjacent to the input of the scattering medium, and the size of the light spot at its output is L, the “memory effect” width is estimated by λ/L. If one takes a fiber bundle as the scattering medium, L is now the core mode field diameter, as in our previous result. An extension of this result to multi-mode cores can be made by using only the mode with the largest field diameter as F(r). When core-to-core coupling exists, the angular correlation width can be derived by essentially replacing dmode with the effective transverse spread of the light on the output facet when the light at the input excites a single core. In addition, if the cores in the bundle are arranged on a periodic lattice grid, this will result in a periodic reciprocal lattice appearing in S(k), which will limit the FOV, as analyzed in Fig. 6. In order to experimentally verify the angular correlation width of the fiber bundle used, we used the setup shown in Fig. 7, while replacing the object with a “point source”. Our “point source” consisted of a back-illuminated pinhole with a diameter smaller than that which can be resolved by the fiber’s aperture: dpinhole < U · λ/Dbundle. We then took images of the speckle patterns created while translating the point source transversely over the object plane, and calculated the correlation between the different speckle patterns. An example of such a measurement is shown in Fig. 9. The FWHM of this trace is in good agreement with the expected angular correlation width, dictated by the diameter of a single core, δθacw ≈ λ/dcore, when this diameter is taken as dcore ≈ 5.7μm.

Fig. 9 Measured speckle angular correlation range for the SCHOTT fiber bundle used in Figs. 2–4.

Appendix D: Enhanced speckle ensemble averaging

We have focused on single-shot imaging, which is robust, simple and useful in many scenarios. However, as mentioned in the discussion, increased ensemble averaging can be performed very simply by averaging the autocorrelation over multiple shots of uncorrelated speckle patterns. This improves the statistical signal-to-noise ratio, as seen in Fig. 5(c). One can introduce different uncorrelated speckle patterns in many ways, including bending the fiber, using two orthogonal polarizations (see Fig. 10), spectral filtering, and more.
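
A minimal sketch of this multi-shot averaging is given below: the autocorrelations (rather than the raw frames) of several uncorrelated speckle realizations are averaged before phase retrieval. The frames variable stands for a stack of camera images acquired with, e.g., slightly different fiber bendings.

```python
# Ensemble averaging of speckle autocorrelations over multiple uncorrelated shots.
import numpy as np

def averaged_autocorrelation(frames):
    """Mean autocorrelation over a stack of speckle frames (Wiener-Khinchin)."""
    acc = None
    for frame in frames:
        f = frame.astype(float)
        f -= f.mean()
        ac = np.real(np.fft.ifft2(np.abs(np.fft.fft2(f)) ** 2))
        acc = ac if acc is None else acc + ac
    return np.fft.fftshift(acc / len(frames))
```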

Fig. 10 Decorrelation of speckle patterns for enhanced ensemble averaging. a, Decorrelation of the speckle pattern due to changing polarization. The correlations presented are between speckle patterns taken using the setup shown in Fig. 7, where a linear polarizer was added adjacent to the bundle’s output facet. The correlations were taken with respect to the speckle pattern taken at polarizer angle 0. One can see that the two orthogonal polarizations give two uncorrelated speckle patterns. b, Decorrelation of speckle patterns for different bendings of the fiber bundle. The graph shows 25 different speckle patterns taken using the setup shown in Fig. 7, where each pattern was created with a different slight bending of the bundle. Each color shows the correlation between one speckle pattern and the remaining 24. One can see that virtually no correlation exists between the different patterns.

Appendix E: Image retrieval algorithm (phase retrieval)

The object image is retrieved from its autocorrelation, which is calculated from the measured scattered light camera image. Given the processed (smoothed and envelope corrected) camera image I(x, y) (see Methods), the scattered light autocorrelation, R(x, y), is calculated by an inverse two-dimensional Fourier transform of its power-spectrum, building on the Wiener-Khinchin theorem:

$$R(x,y) = I(x,y) \star I(x,y) = \mathrm{FT}^{-1}\left\{\left|\mathrm{FT}\left\{I(x,y)\right\}\right|^{2}\right\}$$

According to the Wiener-Khinchin theorem, the object’s power spectrum, Smeas(kx, ky), is the Fourier transform amplitude of its autocorrelation. Therefore we calculate the object’s power spectrum by performing a 2D Fourier transform of the central part of this autocorrelation, after windowing with a windowing function W(x, y) (e.g. a Tukey window, see Methods):

$$S_{meas}(k_x,k_y) = \left|\mathrm{FT}\left\{W(x,y)\cdot R'(x,y)\right\}\right|$$
where R′(x, y) is the background-subtracted autocorrelation trace R(x, y) (for more information about the pre-processing of the autocorrelation, see Methods). At this point, the ’only’ missing information required to reconstruct the object’s image is the phase of its 2D Fourier transform, which is found by an iterative Fienup-type phase-retrieval algorithm [30]. The phase-retrieval algorithm was implemented according to the recipe given by Bertolotti et al. [23]. A block-diagram of this algorithm is given in Fig. 11. This modified Gerchberg-Saxton algorithm starts with an initial guess for the object pattern g1(x, y), chosen as a random pattern in our experiments. This initial guess is entered to the algorithm that performs the following four steps at its kth iteration:
  1. Gk(kx, ky) = FT {gk(x, y)}
  2. θk(kx, ky) = arg{Gk(kx, ky)}
  3. G′k(kx, ky) = (Smeas(kx, ky))1/2 exp(iθk(kx, ky))
  4. g′k(x, y) = FT−1{G′k(kx, ky)}

Where the measured information on the object’s autocorrelation is used in the third step.

Fig. 11 Block diagram of the iterative phase-retrieval algorithm used. The algorithm used is Fienup’s HIO phase-retrieval algorithm [30], implemented according to Bertolotti et al. [23], followed by Fienup’s error-reduction algorithm. Both algorithms are based on an iterative modified Gerchberg-Saxton algorithm whose block diagram is shown. Smeas(kx, ky) is the Fourier transform amplitude of the experimentally measured autocorrelation, which is used to approximate the object’s power spectrum, as described in Appendix E.

The input for the next (k + 1) iteration, gk+1(x, y), is obtained from the output of the kth iteration, g′k(x, y), by imposing physical constraints on the object image: in our implementations, the object is either real and non-negative or limited to half of the complex plane. Following Bertolotti et al. [23] we have used two types of implementations of these constraints in the algorithm, termed the “Hybrid Input-Output (HIO)” and the “Error-reduction” algorithms, as pioneered by Fienup [30]. These algorithms are described by:

$$g_{k+1}(x,y) = \begin{cases} g'_{k}(x,y) & (x,y) \notin \Gamma \\ 0 & (x,y) \in \Gamma \end{cases}$$
For the Error-reduction algorithm, and:
$$g_{k+1}(x,y) = \begin{cases} g'_{k}(x,y) & (x,y) \notin \Gamma \\ g_{k}(x,y) - \beta\, g'_{k}(x,y) & (x,y) \in \Gamma \end{cases}$$
For the HIO algorithm, where Γ is the set of all points (x, y) at which g′k(x, y) violates the physical constraints, and β is a feedback parameter that controls the convergence properties of the algorithm. Following Bertolotti et al. [23], first a few thousand iterations of the hybrid input-output (HIO) algorithm [30] were run with a beta factor decreasing from β = 2 to β = 0, in steps of 0.04. For each β value, 40 iterations of the algorithm were performed (i.e., a total of ∼2000 iterations). The result of the HIO algorithm was fed as an input to an additional 40 iterations of the ’error reduction’ algorithm to obtain the final result. Importantly, to assure faithful reconstruction of each image with these basic phase-retrieval algorithms, several different runs of the algorithm (from 20 up to 400, typically 50) were performed with different random initial conditions, and to each reconstruction we assigned an error metric. The error metric was defined by the mean square difference between the reconstruction’s Fourier spectrum and the Fourier modulus of the measured autocorrelation, Smeas(kx, ky). Ideally, the lowest-error reconstruction should be the optimal one. However, during our studies using these basic algorithms we found that even though these reconstructions were satisfactory (see example in Fig. 12), they were not always the optimal reconstructions compared to the original known object. In addition, we note that the results of these phase-retrieval algorithms are known to be sensitive to the preprocessing of the autocorrelation (background subtraction, normalization, windowing function, size of support and smoothing kernel). However, existing superior algorithms are expected to substantially improve reconstruction fidelity and convergence.
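
The following Python sketch condenses the recipe above (HIO with a decreasing β followed by error-reduction iterations, under a real and non-negative object constraint). S_meas stands for the estimated object power spectrum of Appendix E, assumed here to be arranged in standard FFT order; this is a minimal illustration, not the exact Matlab implementation used for the figures.

```python
# Minimal HIO + error-reduction sketch (Fienup [30], after Bertolotti et al. [23]).
import numpy as np

def phase_retrieve(S_meas, n_per_beta=40, n_er=40, seed=0):
    rng = np.random.default_rng(seed)
    modulus = np.sqrt(S_meas)                     # Fourier modulus = sqrt(power spectrum)
    g = rng.random(S_meas.shape)                  # random initial guess (real, non-negative)

    def fourier_projection(g):
        G = np.fft.fft2(g)
        G = modulus * np.exp(1j * np.angle(G))    # impose the measured Fourier modulus
        return np.real(np.fft.ifft2(G))

    # HIO iterations with beta decreasing from 2 to 0 in steps of 0.04
    for beta in np.arange(2.0, -0.001, -0.04):
        for _ in range(n_per_beta):
            g_prime = fourier_projection(g)
            violate = g_prime < 0                 # points violating non-negativity
            g_new = g_prime.copy()
            g_new[violate] = g[violate] - beta * g_prime[violate]
            g = g_new

    # Error-reduction iterations to finish
    for _ in range(n_er):
        g_prime = fourier_projection(g)
        g = np.where(g_prime < 0, 0.0, g_prime)

    return g
```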

Fig. 12 Example of a lowest-error reconstruction out of 100 runs of the reconstruction algorithm with different random initial conditions, for the object of Fig. 2. Although the displayed reconstruction is the one having the lowest mean square error between its Fourier spectrum and the Fourier transform of the measured autocorrelation, in some cases reconstructions of slightly larger calculated errors gave results that better estimate the object.

Appendix F: Extension to coherent imaging

When spatially coherently illuminated objects placed in the far field of the bundle are considered, one can follow a similar derivation as for the incoherent case (Eqs. (1) and (2)), replacing all of the light intensity terms with complex fields:

$$E(x) = E_{obj}(x) \ast S(x)$$
Where E(x) is the field at the imaging plane, Eobj(x) is the light field of the object, and S(x) is the complex coherent speckle pattern field of the system. Taking the autocorrelation of the resulting field E(x) gives:
$$E(x) \star E(x) = \left[E_{obj}(x) \star E_{obj}(x)\right] \ast \left[S(x) \star S(x)\right]$$
In a similar manner, the absolute value of the autocorrelation of the complex speckle pattern, S(x) ★ S(x), is a sharply-peaked function with the same width as the autocorrelation of the speckle’s intensity, but without its constant background. Once more, this enables us to estimate the object’s complex autocorrelation by calculating the autocorrelation of the output field. Acquiring the output field requires interferometric measurements (e.g. by off-axis holography), which will reduce the robustness and simplicity of the incoherent method. However, building on the Wiener-Khinchin theorem, one can acquire the same information about the field autocorrelation without requiring interferometric detection. This is because the Fourier transform of the complex output field autocorrelation is its power-spectrum:
$$\mathrm{FT}\left\{E(x) \star E(x)\right\} = \left|\tilde{E}(k)\right|^{2}$$
Since the bundle’s input facet is located at the far field of the object, and provided no optical cross-talk exists between the bundle cores, the image on the bundle’s output facet will show a pixelated version of the power spectrum of the object itself (i.e. the intensity of its diffraction pattern). A Fourier transform of the facet image intensity gives the complex object’s autocorrelation. The complex object can be reconstructed with the same phase-retrieval algorithm used before (exactly as done in x-ray coherent diffractive imaging [32]). To demonstrate this, an image of the output facet was acquired (Fig. 13(a)), when the object was placed at a distance from the bundle’s input facet. We used an object with a size of the order of 0.5mm and an illumination wavelength of 532 nm. For these parameters the far-field condition is met at a distance of about 1 meter. Instead of expanding the size of our setup, the results shown in Fig. 13 were taken when the object was illuminated by a spherically converging coherent illumination, obtained by a simple focusing lens (see Appendix A). The lens focused the light on the bundle’s input facet, thereby creating on it the object’s Fraunhofer diffraction pattern. Thus, as the results of Fig. 13 demonstrate, spatially incoherent illumination is not a strict requirement of speckle-correlation based imaging. In scenarios where the bundle’s input facet is located in the near field of the object, the facet intensity pattern is the intensity of the Fresnel diffraction pattern of the object, and the object can be reconstructed from this pattern [33]. However, in order to reconstruct objects from these patterns, the distance between the fiber and the object should be known, or be found by, e.g., multiple reconstruction attempts at different distances, in a similar fashion to a computational ’auto-focus’ mechanism.
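
The sketch below illustrates the corresponding processing step for the coherent case: the thresholded, zero-padded facet-intensity image (the object's far-field diffraction pattern) is Fourier transformed to obtain the object's autocorrelation, which can then be passed to the same phase-retrieval routine. The threshold and padding factor are illustrative assumptions, loosely following the Methods.

```python
# From the bundle-facet intensity (far-field diffraction pattern) to the complex
# object's autocorrelation via the Wiener-Khinchin theorem.
import numpy as np

def facet_to_autocorrelation(facet_intensity, threshold=0.02, pad=2):
    I = facet_intensity.astype(float)
    I[I < threshold * I.max()] = 0.0             # remove background noise
    ny, nx = I.shape
    padded = np.zeros((pad * ny, pad * nx))      # zero-pad before the Fourier transform
    padded[:ny, :nx] = I
    # Fourier transform of the power spectrum gives the object's autocorrelation
    return np.abs(np.fft.fftshift(np.fft.fft2(padded)))
```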

Fig. 13 Imaging with spatially coherent illumination. a, The bundle facet image gives the intensity of the Fraunhofer diffraction pattern of the test object shown in (e), sampled by the bundle cores; b, Fourier transform of (a); c, Reconstructed object from (b); d, Autocorrelation of the object shown in (e); e, The test object used, from the 1951 USAF target, group 2; All scale bars are equal to 0.1mm.

Appendix G: Effect of number of cores on imaging resolution

Here we study the effect that a reduced number of cores, Ncores, in the bundle has on the imaging resolution of our technique. Since the resolution of our technique is determined by the size of the speckle grain (the diffraction limit), which at large enough working distances (U > Umin) is determined only by the diameter of the bundle, δx ≈ U · λ/Dbundle, a reduced number of cores will not affect the resolution but will only reduce the number of speckles, Nspeckles (for single-mode cores Nspeckles = Ncores). While not affecting the resolution, a lower number of speckles will reduce the contrast and signal-to-noise ratio (SNR) of the measured autocorrelation [24], as can be seen in the numerical results displayed in Fig. 14, below. The reduced autocorrelation SNR will effectively limit the complexity (number of bright resolvable resolution cells) of the imaged objects [24] and/or the field of view of the approach, as is analyzed in the manuscript text referring to Fig. 6.
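
A simplified numerical experiment in the spirit of Fig. 14 is sketched below: single-mode cores at random positions inside a circular bundle aperture are assigned random phases, the far-field speckle intensity is computed, and its autocorrelation is compared for a large and a small number of cores. The grid size and core placement are arbitrary modelling choices, not the simulation parameters used for the figure.

```python
# Toy simulation: far-field speckle autocorrelation for bundles with many vs. few cores.
import numpy as np

def bundle_speckle_autocorr(n_cores, grid=512, bundle_radius=0.4, seed=0):
    rng = np.random.default_rng(seed)
    facet = np.zeros((grid, grid), complex)
    placed = 0
    # Random core positions inside the bundle aperture, each with a random phase
    # (cores that land on the same grid pixel simply overwrite; fine for a sketch)
    while placed < n_cores:
        x, y = rng.uniform(-0.5, 0.5, 2)
        if x**2 + y**2 < bundle_radius**2:
            i, j = int((y + 0.5) * grid), int((x + 0.5) * grid)
            facet[i, j] = np.exp(2j * np.pi * rng.random())
            placed += 1
    speckle = np.abs(np.fft.fftshift(np.fft.fft2(facet))) ** 2   # far-field intensity
    s = speckle - speckle.mean()
    ac = np.real(np.fft.ifft2(np.abs(np.fft.fft2(s)) ** 2))
    return np.fft.fftshift(ac / ac.max())

ac_many = bundle_speckle_autocorr(32025)
ac_few = bundle_speckle_autocorr(1281)
# The central peak width is set by the bundle diameter in both cases; only the
# background fluctuations grow when the number of cores is reduced.
print(ac_many.shape, ac_few.shape)
```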

Fig. 14 Simulations of the effect of the number of cores. (a), Simulated bundle facet containing 32,025 cores. (b), Close-up on the simulated cores of (a). (c), The speckle pattern created in the far field by the bundle of (a) with random relative phases. (d), The autocorrelation of the speckle pattern presented in (c), displaying a diffraction-limited autocorrelation peak on top of a background with fluctuations proportional to 1/√Nspeckles. (e–h), The same as (a–d) for a bundle containing 1,281 cores. The lower number of cores increases the background fluctuations in the autocorrelation but does not affect the width of its central peak, which determines the imaging resolution.

Funding

This work was supported by the LabEx ENS-ICFP: ANR-10-LABX-0010/ANR-10-IDEX-0001-02 PSL; the CNRS/Weizmann NaBi European Associated Laboratory; The Aix-Marseille University/Hebrew University of Jerusalem collaborative-research joint program. The work was funded by the European Research Council (grants no. 278025 and 677909). O.K. was supported by an Azrieli Faculty Fellowship and the Marie Curie Intra-European Fellowship for career development (IEF).

Acknowledgments

The authors thank Mickael Mounaix and Cathie Ventalon for their valuable help.

References and links

1. B. A. Flusberg, E. D. Cocker, W. Piyawattanametha, J. C. Jung, E. L. M. Cheung, and M. J. Schnitzer, “Fiber-optic fluorescence imaging,” Nat. Methods 2(12), 941–950 (2005). [CrossRef]   [PubMed]  

2. G. Oh, E. Chung, and S. H. Yun, “Optical fibers for high-resolution in vivo microendoscopic fluorescence imaging,” Optical Fiber Technology 19(6), 760–771 (2013). [CrossRef]  

3. S. M. Kolenderska, O. Katz, M. Fink, and S. Gigan, “Scanning-free imaging through a single fiber by random spatio-spectral encoding,” Opt. Lett. 40(4), 534–537 (2015). [CrossRef]   [PubMed]  

4. R. Barankov and J. Mertz, “High-throughput imaging of self-luminous objects through a single optical fibre,” Nat. Commun. 5, 5581 (2014). [CrossRef]  

5. G. Ducourthial, P. Leclerc, T. Mansuryan, M. Fabert, J. Brevier, R. Habert, F. Braud, R. Batrin, C. Vever-Bizet, G. Bourg-Heckly, L. Thiberville, A. Druilhe, A. Kudlinski, and F. Louradour, “Development of a real-time flexible multiphoton microendoscope for label-free imaging in a live animal,” Sci. Rep. 5, 18303 (2015). [CrossRef]   [PubMed]  

6. J. M. Jabbour, M. A. Saldua, J. N. Bixler, and K. C. Maitland, “Confocal endomicroscopy: instrumentation and medical applications,” Ann. Biomed. Eng. 40(2), 378–397 (2012). [CrossRef]  

7. A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics 6, 283–292 (2012). [CrossRef]  

8. R. D. Leonardo and S. Bianchi, “Hologram transmission through multi-mode optical fibers,” Opt. Express 19(1), 247–254 (2011). [CrossRef]   [PubMed]  

9. S. Bianchi and R. D. Leonardo, “A multi-mode fiber probe for holographic micromanipulation and microscopy,” Lab Chip 12(3), 635–639 (2012). [CrossRef]  

10. T. Čižmár and K. Dholakia, “Exploiting multimode waveguides for pure fibre-based imaging,” Nat. Commun. 3, 1027 (2012). [CrossRef]   [PubMed]  

11. I. N. Papadopoulos, S. Farahi, C. Moser, and D. Psaltis, “Focusing and scanning light through a multimode optical fiber using digital phase conjugation,” Opt. Express 20(10), 10583–10590 (2012). [CrossRef]   [PubMed]  

12. Y. Choi, C. Yoon, M. Kim, T. D. Yang, C. Fang-Yen, R. R. Dasari, K. J. Lee, and W. Choi, “Scanner-free and wide-field endoscopic imaging by using a single multimode optical fiber,” Phys. Rev. Lett. 109, 203901 (2012). [CrossRef]   [PubMed]  

13. M. Plöschner, T. Tyc, and T. Čižmár, “Seeing through chaos in multimode fibres,” Nat. Photonics 9, 529–535 (2015). [CrossRef]  

14. S. Rosen, D. Gilboa, O. Katz, and Y. Silberberg, “Focusing and scanning through flexible multimode fibers without access to the distal end,” arXiv:1506.08586 (2015).

15. Standard specifications for imagefibres (Fujikura, 2016), http://www.fujikura.co.uk/media/18438/image%20fibre.PDF

16. N. Bozinovic, C. Ventalon, T. Ford, and J. Mertz, “Fluorescence endomicroscopy with structured illumination,” Opt. Express 16(11), 8016–8025 (2008). [CrossRef]   [PubMed]  

17. A. J. Thompson, C. Paterson, M. A. A. Neil, C. Dunsby, and P. M. W. French, “Adaptive phase compensation for ultracompact laser scanning endomicroscopy,” Opt. Lett. 36(9), 1707–1709 (2011). [CrossRef]   [PubMed]  

18. E. R. Andresen, G. Bouwmans, S. Monneret, and H. Rigneault, “Toward endoscopes with no distal optics: video-rate scanning microscopy through a fiber bundle,” Opt. Lett. 38(5), 609–611 (2013). [CrossRef]   [PubMed]  

19. E. R. Andresen, G. Bouwmans, S. Monneret, and H. Rigneault, “Two-photon lensless endoscope,” Opt. Express 21(18), 20713–20721 (2013).

20. D. Kim, J. Moon, M. Kim, T. D. Yang, J. Kim, E. Chung, and W. Choi, “Toward a miniature endomicroscope: pixelation-free and diffraction-limited imaging through a fiber bundle,” Opt. Lett. 39(7), 1921–1924 (2014). [CrossRef]   [PubMed]  

21. N. Stasio, D. B. Conkey, C. Moser, and D. Psaltis, “Light control in a multicore fiber using the memory effect,” Opt. Express 23(23), 30532–30544 (2015). [CrossRef]   [PubMed]  

22. N. Stasio, C. Moser, and D. Psaltis, “Calibration-free imaging through a multicore fiber using speckle scanning microscopy,” Opt. Lett. 41(13), 3078–3081 (2016).

23. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012). [CrossRef]   [PubMed]  

24. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014). [CrossRef]  

25. K. T. Takasaki and J. W. Fleischer, “Phase-space measurement for depth-resolved memory-effect imaging,” Opt. Express 22(25), 31426–31433 (2014). [CrossRef]  

26. A. Labeyrie, “Attainment of diffraction limited resolution in large telescopes by Fourier analysing speckle patterns in star images,” Astron. Astrophys. 6, 85–87 (1970).

27. R. K. Tyson, Principles of Adaptive Optics, 3rd ed. (Academic, 2010). [CrossRef]  

28. I. Freund, “Looking through walls and around corners,” Physica A 168(1), 49–65 (1990). [CrossRef]  

29. I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61, 2328 (1988). [CrossRef]   [PubMed]  

30. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]   [PubMed]  

31. B. Redding and H. Cao, “Using a multimode fiber as a high-resolution, low-loss spectrometer,” Opt. Lett. 37(16), 3384–3386 (2012). [CrossRef]  

32. H. N. Chapman and K. A. Nugent, “Coherent lensless x-ray imaging,” Nat. Photonics 4, 833–839 (2010). [CrossRef]  

33. T. Pitts and J. F. Greenleaf, “Fresnel transform phase retrieval from magnitude,” IEEE Trans. Ultrason. Ferroelect. Freq. Control 50(8), 1035–1045 (2003). [CrossRef]  

34. E. Edrei and G. Scarcelli, “Optical imaging through dynamic turbid media using the Fourier-domain shower-curtain effect,” Optica 3(1), 71–74 (2016). [CrossRef]   [PubMed]  

35. S. Sivankutty, V. Tsvirkun, G. Bouwmans, D. Kogan, D. Oron, E. R. Andresen, and H. Rigneault, “Extended field-of-view in a lensless endoscope using an aperiodic multicore fiber,” arXiv:1606.08169 (2016).

36. S. Heyvaert, H. Ottevaere, I. Kujawa, R. Buczynski, and H. Thienpont, “Numerical characterization of an ultra-high NA coherent fiber bundle part II: point spread function analysis,” Opt. Express 21(21), 25403–25417 (2013). [CrossRef]   [PubMed]  

37. C. Ventalon, J. Mertz, and V. Emiliani, “Depth encoding with lensless structured illumination fluorescence micro-endoscopy,” in Focus On Microscopy (2009).

38. E. Beaurepaire, A. C. Boccara, M. Lebec, L. Blanchot, and H. Saint-Jalmes, “Full-field optical coherence microscopy,” Opt. Lett. 23(4), 244–246 (1998).

39. J. C. Dainty, Laser Speckle and Related Phenomena (Springer, 1984).




Figures (14)

Fig. 1 Conventional vs. speckle-correlations based fiber bundle endoscopy: (a), In a conventional fiber bundle endoscope, the intensity image of an object placed adjacent to the input facet is transferred to the output facet. (b), When the object is placed at a distance, U, from the input facet, a blurred and seemingly information-less image is formed at the output facet. However, even though two different point sources at the object plane (red and blue) produce indistinguishable patterns at the output facet, the relative tilt between their wavefronts results in a spatial shift of the resulting speckle patterns when observed at a small distance, V, from the output facet. (c), Speckle-based imaging: For an extended object, the image of the light intensity at the distance V from the output facet is the sum of many shifted speckle patterns, and its autocorrelation provides an estimate of the object’s autocorrelation. The diffraction-limited object image is retrieved from this autocorrelation via a phase-retrieval algorithm.
Fig. 2 Experimental demonstration and comparison to conventional bundle imaging. (a), Conventional imaging through a fiber bundle when the object is placed at U=0mm. (b), The original object. (c), Conventional imaging through a fiber bundle when the object is placed at U=8.5mm from the bundle’s input facet. No imaging information is directly obtainable. (d), Same situation as in (c) but using the presented speckle-based approach; All scalebars are 100μm.
Fig. 3 Experimental single-shot imaging of various objects at different working distances, U: (a), Raw camera image. (b), Autocorrelation of (a). (c), Object reconstruction from (b). (d), Original object. (e–l), same as (a–d) for different objects from 1951 USAF target. In (a–h) U = 161mm, in (i–l) U = 65mm. Scalebars: (a,e)=10mm, (i)=5mm, (b,c,d,f,g,h)=0.5mm, (j,k,l)=0.25mm.
Fig. 4 Imaging resolution at different working distances. (a), Resolution (speckle grain size) as a function of the object’s distance from the bundle facet (U): Measured (blue circles); theoretical diffraction-limit according to Eq. (3) with Dbundle = 570μm and NA = 0.22 (blue line). Calculated resolution of conventional lensless bundle imaging (green line, also shown in the inset on a logarithmic scale); Calculated resolution of conventional lens-based bundle imaging assuming a distal objective with a focus at U = 5mm (red line); Umin is the minimum working distance (see discussion). (b), Test object from the 1951 USAF target, group 3, Scalebar=100μm. (c), Conventional bundle imaging of (b) when placed at U = 0, (d–h), Speckle-based single-shot reconstructions of (b) at distances of 5–13mm.
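As an illustration of how the diffraction-limit curve of Fig. 4(a) follows from Eq. (3), the short Python sketch below evaluates δx over a range of working distances using the bundle diameter and NA quoted in the caption; the wavelength is an assumed value, since it is not stated in the caption, and the code is an illustration rather than the authors' analysis.

```python
import numpy as np

lam = 0.8e-6          # assumed illumination wavelength [m] (not stated in the caption)
D_bundle = 570e-6     # bundle diameter quoted in the caption [m]
NA = 0.22             # numerical aperture quoted in the caption

U = np.linspace(1e-3, 15e-3, 8)                               # working distances [m]
delta_x = np.sqrt((lam / D_bundle * U)**2 + (lam / NA)**2)    # Eq. (3)

for u, dx in zip(U, delta_x):
    print(f"U = {u*1e3:5.1f} mm  ->  delta_x ~ {dx*1e6:5.1f} um")
```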
Fig. 5 Speckle imaging with broadband illumination. (a), Object pattern. (b), Reconstruction of (a) from a single speckle image through the bundle with illumination of 10nm bandwidth (800nm central wavelength). (c), Increased reconstruction fidelity obtained by ensemble averaging of the speckle autocorrelation over 20 different camera shots. Each shot provides an independent speckle realization by slight bending of the bundle. (d), Experimentally measured spectral correlations of the 48.5cm long fiber bundle used (Schott 153333385); Scalebar=100μm.
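The ensemble averaging used for Fig. 5(c) can be outlined as follows: the autocorrelation of each camera shot is computed via the Wiener-Khinchin relation, and the autocorrelations (not the raw frames) are averaged over the independent speckle realizations. The snippet below is a minimal sketch assuming frames stored in a NumPy array; it is not the authors' code.

```python
import numpy as np

def autocorrelation(frame):
    """Autocorrelation of a single camera frame via R = FT^-1{|FT{I}|^2}."""
    F = np.fft.fft2(frame - frame.mean())
    return np.fft.fftshift(np.fft.ifft2(np.abs(F)**2).real)

def averaged_autocorrelation(frames):
    """Average the per-shot autocorrelations over independent speckle realizations."""
    return np.mean([autocorrelation(f) for f in frames], axis=0)

# Synthetic stand-in for 20 camera shots (e.g. taken with slight bundle bending):
frames = np.random.rand(20, 256, 256)
R_avg = averaged_autocorrelation(frames)
```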
Fig. 6 FOV limitation: (a), A plot of the full autocorrelation of the speckle pattern used to reconstruct the image of Fig. 2(d), showing the periodicity of the speckle pattern generated by the ordered bundle cores. The FOV is limited by half of the angle between the replicas in the autocorrelation: FOV = λ/2Dintercore ≈ 35.5mrad, since it is smaller than the memory effect angle (θmem = λ/dcore ≈ 0.1rad FWHM, see Appendix C). (b), Autocorrelation of an object that spans almost the entire available FOV. (c), Reconstruction from the autocorrelation of (b), scalebar = 10 mrad; (d), Original object imaged in (b–c).
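As a quick consistency check of the angular quantities quoted above, one can invert FOV = λ/2Dintercore and θmem = λ/dcore to obtain the implied core spacing and core size. The wavelength below is an assumption (the caption does not state it), so the printed values are illustrative only.

```python
lam = 0.8e-6                   # assumed wavelength [m] (not stated in the caption)
FOV = 35.5e-3                  # quoted field of view [rad]
theta_mem = 0.1                # quoted memory-effect angle [rad]

d_intercore = lam / (2 * FOV)  # implied core-to-core spacing [m]
d_core = lam / theta_mem       # implied core diameter [m]
print(f"implied intercore spacing ~ {d_intercore*1e6:.1f} um")
print(f"implied core diameter     ~ {d_core*1e6:.1f} um")
```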
Fig. 7 Optical setup used for all experiments in transmission geometry. L1,L2 - focusing lenses.
Fig. 8 Imaging in reflection geometry: a, Reflection setup. L1,L2,L3 - focusing lenses. b–d, Diffusely reflecting objects and the images reconstructed from measured speckle patterns. Scalebars: 0.5mm. Both objects were imaged with a distance of U2 ≈ 22mm between the object and the bundle’s input facet.
Fig. 9 Measured speckle angular correlation range for a SCHOTT fiber bundle used in Figures 2–4.
Fig. 10 Decorrelation of speckle patterns for enhanced ensemble averaging. a, Decorrelation of the speckle pattern due to changing polarization. The correlations presented are between speckle patterns taken using the setup shown in Fig. 7, where a linear polarizer was added adjacent to the bundle’s output facet. The correlations were taken with respect to the speckle pattern recorded at polarizer angle 0. One can see that the two orthogonal polarizations give two uncorrelated speckle patterns. b, Decorrelation of speckle patterns under different bendings of the fiber bundle. The graph shows 25 different speckle patterns taken using the setup shown in Fig. 7, where each pattern was created with a different slight bending of the bundle. Each color shows the correlation between one speckle pattern and all the remaining 24. One can see that virtually no correlation exists between the different patterns.
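The pairwise correlations plotted in Fig. 10 can be computed with a standard normalized (Pearson) correlation coefficient between speckle frames. The sketch below uses illustrative array names and synthetic data; it is not the authors' code.

```python
import numpy as np

def speckle_correlation(frame_a, frame_b):
    """Normalized (Pearson) correlation coefficient between two speckle frames."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    return float((a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum()))

# e.g. correlating each of 25 bent-bundle realizations against all the others:
frames = np.random.rand(25, 128, 128)   # stand-in for the measured speckle patterns
C = np.array([[speckle_correlation(fi, fj) for fj in frames] for fi in frames])
```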
Fig. 11 Block diagram of the iterative phase-retrieval algorithm used. The algorithm is Fienup’s HIO phase-retrieval algorithm [30], implemented according to Bertolotti et al. [23], followed by Fienup’s error-reduction algorithm. Both algorithms are based on an iterative modified Gerchberg-Saxton scheme whose block diagram is shown. Smeas(kx, ky) is the Fourier transform amplitude of the experimentally measured autocorrelation, which is used to approximate the object’s power spectrum, as described in Eq. (12) in the article main text.
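For concreteness, the sketch below shows one possible implementation of the HIO/error-reduction loop of Fig. 11, using a non-negativity constraint to define the violation set Γ. Parameter values, function names, and the constraint choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def phase_retrieval(S_meas, n_hio=500, n_er=100, beta=0.9, seed=None):
    """HIO iterations followed by error-reduction, from a Fourier-magnitude estimate."""
    rng = np.random.default_rng(seed)
    g = rng.random(S_meas.shape)                       # random initial object guess
    for it in range(n_hio + n_er):
        G = np.fft.fft2(g)
        G = S_meas * np.exp(1j * np.angle(G))          # impose the measured Fourier magnitude
        g_prime = np.fft.ifft2(G).real                 # back to the object domain
        gamma = g_prime < 0                            # violation set (non-negativity constraint)
        if it < n_hio:
            g = np.where(gamma, g - beta * g_prime, g_prime)   # HIO update
        else:
            g = np.where(gamma, 0.0, g_prime)                  # error-reduction update
    return g

# In practice the loop is restarted from many random initial guesses (cf. Fig. 12)
# and the result whose Fourier magnitude best matches S_meas is kept.
```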
Fig. 12 Example of a lowest-error reconstruction out of 100 runs of the reconstruction algorithm with different random initial conditions, for the object of Fig. 2. Although the displayed reconstruction is the one having the lowest mean square error between its Fourier spectrum and the Fourier transform of the measured autocorrelation, in some cases reconstructions with slightly larger calculated errors gave results that better estimated the object.
Fig. 13 Imaging with spatially coherent illumination. a, The bundle facet image gives the intensity of the Fraunhofer diffraction pattern of the test object shown in (e), sampled by the bundle cores; b, Fourier transform of (a); c, Reconstructed object from (b); d, Autocorrelation of the object shown in (e); e, The test object used, from the 1951 USAF target, group 2; All scale bars are equal to 0.1mm.
Fig. 14 Simulations of the effect of the number of cores. (a), Simulated bundle facet containing 32,025 cores. (b), Close up on the simulated cores of (a). (c), The speckle pattern created in the far field by the bundle of (a) with random relative phases. (d), The autocorrelation of the speckle pattern presented in (c), displaying a diffraction-limited autocorrelation peak on top of a background, with fluctuations proportional to 1/Nspeckles. (e–h), The same as (a–d) for a bundle containing 1,281 cores. The lower number of cores increases the background fluctuations in the autocorrelation but does not affect the width of its central peak, which determines the imaging resolution.
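The numerical experiment of Fig. 14 can be reproduced in outline with a few lines of Python: place point-like cores on a periodic grid inside a circular aperture, assign random phases, take the far-field intensity, and compute its autocorrelation. Grid size and core spacing below are illustrative assumptions, not the parameters used for the figure.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 512                                          # simulation grid size (illustrative)
pitch = 8                                        # core spacing in grid pixels (illustrative)

yy, xx = np.mgrid[:N, :N]
on_grid = (yy % pitch == 0) & (xx % pitch == 0)                    # periodic core positions
aperture = (xx - N/2)**2 + (yy - N/2)**2 < (N/3)**2                # circular bundle facet
cores = on_grid & aperture

facet = np.zeros((N, N), dtype=complex)
facet[cores] = np.exp(1j * 2 * np.pi * rng.random(cores.sum()))    # random core phases

I = np.abs(np.fft.fftshift(np.fft.fft2(facet)))**2                 # far-field speckle intensity
R = np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(I - I.mean()))**2).real)  # its autocorrelation
print("number of cores:", int(cores.sum()))
```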

Equations (19)


$I(r) = O(r) * \mathrm{PSF}(r)$
$[I \star I](r) = [O \star O](r) * [\mathrm{PSF} \star \mathrm{PSF}](r)$
$\delta x \approx \left[ \left( \frac{\lambda}{D_{\mathrm{bundle}}} U \right)^{2} + \left( \frac{\lambda}{\mathrm{NA}} \right)^{2} \right]^{1/2}$
$E_{\mathrm{out}}(r) = \Big( \big\{ [E_{\mathrm{in}} * F] \cdot \mathrm{comb} \cdot e^{iR} \big\} * F \Big)(r)$
$\mathrm{FT}\{E_{\mathrm{out}}(r)\} = \Big( \big\{ [\tilde{E}_{\mathrm{in}} \cdot \tilde{F}] * \mathrm{FT}[\mathrm{comb} \cdot e^{iR}] \big\} \cdot \tilde{F} \Big)(k)$
$I(k) = |\mathrm{FT}\{E_{\mathrm{out}}(r)\}|^{2} = \Big| \big\{ [\delta(k - k_{\mathrm{in}})\, \tilde{F}(k)] * S(k) \big\}\, \tilde{F}(k) \Big|^{2} = \big| \tilde{F}(k_{\mathrm{in}})\, S(k - k_{\mathrm{in}})\, \tilde{F}(k) \big|^{2}$
$I'(k, k_{\mathrm{in}}) = I(k + \delta k,\, k_{\mathrm{in}} + \delta k) = \big| \tilde{F}(k_{\mathrm{in}} + \delta k)\, S(k - k_{\mathrm{in}})\, \tilde{F}(k + \delta k) \big|^{2}$
$C(I, I') = \frac{\int (I - \bar{I})(I' - \bar{I}')\, dk}{\left( \int (I - \bar{I})^{2}\, dk \int (I' - \bar{I}')^{2}\, dk \right)^{1/2}} \approx \frac{\int |E E'^{*}|^{2}\, dk}{\left( \int (I - \bar{I})^{2}\, dk \int (I' - \bar{I}')^{2}\, dk \right)^{1/2}}$
$C(I, I') \approx \frac{\int |E E'^{*}|^{2}\, dk}{\left( \int (I - \bar{I})^{2}\, dk \int (I' - \bar{I}')^{2}\, dk \right)^{1/2}} =$
$\frac{\big| \tilde{F}(k_{\mathrm{in}})\, \tilde{F}^{*}(k_{\mathrm{in}} + \delta k)\, \|S\|_{2}^{2} \big|^{2}}{\big| \tilde{F}(k_{\mathrm{in}})\, \tilde{F}^{*}(k_{\mathrm{in}} + \delta k)\, \|S\|_{2}^{2} \big|^{2}} \cdot \frac{\int \big| \tilde{F}(k)\, \tilde{F}^{*}(k + \delta k) \big|^{2}\, dk}{\left( \int \big( |\tilde{F}(k)|^{2} - \overline{|\tilde{F}(k)|^{2}} \big)^{2} dk \int \big( |\tilde{F}(k + \delta k)|^{2} - \overline{|\tilde{F}(k + \delta k)|^{2}} \big)^{2} dk \right)^{1/2}} =$
$\frac{\int \big| \tilde{F}(k)\, \tilde{F}^{*}(k + \delta k) \big|^{2}\, dk}{\int \big( |\tilde{F}(k)|^{2} - \overline{|\tilde{F}(k)|^{2}} \big)^{2}\, dk}$
$\mathrm{NA} = \frac{\lambda}{d_{\mathrm{mode}}}$
$R(x, y) = I(x, y) \star I(x, y) = \mathrm{FT}^{-1}\left\{ \left| \mathrm{FT}\{ I(x, y) \} \right|^{2} \right\}$
$S_{\mathrm{meas}}(k_x, k_y) = \left| \mathrm{FT}\{ W(x, y) \cdot R(x, y) \} \right|$
$g_{k+1}(x, y) = \begin{cases} g'_{k}(x, y) & (x, y) \notin \Gamma \\ 0 & (x, y) \in \Gamma \end{cases}$
$g_{k+1}(x, y) = \begin{cases} g'_{k}(x, y) & (x, y) \notin \Gamma \\ g_{k}(x, y) - \beta\, g'_{k}(x, y) & (x, y) \in \Gamma \end{cases}$
$E(x) = E_{\mathrm{obj}}(x) * S(x)$
$[E \star E](x) = [E_{\mathrm{obj}} \star E_{\mathrm{obj}}](x) * [S \star S](x)$
$\mathrm{FT}\{ [E \star E](x) \} = |\tilde{E}(k)|^{2}$
