Abstract

Prior-free imaging beyond the memory effect (ME) is critical for seeing through scattering media. However, methods proposed to exceed the ME range have relied on the availability of prior information about the imaging targets. Here, we propose blind target position detection for large field-of-view scattering imaging. Exploiting only two multi-target near-field speckles captured at different imaging distances, the unknown number and locations of the isolated imaging targets are blindly reconstructed via the proposed scaling-vector-based detection. Autocorrelations are then calculated for the speckle regions centered at the derived positions via a low-cross-talk region allocation strategy. Working with a modified phase retrieval algorithm, the complete scene of the multiple targets exceeding the ME range can be reconstructed without any prior information. The effectiveness of the proposed algorithm is verified by testing on a real scattering imaging system.

© 2020 Chinese Laser Press

1. INTRODUCTION

Scattering media, which possess non-uniform reflective and refractive index distributions, widely exist in daily life. They disturb the light rays coming from imaging targets, which hinders the direct analysis of object information behind them using traditional optical systems. To overcome this challenge, novel methods based on the memory effect (ME) [1–4] achieve non-invasive scattering imaging via speckle correlation [5–7]. Compared with other scattering imaging methods, such as ballistic-light-based approaches [8–13], wavefront shaping [14,15], and transmission matrix measurement [16–21], the speckle correlation technique realizes prior-free imaging with only conventional instruments and is capable of quick imaging in currently inaccessible scenarios. However, the field-of-view (FOV) of this method is limited by the ME range. Prior-free imaging beyond the ME range is critical for seeing through scattering media.

To exceed the ME range, some techniques introduce prior information about the isolated imaging targets. Li et al. introduced the position prior of each imaging target during point spread function (PSF) calibration [22]; Sahoo et al. used the wavelength prior of each imaging target during PSF calibration [23]; and Guo et al. exploited a shape prior, or a PSF close to the imaging target, to exceed the ME range [24]. Wang et al. proposed a dual-target non-invasive scattering imaging method via Fourier spectrum guessing and iterative energy-constrained compensation [25]; however, this method can only separate two targets, and the number of targets serves as known prior information before reconstruction. Boniface et al. achieved non-invasive target localization beyond the ME by analyzing the speckle envelope of each target [26], but this localization method is not available when an unknown number of targets are illuminated simultaneously.

Here, we put forward a multi-target large FOV scattering imaging method based on blind target position detection. It blindly detects the unknown number and positions of the isolated targets using only two multi-target near-field speckles captured at different imaging distances. The theoretical scaling relationship between the two speckles is derived and demonstrated, and the scaling centers correspond to the positions of the imaging targets. Based on this derivation, scaling-vector-based target position detection is proposed, which recognizes target positions using the length and direction information of the scaling vectors. After that, autocorrelations are calculated for the speckle regions centered at the derived positions via a low-cross-talk region allocation strategy. Working with a modified phase retrieval algorithm [27] to select the optimal recovery with no prior information, especially for autocorrelations with interference, the complete scene of multiple isolated targets exceeding the ME range is reconstructed. Experiments on a real imaging system demonstrate the effectiveness of the proposed algorithm for multi-target blind reconstruction through scattering media: visually distinguishable reconstructions are achieved for whole scenes of multiple isolated targets exceeding the ME range. In principle, the factor by which the ME range is exceeded can be increased as long as the acquisition equipment can fully collect the scattered speckle over a larger FOV. Finally, we verify the accuracy of multi-target positioning and the capability for multi-target blind reconstruction, and experimentally analyze the limitations and constraints of the proposed method.

2. PRINCIPLE

The principle of the method is depicted in Fig. 1. We excite multiple isolated targets, O1, O2, …, On, hidden behind a scattering medium, with spatially incoherent illumination. Each imaging target falls within the ME range, while the spacing between any two targets is beyond it. The large FOV speckle image at Distance 1, Id1, which consists of the speckles produced by each target, Id11, Id12, …, Id1n, is captured by a 2D camera array. The proposed imaging system is intended for situations in which the number and locations of the imaging targets cannot be read directly from the captured speckle; this position information is hidden in Id1. To extract it, a second speckle, Id2, is captured at another imaging distance, Distance 2. The ME range, Δθ, corresponds to the angular FOV of the diffuser, within which points on the object plane produce random speckles with high correlation. For a spatially incoherent imaging system via speckle correlation, the ME range is constrained by Δθ ≈ λ/(πL) [5], where λ denotes the wavelength of the spatially incoherent light source and L is the effective thickness of the scattering medium. Since the speckle patterns generated within the ME range are translation invariant, the large FOV speckle of multiple targets recorded by the camera array at Distance 1, Id1 in Fig. 1, can be formulated as (and similarly with d2 for Id2)

$$I_{d1}=\sum_{k=1}^{n} I_{d1}^{k}=\sum_{k=1}^{n} O_k * \mathrm{PSF}_{d1}^{k},\tag{1}$$
where * denotes the convolution operation; n denotes the total number of imaging targets and Ok (k = 1, 2, …, n) are the imaging targets; PSFd1k represents the translation-invariant point spread function at Distance 1 corresponding to the point where Ok is located in the object plane; and Id1k denotes the speckle pattern generated by Ok at imaging distance d1. The limit of the translational invariance of each PSF is expressed by its envelope, which is normally removed during image processing and regarded as an obstacle for reconstruction [5]. The envelope varies with the distance from the corresponding point of each PSF. Especially for multi-target scattering imaging, multiple envelopes couple the multi-target information with the hidden target position information, which increases the difficulty of imaging beyond the ME. The intensity distribution of each PSF can be divided into two parts by
$$\mathrm{PSF}_{d1}^{k}(x,y)=C(x-u_k,\,y-v_k)\cdot S_{d1}^{k}(x,y),\tag{2}$$
where (x, y) are the 2D coordinates in the sensor plane; (u, v) are the 2D coordinates in the object plane, and (u = uk, v = vk) is the location of Ok in the object plane. C denotes the envelope (or energy distribution) of the PSF, which takes high values when (x, y) is close to (uk, vk) and relatively low values when (x, y) is far from (uk, vk) [1]; the same property holds for the PSF produced by a point-like source and for the speckle generated by a small object. Sd1k denotes the system response at Distance 1 corresponding to Ok after removing the envelope, and the autocorrelation of Sd1k equals a sharp peak function [5]. Substituting Eq. (2) into Eq. (1), we have
$$I_{d1}(x,y)=\sum_{k=1}^{n} C(x-u_k,\,y-v_k)\cdot\left(O_k * S_{d1}^{k}\right)(x,y).\tag{3}$$
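The forward model of Eq. (1) can be illustrated numerically. The following is a minimal sketch, not the authors' code: the PSFs here are stand-in random intensity patterns rather than diffraction-simulated ones, and all sizes and target positions are hypothetical.

```python
import numpy as np
from scipy.signal import fftconvolve

def speckle_psf(size, seed):
    """Stand-in for a scattered PSF: a positive random intensity pattern.
    (A real PSF arises from propagation through a random phase screen.)"""
    return np.random.default_rng(seed).random((size, size))

def multi_target_speckle(targets, positions, psfs, canvas):
    """Eq. (1): I_d1 = sum_k O_k * PSF_d1^k, each term placed at its target's
    position (u_k, v_k) on the large-FOV sensor canvas."""
    I = np.zeros(canvas)
    for O, (u, v), psf in zip(targets, positions, psfs):
        term = fftconvolve(O, psf, mode="full")   # O_k * PSF_d1^k
        h, w = term.shape
        I[u:u + h, v:v + w] += term
    return I

# two small point-like targets, separated far beyond a single speckle's extent
O1 = np.zeros((5, 5)); O1[2, 2] = 1.0
O2 = np.zeros((5, 5)); O2[1, 3] = 1.0
I = multi_target_speckle([O1, O2], [(10, 10), (120, 120)],
                         [speckle_psf(64, 1), speckle_psf(64, 2)], (200, 200))
```

Because the two speckle contributions do not overlap here, the resulting frame already shows the key property exploited later: each target's speckle occupies its own neighborhood of the canvas.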


Fig. 1. Schematic of our multi-target large FOV scattering imaging system via blind target position detection. Multiple isolated targets, O1, O2, …, On, behind the diffuser form a large FOV scene.


From Eq. (3), the number, shapes, and locations of the imaging targets are coupled together in the captured speckle. Reconstructing all the target shapes from Id1 alone is a problem with an infinite number of solutions, since Eq. (3) contains many unknown variables. One effective way to simplify this problem and reduce the solution space is to detect the total number of targets n and the location of each target (uk, vk), k = 1, 2, …, n, without any prior information. Thus, the blind target position detection algorithm is proposed and introduced in the following.

We exploit a second near-field speckle, Id2, captured at another imaging distance for position detection. The theoretical scaling relationship we derive between the two near-field speckles indicates that certain areas (called scaling centers) in the near-field speckles do not scale with the imaging distance, and these scaling-invariant areas correspond to the locations of the imaging targets. To the best of our knowledge, no previous work has investigated the relationship between two near-field speckles at different imaging distances. Inspired by existing statistical analyses of far-field speckles in 3D space [18,28,29] and the technique using speckle correlation to improve axial sectioning [30], we first analyze, without loss of generality, the scaling relationship between two near-field on-axis PSFs. With an ideal pinhole on the optical axis of the object plane as the corresponding point where one imaging target (taken as Ok) is located, the field on the front surface of the scattering medium, Us, is given by the Fresnel diffraction formula as

$$U_s(x_s,y_s)=\frac{e^{j2\pi d_0/\lambda}}{j\lambda d_0}\iint \delta(u,v)\Big|_{u=0,\,v=0}\, e^{\frac{j\pi}{\lambda d_0}\left[(u-x_s)^2+(v-y_s)^2\right]}\,du\,dv=\frac{e^{j2\pi d_0/\lambda}}{j\lambda d_0}\, e^{\frac{j\pi}{\lambda d_0}\left(x_s^2+y_s^2\right)},\tag{4}$$
where we assume the on-axis pinhole is located at (u = 0, v = 0), and (xs, ys) are the 2D coordinates in the diffuser plane. Here the scattering medium, regarded as an unknown random 2D phase disturbance TM(xs, ys), is introduced into the propagation model. Using near-field Huygens diffraction theory, the light field on the first sensor plane, h1, can be expressed by
$$h_1(x,y)=\frac{e^{j2\pi d_0/\lambda}\,d_1}{\lambda^2 d_0}\iint_{x_s,y_s} TM(x_s,y_s)\,e^{\frac{j\pi}{\lambda d_0}(x_s^2+y_s^2)}\,\frac{e^{\frac{j2\pi}{\lambda}\sqrt{d_1^2+(x_s-x)^2+(y_s-y)^2}}}{d_1^2+(x_s-x)^2+(y_s-y)^2}\,dx_s\,dy_s\approx\frac{e^{j2\pi(d_0+d_1)/\lambda}\,d_1}{\lambda^2 d_0}\,e^{\frac{j\pi}{\lambda d_1}(x^2+y^2)}\iint_{x_s,y_s} TM(x_s,y_s)\,e^{\frac{j\pi}{\lambda f}(x_s^2+y_s^2)}\,\frac{e^{-j2\pi\left(\frac{x}{\lambda d_1}x_s+\frac{y}{\lambda d_1}y_s\right)}}{d_1^2+(x_s-x)^2+(y_s-y)^2}\,dx_s\,dy_s,\tag{5}$$
where 1/f = 1/d0 + 1/d1. The scattered PSF shown in Eq. (2) at the Distance 1 plane, PSFd1k (and similarly with d2 for PSFd2k at the Distance 2 plane), is the squared magnitude of h1:
$$\mathrm{PSF}_{d1}^{k}(x,y)=\left|h_1(x,y)\right|^2=\left(\frac{d_1}{\lambda^2 d_0}\right)^2\left|\iint_{x_s,y_s} TM(x_s,y_s)\,e^{\frac{j\pi}{\lambda f}(x_s^2+y_s^2)}\,\frac{e^{-j2\pi\left(\frac{x}{\lambda d_1}x_s+\frac{y}{\lambda d_1}y_s\right)}}{d_1^2+(x_s-x)^2+(y_s-y)^2}\,dx_s\,dy_s\right|^2.\tag{6}$$
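A scattered PSF of the kind described by Eqs. (4)–(6) can be approximated numerically by propagating a pinhole's spherical wavelet through a random phase screen. This is a hedged sketch using the angular-spectrum method in place of the direct Huygens integral; the grid size, pixel pitch, and distances below are assumed for illustration only.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field a distance z with the angular-spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2      # propagating components only
    H = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(5)
n, dx, lam = 512, 4.8e-6, 532e-9                    # assumed grid / pixel / wavelength
d0, d1 = 120e-3, 17e-3                              # object and imaging distances

# spherical wavelet from an on-axis pinhole arriving at the diffuser (Eq. (4))
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
Us = np.exp(1j * np.pi / (lam * d0) * (X**2 + Y**2))

TM = np.exp(2j * np.pi * rng.random((n, n)))        # random 2D phase screen TM(xs, ys)
h1 = angular_spectrum(Us * TM, lam, dx, d1)         # field on the Distance 1 plane
psf_d1 = np.abs(h1)**2                              # |h1|^2, as in Eq. (6)
```

Repeating the last propagation with a second distance d2 yields the PSF pair whose scaling relationship is analyzed below.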

To describe the scaling relationship, we introduce the correlation function between on-axis PSFd1k and PSFd2k as

$$T(d_1,d_2,m)=\iint \mathrm{PSF}_{d1}^{k}(x,y)\cdot \mathrm{PSF}_{d2}^{k}(mx,my)\,dx\,dy,\tag{7}$$
where m is a constant scaling factor applied to the spatial coordinates of PSFd2k relative to PSFd1k. In theory, the optimal m corresponds to the peak of the correlation function T, where PSFd1k and the scaled PSFd2k are most correlated. However, in the above formula multiple variables are coupled in the near-field PSFs, and we cannot derive an analytical solution for the m that maximizes the correlation between PSFd1k and PSFd2k. We therefore simulated the PSFs via Eq. (6) and searched the most relevant areas by block matching to explore the statistically optimal m.
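The numerical search for m can be illustrated as follows. This is a minimal sketch, not the paper's simulation: instead of a full diffraction calculation, psf2 is synthesized from psf1 with a known scaling factor about the on-axis scaling center (0, 0), and candidate m values are scored with a normalized version of Eq. (7); the smoothing scale (mimicking finite speckle grain size) and the grid size are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

rng = np.random.default_rng(3)
# smoothed random field as a stand-in speckle with finite grain size
base = gaussian_filter(rng.random((200, 200)), 2)
yy, xx = np.mgrid[0:200, 0:200]

true_m = 1.015
psf1 = base
# PSF_d2(x, y) = PSF_d1(x/m, y/m): scaled about the on-axis scaling centre (0, 0)
psf2 = map_coordinates(base, [yy / true_m, xx / true_m], order=1)

def corr_at_scale(a, b, m):
    """Normalised analogue of Eq. (7): correlate a(x, y) with b(m x, m y)."""
    bs = map_coordinates(b, [yy * m, xx * m], order=1, mode="nearest")
    ca, cb = a - a.mean(), bs - bs.mean()
    return float((ca * cb).sum() / np.sqrt((ca**2).sum() * (cb**2).sum()))

# scan candidate scaling factors and keep the one maximising the correlation
candidates = np.round(np.arange(1.000, 1.0301, 0.005), 3)
best = max(candidates, key=lambda m: corr_at_scale(psf1, psf2, m))
```

The maximizing candidate recovers the scaling factor built into psf2, mirroring how the statistical peak of T identifies m in the simulated PSFs.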

The PSF simulations are conducted as shown in Fig. 2, where the scattering process is created from a random phase mask based on the projection model [31,32]. A point light source was set at the on-axis position of the object plane as the corresponding point where Ok is located. The simulated PSFs (normalized by removing the envelope) at different imaging distances through the modeled scattering layer are shown in Figs. 2(a) and 2(b). Visually, the intensity distributions of the two PSFs are strongly correlated, as in the green rectangles. Inspired by the wavefront slopes used in adaptive optics [33,34], we introduce the scaling vector to explore the detailed relationship between the PSFs at the two imaging distances, estimated by discrete block matching. We traverse PSFd2k to find the area most relevant to a selected block in PSFd1k; the translation from the selected block in PSFd1k to the optimal block in PSFd2k defines the scaling vector at the center point of the selected block. The block matching strategy can be expressed as

$$(x_{d2},y_{d2})=\underset{(x,y)}{\arg\max}\left[\mathrm{Corr}\left(M_{x,y},\,N_{x_{d1},y_{d1}}\right)\right],\tag{8}$$
where N(xd1, yd1) denotes the selected block centered at (xd1, yd1) in PSFd1k; M(x, y) denotes the searching block centered at (x, y) in PSFd2k to be matched against N(xd1, yd1); and the matched block in PSFd2k is centered at (xd2, yd2). Corr(·) calculates the correlation between two pixel blocks; we use the cross correlation in this paper [5]. In this way, a scaling vector is defined as an arrow pointing from (xd1, yd1) in PSFd1k to (xd2, yd2) in PSFd2k. To limit the computational cost, we utilize only part of the discrete scaling vectors built by block matching, as shown in Fig. 2(c); the spacing between any two vectors, vertically or horizontally, is 20 pixels.
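The block matching of Eq. (8) can be sketched concretely. This is a hedged illustration, not the paper's implementation: the second pattern is synthesized from the first with a known scaling center at the image center, and the block half-size and search radius are hypothetical parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

rng = np.random.default_rng(7)
N = 300
I1 = gaussian_filter(rng.random((N, N)), 2)   # stand-in speckle at Distance 1
m, uc, vc = 1.02, 150, 150                    # assumed scaling factor and centre
yy, xx = np.mgrid[0:N, 0:N]
# From the off-axis relation: I1(x, y) ~ I2(m x - (m-1) u, m y - (m-1) v),
# i.e. I2(X, Y) = I1((X + (m-1) uc) / m, (Y + (m-1) vc) / m)
I2 = map_coordinates(I1, [(yy + (m - 1) * uc) / m,
                          (xx + (m - 1) * vc) / m], order=1)

def match_block(I1, I2, cy, cx, half=10, search=6):
    """Eq. (8): centre of the block in I2 most correlated with the block in I1
    centred at (cy, cx), found by exhaustive local search."""
    Nb = I1[cy - half:cy + half, cx - half:cx + half]
    Nb = Nb - Nb.mean()
    best_c, best_v = (cy, cx), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            Mb = I2[cy + dy - half:cy + dy + half, cx + dx - half:cx + dx + half]
            Mb = Mb - Mb.mean()
            v = (Nb * Mb).sum() / np.sqrt((Nb**2).sum() * (Mb**2).sum())
            if v > best_v:
                best_v, best_c = v, (cy + dy, cx + dx)
    return best_c

# a point 100 px right of the scaling centre should match ~(m-1)*100 = 2 px
# further right in I2, i.e. the scaling vector points away from the centre
y2, x2 = match_block(I1, I2, 150, 250)
```

The recovered vector from (150, 250) to (y2, x2) exhibits both properties used later: its length grows with distance from the scaling center, and it points away from that center.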


Fig. 2. Simulated experiments to analyze the relationship between two PSFs at different imaging distances (d0 = 120 mm, pixel size = 4.8 μm, 600×600 pixels). The point light source was set on the optical axis (u = 300, v = 300), as the corresponding point where Ok is located in the object plane. (a) Normalized PSFd1k with d1 = 17 mm. (b) Normalized PSFd2k with d2 = 19 mm. (c) The estimated low-density scaling vectors based on (a) and (b); the spacing between any two vectors, vertically or horizontally, is 20 pixels. The green rectangle in (b) is the matched block of the green rectangle in (a), and the enlarged arrow in (c) represents the estimated scaling vector corresponding to these two green rectangles. The blue point in (c) is the location of the light source. (d) The histogram of m values extracted from all the scaling vectors in (c). Scale bar, 50 camera pixels.


Then we calculate the m value of each estimated scaling vector in Fig. 2(c), which equals the distance between (xd2, yd2) and the scaling center divided by the distance between (xd1, yd1) and the scaling center, following Eq. (7). Theoretically, all the scaling vectors estimated by block matching with maximum cross correlation share the same m value, and the statistical analysis of Fig. 2(c) bears this out. The histogram of m values in Fig. 2(d) shows that m stabilizes around the constant value 1.015, consistent with Eq. (7). Although the analytical solution of Eq. (7) is hard to derive, the scaling relationship between near-field PSFs does exist, as demonstrated by the above statistical analysis. We therefore conclude that the on-axis (u = 0, v = 0) near-field scaling relationship between PSFd1k at the Distance 1 plane and PSFd2k at the Distance 2 plane can be expressed as

$$\mathrm{PSF}_{d1}^{k}(x,y)\approx \mathrm{PSF}_{d2}^{k}(mx,\,my),\tag{9}$$
where ≈ denotes that PSFd1k is most correlated with the coordinate-scaled PSFd2k under the constant scaling factor m. The on-axis scaling relationship in Eq. (9) can also be generalized to the off-axis case as
$$\mathrm{PSF}_{d1}^{k}(x,y)\approx \mathrm{PSF}_{d2}^{k}\!\left[mx-(m-1)u_k,\;my-(m-1)v_k\right],\tag{10}$$
where the corresponding point at which Ok is located changes from (u = 0, v = 0) to (u = uk, v = vk). As described in Eq. (10), in the off-axis case the scaling vector that starts at (x, y) in PSFd1k ends at [mx − (m−1)uk, my − (m−1)vk] in PSFd2k. This reveals two features of the scaling vectors that encode the location of Ok, which is also the scaling center from Distance 1 to Distance 2. First, the length of a scaling vector is proportional to its distance from the scaling center. Second, the scaling center lies on the line defined by each scaling vector, and each scaling vector points away from the scaling center (if d2 > d1).

Under spatially incoherent illumination, the speckle generated by one certain imaging target (taken as Ok) is the linear superposition of many highly correlated PSFs. Each PSF possesses a different scaling center, corresponding to one point of the imaging target. Considering the schematic of the proposed scattering imaging system in Fig. 1, the size of each imaging target falls within the ME range, but the spacing between any two targets is beyond it. All the PSFs forming a single-target speckle therefore share approximately the same scaling center, located around the imaging target's position. Equation (11) expresses the scaling relationship between single-target near-field speckles:

$$I_{d1}^{k}(\Delta x+u_k,\,\Delta y+v_k)\approx I_{d2}^{k}(m\,\Delta x+u_k,\,m\,\Delta y+v_k),\tag{11}$$
where (uk, vk) represents the scaling center of the two speckles, which equals the location of Ok, and (Δx, Δy) denotes the 2D coordinate differences between the scaling vector and the scaling center. Theoretically, the features relating scaling vectors and scaling centers for PSFs still apply to the scaling vectors estimated from the near-field speckles, from (Δx + uk, Δy + vk) to (mΔx + uk, mΔy + vk).

After that, the scaling-vector-based detection algorithm shown in Fig. 3 is proposed based on Eq. (11) for the multiple targets in Fig. 1 under a spatially incoherent light source. First, the scaling vectors are estimated by block matching, as in Eq. (8), from the two imaging-distance multi-target speckles: we traverse Id2 to search for the area most relevant to each selected block in Id1. The density of the estimated scaling vectors, vertically or horizontally, can be adjusted according to the speckle resolution. Multiple targets imply that multiple scaling centers exist among the estimated scaling vectors. Since the number of scaling centers is unknown, we use the length of each estimated scaling vector to determine candidate regions where targets may be located: any position whose scaling vector length is below a certain threshold is added to the candidate regions. Next, connected component analysis (8-connected) [35] clusters these regions, and the number of connected components equals the number of imaging targets. The algorithm chooses the position with the minimum scaling vector length in each connected component as the rough location of each target. Finally, the direction information of the scaling vectors is used to refine the row and column of each rough location, because the line defined by each scaling vector belonging to one connected component theoretically passes through the target position in that component. The proposed scaling-vector-based detection algorithm, assisted by block matching and connected component analysis, thus achieves blind target position detection, exploiting only two multi-target near-field speckles captured at different imaging distances.
The detected position information includes the number, n, and the locations of the imaging targets in the object plane, (uk, vk), k = 1, 2, …, n, shown as the blue points in Section 3.
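The detection pipeline above — threshold the scaling-vector lengths, cluster with 8-connected components, take the per-component minimum — can be sketched on a synthetic length map. The grid size, target positions, scaling factor, and threshold below are hypothetical illustration values, not the paper's parameters.

```python
import numpy as np
from scipy import ndimage

# Synthetic map of scaling-vector lengths on a 20x20 grid of sampled vectors.
# Near each target the vectors shrink: |v| ~ (m-1) * distance to that centre.
m = 1.02
centres = [(4, 5), (14, 13)]                  # hypothetical target grid positions
yy, xx = np.mgrid[0:20, 0:20]
length = np.minimum.reduce(
    [(m - 1) * np.hypot(yy - cy, xx - cx) for cy, cx in centres])

mask = length < 0.08                           # keep only short scaling vectors
labels, n_comp = ndimage.label(mask, structure=np.ones((3, 3)))  # 8-connected

found = []                                     # rough location: min length per component
for lab in range(1, n_comp + 1):
    per = np.where(labels == lab, length, np.inf)
    found.append(np.unravel_index(np.argmin(per), per.shape))
```

Here the number of connected components recovers the number of targets, and the per-component minima recover the target positions; the direction-based refinement step of the full algorithm is omitted from this sketch.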


Fig. 3. The block diagram of the scaling-vector-based detection algorithm.


After target position detection, the low-cross-talk region allocation strategy is proposed to extract the autocorrelation of each target, in order to simplify the infinite-solution problem in Eq. (3). We select a small square region ε of the captured speckle Id1 for autocorrelation calculation, centered at one detected target location (O1 as an example) with side length β:

$$\varepsilon=\left\{(x,y)\ \middle|\ \left\|(x,y)-(u_1,v_1)\right\|_2\le\left\|(x,y)-(u_k,v_k)\right\|_2,\ \left\|(x,y)-(u_1,v_1)\right\|_\infty\le\beta/2,\ k=2,\ldots,n\right\}.\tag{12}$$

Considering the envelope properties of the speckle, the selected region controls the weight of each imaging target in Id1 by

$$C(x-u_1,\,y-v_1)\gg C(x-u_k,\,y-v_k),\quad k=2,\ldots,n,\ (x,y)\in\varepsilon.\tag{13}$$

This weight difference is further amplified, the envelope weights appearing squared, when Id1 is transferred into the autocorrelation domain [25] as

$$\left(I_\varepsilon \star I_\varepsilon\right)(x,y)=\sum_{k=1}^{n} C(x-u_k,\,y-v_k)^2\cdot\left(O_k\star O_k\right)(x,y)\approx C(x-u_1,\,y-v_1)^2\cdot\left(O_1\star O_1\right)(x,y)\propto\left(O_1\star O_1\right)(x,y),\tag{14}$$
where ⋆ denotes the autocorrelation operation; Iε denotes the selected square region of Id1 corresponding to ε; and C(x−uk, y−vk) is approximately spatially constant when ε is much smaller than the whole speckle, describing the remaining weight of Ok caused by the envelope. Via this squared weight difference, the autocorrelation of O1 can be extracted from the autocorrelation of the selected region of Id1 centered at the location of O1. Repeating the above steps n times, once for each target location, the autocorrelation of each imaging target is extracted with low cross talk from the other autocorrelations, as shown in Eq. (14).
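The region allocation and autocorrelation step can be sketched as follows. This is a minimal illustration under assumptions: the "speckle" is a random array, the region is a plain square crop centered at one detected location (the nearest-neighbor constraint of Eq. (12) is omitted), and the autocorrelation is computed via the Wiener–Khinchin theorem with zero padding.

```python
import numpy as np

def autocorrelation(img):
    """Zero-mean autocorrelation via the Wiener-Khinchin theorem.
    Zero padding to 2N avoids circular wrap-around; fftshift centres zero lag."""
    img = img - img.mean()
    F = np.fft.fft2(img, s=(2 * img.shape[0], 2 * img.shape[1]))
    ac = np.fft.ifft2(np.abs(F)**2).real
    return np.fft.fftshift(ac)

def extract_region(I, centre, beta):
    """Simplified region allocation: the beta x beta square of the speckle
    centred at one detected target location."""
    cy, cx = centre
    h = beta // 2
    return I[cy - h:cy + h, cx - h:cx + h]

rng = np.random.default_rng(1)
I = rng.random((400, 400))                    # stand-in large-FOV speckle
region = extract_region(I, (100, 120), 128)   # hypothetical detected location
ac = autocorrelation(region)                  # dominated by the local target's term
```

With zero-mean input, the autocorrelation peaks at zero lag, which lands at the array center after the shift; in the real pipeline this centered autocorrelation is what feeds the phase retrieval stage.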

However, the extracted autocorrelation of each imaging target does, in theory, carry some interference from the other autocorrelations, which leads to an unstable output when the traditional phase retrieval algorithm is run with a random phase as the initial input. To improve the stability of reconstruction, a modified phase retrieval algorithm is applied in this paper, designed especially for autocorrelation signals with interference. First, we blindly reconstruct a number of object images by the "hybrid input-output" and "error-reduction" algorithms with different random initial phases [27]. Then, for each object image, the part whose intensity is less than 20% of that image's maximum intensity is regarded as unstable noise and set to zero. Finally, the object image with the least change in the autocorrelation domain between the unprocessed and processed versions is taken as the final output.
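The restart-and-select procedure can be sketched as follows. This is a hedged simplification of the modified algorithm of [27]: only error-reduction restarts and the 20% stability selection are shown; the hybrid input-output stage and any support constraints are omitted, and the toy object is hypothetical.

```python
import numpy as np

def error_reduction(mag, n_iter=200, seed=0):
    """Minimal error-reduction phase retrieval: recover a non-negative image
    from its Fourier magnitude (the square root of the autocorrelation spectrum)."""
    rng = np.random.default_rng(seed)
    g = rng.random(mag.shape)                 # random initial guess = random phase
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = mag * np.exp(1j * np.angle(G))    # impose the Fourier magnitude
        g = np.fft.ifft2(G).real
        g[g < 0] = 0                          # impose non-negativity
    return g

def select_stable(candidates):
    """Selection step: zero intensities below 20% of each candidate's peak,
    then keep the candidate whose autocorrelation changes the least."""
    def ac(img):
        return np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(img))**2).real)
    best, best_err = None, np.inf
    for g in candidates:
        gp = np.where(g < 0.2 * g.max(), 0.0, g)
        err = np.abs(ac(g) - ac(gp)).sum()
        if err < best_err:
            best, best_err = gp, err
    return best

# toy object and its Fourier magnitude; several random restarts, then selection
obj = np.zeros((64, 64)); obj[28:36, 30:34] = 1.0
mag = np.abs(np.fft.fft2(obj))
recs = [error_reduction(mag, seed=s) for s in range(5)]
best = select_stable(recs)
```

The idea is that a reconstruction dominated by genuine structure loses little of its autocorrelation when its weak, noise-like pixels are zeroed, whereas an unstable one changes substantially.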

Working with the modified phase retrieval algorithm, each target can be reconstructed with the help of the detected number and locations of the imaging targets. These targets are then placed at their detected positions to form a complete scene of the multiple targets exceeding the ME range, without any prior information. Real experiments verify that the approximations made in this section are acceptable and that the extracted position information is sufficient for a visually distinguishable reconstruction. We discuss the limitations of the proposed algorithm in the next section.

3. EXPERIMENTAL DEMONSTRATION

In the following section, we first describe the optical setup of the scattering imaging system, then detail the tests on a real scattering imaging system, and finally analyze the limitations of the proposed algorithm.

A. Experimental Setup

The setup of the multi-target large FOV scattering imaging system via blind target position detection is shown in Fig. 4; it is extended from the single-shot scattering system proposed by Katz et al. [5]. A narrow-bandwidth 532 nm single-frequency CW laser (Cobolt Samba-100) serves as the light source, whose coherence is attenuated by a rotating ground glass. A Thorlabs 220-grit diffuser, whose effective ME range is 16.6 mrad, is placed between the multiple targets and the sensor plane [25]. We used a single CMOS camera (FLIR, pixel size = 4.8 μm, 1280×1024 pixels) on a 3D moving platform (DHC, minimum step = 10 μm) to capture the large FOV speckles at different imaging distances.


Fig. 4. Multi-target large FOV scattering imaging system setup via the blind target position detection.


B. Tests on a Real Scattering Imaging System

First, various multiple targets were tested to demonstrate the effectiveness of the proposed algorithm. In Fig. 5, the left column shows the test on the mask "2FL" and the right column the test on a larger and more complex scene, "01234." Figures 5(a) and 5(f) give the detailed parameters of the multiple targets. The minimum distance between any two targets is 3.5 mm, and each target size is within 0.5 mm. In this part, the object distance d0 equals 120 mm. The total scene size is much larger than the ME range (120 mm × 16.6 mrad), but each target size is smaller than it. The real multi-target large FOV near-field speckles through the scattering medium at different imaging distances were captured with the 3D-translated camera: Figs. 5(c) and 5(h) with d2 = 17.0 mm, and Figs. 5(d) and 5(i) with d1 = 16.5 mm. Visually, no prior information (including the number and locations of the imaging targets) can be read directly from these superimposed multi-target speckles. The red arrows in Fig. 5(e) show all the scaling vectors estimated by block matching from the near-field speckles for mask "2FL," describing the scaling relationship between the speckles at the two imaging distances. Three scaling centers clearly exist among these scaling vectors. Figure 5(e) also shows the extracted connected components for mask "2FL" at the bottom right, with the blue points denoting the final detected multi-target locations. The number of extracted connected components matches the number of imaging targets, so this multi-target scattering imaging problem reduces to reconstructing three objects. After that, the autocorrelation of each imaging target, shown in Fig. 5(d), can be reconstructed via the operations of Eq. (14) in the speckle regions centered at the detected positions in Fig. 5(e).
Working with the modified phase retrieval algorithm, each imaging target is reconstructed and then placed at its location to form a complete scene, as shown in Fig. 5(b), which is visually close to the original. The process of Figs. 5(h)–5(j) for target "01234" is the same as that of Figs. 5(c)–5(e). The objective similarity between the reconstructions and the original targets, evaluated by peak signal-to-noise ratio (PSNR), is shown in Table 1, and the detected target locations are shown in Figs. 5(b) and 5(g). Compared with the original scenes in Figs. 5(a) and 5(f), these results demonstrate that the multiple targets can be accurately positioned by scaling-vector-based position detection and that the proposed method achieves multi-target large FOV blind reconstruction through scattering media in a real imaging system.


Fig. 5. Tests on a real scattering imaging system. (a) The multi-target mask "2FL," with detailed parameters, used as the imaging targets. (b) The final large FOV reconstruction with the detected position information. (c) The captured near-field speckle with d2 = 17.0 mm. (d) The captured near-field speckle with d1 = 16.5 mm and the extracted autocorrelation of each imaging target, centered at the locations detected in (e). (e) The estimated scaling vectors (red arrows) from block matching and the detected locations (blue points); the connected component analysis result is shown at the bottom right at a smaller scale. (f)–(j) As in (a)–(e) for the larger and more complex scene "01234." Scale bar, 50 camera pixels.



Table 1. PSNRs Between Reconstructions and Targets

Second, to test the applicability of the proposed method for large FOV fluorescent biological observation through scattering media, neuron-shape scattering imaging experiments were conducted. The neuron-shape mask, scaled from real dendrites of hippocampal neurons [36], has complex target shapes and irregularly distributed target locations, which increases the difficulty of reconstruction but better reflects practical scattering scenes. The mask was set as the imaging target in the proposed scattering imaging system, with the other experimental conditions consistent with Fig. 5. Figure 6(a) shows the neuron-shape mask and its detailed parameters. The whole scene reconstructed via the proposed method is shown in Fig. 6(b), with the main features faithfully recovered. As before, no prior information can be read directly from the captured large FOV speckle shown in Fig. 6(c), recorded with d1 = 16.5 mm as the Distance 1 plane. In principle, the presented millimeter-scale experiments can be scaled to micrometer- or meter-scale scenarios for scattering imaging exceeding the ME range.


Fig. 6. Real tests for biological scattering observation. (a) The neuron-shape mask with the detailed parameters as the imaging targets. (b) The final reconstructed scene. (c) The captured near-field speckle with d1=16.5mm. Scale bar, 50 camera pixels.


C. Analyzing the Limitations

In the case that multiple targets are separated from each other as shown in Fig. 1, the proposed method blindly achieves multi-target localization and reconstruction exceeding the ME range. However, what is the minimum effective spacing between any two targets beyond which the scaling-vector-based detection and scattering imaging algorithm still works for multi-target speckles? To evaluate this limitation, we adjusted the spacing between any two targets for mask "2FL" in Fig. 7(a) from 3.25 mm down to 1.5 mm. Besides the spacing, all other parameters remain the same as in Fig. 5(a), with the effective ME range kept at 2 mm in the object plane and the size of each imaging target within 0.5 mm. In this experimental environment, the reconstructed targets and locations are visually distinguishable when the spacing ranges from 3.25 mm down to 2.25 mm, as shown in Fig. 7(b). The estimated scaling vectors and detected locations at a spacing of 2.75 mm are shown in Fig. 7(d) as an example of a good-quality reconstruction. As the spacing drops below 2.25 mm, the reconstructions become visually distorted and the quality degrades. Meanwhile, the proposed blind position detection algorithm estimates the wrong number and locations of the imaging targets from the near-field speckles, as shown in Fig. 7(e) at a spacing of 1.75 mm, which misleads the subsequent reconstruction process. Objectively, the three-target averaged PSNR curve between the reconstructions and the original targets with respect to decreasing spacing is provided in Fig. 7(c). Theoretically, the autocorrelation of a selected target, as in Eq. (14), suffers more interference from the other target autocorrelations as the spacing decreases, which worsens the final reconstruction after phase retrieval. Meanwhile, according to Eq. (13), the multi-target speckle region centered at one detected target location (O1 as an example) is mainly composed of the speckle generated by O1; the speckles generated by the other imaging targets make only a small contribution to this region. Therefore, the scaling vectors in this region regard the location of O1 as their scaling center, and multiple different scaling centers exist between the two imaging-distance speckles, corresponding to the locations of the different imaging targets. This is why the proposed blind position detection algorithm can extract multi-target positions from two large FOV imaging-distance speckles; Figs. 5(e) and 5(j) show the obvious boundaries of the speckle regions serving different imaging targets. However, with decreasing spacing, the weight differences in Eq. (13) are reduced and the scaling vectors in the speckle region centered at the location of O1 are increasingly perturbed by the other imaging targets. This results in low-accuracy target localization and even a wrong count of identified imaging targets.


Fig. 7. Real reconstructions for mask "2FL" as the spacing decreases from 3.25 mm to 1.5 mm. (a) The original imaging targets with detailed distance parameters. (b) The final reconstructed large FOV scenes corresponding to (a). (c) The averaged PSNR curve between reconstructions and original targets with respect to the decreasing spacing. (d) The estimated scaling vectors and locations at a spacing of 2.75 mm, as an example of a good-quality reconstruction. (e) The estimated scaling vectors and locations at a spacing of 1.75 mm, as an example of a degraded reconstruction.


As for how many times the proposed algorithm can exceed the ME range, there is no theoretical restriction, provided that the camera array can capture the full speckles as the number of imaging targets increases.

4. CONCLUSION AND DISCUSSION

To summarize, we developed a multi-target large-FOV scattering imaging method based on blind target position detection. The technique exploits only two captured multi-target near-field speckles at different imaging distances, from which the target positions cannot be observed directly. A major advantage of our approach is that the target position information, including the number and the locations of the imaging targets, can be blindly reconstructed via the scaling-vector-based detection algorithm. Autocorrelations are then calculated for the speckle regions centered at the derived positions via the low-cross-talk region allocation strategy. Working with the modified phase retrieval algorithm, the whole scene of the multiple isolated targets exceeding the ME range can be successfully recovered. Unlike other methods of exceeding the ME range, no prior information about the targets is required, which makes our technique more applicable to prior-information-free large-FOV imaging; moreover, other scattering imaging techniques can be combined with our method to further improve its performance [25,26]. The real scattering imaging experiments demonstrate the effectiveness of the proposed method.
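The autocorrelation step at the core of the reconstruction pipeline follows the Wiener-Khinchin theorem: the autocorrelation of a speckle region equals the inverse Fourier transform of its power spectrum. A minimal numpy sketch for one allocated region (function name ours) is:

```python
import numpy as np

def speckle_autocorrelation(region):
    """Autocorrelation of a speckle region via the Wiener-Khinchin theorem:
    the inverse Fourier transform of the power spectrum."""
    r = np.asarray(region, dtype=float)
    r = r - r.mean()                      # remove the DC term before correlating
    power = np.abs(np.fft.fft2(r)) ** 2   # power spectrum
    ac = np.fft.ifft2(power).real
    return np.fft.fftshift(ac)            # move the zero-lag peak to the center

rng = np.random.default_rng(1)
ac = speckle_autocorrelation(rng.random((64, 64)))
```

Per the ME, this speckle autocorrelation approximates the autocorrelation of the hidden target, which the phase retrieval algorithm [27] then inverts to an image.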

In practice, the detected target positions contain some errors relative to the true multi-target positions. These errors come from three main sources: (1) the discrete sampling during camera acquisition and image processing; (2) the approximations made in the theoretical derivation from Eq. (9) to Eq. (11) to simplify the scattering model; and (3) the nonzero divergence angle of the experimental illumination, which slightly shifts the target positions from the object plane to the sensor plane. Nevertheless, these errors did not affect the effectiveness of the proposed algorithm for target localization and reconstruction. On the other hand, the minimal effective spacing between any two targets is limited not only by the target size and the properties of the scattering medium, but also by the distance between the scattering medium and the camera sensor, and even by the shape of the imaging targets. A deeper investigation of this limit is left for future work.

Additionally, the main time consumption of the proposed algorithm lies in estimating the scaling vectors from the two imaging-distance speckles, and it grows with the FOV of the captured speckles. When the resolution of the captured speckles in Figs. 5(c) and 5(d) is 1600×2500 pixels and the side length of the selected square region (β) is set to 400 pixels, the total processing time in MATLAB 2018b is about 6.1 h. In the image processing part, the raw captured multi-target speckle with its slowly varying envelope is normalized by dividing the raw speckle by a low-pass-filtered version of itself before calculating autocorrelations [5]. In particular, for estimating the scaling vectors, the normalized speckle is further smoothed by a low-pass filter to remove camera noise, which is spatially invariant across imaging distances, and thereby improve the localization accuracy.
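The envelope normalization described above can be sketched with a numpy-only Gaussian low-pass filter; the cutoff value and function name are illustrative assumptions, not the parameters used in our experiments.

```python
import numpy as np

def normalize_envelope(speckle, cutoff=0.02):
    """Divide the raw speckle by a Gaussian low-pass-filtered copy of itself
    to remove the slowly varying envelope (cutoff in cycles/pixel)."""
    s = np.asarray(speckle, dtype=float)
    h, w = s.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    lowpass = np.exp(-(fx ** 2 + fy ** 2) / (2 * cutoff ** 2))  # unity at DC
    envelope = np.fft.ifft2(np.fft.fft2(s) * lowpass).real
    return s / np.maximum(envelope, 1e-12)  # guard against division by zero

# A smooth synthetic envelope is flattened to roughly unit level.
y, x = np.mgrid[0:128, 0:128]
env = 1.0 + 0.5 * np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2 * 40.0 ** 2))
flat = normalize_envelope(env)
```

A real speckle multiplied by such an envelope would be normalized the same way, leaving the fine speckle grain intact for the subsequent autocorrelation.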

Finally, the imaging distances d1 and d2 are not fixed in the proposed method, and neither is the gap between the two near-field speckles, d2 − d1. Moreover, these distance parameters were not used as known information for reconstruction in this paper. In practice, the imaging distances can be adjusted to the requirements of different scenes, as long as the near-field scaling relationship between the two imaging-distance speckles holds, and light field techniques [37] may be applied to the proposed system to speed up acquisition. Meanwhile, characterizing the relationship between the parameter m and multiple variables (including d0, d1, d2, and the TM) via statistical analysis is planned as future research, with the aim of extracting more useful information from two near-field speckles at different imaging distances. Furthermore, the proposed method should in theory still work when the imaging targets are sandwiched between two scattering layers or viewed around corners [5], which will be tested experimentally in future work. The raw data used to generate the results presented in this manuscript are available at https://cloud.tsinghua.edu.cn/d/296c066dbcc243839f52/.

Funding

National Natural Science Foundation of China (61827804, 61771275); Guangdong Special Support Plan for the scientific and technological innovation young talents (2016TQ03X998).

Acknowledgment

We would like to thank Xiangsheng Xie for helpful discussions.

Disclosures

The authors declare no conflicts of interest.

REFERENCES

1. J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications (Roberts, 2007).

2. S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. 61, 834–837 (1988). [CrossRef]  

3. I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61, 2328–2331 (1988). [CrossRef]  

4. I. Freund, “Looking through walls and around corners,” Phys. A 168, 49–65 (1990). [CrossRef]  

5. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014). [CrossRef]  

6. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012). [CrossRef]  

7. X. Yang, Y. Pu, and D. Psaltis, “Imaging blood cells through scattering biological tissue using speckle scanning microscopy,” Opt. Express 22, 3405–3413 (2014). [CrossRef]  

8. L. Wang, P. P. Ho, C. Liu, G. Zhang, and R. R. Alfano, “Ballistic 2-D imaging through scattering walls using an ultrafast optical Kerr gate,” Science 253, 769–771 (1991). [CrossRef]  

9. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stingson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254, 1178–1181 (1991). [CrossRef]  

10. W. Denk, J. H. Strickler, and W. W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science 248, 73–76 (1990). [CrossRef]  

11. G. Satat, M. Tancik, and R. Raskar, “Towards photography through realistic fog,” in IEEE International Conference on Computational Photography (2018), pp. 1–10.

12. S. Kang, S. Jeong, W. Choi, H. Ko, T. D. Yang, J. H. Joo, J. S. Lee, Y. S. Lim, Q. H. Park, and W. Choi, “Imaging deep within a scattering medium using collective accumulation of single-scattered waves,” Nat. Photonics 9, 253–258 (2015). [CrossRef]  

13. V. Ntziachristos, “Going deeper than microscopy: the optical imaging frontier in biology,” Nat. Methods 7, 603–614 (2010). [CrossRef]  

14. I. M. Vellekoop and A. P. Mosk, “Focusing coherent light through opaque strongly scattering media,” Opt. Lett. 32, 2309–2311 (2007). [CrossRef]  

15. A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics 6, 283–292 (2012). [CrossRef]  

16. E. Edrei and G. Scarcelli, “Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media,” Sci. Rep. 6, 33558 (2016). [CrossRef]  

17. H. Zhuang, H. He, X. Xie, and J. Zhou, “High speed color imaging through scattering media with a large field of view,” Sci. Rep. 6, 32696 (2016). [CrossRef]  

18. X. Xie, H. Zhuang, H. He, X. Xu, H. Liang, Y. Liu, and J. Zhou, “Extended depth-resolved imaging through a thin scattering medium with PSF manipulation,” Sci. Rep. 8, 4585 (2018). [CrossRef]  

19. S. Popoff, G. Lerosey, M. Fink, A. C. Boccara, and S. Gigan, “Image transmission through an opaque material,” Nat. Commun. 1, 81 (2010). [CrossRef]  

20. M. Mounaix, H. B. Aguiar, and S. Gigan, “Temporal recompression through a scattering medium via a broadband transmission matrix,” Optica 4, 1289–1292 (2017). [CrossRef]  

21. G. Kim and R. Menon, “Computational imaging enables a see-through lens-less camera,” Opt. Express 26, 22826–22836 (2018). [CrossRef]  

22. L. Li, Q. Li, S. Sun, H. Z. Lin, W. T. Liu, and P. X. Chen, “Imaging through scattering layers exceeding memory effect range with spatial-correlation-achieved point-spread-function,” Opt. Lett. 43, 1670–1673 (2018). [CrossRef]  

23. S. K. Sahoo, D. Tang, and C. Dang, “Single-shot multispectral imaging with a monochromatic camera,” Optica 4, 1209–1213 (2017). [CrossRef]  

24. C. Guo, J. Liu, W. Li, T. Wu, L. Zhu, J. Wang, G. Wang, and X. Shao, “Imaging through scattering layers exceeding memory effect range by exploiting prior information,” Opt. Commun. 434, 203–208 (2019). [CrossRef]  

25. X. Wang, X. Jin, J. Li, X. Lian, X. Ji, and Q. Dai, “Prior-information-free single-shot scattering imaging beyond the memory effect,” Opt. Lett. 44, 1423–1426 (2019). [CrossRef]  

26. A. Boniface, B. Blochet, J. Dong, and S. Gigan, “Noninvasive light focusing in scattering media using speckle variance optimization,” Optica 6, 1381–1385 (2019). [CrossRef]  

27. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982). [CrossRef]  

28. X. Jin, Z. Wang, X. Wang, and Q. Dai, “Depth of field extended scattering imaging by light field estimation,” Opt. Lett. 43, 4871–4874 (2018). [CrossRef]  

29. P. Jain and S. E. Sarma, “Measuring light transport using speckle patterns as structured illumination,” Sci. Rep. 9, 11157 (2019). [CrossRef]  

30. Y. Choi, P. Hosseini, W. Choi, R. R. Dasari, P. T. C. So, and Z. Yaqoob, “Dynamic speckle illumination wide-field reflection phase microscopy,” Opt. Lett. 39, 6062–6065 (2014). [CrossRef]  

31. S. Schott, J. Bertolotti, J. F. Leger, L. Bourdieu, and S. Gigan, “Characterization of the angular memory effect of scattered light in biological tissues,” Opt. Express 23, 13505–13516 (2015). [CrossRef]  

32. X. Jin, D. M. S. Wei, and Q. Dai, “Point spread function for diffuser cameras based on wave propagation and projection model,” Opt. Express 27, 12748–12761 (2019). [CrossRef]  

33. M. A. Van Dam and R. G. Lane, “Wave-front slope estimation,” J. Opt. Soc. Am. 17, 1319–1324 (2000). [CrossRef]  

34. J. Ko and C. C. Davis, “Comparison of the plenoptic sensor and the Shack-Hartmann sensor,” Appl. Opt. 56, 3689–3698 (2017). [CrossRef]  

35. R. M. Haralick and L. G. Shapiro, Computer and Robot Vision (Addison Wesley, 1992).

36. D. Brandner and G. Withers, “Multipolar neuron, Rattus from CIL:2907,” https://doi.org/doi:10.7295/W9CIL2907 (2010).

37. G. Wu, B. Masia, A. Jarabo, Y. Zhang, L. Wang, Q. Dai, T. Chai, and Y. Liu, “Light field image processing: an overview,” IEEE J. Sel. Top. Signal Process. 11, 926–954 (2017). [CrossRef]  

[Crossref]

Shapiro, L. G.

R. M. Haralick and L. G. Shapiro, Computer and Robot Vision (Addison Wesley, 1992).

So, P. T. C.

Stingson, W. G.

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stingson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254, 1178–1181 (1991).
[Crossref]

Stone, A. D.

S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. 61, 834–837 (1988).
[Crossref]

Strickler, J. H.

W. Denk, J. H. Strickler, and W. W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science 248, 73–76 (1990).
[Crossref]

Sun, S.

Swanson, E. A.

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stingson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254, 1178–1181 (1991).
[Crossref]

Tancik, M.

G. Satat, M. Tancik, and R. Raskar, “Towards photography through realistic fog,” in IEEE International Conference on Computational Photography (2018), pp. 1–10.

Tang, D.

Van Dam, M. A.

M. A. Van Dam and R. G. Lane, “Wave-front slope estimation,” J. Opt. Soc. Am. 17, 1319–1324 (2000).
[Crossref]

van Putten, E. G.

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref]

Vellekoop, I. M.

Vos, W. L.

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref]

Wang, G.

C. Guo, J. Liu, W. Li, T. Wu, L. Zhu, J. Wang, G. Wang, and X. Shao, “Imaging through scattering layers exceeding memory effect range by exploiting prior information,” Opt. Commun. 434, 203–208 (2019).
[Crossref]

Wang, J.

C. Guo, J. Liu, W. Li, T. Wu, L. Zhu, J. Wang, G. Wang, and X. Shao, “Imaging through scattering layers exceeding memory effect range by exploiting prior information,” Opt. Commun. 434, 203–208 (2019).
[Crossref]

Wang, L.

G. Wu, B. Masia, A. Jarabo, Y. Zhang, L. Wang, Q. Dai, T. Chai, and Y. Liu, “Light field image processing: an overview,” IEEE J. Sel. Top. Signal Process. 11, 926–954 (2017).
[Crossref]

L. Wang, P. P. Ho, C. Liu, G. Zhang, and R. R. Alfano, “Ballistic 2-D imaging through scattering walls using an ultrafast optical Kerr gate,” Science 253, 769–771 (1991).
[Crossref]

Wang, X.

Wang, Z.

Webb, W. W.

W. Denk, J. H. Strickler, and W. W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science 248, 73–76 (1990).
[Crossref]

Wei, D. M. S.

Wu, G.

G. Wu, B. Masia, A. Jarabo, Y. Zhang, L. Wang, Q. Dai, T. Chai, and Y. Liu, “Light field image processing: an overview,” IEEE J. Sel. Top. Signal Process. 11, 926–954 (2017).
[Crossref]

Wu, T.

C. Guo, J. Liu, W. Li, T. Wu, L. Zhu, J. Wang, G. Wang, and X. Shao, “Imaging through scattering layers exceeding memory effect range by exploiting prior information,” Opt. Commun. 434, 203–208 (2019).
[Crossref]

Xie, X.

X. Xie, H. Zhuang, H. He, X. Xu, H. Liang, Y. Liu, and J. Zhou, “Extended depth-resolved imaging through a thin scattering medium with PSF manipulation,” Sci. Rep. 8, 4585 (2018).
[Crossref]

H. Zhuang, H. He, X. Xie, and J. Zhou, “High speed color imaging through scattering media with a large field of view,” Sci. Rep. 6, 32696 (2016).
[Crossref]

Xu, X.

X. Xie, H. Zhuang, H. He, X. Xu, H. Liang, Y. Liu, and J. Zhou, “Extended depth-resolved imaging through a thin scattering medium with PSF manipulation,” Sci. Rep. 8, 4585 (2018).
[Crossref]

Yang, T. D.

S. Kang, S. Jeong, W. Choi, H. Ko, T. D. Yang, J. H. Joo, J. S. Lee, Y. S. Lim, Q. H. Park, and W. Choi, “Imaging deep within a scattering medium using collective accumulation of single-scattered waves,” Nat. Photonics 9, 253–258 (2015).
[Crossref]

Yang, X.

Yaqoob, Z.

Zhang, G.

L. Wang, P. P. Ho, C. Liu, G. Zhang, and R. R. Alfano, “Ballistic 2-D imaging through scattering walls using an ultrafast optical Kerr gate,” Science 253, 769–771 (1991).
[Crossref]

Zhang, Y.

G. Wu, B. Masia, A. Jarabo, Y. Zhang, L. Wang, Q. Dai, T. Chai, and Y. Liu, “Light field image processing: an overview,” IEEE J. Sel. Top. Signal Process. 11, 926–954 (2017).
[Crossref]

Zhou, J.

X. Xie, H. Zhuang, H. He, X. Xu, H. Liang, Y. Liu, and J. Zhou, “Extended depth-resolved imaging through a thin scattering medium with PSF manipulation,” Sci. Rep. 8, 4585 (2018).
[Crossref]

H. Zhuang, H. He, X. Xie, and J. Zhou, “High speed color imaging through scattering media with a large field of view,” Sci. Rep. 6, 32696 (2016).
[Crossref]

Zhu, L.

C. Guo, J. Liu, W. Li, T. Wu, L. Zhu, J. Wang, G. Wang, and X. Shao, “Imaging through scattering layers exceeding memory effect range by exploiting prior information,” Opt. Commun. 434, 203–208 (2019).
[Crossref]

Zhuang, H.

X. Xie, H. Zhuang, H. He, X. Xu, H. Liang, Y. Liu, and J. Zhou, “Extended depth-resolved imaging through a thin scattering medium with PSF manipulation,” Sci. Rep. 8, 4585 (2018).
[Crossref]

H. Zhuang, H. He, X. Xie, and J. Zhou, “High speed color imaging through scattering media with a large field of view,” Sci. Rep. 6, 32696 (2016).
[Crossref]

Appl. Opt. (2)

IEEE J. Sel. Top. Signal Process. (1)

G. Wu, B. Masia, A. Jarabo, Y. Zhang, L. Wang, Q. Dai, T. Chai, and Y. Liu, “Light field image processing: an overview,” IEEE J. Sel. Top. Signal Process. 11, 926–954 (2017).
[Crossref]

J. Opt. Soc. Am. (1)

M. A. Van Dam and R. G. Lane, “Wave-front slope estimation,” J. Opt. Soc. Am. 17, 1319–1324 (2000).
[Crossref]

Nat. Commun. (1)

S. Popoff, G. Lerosey, M. Fink, A. C. Boccara, and S. Gigan, “Image transmission through an opaque material,” Nat. Commun. 1, 81 (2010).
[Crossref]

Nat. Methods (1)

V. Ntziachristos, “Going deeper than microscopy: the optical imaging frontier in biology,” Nat. Methods 7, 603–614 (2010).
[Crossref]

Nat. Photonics (3)

A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics 6, 283–292 (2012).
[Crossref]

O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014).
[Crossref]

S. Kang, S. Jeong, W. Choi, H. Ko, T. D. Yang, J. H. Joo, J. S. Lee, Y. S. Lim, Q. H. Park, and W. Choi, “Imaging deep within a scattering medium using collective accumulation of single-scattered waves,” Nat. Photonics 9, 253–258 (2015).
[Crossref]

Nature (1)

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref]

Opt. Commun. (1)

C. Guo, J. Liu, W. Li, T. Wu, L. Zhu, J. Wang, G. Wang, and X. Shao, “Imaging through scattering layers exceeding memory effect range by exploiting prior information,” Opt. Commun. 434, 203–208 (2019).
[Crossref]

Opt. Express (4)

Opt. Lett. (5)

Optica (3)

Phys. A (1)

I. Freund, “Looking through walls and around corners,” Phys. A 168, 49–65 (1990).
[Crossref]

Phys. Rev. Lett. (2)

S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. 61, 834–837 (1988).
[Crossref]

I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61, 2328–2331 (1988).
[Crossref]

Sci. Rep. (4)

E. Edrei and G. Scarcelli, “Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media,” Sci. Rep. 6, 33558 (2016).
[Crossref]

H. Zhuang, H. He, X. Xie, and J. Zhou, “High speed color imaging through scattering media with a large field of view,” Sci. Rep. 6, 32696 (2016).
[Crossref]

X. Xie, H. Zhuang, H. He, X. Xu, H. Liang, Y. Liu, and J. Zhou, “Extended depth-resolved imaging through a thin scattering medium with PSF manipulation,” Sci. Rep. 8, 4585 (2018).
[Crossref]

P. Jain and S. E. Sarma, “Measuring light transport using speckle patterns as structured illumination,” Sci. Rep. 9, 11157 (2019).
[Crossref]

Science (3)

L. Wang, P. P. Ho, C. Liu, G. Zhang, and R. R. Alfano, “Ballistic 2-D imaging through scattering walls using an ultrafast optical Kerr gate,” Science 253, 769–771 (1991).
[Crossref]

D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stingson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254, 1178–1181 (1991).
[Crossref]

W. Denk, J. H. Strickler, and W. W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science 248, 73–76 (1990).
[Crossref]

Other (4)

G. Satat, M. Tancik, and R. Raskar, “Towards photography through realistic fog,” in IEEE International Conference on Computational Photography (2018), pp. 1–10.

J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications (Roberts, 2007).

R. M. Haralick and L. G. Shapiro, Computer and Robot Vision (Addison Wesley, 1992).

D. Brandner and G. Withers, “Multipolar neuron, Rattus from CIL:2907,” https://doi.org/doi:10.7295/W9CIL2907 (2010).


Figures (7)

Fig. 1. Schematic of our multi-target large FOV scattering imaging system via the blind target position detection. Multiple isolated targets, O_1, O_2, …, O_n, behind the diffuser form a large FOV scene.
Fig. 2. Simulated experiments to analyze the relationship between two PSFs at different imaging distances (d_0 = 120 mm, pixel size = 4.8 μm, 600 × 600 pixels). The point light source was set on the optical axis (u = 300, v = 300), corresponding to the point where O_k is located in the object plane. (a) Normalized PSF_{d_1}^k with d_1 = 17 mm. (b) Normalized PSF_{d_2}^k with d_2 = 19 mm. (c) The estimated low-density scaling vectors based on (a) and (b). The spacing between any two vectors, vertically or horizontally, is 20 pixels. The green rectangle in (b) is the matched block of the green rectangle in (a), and the enlarged arrow in (c) represents the estimated scaling vector corresponding to these two green rectangles. The blue point in (c) is the location of the light source. (d) The histogram of the m values extracted from all the scaling vectors in (c). Scale bar, 50 camera pixels.
Fig. 3. The block diagram of the scaling-vector-based detection algorithm.
Fig. 4. Multi-target large FOV scattering imaging system setup via the blind target position detection.
Fig. 5. Tests on a real scattering imaging system. (a) The multi-target mask “2FL” with the detailed parameters as the imaging targets. (b) The final large FOV reconstruction with the detected position information. (c) The captured near-field speckle with d_2 = 17.0 mm. (d) The captured near-field speckle with d_1 = 16.5 mm and the extracted autocorrelation of each imaging target, centered at the locations detected in (e). (e) The estimated scaling vectors (red arrows) obtained by block matching and the detected locations (blue points). The connected-component analysis result is shown at the bottom right at a smaller scale. (f)–(j) As in (a)–(e) for a larger and more complex scene, “01234.” Scale bar, 50 camera pixels.
Fig. 6. Real tests for biological scattering observation. (a) The neuron-shaped mask with the detailed parameters as the imaging targets. (b) The final reconstructed scene. (c) The captured near-field speckle with d_1 = 16.5 mm. Scale bar, 50 camera pixels.
Fig. 7. Real reconstructions for mask “2FL” as the spacing decreases from 3.25 mm to 1.5 mm. (a) The original imaging targets with detailed distance parameters. (b) The final reconstructed large FOV scenes corresponding to (a). (c) The averaged PSNR between reconstructions and original targets as a function of the decreasing spacing. (d) The estimated scaling vectors and locations when the spacing equals 2.75 mm, as an example of a reconstruction of good quality. (e) The estimated scaling vectors and locations when the spacing equals 1.75 mm, as an example of a degraded reconstruction.
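The scaling vectors shown in Figs. 2, 5, and 7 are estimated by block matching between the two speckles captured at distances d_1 and d_2: a block from the first speckle is located in the second by maximizing a normalized cross-correlation over candidate positions, and the resulting displacement gives the local scaling vector. The sketch below illustrates only this matching step; the function name, block size, search range, and the pure-shift toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def match_block(I1, I2, top_left, block=32, search=10):
    """Locate a block from speckle I1 inside speckle I2 by exhaustively
    maximizing zero-mean normalized cross-correlation; the returned
    displacement plays the role of a local scaling vector."""
    r, c = top_left
    ref = I1[r:r + block, c:c + block].astype(float)
    ref = ref - ref.mean()
    best, best_dr, best_dc = -np.inf, 0, 0
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            if rr < 0 or cc < 0 or rr + block > I2.shape[0] or cc + block > I2.shape[1]:
                continue  # candidate block would fall outside the speckle
            cand = I2[rr:rr + block, cc:cc + block].astype(float)
            cand = cand - cand.mean()
            denom = np.linalg.norm(ref) * np.linalg.norm(cand)
            score = (ref * cand).sum() / denom if denom > 0 else -np.inf
            if score > best:
                best, best_dr, best_dc = score, dr, dc
    return best_dr, best_dc

# Toy check: shift a random "speckle" and recover the displacement.
rng = np.random.default_rng(0)
I1 = rng.random((128, 128))
I2 = np.roll(I1, (3, -2), axis=(0, 1))  # a pure shift stands in for local scaling
print(match_block(I1, I2, (48, 48)))  # → (3, -2)
```

In the paper this search is repeated over a grid of blocks (Fig. 2(c) shows the resulting low-density vector field), and the common scaling factor m is then read off from the vectors' histogram.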

Tables (1)

Table 1. PSNRs Between Reconstructions and Targets
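Table 1 and Fig. 7(c) quantify reconstruction quality with the PSNR between each reconstruction and its original target. For reference, a standard PSNR computation on images normalized to a peak value of 1 (the helper name `psnr` and the toy images are illustrative, not taken from the paper):

```python
import numpy as np

def psnr(ref, rec, peak=1.0):
    """Peak signal-to-noise ratio in dB between a target and its reconstruction."""
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak ** 2 / mse)

target = np.zeros((8, 8))
target[2:6, 2:6] = 1.0
noisy = target + 0.1  # uniform error of 0.1 → MSE = 0.01 → PSNR = 20 dB
print(round(psnr(target, noisy), 2))  # → 20.0
```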

Equations (14)

$$I_{d_1}=\sum_{k=1}^{n} I_{d_1}^{k}=\sum_{k=1}^{n} O_k * \mathrm{PSF}_{d_1}^{k},\tag{1}$$

$$\mathrm{PSF}_{d_1}^{k}(x,y)=C(x-u_k,\,y-v_k)\cdot S_{d_1}^{k}(x,y),\tag{2}$$

$$I_{d_1}(x,y)=\sum_{k=1}^{n} C(x-u_k,\,y-v_k)\cdot\bigl(O_k * S_{d_1}^{k}\bigr)(x,y).\tag{3}$$

$$U_s(x_s,y_s)=\frac{e^{j2\pi d_0/\lambda}}{j\lambda d_0}\iint \delta(u,v)\Big|_{\substack{u=0\\ v=0}}\cdot e^{\frac{j\pi}{\lambda d_0}\left[(u-x_s)^2+(v-y_s)^2\right]}\,\mathrm{d}u\,\mathrm{d}v=\frac{e^{j2\pi d_0/\lambda}}{j\lambda d_0}\cdot e^{\frac{j\pi}{\lambda d_0}(x_s^2+y_s^2)},\tag{4}$$

$$\begin{aligned}
h_1(x,y)&=\frac{d_1 e^{j2\pi d_0/\lambda}}{\lambda^2 d_0}\iint_{x_s,y_s\in TM} M(x_s,y_s)\cdot e^{\frac{j\pi}{\lambda d_0}(x_s^2+y_s^2)}\cdot\frac{e^{\frac{j2\pi}{\lambda}\sqrt{d_1^2+(x_s-x)^2+(y_s-y)^2}}}{d_1^2+(x_s-x)^2+(y_s-y)^2}\,\mathrm{d}x_s\,\mathrm{d}y_s\\
&\approx\frac{d_1 e^{j2\pi (d_0+d_1)/\lambda}}{\lambda^2 d_0}\,e^{\frac{j\pi}{\lambda d_1}(x^2+y^2)}\iint_{x_s,y_s\in TM} M(x_s,y_s)\cdot e^{\frac{j\pi}{\lambda f}(x_s^2+y_s^2)}\cdot\frac{e^{-j2\pi\left(\frac{x}{\lambda d_1}x_s+\frac{y}{\lambda d_1}y_s\right)}}{d_1^2+(x_s-x)^2+(y_s-y)^2}\,\mathrm{d}x_s\,\mathrm{d}y_s,
\end{aligned}\tag{5}$$

$$\mathrm{PSF}_{d_1}^{k}(x,y)=|h_1(x,y)|^2=\left(\frac{d_1}{\lambda^2 d_0}\right)^2\left|\iint_{x_s,y_s\in TM} M(x_s,y_s)\cdot e^{\frac{j\pi}{\lambda f}(x_s^2+y_s^2)}\cdot\frac{e^{-j2\pi\left(\frac{x}{\lambda d_1}x_s+\frac{y}{\lambda d_1}y_s\right)}}{d_1^2+(x_s-x)^2+(y_s-y)^2}\,\mathrm{d}x_s\,\mathrm{d}y_s\right|^2.\tag{6}$$

$$T(d_1,d_2,m)=\iint \mathrm{PSF}_{d_1}^{k}(x,y)\cdot \mathrm{PSF}_{d_2}^{k}(mx,my)\,\mathrm{d}x\,\mathrm{d}y,\tag{7}$$

$$(x_{d_2},y_{d_2})=\arg\max_{(x,y)}\left[\mathrm{Corr}\bigl(M_{x,y},\,N_{x_{d_1},y_{d_1}}\bigr)\right],\tag{8}$$

$$\mathrm{PSF}_{d_1}^{k}(x,y)\approx \mathrm{PSF}_{d_2}^{k}(mx,\,my),\tag{9}$$

$$\mathrm{PSF}_{d_1}^{k}(x,y)\approx \mathrm{PSF}_{d_2}^{k}\bigl[mx-(m-1)u_k,\;my-(m-1)v_k\bigr],\tag{10}$$

$$I_{d_1}^{k}(\Delta x+u_k,\,\Delta y+v_k)\approx I_{d_2}^{k}(m\cdot\Delta x+u_k,\,m\cdot\Delta y+v_k),\tag{11}$$

$$\varepsilon=\Bigl\{(x,y)\ \Big|\ \|(x,y)-(u_1,v_1)\|_2\le\|(x,y)-(u_k,v_k)\|_2,\ \|(x,y)-(u_1,v_1)\|_2\le\beta/2,\ k=2,\dots,n\Bigr\}.\tag{12}$$

$$C(x-u_1,\,y-v_1)\gg C(x-u_k,\,y-v_k),\quad k=2,\dots,n,\ (x,y)\in\varepsilon.\tag{13}$$

$$I_\varepsilon\star I_\varepsilon(x,y)=\sum_{k=1}^{n}C(x-u_k,\,y-v_k)^2\cdot O_k\star O_k(x,y)\approx C(x-u_1,\,y-v_1)^2\cdot O_1\star O_1(x,y)\propto O_1\star O_1(x,y),\tag{14}$$
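The final relation above is what makes the low-cross-talk region allocation work: within a region ε centered on a detected position, the autocorrelation of the cropped speckle approximates the autocorrelation of the nearest target alone, which can then be handed to phase retrieval. A minimal sketch of that autocorrelation step via the Wiener–Khinchin theorem (the mean removal and zero-padding choices are illustrative assumptions, not necessarily the authors' exact processing):

```python
import numpy as np

def autocorr(region):
    """Autocorrelation of an intensity region via the Wiener–Khinchin theorem,
    I ⋆ I = F⁻¹{|F{I}|²}, zero-padded to avoid circular wrap-around and with
    the mean removed to suppress the flat background term."""
    r = region.astype(float) - region.mean()
    h, w = r.shape
    pad = np.zeros((2 * h, 2 * w))
    pad[:h, :w] = r
    F = np.fft.fft2(pad)
    ac = np.fft.ifft2(np.abs(F) ** 2).real
    return np.fft.fftshift(ac)  # zero lag moved to the array center

# The autocorrelation of any real pattern is symmetric and peaks at zero lag.
rng = np.random.default_rng(1)
ac = autocorr(rng.random((64, 64)))
print(ac.max() == ac[64, 64])  # → True (zero lag sits at the center)
```

In the paper's pipeline, `region` would be the speckle patch cropped around one detected target position, and the resulting autocorrelation feeds the modified phase retrieval algorithm.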
