
Faster and less phototoxic 3D fluorescence microscopy using a versatile compressed sensing scheme

Open Access

Abstract

Three-dimensional fluorescence microscopy based on Nyquist sampling of focal planes faces harsh trade-offs between acquisition time, light exposure, and signal-to-noise. We propose a 3D compressed sensing approach that uses temporal modulation of the excitation intensity during axial stage sweeping and can be adapted to fluorescence microscopes without hardware modification. We describe implementations on a lattice light sheet microscope and an epifluorescence microscope, and show that images of beads and biological samples can be reconstructed with a 5-10 fold reduction of light exposure and acquisition time. Our scheme opens a new door towards faster and less damaging 3D fluorescence microscopy.

© 2017 Optical Society of America

1. Introduction

Imaging fluorescently labeled biological structures with high spatio-temporal resolution requires judicious compromises between the conflicting goals of achieving high signal-to-noise ratio (SNR) and temporal resolution while keeping the excitation power low to minimize photobleaching and phototoxicity. For example, to obtain a higher SNR, one can either increase the exposure time, thereby reducing imaging speed, or increase the illumination power, thereby increasing photodamage. These tradeoffs are further exacerbated in 3D imaging, which is often required in biological applications, such as calcium imaging in neurons or transient mitotic events in a developing embryo. Traditionally, 3D microscopy images are obtained by sequentially acquiring 2D images of individual focal planes, where axial spacing is dictated by the Nyquist sampling criterion to achieve optimal spatial resolution in all dimensions. As a consequence, hundreds or thousands of planes are needed to image samples 10-1,000 µm thick, dramatically increasing acquisition time and light exposure. Although light sheet illumination considerably reduces photodamage and allows prolonged imaging of living cells in 3D, the required hardware systems are often costly and scarce, and the acquisition time for each volume still remains constrained by the Nyquist criterion [1–3].

The field of compressed sensing, introduced over a decade ago, offers an avenue to overcome these limitations [4–7]. Compressed sensing leverages the fact that natural images are highly non-random and harbour intrinsic redundancies, which can be formulated as sparsity in an appropriate linear basis [8]. This sparsity can be exploited in order to reconstruct images from fewer measurements than specified by the Nyquist criterion, provided that measurements are taken in an appropriate manner. Compressed sensing has been successfully applied in diverse imaging applications in fields including astronomy [9], magnetic resonance imaging [10], lensless imaging [11,12] and ultrafast imaging [13,14], where it has enabled a considerable increase in acquisition speeds.

In biological microscopy, compressed sensing should in principle enable similar benefits in reducing acquisition time and light exposure without compromising SNR [15]. However, despite several proof of concepts, fluorescence microscopy has benefited relatively little from compressed sensing approaches in practice. One reason for this is that most compressed sensing strategies proposed to date require considerable modifications of the optical system, an important impediment for application on routinely used microscopes [16–21]. We note however, that a compressed sensing scheme without modification of the light path was recently used in confocal laser scanning microscopy to achieve a 10-15 fold speedup in 2D imaging [22].

Here we introduce a compressed sensing scheme for 3D fluorescence imaging that relies on compression along the optical axis (z axis) and is applicable to a large range of fluorescence modalities without modification of the optical path. We show that for a given SNR, our method can reconstruct a z stack from a 2-10 times faster acquisition than traditional plane-by-plane imaging with Nyquist sampling. For dynamic microscopy of live samples, this approach opens the door to either lower excitation power and photodamage (at constant acquisition speed and SNR) or to higher temporal resolution (at constant excitation power and SNR).

In Section 2, we first describe our method conceptually, starting with a brief reminder of the basics of compressed sensing. In Section 3, we present results on simulations. Implementation on a lattice light sheet microscope and a conventional epifluorescence microscope are demonstrated in Sections 4 and 5 respectively. Sections 6 and 7 provide a brief discussion and conclusion.

2. Method

2.1. Basics of compressed sensing

Compressed sensing is based on the realization that under certain (broad) conditions, natural signals such as images can be reconstructed from a smaller number of measurements than prescribed by Nyquist sampling. If X is a (vectorized) image of size N × 1 and A a known M × N matrix that transforms X into a signal Y = AX of smaller size M × 1 (M < N), then the goal is to recover X (or a good approximation thereof) from Y. The matrix A, which is independent of the data, specifies how the N pixels of the image are scrambled into the M “compressed” measurements and is called the sensing (or measurement) matrix.

In order to recover X from Y, compressed sensing reconstruction algorithms exploit the structural redundancy of images. In the simplest setting, it is assumed that the image X is sparse, i.e. that the number of non-zero values $K = \|X\|_0$ is small (K ≪ N), or that X can be represented sparsely in a suitable basis, i.e. that X = Ψα, where Ψ is an invertible (e.g. orthonormal) N × N matrix and α is a sparse vector of size N × 1. Note that this setting can easily be adapted to incorporate a redundant dictionary Ψ of size N × W with W > N instead of an orthogonal basis, allowing for improved reconstructions [23]. The reconstruction algorithms aim to determine the sparsest representation α consistent with the data, i.e. such that Y = AΨα. While minimizing the $\ell_0$ norm to enforce sparsity is NP-hard and computationally unfeasible, minimizing the $\ell_1$ norm ($\|\alpha\|_1 = \sum_{i=1}^{N} |\alpha_i|$) leads to computationally tractable optimization algorithms that recover the exact solution.

In practice, images are noisy and only approximately sparse; therefore, compressed sensing algorithms seek to recover approximations of X by determining the sparsest representation α such that Y ≈ AΨα. For additive Gaussian noise, this is typically done by solving the optimization problem $\alpha^* = \arg\min_\alpha F(\alpha)$ for objective functions F(α) such as:

$$F(\alpha) = \|\alpha\|_1 + \lambda\,\|A\Psi\alpha - Y\|_2^2$$
where $\|\cdot\|_2$ is the Euclidean norm and λ is a Lagrange multiplier.
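
To make this concrete, below is a minimal sketch of how an objective of this form can be minimized with the iterative shrinkage-thresholding algorithm (ISTA). The combined operator B = AΨ, the step-size rule and the fixed iteration count are illustrative assumptions; this is not the solver used in this work (see Section 2.5).

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm: shrink every entry towards zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_l1(B, Y, lam, n_iter=500):
    """Minimize ||alpha||_1 + lam * ||B @ alpha - Y||_2^2 with generic ISTA.

    B is the combined operator A @ Psi (M x W) and Y the M compressed measurements.
    The step size 1/L is set from the largest singular value of B.
    """
    L = 2.0 * lam * np.linalg.norm(B, 2) ** 2      # Lipschitz constant of the smooth term
    alpha = np.zeros(B.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * lam * B.T @ (B @ alpha - Y)   # gradient of lam * ||B alpha - Y||^2
        alpha = soft_threshold(alpha - grad / L, 1.0 / L)
    return alpha
```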

Under suitable conditions on A, such as the restricted isometry property (which is fulfilled in particular by random Gaussian matrices), it has been shown that a good approximation of the N values in α can be recovered from a number of compressed measurements $M = O(K \log(N/K))$, which can be much smaller than N [6,24–26]. The reconstructed image is then simply obtained as X* = Ψα*. For images corrupted by Poisson noise, the appropriate objective function becomes:

$$F(\alpha) = \|\alpha\|_1 + \lambda\,\mathcal{L}(\alpha)$$
and its minimization is subject to the positivity constraint Ψα ≥ 0, where $\mathcal{L}(\alpha)$ designates the negative Poisson log-likelihood $\mathcal{L}(\alpha) = \mathbb{1}^T A\Psi\alpha - \sum_{i=1}^{M} Y_i \log(e_i^T A\Psi\alpha)$, $\mathbb{1}$ is an M × 1 vector of ones, $e_i$ is the i-th canonical basis vector and the superscript T denotes transposition [27]. A variety of efficient algorithms for compressed sensing recovery have been proposed, mostly for Gaussian noise, but also for Poisson noise [27,28]. See [7,8,29] for in-depth introductions to sparsity and compressed sensing.
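
As a small illustration, the following sketch evaluates the negative Poisson log-likelihood and the corresponding objective with NumPy; the variable names and the small stabilizing constant are assumptions, and the actual minimization is delegated to SPIRAL-TAP (Section 2.5).

```python
import numpy as np

def poisson_neg_log_likelihood(alpha, B, Y, eps=1e-12):
    """L(alpha) = 1^T (B alpha) - sum_i Y_i * log((B alpha)_i), up to a constant.

    B stands for the product A @ Psi; B @ alpha must remain (essentially) positive,
    which is why a tiny eps is added inside the logarithm for numerical safety.
    """
    mu = B @ alpha                       # expected photon counts in each measurement
    return np.sum(mu) - np.sum(Y * np.log(mu + eps))

def poisson_objective(alpha, B, Y, lam):
    """Full objective ||alpha||_1 + lam * L(alpha), with Psi @ alpha >= 0 assumed."""
    return np.sum(np.abs(alpha)) + lam * poisson_neg_log_likelihood(alpha, B, Y)
```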

2.2. Axially compressed imaging scheme

The traditional way to image a 3D volume is to successively scan the focal plane of the microscope along the z axis in a step-wise fashion, with spacing ∆z, and acquire a 2D image at each position z = k∆z (k = 1 … N). We hereafter refer to this imaging scheme as plane-by-plane acquisition, see Fig. 1(a). In this scheme, the focal plane position is given by:

$$z_f(t) = E\!\left(\frac{t}{\Delta t}\right) \times \Delta z$$
where ∆t is the camera exposure time and E(x) denotes the integer part of x (floor function). The spacing ∆z is usually dictated by the point spread function (PSF) width along the z-axis (Nyquist sampling). In plane-by-plane imaging, the k-th camera frame $F_k^{p.by.p.}$, k = 1 … N, carries information from the z = k∆z plane only and is given by:
$$F_k^{p.by.p.}(x, y) = L_0 \times \Delta t \times (I * PSF)(x, y, k\Delta z) \quad \text{for } k = 1 \dots N$$
where I(x, y, z) designates the 3D distribution of fluorophores in the sample, L0 is the laser intensity, PSF is the 3D PSF of the microscope and * stands for convolution. In this setting, the acquisition time for a full 3D z-stack with N focal planes is $N\Delta t$ and the light dose received by the sample is $N L_0 \Delta t$.


Fig. 1 Principle of the proposed 3D compressed imaging method compared to traditional 3D plane-by-plane imaging. (a) Plane-by-plane imaging: for each camera frame (green curve indicates if the shutter is open or closed), one plane of the sample (red curve indicates z position) is illuminated at a constant laser intensity (blue curve shows transmission percentage of the excitation light source). The process is repeated for each plane (N = 101 times) to acquire a z-stack. Finally, the full imaging sequence is repeated n times to acquire a 4D movie for a total of N * n frames. The blue and red dots represent the illumination intensity and stage position at each time point respectively. This imaging scheme can be represented as the application of a square diagonal measurement matrix A as shown in (b): for each camera frame (row), only one z plane is illuminated (column). (c) Axially compressed imaging: the stage continually sweeps through the entire axial range while the illumination is modulated to create a specific axial light pattern. In this scheme, multiple planes of the sample are illuminated during a single camera exposure frame. This process is repeated M = 10 < N times with different light patterns, thus performing an optomechanical implementation of a compressed measurement matrix, as shown in (d). Finally, the full imaging sequence is repeated n times to acquire a 4D dataset with a total of M*n (10*101) frames.


In the compressed sensing imaging scheme proposed here, the axial dimension of a 3D stack is acquired in a compressed fashion, such that the k-th acquired frame no longer contains information from the z = k∆z position only, but is a linear combination of information from multiple z-planes:

$$F_k^{comp}(x, y) = L_0 \times \Delta t \times \int_0^{(N-1)\Delta z} A_k(z) \times (I * PSF)(x, y, z)\, dz \quad \text{for } k = 1 \dots M < N$$
where Ak (z) is a function that describes how the image intensity profile along the z-axis is combined into the single frame k. Note that this scheme is compressed in the sense that it requires M < N frames. This expression can be approximated and discretized as:
$$F_k^{comp}(x, y) = L_0 \times \Delta t \times \sum_{i=1}^{N} A_{k,i} \times (I * PSF)(x, y, i\Delta z) \quad \text{for } k = 1 \dots M < N$$

In Eq. 5 above, the matrix A is the discrete counterpart of $A_k(z)$ (k = 1 … M) and describes how the image intensity from each of the N z-planes of the stack is combined into a single value. Note that in the case where the measurement matrix is the identity matrix ($A_{k,i} = \delta_{i,k}$ with N = M and δ the Kronecker symbol), this scheme reduces to the plane-by-plane imaging scheme of Eq. 3, see Fig. 1(b).
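
To illustrate the discretized measurement model, the sketch below simulates the M compressed frames from a blurred z-stack; the array layout and the noise-free setting are simplifying assumptions.

```python
import numpy as np

def compressed_frames(blurred_stack, A, L0=1.0, dt=1.0):
    """Simulate the M compressed camera frames from a blurred z-stack.

    blurred_stack: (N, Ny, Nx) array holding (I * PSF)(x, y, i*dz) for i = 1..N.
    A:             (M, N) measurement matrix, one row per camera frame.
    Returns an (M, Ny, Nx) array F with F[k] = L0*dt * sum_i A[k, i] * blurred_stack[i].
    """
    return L0 * dt * np.tensordot(A, blurred_stack, axes=([1], [0]))

# With A equal to the identity matrix (M = N), the same function reproduces
# plane-by-plane imaging: each frame then contains a single z plane.
# F_pbp = compressed_frames(blurred_stack, np.eye(blurred_stack.shape[0]))
```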

The connection with the compressed sensing setting outlined in Section 2.1 is immediate. For any fixed (x, y) location, $Y_k = F_k^{comp}(x, y)$ (k = 1 … M) defines an M × 1 vector Y, that is, a set of compressed measurements obeying Y = AX, where $X = \tilde{I}$ is the N × 1 vector corresponding to the intensity profile along the z axis convolved with the PSF and sampled every ∆z, i.e. $\tilde{I}_i = (I * PSF)(x, y, i\Delta z)$ for i = 1 … N. Thus, the matrix formulation of compressed sensing for a given (x, y) location is $F^{comp} = A\tilde{I}$. Applying the results mentioned in Section 2.1, approximate recovery of $\tilde{I}$ from the compressed measurements $F^{comp}$ should therefore be possible under suitable conditions.

2.3. Physical implementation

In practice, our microscopy system achieves axial compression in the following way. During each camera exposure, the stage is swept at constant speed across the entire z-range of the volume to be imaged, see Fig. 1(c):

$$z_f(t) = N\Delta z \left(\frac{t}{\Delta t} - E\!\left(\frac{t}{\Delta t}\right)\right)$$

In this way, each pixel (x, y) of the camera records an integration of the emitted fluorescence along the z axis. During this axial sweep, the excitation laser intensity at the sample, $L(t) = L_0 T(t)$, is modulated over time, such that:

$$F_k^{comp}(x, y) = L_0 \int_{(k-1)\Delta t}^{k\Delta t} T(t) \times (I * PSF)(x, y, z_f(t))\, dt = \frac{\Delta t}{N\Delta z}\, L_0 \int_0^{(N-1)\Delta z} T(z) \times (I * PSF)(x, y, z)\, dz$$
is an average of the fluorescence distribution along z weighted by the modulated excitation light intensity. In the discrete approximation, we have:
$$F_k^{comp}(x, y) = L_0 \times \Delta t \times \frac{1}{N} \sum_{i=1}^{N} T_{k,i} \times (I * PSF)(x, y, i\Delta z)$$
where $L_0 \times T_{k,i}$ is the laser power applied during frame k when the focal plane is at z = i∆z. This modulation obeys a user-defined pattern specified by the k-th row of the measurement matrix A, as in Eq. 5, see Fig. 1(c), i.e. we set T = NA.

This procedure is repeated for all M rows of the measurement matrix, resulting in M compressed 2D images $F_1^{comp}, \dots, F_M^{comp}$. For a given constant exposure time ∆t, the acquisition speedup compared to plane-by-plane imaging is thus simply given by the ratio κ = N/M. Figure 1(c) jointly shows the z position of the stage and the illumination intensity for a measurement matrix A consisting of a truncated Fourier basis shown in Fig. 1(d).

A major advantage of this scheme is that it can be implemented without modification of the light path and simply requires the synchronization of two microscope components: (1) the z-piezo, which allows precise control of the axial focus (sample or objective), and (2) the AOTF (acousto-optic tunable filter), which allows precise and rapid modulation of the excitation light intensity transmitted to the sample (T). Instead of using an AOTF, it is in principle possible to modulate the intensity of the light source directly.
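
As an illustration of the required synchronization, the following sketch tabulates the stage position and AOTF transmission over a single camera exposure from one row of the measurement matrix; the sampling rate and the perfectly linear ramp are assumptions, and this is not the actual FPGA or Arduino control code.

```python
import numpy as np

def frame_waveforms(A_row, N, dz, dt, n_samples=1000):
    """Illustrative timing model for one camera frame (not the real firmware).

    A_row: one row of the measurement matrix (length N, values in [0, 1/N]).
    The stage ramps linearly over the full axial range N*dz while the AOTF
    transmission follows T = N * A, sampled at whichever plane is in focus.
    """
    t = np.linspace(0.0, dt, n_samples, endpoint=False)
    z = N * dz * (t / dt)                               # constant-speed axial sweep
    plane = np.minimum((z / dz).astype(int), N - 1)     # index of the plane in focus
    transmission = N * np.asarray(A_row)[plane]         # AOTF transmission, in [0, 1]
    return t, z, transmission
```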

Note that since the optical coding is performed by light modulation, this setup leaves complete freedom in the choice of the measurement matrix A, as long as its values are all positive. For example, random sensing matrices, which allow compressed sensing reconstruction for images sparse in any transform basis Ψ (universality property), can be implemented in a straightforward manner. In this paper, we choose a Fourier matrix, which is an optimal sensing matrix for images that are sparse in the direct spatial domain, taking the M first rows of the matrix, from low to high frequencies, see Fig. 1(d). Importantly, we linearly scale the matrix A such that all its values fall between 0 and 1/N. This ensures that during each frame k, the sample receives a light dose of $L_0 \Delta t \frac{1}{N}\sum_{i=1}^{N} T_{k,i} = L_0 \Delta t \sum_{i=1}^{N} A_{k,i} \le L_0 \Delta t \cdot N \cdot \frac{1}{N} = L_0 \Delta t$, i.e. less than or equal to the dose received in plane-by-plane imaging. Therefore, the total light dose received by the sample during a compressed imaging acquisition with M frames is at least κ = N/M times lower than in the plane-by-plane acquisition, where κ is the compression ratio with respect to plane-by-plane, Nyquist-sampled imaging.
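
For reference, here is a minimal sketch of how such a truncated, rescaled Fourier measurement matrix could be constructed, together with a check of the per-frame light-dose bound; the exact frequency ordering and scaling used in our acquisitions may differ in detail.

```python
import numpy as np

def fourier_measurement_matrix(N, M):
    """First M rows (low to high frequency) of a real Fourier-type basis,
    linearly rescaled so that every entry lies between 0 and 1/N (M >= 2 assumed)."""
    n = np.arange(N)
    rows = []
    for k in range(M):
        if k == 0:
            rows.append(np.ones(N))                                  # DC component
        elif k % 2 == 1:
            rows.append(np.cos(2 * np.pi * ((k + 1) // 2) * n / N))
        else:
            rows.append(np.sin(2 * np.pi * (k // 2) * n / N))
    A = np.array(rows)
    return (A - A.min()) / (A.max() - A.min()) / N                   # rescale to [0, 1/N]

A = fourier_measurement_matrix(N=101, M=10)
# Each row sums to at most N * (1/N) = 1, so no frame delivers more light
# than a single plane-by-plane exposure.
assert np.all(A.sum(axis=1) <= 1.0 + 1e-9)
```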

2.4. Sparsity prior and PSF model

In this paper, we assume for simplicity that the 3D distribution of fluorescent structures is sparse in the spatial domain, but we take into account the 3D blurring caused by diffraction. This is done by incorporating a model of the 3D PSF into the N × W redundant dictionary Ψ, such that the 3D image can be modeled as X = Ψα, where α represents the 3D distribution of fluorescent structures and is assumed to be (approximately) sparse. In practice, we first measure the empirical PSF of the microscope using a conventional plane-by-plane z-stack and derive one cropped 2D image in the (x, z) plane. We then build the dictionary Ψ: each element of the dictionary is defined as a translation of the empirical PSF in the (x, z) plane within a given 2D reconstruction window. Thus, the dictionary is a collection of PSFs at various locations. Before reconstruction, both the compressed stack and the elements of the dictionary are flattened into 1D vectors. Therefore, although our scheme performs compression only along the z axis, the reconstruction algorithm incorporates a 2D sparsity prior [30].
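
The sketch below illustrates, under simplifying assumptions, how such a dictionary of translated PSFs can be assembled; for brevity it only keeps translations for which the PSF fits entirely inside the reconstruction window.

```python
import numpy as np

def psf_dictionary(psf_xz, window_shape):
    """Columns of Psi are x/z translations of a cropped empirical 2D PSF (simplified).

    psf_xz:       2D array, empirical PSF in the (x, z) plane.
    window_shape: (nx, nz) size of the reconstruction window.
    Returns Psi of shape (nx*nz, n_translations), each column a flattened,
    zero-padded copy of the PSF placed at one (x, z) offset.
    """
    nx, nz = window_shape
    px, pz = psf_xz.shape
    columns = []
    for ix in range(nx - px + 1):
        for iz in range(nz - pz + 1):
            atom = np.zeros(window_shape)
            atom[ix:ix + px, iz:iz + pz] = psf_xz    # PSF translated to this offset
            columns.append(atom.ravel())
    return np.stack(columns, axis=1)
```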

2.5. Numerical implementation

The reconstructions are performed using a custom Python port of the previously published SPIRAL-TAP algorithm [27], which solves the optimization problems (1) or (2) above. Our port is publicly available, together with sample data and analysis scripts (see Section Software and data availability).

In practice, performing a reconstruction on a full 3D (or even 2D) image is not computationally tractable. Since we use a PSF model with restricted spatial support, we assume that two (x, y) positions located farther apart than the characteristic width of the PSF are independent, and reconstruct small overlapping chunks in parallel on a computing cluster. We then obtain a single 2D image by averaging the overlapping chunks, and stack these 2D images to obtain the 3D volume.
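
A sketch of this chunked reconstruction strategy is given below; the chunk size, overlap and the reconstruct_chunk callable (standing in for a SPIRAL-TAP solve with the PSF dictionary) are placeholders.

```python
import numpy as np

def reconstruct_by_chunks(F_comp, Nz, reconstruct_chunk, chunk=32, overlap=8):
    """Reconstruct one (x, z) section from compressed frames, chunk by chunk.

    F_comp:            (M, Nx) compressed measurements for this section (Nx >= chunk).
    Nz:                number of z planes in the reconstruction.
    reconstruct_chunk: callable mapping an (M, chunk) block to an (Nz, chunk) image
                       (placeholder for the actual sparse solver).
    Overlapping chunks are solved independently (in practice, in parallel on a
    computing cluster) and averaged where they overlap.
    """
    M, Nx = F_comp.shape
    step = chunk - overlap
    starts = list(range(0, Nx - chunk + 1, step))
    if starts[-1] != Nx - chunk:
        starts.append(Nx - chunk)                  # make sure the last columns are covered
    result = np.zeros((Nz, Nx))
    weight = np.zeros(Nx)
    for x0 in starts:
        result[:, x0:x0 + chunk] += reconstruct_chunk(F_comp[:, x0:x0 + chunk])
        weight[x0:x0 + chunk] += 1.0
    return result / weight
```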

3. Compressed imaging on simulated images

3.1. Generation of test images and metrics

We first tested our compressed sensing approach on simulated images. For this purpose, we generated a series of 100 synthetic images, termed ground truth (GT). The images contained features at different scales and sparsity levels (see Fig. 2(a, top)) along the compression axis (z axis). The images were scaled so that the pixel of highest intensity had a value of $I_{max}$ = 10,000 counts. For simplicity, the effects of diffraction blurring are ignored here.


Fig. 2 Simulations comparing the compressed sensing scheme with the traditional plane-by-plane scheme under various compression ratios and SNRs. (a). Principle of the simulation: from a generated ground truth image (GT) and a specified SNR, (left) a noisy reference ($NR_{SNR}$) is generated by adding Poisson and Gaussian noise to the ground truth. In parallel (right), the ground truth is compressed (along the z axis) with a compression ratio κ and an equivalent amount of noise is added at the same time as the compression (see main text for details). The compressed images are further decompressed (image $CS_{SNR}^{\kappa}$) and the mean square error (MSE) is computed with respect to the ground truth. (b). Examples of simulated images in the (x, z) plane. From top to bottom: (plane-by-plane) GT, (1:2) to (1:20) $CS_{SNR}^{\kappa}$ images reconstructed at an SNR of 20 and with compression ratios of 1:2 to 1:20. The blue arrows represent lines of low sparsity, high sparsity and medium sparsity (respectively x1, x2 and x3). (c). Quality of the reconstruction (assessed by the MSE with respect to GT) for various SNRs and compression ratios. The dashed line is the MSE of the noisy reference $NR_{SNR}$ with respect to the ground truth GT. The dash-dotted line is the MSE of the noisy reference acquired with a ten times lower exposure time, $NR_{SNR}^{10\times}$. Inset: close-up of the low SNR region (SNR = 1–10).


Next, to simulate an acquisition in realistic conditions in the conventional plane-by-plane scheme, we corrupted the ground truth images GT using Poisson noise ($\mathcal{P}$) and additive half-normal noise ($|\mathcal{N}|$). This resulted in noisy reference images ($NR_{SNR}$) for different SNRs (see below): $NR_{SNR} = \mathcal{P}(GT) + |\mathcal{N}|(\sigma\sqrt{\pi/2})$, where σ corresponds to the expected number of background photons needed to achieve a given SNR, that is, σ = $I_{max}$/SNR. We also simulated images acquired by reducing the exposure time per frame by a factor of ten, allowing us to compare the plane-by-plane and the compressed acquisition schemes at a constant acquisition time per z-stack. To do so, we assumed that the SNR scales with the exposure time ∆t as $SNR \propto \sqrt{\Delta t}$ and simulated plane-by-plane images with the correspondingly reduced SNR. We denote those images as $NR_{SNR}^{10\times}$.

In parallel, we computed compressed versions of the same ground truth image by applying the measurement matrix A to GT (for different compression ratios κ, i.e. varying numbers of rows of A), and subsequently applied Poisson and additive Gaussian noise as for the images in the plane-by-plane acquisition ($F^{comp} = \mathcal{P}(A\tilde{I}) + |\mathcal{N}|(\sigma\sqrt{\pi/2})$).
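
The following sketch summarizes this simulation pipeline (noisy plane-by-plane reference and noisy compressed measurements); the random seed and array shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_reference(GT, snr, I_max=10_000):
    """Noisy plane-by-plane reference: Poisson noise plus half-normal background.

    sigma = I_max / SNR is the expected number of background photons, so the
    half-normal scale is sigma * sqrt(pi/2) (its mean is then exactly sigma)."""
    sigma = I_max / snr
    background = np.abs(rng.normal(0.0, sigma * np.sqrt(np.pi / 2), GT.shape))
    return rng.poisson(GT) + background

def noisy_compressed(GT_profiles, A, snr, I_max=10_000):
    """Noisy compressed measurements: apply the sensing matrix along z, then add
    the same Poisson and half-normal noise as in the plane-by-plane simulation.

    GT_profiles: (N, n_pixels) ground-truth z profiles; output is (M, n_pixels)."""
    sigma = I_max / snr
    compressed = A @ GT_profiles
    background = np.abs(rng.normal(0.0, sigma * np.sqrt(np.pi / 2), compressed.shape))
    return rng.poisson(compressed) + background
```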

Then, we computed reconstructions from these noisy compressed images, and denote the resulting 3D images as $CS_{SNR}^{\kappa}$. The reconstructions were performed without a PSF model, thus assuming sparsity of the reconstructed image directly and using Ψ = $I_N$. The SNR was computed as the mean of the non-zero pixels of the ground truth image divided by the mean of the additive noise, see Fig. 2(b).

Finally, we quantified reconstruction quality by computing the mean square error MSE(GT, $CS_{SNR}^{\kappa}$) between the ground truth images GT and the compressed sensing reconstructions $CS_{SNR}^{\kappa}$. For comparison, we also computed the MSE between the ground truth images and the plane-by-plane acquisition images obtained for the same SNR, MSE(GT, $NR_{SNR}$), and with a ten times lower exposure time, MSE(GT, $NR_{SNR}^{10\times}$).

3.2. Results on simulated images

We first compare reconstructions for a fixed SNR=20 and increasing compression ratios κ, see Fig. 2(b). At low compression ratios (1:2, corresponding to a two fold speedup compared to the plane-by-plane acquisition), all features of the simulated images are accurately reconstructed, including both high and low frequency details. Minor artifacts are visible in the regions of moderate sparsity (arrow x1). As the compression ratio increases, fine features are progressively lost (arrow x2), whereas larger objects remain visible at their correct location (arrows x1 and x3). At high compression ratios, the reconstructed intensity significantly diverges from the ground truth. Nevertheless, this first example illustrates that object positions and shapes can be approximately reconstructed from compressed images with high compression ratios, even when the ground truth images exhibit quite variable levels of sparsity.

We then evaluate the influence of different noise levels on reconstruction quality by computing the MSE for SNR ranging from 1 to 80 and for compression ratios ranging from 1:2 to 1:30, see Fig. 2(c). At high SNR, the noisy plane-by-plane reference ($NR_{SNR}$) exhibits a much lower MSE than the compressed sensing reconstruction. However, as mentioned in the introduction, compressed sensing is most useful for conditions in which photodamage and/or acquisition speed are limiting, i.e. for low SNR images. Figure 2(c) shows that for low SNR (≤ 15), the MSE of the noisy reference and the reconstruction become close. Furthermore, in this SNR range, the reconstruction from a 1:10 compression ratio shows a significantly lower MSE than a plane-by-plane acquisition performed in the same overall time, see the dash-dotted line in Fig. 2(c). This suggests that our compressed sensing approach can recover images of similar quality to plane-by-plane imaging, but with a considerable reduction of acquisition time and light exposure, and with a higher quality than simply decreasing the exposure time in the traditional plane-by-plane mode. Equivalently, using the same acquisition time and light exposure as in plane-by-plane imaging, compressed imaging enables reconstruction of higher quality images as measured by the MSE metric.

We note that the reconstructions from a higher compression ratio tend to have a lower MSE than the ones from a lower compression ratio, a result that does not match the visual quality of the images shown in Fig. 2(b). This is due to the smoothing (loss of high frequency content) observed in the higher compression reconstructions.

Thus, our simulation results suggest that axially compressed sensing acquisition might be a worthwhile alternative to plane-by-plane imaging for faster and less phototoxic 3D microscopy.

4. Compressed lattice light sheet imaging

4.1. Lattice light sheet implementation

Having demonstrated our technique on simulated images, we implemented our compressed sensing scheme on a lattice light sheet microscope (LLSM). In a light sheet microscope [31], one objective is used to produce a very thin sheet of light that illuminates the sample at a 90° angle to the axis of a second objective used for detection. Due to its excellent axial resolution, a lattice light sheet microscope is an ideal candidate to implement our compressed sensing scheme.

In LLSM, the focus is adjusted by moving the light sheet (using a scanning galvanometer mirror) and the focus of the 20×/1.1 NA water immersion observation objective Nikon MRD77220 (which is mounted on a piezo stage) in a synchronized manner across the sample, see Fig. 3(a). For the compressed sensing acquisitions, we modified the software generating the FPGA control command (Coleman Technologies) in order to synchronize the motion of the sheet and the observation objective with a custom light modulation produced by the AOTF (AOTFnC-400-650-TN, AA Optoelectronics) during a single camera exposure. Our modified software also allows loading a predefined measurement matrix A and setting exposure parameters.


Fig. 3 Compressed imaging reconstructions of fluorescent beads acquired with a lattice light sheet microscope. (a). Principle of a lattice light sheet microscope: two objectives at a 90° angle are used to observe the sample. The light sheet is generated through a spatial light modulator (SLM) and associated optics. The focus is adjusted by a coordinated move of the z piezo (which translates the observation objective) and of the z galvo (which translates the light sheet). Synchronization is achieved by an FPGA (Field-Programmable Gate Array). (b). Sample reconstructions (compressed) in the (x, z) plane from a 1:10 compressed acquisition and the corresponding acquisition in the plane-by-plane imaging scheme for two y positions (termed position 1 and position 2). (c). Compressed imaging sample reconstructions at increasing compression ratios (from 1:2 to 1:50) compared to the plane-by-plane scheme (top). For each reconstruction, a close-up of the PSF is displayed (PSF column) and the 2D frequency content of the PSF is displayed next to it (FFT column). The PSF shown in red is the typical response of the LLSM, whereas the shape of the yellow PSF is likely due to a defect in the bead. (d). Line profiles across the two highlighted PSFs (yellow and red, respectively left and right): (top) close-up along the x axis, (bottom) close-up along the z axis. The colors correspond to different compression ratios and the dotted line to the plane-by-plane reference. Horizontal axis in µm. The field of view in the (x, z) plane is 50×20 µm (512×101 px). A full 3D reconstruction is provided in Visualization 1.


For the experiments reported below, we performed two sets of acquisitions: (i) one plane-by-plane acquisition for reference, and (ii) one compressed sensing acquisition. The plane-by-plane acquisition was performed by setting the measurement matrix A equal to the identity matrix in the acquisition software (A = $I_N$, Fig. 1(b)) to facilitate comparison with the compressed imaging scheme, and by acquiring a z-stack of 101 frames at a fixed exposure time of ∆t = 100 ms or 200 ms depending on the sample (see below), corresponding to a reference-image SNR of ∼10–15. For the compressed sensing acquisition, we used a Fourier measurement matrix, depicted in Fig. 1(d), with the appropriate scaling (see Section 2.3) to ensure that the light dose delivered to the sample for each camera exposure was equal to or lower than in the plane-by-plane acquisition. We acquired 50 frames $F_1^{comp}, \dots, F_{50}^{comp}$ with the same exposure time ∆t, corresponding to a compression ratio of 1:2. To analyze higher compression ratios, we simply considered subsets of these frames $F_1^{comp}, F_2^{comp}, \dots, F_M^{comp}$, with M < 50. The highest compression ratio, 1:50, was obtained by keeping only the first two compressed frames $F_1^{comp}$ and $F_2^{comp}$. Varying the compression ratio in this manner allowed us to assess the minimum number of measurements required to obtain an acceptable reconstruction quality.
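
A minimal sketch of this frame-subsetting strategy, with hypothetical array names:

```python
def subset_for_ratio(F_comp, A_full, M):
    """Hypothetical helper: keep the first M compressed frames and the matching
    rows of the sensing matrix.

    F_comp: (50, Ny, Nx) compressed frames from the 1:2 acquisition;
    A_full: (50, N) measurement matrix. A 1:50 ratio corresponds to M = 2."""
    return F_comp[:M], A_full[:M]

# Example: emulate a 1:10 compression ratio (N = 101, M = 10) from the stored frames.
# F_10, A_10 = subset_for_ratio(F_comp, A_full, M=10)
```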

4.2. Results on fluorescent beads

We first imaged fluorescent beads (Tetraspeck 100 nm beads) on a glass coverslip illuminated with a 488 nm laser. The exposure time for a single frame was set to ∆t = 100 ms and the stage was scanned over a 20 µm range (i.e. ∆z = 0.2µm).

Figure 3(c) compares a 3D image of two fluorescent beads obtained through plane-by-plane imaging to 3D images reconstructed from the compressed acquisition for increasing compression ratios κ (from 1:2 to 1:50). Qualitatively, the bead signal is reconstructed at the correct position for all compression ratios, including the highest (1:50), which corresponds to only 2 frames (instead of 101 in the plane-by-plane acquisition). As the compression ratio is increased from 1:2 to 1:50, the reconstructed images of the two beads (i.e. PSFs) progressively deteriorate and become significantly distorted for compression ratios ≥ 1:20. However, up to a compression ratio of 1:10, all the beads and the fine features of the PSF shape are accurately reproduced, see Fig. 3(b). This is also apparent in the line profiles in Fig. 3(d). Thus, this experiment illustrates that high quality reconstruction of 3D images is possible in compressed imaging using 1 s of total acquisition time, compared to ∼10 s in the plane-by-plane imaging scheme, representing a 10-fold speedup.

4.3. Results on fixed cells

Encouraged by these results on beads, we proceeded to imaging fixed cells. Mouse embryonic stem cells (mESCs) were seeded on glass coverslips and fixed in 4% paraformaldehyde (PFA). The cells were then stained for actin with a fluorescently labeled probe (phalloidin-RFP) and imaged in an oxygen-scavenging medium. The camera exposure time was set to ∆t = 200 ms and the z scanning range to 20 µm (∆z = 0.2µm).

Results are shown in Fig. 4(a), where the 3D reconstruction of a sample imaged with 1:5 compression (right) is compared to the plane-by-plane acquisition (left). It is apparent that in both lateral (x, y) and axial (x, z) sections, fine and large details are successfully reconstructed. Although some high frequency details are lost in the (x, z) plane, most features are faithfully reproduced. This observation is further confirmed by the maximum intensity projections of the reconstructed stacks at various compression ratios, see Fig. 4(b). For compression ratios up to 1:5, the reconstructions show intensity profiles very similar to those of the plane-by-plane reference image. At higher compression ratios, however, significant artifacts become visible along the z axis (profiles in Fig. 4(a) and 4(b), bottom). Artifact-free reconstructions at higher compression ratios might be achievable through a number of possible improvements (see Discussion). Nevertheless, these results already indicate that our compressed sensing approach is a viable method to achieve substantial reductions of acquisition time (and light exposure) for 3D imaging of biological samples.


Fig. 4 Compressed imaging reconstruction of phalloidin-RFP-stained, fixed mouse embryonic stem cells (mESCs) acquired with a lattice light sheet microscope. (a). Example reconstruction from a 5-fold compression ratio (right) and the corresponding plane-by-plane reference acquisition (left), shown (1., top) in the (x, y) plane and (2., middle) in the (x, z) plane. Dotted lines indicate the location of the line profiles presented in panel 3. (bottom): (left) profile along the x axis (yellow dotted line of panel 2.) for increasing compression ratios; (right) profile along the z axis (red dotted line of panel 2.); the curves are the average over 3 planes in the y dimension. (b). Maximum intensity projection in the (x, y) plane (1., left) and in the (x, z) plane (2., right) of the plane-by-plane stack (top) and of the reconstructed stack at increasing compression ratios (two bottom pictures). The orange dotted line shows the location of the line profile displayed in panel 3. (bottom): line profile of the reconstruction at increasing compression ratios (dotted lines) compared to the plane-by-plane reference (continuous line). The field of view in the (x, z) plane is 25×20 µm (256×101 px) and the field of view in the (x, y) plane is 25×25 µm (256×256 px). A full 3D reconstruction is provided in Visualization 2.


5. Compressed epifluorescence microscopy

5.1. Implementation

We also implemented our compressed sensing scheme on a standard epifluorescence microscope, an almost ubiquitous instrument in cell biology labs. Since our imaging strategy relies only on the synchronization of the stage position and the light modulation, it is very versatile and suitable for a wide range of microscopes.

An epifluorescence microscope (Nikon Eclipse TI) equipped with an AOTF (AOTFnC-400-650-TN, AA Optoelectronics) and a z piezo stage (Nano-ZL 500, Mad City Labs) is controlled using an Arduino microcontroller (Genuino Uno) to synchronize the AOTF and the stage through their analog input, based on the camera fire signal, see Fig. 5(a).


Fig. 5 Compressed imaging reconstructions of fluorescent beads acquired with an epifluorescence microscope. (a). Principle of the epifluorescence microscope: both the AOTF and the motorized z stage are synchronized by hardware (Arduino). (b). Example reconstructions (compressed) in the (x, z) plane from a 1:10 compressed acquisition and the corresponding acquisition in the traditional imaging scheme (plane-by-plane) for two y positions (termed position 1 and position 2). (c). Sample reconstructions at increasing compression ratios (from 1:2 to 1:50) compared to the plane-by-plane imaging scheme (top). (d). x and z profiles of one selected PSF (highlighted in panel c) reconstructed from various compression ratios. The black dotted line represents the plane-by-plane reference. A full 3D reconstruction is provided in Visualization 3.


The microscope was controlled using MicroManager [32], and custom firmware was written for the Arduino, allowing for a software switch between plane-by-plane and compressed sensing acquisition. In practice, the measurement matrix A is first loaded to the Arduino, then the focus is adjusted in the plane-by-plane imaging mode and an acquisition sequence is set using a custom MicroManager plugin. Finally, the Arduino is switched to the compressed sensing mode and the images are acquired and handled using the usual MicroManager logic.

5.2. Results on fluorescent beads

We imaged fluorescent beads on a glass coverslip over an axial range of ∼100 µm with a 60×/1.3 NA oil immersion objective, an exposure time of ∆t = 200 ms and illumination with a 561 nm laser, leading to an average SNR of 15. Results are shown in Fig. 5(b)–5(d) for varying compression ratios κ. It is apparent from Fig. 5(c) that the reconstructed images qualitatively recover the bead positions accurately for compression ratios up to 1:10. Interestingly, the reconstructed images display a decreased level of background noise, revealing the PSF shape of out-of-plane beads (i.e. beads in another y = constant plane) that are not visible in the noisy plane-by-plane imaging reference, see Fig. 5(b). Although the location of the PSF in the reconstructed image is consistent with the plane-by-plane reference depicted in Fig. 5(d), the PSF in the latter is significantly sharper than in the reconstructed image. This might be due either to insufficient synchronization of the AOTF with the stage or to an inaccurate PSF model used for the reconstruction. For higher compression ratios (≥ 1:20), the PSF is no longer well localized in z.

These results indicate that our compressed sensing approach can be successfully applied to an epifluorescence setup and achieve roughly ten-fold compression ratios. Given the large availability of epifluorescence microscopes, this shows that the benefits of our compressed imaging approach can be made widely accessible with little effort.

6. Discussion

In this paper, we present a new compressed sensing scheme for 3D fluorescence microscopy that can be applied to a wide range of microscopes. We validate our method through simulations, and demonstrate its feasibility on a lattice light sheet microscope and on an epifluorescence microscope. We achieve reductions in image acquisition time and light exposure (compression ratios) of up to ten fold on both setups.

Most previously proposed compressed sensing strategies for biological microscopy require non-trivial modifications to microscope hardware, such as adding a digital micromirror device to the light path [17], or conjugating the camera with the back pupil plane to perform Fourier space imaging [16]. By contrast, our compressed sensing scheme is adaptable to any type of epifluorescence microscope, as long as the user can control both the stage position and the illumination intensity. Our lattice light sheet microscope implementation required software adjustments due to its complex nature, while implementation on an epifluorescence microscope required only one extra microcontroller (to ensure proper synchronization of the camera, AOTF and stage). We provide full schematics of the Arduino setup, together with a MicroManager plugin, which allows quick switching between plane-by-plane imaging and the compressed sensing imaging scheme.

Since our approach modulates the light intensity before it reaches the sample, it reduces the excitation light dose delivered to the sample. This is of crucial importance, since fluorescence imaging causes damage to the sample, ranging from photobleaching of the fluorescent probes to various metabolic and developmental defects. Furthermore, the light dose delivered to the sample is inversely proportional to the compression ratio: a ten fold compression ratio yields a ten fold reduction of the light dose at the sample compared to plane-by-plane imaging. Since phototoxicity is a nonlinear effect [33], we expect our compressed imaging method to allow dramatic reductions in photodamage. We note that due to the loss of high-frequency information in the implementation described here, the current approach is best suited for applications where long-term or high-temporal-resolution 4D observation of larger-scale fluorescently labeled structures is required. Potential applications of our scheme include long term imaging of transient cell cycle events in a developing fly embryo, the propagation of calcium influx in beating cardiomyocytes, and the observation of signaling events or motility in live organisms or in cells growing in 3D matrices.

Another major benefit of our compressed imaging scheme is an increase in the temporal resolution of 3D imaging, thus allowing faster acquisition at a given SNR. For example, if plane-by-plane imaging requires 200 planes with 10 ms exposure each, i.e. a total of 2 s to acquire a given 3D volume, a compressed imaging scheme with a compression ratio of 10 will require only 200 ms for the same volume. Our simulations in Fig. 2 demonstrate that this scheme yields images of lower MSE than a plane-by-plane z-stack acquired with a 1 ms exposure time (that is a 200 ms overall acquisition time). This reduction in acquisition time should enable an equivalent increase in the temporal resolution of dynamic 3D microscopy of living biological samples. We anticipate that future work will build on the proposed approach to explore the potential of axially compressed imaging for faster 3D live cell imaging.

In this context, several challenges and perspectives for improvement are worth mentioning. First, better piezo hardware and/or calibration could reduce the mismatch between the theoretical sensing matrix and its experimental counterpart. Second, as in plane-by-plane imaging, our current method assumes that the imaged structure remains immobile throughout the compressed acquisition of the 3D volume, which is rarely true in living samples. While sample movements can result in blurred images or duplicated objects with traditional plane-by-plane imaging, it remains to be explored how such movements distort reconstructed images in our compressed sensing scheme. As previously shown in the MRI field, methods to address these problems can be developed [30]. Third, computational strategies to accelerate image processing will be important to efficiently analyze the thousands of images generated in dynamic 3D imaging [34,35]. Finally, our method currently assumes sparsity in the image domain along the optical axis only and uses a Fourier sensing matrix. Ten fold compression ratios were achieved on images with moderate degrees of sparsity, for which this sensing matrix is not optimal. We therefore expect that larger compression ratios can be achieved using 3D sparsity models better adapted to the imaged structures, e.g. using dictionary learning, or optimized sensing matrices [23,36].

7. Conclusion

In this work, we demonstrate a widely applicable 3D compressed sensing scheme in which images are compressed along the z axis during acquisition. This scheme can be implemented with little or no hardware modification on a wide range of microscopes. We first validated the feasibility of our approach under noisy conditions using simulations, and then demonstrated the method experimentally on a lattice light sheet microscope and on an epifluorescence microscope, where we achieved roughly ten fold imaging speedups. This approach can be used to increase the temporal resolution or extend imaging time (through a reduction in light dose) in 3D fluorescence imaging applications.

Software and data availability

A Python port of SPIRAL-TAP [27] is available on GitHub at https://github.com/imodpasteur/pySPIRALTAP (doi:10.5281/zenodo.439691). The datasets used in this work are available on the Zenodo repository (doi:10.5281/zenodo.439689) and the analysis scripts were deposited on GitHub at https://github.com/imodpasteur/CompressedSensingMicroscopy3D (doi:10.5281/zenodo.439690). The MicroManager plugin and Arduino device adapter are available at https://github.com/imodpasteur/ArduinoCompressedSensing.

Funding

California Institute of Regenerative Medicine (CIRM) LA1-08013 to X.D.; National Institutes of Health (NIH) UO1-EB021236 to X.D.; National Institutes of Health (NIH) U54-DK107980 to X.D.; the Région Île de France (DIM Malinf) to C.Z; Visiting scholarship from the Siebel Stem Cell Foundation to C.Z; Institut Pasteur to C.Z.

Acknowledgments

We are very grateful to Zach Harmany who made available the SPIRALTAP software under an open source license and to Moran Mordechay for useful discussions on further optimization of the imaging scheme. We would like to thank the whole IMOD lab and Darzacq labs for insightful discussions and suggestions to this work. We thank Astou Tangara and the Betzig Lab (HHMI Janelia Research Campus) for their help in constructing the LLSM.

This work used the computational and storage services (TARS cluster) provided by the IT department at Institut Pasteur, Paris.

References and links

1. P. Keller, A. Schmidt, J. Wittbrodt, and E. Stelzer, “Reconstruction of zebrafish early embryonic development by scanned light sheet microscopy,” Science 322, 1065–1069 (2008). [CrossRef]  

2. T. A. Planchon, L. Gao, D. E. Milkie, M. W. Davidson, J. A. Galbraith, C. G. Galbraith, and E. Betzig, “Rapid three-dimensional isotropic imaging of living cells using Bessel beam plane illumination,” Nat. Methods 8, 417–423 (2011). [CrossRef]   [PubMed]  

3. M. Weber, M. Mickoleit, and J. Huisken, “Light sheet microscopy,” in “Methods in Cell Biology” (Elsevier, 2014), pp. 193–215. [CrossRef]  

4. R. Baraniuk, “Compressive sensing,” IEEE Signal Processing Mag. 24, 118–121 (2007) [CrossRef]  

5. D. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52, 1289–1306 (2006). [CrossRef]  

6. E. Candes and M. Wakin, “An introduction to compressive sampling,” IEEE Signal Processing Mag. 25, 21–30 (2008). [CrossRef]  

7. Y. C. Eldar and G. Kutyniok, Compressed Sensing: Theory and Applications (Cambridge University, 2012). [CrossRef]  

8. M. Elad, Sparse and Redundant Representations (Springer, 2010). [CrossRef]  

9. J. Bobin, J.-L. Starck, and R. Ottensamer, “Compressed sensing in astronomy,” IEEE J. Sel. Top. Signal Process 2, 718–726 (2008). [CrossRef]  

10. M. Lustig, D. Donoho, J. Santos, and J. Pauly, “Compressed sensing MRI,” IEEE Signal Processing Mag. 25, 72–82 (2008). [CrossRef]  

11. J. Hunt, T. Driscoll, A. Mrozack, G. Lipworth, M. Reynolds, D. Brady, and D. R. Smith, “Metamaterial apertures for computational imaging,” Science 339, 310–313 (2013).

12. J. Shin, B. T. Bosworth, and M. A. Foster, “Compressive fluorescence imaging using a multi-core fiber and spatially dependent scattering,” Opt. Lett. 42, 109 (2017).

13. L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516, 74–77 (2014). [CrossRef]   [PubMed]  

14. J. Liang, C. Ma, L. Zhu, Y. Chen, L. Gao, and L. V. Wang, “Single-shot real-time video recording of a photonic Mach cone induced by a scattered light pulse,” Sci Adv 3, e1601814 (2017). [CrossRef]   [PubMed]  

15. E. McLeod and A. Ozcan, “Unconventional methods of imaging: computational microscopy and compact implementations,” Rep. Prog. Phys. 79, 076001 (2016). [CrossRef]   [PubMed]  

16. M. M. Marim, E. D. Angelini, and J.-C. Olivo-Marin, “A compressed sensing approach for biological microscopic image processing,” in “2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro,” (IEEE, 2009), pp. 1374–1377.

17. P. Ye, J. L. Paredes, Y. Wu, C. Chen, G. R. Arce, and D. W. Prather, “Compressive confocal microscopy: 3d reconstruction algorithms,” in “SPIE MOEMS-MEMS: Micro-and Nanofabrication,” (International Society for Optics and Photonics, 2009), pp. 72100G.

18. Y. Wu, P. Ye, I. O. Mirza, G. R. Arce, and D. W. Prather, “Experimental demonstration of an optical-sectioning compressive sensing microscope (CSM),” Opt. Express 18, 24565–24578 (2010). [CrossRef]   [PubMed]  

19. V. Studer, J. Bobin, M. Chahid, H. S. Mousavi, E. Candes, and M. Dahan, “Compressive fluorescence microscopy for biological and hyperspectral imaging,” Proc. Natl. Acad. Sci. U. S. A. 109, E1679–E1687 (2012). [CrossRef]   [PubMed]  

20. S. Schwartz, A. Wong, and D. A. Clausi, “Compressive fluorescence microscopy using saliency-guided sparse reconstruction ensemble fusion,” Opt. Express 20, 17281–17296 (2012). [CrossRef]   [PubMed]  

21. L. Zhu, W. Zhang, D. Elnatan, and B. Huang, “Faster STORM using compressed sensing,” Nat. Methods 9, 721–723 (2012). [CrossRef]   [PubMed]  

22. N. Pavillon and N. I. Smith, “Compressed sensing laser scanning microscopy,” Opt. Express 24, 30038 (2016). [CrossRef]  

23. E. J. Candès, Y. C. Eldar, D. Needell, and P. Randall, “Compressed sensing with coherent and redundant dictionaries,” Appl. Comput. Harmon Anal. 31, 59–73 (2011). [CrossRef]  

24. T. Tao and E. Candès, “Decoding by linear programming,” arXiv preprint arXiv:math/0502327 (2004).

25. M. Raginsky, R. M. Willett, Z. T. Harmany, and R. F. Marcia, “Compressed sensing performance bounds under Poisson noise,” IEEE Trans. Signal Process 58, 3990–4002 (2010). [CrossRef]  

26. E. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory 52, 489–509 (2006). [CrossRef]  

27. Z. T. Harmany, R. F. Marcia, and R. M. Willett, “This is SPIRAL-TAP: Sparse Poisson intensity reconstruction algorithms–theory and practice,” IEEE Trans. Image Process. 21, 1084–1096 (2012).

28. S. Becker, J. Bobin, and E. Candès, “NESTA: A fast and accurate first-order method for sparse recovery,” arXiv preprint arXiv:0904.3367 (2009).

29. R. Baraniuk, M. A. Davenport, M. F. Duarte, and C. Hegde, An Introduction to Compressive Sensing (OpenStax CNX, 2011).

30. C. Bilen, Y. Wang, and I. Selesnick, “Compressed sensing for moving imagery in medical imaging,” arXiv preprint arXiv:1203.5772 (2012).

31. B.-C. Chen, W. R. Legant, K. Wang, L. Shao, D. E. Milkie, M. W. Davidson, C. Janetopoulos, X. S. Wu, J. A. Hammer, Z. Liu, B. P. English, Y. Mimori-Kiyosue, D. P. Romero, A. T. Ritter, J. Lippincott-Schwartz, L. Fritz-Laylin, R. D. Mullins, D. M. Mitchell, J. N. Bembenek, A.-C. Reymann, R. Bohme, S. W. Grill, J. T. Wang, G. Seydoux, U. S. Tulu, D. P. Kiehart, and E. Betzig, “Lattice light-sheet microscopy: Imaging molecules to embryos at high spatiotemporal resolution,” Science 346, 1257998 (2014). [CrossRef]   [PubMed]  

32. A. D. Edelstein, M. A. Tsuchida, N. Amodaj, H. Pinkard, R. D. Vale, and N. Stuurman, “Advanced methods of microscope control using micro-Manager software,” J. Biol. Methods 1, 10 (2014). [CrossRef]  

33. J.-Y. Tinevez, J. Dragavon, L. Baba-Aissa, P. Roux, E. Perret, A. Canivet, V. Galy, and S. Shorte, “A quantitative method for measuring phototoxicity of a live cell imaging microscope,” in “Methods in Enzymology” (Elsevier, 2012), pp. 291–309. [CrossRef]  

34. D. S. Smith, J. C. Gore, T. E. Yankeelov, and E. B. Welch, “Real-time compressive sensing MRI reconstruction using GPU computing and split Bregman methods,” Int. J. Biomed. Imaging 2012, 1–6 (2012). [CrossRef]  

35. B. E. Nett, J. Tang, and G.-H. Chen, “GPU implementation of prior image constrained compressed sensing (PICCS),” Proc. SPIE 7622, 762239 (2010). [CrossRef]  

36. M. Mordechay and Y. Y. Schechner, “Matrix optimization for Poisson compressed sensing,” in “2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP)” (IEEE, 2014), pp. 684–688.

37. J. D. Hunter, “Matplotlib: A 2d graphics environment,” Comput. Sci. Eng. 9, 90–95 (2007). [CrossRef]  

Supplementary Material (3)

Visualization 1: AVI (3757 KB). Supplementary movie 1: reconstruction of fluorescent beads acquired with a LLSM (movie along the y dimension).
Visualization 2: AVI (1097 KB). Supplementary movie 2: reconstruction of mESCs stained for actin acquired with a LLSM (movie along the y dimension).
Visualization 3: AVI (1029 KB). Supplementary movie 3: reconstruction of fluorescent beads acquired with an epifluorescence microscope (movie along the y dimension).


