
zOPT: an open source optical projection tomography system and methods for rapid 3D zebrafish imaging


Abstract

Optical projection tomography (OPT) is a 3D imaging alternative to conventional microscopy which allows imaging of millimeter-sized objects with isotropic micrometer resolution. The zebrafish is an established model organism and an important tool used in genetic and chemical screening. The size and optical transparency of the embryo and larva make them well suited for imaging using OPT. Here, we present an open-source implementation of an OPT platform, built around a customized sample stage, 3D-printed parts and open-source algorithms optimized for the system. We developed a versatile automated workflow including a two-step image processing approach for correcting the center of rotation and generating accurate 3D reconstructions. Our results demonstrate high-quality 3D reconstruction using synthetic data as well as real data of live and fixed zebrafish. The presented 3D-printable OPT platform offers a fully open design, low cost, and rapid loading and unloading of samples. Our system gives researchers with different backgrounds the opportunity to set up and run OPT for large-scale experiments, particularly in studies using zebrafish larvae as their key model organism.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Small size, optical transparency and ease of breeding are a few of the properties that make the zebrafish (Danio rerio) such a desirable model organism for large-scale genetic or chemical screens. Compared to traditional cell-based screens, zebrafish phenotyping has the unique advantage of using an intact organism. The zebrafish is a highly complex vertebrate that possesses discrete organs and tissues such as brain, heart, kidneys, intestine, musculoskeletal system and sensory organs, and many of its fundamental mechanisms are conserved in humans [1]. Furthermore, the application of genome editing tools such as CRISPR-Cas9 to the zebrafish has enabled researchers to study the consequences of virtually any genetic alteration [2,3]. The large interest in zebrafish has generated a high demand for rapid 3D imaging and analysis of these animals. To address this, automated 3D analysis of gene expression in zebrafish larvae has recently been developed for fluorescent readouts and optical sectioning [4]. Although fluorescent probes are most commonly used in 3D imaging, they often require expensive hardware, suffer from photobleaching, have low signal intensity, and undergo quenching under common tissue clearing protocols. A brightfield stain, such as a chromogenic in situ stain or Alcian blue, provides good contrast using only white transmission light. These staining techniques are less sensitive than fluorescent probes, but photobleaching and quenching are not a concern. Furthermore, they are well established, inexpensive and commonly used in zebrafish research. Moreover, the biological variation between fish requires large-scale experiments, which are usually time consuming and require fast sample handling and imaging.

Optical projection tomography (OPT) is a well-suited low-cost alternative for 3D imaging of zebrafish. It can generate 3D isotropic high-resolution images of transparent (optically cleared or intrinsically transparent) mm-sized samples in both brightfield and fluorescence. OPT computationally obtains volumetric information from a collection of 2D images acquired at multiple angles. Compared to techniques such as confocal microscopy, which requires optical sectioning of samples in depth, OPT and many other tomography techniques acquire depth information by rotating the sample, resulting in isotropic resolution in all three dimensions. Isotropic resolution in volumetric data is ideal when aligning and comparing phenotypes from multiple samples. OPT is a simple and fast technique compared to, e.g., standard confocal microscopy. Whole-fish imaging can be done faster using light-sheet microscopy [5]; however, it requires a much more complex system and is limited to fluorescence. Due to their size and optical transparency, zebrafish larvae are ideal to study using brightfield OPT. Furthermore, zebrafish are often used for high-throughput screening, which requires simple and rapid acquisition [6].

Over the last decade, researchers have developed OPT technology for a wide variety of applications using customized implementations [7–11]. To ensure successful reconstruction using OPT, it is important to correct and minimize errors that occur during rotation and acquisition. Van der Horst et al. proposed a deconvolution method to correct resolution blurring from the imaging optics [12]. Birk et al. presented computational methods for correcting unevenly distributed background, intensity spikes and refractive index mismatching [13]. They also proposed methods for correcting specimen movement using in-vivo OPT data [14]. Motion errors were also addressed by Zhu et al. [15] based on the Helgason–Ludwig consistency condition. Dong et al. provided a two-step method for correcting center-of-rotation (COR) errors using sinograms [16]. Tang et al. developed a post-processing pipeline for OPT including a COR correction algorithm based on feature point tracking in sinograms [17]. However, COR correction methods based on point tracking in the sinogram depend on the features of the sample, and tracking can fail if the image has a low signal-to-noise ratio or the features are not suitable to track. The COR error is an important aspect not only for OPT but also for other tomography modalities. Yang et al. proposed an image cross-correlation method to find the COR using data from a CT system [18]. Vo et al. calculated the COR based on Fourier analysis of the sinogram in X-ray tomography [19]. Moreover, there are several reconstruction-based methods for correcting COR errors. Figueiras et al. presented a method to find the optimal COR by detecting the sharpest local maximum of the variance of reconstructed slices [20]. An approach developed by Yu et al. iteratively registers projection images from the reconstruction to the raw projection data to achieve better reconstruction quality in X-ray tomography [21]. Another X-ray tomography approach was proposed by Cheng et al. [22], where the algorithm optimizes the total variance of the reconstructed data to correct COR errors. However, each system has different limitations, and the method of choice must be carefully selected to suit the system properties and the samples to be imaged.

In this paper we present the design of a customized, cost-effective brightfield OPT system. The system is optimized for easy loading and unloading of zebrafish larvae, and the cost of the OPT sample stage is reduced using 3D-printed components. We developed an open-source graphical user interface (GUI) and designed 3D models for the system, making it easy for anyone to set up and get started with. Furthermore, we have characterized our system and developed a workflow that matches the hardware and software to compensate for the errors in the system. We present generalized automated algorithms that effectively improve the OPT reconstruction quality by correcting COR errors using a two-step image processing approach. Finally, we demonstrate reconstruction results using real data from fixed stained and live zebrafish larvae as well as an application for phenotypic quantification.

2. Methodology

2.1 Overview of the platform

Our optical projection tomography (OPT) setup is designed for rapid imaging of zebrafish larvae in brightfield. We have developed a graphical user interface (GUI) for data acquisition and an automated reconstruction workflow for generating 3D data. Our platform can be used for imaging samples from 200 to 860 μm in diameter, which corresponds to zebrafish larvae from approximately 1 to 9 days post fertilization (dpf). The acquisition time for one fish is 6.8 seconds for a full 360-degree rotation at 1 degree per frame. Following the acquisition, we reconstruct the 3D fish using our optimized reconstruction algorithms.

2.2 Sample preparation

All zebrafish larvae were collected, housed, and treated as previously described by del Pozo et al. in [23]. Pigmentation in larvae used for in situ hybridization was inhibited by adding 0.003% 1-phenyl-2-thiourea (PTU) to the embryo water. Zebrafish were collected at 3 dpf and fixed overnight at 4 °C in 4% paraformaldehyde in phosphate-buffered saline, washed once in 100% methanol and stored at -20 °C in 100% methanol. In situ hybridization was performed as previously described in [24]. Hybridized probes were visualized using BM Purple (Roche, Switzerland) as the substrate for alkaline phosphatase precipitation. Embryos were cleared in 99% glycerol.

Larvae for skeletal staining were collected at 5 dpf and fixed overnight at 4 °C in 4% paraformaldehyde in phosphate-buffered saline, washed once in phosphate-buffered saline and transferred into 50% ethanol. Alcian blue and alizarin red staining was performed immediately afterwards, as previously described in [25].

Live zebrafish larvae were anaesthetized in system water containing tricaine prior to imaging.

Appropriate ethical approvals were obtained from a local ethical board in Uppsala (C 164/14).

2.3 Design of experimental setup

The OPT setup (Fig. 1(A)) consists of three major parts: brightfield illumination; a sample stage with a rotating capillary and a stepper motor; and a lens system with a CMOS detector. The experimental setup is shown in Fig. S1 in Supplement 1 and the source code can be downloaded from https://github.com/Hq-Z/zOPT.


Fig. 1. A) Schematic of the brightfield OPT setup. The dotted green lines represent the light path, and solid lines represent signals transmitted by wires. Bold arrows represent sample loading and unloading, respectively, through the same port of the capillary. The capillary is rotated with a stepper motor and the CMOS camera acquires projections of the sample during rotation. B) An RGB projection from the OPT system of a 3 dpf in situ stained zebrafish. The dashed lines represent the inner walls of the glass capillary. C) 3D reconstruction results. Left: 3D volume of three channels rendered using VolView software (Kitware Inc.) with pseudo color. Right: 3D volume of the green channel. A slice of the sample in the z direction is marked in red (solid line). Maximum intensity projections of the region marked in red (dashed lines) in the volume rendering are shown from the ventral, frontal and sagittal views in x-y-z coordinates.


For illumination, we use a white LED with a diffuser to generate evenly distributed light over the entire field of view. A sample stage is placed between the camera and the light source. On the sample stage we have placed a quartz cuvette filled with glycerol. The sample stage can be positioned using a linear x, y, z stage. A tapered glass capillary with an inner diameter of 0.86 mm and an outer diameter of 1.50 mm holds the sample and is immersed in the cuvette. The capillary is connected to a stepper motor that controls the rotation of the sample. The stepper motor is controlled through an Arduino UNO REV3 board and a custom-made MATLAB GUI. The lens system consists of a 3x telecentric objective (Edmund Optics) with a depth of field of 0.36 mm and a numerical aperture of 0.043 in air. We use a FLIR camera (Blackfly S BFS-U3-23S3) with 8-bit data depth, a pixel size of 3.45 µm and a sensor size of 6.624 mm × 4.140 mm, resulting in a field of view of 2.208 mm × 1.380 mm. The pixel resolution is 1.15 µm and the platform resolution is estimated to be 7.80 µm according to the Rayleigh criterion (resolution = 0.61·λ/NA, where λ = 550 nm and NA = 0.043). The maximum sample depth for the system is estimated to be 0.72 mm, based on the method used in [26]. The CMOS camera used in this paper can be exchanged for a camera of choice depending on budget and application. For high-contrast brightfield projection data, such as the data used in this paper, this camera has provided good results. However, if the system is intended mainly for fluorescence, a more sensitive camera with higher dynamic range should be used.

A stopper can be added to the bottom end of the glass capillary to fix the sample in position, for example with an adhesive of matching refractive index (RI) [27]. Here, we instead created a tapering of the glass capillary using a micropipette puller (Flaming/Brown micropipette puller, Sutter Instruments, model P-1000). To ensure optimal distortion-free imaging through the glass capillary, we considered a few experimental details. First, to reduce distortion from refraction, the RI of the glass capillary has to match the RI of the surrounding liquid. We therefore chose 99% glycerol (RI 1.474) and a borosilicate glass capillary with RI 1.47-1.49 [28], producing minimal refractive distortion. Second, a smooth rotation is crucial to reduce unwanted motion of the sample caused by the stepping of the motor. To achieve this, we used a stepper motor with a driver carrier operating at 1/128 micro-steps (12800 steps per revolution), resulting in smooth rotation without visible vibration. Third, calibrating and aligning the position and tilt angle of the stage is crucial. In practice, this type of error can be checked using the maximum intensity projection (MIP) of a full-rotation video: image features on the sample should produce straight lines in the MIP, while an elliptical shape indicates a tilt between the capillary and the detector [10].
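The MIP alignment check can be scripted in a few lines of MATLAB. The sketch below is illustrative rather than the released zOPT code: the video file name is hypothetical, frames are assumed to be RGB, and because brightfield samples appear dark on a bright background the frames are inverted before taking the pixel-wise maximum.

```matlab
% Minimal sketch of the MIP-based tilt check (not the authors' exact code).
v   = VideoReader('rotation.avi');     % hypothetical full-rotation video
mip = [];
while hasFrame(v)
    frame = 1 - im2double(rgb2gray(readFrame(v)));  % invert: features bright
    if isempty(mip)
        mip = frame;
    else
        mip = max(mip, frame);                      % pixel-wise maximum
    end
end
imshow(mip, []);  % straight horizontal traces = aligned; ellipses = tilt
```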

Our sample stage is made from 3D-printable components, enabling the use of a wide selection of stepper motors. Customized 3D-printed capillary holders allow capillaries with different diameters to be attached to the stepper motor in order to handle samples of different sizes and ages.

In summary, our setup is compact, and all the optics-related components can be built on top of a 200 × 250 mm flat surface. The cost of the basic setup is approximately €1300, with the lens system and camera contributing approximately 90% of the cost. However, depending on the 3D printing quality of the different components, support structures might be necessary to improve rigidity and alignment accuracy, adding to the cost. Optional components, such as a translational stage and an aluminum breadboard, can be added to the system. These optional components and the full price list can be found in Fig. S1C in Supplement 1.

2.4 Data acquisition

To load a specimen into the system, we first transfer the fish into a needle connected to a syringe. The needle tip is then placed in the fluidic port and the fish is injected together with some liquid into the vertically oriented glass capillary. The fish is stopped by the tapering and immobilized during imaging. By withdrawing the liquid from the same fluidic port using a syringe, the fish can easily be unloaded. The entire process of loading, imaging and unloading takes about 2 minutes per fish. The easy loading and unloading together with the user-friendly GUI for data acquisition allow the user to image up to 200 fish in a typical 8-hour screening experiment. In addition, the automated processing pipeline can be run offline and thus does not limit the acquisition time.

During the acquisition the CMOS camera collects the transmitted light that passes through the rotating sample (Fig. 1(B)). The rotational video is acquired with the FLIR camera operating at a shutter time of 15.0 ms and a frame rate of 53 fps, with an image size of 1920 × 1200 pixels. Each channel in the acquired OPT data can be reconstructed individually using a filtered back projection algorithm. An RGB reconstruction of an in situ stained fish is shown in Fig. 1(C).

2.5 Synthetic data generation

To validate our reconstruction, we use synthetic data and simulate the errors in the OPT system. There are two types of error that can significantly reduce the quality of the reconstruction. The first type appears as a constant value that shifts the COR for all projections. This error comes from the misalignment between the sample stage and the imaging system; we denote this type I error. The second type of error causes a different shift of the COR in each projection. In our OPT system this can be caused by vibrations from the stepper motor being transferred to the rigid capillary, resulting in movement of the center of rotation during the acquisition. A small random or periodic deviation of the sample position can be picked up by the OPT system, which has micrometer resolution; we denote this type II error. Other errors originate from the limited depth of field of the optical system, small refractive index mismatches, a small tilt angle of the capillary relative to the detector plane, and an inhomogeneous sample environment and light source. We address the depth-of-field limitation by placing the focus at ¼ of the sample to cover the maximum sample range during OPT imaging. The tilt is controlled by the alignment method introduced in Section 2.3 using the MIP. The inhomogeneous background can be reduced by placing the diffuser in front of the sample and applying background removal by image processing. These measures lead us to focus on the type I and II errors, which remain in the system and have a visible impact on the quality of the reconstruction. In our simulations we therefore worked under the assumption that the sample, as well as the capillary, were rigid and that there was no movement of the sample during the recordings. The COR errors were considered in both the x-direction (i.e., perpendicular to the light path) and the z-direction (parallel to the light path). We measured that small movements in the z-direction contribute less than 2% of the depth of field, where such deviations from the focus have little influence on image quality and the reconstruction. On the other hand, the same amount of shift in the x-direction is visible in the image at focus and can cause blurred edges in the reconstruction. Therefore, we focused mainly on movement along the x-direction (Fig. 2(A)). To generate the simulation data we used a 2D Shepp-Logan phantom image in MATLAB and applied the radon transform to obtain tomography data with an angular resolution of 1 degree per projection (Fig. 2(B)). After generating 360 projections we applied image translations in the x-direction. For the type I error we used a constant value to shift all the projections. The type II error was generated in two parts: (1) a random error simulated with uniformly distributed random numbers with zero mean, and (2) a periodic error simulated with a sinusoidal wave. We simulated a 10-pixel constant shift in the x-direction as the type I error and used 5 pixels as the amplitude of the periodic and random noise as the type II error, in 256 × 256 and 512 × 512 templates. The final reconstructed image is obtained by applying the filtered back projection (FBP) reconstruction algorithm to the sinogram containing the errors (Fig. 2(B)). Reconstructions of the phantom image with the different types of synthetic errors are presented in Fig. 2(C).
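The simulation can be reproduced with a few lines of MATLAB (Image Processing Toolbox). The sketch below follows the parameters given in the text; the period of the sinusoidal component is our assumption, as it is not stated here.

```matlab
% Minimal sketch of the synthetic COR-error simulation.
P     = phantom('Modified Shepp-Logan', 256);   % ground-truth slice
theta = 0:359;                                  % 360 projection angles
sino  = radon(P, theta);                        % columns = projections

shiftI  = 10;                                   % type I: constant COR offset
t       = (0:numel(theta)-1)';
shiftII = 5*sin(2*pi*t/60) ...                  % type II: periodic (assumed period)
        + 5*(2*rand(numel(theta),1) - 1);       %          plus zero-mean random

sinoErr = zeros(size(sino));
for k = 1:numel(theta)
    dx = shiftI + shiftII(k);                   % total x-shift of projection k
    sinoErr(:,k) = imtranslate(sino(:,k), [0, dx]);  % shift along detector axis
end

rec = iradon(sinoErr, theta, 'linear', 'Ram-Lak', 1, size(P,1));  % FBP
imshowpair(P, mat2gray(rec), 'montage');        % compare with ground truth
```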


Fig. 2. A) Schematic of the OPT system for simulation. The solid line with a single arrow represents the rotation of the capillary; the dotted lines with arrows represent the light path; the bold arrow represents the direction of simulated motion errors. B) Process for generating simulation data using a phantom image in MATLAB. C) The effect of COR errors on the final reconstruction.


3. OPT reconstruction workflow

To ensure high-quality 3D reconstruction from OPT, we must correct for the system errors identified in the previous section. Our pipeline for OPT image processing consists of pre-processing, extraction of 360 degrees of rotation data, and a two-step COR correction comprising gCOR, a global center-of-rotation correction method, and iRRpw, an iterative reconstruction and registration approach with a pairwise strategy. Finally, we obtain the tomographic reconstruction using FBP (Fig. 3(A)).


Fig. 3. A) Workflow of the OPT reconstruction, presented with examples using an Alcian blue stained fish. The pre-processing steps are for brightfield OPT data. Solid arrows represent the data flow. The dashed line in the images of step 5 represents the image center line and the solid line represents the detected symmetry axis. B) Detailed workflow for COR correction using the gCOR and iRRpw methods.


3.1 Pre-processing

The unevenly distributed light intensity across the projection image can be compensated using a background image. The background image can be acquired before a sample is loaded into the system, or it can be computationally generated by first detecting background pixels and then using them to interpolate the pixel values in the sample region. Following the Beer-Lambert law, we first divide each image by the background image, then invert the intensity values and apply a logarithm, making the intensity linear in the light absorption model for brightfield OPT [9].
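In MATLAB, this pre-processing step amounts to a division and a negative logarithm. The sketch below assumes one raw projection and a background image acquired without a sample; the file names are hypothetical.

```matlab
% Minimal sketch of the brightfield pre-processing (Beer-Lambert law).
raw = im2double(imread('projection.tif'));   % hypothetical file names
bg  = im2double(imread('background.tif'));
T   = raw ./ max(bg, eps);     % transmittance; flattens uneven illumination
T   = min(max(T, eps), 1);     % clamp to (0, 1] before the logarithm
A   = -log(T);                 % inverted, log-scaled: linear in absorbance
```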

After applying the background correction and scaling, we extract the data corresponding to a full rotation. To ensure a full 360 degrees for high-quality tomography reconstruction, we acquire our data with ∼10% overhead by recording 400 frames. We calculate the 2D correlation between a starting projection and candidate projections among the 400 frames; the projection with the maximum correlation represents the projection at 360 degrees. The average angular interval between consecutive projections is then obtained by dividing 360 by the number of frames in a full rotation.
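A minimal sketch of this extraction step is shown below, assuming the corrected projections are stored in a 3D array `stack` (rows × cols × 400). The lower bound of the search window is our assumption; it simply prevents early frames from matching the first frame.

```matlab
% Minimal sketch of full-rotation extraction via 2D correlation.
ref = stack(:,:,1);                       % starting projection
c   = -Inf(size(stack,3), 1);
for k = 330:size(stack,3)                 % candidate frames near 360 degrees
    c(k) = corr2(ref, stack(:,:,k));      % 2D correlation coefficient
end
[~, k360] = max(c);                       % best match to the first frame
proj   = stack(:,:,1:k360-1);             % one full rotation (no repeat)
dTheta = 360 / (k360 - 1);                % average angular step (degrees)
```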

3.2 COR correction and reconstruction

We have developed a two-step pipeline for correcting the errors during rotation: a global center-of-rotation correction method (gCOR) followed by an iterative reconstruction and registration approach with a pairwise correction strategy (iRRpw) (Fig. 3(B)). This two-step pipeline is necessary to deal with the two types of errors defined in Section 2.5. The first method is a global method that detects the symmetry axis to estimate the COR in a projection image. This projection image is calculated as

$$P(x,y) = \sum\limits_{i = 1}^N {f(I(x,y,i))} ,$$
where I represents the input data, x and y are the coordinates in each projection, i is the projection index and N is the total number of projections. The function f can be chosen as either the summation or the maximum over all projections; in the latter case, P is the maximum intensity projection. We assume the projections in I rotate around an axis passing through the image center in the y direction, denoted the symmetry axis. The resulting projection P and its mirror P′, obtained by flipping about the central vertical line, are compared using the 2D correlation coefficient. We then solve the following optimization problem to find the optimal symmetry axis.
$$\mathop{\arg\min}\limits_{a,\, b,\, r \,\in\, \mathbb{R}} \left[ 1 - \frac{\sum\limits_x \sum\limits_y \left( M(x,y) - \overline{M} \right)\left( M'(x,y) - \overline{M'} \right)}{\sqrt{\sum\limits_x \sum\limits_y \left( M(x,y) - \overline{M} \right)^2 \sum\limits_x \sum\limits_y \left( M'(x,y) - \overline{M'} \right)^2}} \right],$$
$$M = {R_r}({T_{ab}}(P(x,y))),M^{\prime} = FlipY(M),$$
where Rr(A) represents rotating a matrix A by r degrees around the matrix center; Tab(A) represents the translation of matrix A by a and b in the x and y directions, respectively; FlipY(A) flips a matrix A about its central vertical line to obtain its mirror. By solving this optimization problem, we apply a rigid transform to all projections in I using half the values of a, b and r (the offset between an image and its mirror is twice the offset of the symmetry axis from the center), so that the rotation symmetry axis passes through the projection center in the y direction. We denote this symmetry axis the global rotation axis. This method is robust to pixel noise and is used to detect a COR shift common to all projections.
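The symmetry search of Eqs. (1)–(3) can be sketched in MATLAB as follows. The choice of fminsearch is our assumption; any derivative-free optimizer over the three parameters would serve, and P is the (sum or max) projection image from Eq. (1).

```matlab
% Minimal sketch of the gCOR symmetry-axis search.
pOpt = fminsearch(@(p) gcorCost(p, P), [0 0 0]);   % p = [a b r]
% Per the text, half of pOpt is then applied as a rigid transform to
% every projection so the symmetry axis passes through the image center.

function e = gcorCost(p, P)
    M  = imtranslate(P, p(1:2));                   % T_ab: shift by a, b
    M  = imrotate(M, p(3), 'bilinear', 'crop');    % R_r: rotate about center
    Mf = fliplr(M);                                % FlipY: mirror image
    e  = 1 - corr2(M, Mf);                         % Eq. (2) objective
end
```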

This process is followed by an iterative reconstruction and registration approach (iRR), which resolves the remaining COR errors (see [21] for a similar method). An inaccurate COR estimation and slightly misaligned projections produce a blurred structure in the reconstruction, with reduced resolution and more artifacts. By aligning the experimentally acquired projections to their corresponding radon-transformed projections from this blurred reconstruction, all experimental projections are re-aligned to the center of the blurred structure. These newly aligned projections are used to reconstruct a new structure with improved quality, and this is repeated to further improve the reconstruction. In the implementation of iRR we first reconstruct the 3D image using FBP. The calculations are accelerated on GPUs using the ASTRA toolbox [29]. We then apply a radon transform to generate a dataset I' that has the same image size, rotation angle interval and direction, and number of projections as the input I. We take each projection in I and apply a rigid transform to align it with its corresponding projection in I'. To find the best alignment for each frame, we solve the optimization problem described in Eq. (2) with M as a projection in I and M' as the corresponding projection from I'. The optimal rigid transform parameters for each projection are applied to transform each projection in I to correct the COR errors. Once all projections are transformed, a new 3D reconstruction is generated with FBP. This process is applied iteratively to improve the reconstruction.
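The core reconstruct-reproject-register loop can be illustrated in 2D on a sinogram (variables continue from the simulation sketch in Section 2.5). This is a deliberately simplified sketch: the paper performs the full rigid registration of Eq. (2) on 3D data with GPU-accelerated FBP via the ASTRA toolbox, whereas here a 1D translational registration along the detector axis (finddelay, Signal Processing Toolbox) stands in for the per-projection registration.

```matlab
% Simplified 2D sketch of one iRR pass: reconstruct, reproject, re-align.
N = 256;                                          % reconstruction grid size
for it = 1:3
    rec   = iradon(sino, theta, 'linear', 'Ram-Lak', 1, N);  % FBP
    reSin = radon(rec, theta);                    % reprojected sinogram I'
    for k = 1:numel(theta)
        d = finddelay(reSin(:,k), sino(:,k));     % offset of I vs. I'
        sino(:,k) = circshift(sino(:,k), -d);     % re-center the projection
    end
end
```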

To speed up the iterative process and obtain better reconstruction results in fewer iterations, we use a pairwise strategy to align all projections; we denote this method iRRpw. First, we use FBP to obtain a 3D volume R. Second, we use the radon transform to generate projection images I' from R. Third, we perform a rigid alignment of all projections in I to the generated projections in I' to correct COR errors for projections ranging from 0 to 360 degrees (IiRR). Fourth, we align each projection in IiRR from 0 to 180 degrees to its mirrored pair projection in the range from 180 to 360 degrees, using only translation, to obtain IiRRpw. This is valid since each projection in tomography data and its 180-degree counterpart should, in the ideal case, contain identical information (mirrored along the COR). In practice, even if the specimen is only half in focus, a projection still contains enough similarity with its 180-degree counterpart to align well using translation. Finally, we complete one iteration by applying FBP to IiRRpw to obtain the 3D reconstruction. In the next iteration, we use IiRRpw as input to the reconstruction in the first step mentioned above. This process is conducted iteratively to improve the reconstruction.
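Continuing the 2D sketch above, the pairwise step aligns each projection in the first half-rotation, by translation only, to the mirror of its 180-degree counterpart. In 2D the "mirror along the COR" of a projection is simply a flip along the detector axis.

```matlab
% Minimal sketch of the pairwise step of iRRpw (variables as above).
half = numel(theta) / 2;                          % 180 projections
for k = 1:half
    pair = flipud(sino(:, k + half));             % mirrored 180-deg partner
    d    = finddelay(pair, sino(:,k));            % translational offset
    sino(:,k) = circshift(sino(:,k), -d);         % align to the pair
end
recPW = iradon(sino, theta, 'linear', 'Ram-Lak', 1, N);  % FBP of I_iRRpw
```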

4. Results and discussion

Here we first quantitatively evaluate our workflow for tomography reconstruction using synthetic data. Then we demonstrate the generation of high-quality 3D reconstructions of real zebrafish larvae using our OPT system and the automated workflow. Finally, we present a phenotypic characterization of a mutant line using data obtained with our system.

4.1 Reconstruction using simulated data

Simulated data were generated using the methods described in Section 2.5. Figure 4(A) shows the reconstruction results from the artificially corrupted data with no correction, gCOR and iRRpw. The sum of absolute differences (SAD) of pixel-to-pixel intensity between the reconstructed image and the phantom image (ground truth) is used to quantify the amount of error in the reconstruction. To measure the similarity between the reconstruction and the ground truth regardless of their relative position, we applied a rigid 2D registration before calculating the SAD. In Fig. 4(B) the SAD between the data and the ground truth at each processing step is shown. After applying the gCOR step, the SAD is decreased to 50% of that of the input data. However, this step offers limited further improvement since type II errors cannot be handled by this method. The following step, which uses iRR-based methods to estimate type II errors in each projection, is robust to noise and can effectively reduce the SAD. We ran the simulation 5 times to obtain an average SAD and standard deviation. For a 512×512 reconstructed phantom, iRRpw reduces the SAD from 50% to 28% after the first iteration and converges after around 10 iterations with a SAD down to 11%. Similarly, for the 256×256 template, both the iRR and iRRpw methods reduce the COR error fourfold compared to the results from gCOR. Note that with the pairwise strategy, the SAD decreases faster in the first iteration than without it. However, each iteration of iRRpw takes approximately 15% longer than an iRR iteration. For the test images, iRRpw stays roughly one iteration ahead of iRR; within 6 iterations, iRRpw achieves better execution time and accuracy in terms of SAD than the iRR method.
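The evaluation metric can be sketched in MATLAB as follows, continuing the variables from the earlier sketches: the reconstruction is rigidly registered to the ground truth (here with imregister; the authors' exact registration routine is not specified) before the SAD is computed on intensities scaled to [0, 1].

```matlab
% Minimal sketch of the SAD evaluation against the phantom ground truth.
fixed  = mat2gray(P);                              % ground-truth phantom
moving = mat2gray(rec);                            % reconstruction
[opt, metric] = imregconfig('monomodal');
aligned = imregister(moving, fixed, 'rigid', opt, metric);
sad = sum(abs(aligned(:) - fixed(:)));             % lower is better
```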


Fig. 4. A) Reconstructed image (left) and overlay with ground truth for each method (right). In the overlay, the reconstructed image is shown in green and the ground truth in magenta. B) SAD of image intensities between reconstructed data and ground truth for each step in the algorithm. Means and standard deviations of the SADs are calculated from 5 trials, with image intensity values ranging from 0 to 1. Two image sizes were tested, 256×256 and 512×512. The 0 on the horizontal axis represents the FBP reconstruction result without COR correction. gCOR is applied only once in the process and the iRR methods are applied for 10 iterations, with their SAD results labelled at iterations 1, 3 and 10.


In further analysis of our workflow, the results show that gCOR followed by iRRpw is the most accurate combination of methods (Fig. 5(A)). We also found that iRRpw alone can correct the type I error without gCOR; however, it does not reach the same accuracy as when gCOR is applied first. Furthermore, we analyzed the total variance (TV), which is used in some reconstruction-based approaches [22,30] to find optimal reconstruction parameters using variational methods. With the iRRpw method, the total variance decreases both with and without gCOR (Fig. 5(B)). However, applying gCOR before iRRpw allows the TV value to converge faster. This result indicates that the total variance can be used to evaluate the iRRpw process and to stop the iteration if the TV value is larger than its previous value. In practice, we applied this criterion to iRRpw together with a maximum iteration count when processing real OPT data, as a trade-off between performance and speed.
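The stopping rule can be sketched as below. The isotropic total-variation functional used here is our assumption for the "total variance" referred to in the text, and runOneIRRpwIteration is a hypothetical wrapper around one iRRpw iteration (reconstruct, reproject, register).

```matlab
% Minimal sketch of the TV-based stopping criterion for iRRpw.
tv = @(R) sum(sum(sqrt(diff(R(:,1:end-1), 1, 1).^2 + ...
                       diff(R(1:end-1,:), 1, 2).^2)));   % assumed TV form
tvPrev = Inf;
for it = 1:10                                % maximum iteration cap
    R = runOneIRRpwIteration();              % hypothetical helper
    if tv(R) > tvPrev
        break;                               % TV increased: stop iterating
    end
    tvPrev = tv(R);
end
```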


Fig. 5. A) Comparison of three different combinations of methods for COR correction and reconstruction in terms of SAD. We used the 256×256 template with the same noise level for all experiments. Processing step 0 represents the reconstruction result without any correction. Dashed lines represent the single-step gCOR process and solid lines represent the iRR-based methods with 10 iterations. B) Total variance of the reconstructed image at each iteration of the iRRpw process.


4.2 Demonstration of workflow using real data

We tested our OPT system and automatic reconstruction using Alcian blue stained 5 dpf zebrafish larvae and an in situ stained 3 dpf larva in 99% glycerol, as well as a live 5 dpf larva in water (Fig. 6(A)). We use warm white illumination and, based on its spectrum, chose the green channel so that both the Alcian blue stain and the in situ stain are clearly visible in the reconstruction; other channels might be more useful in other applications. We applied the pre-processing, gCOR and iRRpw algorithms to generate 3D data of the stained fish (Fig. 6(B)).


Fig. 6. A) Three examples of brightfield OPT images of zebrafish larvae and B) their corresponding ventral, frontal and sagittal views of the reconstructed data, using the green channel, with volume rendering by VolView.


A detailed analysis of the quality of the reconstruction results in 2D is presented in Fig. 7(A). We observe from the results in the first row of Fig. 7(A) that the COR error can be reduced by both the gCOR and iRRpw methods. Moreover, in our experiments using OPT data, we found that the reconstruction result depends on the image quality and features. For example, in a slice through the body of the Alcian blue stained fish, the projection contains strong image features with high contrast, and the gCOR method can provide relatively good alignment. However, in a slice near the fish tail, the intensity is relatively low and the COR errors cannot be corrected. In contrast, our iRRpw method shows robustness to image noise and provides reliable COR correction. The robustness of the iRRpw algorithm is important for OPT data since the depth of field of the OPT system is limited. In our system the theoretical depth of field is 0.36 mm, focused at ¼ of the sample to cover a maximum depth of 0.72 mm. The capillary has an inner diameter of 0.86 mm; thus, light scattering and the limited depth of focus can blur the image during the sample rotation. The iRRpw-based registration, on the other hand, is robust to noise in the image and can iteratively improve the reconstruction quality. We compare the results from iRRpw at each iteration in Fig. 7(B). We observed a significant improvement of the alignment in the second iteration compared to the first. Since the results are similar after three iterations, in practice we set the maximum number of iterations to 3 for this dataset.


Fig. 7. A) A single slice is selected for reconstruction with the capillary removed, and the reconstruction results are compared at each step and for the two approaches for correcting the type II error. The bold arrows represent the workflow and the thin arrows point to edges used to compare reconstruction differences. B) Intermediate reconstruction results from the iRRpw method at iterations 1 through 3.


Reconstruction of a single-channel 855×855×950-voxel dataset with 3 iterations of our MATLAB implementation of iRRpw takes approximately 6 min on a 3.2 GHz Intel Core i7-8700 processor with 32 GB of memory, using an NVIDIA GeForce GTX 1060 for GPU acceleration.

4.3 Quantification of phenotypic differences

Visual inspection of data is often not enough to identify true phenotypes in zebrafish, and statistically validated results are needed. In zebrafish there is a large fish-to-fish variation, and subtle phenotypic differences can therefore be difficult to distinguish from individual variation. Allalou et al. developed a method for statistical analysis of in situ stained zebrafish [27]. We have used the same workflow on our Alcian blue stained zebrafish to identify statistically significant phenotypes in a mutant fish line. The wild-type fish were initially aligned and used to create an average reference fish using the iterative shape averaging (ISA) algorithm [31]. In the next step all wild-type (n=10) and mutant (n=12) zebrafish were aligned to the reference. Averaged patterns from the aligned wild-type and mutant fish are shown in Fig. 8(A). A voxel-wise method based on the Mann-Whitney U test was used to detect voxels that differ significantly between the groups. The p-value threshold was set using the false discovery rate (FDR) [32] and a permutation test [33]. The FDR was set so that random groupings showed only a small number of significant voxel differences (p < 2.5 × 10−4; FDR = 0.045). A maximum projection of the significant voxels is shown in Fig. 8(B). The phenotypic differences identified within the first pharyngeal arch in the mutant zebrafish line were expected and confirmed by the OPT analysis. Phenotypic differences detected outside the first pharyngeal arch region, however, were unexpected and might not have been detected without this type of analysis.
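A minimal sketch of the voxel-wise test is given below, assuming wt and mut are 4D arrays (x, y, z, fish) of intensities after alignment to the reference. ranksum (Statistics and Machine Learning Toolbox) implements the Mann-Whitney U test; the threshold is the value reported in the text, and the FDR/permutation analysis of [32,33] is omitted for brevity. The plain voxel loop is slow but illustrative.

```matlab
% Minimal sketch of the voxel-wise Mann-Whitney U comparison.
sz = size(wt);
p  = ones(sz(1:3));
for v = 1:prod(sz(1:3))
    [x, y, z] = ind2sub(sz(1:3), v);
    p(v) = ranksum(squeeze(wt(x, y, z, :)), squeeze(mut(x, y, z, :)));
end
sig = p < 2.5e-4;     % significant voxels at the threshold from the text
```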


Fig. 8. A) OPT of wild-type (left) and mutant (right) zebrafish stained with Alcian blue at 5 dpf. B) Maximum projection with color-coded regions showing significant differences between the wild-type (n=10) and mutant (n=12) groups. Cyan shows voxels with statistically higher intensity in the wild-type group and magenta shows voxels with statistically higher intensity in the mutant group. The thin arrows point to the circled areas containing expected differences.


5. Conclusions

We presented the design of a cost-effective brightfield OPT setup capable of rapid loading and unloading of zebrafish samples in water or glycerol. We also presented our data processing workflow and developed a two-step algorithm for correcting the COR errors identified in our OPT setup. The algorithm consists of a global method (gCOR) using an intensity projection image, followed by an iterative reconstruction and registration method (iRRpw), to correct COR errors in the system and obtain high-quality 3D structures. We have validated our algorithms for COR correction using both simulated images and experimental data of zebrafish larvae. The workflow is automated and generalized, requiring no system- or sample-specific parameters. It can be run offline for batch processing of large datasets, which is potentially useful for high-throughput OPT systems.

We have developed a complete system that can be set up and run by anyone interested in performing brightfield OPT. The system is inexpensive compared to a microscope and, with our detailed instructions, easy to set up and customize. All acquisition and reconstruction algorithms are packaged into a user-friendly GUI. We believe this system can benefit the zebrafish research community and improve imaging, visualization and analysis of a wide range of phenotypes.

Funding

Science for Life Laboratory.

Acknowledgement

We would like to thank Carolina Wählby from the Department of Information Technology, Uppsala University and the SciLifeLab BioImage Informatics Facility for her generous support in this project. We would also like to thank Johan Ledin, Beata Filipek, Tiffany Klingström and João Campos Costa from the Department of Organismal Biology, Uppsala University and the Genome Engineering Zebrafish facility at SciLifeLab for their help.

Disclosures

The authors declare no conflicts of interest.

Supplementary Materials

See Supplement 1 for supporting content.

References

1. C. A. MacRae and R. T. Peterson, “Zebrafish-Based Small Molecule Discovery,” Chem. Biol. 10(10), 901–908 (2003). [CrossRef]  

2. S. Avagyan and L. I. Zon, “Fish to Learn: Insights into Blood Development and Blood Disorders from Zebrafish Hematopoiesis,” Hum. Gene Ther. 27(4), 287–294 (2016). [CrossRef]  

3. G. K. Varshney, B. Carrington, W. Pei, K. Bishop, Z. Chen, C. Fan, L. Xu, M. Jones, M. C. LaFave, J. Ledin, R. Sood, and S. M. Burgess, “A high-throughput functional genomics workflow based on CRISPR/Cas9-mediated targeted mutagenesis in zebrafish,” Nat. Protoc. 11(12), 2357–2375 (2016). [CrossRef]  

4. O. Ronneberger, K. Liu, M. Rath, D. Rueß, T. Mueller, H. Skibbe, B. Drayer, T. Schmidt, A. Filippi, R. Nitschke, T. Brox, H. Burkhardt, and W. Driever, “ViBE-Z: a framework for 3D virtual colocalization analysis in zebrafish larval brains,” Nat. Methods 9(7), 735–742 (2012). [CrossRef]  

5. R. Tomer, K. Khairy, F. Amat, and P. J. Keller, “Quantitative high-speed imaging of entire developing embryos with simultaneous multiview light-sheet microscopy,” Nat. Methods 9(7), 755–763 (2012). [CrossRef]  

6. C. Pardo-Martin, A. Allalou, J. Medina, P. M. Eimon, C. Wählby, and M. Fatih Yanik, “High-throughput hyperdimensional vertebrate phenotyping,” Nat. Commun. 4(1), 1467 (2013). [CrossRef]  

7. D. Nguyen, P. J. Marchand, A. L. Planchette, J. Nilsson, M. Sison, J. Extermann, A. Lopez, M. Sylwestrzak, J. Sordet-Dessimoz, A. Schmidt-Christensen, D. Holmberg, D. Van De Ville, and T. Lasser, “Optical projection tomography for rapid whole mouse brain imaging,” Biomed. Opt. Express 8(12), 5637 (2017). [CrossRef]  

8. T. Correia, N. Lockwood, S. Kumar, J. Yin, M.-C. Ramel, N. Andrews, M. Katan, L. Bugeon, M. J. Dallman, J. McGinty, P. Frankel, P. M. W. French, and S. Arridge, “Accelerated Optical Projection Tomography Applied to In Vivo Imaging of Zebrafish,” PLoS One 10(8), e0136213 (2015). [CrossRef]  

9. P. P. Vallejo Ramirez, J. Zammit, O. Vanderpoorten, F. Riche, F.-X. Blé, X.-H. Zhou, B. Spiridon, C. Valentine, S. E. Spasov, P. W. Oluwasanya, G. Goodfellow, M. J. Fantham, O. Siddiqui, F. Alimagham, M. Robbins, A. Stretton, D. Simatos, O. Hadeler, E. J. Rees, F. Ströhl, R. F. Laine, and C. F. Kaminski, “OptiJ: Open-source optical projection tomography of large organ samples,” Sci. Rep. 9(1), 15693 (2019). [CrossRef]  

10. M. D. Wong, J. Dazai, J. R. Walls, N. W. Gale, and R. M. Henkelman, “Design and Implementation of a Custom Built Optical Projection Tomography System,” PLoS One 8(9), e73491 (2013). [CrossRef]  

11. T. Watson, N. Andrews, S. Davis, L. Bugeon, M. D. Dallman, and J. McGinty, “OPTiM: Optical projection tomography integrated microscope using open-source hardware and software,” PLoS One 12(7), e0180309 (2017). [CrossRef]  

12. J. van der Horst and J. Kalkman, “Image resolution and deconvolution in optical tomography,” Opt. Express 24(21), 24460 (2016). [CrossRef]  

13. U. J. Birk, A. Darrell, N. Konstantinides, A. Sarasa-Renedo, and J. Ripoll, “Improved reconstructions and generalized filtered back projection for optical projection tomography,” Appl. Opt. 50(4), 392 (2011). [CrossRef]

14. U. J. Birk, M. Rieckher, N. Konstantinides, A. Darrell, A. Sarasa-Renedo, H. Meyer, N. Tavernarakis, and J. Ripoll, “Correction for specimen movement and rotation errors for in-vivo Optical Projection Tomography,” Biomed. Opt. Express 1(1), 87 (2010). [CrossRef]  

15. S. Zhu, D. Dong, U. J. Birk, M. Rieckher, N. Tavernarakis, X. Qu, J. Liang, J. Tian, and J. Ripoll, “Automated Motion Correction for In Vivo Optical Projection Tomography,” IEEE Trans. Med. Imaging 31(7), 1358–1371 (2012). [CrossRef]

16. D. Dong, S. Zhu, V. Kumar, J. V. Stein, S. Oehler, C. Savakis, J. Tian, and J. Ripoll, “Automated Recovery of the Center of Rotation in Optical Projection Tomography in the Presence of Scattering,” IEEE J. Biomed. Health Inform. 17(1), 198–204 (2013). [CrossRef]

17. X. Tang, D. M. van der Zwaan, A. Zammit, K. F. D. Rietveld, and F. J. Verbeek, “Fast Post-Processing Pipeline for Optical Projection Tomography,” IEEE Trans. Nanobioscience 16(5), 367–374 (2017). [CrossRef]  

18. Y. Min, G. Haidong, L. Xingdong, M. Fanyong, and W. Dongbo, “A new method to determine the center of rotation shift in 2D-CT scanning system using image cross correlation,” NDT&E Int. 46(1), 48–54 (2012). [CrossRef]  

19. N. T. Vo, M. Drakopoulos, R. C. Atwood, and C. Reinhard, “Reliable method for calculating the center of rotation in parallel-beam tomography,” Opt. Express 22(16), 19078 (2014). [CrossRef]  

20. E. Figueiras, A. M. Soto, D. Jesus, M. Lehti, J. Koivisto, J. E. Parraga, J. Silva-Correia, J. M. Oliveira, R. L. Reis, M. Kellomäki, and J. Hyttinen, “Optical projection tomography as a tool for 3D imaging of hydrogels,” Biomed. Opt. Express 5(10), 3443 (2014). [CrossRef]  

21. H. Yu, S. Xia, C. Wei, Y. Mao, D. Larsson, X. Xiao, P. Pianetta, Y.-S. Yu, and Y. Liu, “Automatic projection image registration for nanoscale X-ray tomographic reconstruction,” J. Synchrotron Radiat. 25(6), 1819–1826 (2018). [CrossRef]  

22. C.-C. Cheng, Y.-T. Ching, P.-H. Ko, and Y. Hwu, “Correction of center of rotation and projection angle in synchrotron X-ray computed tomography,” Sci. Rep. 8(1), 9884 (2018). [CrossRef]  

23. A. del Pozo, R. Manuel, A. B. Iglesias Gonzalez, H. K. Koning, J. Habicher, H. Zhang, A. Allalou, K. Kullander, and H. Boije, “Behavioral Characterization of dmrt3a Mutant Zebrafish Reveals Crucial Aspects of Vertebrate Locomotion through Phenotypes Related to Acceleration,” eNeuro 7(3), ENEURO.0047-20.2020 (2020). [CrossRef]  

24. C. Thisse and B. Thisse, “High-resolution in situ hybridization to whole-mount zebrafish embryos,” Nat. Protoc. 3(1), 59–69 (2008). [CrossRef]  

25. M. B. Walker and C. B. Kimmel, “A two-color acid-free cartilage and bone stain for zebrafish larvae,” Biotech. Histochem. 82(1), 23–28 (2007). [CrossRef]  

26. J. Sharpe, “Optical Projection Tomography as a Tool for 3D Microscopy and Gene Expression Studies,” Science 296(5567), 541–545 (2002). [CrossRef]  

27. A. Allalou, Y. Wu, M. Ghannad-Rezaie, P. M. Eimon, and M. F. Yanik, “Automated deep-phenotyping of the vertebrate brain,” eLife 6, 1–26 (2017). [CrossRef]  

28. A. M. Petzold, V. M. Bedell, N. J. Boczek, J. J. Essner, D. Balciunas, K. J. Clark, and S. C. Ekker, “SCORE Imaging: Specimen in a Corrected Optical Rotational Enclosure,” Zebrafish 7(2), 149–154 (2010). [CrossRef]  

29. W. van Aarle, W. J. Palenstijn, J. De Beenhouwer, T. Altantzis, S. Bals, K. J. Batenburg, and J. Sijbers, “The ASTRA Toolbox: A platform for advanced algorithm development in electron tomography,” Ultramicroscopy 157, 35–47 (2015). [CrossRef]  

30. J. R. Walls, J. G. Sled, J. Sharpe, and R. M. Henkelman, “Correction of artefacts in optical projection tomography,” Phys. Med. Biol. 50(19), 4645–4665 (2005). [CrossRef]  

31. T. Rohlfing, R. Brandt, C. R. Maurer, and R. Menzel, “Bee brains, B-splines and computational democracy: generating an average shape atlas,” in Proceedings IEEE Workshop on Mathematical Methods in Biomedical Image Analysis (MMBIA 2001) (IEEE Comput. Soc., 2001), pp. 187–194.

32. W. S. Noble, “How does multiple testing correction work?” Nat. Biotechnol. 27(12), 1135–1137 (2009). [CrossRef]  

33. S. L. Simpson, R. G. Lyday, S. Hayasaka, A. P. Marsh, and P. J. Laurienti, “A permutation testing framework to compare groups of brain networks,” Front. Comput. Neurosci. 7, 171 (2013). [CrossRef]
