
Dynamic tracking of onion-like carbon nanoparticles in cancer cells using limited-angle holographic tomography with self-supervised learning

Open Access

Abstract

This research presents a novel approach for the dynamic monitoring of onion-like carbon nanoparticles inside colorectal cancer cells. Onion-like carbon nanoparticles are widely used in photothermal cancer therapy, and precise 3D tracking of their distribution is crucial. We proposed a limited-angle digital holographic tomography technique with unsupervised learning to achieve rapid and accurate monitoring. A key innovation is our internal learning neural network. This network addresses the information limitations of limited-angle measurements by directly mapping coordinates to measured data and reconstructing phase information at unmeasured angles without external training data. We validated the network using standard SiO2 microspheres. Subsequently, we reconstructed the 3D refractive index of onion-like carbon nanoparticles within cancer cells at various time points. Morphological parameters of the nanoparticles were quantitatively analyzed to understand their temporal evolution, offering initial insights into the underlying mechanisms. This methodology provides a new perspective for efficiently tracking nanoparticles within cancer cells.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Cancer is a prevalent global medical issue and remains a leading cause of mortality worldwide. Current conventional therapies, including chemotherapy and radiotherapy, may induce severe side effects and yield unsatisfactory prognoses [1,2]. Consequently, there is an imperative need for expedited and precise treatment methods that yield more effective results. Photothermal therapy (PTT) employs materials known for their high photothermal conversion efficiency to generate substantial heat when exposed to laser irradiation, specifically targeting and eliminating cancer cells [3]. With its precision and non-invasive characteristics, PTT has attracted substantial scholarly attention [4]. Onion-like carbon (OLC) nanoparticles, being carbon-based materials with high biocompatibility, have drawn considerable interest in PTT due to their low toxicity, high cellular uptake rate, and remarkable photothermal conversion efficiency [5,6]. The real-time monitoring of the dynamic three-dimensional distribution of OLC inside cancer cells is essential for comprehending the interaction between OLC and cells and for the development of accurate photothermal conversion models.

At present, an array of imaging techniques has been employed to visualize nanoparticle distribution within cells. Scanning electron microscopy [7] generates surface images by scanning the sample with a focused electron beam. However, the sample preparation process, which involves slicing the sample into 50-200 nm sections, may disrupt the nanoparticles' distribution and precludes the dynamic observation of living cells. Laser scanning microscopy, an enhancement of fluorescence microscopy [8], utilizes a laser scanning device to boost optical imaging resolution and facilitate tomography. Nevertheless, this method requires immunofluorescence and ion fluorescence labeling probes, precluding non-contact, non-destructive observation of cells.

Digital holographic tomography (DHT) is a powerful quantitative phase imaging technique that enables three-dimensional analysis of a sample's internal structure by measuring its refractive index (RI) distribution [9,10]. Its label-free and non-invasive nature eliminates the need for exogenous markers or dyes [11], minimizing potential disturbances to the sample caused by photobleaching or phototoxicity. DHT's advantages have made it a highly sought-after tool in the three-dimensional study of intracellular nanoparticles. For instance, A. Géloën et al. and D. Pirone et al. successfully obtained the 3D spatial distributions of nanodiamonds and nanographene oxide within cells, respectively [12,13]. D. K. Ikliptikawati et al. utilized DHT to investigate the aggregation and disaggregation processes of intracellular nanodiamonds by tracking refractive index changes [14]. Furthermore, W. Sung et al. integrated 3D live cell imaging with a Monte Carlo approach to predict the survival curves of breast cancer cells incubated with gold nanoparticles [15]. These studies underscore DHT's position as an essential technique for precise localization and quantitative measurement of nanoparticles within cells, allowing for the conversion of 3D RI data into valuable biochemical parameters.

For observing the dynamic evolution of adherent cells, limited-angle DHT is usually used to reduce the scanning time and related costs required for sampling [16]. However, the limited angular coverage may lead to insufficient information, degrading the quality of the RI reconstruction [17]. Therefore, three-dimensional tomographic reconstruction strategies at limited angles have attracted extensive research. A prevalent solution involves iterative algorithms with regularization constraints based on the external shape or internal structural attributes of the object [18-20], which may bring computational difficulty for objects with unknown structure. In recent years, with the advancement of deep learning, enhancing the quality of limited-angle tomographic reconstruction via neural networks has attracted attention [21,22]. This typically involves training on large datasets to learn artifact information, enabling the establishment of an end-to-end mapping from a low-quality three-dimensional RI distribution to a high-quality version. This approach is challenging when high-quality RI ground truth is unavailable. The neural radiance field model, a novel deep learning paradigm in computer vision, generates images from novel perspectives by learning a mapping from spatial position and viewing direction to the rendered image [23]. It is a self-supervised learning approach that requires no training data beyond the measured fields themselves. This offers a new perspective for tomographic reconstruction: images at unmeasured angles can be supplemented by establishing a mapping between the object beam direction and the image, thereby improving the quality of 3D reconstruction. Presently, it has found use in X-ray tomography and intensity diffraction tomography [24,25].

In this study, we proposed a limited-angle DHT approach configured with an internal learning neural network (ILNN), based on the neural radiance field model, for dynamically tracking OLC nanoparticles in cancer cells. The ILNN is a self-supervised method that works without ground truth or external training datasets. It leverages the inherent correlations in the images to establish a mapping from sampling angle and position coordinates to phase value, enhancing the quality of 3D tomographic reconstruction by supplementing phase images at unmeasured angles. We first applied the limited-angle DHT configured with the ILNN to standard-sized microspheres to evaluate the effectiveness of the approach and selected the optimal incident angle and sampling interval using the structural similarity index measure (SSIM). We then employed it to track the temporal evolution of OLC nanoparticles in three colorectal cancer cells, which enabled us to quantitatively calculate the changes in surface area and volume of these nanoparticles over time and to conduct an initial analysis of the underlying reasons. This approach offers a novel perspective for detecting dynamic changes of nanoparticles in living cells.

2. Methods

2.1 Optical experiment setup and reconstruction algorithm

2.1.1 DHT setup and phase reconstruction

The DHT system employed in the experiment is shown in Fig. 1 and is based on an off-axis Mach-Zehnder holographic interferometer. The light emitted by a solid-state laser (MSL-U-532, 100 mW, 532 nm, China) is split into the object beam and the reference beam by a polarizing beam splitter (PBS). These two beams are separately filtered and expanded through spatial filters (SF) to obtain collimated plane waves. Before expansion and collimation, the reference beam passes through an attenuator and a half-wave plate (HWP) to adjust its intensity and polarization, enhancing the contrast of the interference fringes. The object beam changes its direction by reflecting off two mirrors driven by a rotating motor. It then illuminates the sample, and the transmitted light fields of the sample at various angles are captured through a microscope objective (MO, Olympus, 60×, NA = 0.7, Japan). The two beams converge at the camera's surface to produce an interference image, namely a digital hologram, which is recorded by a CCD camera (2048 × 2048 pixels, 5.5 µm, PointGrey, Canada).

Fig. 1. Schematic of DHT setup. (M: Mirror, A: Attenuator, PBS: Polarizing beam splitter, HWP: Half-wave plate, SF: Spatial filter, BS: Beam splitter, TL: Tube lens, MO: Microscope objective)

After capturing the image, we utilized the diffraction reconstruction method to generate the phase image. Initially, the hologram undergoes filtering in the frequency domain, preserving only the -1-order image capable of generating a real image, thereby eliminating interference from the zero-order image and the conjugate image on the holographic reconstruction. The -1-order image is then shifted to the origin in the frequency domain to rectify the off-axis angle. Subsequently, the angular spectrum algorithm [26] is utilized to propagate the diffracted light field to the image plane, rectifying the reconstruction error induced by defocusing. The propagation distance is determined through an auto-focusing algorithm [27]. For the aberration introduced by the optical system itself, compensation is achieved by subtracting the phase distribution of the reference hologram recorded at the same angle but without any object. Finally, the minimum-norm phase unwrapping method [28] is used to restore the real phase distribution, which solely contains the phase information of the object.
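As an illustrative sketch only (not the authors' implementation), the NumPy snippet below shows the two core numerical steps of this pipeline: selecting and re-centering the -1 order of an off-axis hologram in the Fourier domain, and refocusing the demodulated field with the angular spectrum method. The carrier position, crop radius, pixel pitch scaling, and propagation distance in the usage comment are hypothetical values, and 2D phase unwrapping is left as a placeholder.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_size, distance):
    """Propagate a complex field by `distance` using the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    arg = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))      # evanescent components dropped
    transfer = np.exp(1j * kz * distance) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

def demodulate_offaxis(hologram, carrier_shift, crop_radius):
    """Select the -1 order in the Fourier domain and shift it back to the origin."""
    spectrum = np.fft.fftshift(np.fft.fft2(hologram))
    cy, cx = np.array(hologram.shape) // 2
    sy, sx = carrier_shift                               # carrier offset from the spectrum center
    yy, xx = np.ogrid[:hologram.shape[0], :hologram.shape[1]]
    mask = (yy - (cy + sy)) ** 2 + (xx - (cx + sx)) ** 2 <= crop_radius ** 2
    order = np.roll(spectrum * mask, (-sy, -sx), axis=(0, 1))   # move the order to the center
    return np.fft.ifft2(np.fft.ifftshift(order))

# Hypothetical usage: refocus the demodulated field, then take the phase relative to a
# no-object reference field recorded at the same angle, and apply 2D phase unwrapping.
# field = angular_spectrum_propagate(demodulate_offaxis(holo, (120, 80), 60), 532e-9, 5.5e-6 / 60, z_focus)
# phase = unwrap2d(np.angle(field / field_ref))   # e.g., a minimum-norm unwrapping routine
```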

2.1.2 Optical diffraction tomography reconstruction algorithm

Deriving the three-dimensional scattering potential from the two-dimensional optical field can be viewed as an ill-posed inverse problem. The optical diffraction tomography algorithm we employed solves this problem by establishing a weak scattering model under the 1st Born approximation [29] to obtain an approximate solution.

First, the transmitted optical field $U(\mathbf{r})$ is expressed as the superposition of incident optical field $U_{i}(\mathbf{r})$ and scattered optical field $U_{s}(\mathbf{r})$, as shown in Eq. (1):

$$U(\mathbf{r}) = U_{i}(\mathbf{r}) + U_{s}(\mathbf{r})$$

Since $U_{i}(\mathbf{r})$ is a monochromatic plane wave satisfying the homogeneous wave equation, Eq. (1) can be transformed into Eq. (2) by introducing the Green's function:

$$U_{s}(\mathbf{r}) = \int {G(\mathbf{r - r^{\prime}})} f(\mathbf{r^{\prime}})U(\mathbf{r^{\prime}})d\mathbf{r^{\prime}}$$
where, $f(\mathbf{r})$ represents the sought scattering potential.

In the three-dimensional situation, the Green's function is in spherical wave form, as shown in Eq. (3):

$$G(\mathbf{r - r^{\prime}}) = \frac{{\exp (ik_{m}|\mathbf{r - r^{\prime}}|)}}{{4\pi |\mathbf{r - r^{\prime}}|}}$$
where, $k_{m}$ is the wave number.

According to the 1st Born approximation, when Eq. (4):

$$U_{s}(\mathbf{r}) \ll U_{i}(\mathbf{r})$$
is satisfied, Eq. (5) can be obtained:
$$U_{s}(\mathbf{r}) \approx U_{B}(\mathbf{r}) = \int {G(\mathbf{r - r^{\prime}})} f(\mathbf{r^{\prime}})U_{i}(\mathbf{r^{\prime}})d\mathbf{r^{\prime}}$$
where, $U_{B}(\mathbf{r})$ is the scattered field under the 1st Born approximation.

According to Eq. (5), the sought scattering potential can be solved by the known incident optical field.

Assuming the incident field propagates along the z-axis, and the CCD is located at $z = l_{d}$, substituting Eq. (3) into Eq. (5) and performing a two-dimensional Fourier transform, Eq. (6) is obtained:

$$\begin{aligned} \widetilde{U}_{B, l_d}(u, v) &= \int_{ - \infty }^{ + \infty } {\int_{ - \infty }^{ + \infty } U_{B, l_d}(x, y) }\exp ( - i(ux + vy))dxdy\\ &= \frac{i}{{2\sqrt {k_m^2 - {u^2} - {v^2}} }}\exp (i\sqrt {k_m^2 - {u^2} - {v^2}} {l_d})\widetilde f(u,v,\sqrt {k_m^2 - {u^2} - {v^2}} - {k_m}),\sqrt {{u^2} + {v^2}} \le {k_m} \end{aligned}$$

Letting $t = \sqrt {k_m^2 - {u^2} - {v^2}} - {k_m}$, we have ${u^2} + {v^2} + {(t + {k_m})^2} = k_m^2$, which indicates that the two-dimensional spectrum of the transmitted light field corresponds to a hemispherical shell in the three-dimensional spectrum of the scattering potential, with the line connecting the sphere's center and the origin parallel to the incident light's propagation direction. Consequently, by continuously changing the propagation direction of the incident field over a 360° range, the spherical shells gradually fill different regions of the three-dimensional spectrum. We then performed an inverse Fourier transform on the filled spectrum to obtain the object's three-dimensional scattering potential, as shown in Fig. 2.
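To make the mapping of Eq. (6) concrete, the sketch below places the two-dimensional spectrum of a scattered field measured at normal incidence onto the corresponding Ewald hemispherical shell of the three-dimensional scattering-potential spectrum using nearest-neighbour gridding. The grid size, the gridding scheme, and the averaging of repeated bins are illustrative assumptions, not the implementation used in this work.

```python
import numpy as np

def fill_shell(u_scat, wavelength, n_medium, pixel_size, l_d, n_grid):
    """Fill one Ewald shell of the 3D scattering-potential spectrum from one 2D scattered field."""
    k_m = 2 * np.pi * n_medium / wavelength
    ny, nx = u_scat.shape
    u = 2 * np.pi * np.fft.fftfreq(nx, d=pixel_size)
    v = 2 * np.pi * np.fft.fftfreq(ny, d=pixel_size)
    U, V = np.meshgrid(u, v)
    spec2d = np.fft.fft2(u_scat)

    inside = U ** 2 + V ** 2 < k_m ** 2                     # propagating components only
    w = np.sqrt(np.maximum(k_m ** 2 - U ** 2 - V ** 2, 0.0))
    # Invert Eq. (6): F(u, v, w - k_m) = -2i * w * exp(-i * w * l_d) * U_B(u, v)
    f_vals = -2j * w * np.exp(-1j * w * l_d) * spec2d

    dk = 2 * np.pi / (n_grid * pixel_size)                  # spacing of the 3D k-grid (assumed)
    iu = np.rint(U / dk).astype(int) % n_grid
    iv = np.rint(V / dk).astype(int) % n_grid
    iw = np.rint((w - k_m) / dk).astype(int) % n_grid       # kz index of the hemispherical shell

    grid = np.zeros((n_grid, n_grid, n_grid), dtype=complex)
    hits = np.zeros((n_grid, n_grid, n_grid), dtype=float)
    np.add.at(grid, (iv[inside], iu[inside], iw[inside]), f_vals[inside])
    np.add.at(hits, (iv[inside], iu[inside], iw[inside]), 1.0)
    return grid, hits

# Shells from all rotation angles would be accumulated by rotating the (u, v, w - k_m)
# coordinates accordingly, bins averaged by the hit counter, and an inverse 3D FFT then
# yields the scattering potential f(r).
```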

After that, Eq. (7) can be used to ascertain the three-dimensional RI distribution.

$$n(\mathbf{r}) = {n_m}\sqrt {\frac{{f(\mathbf{r})}}{{k_m^2}} + 1}$$
where, $n(\mathbf{r})$ is the three-dimensional RI distribution of the object, and ${n_m}$ is the RI of the medium.

 figure: Fig. 2.

Fig. 2. The diagram of optical diffraction tomography reconstruction algorithm.

Download Full Size | PDF

When samples have no absorption, their Fourier spectrum adheres to Hermitian symmetry [30]. Leveraging this property, we reconstructed the lower half of the spectrum by taking the complex conjugate transpose of the upper half containing positive $k_z$ components, thereby enhancing the accuracy of the reconstruction results. Following spectrum reconstruction, and building upon the prior knowledge that the RI of the evaluated sample never falls below the RI of the background medium, we applied an iterative constraint to the three-dimensional RI distribution to rectify the underestimation of RI during reconstruction [31]. We set the iteration count to 10. Upon completion of the iterations, we obtained the final three-dimensional RI distribution of the object.
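A minimal sketch of these post-processing steps is given below: Hermitian completion of the missing half-spectrum, conversion of the scattering potential to RI via Eq. (7), and a non-negativity iteration that alternates between the real-space constraint and consistency with the measured spectrum bins. The specific alternating scheme and the spectral support mask handling are assumptions; the text does not detail them beyond citing [31] and the 10-iteration count.

```python
import numpy as np

def hermitian_complete(f_spec):
    """Fill unmeasured bins with the conjugate of the mirrored spectrum (no-absorption case)."""
    missing = (np.abs(f_spec) == 0)
    conj_flip = np.conj(f_spec[::-1, ::-1, ::-1])
    conj_flip = np.roll(conj_flip, (1, 1, 1), axis=(0, 1, 2))   # realign the k = 0 bin after reversal
    return np.where(missing, conj_flip, f_spec)

def spectrum_to_ri(f_spec, k_m, n_m):
    """Inverse FFT of the scattering-potential spectrum, then Eq. (7)."""
    f_r = np.fft.ifftn(f_spec)
    return n_m * np.sqrt(f_r / k_m ** 2 + 1)

def nonnegativity_iteration(f_spec, support, k_m, n_m, n_iter=10):
    """Iteratively enforce n(r) >= n_m in real space while keeping the measured spectrum bins."""
    spec = f_spec.copy()
    for _ in range(n_iter):
        n_r = spectrum_to_ri(spec, k_m, n_m)
        n_r = np.where(np.real(n_r) < n_m, n_m, n_r)            # clip RI below the medium value
        f_r = k_m ** 2 * ((n_r / n_m) ** 2 - 1)                 # back to scattering potential, Eq. (7) inverted
        spec = np.where(support, f_spec, np.fft.fftn(f_r))      # keep the measured bins unchanged
    return spectrum_to_ri(spec, k_m, n_m)
```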

2.2 Neural network architecture and performance evaluation

2.2.1 Structure of ILNN and training process

In the field of computer vision, the neural radiance field model constructs an implicit three-dimensional representation through self-supervised learning. It synthesizes images at new perspectives from a series of captured images at known viewpoints, along with the intrinsic and extrinsic parameters of the camera. We first applied this concept to limited-angle DHT reconstruction by introducing the ILNN, aiming to address the problem of insufficient information caused by large sampling intervals. The ILNN creates a mapping from the measured angle and position coordinates to the phase value in order to generate phase images at unmeasured angles, leveraging the inherent correlations in the images. The ILNN only requires the measured fields of a single sample at different angles for network training, enabling high-fidelity three-dimensional reconstruction when faced with limited samples and difficulty in acquiring large training datasets.

The workflow of ILNN. The input to our ILNN includes the object beam direction and each pixel's position coordinate $({x_i},{y_j})$. The output is the corresponding phase value ${P_{({x_i},{y_j})}}$ at each coordinate and for each object beam direction, as illustrated in Fig. 3(a). We defined the object beam direction using two variables: the incident angle $\theta$ and the rotation angle $\alpha$. The incident angle is the angle between the object beam and the optical axis; the rotation angle refers to the angle of the rotating motor, which is mounted perpendicular to the optical axis, as shown in Fig. 3(b). Since we train the ILNN separately for each sample and each incident angle $\theta$, the object beam direction simplifies to a single dimension, the rotation angle $\alpha$, during a single training and inference process.

Fig. 3. (a) The schematic of phase values with position coordinates; (b) The diagram of incident angle and rotation angle; (c) The workflow of ILNN. The orange box represents the training process, the blue box represents the inference process.

Consequently, during the training process, the ILNN takes a set of three-dimensional vectors $({\alpha _k},{x_i},{y_j})$ as input, with the corresponding phase values as the ground truth. We defined the difference in rotation angles between two adjacent transmitted fields as the sampling interval, denoted as $\Delta \alpha$. The rotation angles are then ${\alpha _k} = k\Delta \alpha$ ($0 \le k < \frac{360}{\Delta \alpha}$, $k \in \mathbb{N}$). To reduce unnecessary memory consumption, we cropped the phase image to the minimum size that can contain the object being measured. Assuming the cropped phase image size is $M \times N$, the number of mapping pairs is given by Eq. (8):

$$Nu{m_{mapping\_pairs}} = \frac{{M \times N \times 360}}{{\Delta \alpha }}$$

After training, a set of three-dimensional vectors $(2k,{x_i},{y_j})$ is fed into the trained ILNN, which consists of a Fourier feature mapping (FFM) and a multi-layer perceptron (MLP). Arranging the output phase values sequentially according to angle and position coordinates yields a sequence of 180 phase images with a sampling interval of 2° and a size of $M \times N$ at a given incident angle $\theta$. This enables the supplementation of phase images at unmeasured angles. The workflow of the ILNN is illustrated in Fig. 3(c).
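A minimal sketch of how the training pairs of Eq. (8) and the inference grid could be assembled is shown below. The measured phase stack `phase_meas` (of shape $360/\Delta\alpha \times M \times N$) and the choice of raw, unnormalized coordinates are illustrative assumptions.

```python
import numpy as np

def build_training_pairs(phase_meas, delta_alpha):
    """Return input vectors (alpha_k, x_i, y_j) and target phase values, per Eq. (8)."""
    n_ang, M, N = phase_meas.shape                  # n_ang = 360 / delta_alpha
    alphas = np.arange(n_ang) * delta_alpha
    A, X, Y = np.meshgrid(alphas, np.arange(M), np.arange(N), indexing="ij")
    inputs = np.stack([A.ravel(), X.ravel(), Y.ravel()], axis=1).astype(np.float32)
    targets = phase_meas.ravel().astype(np.float32)
    return inputs, targets                          # len = M * N * 360 / delta_alpha

def build_inference_grid(M, N, step=2):
    """Inputs (2k, x_i, y_j) for the 180-frame sequence predicted after training."""
    alphas = np.arange(0, 360, step)
    A, X, Y = np.meshgrid(alphas, np.arange(M), np.arange(N), indexing="ij")
    return np.stack([A.ravel(), X.ravel(), Y.ravel()], axis=1).astype(np.float32)
```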

MLP architecture. The core component of our ILNN is a multi-layer perceptron (MLP), with its structure illustrated in Fig. 4. The MLP in our study comprises an input layer, 17 hidden layers, and an output layer, all fully interconnected. The structure of the first 16 hidden layers is identical, each featuring 256 neurons and utilizing the Rectified Linear Unit (ReLU) as the activation function. After every two hidden layers, a skip connection is incorporated that directly links the hidden layer’s output to the input layer, thereby mitigating overfitting risks and enhancing training efficiency [32]. The last hidden layer, which contains 128 neurons, is directly connected to the output layer, without any activation function.
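A PyTorch sketch of an MLP of the kind described above is given below: 16 identical 256-unit ReLU hidden layers with the encoded input re-introduced every two layers, a 128-unit final hidden layer, and a scalar output without activation. Implementing the skip connection as a concatenation of the encoded input (rather than an addition) is an assumption the text leaves open.

```python
import torch
import torch.nn as nn

class ILNNMlp(nn.Module):
    def __init__(self, in_dim, width=256, n_hidden=16, skip_every=2):
        super().__init__()
        self.skip_every = skip_every
        layers, d = [], in_dim
        for i in range(n_hidden):
            layers.append(nn.Linear(d, width))
            # after every `skip_every` layers the encoded input is concatenated back in
            d = width + in_dim if (i + 1) % skip_every == 0 else width
        self.hidden = nn.ModuleList(layers)
        self.pre_out = nn.Linear(d, 128)      # last hidden layer, 128 neurons, no activation
        self.out = nn.Linear(128, 1)          # output layer: a single phase value

    def forward(self, x_encoded):
        h = x_encoded
        for i, layer in enumerate(self.hidden):
            h = torch.relu(layer(h))
            if (i + 1) % self.skip_every == 0:
                h = torch.cat([h, x_encoded], dim=-1)   # skip connection to the input
        return self.out(self.pre_out(h))

# Hypothetical usage: model = ILNNMlp(in_dim=60)  # 3 coordinates x 2L Fourier features, L = 10
```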

Fig. 4. The framework of MLP.

Fourier Feature Mapping. Prior to feeding the input into the MLP, we employed FFM to expand its frequency components, thereby ensuring adequate representation of high-frequency variations in the input dataset [33]. The corresponding calculation is depicted in Eq. (9).

$$\textrm{FFM}(\mathbf{X}) = \left( \begin{array}{c} \sin ({k_1}\mathbf{X}),\cos ({k_1}\mathbf{X})\\ \ldots \\ \sin ({k_i}\mathbf{X}),\cos ({k_i}\mathbf{X})\\ \ldots \\ \sin ({k_L}\mathbf{X}),\cos ({k_L}\mathbf{X}) \end{array} \right)$$
where, $\mathbf{X}$ is the input vector, ${k_i}$ is the coefficient, and L is the total number of expanded frequency components.

In the original FFM, ${k_i} = {2^{i - 1}}\pi $ [34]. In order to reduce the overfitting of high-frequency noise, we set ${k_i} = \frac{{i\pi }}{2}$ and $L = 10$.
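A minimal sketch of the FFM of Eq. (9) with the modified coefficients ${k_i} = i\pi/2$ and $L = 10$ is shown below. Normalizing the raw $(\alpha, x, y)$ coordinates to a bounded range before encoding would likely be needed in practice but is an assumption not stated in the text.

```python
import math
import torch

def fourier_feature_mapping(x, L=10):
    """x: (batch, 3) tensor of (alpha_k, x_i, y_j); returns (batch, 2*L*3) features."""
    k = torch.arange(1, L + 1, dtype=x.dtype, device=x.device) * (math.pi / 2)  # k_i = i*pi/2
    proj = x.unsqueeze(-1) * k                                     # (batch, 3, L)
    feats = torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)  # (batch, 3, 2L)
    return feats.flatten(start_dim=1)                              # (batch, 60) for L = 10
```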

Loss function. We selected the standard ${L_2}$ norm [35] as the loss function, as shown in Eq. (10).

$$loss = \frac{1}{{M \times N \times \frac{{360}}{{\Delta \alpha }}}}\sum\nolimits_{m = 1}^{M \times N \times \frac{{360}}{{\Delta \alpha }}} {\left\| {\textrm{ILNN}({\mathbf{X}_m}) - {P_m}} \right\|_2^2}$$
where, ${\mathbf{X}_m}$ is the input vector $({\alpha _k},{x_i},{y_j})$, ${P_m}$ is the measured phase value ${P_{({\alpha _k},{x_i},{y_j})}}$ used as the ground truth, $M \times N$ is the size of a single image plane, and $\Delta \alpha$ is the sampling interval. Thus, $M \times N \times \frac{{360}}{{\Delta \alpha }}$ is the total number of input vectors.

Other details of ILNN. To train the network, we used the Adam optimizer [36] with 500 epochs and a batch size of 1024. The learning rate was set to decay incrementally to optimize the convergence of the loss function [37]. The initial learning rate was set to $10^{-3}$ and decreased to $10^{-5}$ throughout the training process. We reserved 5% of the measured data as the validation set, which is excluded from training but used to evaluate model performance and improve generalization. Notably, the ILNN infers the phase value corresponding to a single 3D vector in approximately $3.8 \times 10^{-6}$ seconds. This translates to an inference time of roughly 3.98 seconds for a 1024 × 1024 phase image.
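The sketch below combines the pieces above (reusing the `fourier_feature_mapping` and `ILNNMlp` sketches) into a training loop with Adam, 500 epochs, a batch size of 1024, the L2 loss of Eq. (10), and a 5% validation split. The exponential decay schedule from $10^{-3}$ to $10^{-5}$ is an assumption; the text only states the start and end rates.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

def train_ilnn(model, inputs, targets, epochs=500, batch_size=1024):
    data = TensorDataset(torch.from_numpy(inputs), torch.from_numpy(targets))
    n_val = int(0.05 * len(data))                        # 5% of the mapping pairs held out
    train_set, val_set = random_split(data, [len(data) - n_val, n_val])
    train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=batch_size)

    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    gamma = (1e-5 / 1e-3) ** (1.0 / epochs)              # reaches 1e-5 after `epochs` steps
    sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=gamma)
    mse = torch.nn.MSELoss()                             # mean squared L2 error, Eq. (10)

    for _ in range(epochs):
        model.train()
        for x, p in train_loader:
            opt.zero_grad()
            loss = mse(model(fourier_feature_mapping(x)).squeeze(-1), p)
            loss.backward()
            opt.step()
        sched.step()
        model.eval()
        with torch.no_grad():                            # validation loss, monitored only
            val_loss = sum(mse(model(fourier_feature_mapping(x)).squeeze(-1), p).item()
                           for x, p in val_loader)
    return model
```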

2.2.2 Assessment criteria of network performance

We assessed the network performance by quantitatively comparing the similarity between the predicted phase image and the measured phase image, as well as the similarity between the three-dimensional RI distribution reconstructed using the phase image sequence output from ILNN and the ground truth. We employed the Structural Similarity Index Measure (SSIM) as the assessment criteria [38]. The SSIM primarily evaluates the luminance, contrast, and structure of the image. The simplified calculation equation for the SSIM is shown in Eq. (11):

$$\textrm{SSIM}({P_{mea}},{P_{pre}}) = \frac{{(2{\mu _{mea}}{\mu _{pre}} + {C_1})(2{\sigma _{mea,pre}} + {C_2})}}{{(\mu _{mea}^2 + \mu _{pre}^2 + {C_1})(\sigma _{mea}^2 + \sigma _{pre}^2 + {C_2})}}$$
where, ${P_{mea}}$ and ${P_{pre}}$ are the measured and predicted phase images, respectively; ${\mu _{mea}}$ and ${\mu _{pre}}$ are their mean pixel values, ${\sigma _{mea}}$ and ${\sigma _{pre}}$ are their standard deviations, and ${\sigma _{mea,pre}}$ is the covariance of the two phase images. The constants ${C_1}$ and ${C_2}$ avoid a zero denominator. We set ${C_1} = {10^{ - 4}}$ and ${C_2} = 9 \times {10^{ - 4}}$.

Since the reconstructed 3D RI distribution can be visualized as a stack of slices, we calculated the SSIM for each slice using Eq. (11) and then averaged the results. This average SSIM quantifies the overall similarity between the reconstructed and ground truth 3D RI distributions. Together, the SSIM of the phase images and that of the 3D RI distribution provide a comprehensive evaluation of the network's performance.
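A minimal sketch of the global (non-windowed) SSIM of Eq. (11) and of the slice-averaged SSIM for 3D RI volumes is given below, with $C_1$ and $C_2$ as stated in the text; applying the index directly to raw phase or RI values rather than rescaled images is an assumption.

```python
import numpy as np

def ssim_global(a, b, c1=1e-4, c2=9e-4):
    """Simplified SSIM between two images using global statistics, per Eq. (11)."""
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov_ab = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov_ab + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def ssim_volume(vol_a, vol_b):
    """Average the slice-wise SSIM along z to compare two 3D RI distributions."""
    return float(np.mean([ssim_global(sa, sb) for sa, sb in zip(vol_a, vol_b)]))
```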

2.3 Materials preparation and data processing procedure

2.3.1 Process of observing OLC nanoparticles inside cancer cell

The methodology for observing OLC nanoparticles within cancer cells is depicted in Fig. 5. The process begins with the collection of holograms at varying angles using the angle-scanning DHT setup, followed by the acquisition of phase images through the holographic reconstruction method and unwrapping algorithm. The measured phase images are then fed into the ILNN to supplement phase images at unmeasured angles. Subsequently, the phase image sequence is employed to reconstruct the RI distribution via the optical diffraction tomography algorithm. Then the RI distribution of OLC nanoparticles is isolated through thresholding.

Fig. 5. Schematic of the process of observing OLC nanoparticles inside cancer cell.

This procedure is conducted repeatedly to generate the three-dimensional RI distribution of the OLC nanoparticles at each timepoint. The time interval was set to 4 minutes, spanning a total duration of 2 hours. Subsequently, we calculated two morphological parameters, surface area and volume, of the OLC nanoparticles within the cancer cell from the reconstructed three-dimensional RI distribution at each time point, and then obtained the temporal evolution curves of these parameters.

2.3.2 Cell culture and OLC nanoparticles preparation

The human colorectal cancer cell line (HCT116) was procured from the Chinese Academy of Medical Sciences & Peking Union Medical College. In brief, HCT116 cells were cultured in Dulbecco's Modified Eagle Medium (DMEM; Gibco, #11965092) fortified with 10% fetal bovine serum (FBS; Gibco, #10095080) and 1% penicillin-streptomycin (Gibco, #15140122). The culture was maintained in a humidified atmosphere consisting of 95% air and 5% CO2 at 37°C. Monthly Mycoplasma tests were conducted to ensure a Mycoplasma-free culture. For producing cell suspensions, Trypsin-EDTA (Gibco, #25300062) was utilized. The attached cells were exposed to OLC nanoparticles at a concentration of 50 µg/mL for a duration of 2 hours. The nanoparticles were confirmed to be non-toxic to the cells. Prior to observation, any unattached OLC nanoparticles were rinsed off using PBS.

The nanodiamonds (NDs) used in the experiment were procured from Sino Crystal Micro Diamond Co. Ltd. (Zhengzhou, China). OLC nanoparticles were synthesized by annealing NDs under an N2 atmosphere at 1400 °C. Subsequently, the OLC nanoparticle powder was immersed in an acid mixture (H2SO4:HNO3 = 3:1, v/v) at 80 °C for 24 hours to enhance its solubility. The morphology of the as-prepared OLC was assessed using TEM. The OLC particles were found to be spheroidal in shape, with diameters of 5-10 nm, and exhibited an onion-like concentric structure.

2.3.3 Morphological parameter calculation of OLC nanoparticles inside cancer cell

We calculated two morphological parameters, the surface area and volume of the OLC nanoparticles inside the cancer cell, from the reconstructed RI distribution at each timepoint. The calculations are given by Eq. (12) and Eq. (13).

$${S_{OLC}} = \sum\limits_{i = 1}^{Num} {{C_{Pix}} \times \frac{{PixelSiz{e^2}}}{{Ma{g^2}}}} $$
$${V_{OLC}} = \sum\limits_{i = 1}^{Num} {{S_{Pix}} \times \frac{{PixelSiz{e^3}}}{{Ma{g^3}}}} $$
where, $Num$ is the number of slices along the z axis, ${C_{Pix}}$ is the number of pixels on the OLC region's circumference in each slice, ${S_{Pix}}$ is the number of pixels in the OLC region in each slice, $PixelSize$ is the camera pixel size, and $Mag$ is the magnification of the microscope objective.
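A minimal sketch of the surface-area and volume estimates of Eqs. (12)-(13) from a thresholded RI stack is shown below. Counting circumference pixels via binary erosion and passing the OLC RI threshold as an argument are illustrative assumptions; the threshold itself was determined experimentally as described in Section 3.2.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def olc_morphology(ri_stack, ri_threshold, pixel_size, mag):
    """ri_stack: (Num, M, N) RI slices along z; returns (surface_area, volume)."""
    mask = ri_stack > ri_threshold                           # OLC voxels isolated by thresholding
    surface = volume = 0.0
    for sl in mask:                                          # one slice per z position
        c_pix = np.count_nonzero(sl & ~binary_erosion(sl))   # boundary pixels ~ circumference C_Pix
        s_pix = np.count_nonzero(sl)                         # area pixels of the OLC region S_Pix
        surface += c_pix * pixel_size ** 2 / mag ** 2        # Eq. (12)
        volume += s_pix * pixel_size ** 3 / mag ** 3         # Eq. (13)
    return surface, volume
```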

3. Results and discussion

3.1 Effectiveness evaluation of ILNN and parameter selection

To assess the effectiveness of the ILNN at varying sampling intervals $\Delta \alpha$ and different incident angles $\theta$, we employed a SiO2 microsphere with a diameter of 20 µm as the sample. We selected three different incident angles $\theta$, ${36^ \circ }$, ${27^ \circ }$ and ${18^ \circ }$, as well as four distinct sampling intervals $\Delta \alpha$, ${12^ \circ }$, ${24^ \circ }$, ${36^ \circ }$ and ${60^ \circ }$, for network training. According to Eq. (8), the numbers of phase images input into the ILNN at these four sampling intervals are 30, 15, 10, and 6, respectively, and 180 phase images are generated at a sampling interval of ${2^ \circ }$, namely rotation angles ${\alpha _k} = 2k$ ($0 \le k < 180$, $k \in \mathbb{N}$). The phase images measured and those predicted by the ILNN at the above incident angles and sampling intervals are shown in Fig. 6.

Fig. 6. Phase images predicted by ILNN and measured by DHT at different incident angles $\theta$ and different sampling intervals $\Delta \alpha$ (left: the incident-angle pattern at the sample plane in the frequency domain; within each dotted box is a pair of measured and predicted phase images at the same incident angle; the red and black curves show the phase value distributions of the predicted and measured phase images along the corresponding dashed lines in the phase images).

Next, we utilized the supplemented phase image sequence to reconstruct the three-dimensional RI distribution of the microsphere, which is presented in Fig. 7. To avoid any bias stemming from the tomographic reconstruction algorithm itself, we took the result reconstructed using all phase images measured at the three incident angles with a sampling interval of 2° as the ground truth.

Fig. 7. Tomography reconstruction results using phase image sequences predicted by ILNN trained at different sampling intervals $\Delta \alpha$ and incident angles $\theta$ (the red and black curves show the RI distributions along the optical axis reconstructed using the predicted phase images and the ground truth).

We calculated the SSIM of the phase images and of the three-dimensional RI distributions at different sampling intervals and incident angles using Eq. (11) to assess the network performance and the RI reconstruction accuracy. The results are shown in Table 1 and Table 2.

Table 1. The SSIM between the measured and predicted phase images

Table 2. The SSIM between the three-dimensional RI distribution reconstructed using the phase image sequence output from ILNN and the ground truth

Table 1 and Table 2 illustrate that a smaller incident angle results in a higher SSIM for phase images, yet conversely leads to a lower SSIM for the three-dimensional RI distribution. Larger sampling intervals increase the sampling speed but decrease the SSIM.

We further investigated the quantitative relationship between the SSIM and sampling time. Using Eq. (8), we can determine the number of phase images required for different sampling intervals. Given that the average image capture time is 0.15 seconds, and the ideal SSIM of 1 necessitates 180 images, we can derive a direct relationship between sampling time and SSIM at various incident angles. This relationship is visualized as a line graph in Fig. 8.
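As a quick check of the sampling-time arithmetic described above, the snippet below multiplies the number of captured phase images per Eq. (8) by the stated average capture time of 0.15 s; counting images per incident angle is an assumption consistent with Eq. (8).

```python
capture_time = 0.15                                    # seconds per hologram
for delta_alpha in (2, 12, 24, 36, 60):                # sampling intervals in degrees
    n_images = 360 // delta_alpha                      # images per incident angle, Eq. (8)
    print(f"Δα = {delta_alpha:>2}°: {n_images:>3} images, {n_images * capture_time:.2f} s")
# Δα = 2° (the 180-image reference) takes 27 s; Δα = 24° takes 2.25 s per incident angle.
```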

Fig. 8. Line graph of the relationship between sampling time and SSIM at different incident angles.

Thus, taking both the sampling speed and the SSIM value into account, we set the incident angle ${\theta _2} = {27^ \circ }$ and the sampling interval $\Delta \alpha = {24^ \circ }$ for the experiment examining the dynamic distribution of OLC nanoparticles inside cancer cells.

3.2 Dynamic distribution of OLC nanoparticles in cancer cells and parameter calculation

We tracked alterations of the OLC nanoparticles’ three-dimensional distribution in three colorectal cancer cells over a 2-hour period. Figure 9 displays the three-dimensional RI distribution at each time point for each cell. Visualization 1 provides the three-dimensional visualization of a single cell at 0 minutes, 60 minutes, and 120 minutes. Visualization 2 provides the visualization of the cell's temporal evolution at four-minute intervals over the two-hour period. The RI threshold for OLC nanoparticles was established in advance through measurements of the average RI in cells without OLC nanoparticles.

Fig. 9. Three-dimensional distribution of OLC nanoparticles inside colorectal cancer cells at each moment.

Utilizing Eq. (12) and (13), we performed quantitative calculations to determine the surface area and volume of OLC nanoparticles in each cell at each time point. The temporal evolution of these measurements is illustrated in Fig. 10.

Fig. 10. The temporal evolution curves of the OLC nanoparticles’ surface area and volume.

3.3 Analysis of the OLC nanoparticles inside colorectal cancer cell

The cellular uptake of nanoparticles is largely mediated by endocytosis. In this process, vesicles coated with cellular membrane are produced by invagination of the plasma membrane. These vesicles, which envelop the nanoparticles, subsequently detach from the plasma membrane, initiating a series of physiological activities that ultimately release the vesicle contents into the cell [39]. There are several types of endocytosis, including clathrin-mediated endocytosis, caveolae-mediated endocytosis, clathrin/caveolae-independent endocytosis, and pinocytosis [40].

Research indicates that the primary pathway for nanoparticle internalization is clathrin-mediated endocytosis [41,42]. In this process, vesicles are released from the membrane, aided by dynamin-induced conformational changes. After detaching from the membrane, these vesicles are conveyed to the endosome along intracellular actin filaments [43]. Upon cellular internalization, nanoparticles typically aggregate into clusters of sufficient size to be resolved by the microscope objective, enabling the use of DHT to reconstruct the three-dimensional distribution of these particles.

TEM observation showed that cancer cells completely internalized OLC nanoparticles after 6 hours of co-cultivation. Consequently, we selected cancer cells exposed to OLC nanoparticles for 2 hours as experimental samples to monitor the dynamic changes in the distribution of OLC nanoparticles within the cancer cells. The surface area and volume of the OLC nanoparticles within the three cancer cells increased overall relative to their initial values. For Cell1, the volume of OLC nanoparticles within the cell initially increased and then decreased. This might be attributed to the release of some OLC nanoparticles once they reached saturation, or to the dispersion of certain nanoparticle clusters into aggregates below the resolution limit of the microscope objective that could not be imaged. For Cell2, the surface area of OLC nanoparticles within the cell initially decreased and then increased. This could be because the nanoparticles aggregated into larger clusters with only a slight change in total volume, reducing the surface area; subsequently, as the OLC nanoparticle content increased, the surface area increased as well. For Cell3, both the surface area and volume increased over time.

4. Conclusion

In this research, we utilized a limited-angle DHT configured with the ILNN to track the dynamic distribution of OLC nanoparticles within cancer cells. The DHT used angle scanning to collect holograms and the diffraction tomography algorithm to reconstruct the RI distribution. The ILNN supplements the phase images at unmeasured angles by establishing a mapping from sampling angle and coordinates to phase value, thereby improving the quality of the tomographic reconstruction. First, we used a SiO2 microsphere as a standard sample and evaluated the effectiveness of the approach by calculating the SSIM quantitatively. Considering both sampling speed and tomographic reconstruction quality, the incident angle and sampling interval were set to 27° and 24°, respectively, for the experiment. Subsequently, we employed the method to reconstruct the three-dimensional RI distribution of cells at four-minute intervals and dynamically tracked the temporal evolution of the OLC nanoparticle distribution within three colorectal cancer cells over a two-hour period. Furthermore, we calculated the curves of the nanoparticles’ surface area and volume over time and conducted a preliminary analysis of the potential reasons. This methodology offers a novel perspective for the dynamic observation of the 3D distribution of nanoparticles within living cells, which holds substantial reference value for the investigation of the interaction between OLC nanoparticles and cancer cells, as well as for the development of an accurate photothermal conversion model.

Funding

Key Clinical Projects of Peking University Third Hospital (BYSYZD2022035); Innovation & Transfer Fund of Peking University Third Hospital (BYSYZHKC2021113); Beijing Municipal Natural Science Foundation (M22017).

Acknowledgments

The authors thank the funding support of Beijing Municipal Science & Technology Commission for the research related to this article.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. H. Sung, J. Ferlay, R. L. Siegel, et al., “Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries,” Ca-Cancer J. Clin. 71(3), 209–249 (2021). [CrossRef]  

2. Z. Li, S. Tan, S. Li, et al., “Cancer drug delivery in the nano era: an overview and perspectives,” Oncol. Rep. 38(2), 611–624 (2017). [CrossRef]  

3. L. Zou, H. Wang, B. He, et al., “Current approaches of photothermal therapy in treating cancer metastasis with nanotherapeutics,” Theranostics 6(6), 762–772 (2016). [CrossRef]  

4. Y. Liu, P. Bhattarai, Z. Dai, et al., “Photothermal therapy and photoacoustic imaging via nanotheranostics in fighting cancer,” Chem. Soc. Rev. 48(7), 2053–2108 (2019). [CrossRef]  

5. J. Ahlawat, S. M. Asil, G. G. Barroso, et al., “Application of carbon nano onions in the biomedical field: recent advances and challenges,” Biomater. Sci. 9(3), 626–644 (2021). [CrossRef]  

6. E. H. Fragal, V. H. Fragal, G. H. Da Silva, et al., “Enhancing near-infrared photothermal efficiency of biocompatible flame-synthesized carbon nano-onions with metal dopants and silica coating,” ACS Appl. Bio Mater. 3(9), 5984–5994 (2020). [CrossRef]  

7. D. B. Peckys and N. de Jonge, “Visualizing gold nanoparticle uptake in live cells with liquid scanning transmission electron microscopy,” Nano Lett. 11(4), 1733–1738 (2011). [CrossRef]  

8. B. M. Rothen-Rutishauser, S. Schürch, B. Haenni, et al., “Interaction of fine particles and nanoparticles with red blood cells visualized with advanced microscopic techniques,” Environ. Sci. Technol. 40(14), 4353–4359 (2006). [CrossRef]  

9. D. Suski, J. Winnik, and T. Kozacki, “Fast multiple-scattering holographic tomography based on the wave propagation method,” Appl. Opt. 59(5), 1397–1403 (2020). [CrossRef]  

10. Y. Sung, “Snapshot holographic optical tomography,” Phys. Rev. Appl. 11(1), 014039 (2019). [CrossRef]  

11. K. Kim, J. Yoon, S. Shin, et al., “Optical diffraction tomography techniques for the study of cell pathophysiology,” J. Biomed. Photonics. Eng. 2, 020201 (2016).

12. A. Géloën, K. Isaieva, M. Isaiev, et al., “Intracellular detection and localization of nanoparticles by refractive index measurement,” Sensors 21(15), 5001 (2021). [CrossRef]  

13. D. Pirone, M. Mugnano, P. Memmolo, et al., “Three-dimensional quantitative intracellular visualization of graphene oxide nanoparticles by tomographic flow cytometry,” Nano Lett. 21(14), 5958–5966 (2021). [CrossRef]  

14. D. K. Ikliptikawati, M. Hazawa, F. So, et al., “Label-free tomographic imaging of nanodiamonds in living cells,” Diamond Relat. Mater. 118, 108517 (2021). [CrossRef]  

15. W. Sung, Y. Jeong, H. Kim, et al., “Computational modeling and clonogenic assay for radioenhancement of gold nanoparticles using 3D live cell images,” Radiat. Res. 190(5), 558–564 (2018). [CrossRef]  

16. S. J. LaRoque, E. Y. Sidky, and X. Pan, “Accurate image reconstruction from few-view and limited-angle data in diffraction tomography,” J. Opt. Soc. Am. A 25(7), 1772–1782 (2008). [CrossRef]  

17. J. Lim, K. Lee, K. H. Jin, et al., “Comparative study of iterative reconstruction algorithms for missing cone problems in optical diffraction tomography,” Opt. Express 23(13), 16933–16948 (2015). [CrossRef]  

18. R. Guo, I. Barnea, and N. Shaked, “Limited-angle tomographic phase microscopy utilizing confocal scanning fluorescence microscopy,” Biomed. Opt. Express 12(4), 1869–1881 (2021). [CrossRef]  

19. W. Krauze, P. Makowski, M. Kujawińska, et al., “Generalized total variation iterative constraint strategy in limited angle optical diffraction tomography,” Opt. Express 24(5), 4924–4936 (2016). [CrossRef]  

20. J. Xu, Y. Zhao, H. Li, et al., “An image reconstruction model regularized by edge-preserving diffusion and smoothing for limited-angle computed tomography,” Inverse. Probl. 35(8), 085004 (2019). [CrossRef]  

21. A. Goy, G. Rughoobur, S. Li, et al., “High-resolution limited-angle phase tomography of dense layered objects using deep neural networks,” Proc. Natl. Acad. Sci. 116(40), 19848–19856 (2019). [CrossRef]  

22. D. Ryu, D. Ryu, Y. Baek, et al., “DeepRegularizer: rapid resolution enhancement of tomographic imaging using deep learning,” IEEE. Trans. Med. Imaging 40(5), 1508–1518 (2021). [CrossRef]  

23. B. Mildenhall, P. P. Srinivasan, M. Tancik, et al., “Nerf: representing scenes as neural radiance fields for view synthesis,” Commun. ACM 65(1), 99–106 (2022). [CrossRef]  

24. Y. Sun, J. Liu, M. Xie, et al., “Coil: coordinate-based internal learning for tomographic imaging,” IEEE. Trans. Comput. Imaging 7, 1400–1412 (2021). [CrossRef]  

25. R. Liu, Y. Sun, J. Zhu, et al., “Recovery of continuous 3d refractive index maps from discrete intensity-only measurements using neural fields,” Nat. Mach. Intell. 4(9), 781–791 (2022). [CrossRef]  

26. Z. He, X. Sui, G. Jin, et al., “Distortion-correction method based on angular spectrum algorithm for holographic display,” IEEE Trans. Ind. Inf. 15(11), 6162–6169 (2019). [CrossRef]  

27. P. Gao, B. Yao, J. Min, et al., “Autofocusing of digital holographic microscopy based on off-axis illuminations,” Opt. Lett. 37(17), 3630–3632 (2012). [CrossRef]  

28. O. Backoach, S. Kariv, P. Girshovitz, et al., “Fast phase processing in off-axis holography by CUDA including parallel phase unwrapping,” Opt. Express 24(4), 3177–3188 (2016). [CrossRef]  

29. C. Kirisits, M. Quellmalz, M. Ritsch-Marte, et al., “Fourier reconstruction for diffraction tomography of an object rotated into arbitrary orientations,” Inverse. Probl. 37(11), 115002 (2021). [CrossRef]  

30. L. Foucault, N. Verrier, M. Debailleul, et al., “Versatile transmission/reflection tomographic diffractive microscopy approach,” J. Opt. Soc. Am. 36(11), C18–C27 (2019). [CrossRef]  

31. J. Li, Q. Chen, J. Zhang, et al., “Optical diffraction tomography microscopy with transport of intensity equation using a light-emitting diode array,” Opt. Lasers Eng. 95, 26–34 (2017). [CrossRef]  

32. J. Yamanaka, S. Kuwashima, and T. Kurita, “Fast and accurate image super resolution by deep CNN with skip connection and network in network,” in Neural Information Processing: 24th International Conference, ICONIP 2017, Proceedings, Part II 24 (Springer, 2017), 217–225.

33. N. Rahaman, A. Baratin, D. Arpit, et al., “On the spectral bias of neural networks,” in International Conference on Machine Learning (PMLR, 2019), 5301–5310.

34. S. Evmorfos, K. Diamantaras, and A. Petropulu, “Deep q learning with fourier feature mapping for mobile relay beamforming networks,” in 2021 IEEE 22nd International Workshop on Signal Processing Advances in Wireless Communications (SPAWC) (IEEE, 2021), 126–130.

35. Y. Wu, W. Cao, Y. Liu, et al., “Semantic auto-encoder with l2-norm constraint for zero-shot learning,” in 2021 13th International Conference on Machine Learning and Computing (2021), 101–105.

36. Z. Zhang, “Improved adam optimizer for deep neural networks,” in 2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS) (IEEE, 2018), 1–2.

37. L. N. Smith, “Cyclical learning rates for training neural networks,” in 2017 IEEE Winter Conference on Applications of Computer Vision (WACV) (IEEE, 2017), 464–472.

38. U. Sara, M. Akter, M. Uddin, et al., “Image quality assessment through FSIM, SSIM, MSE and PSNR—a comparative study,” J. Comput. Commun. 07(03), 8–18 (2019). [CrossRef]  

39. B. Yameen, W. I. Choi, C. Vilos, et al., “Insight into nanoparticle cellular uptake and intracellular targeting,” J. Controlled Release 190, 485–499 (2014). [CrossRef]  

40. N. D. Donahue, H. Acar, and S. Wilhelm, “Concepts of nanoparticle cellular uptake, intracellular trafficking, and kinetics in nanomedicine,” Adv. Drug Delivery Rev. 143, 68–96 (2019). [CrossRef]  

41. M. Kaksonen and A. Roux, “Mechanisms of clathrin-mediated endocytosis,” Nat. Rev. Mol. Cell Biol. 19(5), 313–326 (2018). [CrossRef]  

42. J. P. Mattila, A. V. Shnyrova, A. C. Sundborger, et al., “A hemi-fission intermediate links two mechanistically distinct stages of membrane fission,” Nature 524(7563), 109–113 (2015). [CrossRef]  

43. P. Decuzzi and M. Ferrari, “The receptor-mediated endocytosis of nonspherical particles,” Biophys. J. 94(10), 3790–3797 (2008). [CrossRef]  

Supplementary Material (2)

Visualization 1: The video presents the three-dimensional visualization results of three cells at three timepoints: 0 min, 60 min, and 120 min.
Visualization 2: This video provides the visualization of the cells' temporal evolution at four-minute intervals over the two-hour period.
