
Deep learning-assisted smartphone-based quantitative microscopy for label-free peripheral blood smear analysis

Open Access

Abstract

Hematologists evaluate alterations in blood cell enumeration and morphology through manual microscopic examination of peripheral blood smears. However, routine peripheral blood smear analysis is both time-consuming and labor-intensive. Here, we propose smartphone-based autofluorescence microscopy (Smart-AM) for imaging label-free blood smears at subcellular resolution with automatic hematological analysis. Smart-AM enables rapid and label-free visualization of morphological features of normal and abnormal blood cells (including leukocytes, erythrocytes, and thrombocytes). Moreover, assisted by deep-learning algorithms, this technique can automatically detect and classify different leukocytes with high accuracy, and transform the autofluorescence images into virtual Giemsa-stained images that show clear cellular features. The proposed technique is portable, cost-effective, and user-friendly, making it significant for broad point-of-care applications.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Hematological analysis plays a pivotal role in clinical tests associated with the cellular components of blood. Through the evaluation of blood cell enumeration and morphology, hematological analysis enables screening and monitoring of blood-related conditions and diseases, such as inflammation, anemia, infection, hemophilia, blood-clotting disorders, and several types of cancers. Hematological analysis is performed in many clinical contexts. For example, in chemotherapy, the drugs inevitably damage normal cells of the human body while killing cancer cells due to their poor selectivity, resulting in adverse drug reactions [1]. Myelosuppression [2], one of the most common adverse reactions to chemotherapy, is mainly manifested by thrombocytopenia and decreases in leukocytes (especially neutrophils) and hemoglobin. Hematological analysis can reveal whether the body has myelosuppression and to what degree. Hence, hematological analysis is considered one of the most informative tests before and after chemotherapy. Although flow cytometry and microscopic evaluation of stained blood smears are widely used in clinical laboratories and hospitals for hematological analysis, some drawbacks remain. The whole procedure requires multiple chemical reagents, expensive machines, laborious system calibration, and highly trained personnel for operation. These disadvantages confine the evaluation to hospital or clinical settings, which is far from ideal for chemotherapy patients, whose weakened immune systems make frequent trips to hospitals or clinics for blood tests risky. Thus, there is a great demand for a rapid, reagent-free, and portable imaging system for hematological analysis.

Several alternative imaging modalities have been developed to achieve label-free hematological analysis. For instance, Raman spectroscopy has been used to investigate leukocytes and visualize cellular components, differentiating leukocytes according to the biomolecular information in their spectra [3,4]. However, the spontaneous Raman signal is weak, fundamentally leading to a low signal-to-noise ratio. Deep-ultraviolet (Deep-UV) microscopy [5] is another powerful emerging tool for label-free molecular imaging, with contrast derived from the optical absorption properties of biomolecules. Deep-UV microscopy can achieve high spatial resolution due to its short illumination wavelength, along with specific information from various endogenous biomolecules. Recently, deep learning-assisted and miniaturized Deep-UV microscopy have also attracted widespread attention [6,7]. However, widespread application is still limited by the requirement of UV-transparent optics and UV-sensitive sensors, components that can be expensive and that increase the cost and complexity of Deep-UV microscopy systems. Other techniques, such as hyperspectral microscopy [8–10], quantitative phase microscopy [11–15], and fluorescence lifetime microscopy [16], have been shown to be promising tools for cellular and subcellular imaging in biomedical fields. To simplify the conventional microscopy system, a defocusing phase-contrast imaging system based on a regular microscope was developed, which provides a reliable and easy-to-use route to blood smear analysis [17]. Due to the high magnification of this system, its field of view (FOV) is limited to approximately 0.23 mm × 0.14 mm for a single shot. Although all these methods have shown great potential for characterizing changes in blood cell components for blood smear analysis, the fact that they require complex and expensive optical devices with sophisticated system alignment and image reconstruction limits their further point-of-care applications. Beyond traditional microscope systems, recent smartphones have shown great potential as a platform for point-of-care monitoring and diagnosis due to their wide usage worldwide. A considerable number of studies have developed microscopy imaging modalities (e.g., bright-field, fluorescence, phase-contrast, etc.) based on smartphones with simple add-on lenses, especially for various medical diagnoses and patients’ medical record tracking [18–27]. Therefore, smartphone-based microscopy has the potential to democratize microscopy and provide affordable and accessible diagnostic tools, making it a highly promising modality for hematological analysis [22,25,28]. In addition, there has been a growing trend in recent years towards utilizing deep learning techniques for analyzing and interpreting hematological data, including virtual staining of blood cells [6,17], diagnosis of red-blood-cell-related disorders [29], classification of various leukocytes [17], etc. Numerous recent advances in deep learning have significantly contributed to medical and hematological analysis. As a result, researchers have begun exploring the application of deep learning techniques to smartphone-based microscopy for blood analysis. For example, de Haan et al. applied a deep learning framework to a smartphone-based microscope to perform automatic screening of sickle cells in blood smears [30]. Nevertheless, considerable potential in this area remains untapped.

Here, we propose an imaging technique based on intrinsic fluorescence, termed smartphone-based autofluorescence microscopy (Smart-AM), for imaging label-free blood smears at subcellular resolution and performing automatic peripheral blood smear analysis, intended as a simple and accessible tool for resource-limited settings and point-of-care scenarios. The Smart-AM system provides subcellular autofluorescence images of blood smears with maximum FOVs of 2.58 mm × 1.94 mm (before optical zoom) and 1.33 mm × 1 mm (after 2× optical zoom), revealing morphological features of different blood cells (leukocytes, erythrocytes, and thrombocytes) without labels and exposing abnormal variations in blood cells. The Smart-AM system is robust and cost-effective due to its simple design and ease of use. Furthermore, we combine the imaging system with a detection and segmentation framework (Detectron2 [31]) and a deep learning transformation network (pix2pix [32]), coined DeepSmart-AM, to reduce the reliance on skilled hematologists for manual diagnosis. The performance of Smart-AM is experimentally verified by imaging mouse and human blood samples. The repeatable results show that specific features in Smart-AM images can be used to differentiate leukocytes automatically with high accuracy (>90%), showing high consistency with manual counting results. Also, the deep learning transformation network can convert the autofluorescence images into virtual Giemsa-stained images that closely resemble real Giemsa-stained blood smear images. This style transformation can significantly simplify the complicated sample preparation procedures and reduce diagnosis errors caused by poor and inconsistent staining quality. In summary, the proposed Smart-AM system combined with deep learning methods simplifies the clinical workflow of blood tests (Fig. 1(a)), as shown in Fig. 1(b), which is significant for broad point-of-care applications.


Fig. 1. Motivation, design, and simulation of Smart-AM. a) Conventional blood smear analysis in a hospital. b) Proposed home-based blood imaging and diagnosis by the Smart-AM system. c) Schematic of the Smart-AM system. The major components of the Smart-AM system are UV-LEDs, focus lenses, a sample slide, an external lens module, and a smartphone camera. The UV light is obliquely focused onto the sample by UV-fused silica plano-convex lenses. The excited autofluorescence signal is collected by an external lens module, refocused by the built-in lens, and subsequently detected by the smartphone camera sensor. d) Ray-tracing simulation of the external reversed lens module serving as an objective for the smartphone-based microscope. e1–e6) Simulated spot diagrams at the image plane of six different fields (Δy = 0, 0.25, 0.50, 0.75, 1.00, and 1.25 mm from the center of the field at the sample plane). Minimal aberration was observed within the center 1-mm radius region. RMS: root mean square.


2. Materials and methods

2.1 Design of the Smart-AM system

The setup of the Smart-AM system is illustrated in Fig. 1(c) and Figure S1. Since previous studies suggested that many important endogenous fluorophores (e.g., hemoglobin, nicotinamide adenine dinucleotide hydride, cytoplasmic aromatic amino acids, etc.) can fluoresce under deep-UV excitation [33–35], small UV light-emitting diodes (UV-LEDs) (265–270 nm center wavelength, 10–15 mW, 5.5–7.0 V, 100 mA, 3535 package) are used to illuminate samples obliquely at a high incidence angle of ∼70°. After welding the LEDs to the board, we measured the actual emission spectrum (Figure S2, Supplement 1) at the working voltage of the LEDs using a spectrometer (Sarspec, Lda). The two LEDs are focused on the sample plane by small UV-fused silica plano-convex lenses, which cover the entire FOV for consistent and homogeneous illumination. Note that oblique illumination from opposite directions is adopted here to minimize the shadowing effects and uneven illumination caused by light absorption in the sample regions closer to the LEDs [36]. A lightweight external lens module (harvested from a replacement part of a USB camera, module no.: HBVCAM-NB20231W V 33) is attached to a typical smartphone’s built-in camera. The oblique illumination from opposite directions circumvents the use of fluorescence filters and reduces background shadowing artifacts because the directly transmitted excitation light (light purple rays in the ray-tracing image, Fig. 1(c)) is not received by the external lens. Meanwhile, the smartphone camera sensor typically exhibits limited sensitivity to UV wavelengths, particularly within the Deep-UV range. Therefore, the scattered excitation light is not detected by the sensor. The excited autofluorescence signals are collected by the reversed external lens module, which serves as the objective lens. Then, through the infinity-corrected internal lens in the smartphone’s rear camera, the image is finally acquired by the color complementary metal-oxide semiconductor (CMOS) sensor of the smartphone (iPhone 12 Pro, f/1.6, 1.4 µm pixel size, 12.2 mm2 sensor size). Figure S3 (Supplement 1) illustrates the differences between the conditions without and with blood smear samples when using Smart-AM.

The built-in lens of a smartphone camera follows the optical design principles of an infinite conjugate system, which assumes that the object being imaged is positioned at a considerable distance from the lens, forming a strongly demagnified image. Consequently, in order to visualize micron-scale structures, external magnifying optics must be integrated into the smartphone. Digital devices such as smartphones and laptops are usually equipped with camera lenses composed of multiple highly complex aspheric lenses, which minimize aberration and field curvature. Therefore, such lenses are well-suited to optically match the smartphone as a miniature microscope. With the external lens, the whole smartphone-based system acts as a relay lens: the system forms the final image on the smartphone sensor plane when an object is positioned at the focal plane of the external lens. The optical magnification M can theoretically be calculated as $f_1/f_2$, where $f_1$ and $f_2$ are the focal lengths of the phone’s internal camera lens and the external lens, respectively. However, accurate specifications of the dismantled external lens cannot be obtained easily. Therefore, we measured the magnification by imaging a specific line in a target (USAF 1951 resolution target, Edmund Optics Inc.) (Figure S4, Supplement 1). The magnification ratio can be calculated as $M = (\textrm{Num} \times \Delta S_{sensor})/w$, where Num is the number of pixels spanned by the line in the image, $\Delta S_{sensor}$ is the pixel size of the smartphone camera sensor, and w is the actual length of the line. In our case, the pixel size of the smartphone camera sensor is 1.4 µm. Therefore, the calculated magnification is ∼2.19 (before optical zoom) and ∼4.25 (after 2× optical zoom). In the following experiments, the magnification ratio of 4.25 is used unless otherwise specified. According to the optical magnification, the effective pixel size at the sample plane is $\Delta S_{sensor}^{\prime} = \Delta S_{sensor}/M$, which is 0.64 µm (before optical zoom) and 0.33 µm (after 2× optical zoom), resulting in maximum FOVs of 2.58 mm × 1.94 mm and 1.33 mm × 1 mm, respectively. Recent smartphone cameras have the advantage of a built-in lens with a high numerical aperture (NA) and an image sensor with a small pixel size, thus providing high-resolution images. According to the Rayleigh criterion, the theoretical resolution is approximately $0.61\lambda/\textrm{NA} = 2(f/\#) \times 0.61\lambda$, where $\lambda$ is the fluorescence emission wavelength, $f/\#$ is the f-number of the external lens, and $\textrm{NA} \approx 1/(2f/\#)$. Meanwhile, Zemax optical ray-tracing and spot diagram simulations (six different fields, $\Delta y = 0,\ 0.25,\ 0.50,\ 0.75,\ 1.00,\ \textrm{and}\ 1.25\ \textrm{mm}$ from the center of the field) (Fig. 1(e1–e6)) show minimal aberration (e.g., coma, astigmatism, and field curvature) within the center 1-mm radius region.
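As a concrete illustration of this calibration arithmetic, the short Python sketch below reproduces the magnification, effective-pixel-size, FOV, and Rayleigh-resolution calculations. The sensor resolution and the helper names are our own illustrative assumptions, not part of the original system code; only the 1.4 µm pixel size and the two magnification values come from the text.

# A minimal sketch, assuming a ~4032 x 3024 pixel (12-MP) smartphone sensor.
SENSOR_PIXEL_UM = 1.4              # iPhone 12 Pro sensor pixel size (from the text)
SENSOR_PIXELS = (4032, 3024)       # assumed sensor resolution

def magnification(num_pixels, line_length_um):
    # M = (Num x dS_sensor) / w, calibrated with a USAF 1951 target line of known width w
    return num_pixels * SENSOR_PIXEL_UM / line_length_um

def effective_pixel_um(m):
    # effective pixel size at the sample plane: dS'_sensor = dS_sensor / M
    return SENSOR_PIXEL_UM / m

def fov_mm(m):
    # maximum field of view at the sample plane
    return tuple(n * effective_pixel_um(m) / 1000 for n in SENSOR_PIXELS)

def rayleigh_um(emission_um, f_number):
    # 0.61*lambda/NA, with NA ~ 1/(2 f/#), i.e., 2*(f/#)*0.61*lambda
    return 2 * f_number * 0.61 * emission_um

for m in (2.19, 4.25):             # before and after 2x optical zoom
    print(m, effective_pixel_um(m), fov_mm(m))
    # ~0.64 / 0.33 um; ~2.58 x 1.94 mm / ~1.33 x 1.00 mm, matching the reported FOVs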

2.2 Sample preparation

Mouse blood samples were collected by a terminal procedure (cardiac puncture), while human blood samples were collected from hospitals after deidentification of patients. After collection, all blood samples were immediately added to vacuum tubes with an anticoagulant solution (ethylenediaminetetraacetic acid solution, E8008, Sigma-Aldrich Inc.). Note that the anticoagulant solution is needed here because we aimed to maximize the amount of data obtained from each blood sample and ensure the integrity of the sample during the testing process. However, for the specific use case of our Smart-AM system as a point-of-care monitoring device, where only a single drop of blood is needed, the anticoagulant solution may not be necessary. Before preparing the blood smears, the vacuum tube should be gently rolled to mix the blood cells. About 10 µL of whole blood is dropped on one end of a clean quartz slide. Another slide is used as the spreader, held at an angle of ∼30–45 degrees. We pushed the spreader forward smoothly once the blood drop spread along the edge of the spreader slide. A good blood smear shows a dense body, a well-developed feathered edge, and a monolayer area. The monolayer of the blood smear is the region of interest (RoI) used for the subsequent tasks.

2.3 Data acquisition for Smart-AM and gold standard images

The prepared blood smear was imaged directly using our Smart-AM system after air drying. As shown in Fig. 1(c), the blood smear is placed on the sample plane, between the reversed lens and the illumination parts. The default smartphone camera application can be used for image acquisition directly, and professional camera applications can also be used for advanced control of exposure time, ISO (i.e., the sensitivity of the camera’s sensor), image format, etc. Focus alignment is easy due to the autofocus function of the smartphone camera. To block ambient background light, a black box or cloth can be used to cover the imaging part. Image acquisition takes less than one second per frame (∼1.4 mm2). The raw images are then uploaded to a folder shared with a separate computer for further analysis.

After Smart-AM imaging, the same slide is manually stained to obtain the ground-truth hematological image. Giemsa stain (32884, Sigma-Aldrich Inc.) is used, following the standard blood smear staining workflow (Fig. 1(a)). The blood smear is first fixed in methanol for 7 minutes and stained in a 1:20 diluted Giemsa solution for 30–60 minutes. After rinsing the stained smear with distilled water, the slide is allowed to air-dry, and a permanent mount is made with mounting medium. Then, the slide is scanned by a digital whole-slide imaging machine (40×, NA = 0.75, NanoZoomer-SQ, Hamamatsu Photonics K.K.) to obtain the corresponding Giemsa-stained hematological images. Note that the Giemsa-stained hematological images reproduce the region imaged by the Smart-AM system with little distortion, owing to the fixation procedure.

2.4 Automatic and high-precision differential of five leukocytes with the Detectron2 platform

Leukocytes have the most complex morphology among blood cells. Different leukocyte subtypes have different functions, including recognizing intruders, killing harmful bacteria, producing antibodies, etc. Therefore, leukocyte identification is widely used to define blood conditions and diseases. First, we used a manifold learning approach, locally linear embedding (LLE) [37], to visualize the autofluorescence image features (476 images). LLE is widely used in image recognition, high-dimensional data visualization, and other fields because it preserves the local neighborhood structure of samples during dimensionality reduction. Next, we detect and differentiate leukocyte subtypes with relatively high accuracy by extracting the distinguishable characteristics of each subtype using an open-source detection platform with deep learning architecture (Detectron2) [31]. Detectron2 is implemented in PyTorch and includes numerous variants of the mask region-based convolutional neural network (Mask R-CNN) model [38], which meet the segmentation and detection needs for leukocytes. The network combines feature map extraction, region proposals, bounding box regression, and classification of each RoI. The Smart-AM images (image size of 256 × 256) of five leukocyte subtypes with corresponding type labels were used to train the network. In the testing phase, the input is Smart-AM images, and the output contains the bounding boxes, predicted leukocyte labels, and the confidence coefficient (the confidence threshold is set to 0.75) of the predictions.
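For readers who wish to reproduce a comparable setup, a minimal Detectron2 sketch is given below. The dataset name, file paths, base model, and training schedule are illustrative assumptions; only the five-class head and the 0.75 confidence threshold follow the text.

# A minimal Detectron2 fine-tuning/inference sketch; paths and dataset names are hypothetical.
import os
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultPredictor, DefaultTrainer

# Register the labeled 256 x 256 Smart-AM patches (COCO-style annotations assumed)
register_coco_instances("leukocytes_train", {}, "annotations.json", "smart_am_patches/")

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("leukocytes_train",)
cfg.DATASETS.TEST = ()
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 5           # Neu., Lym., Mon., Eos., Bas.
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.75  # confidence threshold used in the text
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)

trainer = DefaultTrainer(cfg)                 # fine-tune a Mask R-CNN variant
trainer.resume_or_load(resume=False)
trainer.train()

# Inference on a new Smart-AM patch: boxes, labels, and confidence scores
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("smart_am_patch.png"))
print(outputs["instances"].pred_classes, outputs["instances"].scores)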

2.5 Virtual staining of Smart-AM images with a conditional adversarial network

To show the effectiveness of Smart-AM in hematological imaging and to circumvent conventional chemical staining procedures, we employed a conditional generative adversarial network (cGAN) from pix2pix [32] to transform our Smart-AM images into clinical-standard Giemsa-stained hematological images. Figure 6(a) illustrates the virtual staining training process using the pix2pix algorithm, where pixel-wise paired Smart-AM images and Giemsa-stained blood smear images were used. The Smart-AM image is digitally transformed into a virtually Giemsa-stained image (termed a DeepSmart-AM image) by a generator G. Then, a discriminator D is used to discriminate between the generated DeepSmart-AM image and the actual stained image. A UNet-based G and a PatchGAN-based D are used in this work. We made two improvements to the original pix2pix to force it to keep the structural information of Smart-AM images. First, we added a structural similarity index (SSIM) [39] loss that minimizes the structural differences between input Smart-AM images and generated DeepSmart-AM images. In addition, we also train a generator F to translate the Giemsa-stained blood smear images back to virtual Smart-AM images. With F, the generated virtually stained image can be translated back to a virtual Smart-AM image, which is used to calculate a recovery loss against the original Smart-AM image. The recovery loss also forces G to keep the structural information of Smart-AM images. 1230 paired Smart-AM and Giemsa-stained image patches with a patch size of 256 × 256 (cropped from the 4036 × 3024-pixel original Smart-AM images with a 100-pixel moving step) were used in the training phase. For our virtual staining task, this dataset size already achieves stable and satisfactory results. The required data is minimal, and the network can be trained quickly. The loss function of the network is defined as:

$$L = L_{cGAN}(G,D) + \lambda L_{L1}(G,F) + \gamma L_{SSIM}(G,F)$$

Here:

$$L_{cGAN}(G,D) = E_{x,y}\|D(x,y)\|_2 + E_x\|1 - D(x,G(x))\|_2$$
$$L_{L1}(G,F) = \|G(x) - y\|_1 + \|F(G(x)) - x\|_1$$
$$L_{SSIM}(G,F) = 1 - E_{x\sim p_{data}(x)}[\textrm{SSIM}(x,G(x))] + 1 - E_{G(x)\sim p_{data}(G(x))}[\textrm{SSIM}(G(x),F(G(x)))]$$
where x and y represent the input Smart-AM images and real Giemsa-stained images, respectively. We set the weighting parameter of the L1 loss to $\lambda = 10$ and that of the SSIM loss to $\gamma = 1$. The optimal network is obtained through continuous adversarial optimization between the generator and the discriminator. The SSIM loss aims to minimize distortions in the virtual staining process. Unlike image recognition tasks, where accuracy can be measured using recognition rates, the quality of our virtually stained images serves as a direct reflection of the network’s accuracy and performance. To demonstrate the possibilities and effectiveness of virtually stained Smart-AM (DeepSmart-AM) images, we imaged unstained blood smears using our Smart-AM system and then obtained the DeepSmart-AM images directly from the trained generator. The validation datasets include 48 image patches (1000 × 1000) from another four Smart-AM images (4036 × 3024). With the well-trained network, it takes less than three minutes to obtain a DeepSmart-AM image from a raw Smart-AM image (∼1.4 mm2).
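To make the composite objective concrete, the PyTorch sketch below implements the generator-side terms of the loss above. The networks G, F, and D, the two-argument discriminator call, and the third-party SSIM routine (pytorch-msssim here) are assumptions for illustration, not the authors’ released code.

# A minimal sketch of the generator loss; G, F, D are assumed torch.nn.Module networks,
# and D is assumed to take (condition, image) as two inputs, as in pix2pix.
import torch
from pytorch_msssim import ssim  # assumed third-party differentiable SSIM

def generator_loss(G, F, D, x, y, lam=10.0, gamma=1.0):
    # x: Smart-AM input, y: real Giemsa-stained target, both (N, 3, 256, 256) in [0, 1]
    g_x = G(x)                        # DeepSmart-AM (virtually stained) image
    rec_x = F(g_x)                    # recovered virtual Smart-AM image
    adv = torch.mean((1.0 - D(x, g_x)) ** 2)   # least-squares adversarial term for G
    l1 = torch.mean(torch.abs(g_x - y)) + torch.mean(torch.abs(rec_x - x))  # L1 + recovery loss
    ssim_term = (1 - ssim(x, g_x, data_range=1.0)) + (1 - ssim(g_x, rec_x, data_range=1.0))
    return adv + lam * l1 + gamma * ssim_term  # L = L_cGAN + lambda*L_L1 + gamma*L_SSIM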

The network is implemented in Python version 3.7.3 with PyTorch version 1.0.1. The software runs on a desktop computer with a Core i7-8700K CPU @ 3.7 GHz and 32 GB of RAM. The training and testing of the neural networks were performed on GeForce GTX 1080 Ti GPUs with 11 GB of memory.

3. Results

3.1 Imaging performance of Smart-AM with fluorescent beads

The imaging resolution of our Smart-AM system is quantified by imaging green fluorescent polymer microspheres with a diameter of 200 nm (G200, Thermo Fisher Scientific Inc.) (Fig. 2(a)). The profiles of ten fluorescent beads were selected and averaged, and the resolution was measured by Gaussian fitting (Fig. 2(b)). The full width at half maximum (FWHM) of the Gaussian fit is 1.42 µm with a signal-to-noise ratio (SNR) of 38.5 dB, representing the imaging resolution of the Smart-AM system.
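A short sketch of this FWHM estimation is given below. The averaged bead-profile file and helper names are illustrative assumptions; the 0.33 µm pixel size follows the caption of Fig. 2.

# A minimal Gaussian-fitting sketch for the bead profile (SciPy); file name is hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma, offset):
    return a * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) + offset

pixel_um = 0.33                           # effective pixel size after 2x optical zoom
profile = np.load("bead_profile.npy")     # averaged intensity profile across ten beads
x = np.arange(profile.size) * pixel_um

p, _ = curve_fit(gaussian, x, profile,
                 p0=[profile.max() - profile.min(), x.mean(), 1.0, profile.min()])
fwhm = 2 * np.sqrt(2 * np.log(2)) * abs(p[2])   # FWHM = 2*sqrt(2 ln 2)*sigma
print(f"FWHM ~ {fwhm:.2f} um")                  # ~1.42 um reported in Fig. 2(b)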


Fig. 2. Resolution characterization of the Smart-AM system. a) A Smart-AM image of fluorescent polymer microspheres of 200 nm in diameter. b) Zoomed-in view of one bead in the orange box in (a); the profile along the red dashed line is extracted for averaging (red circles). The FWHM of the Gaussian fitting profile (solid blue line) is about 1.42 µm. The pixel size of the image (the distance between two consecutive data points) is 0.33 µm.


3.2 Subcellular imaging of blood smears by Smart-AM

As shown in Fig. 3, our system enables the generation of autofluorescence images of different blood cells. The first and third columns of Fig. 3 are Smart-AM images of the blood samples, and the second and last columns are the corresponding Giemsa-stained images scanned by the digital whole-slide scanner. Again, the essential endogenous fluorophores fluoresce under UV excitation, providing the autofluorescence imaging contrast of each cell type. Therefore, the features recapitulated in Smart-AM images show strong consistency with the Giemsa-stained gold standard, which enables us to differentiate different blood cells, especially the five different leukocytes. We acquired multiple images of the same slide using the same imaging settings and parameters. The results obtained from each imaging session are consistent and reproducible, indicating that the imaging system is reliable. They all show that the UV absorption of nucleic acids in leukocytes produces distinguishable contrast in Smart-AM images, which corresponds to the violet color in Giemsa-stained images. The Smart-AM images of neutrophils (Fig. 3(a–c)) present the lobed textures of the nuclei (e.g., bilobed, trilobed, and multilobed neutrophils). Besides, the cytoplasm of eosinophils (Fig. 3(d,e), white arrowheads) shows stronger fluorescence than that of other granulocytes due to the rich acidophilic cytoplasmic granules. The repeatable results indicate that the Smart-AM system has high sensitivity for visualizing different blood cells.


Fig. 3. Smart-AM images and corresponding Giemsa-stained images of different blood cells. Different morphological details can be observed among different blood cells. a–c) Smart-AM and corresponding Giemsa-stained images of neutrophils. d,e) Smart-AM and corresponding Giemsa-stained images of eosinophils. f) Smart-AM and corresponding Giemsa-stained images of a monocyte. g–i) Smart-AM and corresponding Giemsa-stained images of lymphocytes. j) Smart-AM and corresponding Giemsa-stained images of a basophil. k) Smart-AM and corresponding Giemsa-stained images of erythrocytes. l) Smart-AM and corresponding Giemsa-stained images of thrombocytes (yellow arrowheads).


3.3 Atypical blood morphology visualized by Smart-AM

The Smart-AM images contain detailed morphological information on blood cells, which is essential for blood-related disease diagnosis and monitoring. Abnormal blood cell distribution or morphology can be observed in our Smart-AM images without sample processing or chemical reagents. To show the strength of Smart-AM, Smart-AM images of several abnormal blood samples are shown (Fig. 4).


Fig. 4. Smart-AM imaging of abnormal blood samples. a) Smart-AM (top) and corresponding Giemsa-stained (bottom) images over a ∼1 mm × 1.4 mm region of a blood smear with abnormal leukocytes. b–d) Zoomed-in Smart-AM and Giemsa-stained images of the orange, green, and yellow boxes marked in (a), respectively. Some leukocytes are undergoing apoptosis, showing cytoplasmic blebbing on the surface (red arrows in (c), (d)) and apoptotic bodies from one eosinophil (yellow arrows in (c)). Besides, some clumps of leukocytes are observed (pink arrows in (d), (e)) due to inflammation or bacterial infections. e) Smart-AM images over a ∼1 mm × 1.4 mm region of a blood smear with abnormal thrombocytes. The inset at the bottom left of the Smart-AM image is the corresponding Giemsa-stained image. f) Zoomed-in Smart-AM and corresponding Giemsa-stained image of the orange dashed region marked in (e), showing the excess platelets. g,h) Zoomed-in Smart-AM and corresponding Giemsa-stained images of the yellow and green boxes marked in (e), respectively, showing platelet clumping. i) Smart-AM (top) and corresponding Giemsa-stained (bottom) images over a ∼1 mm × 1.4 mm region of a blood smear with cancer. j–l) Zoomed-in Smart-AM and Giemsa-stained images of the orange, green, and yellow boxes marked in (i), respectively. Excess neutrophils and larger lymphocytes (red arrows in (j), (k)) compared with normal lymphocytes (pink arrows in (k), (l)) are observed. m) Distribution of leukocyte features extracted from normal and cancer blood samples, with median cross-sectional areas of 134 µm2 and 175 µm2 and median intercellular distances of 186 µm and 17 µm, respectively. n) Distributions of leukocyte features extracted from (j). The significance is defined as p ≤ 0.05 in all cases.


Apoptosis is usually a normal process of programmed cell death [40]. However, apoptosis in both excessive and reduced amounts has pathological implications. Figure 4(a–d) shows excess leukocytes undergoing apoptosis (the red and yellow arrowheads). The red arrowheads indicate cytoplasmic blebbing on the surface, and the yellow arrowhead indicates an eosinophil breaking into apoptotic bodies. Besides, some clumps of leukocytes (the pink arrowheads in Fig. 4(b,c)) are observed due to inflammation or bacterial infections.

In the Smart-AM images of blood samples with platelet abnormalities (Fig. 4(e)), large and excess platelets (zoomed-in Smart-AM image, Fig. 4(f), of the orange dashed region marked in Fig. 4(e)) and aggregated platelets (Fig. 4(g,h), with corresponding Giemsa-stained hematological images) can be easily observed. These platelet abnormalities imply essential thrombocythemia or reactive thrombocytosis.

Figure 4(i) exhibits excess and abnormal leukocytes over a ∼1.4 mm2 region. The cancer blood samples were extracted from a 4T1 cell allograft mouse. The zoomed-in Smart-AM images (Fig. 4(j–l)) show the details of these leukocytes. The number of neutrophils increases dramatically in this sample, and some lymphocytes (the red arrowheads) are larger than normal lymphocytes (the pink arrowheads). Diagnostic features, such as the cross-sectional area and intercellular distance of leukocytes, provide vital quantitative markers of blood conditions. These features can be digitally extracted from Smart-AM images of both cancer and normal blood samples using a pixel-based segmentation Fiji plugin (trainable Weka segmentation) [41]. The image was subsequently converted to a binary image based on the resulting probability maps, enabling the identification of individual blood cells. The segmentation results and binarized image can be analyzed directly in Fiji to obtain the area and position of each leukocyte. The statistical results (Fig. 4(m)) indicate a significant difference between normal and cancer blood samples based on these cellular features. Compared with Giemsa-stained images, the Smart-AM images can successfully identify atypical blood morphology and size (Fig. 4(n)), which verifies the potential of our system for blood screening applications.
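As one plausible realization of this feature analysis outside Fiji, the Python sketch below computes per-cell cross-sectional areas and intercellular distances from an exported binary mask and applies the Wilcoxon rank-sum test. The file names, the nearest-neighbor definition of intercellular distance, and the helper names are our own assumptions.

# A minimal sketch of the leukocyte feature statistics; mask file names are hypothetical.
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import ranksums
from skimage import io, measure

def leukocyte_features(mask_path, pixel_um=0.33):
    labels = measure.label(io.imread(mask_path) > 0)   # binarized Weka segmentation mask
    props = measure.regionprops(labels)
    areas = np.array([p.area for p in props]) * pixel_um ** 2     # cross-sectional areas (um^2)
    centroids = np.array([p.centroid for p in props]) * pixel_um
    d, _ = cKDTree(centroids).query(centroids, k=2)
    return areas, d[:, 1]                              # nearest-neighbor distances (um)

areas_normal, dist_normal = leukocyte_features("normal_mask.png")
areas_cancer, dist_cancer = leukocyte_features("cancer_mask.png")
print(ranksums(areas_normal, areas_cancer))            # significance defined as p <= 0.05
print(ranksums(dist_normal, dist_cancer))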

3.4 Leukocyte five-part differential

Differential leukocyte counting is essential in hematological analysis for monitoring and diagnosing blood diseases. Here, we employed LLE to visualize the leukocyte features in two-dimensional and three-dimensional space (Fig. 5(a)). The results show good clustering of the leukocyte subtypes. To complete the leukocyte subtype differential analysis, we employed the open-source Detectron2 platform, based on a deep learning algorithm, on Smart-AM images (detailed data information is shown in Table S1, Supplement 1). Raw Smart-AM images can be input directly into the trained model, which outputs images containing the bounding boxes, predicted leukocyte labels, and confidence coefficients within minutes. Representative Smart-AM images of detection and classification results are shown in Fig. 5(b). The confusion matrix represents the differential result by the counts of the predicted and actual class labels (Fig. 5(c)). Based on this matrix, the result was evaluated using the quantitative metrics of accuracy and F1-score. The accuracy is the ratio of all correct predictions to the whole pool of instances, which is the most intuitive metric. In contrast, the F1-score is the harmonic mean of the precision (the ratio of correctly predicted positive labels to all predicted positive labels) and recall (the ratio of correctly predicted positive labels to the actual positive instances), which is better suited to our uneven class distribution. The average accuracy and F1-score among all leukocytes are 0.982 and 0.925, respectively. The differential performance for monocytes and basophils is poorer than for the other subtypes: some monocytes and basophils are classified as neutrophils or lymphocytes due to similar morphological features. This issue can be addressed by obtaining more evenly distributed samples or improving the deep learning algorithm. The Smart-AM images combined with the detection and classification deep learning network can provide a tool for leukocyte subtype differentials with favorable performance for hematological analysis.
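A minimal scikit-learn sketch of this LLE visualization is shown below; the precomputed feature files and the neighbor count are illustrative assumptions.

# A minimal LLE visualization sketch (scikit-learn); feature/label files are hypothetical.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import LocallyLinearEmbedding

X = np.load("leukocyte_features.npy")   # (476, d) feature vectors from Smart-AM patches
y = np.load("leukocyte_labels.npy")     # subtype labels 0-4 (Neu., Lym., Mon., Eos., Bas.)

lle = LocallyLinearEmbedding(n_components=2, n_neighbors=10)
X2 = lle.fit_transform(X)               # preserves local neighborhoods in 2-D

plt.scatter(X2[:, 0], X2[:, 1], c=y, cmap="tab10", s=8)
plt.xlabel("LLE dimension 1"); plt.ylabel("LLE dimension 2")
plt.show()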


Fig. 5. Automatic and high-accuracy differential of five leukocytes with the Detectron2 platform. a) The LLE visualization of blood cell features in two-dimensional and three-dimensional (inset at the bottom right) space. b) Smart-AM images of five leukocyte subtypes from the detection results. The results contain the bounding boxes, predicted leukocyte labels, and the confidence coefficient. c) Confusion matrix for five-part leukocyte differential counts. d) Performance evaluation of differential results by quantitative metrics (accuracy, sensitivity, specificity, and F1-score). The average values among all leukocytes are 0.982 and 0.925 for accuracy and F1-score, respectively. Neu.: neutrophil, Lym.: lymphocyte, Mon.: monocyte, Eos.: eosinophil, Bas.: basophil.



Fig. 6. Virtual staining of the Smart-AM (DeepSmart-AM) images with a conditional adversarial network. a) Virtual staining by a conditional adversarial network with two generators and a discriminator. SSIM: structural similarity index measure, Rec: recovered. b–e) Smart-AM (top/left) and DeepSmart-AM (bottom/right) validation with blood smear samples. f–m) Zoomed-in Smart-AM and DeepSmart-AM images of orange and green solid boxes in (b–e), respectively. n–u) The corresponding Giemsa-stained images as the ground truth. Blood cells including erythrocytes, different leukocytes (lymphocyte in (f), eosinophils in (g) and (h), and neutrophils in (i–k)), and platelets, were well virtually stained to mimic the appearance of real Giemsa-stained images.


To validate the performance of our Smart-AM system with the detection and classification deep learning network for screening leukocyte count disorders, we compared the leukocyte percentages from our detection results with manual counting results (Figure S5, Supplement 1). Only slight differences appear between the predicted and manual counting results, which can be further minimized using larger numbers of leukocytes. Meanwhile, these results indicate that our technique is a promising tool for revealing abnormal variations in leukocyte percentages when screening patients’ blood conditions.

3.5 DeepSmart-AM images of unstained blood smears versus traditional Giemsa-stained images

While the autofluorescence images produced by our Smart-AM system are generally of good quality, we have observed some artifacts in the images of red blood cells. These artifacts include bright dots or irregularities in the shape and appearance of the cells, which may come from dust particles or debris on the slide or optics, or from variations in lighting conditions. In addition to optimizing the experimental environment, we implemented a deep learning network to transform the Smart-AM images into DeepSmart-AM images that mimic the appearance of real Giemsa-stained images, which can be readily interpreted by hematologists. This approach has proven effective for enhancing the stability and reliability of our Smart-AM images and has the translational potential to significantly improve the accuracy and consistency of our technique. Detailed information on the virtual staining network (Fig. 6(a)) is given in the Materials and Methods section. To demonstrate the clinical possibilities and effectiveness of our DeepSmart-AM images, we imaged both fresh normal and abnormal blood smears using our Smart-AM system, generated the DeepSmart-AM images with the virtual staining network, and then compared them with Giemsa-stained images of the same slides (Fig. 6, Figure S6, and Visualization 1, Visualization 2, Supporting Information). Figure 6(f–m) shows Smart-AM images and their virtually stained outputs (DeepSmart-AM images) for the zoomed-in regions in Fig. 6(b–e), and Fig. 6(n–u) shows the corresponding bright-field Giemsa-stained images as the ground truth. Validated with erythrocytes, different leukocyte subtypes (Fig. 6(f–k)), and platelets (Fig. 6(l,m)), the well-trained network enables the style transformation of label-free Smart-AM images into DeepSmart-AM images that contain information conducive to analysis and diagnosis by hematologists. For example, the multiple lobes of the leukocyte nucleus can be observed in DeepSmart-AM images. The pink part of the eosinophil (Fig. 6(g,h)), which indicates the rich acidophilic cytoplasmic granules, is consistent with the clinical standard (Fig. 6(o,p)). We first applied SSIM to evaluate the virtual staining results. The SSIM values are 0.73, 0.75, 0.58, 0.68, 0.62, 0.50, 0.87, and 0.80 for the zoomed-in regions (Fig. 6(f–m)), respectively. Note that the SSIM values alone may not fully reflect the perceptual similarity observed between the virtually stained images and the ground truth. Therefore, we performed a further quantitative cellularity analysis of diagnostic features. To show the consistent diagnostic features presented in both DeepSmart-AM images and traditional Giemsa-stained images, we segmented both DeepSmart-AM and clinical-standard images to identify and localize each blood cell. The cross-sectional area and intercellular distance of the blood cells were extracted from both image types, and we plotted the distributions of these features to examine the correspondence between the DeepSmart-AM and clinical-standard Giemsa-stained images (Fig. 7). According to the Wilcoxon rank-sum test, the statistical results suggest that the cellular features of DeepSmart-AM images agree well with those of the clinical-standard images, meaning the style transformation is highly effective.
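The per-patch SSIM evaluation above can be reproduced with a few lines of scikit-image; the file names below are illustrative, and registration between the two patches is assumed to have been done beforehand.

# A minimal SSIM evaluation sketch (scikit-image); patch file names are hypothetical.
from skimage import io
from skimage.metrics import structural_similarity as ssim

pred = io.imread("deepsmart_am_patch.png")  # virtually stained 1000 x 1000 patch
gt = io.imread("giemsa_patch.png")          # registered Giemsa-stained ground truth
score = ssim(pred, gt, channel_axis=-1, data_range=255)  # RGB uint8 images
print(f"SSIM = {score:.2f}")                # e.g., 0.50-0.87 across the patches in Fig. 6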


Fig. 7. Distribution of blood cell features extracted from both DeepSmart-AM images and clinical standard Giemsa-stained images. a,c) Cross-sectional areas of blood cells extracted from mouse blood samples and human blood samples, respectively. b,d) Intercellular distances of blood cells extracted from mouse blood samples and human blood samples, respectively. Wilcoxon rank-sum testing is performed for each distribution. The significance is defined as p ≤ 0.05.


4. Discussion

Smartphone-based systems utilizing various contrast mechanisms have been effectively introduced and applied across a wide range of applications. Table S2 (Supplement 1) compares representative smartphone-based microscopy techniques, among which our Smart-AM stands out for its label-free autofluorescence contrast, high resolution, minimal aberration and field curvature, and acceptable single-frame FOV. To the best of our knowledge, this work demonstrates the first smartphone-based autofluorescence microscope for label-free peripheral blood smear analysis. Blood enumeration and morphology play a vital role in screening and diagnosing blood abnormalities. The morphological features of blood cells can be visualized using our Smart-AM without exogenous agents, owing to the intrinsic fluorescence contrast under UV excitation, providing diagnostic information equivalent to the canonical method. Furthermore, the deep learning-based detection and transformation algorithms enable rapid, practical, and reliable visualization and characterization of peripheral blood smears for non-professional users. These attributes make DeepSmart-AM highly favorable for resource-limited environments and significant for a myriad of point-of-care applications. Meanwhile, the cost of our system (less than USD 100 for add-on parts and ∼USD 1,000 for the smartphone) is markedly lower than that of commercial machines, which also require multiple chemical reagents for hematological analysis. Note that the ubiquity of smartphones further eases the deployment of our proposed approach. Our Smart-AM system, combined with deep learning algorithms, holds great promise as a home-monitoring device for blood smear imaging and diagnosis, providing great convenience to patients who suffer from blood-related diseases that require frequent blood tests.

Despite these successful demonstrations, further improvements are needed to address several challenges and make our technique more suitable for clinical translation. First, the current system operates in transmission mode for blood smear imaging, which limits its applications to thin slides and monolayer imaging. A reflection-mode Smart-AM system could be developed through the total-internal-reflection field illumination method [42] or by using optical fibers to guide the illumination beam from the same side as the detection beam. In this configuration, the system could image large regions of blood smears that are not monolayers, or image a drop of blood directly without smearing. Second, while our Smart-AM technique has shown great promise for imaging peripheral blood smears and providing morphological information, we acknowledge that our current approach cannot visualize hemoglobin within red blood cells. This is likely because hemoglobin does not exhibit significant autofluorescence in the visible or near-infrared range. However, hemoglobin can be indirectly detected through its effect on light absorption, meaning that it is possible to incorporate into our Smart-AM system an additional light source operating at a wavelength corresponding to the peak absorption of hemoglobin. This could potentially enable us to obtain information about the presence and distribution of hemoglobin within peripheral blood smears, providing a more comprehensive hematological analysis. It should be noted that implementing these solutions may come at the cost of increased system complexity. Third, since the supervised deep learning method has proven highly effective in enhancing the stability and reliability of Smart-AM images, with significant translational potential for improving the accuracy and consistency of our technique, unsupervised deep learning methods could also be explored to eliminate the registration procedure in network training and thus enhance the detection and transformation efficiency. Meanwhile, an encrypted cloud storage system [24] could be utilized for remote post-processing in future applications, creating new possibilities for cost-effective point-of-care diagnostics and care delivery. Finally, our DeepSmart-AM approach has focused mostly on normal mouse and human blood samples in this manuscript, which demonstrates only the potential of our method for accurate leukocyte classification and style transformation. To thoroughly assess its suitability for the claimed applications in diagnosing blood-related diseases, further investigations are needed to test the detection and style-transformation accuracy on various abnormal blood samples (e.g., leukemia). Conducting a more comprehensive statistical analysis and clinical study, including the size, number, and circularity/shape variation of both leukocytes and erythrocytes, also holds great significance for future research.

In summary, we developed a smartphone-based quantitative autofluorescence microscopy system, assisted by deep-learning algorithms, to achieve label-free blood smear imaging for peripheral blood smear analysis, including the visualization of essential features of blood cells, accurate detection and classification of different leukocytes, and high-quality virtual Giemsa-stained image transformation. This practical technique shortens the traditional clinical blood-test workflow from days to less than ten minutes, eliminating the need for expensive machines, chemical reagents, and manual labor. DeepSmart-AM has great potential to enable point-of-care monitoring and remote hematological diagnostics in the future.

Funding

Hong Kong University of Science and Technology (R9421).

Authors’ contributions. B. H. and T. T. W. W. conceived of the study. B. H. built the imaging system. B. H., V. T. C. T., and C. T. K. L. prepared the specimens involved in this study. B. H. performed the imaging experiments. B. H. and L. K. performed hematological staining. B. H. processed and analyzed the data. B. H. and T. T. W. W. wrote the manuscript. T. T. W. W. supervised the whole study.

Disclosures

T. T. W. W. has a financial interest in PhoMedics Limited, which, however, did not support this work. B. H., L. K., and T. T. W. W. have applied for a patent (US Provisional Patent Application No.: 63/340 947) related to the work reported in this manuscript. The remaining authors declare no competing interests.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but could be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. P. G. Corrie, “Cytotoxic chemotherapy: clinical aspects,” Medicine 36(1), 24–28 (2008). [CrossRef]  

2. T. B. Sneed, H. M. Kantarjian, M. Talpaz, et al., “The significance of myelosuppression during therapy with imatinib mesylate in patients with chronic myelogenous leukemia in chronic phase,” Cancer 100(1), 116–121 (2004). [CrossRef]  

3. K. L. Brown, O. Y. Palyvoda, J. S. Thakur, et al., “Raman spectroscopic differentiation of activated versus non-activated T lymphocytes: An in vitro study of an acute allograft rejection model,” J. Immunol. Methods 340(1), 48–54 (2009). [CrossRef]  

4. A. Ramoji, U. Neugebauer, T. Bocklitz, et al., “Toward a Spectroscopic Hemogram: Raman Spectroscopic Differentiation of the Two Most Abundant Leukocytes from Peripheral Blood,” Anal. Chem. 84(12), 5335–5342 (2012). [CrossRef]  

5. A. Ojaghi, G. Carrazana, C. Caruso, et al., “Label-free hematology analysis using deep-ultraviolet microscopy,” Proc. Natl. Acad. Sci. 117(26), 14779–14789 (2020). [CrossRef]  

6. N. Kaza, A. Ojaghi, and F. E. Robles, “Virtual Staining, Segmentation, and Classification of Blood Smears for Label-Free Hematology Analysis,” BME Front. 2022, 1 (2022). [CrossRef]  

7. V. Gorti, N. Kaza, E. K. Williams, et al., “Compact and low-cost deep-ultraviolet microscope system for label-free molecular imaging and point-of-care hematological analysis,” Biomed. Opt. Express 14(3), 1245 (2023). [CrossRef]  

8. Q. Huang, W. Li, B. Zhang, et al., “Blood Cell Classification Based on Hyperspectral Imaging with Modulated Gabor and CNN,” IEEE J. Biomed. Health Inform. 24(1), 160–170 (2020). [CrossRef]

9. Q. Li, M. Zhou, H. Liu, et al., “Red Blood Cell Count Automation Using Microscopic Hyperspectral Imaging Technology,” Appl. Spectrosc. 69(12), 1372–1380 (2015). [CrossRef]  

10. G. S. Verebes, M. Melchiorre, A. Garcia-Leis, et al., “Hyperspectral enhanced dark field microscopy for imaging blood cells,” J. Biophotonics 6(11-12), 960–967 (2013). [CrossRef]  

11. N. T. Shaked, L. L. Satterwhite, M. J. Telen, et al., “Quantitative microscopy and nanoscopy of sickle red blood cells performed by wide field digital interferometry,” J. Biomed. Opt. 16(03), 1 (2011). [CrossRef]  

12. Y. K. Park, M. Diez-Silva, G. Popescu, et al., “Refractive index maps and membrane dynamics of human red blood cells parasitized by Plasmodium falciparum,” Proc. Natl. Acad. Sci. U. S. A. 105(37), 13730–13735 (2008). [CrossRef]  

13. T. A. Zangle, D. Burnes, C. Mathis, et al., “Quantifying Biomass Changes of Single CD8+ T Cells during Antigen Specific Cytotoxicity,” PLoS One 8(7), e68916 (2013). [CrossRef]  

14. J. Yoon, K. Kim, H. Park, et al., “Label-free characterization of white blood cells by measuring 3D refractive index maps,” Biomed. Opt. Express 6(10), 3865 (2015). [CrossRef]  

15. X. Shu, S. Sansare, D. Jin, et al., “Artificial-Intelligence-Enabled Reagent-Free Imaging Hematology Analyzer,” Adv. Intell. Syst. 3(8), 2000277 (2021). [CrossRef]  

16. B. P. Yakimov, M. A. Gogoleva, A. N. Semenov, et al., “Label-free characterization of white blood cells using fluorescence lifetime imaging and flow-cytometry: molecular heterogeneity and erythrophagocytosis [Invited],” Biomed. Opt. Express 10(8), 4220 (2019). [CrossRef]  

17. D. Chen, N. Li, X. Liu, et al., “Label-free hematology analysis method based on defocusing phase-contrast imaging under illumination of 415 nm light,” Biomed. Opt. Express 13(9), 4752 (2022). [CrossRef]  

18. D. N. Breslauer, R. N. Maamari, N. A. Switz, et al., “Mobile Phone Based Clinical Microscopy for Global Health Applications,” PLoS One 4(7), e6320 (2009). [CrossRef]  

19. H. C. Koydemir, Z. Gorocs, D. Tseng, et al., “Rapid imaging, detection and quantification of Giardia lamblia cysts using mobile-phone based fluorescent microscopy and machine learning,” Lab Chip 15(5), 1284–1293 (2015). [CrossRef]  

20. S. Chung, L. E. Breshears, A. Gonzales, et al., “Norovirus detection in water samples at the level of single virus copies per microliter using a smartphone-based fluorescence microscope,” Nat. Protoc. 16(3), 1452–1475 (2021). [CrossRef]  

21. H. Zhu, O. Yaglidere, T.-W. Su, et al., “Cost-effective and compact wide-field fluorescent imaging on a cell-phone,” Lab Chip 11(2), 315–322 (2011). [CrossRef]  

22. M. V. D’Ambrosio, M. Bakalar, S. Bennuru, et al., “Point-of-care quantification of blood-borne filarial parasites with a mobile phone microscope,” Sci. Transl. Med. 7(286), 1 (2015). [CrossRef]  

23. W. Zhu, G. Pirovano, P. K. O’Neal, et al., “Smartphone epifluorescence microscopy for cellular imaging of fresh tissue in low-resource settings,” Biomed. Opt. Express 11(1), 89 (2020). [CrossRef]  

24. H. Im, C. M. Castro, H. Shao, et al., “Digital diffraction analysis enables low-cost molecular diagnostics on a smartphone,” Proc. Natl. Acad. Sci. 112(18), 5613–5618 (2015). [CrossRef]  

25. H. Zhu, I. Sencan, J. Wong, et al., “Cost-effective and rapid blood analysis on a cell-phone,” Lab Chip 13(7), 1282 (2013). [CrossRef]  

26. Q. Wei, H. Qi, W. Luo, et al., “Fluorescent imaging of single nanoparticles and viruses on a smart phone,” ACS Nano 7(10), 9147–9155 (2013). [CrossRef]  

27. S. Kheireddine, Z. J. Smith, D. V. Nicolau, et al., “Simple adaptive mobile phone screen illumination for dual phone differential phase contrast (DPDPC) microscopy,” Biomed. Opt. Express 10(9), 4369 (2019). [CrossRef]  

28. M. K. Aslan, Y. Ding, S. Stavrakis, et al., “Smartphone Imaging Flow Cytometry for High-Throughput Single-Cell Analysis,” Anal. Chem. 95(39), 14526–14532 (2023). [CrossRef]  

29. T. O’Connor, A. Anand, B. Andemariam, et al., “Deep learning-based cell identification and disease diagnosis using spatio-temporal cellular dynamics in compact digital holographic microscopy,” Biomed. Opt. Express 11(8), 4491 (2020). [CrossRef]  

30. K. de Haan, H. Ceylan Koydemir, Y. Rivenson, et al., “Automated screening of sickle cells using a smartphone-based microscope and deep learning,” npj Digit. Med. 3(1), 76 (2020). [CrossRef]  

31. Y. Wu, A. Kirillov, F. Massa, et al., “Detectron2,” https://github.com/facebookresearch/detectron2 (2019).

32. P. Isola, J.-Y. Zhu, T. Zhou, et al., “Image-to-Image Translation with Conditional Adversarial Networks,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2017), 2017-Janua, pp. 5967–5976.

33. M. Monici, “Cell and tissue autofluorescence research and diagnostic applications,” Biotechnol. Annu. Rev. 11, 227–256 (2005). [CrossRef]

34. M. Monici, R. Pratesi, et al., ““Natural fluorescence of white blood cells”: spectroscopic and imaging study,” J. Photochem. Photobiol. B Biol. 30, 29–37 (1995). [CrossRef]

35. A. B. Shrirao, R. S. Schloss, Z. Fritz, et al., “Autofluorescence of blood and its application in biomedical and clinical research,” Biotechnol. Bioeng. 118, 4550–4576 (2021). [CrossRef]  

36. Y. Liu, A. M. Rollins, R. M. Levenson, et al., “Pocket MUSE: an affordable, versatile and high-performance fluorescence microscope using a smartphone,” Commun. Biol. 4(1), 334 (2021). [CrossRef]  

37. Y. Hou, P. Zhang, X. Xu, et al., “Nonlinear dimensionality reduction by locally linear inlaying,” IEEE Trans. Neural Networks 20(2), 300–315 (2009). [CrossRef]  

38. K. He, G. Gkioxari, P. Dollár, et al., “Mask R-CNN,” IEEE Trans. Pattern Anal. Mach. Intell. 42(2), 386–397 (2020).

39. Z. Wang, A. C. Bovik, H. R. Sheikh, et al., “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004). [CrossRef]  

40. S. Elmore, “Apoptosis: A Review of Programmed Cell Death,” Toxicol. Pathol. 35(4), 495–516 (2007). [CrossRef]  

41. I. Arganda-Carreras, V. Kaynig, C. Rueden, et al., “Trainable Weka Segmentation: a machine learning tool for microscopy pixel classification,” Bioinformatics 33(15), 2424–2426 (2017). [CrossRef]  

42. Y. Sung, F. Campa, and W.-C. Shih, “Open-source do-it-yourself multi-color fluorescence smartphone microscopy,” Biomed. Opt. Express 8(11), 5075 (2017). [CrossRef]  

Supplementary Material (3)

Supplement 1: Supplemental Document
Visualization 1: A vertical series of close-up and registered Smart-AM, DeepSmart-AM, and Giemsa-stained hematological images of a blood smear.
Visualization 2: A horizontal series of close-up and registered Smart-AM, DeepSmart-AM, and Giemsa-stained hematological images of a blood smear.
