
Label- and slide-free tissue histology using 3D epi-mode quantitative phase imaging and virtual hematoxylin and eosin staining


Abstract

Histological staining of tissue biopsies, especially hematoxylin and eosin (H&E) staining, serves as the benchmark for disease diagnosis and comprehensive clinical assessment of tissue. However, the typical formalin-fixation, paraffin-embedding (FFPE) process is laborious and time consuming, often limiting its usage in time-sensitive applications such as surgical margin assessment. To address these challenges, we combine an emerging 3D quantitative phase imaging technology, termed quantitative oblique back illumination microscopy (qOBM), with an unsupervised generative adversarial network pipeline to map qOBM phase images of unaltered thick tissues (i.e., label- and slide-free) to virtually stained H&E-like (vH&E) images. We demonstrate that the approach achieves high-fidelity conversions to H&E with subcellular detail using fresh tissue specimens from mouse liver, rat gliosarcoma, and human gliomas. We also show that the framework directly enables additional capabilities such as H&E-like contrast for volumetric imaging. The quality and fidelity of the vH&E images are validated using both a neural network classifier trained on real H&E images and tested on virtual H&E images, and a user study with neuropathologists. Given its simple and low-cost embodiment and ability to provide real-time feedback in vivo, this deep-learning-enabled qOBM approach could enable new workflows for histopathology with the potential to significantly save time, labor, and costs in cancer screening, detection, treatment guidance, and more.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. INTRODUCTION

Histopathology is the gold standard for diagnosing disease, guiding surgical margins during lesion resection, and overall clinical evaluation of tissue [1]. To visualize tissue architecture, labor- and time-intensive tissue processing is currently required. During the most common histopathology procedure, an excised tissue specimen is fixed in formalin and paraffin-embedded (FFPE), sectioned to generate micron-thick slices, and then mounted onto microscope slides. Those slides can then undergo a number of different staining procedures, with the most common being hematoxylin and eosin (H&E) staining, in which hematoxylin stains cell nuclei purple and eosin stains the extracellular matrix, stroma, and cytoplasm pink [1]. This standard, widely used process typically takes eight hours or more to complete. The frozen-section alternative, while often available, comes with significant technical and quality challenges [2]. Consequently, fast, real-time tissue assessment with H&E-like contrast would have the potential to improve several medical procedures, ranging from surgical margin assessment to cancer screening and more.

In an effort to gain real-time histopathology-level tissue assessments for use in surgery and other clinical fields, alternate microscopy techniques have been employed to provide imaging feedback during tissue excision. Some of these techniques include rapid tissue staining followed by linear [3,4] and nonlinear fluorescence microscopy [5], as well as label-free approaches ranging from ultraviolet-based methods and autofluorescence [6–8] to more complex nonlinear techniques [9–12]. Many of these methods have also incorporated virtual staining pipelines to obtain images that are familiar to pathologists and thus avoid the need for further training on each imaging modality [3–5,12–15]. While promising, these methods have certain downsides, as they variously rely on staining the imaged tissues, employ UV light (which has very limited penetration depth and may be challenging to implement in vivo due to phototoxicity), and/or use complex and expensive nonlinear methods to achieve virtual histology. Further, translation of these technologies to in-vivo applications is challenging or infeasible given the need for exogenous agents, concerns regarding tissue damage, and technological hurdles. These challenges limit the applicability of virtually stained microscopy and slide-free histology and underscore the need for a microscopy method that can provide histopathologic information quickly (in real time), non-destructively, and with high resolution in 3D, using simple, low-cost instrumentation.

To achieve these desired capabilities, we propose the use of virtual-H&E-stained images obtained with quantitative oblique back-illumination microscopy (qOBM) as a method of real-time histopathology for excised tissue samples, and with a clear path to future in-vivo applications [16,17]. qOBM is a label- and fixative-free, wide-field, low-cost microscopy technique capable of obtaining subcellular-resolution, quantitative phase images of thick, scattering tissue samples using same-side epi-illumination [18,19]. (Thick, scattering samples refer to, for example, excised tissues without sectioning or intact organs such as brains, which cannot be imaged with transmission microscopes.) The level of 3D cellular and subcellular structural detail provided by this technology is comparable to that provided by label-free nonlinear microscopy methods, but with an embodiment that is simple and significantly cheaper (it uses LEDs instead of femto-/pico-second lasers), faster (wide-field versus point scanning), gentle on tissues and cells, and easily modified and miniaturized for in-vivo applications [20,21]. Here, we advance qOBM and the field of slide-free histology by introducing an image translation method by which qOBM images are virtually stained to resemble H&E-stained images.

The approach leverages deep learning, specifically generative adversarial networks (GANs) [22], which have been employed to generate virtual H&E histology from alternative microscopy modalities such as quantitative phase imaging [15], reflectance confocal microscopy [23], and photoacoustic remote sensing microscopy [24], among others [14]. This approach typically requires training datasets in which the alternative microscopy images can be pixel-registered with the target domain images (e.g., H&E), and most often relies on the use of thin tissue sections. Here, such pixel-registered datasets are unobtainable as qOBM imaging is performed on fresh tissue, whereas ground-truth H&E images are subject to tissue distortions from histological processing. To work around the lack of one-to-one pixel matching, we turn to cycle-consistent GANs (CycleGANs) [25]. Recent reports (using fluorescently labeled tissue and/or UV light) have demonstrated the utility of such networks for virtual H&E staining while relaxing the pixel-matching constraint [26–28]. Here, we demonstrate the efficacy of CycleGANs for virtual H&E staining of qOBM images. This combination has the potential to reduce the time needed to acquire H&E images from hours or even days to ${\lt}1\;{\rm s}$.

To demonstrate the clinical utility of this method, we primarily focus on imaging brain tissue and differentiating between healthy and tumor regions (this represents one of many potential applications). To date, identifying brain tumor margins intraoperatively remains a significant clinical challenge; thus, neurosurgeons are often conservative with excised margins to minimize damage to healthy brain tissue vital for neurological function. However, this approach can lead to incomplete resections and tumor recurrence. Novel intraoperative methods such as 5-aminolevulinic acid (5-ALA) in vivo staining have shown promise for improving clinical outcomes [29–31], but they are not without their limitations. For example, 5-ALA exhibits variable uptake based on brain morphology [30] and has limited sensitivity for low-grade disease and infiltrative tumor cells even in high-grade tumors [32–34]. Real-time, label-free image guidance with H&E-like contrast has the potential to significantly improve neurosurgical outcomes, particularly if deployed in situ (that is, in the surgical site rather than on excised specimens).

In this study, we first demonstrate the conversion of qOBM images to vH&E (i.e., qOBM-to-vH&E conversion) using mouse liver specimens, which have a simple and homogenous structure, to establish the feasibility and effectiveness of the approach, as well as to show its utility for imaging a variety of tissue types. Then, we demonstrate qOBM-to-vH&E conversion using tissues from a rat glioma tumor model and human glioma specimens. To validate the results, we (1) trained a classifier on real H&E images of tumor and healthy tissues and then tested on virtual H&E images, and (2) performed a user study with five board-certified neuropathologists. The proposed qOBM-to-vH&E conversion pipeline permits a novel histopathology workflow (Fig. 1) that has the potential to reduce the time and costs associated with obtaining histological H&E images. Further, the level of histological detail with H&E-like contrasts achieved by the proposed simple and label-free method is exemplary and paves the way for novel capabilities in a number of medical applications.


Fig. 1. Deep-learning-enabled qOBM imaging workflow. (A) The standard histology workflow requires several sample preparation steps before viewing under a brightfield microscope and interpretation. This process can take about 8 h or longer. (B) Our proposed workflow uses qOBM to image a fresh tissue specimen and virtual staining to obtain similarly interpretable images in about 1 s.


2. RESULTS

A. Virtual Staining of Label-Free qOBM Images of Fresh Mouse Liver

To establish the feasibility and effectiveness of unpaired image-to-image translation from qOBM to H&E, we first attempted to generate vH&E images of healthy mouse liver specimens, which demonstrate a consistent, well-defined microanatomy primarily comprising well-organized hepatic cells and blood vessels. qOBM images of freshly excised liver tissue specimens (${\rm{N}} = {{8}}$), donated from otherwise discarded tissue, were obtained with a ${{60}} \times$ objective (0.7 NA, ${{270}} \times {{270}}\;{{\unicode{x00B5}{\rm m}}}$ field of view, with an experimentally measured lateral resolution of 0.6 µm and cross-sectional/axial resolution of 3.5 µm). qOBM images, including real-time processing, were acquired at 10 Hz. All animal experimental protocols were approved by the Institutional Animal Care and Use Committee (IACUC) of the Georgia Institute of Technology and Emory University. Tissues were subsequently submitted for histological processing to obtain H&E slides (sections were ${\sim} 5 \; {{\unicode{x00B5}{\rm m}}}$ thick, corresponding to the typical thickness of tissue sections used for brain tumor histopathology). Before CycleGAN training, the qOBM images were contrast-enhanced and grayscale inverted, and then divided into ${{512}} \times {{512}}\;{\rm{pixel}}$ (${\sim}{{70}} \times {{70}}\;{{\unicode{x00B5}{\rm m}}}$) tiles for training. We used a standard ResNet-based generator architecture and a PatchGAN discriminator, training on 2358 qOBM and 1737 H&E tiles for 200 epochs at a batch size of four.
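As an illustrative aside, the preprocessing described above (contrast enhancement and grayscale inversion prior to tiling) can be sketched in a few lines of Python; the percentile-based stretch below is an assumption, since the exact enhancement parameters are not stated here.

```python
import numpy as np

def enhance_and_invert(phase: np.ndarray) -> np.ndarray:
    """Illustrative qOBM preprocessing before CycleGAN training.

    The percentile-based contrast stretch is an assumption; only the fact
    that images were contrast-enhanced and grayscale inverted is stated.
    """
    lo, hi = np.percentile(phase, (1, 99))            # robust intensity range
    img = np.clip((phase - lo) / (hi - lo + 1e-12), 0.0, 1.0)
    return 1.0 - img                                  # invert: nuclei become dark, as in H&E
```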

Figure 2 shows representative results. First, the native qOBM phase images Fig. 2(A) show clear cellular and subcellular detail that closely parallels the structure of the traditional H&E images Fig. 2(C), making qualitative assessment of the translation relatively simple. For example, in qOBM, with contrast generated by the refractive index properties of the tissue, hepatocyte nuclei appear dark and possess subtle but appreciable subnuclear structure/texture (shown in insets), while red blood cells appear bright. Figure 2(B) shows the translated vH&E image, which preserves the general structure of the qOBM image, with a high-fidelity style conversion to H&E. Specifically, nuclear and even subnuclear structures of the hepatocytes are converted appropriately, with the expected purple hue and texture. It is worth emphasizing that the subnuclear structures clearly present in the vH&E (and H&E) images are in fact also present in the qOBM images, albeit with lower contrast (see Fig. 2 insets and white squares). The network also correctly enhances and converts nuclei that can be difficult to identify in qOBM (white arrow), although some are occasionally missed (yellow arrow). The missed nuclei occur in areas near capillaries; this is likely due to the fact that the capillary structures in the qOBM images of fresh tissues are different (better preserved and continuous) than in the target H&E images of processed tissue sections in which the capillaries appear more fragmented (blue arrows highlight vessel structures). Consequently, the network may at times not appropriately deal with such structures. Training with more data could potentially resolve these small errors; nevertheless, the overall structure of the tissue is well preserved and is consistent with the appearance of healthy mouse liver.


Fig. 2. qOBM-to-vH&E conversion of mouse liver tissue. (A) Label-free $60 \times$ qOBM image of mouse liver tissue. (B) Corresponding vH&E image. (C) Standard brightfield H&E image provided for comparison. The white boxes and insets show a representative appropriately converted hepatocyte with appreciable subnuclear detail, the yellow arrows refer to nuclei missed by the conversion, and the blue arrows refer to capillaries. Scale bar is 50 µm.



Fig. 3. qOBM-to-vH&E conversion of brain tissue from the 9L gliosarcoma rat tumor model. (A)–(D) Label-free $60\times$ qOBM images of each of the four rat brain tissue subtypes, including two types of tumor structure [(A) and (B)], healthy basal ganglia (C), and healthy cortex (D). (E)–(H) Corresponding vH&E images produced by a CycleGAN trained on rat brain images. (I)–(L) Standard brightfield H&E images of the same tissue subtypes, provided for comparison. Scale bar is 50 µm.


We also note that structures observed in qOBM that are not present in H&E, such as small bright white droplets—likely composed of lipids—are correctly ignored in the vH&E images and do not produce unwanted artifacts. Red blood cells, also depicted in bright white in the phase image, are correctly translated to their characteristic bright red hue in H&E. These results confirm that CycleGANs can successfully translate qOBM quantitative phase images of thick fresh tissue to H&E-like images without needing pixel-matched paired images.

B. Virtual Staining of Label-Free Microscopy Images of Rat Brain Tumor

Having established the qualitative ability to translate qOBM images into vH&E using a relatively simple and homogeneous sample type, we next turn to the more challenging task of virtually staining complex brain tissue (healthy and tumor) and later providing quantitative metrics of translation fidelity.

qOBM imaging of fresh tissues from a 9L gliosarcoma rat tumor model (${\rm{N}} = {{14}}$) was performed as described in Costa et al. [20] (also see Section 4); this tumor was chosen because of its similarity to high-grade human gliomas. Treated animals had tumors confined to one hemisphere, leaving the other as a control. Two healthy rats were also imaged and analyzed as additional controls (thus a total of 16 animals were analyzed). Images were acquired with a ${{60}} \times$, 0.7 NA objective. During the imaging sessions, the brains were scanned laterally and axially (volumetrically) in an automated manner to acquire data from different regions of the brain. Following qOBM imaging, the brains were formalin-fixed, embedded in paraffin wax, cut into thin (5 µm) sections, and stained with H&E.


Fig. 4. qOBM-to-vH&E conversion for images with a mix of healthy and tumor rat brain tissue, never seen during training. (A), (B) Label-free ${{60}} \times$ qOBM images of mixed rat brain tissue. (C), (D) Corresponding vH&E images. (E), (F) Standard brightfield H&E images of the same tissue subtypes, provided for comparison. Scale bar is 50 µm.


Four general tissue subtypes were observed and characterized with qOBM in the 9L gliosarcoma model. Figures 3(A) and 3(B) show two densely hypercellular tumor regions, one with a malignant sarcomatous population [Fig. 3(A)] and another with a malignant glial component [Fig. 3(B)]. This biphasic tumor tissue pattern is characteristic of gliosarcomas. Additionally, Fig. 3(C) demonstrates healthy basal ganglia (with the presence of white matter bundles), and Fig. 3(D) shows healthy cortex. We trained a single CycleGAN model for qOBM-to-vH&E conversion on an image set representing all four subtypes with a total of 1377 qOBM and 1744 H&E tiles of size ${{512}} \times {{512}}\;{\rm{pixels}}$, trained for 200 epochs at a batch size of four. Qualitatively, the CycleGAN provides qOBM-to-vH&E conversions [Figs. 3(E)–3(H)] that are remarkably similar to standard H&E, provided for comparison in Figs. 3(I)–3(L). For instance, Fig. 3(E) (vH&E) clearly shows the same overall pleomorphic, herringbone-shaped spindle cell structure shown in Fig. 3(I) (real H&E), and Fig. 3(F) shows the hyperchromatic appearance (dark purple color) of the tumor cells. In the basal ganglia, the vH&E image [Fig. 3(G)] clearly shows the eosinophilic (deep pink color) white matter bundles, consistent with the real H&E image [Fig. 3(K)]. Finally, cortex regions, like the normal basal ganglia, exhibit the appropriate cellularity, with blood cells (with large phase values) correctly translated to an intense red hue.


Fig. 5. Quantitative evaluation of qOBM-to-vH&E conversion for rat brain tissue. A classifier trained on standard H&E images to differentiate between tumor and healthy images is assessed using vH&E images. Summary of results: (A) accuracy on the H&E training set with five-fold cross-validation and accuracy on the vH&E test set; (B) confusion matrix of this H&E healthy/tumor classifier applied to the vH&E images.


Figure 4 shows qOBM-to-vH&E conversion of specimens that were not shown to the network during training, consisting of an admixture of healthy brain and tumor. The conversion is successful and shows excellent agreement with the style and appearance of real H&E images. Specifically, the examples in Fig. 4 show clear lines of delineation between the tumor and brain tissue and a mesenchymal transition characteristic of the 9L gliosarcoma rat model. The non-tumor brain tissue also demonstrates reactive characteristics such as high cellularity as expected of tissue adjacent to tumor. These results highlight the ability of the qOBM-to-vH&E conversion network to make correct inferences even when presented with structures outside of those explicitly provided in training. We attribute this capability, in part, to the close resemblance of the native qOBM phase images to histology, where again the style/mode difference between qOBM and H&E is relatively minor (particularly when compared to other label-free 3D scattering-based imaging technologies [23,35]).


Fig. 6. Label-free qOBM imaging of a strip of rat brain tissue and corresponding vH&E. Virtual H&E strip mosaic of rat brain tumor obtained by applying the CycleGAN trained on rat brain to the whole mosaic at once. (A) Virtual-H&E mosaic (${6.3}\;{\rm{mm}} \times {{270}}\;{{\unicode{x00B5}{\rm m}}}$). Scale bar is 300 µm. (B) Zoomed-in region of the label-free $60\times$ qOBM strip (${{600}}\;{{\unicode{x00B5}{\rm m}}} \times {{270}}\;{{\unicode{x00B5}{\rm m}}}$). Scale bar is 50 µm. (C) Zoomed-in region of the corresponding vH&E strip. (D) Zoomed-in region of a brightfield H&E strip region provided for comparison.


To quantitatively evaluate the qOBM-to-vH&E conversion, we trained a convolutional neural network classifier to discriminate between H&E images of healthy and tumor tissue and observed its performance on vH&E images. The tissue class (i.e., ground-truth healthy versus tumor) for each qOBM and H&E image was known a priori based on the anatomical location of the implanted tumor cells. The classifier was trained with five-fold cross-validation on 1395 real H&E tiles to discriminate between healthy and tumor, which yielded an accuracy of ${99.4}\;{{\pm}}\;{0.8}\%$ on a held-out test set of 349 tiles (${{512}} \times {{512}}$ pixels each). The classifier was then employed on 270 vH&E tiles generated by the CycleGAN and displayed an accuracy of ${95.2}\;{{\pm}}\;{2.8}\%$ (Fig. 5). This suggests that the translated images preserve both the style and diagnostic information content of the traditional H&E images.

C. Virtual H&E Staining of Mosaics and Tomographic Volumes

The qOBM system used in these studies was equipped with lateral and axial automated stages that enable scanning tissue in all directions to create large mosaics, as well as tomographic volumetric datasets. Figure 6 demonstrates a virtual H&E strip mosaic (${6.3}\;{\rm{mm}} \times {{270}}\;{{\unicode{x00B5}{\rm m}}}$) of a rat brain, while Fig. 7 and Supplement 1, Visualization 1 and Visualization 2 show a vH&E 3D rendered volume (${{270}}\;{{\unicode{x00B5}{\rm m}}} \times {{270}}\;{{\unicode{x00B5}{\rm m}}} \times {{100}}\;{{\unicode{x00B5}{\rm m}}}$) of a rat brain tumor margin. In Fig. 6, the overall margin delineation between tumor tissue and normal tissue based on the cellularity is clearly apparent, with excellent agreement to H&E. Figure 7 demonstrates a transition from a glial tumor subtype surrounded by basal ganglia tissue structures into the sarcomatous tumor subtype, which is evident in the vH&E images. Here the robustness of the vH&E translation is evident and demonstrates a consistent color and structure in the reconstructed images stitched or stacked together using a standard process (see Section 4), with no special consideration for the mosaic or volumetric nature of the datasets.

D. Virtual Staining of Label-Free Microscopy Images of Human Glioma Specimens

To demonstrate the potential clinical utility of the approach, the CycleGAN deep learning pipeline was employed to virtually stain qOBM images of human astrocytoma specimens. Samples consisted of freshly excised human brain tumor and tumor-edge regions of infiltrating grade 2 and grade 3 astrocytoma specimens discarded from neurosurgery. Five patient samples were analyzed. All tissues were imaged fresh within 6 h of removal, and no modifications were made to the tissues prior to the qOBM imaging process. It is important to note that the margins of these types of infiltrating tumors, especially grade 2 astrocytomas, are extremely difficult to identify intraoperatively, particularly in vivo where existing assessment tools lack sensitivity. All human samples were de-identified and obtained through the Winship Cancer Institute of Emory University using approved protocols.


Fig. 7. qOBM and corresponding vH&E 3D volumetric stack of a rat brain tumor margin. vH&E volumetric stack (${{270}}\;{{\unicode{x00B5}{\rm m}}} \times {{270}}\;{{\unicode{x00B5}{\rm m}}} \times {{100}}\;{{\unicode{x00B5}{\rm m}}}$) obtained by applying the trained rat brain CycleGAN to each image in the stack. (A) qOBM-to-vH&E conversion of the volumetric stack is depicted. (B) vH&E volume, with X-Y (60 µm deep), X-Z, and Y-Z cross sections shown. Tissue surface is on top of the volume. See Supplement 1, Fig. 6 for the qOBM equivalent. (C) qOBM image slices at various depths and the corresponding vH&E image slices. Scale bar is 50 µm. See Supplement 1, Visualization 1 and Visualization 2 for a depth sweep-through of this volumetric stack.


We continued training the CycleGAN developed for rat brain tissue on an additional 837 qOBM and 372 H&E tiles of human glioma tissue. This process is often referred to as transfer learning or fine-tuning [36]. Qualitatively, this fine-tuned model performed significantly better on human specimens than when we attempted to apply a neural network trained exclusively on rat specimens (Supplement 1, Fig. 5 and Supplement 1, Note 5). We also compared our fine-tuned model to training from scratch on the human glioma images alone (Supplement 1, Figs. 5E, F), observing that the fine-tuned model demonstrates better subnuclear detail (Fig. 8).


Fig. 8. qOBM-to-vH&E conversion of human gliomas. Each row contains a qOBM image, corresponding vH&E, and standard brightfield H&E images, provided for comparison. (A)–(I) Three separate human grade 3 glioma specimens (one per row). (J)–(L) Human grade 2 (low-grade) glioma specimen. (M)–(O) Healthy human tissue specimen from the edge of a grade 3 astrocytoma. Scale bar is 50 µm.


Figures 8(A)–8(F) show two human grade 3 astrocytomas, clearly identifiable due to their hypercellular and hyperchromatic tumor cells. In the qOBM phase images, the cells are tightly packed and display rough intranuclear texture; these are appropriately translated in the vH&E image. In Figs. 8(G)–8(I), we see another hypercellular human grade 3 astrocytoma. Both the virtual and real H&E show atypically shaped cells and nuclei that are an important indicator of tumor presence. Note that the qOBM image [Fig. 8(G)] contains small bright white dots throughout the image that we have exclusively observed in brain samples from patients who have received prior radiation treatments (data from a parallel study [37]). These features are only visible in the qOBM images of fresh tissues and vanish after FFPE H&E processing. Interestingly, the digital conversion to vH&E also suppresses the appearance of these structures. This is similar to the results presented in Fig. 2, where the lipid-like structures present in the qOBM images of liver are not displayed in the corresponding vH&E image, as they are absent in the target domain H&E images. Figures 8(J)–8(L) present a human grade 2 (low-grade) astrocytoma. Here, we observe, in both the virtual and real H&E, moderate cellularity and nuclear pleomorphism. This shows the potential of the proposed method to correctly capture H&E-like histological detail indicative of low-grade disease, which again, is extremely difficult to identify intraoperatively with existing intraoperative tools. Finally, Figs. 8(M)–8(O) present a healthy human tissue specimen from the edge of a grade 3 astrocytoma tumor, where the vH&E image resembles the real H&E image, with both showing regularly shaped cell nuclei without hyperchromasia and at the expected density for normal tissue.

Volumetric stacks of human glioma specimens can also be obtained and virtually stained, allowing us to gain additional insight about the specimen. For example, Fig. 9 and Supplement 1, Visualization 3 and Visualization 4 show a volume of a human grade 3 astrocytoma where the first (most shallow) image exhibits a structure consistent with normal brain tissue, with the exception of a single atypical cell [indicated by the arrow in Fig. 9(C) at a depth of ${\rm{Z}} = {{3}}\;{{\unicode{x00B5}{\rm m}}}$]. These characteristics alone would not be sufficient to diagnose the tissue as tumor or to warrant excision if seen in vivo intraoperatively. However, as we image deeper into the sample, the tissue exhibits higher cellularity, with larger, hyperchromatic cells becoming evident, reflecting the presence of tumor. Thus, by moving axially (deeper) into the tissue, we gain additional information, including an increased number of irregular nuclei, which indicates tumor.


Fig. 9. qOBM and corresponding vH&E 3D volumetric stack of a human glioma margin. Virtual H&E volumetric stack obtained by applying the trained human glioma CycleGAN to each image in the stack. (A) qOBM-to-vH&E conversion of the volumetric stack is depicted. (B) vH&E volume, with X-Y (55 µm deep), X-Z, and Y-Z cross sections shown. Tissue surface is on top of the volume. (C) qOBM image slices at various depths and the corresponding vH&E image slices. The white arrow highlights an irregular nucleus. Scale bar is 50 µm. See Supplement 1, Visualization 3 and Visualization 4 for a depth sweep-through of this volumetric stack.


E. Neuropathologist Validation of Virtually Stained qOBM Images

To further validate the potential clinical utility of the virtually stained qOBM images, we performed a user study with American Board of Pathology certified neuropathologists. We collated a set of 30 vH&E images of the rat brain tumor model and 20 vH&E images of human gliomas along with corresponding real H&E images, giving a total of 100 images. These images were reviewed by five neuropathologists, who were asked to respond to three questions: (1) Are tumor cells present in the image? (Y/N/Cannot assess); (2) If this field of view were representative of a larger region, would you recommend continued resection? (Y/N); and (3) How confident are you in this evaluation? (1, unsure, to 5, very confident).

To assess accuracy, we designated the following criteria: for the H&E and vH&E images of the animal model, ground truth was based on a priori knowledge of the location of the tumor (see Section 4). For the human H&E images, ground truth was taken to be the consensus answer from the five neuropathologists. For the human vH&E images, ground truth was based on the evaluation of the same specimens after H&E processing, which in this case also agreed with the consensus reading of the vH&E images.


Table 1. Neuropathologist User Study Comparing Standard H&E and Virtual H&E for Interpretation

The responses of the neuropathologists (results summarized in Table 1) further validate that the vH&E images and the H&E-stained tissue section images are of similar quality. Both the accuracy and the quality ratings for the two modalities were high, with no statistically significant differences, suggesting that the virtual staining method produces high-quality, discernible images that would be clinically useful for interpretation by neuropathologists. Specifically, the overall accuracy for assessing the presence of tumor cells in the real H&E and vH&E images was 94% and 96%, respectively. The inter-group concordance, measured as the average pairwise Cohen's kappa for the recommendation of continued resection, demonstrates a substantial to near-perfect level of agreement among the pathologists for both modalities (0.74 and 0.81 for the real and virtual H&E, respectively). Finally, the diagnostic confidence was also similar for both types of images (4.6 ± 0.38 for real H&E and 4.7 ± 0.33 for vH&E).
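For reference, the average pairwise Cohen's kappa used above can be computed as in the following minimal sketch (scikit-learn is assumed; the rater responses shown are placeholders rather than the study data).

```python
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

def average_pairwise_kappa(ratings):
    """Average Cohen's kappa over all pairs of raters.

    `ratings` is a list of equal-length answer vectors (e.g., Y/N
    continued-resection recommendations), one per neuropathologist.
    """
    kappas = [cohen_kappa_score(ratings[i], ratings[j])
              for i, j in combinations(range(len(ratings)), 2)]
    return sum(kappas) / len(kappas)

# Hypothetical example with three raters and five images:
print(average_pairwise_kappa([
    ["Y", "Y", "N", "N", "Y"],
    ["Y", "Y", "N", "N", "N"],
    ["Y", "N", "N", "N", "Y"],
]))
```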

This survey supports the effectiveness of qOBM-to-vH&E conversion for clinical applications including intraoperative guidance and more.

3. DISCUSSION

Traditional biopsies require tissue excision, histological processing, and examination by a pathologist, a long process that is challenging to accomplish in a surgical environment; the logistics also affect the feasibility of many other clinical tasks such as cancer screening. For intraoperative surgical applications, rapid pathological assessments have thus far been limited because standard FFPE histology requires time-consuming (overnight or longer) tissue processing, leading to the usage of faster but technically challenging approaches such as frozen sections. Various slide-free and label-free microscopy technologies have been developed to address these problems, but those that do so successfully face significant challenges for in vivo applications and require complex, bulky, and expensive systems to achieve H&E-like images. Here, we demonstrate the feasibility of qOBM imaging for rapid assessments, supplementing it with a deep-learning-based framework to obtain H&E-like results from its otherwise clinically unfamiliar grayscale phase-contrast images. To this end, we made use of an unpaired image-to-image translation algorithm known as a CycleGAN to perform a qOBM-to-virtual H&E conversion. We demonstrated this approach with both liver and brain tissue, from three species (mouse, rat, and human). The converted images rendered the subcellular and cytoplasmic detail present in the original qOBM image to resemble familiar H&E contrast. The ability of qOBM to provide real-time, label-free, tomographic images of thick tissue specimens with remarkable agreement with traditional H&E histology is largely because of an underlying similarity between qOBM- and H&E-derived histology. This resemblance facilitates the use of unpaired image-to-image translation tools.

Previous studies have explored the use of CycleGANs for virtual H&E staining with confocal fluorescence [26], MUSE [27], and UV photoacoustic microscopy [28]. Two alternative methods were also compared for MUSE-to-H&E conversion, and the best performance was obtained with CycleGANs [27]. Here, we identified several steps that improved CycleGAN performance for qOBM-to-vH&E conversion. First, grayscale inversion of the qOBM images was necessary for successful conversion: nuclei (especially of tumor cells) have higher refractive indices and thus appear brighter in qOBM images against a dark background, opposite to how such structures appear in standard H&E. Inverting the images yields inputs with dark nuclei that align better with H&E (Supplement 1, Note 1). Other virtual staining studies have also observed that inversion provides a significant improvement in performance [14,27,38]. Second, transfer learning improved the performance of human glioma qOBM-to-vH&E conversion (Supplement 1, Note 2). We also found that our models for transforming individual FOVs generalized well to volumetric stacks and stitched large fields of view, which had been a challenge in other image translation pipelines [23]. We evaluated our conversion efforts with a proxy deep learning classification task, observing that a classifier trained on standard H&E performs similarly on vH&E images. Additionally, we validated our model performance with a study involving five neuropathologists, who found the virtual H&E images functionally equivalent to the standard H&E images for potential surgical guidance.

Moreover, qOBM enables 3D sectioning with vH&E contrast, overcoming limitations of many current slide-free histology methods. Volumetric imaging can be especially important as it can provide a more comprehensive understanding of a tissue specimen and therefore enables more accurate diagnoses [39]. In fact, in this work, we observed that the volumetric imaging capabilities of qOBM can provide critical insights for human specimens that could otherwise be missed with surface-level-only (2D) technologies, even ex vivo. Note that while the deepest vH&E slices we show here are 100 µm from the cut surface, qOBM can achieve a penetration depth of ${\gt}{{120}}\;\unicode{x00B5}{\rm m}$ with 720 nm LED illumination (data not shown). The full depth range of qOBM could potentially be used for vH&E with improvements in signal-to-noise ratio. Moreover, further improvements in the penetration depth of qOBM and hence vH&E can be achieved by using longer wavelengths extending into the near-IR.

Recent work using reflectance confocal microscopy (RCM) and deep learning also showed an ability to provide pseudo-H&E virtual staining [23]. While extremely promising, this approach is not without limitations. In contrast to qOBM, RCM is generally unable to capture the same level of cellular and subcellular detail, a limitation that results from inherent differences in the object-frequency content acquired with each method [18,19]. Consequently, the RCM-to-pseudo-H&E pipeline [23] requires a two-step process with “ground-truth” pseudo-H&E images constructed from tissues stained with acetic acid and an analytical pseudo-H&E algorithm. That approach did not make use of real H&E images as ground truth against which to compare the pseudo-H&E accuracy. While acetic acid can also enhance the nuclear contrast in qOBM images [40], we find that acquiring pixel-matched pairs of qOBM images before and after staining is quite challenging, especially in soft tissue like brain (see Supplement 1, Fig. 8). The proposed pipeline using qOBM and direct conversion to H&E overcomes these limitations and enables improved histological detail with simpler instrumentation (wide field versus point scanning, and LED light sources versus lasers) while achieving the same penetration depth.

In terms of computational speed, our implementation of CycleGAN takes less than 1 ms to virtually stain an acquired FOV using an NVIDIA A100 GPU, and only $\sim 4\;{\rm ms}$ with an NVIDIA GeForce RTX 2080 GPU. For eventual clinical applications, we expect such a model to be run on more modest compute units, where inference time could be longer. However, we believe there are many opportunities for further optimization, either through the use of deep learning compilers that speed up the existing model, or compression/distillation approaches [41] that train a smaller, faster model that matches the performance of the original model.
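As an aside, per-tile inference latency of the kind quoted above can be measured as in the following sketch; the single-layer generator is a stand-in for the trained CycleGAN generator, and warm-up plus explicit synchronization are needed for meaningful GPU timings.

```python
import time
import torch
import torch.nn as nn

# Stand-in generator: any image-to-image model can be timed the same way.
generator = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1)).cuda().eval()
tile = torch.rand(1, 3, 512, 512, device="cuda")

with torch.no_grad():
    for _ in range(10):                      # warm-up iterations
        generator(tile)
    torch.cuda.synchronize()                 # flush queued GPU work before timing
    t0 = time.perf_counter()
    for _ in range(100):
        generator(tile)
    torch.cuda.synchronize()
    print(f"{(time.perf_counter() - t0) / 100 * 1e3:.2f} ms per tile")
```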

While the qOBM-to-vH&E conversion algorithm serves as a useful visualization tool for clinicians to interpret qOBM images, we envision the usage of qOBM-to-vH&E conversion as part of an AI-based diagnostic and decision support pipeline. Various diagnostic AI systems have been developed for H&E-stained images with high accuracy [42,43]. In contrast, due to the limited data available for a novel technology like qOBM, it would be challenging to develop diagnostic AI systems from scratch. Instead, the qOBM images can be converted to vH&E, and diagnostic pipelines already developed based on standard H&E-derived data can then be applied. A proof-of-concept example was demonstrated here by the use of a simple CNN trained on H&E images that was subsequently applied to the vH&E images (Fig. 5). The utilization of qOBM-to-vH&E conversion may allow us to leverage recent advances in computational pathology in new settings, widening the potential of qOBM imaging and slide-free histology.

While our virtual staining results are promising and vH&E images retain diagnostically relevant features, conversion is not pixel-wise perfect. As shown in Supplement 1, Fig. 4, the CycleGAN occasionally has the tendency to hallucinate nuclei or omit them (Fig. 2), primarily around blood vessels. We believe this is due to inherent differences between fresh tissues imaged in qOBM and processed tissues imaged in standard brightfield H&E images, which makes the unpaired image-to-image translation difficult in certain scenarios. We note that the main difficulties appear when artifacts are present in the target domain (H&E of fixed tissues) that are not observed in the original domain (qOBM of fresh tissues). However, the model does well when additional features are present in the original domain but missing in the target domain. This is further discussed in Supplement 1, Note 4. Future work can examine unpaired image-to-image translation techniques that better ensure the content of the original image is preserved appropriately. However, the underlying challenge limiting conversion efforts is the lack of paired pixel-matched ground-truth data. Specifically, the exact same cells and structures cannot be captured by both qOBM and standard brightfield H&E due to the additional tissue processing and sectioning steps involved in the latter. This challenge is what necessitated the use of unpaired image-to-image translation. Therefore, for further improvements and pixel-wise agreement, an alternative approach could be to incorporate a complementary slide-free microscopy technology as part of the training process that can provide images similar to H&E in a multimodal system.

Given the lack of pixel-wise ground truth, we validated the virtual H&E brain images by conducting a neuropathologist study, which indicated no significant difference between how board-certified neuropathologists interpret standard brightfield H&E and vH&E images. This, together with the results of the CNN trained on H&E images and applied to the vH&E images, strongly supports the potential utility of the approach to provide clinically interpretable, valuable, and actionable information, even in the absence of pixel-wise ground truth for the vH&E images. Future work will focus on imaging in vivo and in real-time, to be evaluated using a handheld-probe-based system [44] to collect and virtually stain images.

The proposed technology has the potential to significantly save time, labor, and expense while enabling new capabilities for non-invasive, in-vivo imaging. For analysis of ex-vivo samples, as demonstrated here, an existing digital brightfield microscope (present in most laboratory and clinical spaces) can be modified to deliver 3D quantitative phase imaging and vH&E with qOBM for less than ${\$}500$ USD (see Supplement 1, Table 1). No reagents for staining are required, as this is a label-free technology. Further, as we have previously shown [20,21,44], qOBM can be configured as a handheld probe or endoscope, which could enable novel in-vivo capabilities.

In this study, we specifically focused on the application of brain tumor margin assessment, where real-time, label-free in-vivo histological analysis is gravely needed; however, the proposed workflow enabled by deep-learning-based virtual staining of qOBM images could be transformative and widely used to improve cancer screening, detection, treatment guidance, and more.

4. METHODS

A. Label-Free qOBM Imaging

The qOBM system consists of a conventional inverted microscope with a modified epi-illumination scheme, as shown in Fig. 1(B). The illumination consists of four LED light sources (720 nm) coupled into 1 mm multimode fiber optics with a 0.5 NA. The fibers are evenly distributed around the microscope objective (Nikon Plan Fluor ELWD, ${{60}} \times$, 0.7 NA) at a 45 deg angle from the optical axis. LEDs illuminate samples sequentially, and for each illumination, a raw brightfield image is collected using a PCO.edge 4.2 LT sCMOS camera. By way of multiple scattering, this illumination configuration produces an effective oblique illumination [18,45]. Upon subtraction of two captures with diametrically opposed illumination, we obtain a differential phase contrast (DPC) image, ${{\rm{I}}_{\rm{DPC}}}$, which provides tomographic cross-sectioning capabilities with qualitative differential phase contrast.
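For clarity, the formation of a DPC image from two opposed-illumination captures can be sketched as follows; normalizing the difference by the sum of the two captures is a common DPC convention and is an assumption here, not a statement of the exact processing used.

```python
import numpy as np

def dpc_image(i_a: np.ndarray, i_b: np.ndarray) -> np.ndarray:
    """Differential phase contrast from two raw captures acquired with
    diametrically opposed illumination. Dividing by the sum (a common DPC
    convention, assumed here) suppresses the shared brightfield background."""
    return (i_a - i_b) / (i_a + i_b + 1e-12)
```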

To reconstruct a 3D quantitative phase image with qOBM, two DPC images from orthogonal angles (or shear directions) are processed and deconvolved with the system's optical transfer function through a Tikhonov-regularized deconvolution, according to

$$\phi = {\cal F}^{- 1}\!\left\{\frac{\sum_k {\bar I}_{\rm DPC}^k \cdot \big(C_{\rm DPC}^k\big)^*}{\sum_k \big| C_{\rm DPC}^k \big|^2 + \alpha} \right\}\!.$$

Here, $\phi$ represents the quantitative phase, ${\bar I}_{\rm DPC}^k$ is the Fourier transform of the DPC image along the $k$th shear direction, $\alpha$ is a regularization parameter, and $C_{\rm DPC}^k$ is the corresponding optical transfer function of the system, which can be obtained by characterizing the distribution of the multiple-scattered light passing through the focal plane within the sample [18,19]. This light distribution at the focal plane depends on the optical properties of the tissue. Therefore, the liver images and brain images were processed with independent transfer functions. However, the optical properties of soft tissues remain sufficiently consistent across species to avoid requiring a different transfer function for the human and rat brain images.
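A minimal numerical sketch of this deconvolution is given below, assuming the DPC images and their corresponding transfer functions are already available as 2D arrays sampled on the same grid.

```python
import numpy as np

def qobm_phase(dpc_images, transfer_funcs, alpha=1e-3):
    """Tikhonov-regularized deconvolution of the equation above.

    dpc_images: list of 2D DPC images, one per shear direction.
    transfer_funcs: list of matching optical transfer functions C_DPC^k,
        assumed precomputed on the same spatial-frequency grid.
    alpha: regularization parameter.
    """
    num = np.zeros(transfer_funcs[0].shape, dtype=complex)
    den = np.zeros(transfer_funcs[0].shape, dtype=float)
    for i_dpc, c_dpc in zip(dpc_images, transfer_funcs):
        i_hat = np.fft.fft2(i_dpc)           # DPC image in the frequency domain
        num += i_hat * np.conj(c_dpc)
        den += np.abs(c_dpc) ** 2
    return np.real(np.fft.ifft2(num / (den + alpha)))   # quantitative phase map
```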

The qOBM images capture the quantitative phase of the samples, which is directly correlated with the refractive index and dry mass of the sample. Additionally, the qOBM images show outstanding detail in all directions of illumination, with a lateral resolution of ${\sim}{0.6}\;\unicode{x00B5}{\rm m}$, an axial resolution of ${\sim}{3.5}\;{{\unicode{x00B5}{\rm m}}}$ (measured experimentally with a 200 nm polystyrene bead), and a sensitivity of ${\sim}{{2}}\;{\rm{nm}}$ [18,20]. qOBM image acquisition is at 10 Hz (limited by the frame rate of the camera), and processing of the quantitative phase images is achieved in real time using a regular tabletop computer. The penetration depth is limited to approximately one to two scattering mean free paths, which is ${\sim}{{120}}\;{{\unicode{x00B5}{\rm m}}}$ in the brain for 720 nm light.

B. Sample Preparation and Imaging

In this work, we studied the virtual staining of qOBM images from three types of tissues: mouse liver, a rat brain 9L gliosarcoma tumor model, and human brain tumors. All animal tissue excision and imaging protocols were approved by the Institutional Animal Care and Use Committee of the Georgia Institute of Technology. All human samples were de-identified and obtained through the Winship Cancer Institute of Emory University using approved protocols. Tissues were imaged fresh and untreated, ex vivo, within 6 to 12 h of removal. The imaged mouse livers from eight healthy animals were donated by the Haider lab at Georgia Tech and Emory University from mice sacrificed for various purposes. The livers were excised and imaged unfixed within 3 h of the procedure. Details about the 9L gliosarcoma rat tumor model protocol and imaging may be found in Costa et al. [20]. In short, 14 Fisher rats were intracranially implanted with 9L gliosarcoma cells. The animals were sacrificed 9–12 days after implantation, and the brains were excised, cut coronally to expose the tumor, and imaged unfixed within 12 h of extraction. In this animal model, the tumor is confined to the side of the brain where the tumor cells were implanted, leaving the contralateral side of all treated brains as an additional control. This also allows for a priori knowledge of the location of the tumor. Human tissue specimens from five patients were imaged post-surgery, within 6 h of resection.

The qOBM imaging sessions consisted of multiple lateral and axial scans of different regions of each tissue. These scans were performed in an automated manner, enabled by the X-Y-Z automatic stages built into the microscope. Axial stacks were taken by translating the objective in ${\sim}1.5\;\unicode{x00B5}{\rm m}$ steps. The lateral scanning was performed with an overlap of 20% to facilitate stitching of mosaics and was combined with the axial scans.

After being imaged with qOBM, all tissues were formalin-fixed for 48 h, processed, and embedded in paraffin. Then, the samples were sliced into 5 µm sections and stained with H&E. The whole H&E sample slides were then digitally scanned by an Olympus NanoZoomer whole-slide scanner at either ${{20}} \times$ or ${{40}} \times$ magnification. Finally, the H&E slide scans were inspected to select similar regions to those acquired with qOBM for the CycleGAN training process, described below.

C. Virtual H&E Staining with CycleGAN

We define two image domains, one for qOBM images ($X$) and one for H&E images ($Y$), and we seek a transformation $G{:}\,X \to Y$. In the CycleGAN framework used here [25], there are two tasks: one task is to learn a generator ${G_X}{:}\, X \to Y$ that maps $x \in X$ to $y \in Y$; the auxiliary task is to learn a generator ${G_Y}{:}\,Y \to X$. Additionally, we have adversarial discriminators ${D_X}$ and ${D_Y}$. ${D_Y}$ discriminates between the fake outputs of ${G_X}$ and real images from the domain $Y$, while ${D_X}$ discriminates between the fake outputs of ${G_Y}$ and real images from the domain $X$. The CycleGAN framework then exploits the cycle-consistency property that ${G_Y}({{G_X}(x)}) \approx x$ and ${G_X}({{G_Y}(y)}) \approx y$. This is expressed as the following loss:

$$\begin{split}{{\cal L}_{\rm{cycle}}}({{G_X},{G_Y}} ) &= {\mathbb{E}_{x\sim{p_{\rm{data}}}(x )}}\big[\|{{G_Y}({{G_X}(x )} ) - {x}}\|_1 \big] \\&\quad+ {\mathbb{E}_{y\sim{p_{\rm{data}}}(y )}}[\|{{G_X}({{G_Y}(y )} ) - {y}}\|_1 ],\end{split}$$
where $\| \cdot \|_1$ is the L1 norm. This is trained with traditional least-squares adversarial losses:
$${{\cal L}_G}({D,G,X,Y} ) = {\mathbb{E}_{x\sim{p_{\rm{data}}}(x )}}[{{{({D({G(x )} ) - 1} )}^2}} ],$$
$$\begin{split}{{\cal L}_D}({D,G,X,Y} ) &= \frac{1}{2}{\mathbb{E}_{y\sim{p_{\rm{data}}}(y )}}[{{{({D(y ) - 1} )}^2}} ] \\&\quad+ \frac{1}{2}{\mathbb{E}_{x\sim{p_{\rm{data}}}(x )}}[{{{({D({G(x )} )} )}^2}} ].\end{split}$$

Finally, for regularization, an identity constraint is imposed:

$$\begin{split}{{\cal L}_{\textit{idt}}}({{G_X},{G_Y}} ) &= {\mathbb{E}_{x\sim{p_{\rm{data}}}(x )}}[\|{{G_Y}(x ) - {x}}\|_1 ] \\&\quad+ {\mathbb{E}_{y\sim{p_{\rm{data}}}(y )}}[\|{{G_X}(y ) - {y}}\|_1 ].\end{split}$$

Thus, the full objective is

$$\begin{split}\mathop {\min}\limits_{\rm{G}} {{\cal L}_{\rm{full}}} &= {\lambda _{\rm{cyc}}}{{\cal L}_{\rm{cycle}}}({{G_X},{G_Y}} ) + {{\cal L}_G}({{D_Y},{G_X},X,Y} ) \\&\quad+ {{\cal L}_G}({{D_X},{G_Y},Y,X} ) + {\lambda _{\textit{idt}}}{{\cal L}_{\textit{idt}}}({{G_X},{G_Y}} ),\end{split}$$
$$\mathop {\min}\limits_{\rm{D}} {{\cal L}_{\rm{full}}} = {{\cal L}_D}({{D_Y},{G_X},X,Y} ) + {{\cal L}_D}({{D_X},{G_Y},Y,X} ),$$
where ${\lambda _{\rm{cyc}}} = 10$ controls the impact of the cycle-consistency loss, and ${\lambda _{\textit{idt}}} = 0.5$ controls the impact of the identity loss.
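For concreteness, the generator-side objective above can be assembled as in the following PyTorch sketch, where G_X, G_Y, D_X, and D_Y are stand-ins for the trained modules and x, y are batches of qOBM and H&E tiles, respectively.

```python
import torch
import torch.nn.functional as F

def generator_objective(G_X, G_Y, D_X, D_Y, x, y, lam_cyc=10.0, lam_idt=0.5):
    """Generator-side CycleGAN loss for one batch (least-squares GAN form)."""
    fake_y, fake_x = G_X(x), G_Y(y)
    pred_y, pred_x = D_Y(fake_y), D_X(fake_x)

    # Adversarial terms: each generator tries to make its discriminator output 1.
    loss_gan = F.mse_loss(pred_y, torch.ones_like(pred_y)) \
             + F.mse_loss(pred_x, torch.ones_like(pred_x))

    # Cycle-consistency term (L1 reconstruction through the opposite generator).
    loss_cyc = F.l1_loss(G_Y(fake_y), x) + F.l1_loss(G_X(fake_x), y)

    # Identity term (each generator should leave target-domain images unchanged).
    loss_idt = F.l1_loss(G_Y(x), x) + F.l1_loss(G_X(y), y)

    return loss_gan + lam_cyc * loss_cyc + lam_idt * loss_idt
```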

The generator architecture (${G_X}$, ${G_Y}$) was a ResNet-based fully convolutional network described by Zhu et al. [25]. Unless otherwise specified, the generator had nine residual blocks. A ${{70}} \times {{70}}$ PatchGAN [46] was used for the discriminators (${D_X}$, ${D_Y}$). Unless otherwise specified, the discriminator had three layers. The same loss functions and optimizer as described in the original paper [25] were used. The learning rate (LR) was fixed at ${{2}}{{\rm{e}}^{- 4}}$ for the first 100 epochs and linearly decayed to zero over the next 100 epochs. A batch size of four was used.
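The constant-then-linearly-decaying learning-rate schedule can be expressed, for example, with a PyTorch LambdaLR, as in the sketch below (the optimizer and parameters are placeholders; Adam with betas of 0.5 and 0.999 follows the original CycleGAN setup).

```python
import torch

params = [torch.nn.Parameter(torch.zeros(1))]          # placeholder parameters
optimizer = torch.optim.Adam(params, lr=2e-4, betas=(0.5, 0.999))

def lr_lambda(epoch: int) -> float:
    # Constant LR for the first 100 epochs, then linear decay to zero over
    # the next 100 epochs (epochs are 0-indexed here).
    return 1.0 if epoch < 100 else max(0.0, 1.0 - (epoch - 99) / 100)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for epoch in range(200):
    # ... one training epoch over the qOBM and H&E tiles would run here ...
    optimizer.step()        # placeholder step so the scheduler state stays valid
    scheduler.step()
```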

qOBM images were center-cropped to ${{1536}} \times {{1536}}\;{\rm{pixel}}$ images and divided into nine 512 × 512 tiles. Unless otherwise noted, all qOBM images were intensity-inverted. The H&E images were upscaled with bilinear interpolation by a factor of either ${1.5} \times$ or ${{2}} \times$ (depending on the dataset) such that the images had features of comparable pixel dimensions to those in the qOBM images.
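A sketch of this tiling and rescaling step is shown below; file handling and anything beyond the stated crop, tile, and upscaling factors is illustrative.

```python
import numpy as np
from PIL import Image

def center_crop_tiles(img: np.ndarray, crop: int = 1536, tile: int = 512):
    """Center-crop a qOBM image to crop x crop pixels and split it into
    (crop // tile)**2 non-overlapping tiles (nine 512 x 512 tiles for 1536)."""
    h, w = img.shape[:2]
    top, left = (h - crop) // 2, (w - crop) // 2
    cropped = img[top:top + crop, left:left + crop]
    return [cropped[r:r + tile, c:c + tile]
            for r in range(0, crop, tile)
            for c in range(0, crop, tile)]

def upscale_he(tile_img: Image.Image, factor: float = 1.5) -> Image.Image:
    """Bilinearly upscale an H&E tile (by 1.5x or 2x, depending on the dataset)
    so its features match the pixel scale of the qOBM tiles."""
    w, h = tile_img.size
    return tile_img.resize((int(w * factor), int(h * factor)), Image.BILINEAR)
```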

To enable a scalable inference pipeline, we utilized a tiled inference procedure as described in Abraham et al. [27]. Briefly, the model was applied to overlapping ${{512}} \times {{512}}$ tiles of the original FOV, and the tiles were stitched by defining each pixel's intensity as the weighted average of the intensity values from the vH&E patches that overlapped at that pixel location. The weighting was based on a Gaussian kernel.
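The stitching step can be sketched as follows, where model stands in for the trained generator applied to a single tile and the stride (amount of tile overlap) is an assumption.

```python
import numpy as np

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    return np.outer(g, g)

def tiled_inference(model, fov: np.ndarray, tile: int = 512, stride: int = 256):
    """Apply `model` to overlapping tiles of `fov` (H x W x C) and blend the
    outputs with Gaussian weights, so each pixel is the weighted average of
    the vH&E patches overlapping it. The stride value is an assumption."""
    h, w, c = fov.shape
    out = np.zeros((h, w, c), dtype=np.float64)
    weight = np.zeros((h, w, 1), dtype=np.float64)
    k = gaussian_kernel(tile, sigma=tile / 4)[..., None]
    for r in range(0, h - tile + 1, stride):
        for s in range(0, w - tile + 1, stride):
            patch = model(fov[r:r + tile, s:s + tile])   # vH&E patch, same shape
            out[r:r + tile, s:s + tile] += patch * k
            weight[r:r + tile, s:s + tile] += k
    return out / np.maximum(weight, 1e-12)
```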

Since four raw captures are taken with qOBM, from which two DPC images are reconstructed, we also performed single-capture-to-vH&E and DPC-to-vH&E conversions and compared them to qOBM-to-vH&E (the images used for training come from the exact same fields of view). In these cases, the images were not inverted, since the nuclei already appear dark, which should therefore represent the best-case scenario for conversion efforts. Nevertheless, neither the raw captures nor the DPC images alone supported high-quality CycleGAN conversions (Supplement 1, Fig. 3 and Supplement 1, Note 3).

For the conversion of mouse liver qOBM images, the imaged mouse livers from seven of the animals were used for training, and the images from one of the animals were used as a held-out test set. A total of 2358 qOBM and 1737 H&E tiles of size ${{512}} \times {{512}}$ pixels were used for training, and 51 qOBM images of size ${{1848}} \times {{1848}}$ pixels were used for testing.

For the conversion of rat brain qOBM images, a single larger model was trained on all four observed tissue subtypes simultaneously. As commonly noted with CycleGANs, model size played a role in conversion performance (see Supplement 1, Note 2). Our larger model had 12 residual blocks in the generator and six layers in the discriminator. Images from a total of 12 rats implanted with tumors were used for training. Some held-out images from the same rat specimens were used for testing (but none of these test images was included in the training set), along with held-out images from two additional rats implanted with tumors. A total of 1377 qOBM and 1744 H&E tiles of size ${{512}} \times {{512}}$ pixels were used for training, and 30 qOBM images of size ${{1848}} \times {{1848}}$ pixels were used for testing.

Fine-tuning of the rat CycleGAN on the human specimens simply consisted of initializing the model with the rat CycleGAN model weights and training at an LR of ${{2}}{{\rm{e}}^{- 4}}$. Images from a total of five specimens were used for training. Held-out test images came both from specimens used for training and from those that were not, but none of these test images was seen during training. A total of 837 qOBM and 372 H&E ${{512}} \times {{512}}$ pixel tiles of human glioma tissue were used for fine-tuning; 38 qOBM images of size ${{1848}} \times {{1848}}$ pixels were used for testing.
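A minimal sketch of the weight-initialization step used for fine-tuning is given below; the tiny generator and checkpoint file are stand-ins for the actual ResNet-based CycleGAN modules and the rat-brain weights.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in for the ResNet-based CycleGAN generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, 3, padding=1)

    def forward(self, x):
        return self.net(x)

g_x = TinyGenerator()                                 # X (qOBM) -> Y (H&E)
torch.save(g_x.state_dict(), "rat_brain_G_X.pth")     # stand-in rat-brain checkpoint

# Fine-tuning: initialize from the rat-brain weights, then continue training
# (here at an LR of 2e-4) on the human glioma qOBM and H&E tiles.
g_x.load_state_dict(torch.load("rat_brain_G_X.pth"))
optimizer = torch.optim.Adam(g_x.parameters(), lr=2e-4, betas=(0.5, 0.999))
# ... resume the CycleGAN training loop from here ...
```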

For conversion of the qOBM strip, the full stitched strip was taken and passed into our tiled inference algorithm, rather than the individual FOVs from the strip.

D. Quantitative Evaluation of Virtual H&E Staining Results

We first trained a convolutional neural network on standard brightfield H&E images to classify healthy (cortex or basal ganglia) regions and tumor regions. We performed five-fold cross-validation. The model was trained on a total of 1744 standard H&E images, so in each fold this led to a train-validation split of 1395–349 image tiles. An ImageNet-pretrained ResNet18 [47] was fine-tuned with a batch size of 128 for four epochs. In the first epoch, only the linear head layer was trainable, while the remaining model weights were frozen (not updatable); this epoch used an LR of ${{1}}{{\rm{e}}^{- 2}}$ with a short LR warmup followed by a cosine decay. The remaining three epochs were trained with all layers updatable, with a base LR of ${{5}}{{\rm{e}}^{- 3}}$, but using discriminative LRs [48] in which earlier layers of the network have even lower LRs. These three remaining epochs were trained with a one-cycle LR schedule [49]. The mean and standard deviation of the accuracies for the classifiers trained on each of the five folds are reported.
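The two-stage schedule can be sketched with fastai as follows; the folder layout, the single random validation split (the study used five-fold cross-validation), and the lower bound of the discriminative LR range are assumptions.

```python
from fastai.vision.all import ImageDataLoaders, cnn_learner, resnet18, accuracy

# Assumed layout: H&E tiles in class folders "healthy/" and "tumor/" under he_tiles/.
dls = ImageDataLoaders.from_folder("he_tiles/", valid_pct=0.2, bs=128)

learn = cnn_learner(dls, resnet18, metrics=accuracy)   # ImageNet-pretrained ResNet18

# Stage 1: one epoch with only the linear head trainable (body frozen by default);
# fit_one_cycle gives a short warmup followed by a cosine-style decay.
learn.fit_one_cycle(1, 1e-2)

# Stage 2: unfreeze and train three epochs with discriminative LRs (lower LRs for
# earlier layers; the 5e-4 lower bound is an assumption) and a one-cycle schedule.
learn.unfreeze()
learn.fit_one_cycle(3, slice(5e-4, 5e-3))
```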

Once accurate H&E healthy versus tumor classifiers were trained, they were applied to vH&E images. The accuracy was calculated by comparing the labels predicted by the classifier to the ground-truth labels of the original qOBM images, and the mean and standard deviations of the accuracies were reported.

E. Computational Hardware and Software

All deep learning models were trained on NVIDIA A100 80 GB GPUs. The PyTorch (version 1.9.1) [50], fastai (version 2.6.3) [51], and UPIT (version 0.2.3) [52] libraries were used for training and inference of all models.

F. Clinical Validation of vH&E Images of Brain Tissue

To evaluate the quality and usefulness of the virtually stained qOBM images compared to the gold-standard H&E-stained images, we conducted a panel study with five board-certified neuropathologists. In this study, the neuropathologists were asked to evaluate a total of 100 ${{180}}\;{{\unicode{x00B5}{\rm m}}} \times {{180}}\;{{\unicode{x00B5}{\rm m}}}$ images. The image set contained 30 real H&E rat brain images (10 tumor, 10 healthy, and 10 mixed fields of tumor and healthy), 30 virtually stained qOBM rat brain images (10 tumor, 10 healthy, and 10 mixed fields of tumor and healthy), 20 real H&E human brain tumor images, and 20 virtually stained qOBM human brain tumor images. The order of the images presented in the survey was randomized, with healthy and tumor regions from both humans and rats combined. For each image, the neuropathologists were asked whether tumor cells were present in the image (Y/N/cannot assess), whether, based on the image, they would recommend continued resection of the area (Y/N), and how confident they were in that recommendation (1, unsure, to 5, very confident).

Funding

Burroughs Wellcome Fund (CASI 1014540); National Science Foundation (CAREER 1752011, GRFP DGE-2039655); National Institutes of Health (R01EB028635, R21CA223853, R33CA202881, R35GM147437); Marcus Center for Therapeutic Cell Characterization and Manufacturing (MC3M).

Acknowledgment

The authors would like to thank neuropathologists Dr. Stewart Neill, Dr. Bret Mobley, Dr. Joshua Klonoski, Dr. Jian Yi Li, and Dr. Viharkumar Patel for their participation in the vH&E review panel study. This work was funded by NIH, Stability AI PhD Fellowship, Burroughs Wellcome Fund; Marcus Center for Therapeutic Cell Characterization and Manufacturing (MC3M); National Cancer Institute; National Institute of General Medical Sciences; National Institute of Neurological Disorders and Stroke; National Science Foundation; Georgia Institute of Technology. This work was enabled by computing resources provided by the Wicklow AI in Medicine Research Initiative, the 2021 Spell Research Grant, and the Stability AI Academic Research Hardware Grant.

Disclosures

R.L. is a co-founder of Histolix, Inc. T.M.A. is a paid employee of Stability AI. The other authors declare no competing financial interests.

Data availability

Data required to replicate the experimental results are available at Ref. [53].

Supplemental document

See Supplement 1 for supporting content.

REFERENCES

1. J. D. Bancroft and M. Gamble, Theory and Practice of Histological Techniques, 6th ed. (Churchill Livingstone, 2008).

2. V. Rastogi, N. Puri, S. Arora, G. Kaur, L. Yadav, and R. Sharma, “Artefacts: a diagnostic dilemma—a review,” J. Clin. Diagn. Res. 7, 2408–2413 (2013). [CrossRef]  

3. F. Fereidouni, Z. T. Harmany, M. Tian, A. Todd, J. A. Kintner, J. D. McPherson, A. D. Borowsky, J. Bishop, M. Lechpammer, S. G. Demos, and R. Levenson, “Microscopy with ultraviolet surface excitation for rapid slide-free histology,” Nat. Biomed. Eng. 1, 957–966 (2017). [CrossRef]  

4. A. K. Glaser, N. P. Reder, Y. Chen, E. F. McCarty, C. Yin, L. Wei, Y. Wang, L. D. True, and J. T. C. Liu, “Light-sheet microscopy for slide-free non-destructive pathology of large clinical specimens,” Nat. Biomed. Eng. 1, 0084 (2017). [CrossRef]  

5. Y. K. Tao, D. Shen, Y. Sheikine, O. O. Ahsen, H. H. Wang, D. B. Schmolze, N. B. Johnson, J. S. Brooker, A. E. Cable, J. L. Connolly, and J. G. Fujimoto, “Assessment of breast pathologies using nonlinear microscopy,” Proc. Natl. Acad. Sci. USA 111, 15304–15309 (2014). [CrossRef]  

6. F. Jamme, S. Kascakova, S. Villette, F. Allouche, S. Pallu, V. Rouam, and M. Réfrégiers, “Deep UV autofluorescence microscopy for cell biology and tissue histology,” Biol. Cell 105, 277–288 (2013). [CrossRef]  

7. K. B. Patel, W. Liang, M. J. Casper, V. Voleti, W. Li, A. J. Yagielski, H. T. Zhao, C. Perez Campos, G. S. Lee, J. M. Liu, E. Philipone, A. J. Yoon, K. P. Olive, S. M. Coley, and E. M. C. Hillman, “High-speed light-sheet microscopy for the in-situ acquisition of volumetric histological images of living tissue,” Nat. Biomed. Eng 6, 569–583 (2022). [CrossRef]  

8. S. Ye, J. Zou, C. Huang, F. Xiang, Z. Wen, N. Wang, J. Yu, Y. He, P. Liu, X. Mei, H. Li, L. Niu, P. Gong, and W. Zheng, “Rapid and label-free histological imaging of unprocessed surgical tissues via dark-field reflectance ultraviolet microscopy,” iScience 26, 105849 (2023). [CrossRef]  

9. H. Tu, Y. Liu, D. Turchinovich, M. Marjanovic, J. Lyngsø, J. Lægsgaard, E. J. Chaney, Y. Zhao, S. You, W. L. Wilson, B. Xu, M. Dantus, and S. A. Boppart, “Stain-free histopathology by programmable supercontinuum pulses,” Nat. Photonics 10, 534–540 (2016). [CrossRef]  

10. S. Witte, A. Negrean, J. C. Lodder, G. T. Silva, C. P. J. de Kock, H. D. Mansvelder, and M. L. Groot, “Label-free live brain imaging with third-harmonic generation microscopy,” in CLEO/Europe and EQEC 2011 Conference Digest (2011) (Optica Publishing Group, 2011), paper CLEB1_1.

11. M. Ji, D. A. Orringer, C. W. Freudiger, S. Ramkissoon, X. Liu, D. Lau, A. J. Golby, I. Norton, M. Hayashi, N. Y. R. Agar, G. S. Young, C. Spino, S. Santagata, S. Camelo-Piragua, K. L. Ligon, O. Sagher, and X. S. Xie, “Rapid, label-free detection of brain tumors with stimulated Raman scattering microscopy,” Sci. Transl. Med. 5, 201ra119 (2013). [CrossRef]  

12. D. A. Orringer, B. Pandian, Y. S. Niknafs, et al., “Rapid intraoperative histology of unprocessed surgical specimens via fibre-laser-based stimulated Raman scattering microscopy,” Nat. Biomed. Eng. 1, 0027 (2017). [CrossRef]  

13. M. G. Giacomelli, L. Husvogt, H. Vardeh, B. E. Faulkner-Jones, J. Hornegger, J. L. Connolly, and J. G. Fujimoto, “Virtual hematoxylin and eosin transillumination microscopy using epi-fluorescence imaging,” PLOS ONE 11, e0159337 (2016). [CrossRef]  

14. B. Bai, X. Yang, Y. Li, Y. Zhang, N. Pillar, and A. Ozcan, “Deep learning-enabled virtual histological staining of biological samples,” Light Sci. Appl. 12, 57 (2023). [CrossRef]  

15. Y. Rivenson, T. Liu, Z. Wei, Y. Zhang, K. de Haan, and A. Ozcan, “PhaseStain: the digital staining of label-free quantitative phase microscopy images using deep learning,” Light Sci. Appl. 8, 23 (2019). [CrossRef]  

16. G. N. McKay, N. Mohan, and N. J. Durr, “Imaging human blood cells in vivo with oblique back-illumination capillaroscopy,” Biomed. Opt. Express 11, 2373–2382 (2020). [CrossRef]  

17. M. Shao, R. Liu, C. Li, Z. Chai, Z. Zhong, F. Lu, X. Wei, J. Zhou, and M.-C. Zhong, “In vivo optical trapping of erythrocytes in mouse liver imaged with oblique back-illumination microscopy,” Appl. Phys. Lett. 123, 083701 (2023). [CrossRef]  

18. P. Ledwig and F. E. Robles, “Epi-mode tomographic quantitative phase imaging in thick scattering samples,” Biomed. Opt. Express 10, 3605–3621 (2019). [CrossRef]  

19. P. Ledwig and F. E. Robles, “Quantitative 3D refractive index tomography of opaque samples in epi-mode,” Optica 8, 6–14 (2021). [CrossRef]  

20. P. C. Costa, Z. Guang, P. Ledwig, Z. Zhang, S. Neill, J. J. Olson, and F. E. Robles, “Towards in-vivo label-free detection of brain tumor margins with epi-illumination tomographic quantitative phase imaging,” Biomed. Opt. Express 12, 1621–1634 (2021). [CrossRef]  

21. Z. Guang, P. Ledwig, P. C. Costa, C. Filan, and F. E. Robles, “Optimization of a flexible fiber-optic probe for epi-mode quantitative phase imaging,” Opt. Express 30, 17713–17729 (2022). [CrossRef]  

22. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems 27, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, eds. (Curran Associates, 2014), pp. 2672–2680.

23. J. Li, J. Garfinkel, X. Zhang, D. Wu, Y. Zhang, K. de Haan, H. Wang, T. Liu, B. Bai, Y. Rivenson, G. Rubinstein, P. O. Scumpia, and A. Ozcan, “Biopsy-free in vivo virtual histology of skin using deep learning,” Light Sci. Appl. 10, 233 (2021). [CrossRef]  

24. M. Boktor, B. R. Ecclestone, V. Pekar, D. Dinakaran, J. R. Mackey, P. Fieguth, and P. Haji Reza, “Virtual histological staining of label-free total absorption photoacoustic remote sensing (TA-PARS),” Sci. Rep. 12, 10296 (2022). [CrossRef]  

25. J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in IEEE International Conference on Computer Vision (ICCV) (IEEE, 2017), pp. 2242–2251.

26. M. Combalia, J. Pérez-Anker, A. García-Herrera, L. Alos, V. Vilaplana, F. Marqués, S. Puig, and J. Malvehy, “Digitally stained confocal microscopy through deep learning,” in International Conference on Medical Imaging with Deep Learning (PMLR) (2019), pp. 121–129.

27. T. Abraham, A. Shaw, D. O’Connor, A. Todd, and R. Levenson, “Slide-free MUSE Microscopy to H&E histology modality conversion via unpaired image-to-image translation GAN models,” arXiv, arXiv:2008.08579 [cs, eess] (2020). [CrossRef]  

28. R. Cao, S. D. Nelson, S. Davis, Y. Liang, Y. Luo, Y. Zhang, B. Crawford, and L. V. Wang, “Label-free intraoperative histology of bone tissue via deep-learning-assisted ultraviolet photoacoustic microscopy,” Nat. Biomed. Eng 7, 124–134 (2023). [CrossRef]  

29. S. Zhao, J. Wu, C. Wang, H. Liu, X. Dong, C. Shi, C. Shi, Y. Liu, L. Teng, D. Han, X. Chen, G. Yang, L. Wang, C. Shen, and H. Li, “Intraoperative fluorescence-guided resection of high-grade malignant gliomas using 5-aminolevulinic acid-induced porphyrins: a systematic review and meta-analysis of prospective studies,” PLoS One 8, e63682 (2013). [CrossRef]  

30. C. G. Hadjipanayis, G. Widhalm, and W. Stummer, “What is the surgical benefit of utilizing 5-ALA for fluorescence-guided surgery of malignant gliomas?” Neurosurgery 77, 663–673 (2015). [CrossRef]  

31. J.-C. Tonn and W. Stummer, “Fluorescence-guided resection of malignant gliomas using 5-aminolevulinic acid: practical use, risks, and pitfalls,” Clin. Neurosurg. 55, 20–26 (2008).

32. P. A. Valdés, A. Kim, M. Brantsch, C. Niu, Z. B. Moses, T. D. Tosteson, B. C. Wilson, K. D. Paulsen, D. W. Roberts, and B. T. Harris, “δ-aminolevulinic acid-induced protoporphyrin IX concentration correlates with histopathologic markers of malignancy in human gliomas: the need for quantitative fluorescence-guided resection to identify regions of increasing malignancy,” Neuro Oncol. 13, 846–856 (2011). [CrossRef]  

33. B. A. Kairdolf, A. Bouras, M. Kaluzova, A. K. Sharma, M. D. Wang, C. G. Hadjipanayis, and S. Nie, “Intraoperative spectroscopy with ultrahigh sensitivity for image-guided surgery of malignant brain tumors,” Anal. Chem. 88, 858–867 (2016). [CrossRef]  

34. W. Stummer, H. J. Reulen, A. Novotny, H. Stepp, and J. C. Tonn, “Fluorescence-guided resections of malignant gliomas-an overview,” Acta Neurochir. Suppl. 88, 9–12 (2003). [CrossRef]  

35. Y. Winetraub, E. Yuan, I. Terem, C. Yu, W. Chan, H. Do, S. Shevidi, M. Mao, J. Yu, M. Hong, E. Blankenberg, K. E. Rieger, S. Chu, S. Aasi, K. Y. Sarin, and A. de la Zerda, “OCT2Hist: non-invasive virtual biopsy using optical coherence tomography,” MedRxiv (2021). [CrossRef]  

36. M. Oquab, L. Bottou, I. Laptev, and J. Sivic, “Learning and transferring mid-level image representations using convolutional neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition (2014), pp. 1717–1724.

37. P. C. Costa, “Quantitative oblique back-illumination microscopy in the study of biomedical samples,” Ph.D. Dissertation (Georgia Institute of Technology, 2023).

38. Z. Chen, W. Yu, I. H. M. Wong, and T. T. W. Wong, “Deep-learning-assisted microscopy with ultraviolet surface excitation for rapid slide-free histological imaging,” Biomed. Opt. Express 12, 5920–5938 (2021). [CrossRef]  

39. J. T. C. Liu, A. K. Glaser, K. Bera, L. D. True, N. P. Reder, K. W. Eliceiri, and A. Madabhushi, “Harnessing non-destructive 3D pathology,” Nat. Biomed. Eng. 5, 203–218 (2021). [CrossRef]  

40. Z. Guang, A. Jacobs, P. C. Costa, C. Filan, and F. E. Robles, “Quantitative oblique back-illumination microscopy with enhanced nuclear phase contrast using acetic acid,” Proc. SPIE 12389, 43–45 (2023). [CrossRef]  

41. M. Li, J. Lin, Y. Ding, Z. Liu, J.-Y. Zhu, and S. Han, “GAN Compression: Efficient Architectures for Interactive Conditional GANs,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (2020), pp. 5284–5294.

42. G. Campanella, M. G. Hanna, L. Geneslaw, A. Miraflor, V. Werneck Krauss Silva, K. J. Busam, E. Brogi, V. E. Reuter, D. S. Klimstra, and T. J. Fuchs, “Clinical-grade computational pathology using weakly supervised deep learning on whole slide images,” Nat. Med. 25, 1301–1309 (2019). [CrossRef]  

43. M. Y. Lu, D. F. K. Williamson, T. Y. Chen, R. J. Chen, M. Barbieri, and F. Mahmood, “Data-efficient and weakly supervised computational pathology on whole-slide images,” Nat. Biomed. Eng. 5, 555–570 (2021). [CrossRef]  

44. Z. Guang, P. C. Costa, A. Jacobs, C. Filan, and F. E. Robles, “Handheld quantitative phase imaging probe for in-vivo brain tumor margin assessment,” Proc. SPIE 12391, 69–71 (2023). [CrossRef]  

45. J. Mertz, “Optical sectioning microscopy with planar or structured illumination,” Nat. Methods 8, 811–819 (2011). [CrossRef]  

46. P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 5967–5976.

47. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 770–778.

48. J. Howard and S. Gugger, Deep Learning for Coders with Fastai and PyTorch (O’Reilly Media, 2020).

49. L. N. Smith, “A disciplined approach to neural network hyper-parameters: Part 1—learning rate, batch size, momentum, and weight decay,” arXiv, arXiv:1803.09820 (2018). [CrossRef]  

50. A. Paszke, S. Gross, F. Massa, et al., “PyTorch: an imperative style, high-performance deep learning library,” in Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. Fox, R. Garnett, eds. (Curran Associates, 2019), pp. 8026–8037.

51. J. Howard and S. Gugger, “Fastai: a layered API for deep learning,” Information 11, 108 (2020). [CrossRef]  

52. T. M. Abraham, “UPIT—a fastai/PyTorch package for unpaired image-to-image translation,” GitHub [accessed 16 November 2023] (2021), https://github.com/tmabraham/UPIT.

53. T. M. Abraham, “qOBMtoHE,” GitHub (2021), https://github.com/tmabraham/qOBMtoHE.

Supplementary Material (5)

Name       Description
Supplement 1       Supplemental Materials
Visualization 1       A depth sweep-through of qOBM images in a volumetric image stack (270 µm x 270 µm x 100 µm) of rat gliosarcoma tissue.
Visualization 2       A depth sweep-through of vH&E images corresponding to the qOBM images in Visualization 1.
Visualization 3       A depth sweep-through of qOBM images in a volumetric image stack (270 µm x 270 µm x 85 µm) of a freshly excised grade 3 human astrocytoma tissue specimen.
Visualization 4       A depth sweep-through of vH&E images corresponding to the qOBM images in Visualization 3.


Figures (9)

Fig. 1. Deep-learning-enabled qOBM imaging workflow. (A) The standard histology workflow requires several sample preparation steps before viewing under a brightfield microscope and interpretation. This process can take about 8 h or longer. (B) Our proposed workflow utilizes qOBM imaging to image a fresh specimen of tissue and virtual staining to obtain similarly interpretable images in about 1 s.
Fig. 2. qOBM-to-vH&E conversion of mouse liver tissue. (A) Label-free $60 \times$ qOBM image of mouse liver tissue. (B) Corresponding vH&E image. (C) Standard brightfield H&E image provided for comparison. The white boxes and insets show a representative appropriately converted hepatocyte with appreciable subnuclear detail, the yellow arrows refer to nuclei missed by the conversion, and the blue arrows refer to capillaries. Scale bar is 50 µm.
Fig. 3. qOBM-to-vH&E conversion of brain tissue from the 9L gliosarcoma rat tumor model. (A)–(D) Label-free $60\times$ qOBM images of each of the four rat brain tissue subtypes, including two types of tumor structure (A) and (B), healthy basal ganglia (C), and healthy cortex (D). (E)–(H) Corresponding vH&E images produced by a CycleGAN trained on rat brain images. (I)–(L) Standard brightfield H&E images of the same tissue subtypes, provided for comparison. Scale bar is 50 µm.
Fig. 4. qOBM-to-vH&E conversion for images with a mix of healthy and tumor rat brain tissue, never seen during training. (A), (B) Label-free $60\times$ qOBM images of mixed rat brain tissue. (C), (D) Corresponding vH&E images. (E), (F) Standard brightfield H&E images of the same tissue subtypes, provided for comparison. Scale bar is 50 µm.
Fig. 5. Quantitative evaluation of qOBM-to-vH&E conversion for rat brain tissue. A classifier trained on standard H&E images to differentiate between tumor and healthy images is assessed using vH&E images. Summary of results: (A) accuracy of training H&E set with five-fold cross-validation and the accuracy of the vH&E test set. (B) Confusion matrix of this H&E healthy/tumor classifier applied to the vH&E images.
Fig. 6. Label-free qOBM imaging of a strip of rat brain tissue and corresponding vH&E. Virtual H&E strip mosaic of rat brain tumor obtained by applying the CycleGAN trained on rat brain to the whole mosaic at once. (A) Virtual H&E mosaic (6.3 mm × 270 µm). Scale bar is 300 µm. (B) Zoomed-in region of the label-free $60\times$ qOBM strip (600 µm × 270 µm). Scale bar is 50 µm. (C) Zoomed-in region of the corresponding vH&E strip. (D) Zoomed-in region of a brightfield H&E strip region provided for comparison.
Fig. 7. qOBM and corresponding vH&E 3D volumetric stack of a rat brain tumor margin. vH&E volumetric stack (270 µm × 270 µm × 100 µm) obtained by applying the trained rat brain CycleGAN to each image in the stack. (A) qOBM-to-vH&E conversion of the volumetric stack is depicted. (B) vH&E volume, with X-Y (60 µm deep), X-Z, and Y-Z cross sections shown. Tissue surface is on top of the volume. See Supplement 1, Fig. 6 for the qOBM equivalent. (C) qOBM image slices at various depths and the corresponding vH&E image slices. Scale bar is 50 µm. See Supplement 1, Visualization 1 and Visualization 2 for a depth sweep-through of this volumetric stack.
Fig. 8. qOBM-to-vH&E conversion of human gliomas. Each row contains a qOBM image, corresponding vH&E, and standard brightfield H&E images, provided for comparison. (A)–(I) Three separate human grade 3 glioma specimens (one per row). (J)–(L) Human grade 2 (low-grade) glioma specimen. (M)–(O) Healthy human tissue specimen from the edge of a grade 3 astrocytoma. Scale bar is 50 µm.
Fig. 9. qOBM and corresponding vH&E 3D volumetric stack of a human glioma margin. Virtual H&E volumetric stack obtained by applying the trained human glioma CycleGAN to each image in the stack. (A) qOBM-to-vH&E conversion of the volumetric stack is depicted. (B) vH&E volume, with X-Y (55 µm deep), X-Z, and Y-Z cross sections shown. Tissue surface is on top of the volume. (C) qOBM image slices at various depths and the corresponding vH&E image slices. The white arrow highlights an irregular nucleus. Scale bar is 50 µm. See Supplement 1, Visualization 3 and Visualization 4 for a depth sweep-through of this volumetric stack.

Tables (1)


Table 1. Neuropathologist User Study Comparing Standard H&E and Virtual H&E for Interpretation

Equations (7)


$$\phi = \mathcal{F}^{-1}\left\{ \frac{\sum_{k} \bar{I}_{\mathrm{DPC},k}\, C_{\mathrm{DPC},k}^{*}}{\sum_{k} \left|C_{\mathrm{DPC},k}\right|^{2} + \alpha} \right\}.$$

$$\mathcal{L}_{\mathrm{cycle}}(G_X, G_Y) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\lVert G_Y(G_X(x)) - x \rVert_1\right] + \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\!\left[\lVert G_X(G_Y(y)) - y \rVert_1\right],$$

$$\mathcal{L}_{G}(D, G, X, Y) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[(D(G(x)) - 1)^2\right],$$

$$\mathcal{L}_{D}(D, G, X, Y) = \tfrac{1}{2}\,\mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\!\left[(D(y) - 1)^2\right] + \tfrac{1}{2}\,\mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[(D(G(x)))^2\right].$$

$$\mathcal{L}_{\mathrm{idt}}(G_X, G_Y) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\lVert G_Y(x) - x \rVert_1\right] + \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\!\left[\lVert G_X(y) - y \rVert_1\right].$$

$$\min_{G} \mathcal{L}_{\mathrm{full}} = \lambda_{\mathrm{cyc}}\,\mathcal{L}_{\mathrm{cycle}}(G_X, G_Y) + \mathcal{L}_{G}(D_Y, G_X, X, Y) + \mathcal{L}_{G}(D_X, G_Y, X, Y) + \lambda_{\mathrm{idt}}\,\mathcal{L}_{\mathrm{idt}}(G_X, G_Y),$$

$$\min_{D} \mathcal{L}_{\mathrm{full}} = \mathcal{L}_{D}(D_Y, G_X, X, Y) + \mathcal{L}_{D}(D_X, G_Y, X, Y).$$
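To connect the loss terms above to code, the following is a minimal PyTorch sketch (not the UPIT implementation used in this work) of the least-squares adversarial, cycle-consistency, and identity terms, assuming generators G_X: X→Y and G_Y: Y→X with discriminators D_X and D_Y; the weight values are illustrative.

```python
import torch
import torch.nn.functional as F

def generator_loss(G_X, G_Y, D_X, D_Y, x, y, lambda_cyc=10.0, lambda_idt=5.0):
    fake_y, fake_x = G_X(x), G_Y(y)

    # Least-squares adversarial terms: E[(D(G(x)) - 1)^2]
    adv = F.mse_loss(D_Y(fake_y), torch.ones_like(D_Y(fake_y))) \
        + F.mse_loss(D_X(fake_x), torch.ones_like(D_X(fake_x)))

    # Cycle consistency: ||G_Y(G_X(x)) - x||_1 + ||G_X(G_Y(y)) - y||_1
    cyc = F.l1_loss(G_Y(fake_y), x) + F.l1_loss(G_X(fake_x), y)

    # Identity: ||G_Y(x) - x||_1 + ||G_X(y) - y||_1
    idt = F.l1_loss(G_Y(x), x) + F.l1_loss(G_X(y), y)

    return adv + lambda_cyc * cyc + lambda_idt * idt

def discriminator_loss(D, real, fake):
    # 0.5*E[(D(real) - 1)^2] + 0.5*E[D(fake)^2], with fake detached from the generator graph
    fake = fake.detach()
    return 0.5 * F.mse_loss(D(real), torch.ones_like(D(real))) \
         + 0.5 * F.mse_loss(D(fake), torch.zeros_like(D(fake)))
```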