
Automated bone cell classification for confocal laser scanning microscopy volumes

Open Access

Abstract

Manual cell classification in microscopy images is a time-consuming process that heavily relies on the subjective perception of the investigator. Identifying bone cells introduces additional difficulties with irregular geometries, and in some culture conditions, the presence of bone mineral. As fluorescence-based lineage tracing becomes more common, classifying cell types based upon cell color can further increase subjectivity. Our goal is to develop and validate a fully automated cell classification algorithm that can (i) objectively identify cells in flattened volumetric image stacks from three-dimensional (3D) bone cell cultures and (ii) classify the cells (osteoblast-lineage) based on the color of their cell bodies. The algorithm used here was developed in MATLAB 2019a and validated by comparing code outputs to manual labeling for eleven images. The precision, recall, and F1 scores were higher than 0.75 for all cell classifications, with the majority being greater than 0.80. No significant differences were found between the manually labeled and automated cell counts or cell classifications. Analysis time for a single image averaged seventeen seconds compared to more than ten minutes for manual labeling. This demonstrates that the program offers a fast, repeatable, and accurate way to classify bone cells by fluorescence in confocal microscopy image data sets. This process can be expanded to improve investigation of other pre-clinical models and histological sections of pathological tissues where color or fluorescence-based differences are used for cell identification.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Cell culture is a common tool used across many biomedical disciplines. In bone biology, this approach is used to investigate a range of cellular activities including cellular differentiation, metabolism, response to pharmacologic agents and hormones, and regulation of bone formation and resorption by osteoblasts, osteoclasts, and osteocytes [1]. Measurements derived from these cultures, such as cell counts and cell morphology, can help when drawing conclusions about the effect of certain culture conditions or environments on bone cell development and overall cellular health. Cell counts and classification of different cell types can also indicate the metabolic state of cells in culture.

Cell culture approaches can be used to track cell and organoid development through time. Lineage tracing is the process by which the origin and fate of cells can be linked [2]. Lineage tracing has traditionally been conducted using vital dyes, which dilute through cell division, making it increasingly difficult to classify cells over time [2]. Different cell types can also be identified by their physical characteristics, although this introduces subjectivity and more complex classification problems. One of the most efficient and objective ways to lineage trace is by inducing the production of genetically encoded fluorescent markers, which are produced by the cells themselves and persist through division and differentiation, preventing the dilution issue [3]. This can be done using Cre-LoxP recombination systems, which allow for cell-specific spatial and temporal control based upon the activation of certain promoters tied to cell differentiation states [4,5]. Lineage tracing systems are not limited to cell culture; they have also been applied to mouse models, for example, to track organ differentiation and development using fluorescent histological methods. These methods allow for the tracking and differentiation of cells in culture and ex vivo by providing visual cues that can be distinguished by an investigator or an automated image processing program.

Distinguishing cells is necessary for cell counting, which is one metric used to identify culture response to perturbations. Manual cell identification and classification are common tasks in cell culture studies. These tasks are time consuming and subjective, introducing individual-based variation that can have detrimental effects on the accuracy of a study [6]. Many tools have been developed to expedite the process and increase accuracy; however, these approaches often still require some form of manual input [7,8] or are not broadly applicable to multiple imaging or culture types [8,9]. With increasing accessibility of confocal laser-scanning microscopy (CLSM), fluorescent stains or genetics-based fluorescent markers can provide additional ways to classify cells, a capability that has rarely been addressed in automated image analyses. Some analysis programs even elect to ignore color information, instead relying on grayscale analysis [10].

Bone cell cultures present extremely variable cell shapes that introduce more complexity to the classification problem. Osteocytes possess dendritic processes that form cell-to-cell connections, collectively creating a vast network with increasingly complex geometries [11]. Cultured osteocytes alter their shape depending on their micro-environment, which can vary significantly between cells or cultures [12]. The adaptability of these cells creates complex image objects that complicate automatic detection. Additionally, cultured osteoblasts can deposit mineral (hydroxyapatite, HA) that creates noticeable autofluorescent background noise and non-cell objects in CLSM [13] that can confuse cell classification programs. Furthermore, the increased use of three-dimensional culture models artificially creates overlapping cells when volumes are flattened to a 2D image for analysis [14–17]. Extensive variation, complexity, and overlap in cell culture and the use of fluorescent markers to detect cellular differentiation all increase the complexity of obtaining manual or automated cell counts for bone cell cultures. To the best of our knowledge, no tool exists that can address these concerns and accurately identify bone cells in CLSM images. Such a tool would be useful for assessing data for pre-clinical models as well as clinical evaluations of histological samples.

The goal of this study is to develop an automated method for classifying fluorescently labeled cell cultures using volumetric image stacks captured by CLSM. Specifically, we will employ a novel 3D collagen-hydroxyapatite culture model that uses a fluorescent lineage tracing approach to identify bone cell differentiation from osteoblasts to osteocytes. This automated method will be developed in MATLAB utilizing the Image Processing Toolbox. This method should be comparable to human labeling while minimizing human interaction and processing time.

2. Methods

2.1. Animals and manufacturing the in vitro 3D culture model

Primary osteoblasts were harvested from the long bones of membrane-tomato/membrane-green (mTmG) × Dentin Matrix Acidic Phosphoprotein 1 (DMP1)-8 kilobase (kb)-Cre mice for eventual use in three-dimensional (3D) cell cultures to examine the differentiation of osteoblasts to osteocytes. The isolated osteoblasts from these mice had cell membranes that expressed red (tdTomato) fluorescence. Late osteoblasts and osteocytes expressed a Cre knock-in mutation under the control of the Dmp1-8kb promoter, which caused these cells to express enhanced green fluorescent protein (EGFP) in the cell membrane in place of tdTomato [4,18,19]. At the time of harvest and expansion, the primary osteoblasts uniformly expressed tdTomato in their cell membranes. These red osteoblasts were placed in a self-assembling 3D culture varying in both HA and Type 1 collagen concentrations with the goal of differentiating these cells to green osteocytes [20,21]. Here, our objective is to describe the development of an automated program to objectively and quickly count cell numbers and classify these cells based on membrane fluorescence. The result of this analysis in full application, along with other measurements relative to the different 3D culture conditions, will be discussed in a subsequent paper.

2.2. Laser scanning confocal microscopy of cells in the 3D culture model

Images of primary osteoblasts and differentiated osteocytes were obtained from the 3D cultures by CLSM (Nikon A1R, Melville, NY) at 20× magnification (1.01 µm × 1.01 µm × 2 µm voxel size). For each CLSM z-stack captured, a maximum intensity projection was used to create a z-compressed 2D tiff image (NIS Elements, Nikon). Most images were saved as files with 512 × 512 pixels, meaning each image has approximately a 517 µm field of view. In 3D culture, these cells initially fluoresced red (osteoblasts), transitioned to yellow during differentiation, and then finally expressed only green fluorescence (late osteoblasts and osteocytes). These three classifications were defined for each cell in the confocal images. The coverslip mounting media contained 4′,6-diamidino-2-phenylindole (DAPI), which stained the nuclei blue (ProLong Diamond Antifade Mountant with DAPI, Sigma Aldrich).

2.3. Development of the image processing algorithm for detecting and classifying bone cells in the 3D culture model

An analysis program was written in MATLAB (MathWorks 2019a, Natick, MA) to identify and classify the cells in the culture images. The program had two different filter settings, one for images with high green autofluorescence (GAF) and another for images with low GAF. The different filter options were developed because the image dataset was split evenly between these two subpopulations, and because these populations exhibited very different cell and background characteristics. The standard filtering process (low-GAF) had difficulty distinguishing real cells from the prominent GAF background in the high-GAF images. Therefore, different filter paradigms were developed for each population, with the subsequent analysis being identical for both (Figs. 1 and 2). After filtering, the processed image was used for object identification, both for the cell bodies and nuclei. The nuclei were matched to their respective cell bodies, whose color content was analyzed to produce a color classification that was linked to cell type.

Fig. 1. Fluorescence-based cell detection and classification workflow. After the images were loaded, the user specified the filter method (high-GAF or low-GAF) to pre-process the images. The resulting filtered image was used for detecting nuclei through the blue channel of the RGB image. The identified nuclei were matched with their respective cell bodies, which were then given a classification based on their color content.

Fig. 2. Example images for low and high-GAF filtering. (a) Unfiltered low-GAF image (b) Image from (a) filtered using the low-GAF method (c) Unfiltered high-GAF image (d) Image from (c) filtered using high-GAF method.

2.3.1. Low-GAF noise reduction and filtering

The original image (Fig. 3(A)) was first converted to grayscale (Fig. 3(B)), and a hysteresis filter was applied. This filter defined two thresholds, retaining pixels that passed the less strict threshold only if they were connected to pixels that passed the strict threshold. This approach sought to eliminate noise while retaining low-intensity structures at the cell edges. Both thresholds were derived from a modified Otsu threshold, which allowed the filtering to account for image variation arising from the image subject or from imaging parameters such as laser power, gain, or z-stack distance. A morphological closing operation with a 5 × 5 square neighborhood was then performed to connect any separated cell structures, after which small objects (< 50 pixels, 51 µm²) were removed from the image.
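A minimal MATLAB sketch of this hysteresis step is shown below, assuming the Image Processing Toolbox is available. The input file name and the scale factors applied to the Otsu threshold (1.2 and 0.6) are illustrative placeholders, not the tuned values used in the study.

```matlab
% Hysteresis filtering via morphological reconstruction: keep pixels that
% pass the lenient threshold only if connected to strict-threshold pixels.
img  = imread('culture_image.tif');   % hypothetical file name
gray = rgb2gray(img);

tOtsu  = graythresh(gray);                        % normalized Otsu threshold [0,1]
hiMask = imbinarize(gray, min(1, 1.2 * tOtsu));   % strict threshold (assumed factor)
loMask = imbinarize(gray, 0.6 * tOtsu);           % lenient threshold (assumed factor)
bw     = imreconstruct(hiMask, loMask);           % hysteresis result

bw = imclose(bw, strel('square', 5));             % reconnect separated cell structures
bw = bwareaopen(bw, 50);                          % remove objects < 50 px (51 um^2)
```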

Fig. 3. Example of the image processing steps for a low-GAF image. (a) Unfiltered image with noise cloud indicated by red arrows. (b) Original image converted to grayscale. (c) Image filtered with low-GAF filtering method. (d) Cell bodies identified by applying the watershed algorithm to the filtered image. (e) Binary image of identified nuclei from filtered image. (f) Original image with overlaid cell classification labels. The color of the circle indicates the color to which the cell was assigned.

As osteocytes often have processes protruding from the main cell body, these structures needed to be retained. A morphological opening operation with a 7 × 7 square neighborhood was used to produce an image that was subtracted from the input image, leaving only thin objects resembling cell processes. This was followed by the removal of small objects (< 25 pixels, 25.5 µm²). The resulting cell processes were stored and added back at the end of filtering, since filtering had destructive effects on cell processes. Next, extremely large objects (> 2000 pixels, 2040 µm²) were identified using component labeling. This process iteratively propagated labels through connected objects in a binary image so that all distinct and separate objects had their own unique label. The size of an object (e.g. nucleus, cell, mineral particle) could be determined by counting the number of pixels with a specific label. Large objects were identified in this way and removed from the original image. These objects were filtered using a threshold set at a multiple of the mean pixel intensity within the object (tuned to be 0.8) and added back to the image. This process was used to target large clouds of mineral autofluorescence that were present in otherwise low-GAF images. An additional repetition of this filter was applied to the blue channel alone to reduce the possibility of erroneous nucleus detection (Fig. 3(C)). The final image (Fig. 2(B)) was passed on for nucleus detection.
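The process-retention and large-object steps can be sketched as follows. Here `gray` and `bw` are the grayscale image and binary mask from the previous step; only the 0.8 intensity multiplier and the size cutoffs come from the text, and everything else is an illustrative assumption.

```matlab
% Retain thin cell processes: subtract a morphological opening (a top-hat
% operation) so only structures thinner than the 7 x 7 neighborhood remain.
openRef   = imopen(gray, strel('square', 7));
processes = gray - openRef;                          % uint8 subtraction saturates at 0
procMask  = bwareaopen(imbinarize(processes), 25);   % drop fragments < 25 px

% Component labeling to find very large objects (> 2000 px), e.g. clouds of
% mineral autofluorescence, and refilter each at 0.8x its mean intensity.
cc    = bwconncomp(bw);
areas = cellfun(@numel, cc.PixelIdxList);
for k = find(areas > 2000)
    idx     = cc.PixelIdxList{k};
    bw(idx) = gray(idx) > 0.8 * mean(gray(idx));     % tuned multiplier from the text
end

bw = bw | procMask;                                  % add the cell processes back
```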

2.3.2. High-GAF noise reduction and filtering

Following loading of the image, the mean of each color channel was subtracted from that channel to reduce background noise. Hysteresis thresholding was applied to each color channel individually because of its superior performance compared to grayscale application. When color-specific hysteresis filtering was applied to low-GAF images, it produced noticeable color distortion, hence its omission from the low-GAF algorithm. This was followed by a morphological closing operation with a 5 × 5 square neighborhood and the removal of small objects (< 50 pixels, 51 µm²). Osteocytes in high-GAF images tended to have very few if any protruding canaliculi, eliminating the need for operations that retained cell processes. A large object filter (> 500 pixels, 510 µm²), similar to that of the low-GAF algorithm, was applied to the image. The size threshold for this filter was decreased from its low-GAF counterpart to target the many non-cell objects that arose from the high-GAF background. An additional large object filter (> 450 pixels, 459 µm² for blue; > 800 pixels, 816 µm² for green) was applied to the blue and green channels to avoid erroneous nucleus detection and to minimize the effect of green autofluorescence. The final image (Fig. 2(D)) was passed on for nucleus detection.
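A sketch of the high-GAF pre-processing under the same assumptions as above; the per-channel threshold factors are placeholders, and the channel-specific large-object filters described in the text are omitted for brevity.

```matlab
% Per-channel mean subtraction followed by channel-wise hysteresis.
img    = im2double(imread('high_gaf_image.tif'));   % hypothetical file name
bwChan = false(size(img, 1), size(img, 2), 3);
for c = 1:3
    chan = img(:, :, c) - mean(img(:, :, c), 'all');   % subtract channel mean
    chan = max(chan, 0);                               % clip negative values
    t    = graythresh(chan);                           % per-channel Otsu threshold
    bwChan(:, :, c) = imreconstruct(chan > 1.2 * t, chan > 0.6 * t);
end
bw = any(bwChan, 3);                   % combine channel masks
bw = imclose(bw, strel('square', 5));  % 5 x 5 closing, as in the text
bw = bwareaopen(bw, 50);               % remove objects < 50 px (51 um^2)
```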

2.3.3. Nucleus detection

The filtered image was converted to grayscale, and a watershed algorithm was used to identify objects within the image and give them unique labels (Fig. 3(D)). This algorithm treated the image as a topological map of pixel intensities, essentially flooding the basins and identifying the borders produced by the watershed lines. These borders could be used to separate connected objects. Objects with fewer than 10 pixels (10.2 µm²) were removed. The blue channel was then isolated from the filtered image. An Otsu threshold was first applied and then a second watershed operation was performed to separate connected nuclei (Fig. 3(E)). This identified the nuclei within the image. The centroid of each nucleus was then calculated and stored. Only objects within the image that overlapped with nuclei were considered cells.
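The nucleus-detection step might look like the following sketch, where `filtered` stands for the RGB output of the pre-processing above (background pixels zeroed). Using the distance transform for the second watershed is a common choice for splitting touching nuclei, assumed here rather than taken from the paper.

```matlab
grayF = rgb2gray(filtered);
fg    = grayF > 0;                        % foreground that survived filtering

% Watershed on inverted intensities: bright cells become catchment basins,
% and the ridge lines (label 0) separate connected objects.
L = watershed(imcomplement(grayF));
L(~fg) = 0;
L = bwlabel(bwareaopen(L > 0, 10));       % relabel; drop objects < 10 px (10.2 um^2)

% Nuclei from the blue channel: Otsu threshold, then a second watershed.
blue  = filtered(:, :, 3);
nucBW = imbinarize(blue, graythresh(blue));
Lnuc  = watershed(-bwdist(~nucBW));       % distance-transform watershed (assumed)
nucBW(Lnuc == 0) = false;                 % cut touching nuclei along ridge lines

stats     = regionprops(nucBW, 'Centroid');
centroids = cat(1, stats.Centroid);       % one (x, y) centroid per nucleus
```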

2.3.4. Cell body matching and color classification

Each nucleus was matched with a corresponding object within the image by identifying the most common object label within the nucleus’ vicinity. This object was extracted from the image and converted to a Hue-Saturation-Brightness (HSB) representation from the original Red-Green-Blue (RGB) representation. Since the Hue channel values represent color on a complete spectrum, each pixel could be assigned a color (red, green, yellow) based upon the magnitude of the corresponding Hue channel pixel (Fig. 4).
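A short sketch of this matching, reusing `L` (cell-body labels) and `nucBW` (nucleus mask) from the previous step; taking the mode of the labels under each nucleus footprint is one plausible reading of "the most common object label within the nucleus' vicinity".

```matlab
nucCC = bwconncomp(nucBW);
cellLabelOfNucleus = zeros(nucCC.NumObjects, 1);
for n = 1:nucCC.NumObjects
    labelsHere = L(nucCC.PixelIdxList{n});        % object labels under this nucleus
    labelsHere = labelsHere(labelsHere > 0);      % ignore background
    if ~isempty(labelsHere)
        cellLabelOfNucleus(n) = mode(labelsHere); % most common label wins
    end
end
```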

Fig. 4. Circular Hue channel representation indicating cutoff values for each color classification.

The cell object's color content was determined by grouping pixel values based on the Hue channel and summing the scaled Brightness channel values for each color. A Gaussian roll-off was used to weight the brightness of pixels closer to the nucleus more highly than distant pixels. The Brightness values were raised to the fourth power (tuned manually) to prioritize brighter pixels. This summation created a color score for each color classification (red, green, yellow). The greatest classification score was taken as the final classification for the cell (Fig. 5). This color classification could then be linked to cell type. This process was completed for every nucleus identified in the image. A cutoff was defined for the minimum number of colored pixels required to be classified as a cell (70 pixels, 71.4 µm²). This prevented small numbers of pixels from being identified as a cell body. The final output (Fig. 3(F)) was labeled in MATLAB, and the cell-classified labeled image and the number of cells in each class were stored as .jpg and .txt files, respectively. Additionally, the code output the cell density for each classification if the resolution of the image was provided.
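A sketch of the scoring, where `cellRGB` is the extracted cell object and `(cx, cy)` its nucleus centroid; the hue cutoffs and the Gaussian width `sigma` are illustrative assumptions standing in for the boundaries of Fig. 4, while the fourth-power brightness weighting and the winner-take-all decision follow the text.

```matlab
hsvImg = rgb2hsv(im2double(cellRGB));
H = hsvImg(:, :, 1);                       % hue: color on a circular spectrum
B = hsvImg(:, :, 3);                       % brightness

% Gaussian roll-off: weight pixels near the nucleus centroid more highly.
[X, Y] = meshgrid(1:size(H, 2), 1:size(H, 1));
sigma  = 30;                                             % assumed roll-off width (px)
w = exp(-((X - cx).^2 + (Y - cy).^2) / (2 * sigma^2));

Bw = w .* B.^4;                            % 4th power prioritizes brighter pixels
isRed    = H < 0.08 | H > 0.92;            % assumed hue cutoffs (cf. Fig. 4)
isYellow = H >= 0.08 & H < 0.25;
isGreen  = H >= 0.25 & H < 0.50;

scores  = [sum(Bw(isRed)), sum(Bw(isYellow)), sum(Bw(isGreen))];
classes = {'osteoblast (red)', 'differentiating (yellow)', 'late Ob/Ot (green)'};
[~, best] = max(scores);                   % greatest color score wins
cellClass = classes{best};
```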

Fig. 5. Cells with their corresponding pixel classifications. The left side of each pair (a-c) is the unprocessed image, the right side indicates the corresponding pixel classifications. The blue nucleus shown signifies the cell being evaluated within the object. Other nuclei within the object are not shown. (a) Osteoblast (red). (b) Late Osteoblast/Osteocyte (green). (c) Osteoblast (indicated by the location of nucleus shown in blue) within overlapping cell object. In (c), all non-black pixels are considered a connected object, and thus are all associated with the nucleus shown.

2.4. Validation of automated program and comparison to manual labeling

Eleven images were chosen for validating the program (N = 6 for low-GAF, N = 5 for high-GAF). The dataset was naturally skewed toward osteoblasts since the images contained far fewer late osteoblasts/osteocytes and differentiating cells (Fig. 6(A)). Left unadjusted, this class imbalance problem would result in precision and recall values that are not representative of the true program performance [22]. To account for this, synthetic images were created from a subset of the test dataset. To create images with predominantly late osteoblasts/osteocytes (green), the red and green channels were swapped (Fig. 6(B)) for both the low and high-GAF groups (N = 2 for both). Since the high-GAF option has filters specific to the green channel, these filters were changed to operate on the red channel for these synthetic images. To create images with primarily differentiating cells, red color content was mapped to yellow in the HSB domain (Fig. 6(C)) for both the low-GAF and high-GAF groups (N = 2 for both). The original yellow color content was mapped to red in the same fashion. With the addition of these synthetic images, the dataset used totaled N = 15 images.
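The synthetic images can be generated as in the sketch below; the channel swap is exact as described, while the red-to-yellow remap is shown as a direct hue reassignment in the HSV domain, one plausible reading of the mapping in the text.

```matlab
img = imread('validation_image.tif');     % hypothetical file name

% (b) Swap the red and green channels so red cells appear green.
imgRG = img(:, :, [2 1 3]);

% (c) Exchange red and yellow color content in the HSB/HSV domain.
hsvImg   = rgb2hsv(im2double(img));
H        = hsvImg(:, :, 1);
isRed    = H < 0.08 | H > 0.92;           % assumed hue bins, as in Section 2.3.4
isYellow = H >= 0.08 & H < 0.25;
H(isRed)    = 1/6;                        % red -> yellow (hue 60 degrees)
H(isYellow) = 0;                          % yellow -> red (hue 0 degrees)
hsvImg(:, :, 1) = H;
imgRY = im2uint8(hsv2rgb(hsvImg));
```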

Fig. 6. Synthetic images used for validation. (a) Original image (b) Image with the green and red channel swapped (c) Image with the yellow and red color swapped.

The original eleven images were independently labeled by the code and three investigators (BTF, XX, RPM). No adjustments, such as contrast or brightness enhancement, were made during manual labeling. Manual labels for the synthetic images were determined by swapping labels based on the color change that was applied. The investigator-labeled results were pooled and averaged as a ground truth dataset. Cell-by-cell comparisons were made between the automated code labeling and the ground truth to determine the number of false positive, false negative, true positive, and true negative classifications for each cell type in each image. These values were pooled across the high-GAF and low-GAF groups individually, and then combined, to determine precision, recall, and F1 score for each cell classification. Additionally, the total cell counts in each cell classification for each original image (excluding synthetics) were compared to ground truth by paired t-test (α = 0.05) to determine whether the code labeling was statistically different from the human investigators. This comparison was conducted with the low-GAF group, the high-GAF group, and the entire test dataset.
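The metrics reduce to standard formulas over the pooled counts, sketched below with `tp`, `fp`, and `fn` as hypothetical vectors of true-positive, false-positive, and false-negative counts per class; `ttest` requires the Statistics and Machine Learning Toolbox.

```matlab
% Precision, recall, and F1 per cell classification.
P  = tp ./ (tp + fp);          % precision: fraction of predicted labels that are correct
R  = tp ./ (tp + fn);          % recall: fraction of true cells that are recovered
F1 = 2 .* P .* R ./ (P + R);   % harmonic mean of precision and recall

% Paired t-test between per-image manual and automated counts for one class.
[~, pval] = ttest(manualCounts, autoCounts);   % default alpha = 0.05
```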

3. Results

The automated program performed with good accuracy relative to the manual labeling. Of the total number of manually labeled osteoblasts (1059), 84% (888) were also identified as osteoblasts by the automated program. For differentiating cells and late osteoblasts/osteocytes, the percentages of correctly identified cells were 75% and 77%, respectively, across all images. Further examination of the cell counts outside of the confusion matrix diagonal (Table 1) shows that most mislabeled cells were missed entirely (predicted as non-cell objects, bottom row) or should not have been labeled (true classification was non-cell object, last column in each sub-table). Relatively few misclassifications between different cell types were observed.

Table 1. Confusion matrix for cell classifications of osteoblasts (Ob), late osteoblasts/osteocytes (Ot), differentiating cells (D) and non-cell objects (N). Shading indicates the diagonal of the matrix. Example: for the Low-GAF dataset, there were 749 cells identified as red by the program (predicted) that were also identified as red by the investigators (actual). Non-cell objects (N) indicates that an object was not labeled as a cell, and as such all values are zero for the bottom right corner of the matrix.

Comparing program labeling to manual labeling resulted in F1 scores above 0.70 for all subsets of the data (Table 2). When considering the whole dataset, the minimum F1 score observed was 0.79 (late osteoblasts/osteocytes) while the maximum was 0.85 (osteoblasts). Precision and recall were greater than 0.80 for most subsets of the data. The F1 scores for the low-GAF subset were larger than the scores for the high-GAF subset for all cell classifications. In our study, precision was valued more highly than recall. Precision was consistently greater than recall for the low-GAF group but was similar to or lower than recall for the high-GAF group. As a result of this classification performance, the program output and manual cell counts were not significantly different for any image set or cell type (Table 3).

Table 2. Performance metrics for different cell classifications and datasets

Table 3. P-values from paired t-test comparisons between manual and automated cell counts. Significance indicated at p < 0.05.

4. Discussion

The program demonstrated accurate classification for cells in the low-GAF group and less accurate classification for the high-GAF group. The low-GAF group showed precision values all greater than 0.88 and recall values greater than 0.74. The high-GAF group generally showed similar results, with a noticeably lower precision score for late osteoblasts/osteocytes. Paired t-tests demonstrated no significant differences between manual labeling and the program, indicating that the program would be indistinguishable from an additional investigator. Additionally, the automated program greatly outperformed humans in terms of labeling time. Whereas labeling would take investigators more than ten minutes per image, the program averaged seventeen seconds per image. For large datasets that would be impractical to label by hand, this program offers a strong alternative with good accuracy. With minor modifications, this program can be used to analyze image datasets from culture or histological samples with high accuracy and speed in support of lineage tracing, immunohistochemical studies, or pathological investigations.

Compared to other published cell identification programs, the presented algorithm performed similarly. Automated classification of bone marrow cells (monocytes, neutrophils, basophils, etc.) produced precision and recall values between 0.6 and 0.95 [23], which is comparable to the results found here. Classification of mouse brain cells (Purkinje cells) imaged using confocal microscopy produced precision and recall values between 0.89 and 0.97 [24], better than the algorithm presented here for most color classifications. However, in that brain cell study, performance was based on a single classification group in grayscale images instead of the three classification groups in color images examined in our study. Therefore, some difference in performance can be expected given the increased complexity of our data. Comparison with the commonly used open-source software CellProfiler shows similar performance for cell counting. That software reported cell counts within 6–17% of manual labeling averages for human and Drosophila cell counting in culture [25]. The algorithm presented here demonstrated cell counts within 2–11% of the manually labeled averages for the whole dataset.

Examining the mislabeled cells across both groups demonstrates that the program rarely misclassified a cell as another classification. Most of the discrepancy in the low-GAF performance arose from the program labeling cells when the ground truth data did not (non-cell object classification). Whereas some of these cells could be considered mislabeled, it is also possible that the program was able to identify underlying color around these nuclei that was difficult to see with the human eye. This is a key advantage of computational approaches to image analysis, since variance in human eyesight, lighting conditions, and different display settings on computer monitors can change the perception of the image.

The difference in low-GAF and high-GAF performance indicated that GAF is a limiting factor in labeling these kinds of images. The algorithm's precision values for the high-GAF group were noticeably lower than for the low-GAF group for every classification, and particularly low for late osteoblasts/osteocytes. This is expected, since these images were separated out for having low signal-to-noise ratios and a pervasive presence of non-cell objects. These elements confused the classifier to some degree, despite the dedicated filtering process for these images. Therefore, our approach might not be optimal for low signal-to-noise samples. Additionally, there were fewer cells in the high-GAF images compared to the low-GAF images. This skews the performance metrics for the overall dataset toward the low-GAF group; however, the performance metrics for each data subset individually remain representative. Despite the disparity in cell numbers between the two subsets, the classification was still not significantly different from manual labeling.

Despite these shortcomings, the program offers a fast, repeatable, and accurate way to classify cells in microscopy image data sets. This program can be deployed to analyze confocal or histological image sections for insight into preclinical models and human disease states. With small changes to the color classifier, the classifications can be altered to account for different staining schemes and immunohistochemical detection of stained cells. The program is not limited to bone cell classification, as the principles used for segmentation can be applied to many other cell types, such as neurons or bone marrow cells. It can also be applied to lineage tracing studies across many anatomical systems [4,26,27]. Classification outputs from cell culture or histological section data can be used to compare different culture conditions and draw insights about the underlying differences between groups. Additionally, analysis of pathological tissues from histological sections or immunohistochemically stained tissues can be used to assess disease progression. Automated analysis of histological sections has previously been applied to assess diseases like cancer [28,29], and our program provides a platform to do the same. Considering the speed and accuracy of this program relative to manual labeling, both preclinical model evaluation and pathological sample analysis could see potential benefits.

Funding

National Institutes of Health (R21 AR065659); Purdue University (Discovery Park Research Fellows Initiative).

Acknowledgments

We would like to thank Maxime Gallant and Whitney Bullock for helping to develop the 3D culture model, and J. Andrew Schaber and the Bindley Imaging Facility for their assistance in gathering the data used in this project.

Disclosures

There are no conflicts to declare.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. A. L. Boskey and R. Roy, “Cell culture systems for studies of bone and tooth mineralization,” Chem. Rev. 108(11), 4716–4733 (2008). [CrossRef]  

2. C. D. Stern and S. E. Fraser, “Tracing the lineage of tracing cell lineages,” Nat. Cell Biol. 3(9), E216–E218 (2001). [CrossRef]  

3. B. G. Matthews, N. Ono, and I. Kalajzic, “Methods in lineage tracing,” Principles of Bone Biology 92, 1887–1898 (2016). [CrossRef]  

4. M. D. Muzumdar, B. Tasic, K. Miyamichi, N. Li, and L. Luo, “A global double-fluorescent cre reporter mouse,” genesis 45(9), 593–605 (2007). [CrossRef]  

5. I. Kalajzic, B. Matthews, E. Torreggiani, M. Harris, P. Pajevic, and S. Harris, “In vitro and in vivo approaches to study osteocyte biology,” Bone 54(2), 296–306 (2013). [CrossRef]  

6. A. Perné, J. A. Hainfellner, I. Womastek, A. Haushofer, T. Szekeres, and I. Schwarzinger, “Performance evaluation of the sysmex XE-5000 hematology analyzer for white blood cell analysis in cerebrospinal fluid,” Arch. Pathol. Lab. Med. 136(2), 194–198 (2012). [CrossRef]  

7. S. De Boodt, A. Poursaberi, J. Schrooten, D. Berckmans, and J. M. Aerts, “A semiautomatic cell counting tool for quantitative imaging of tissue engineering scaffolds,” Tissue Engineering Part C: Methods 19(9), 697–707 (2013). [CrossRef]  

8. J. Lojk, U. Čibej, D. Karlaš, L. Šajn, and M. Pavlin, “Comparison of two automatic cell-counting solutions for fluorescent microscopic images,” J. Microsc. 260(1), 107–116 (2015). [CrossRef]  

9. J. O’Brien, H. Hayder, and C. Peng, “Automated quantification and analysis of cell counting procedures using Imagej Plugins,” JoVE 2016(117), 54719 (2016). [CrossRef]  

10. J. G. Kelly and M. J. Hawken, “Quantification of neuronal density across cortical depth using automated 3D analysis of confocal image stacks,” Brain Struct. Funct. 222(7), 3333–3353 (2017). [CrossRef]  

11. A. M. Ashique, L. S. Hart, C. D. L. Thomas, J. G. Clement, P. Pivonka, Y. Carter, D. D. Mousseau, and D. M. L. Cooper, “Lacunar-canalicular network in femoral cortical bone is reduced in aged women and is predominantly due to a loss of canalicular porosity,” Bone Reports 7, 9–16 (2017). [CrossRef]  

12. R. F. M. van Oers, H. Wang, and R. G. Bacabac, “Osteocyte Shape and Mechanical Loading,” Curr Osteoporos Rep 13(2), 61–66 (2015). [CrossRef]  

13. A. I. Prentice, “Autofluorescence of bone tissues,” J. Clin. Pathol. 20(5), 717–719 (1967). [CrossRef]  

14. J. W. Haycock, “3D cell culture: a review of current approaches and techniques,” Methods in molecular biology (Clifton, N.J.) 695, 1–15 (2011). [CrossRef]  

15. B. Weigelt, C. M. Ghajar, and M. J. Bissell, “The need for complex 3D culture models to unravel novel pathways and identify accurate biomarkers in breast cancer,” Adv. Drug Delivery Rev. 69-70, 42–51 (2014). [CrossRef]  

16. C. Feder-Mengus, S. Ghosh, A. Reschner, I. Martin, and G. C. Spagnoli, “New dimensions in tumor immunology: what does 3D culture reveal?” Trends Mol. Med. 14(8), 333–340 (2008). [CrossRef]  

17. J. Eyckmans and C. S. Chen, “3D culture models of tissues under tension,” J. Cell Sci. 130(1), 63–70 (2016). [CrossRef]  

18. N. Bivi, K. W. Condon, M. R. Allen, N. Farlow, G. Passeri, L. R. Brun, Y. Rhee, T. Bellido, and L. I. Plotkin, “Cell autonomous requirement of connexin 43 for osteocyte survival: Consequences for endocortical resorption and periosteal bone formation,” J. Bone Miner. Res. 27(2), 374–389 (2012). [CrossRef]  

19. A. R. Stern, M. M. Stern, M. E. van Dyke, K. Jähn, M. Prideaux, and L. F. Bonewald, “Isolation and culture of primary osteocytes from the long bones of skeletally mature and aged mice,” BioTechniques 52(6), 361–373 (2012). [CrossRef]  

20. J. L. Bailey, P. J. Critser, C. Whittington, J. L. Kuske, M. C. Yoder, and S. L. Voytik-Harbin, “Collagen oligomers modulate physical and biological properties of three-dimensional self-assembled matrices,” Biopolymers 95(2), 77–93 (2011). [CrossRef]  

21. S. P. Pathi, D. D. W. Lin, J. R. Dorvee, L. A. Estroff, and C. Fischbach, “Hydroxyapatite nanoparticle-containing scaffolds for the study of breast cancer bone metastasis,” Biomaterials 32(22), 5112–5122 (2011). [CrossRef]  

22. S. C. Wong, A. Gatt, V. Stamatescu, and M. D. McDonnell, “Understanding data augmentation for classification: when to warp?” in 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA) (2016).

23. T. C. Yu, “Automatic Bone Marrow Cell Identification and Classification By Deep Neural Network,” Blood 134(Supplement_1), 2084 (2019). [CrossRef]  

24. P. Frasconi, L. Silvestri, P. Soda, R. Cortini, F. S. Pavone, and G. Iannello, “Large-scale automated identification of mouse brain cells in confocal light sheet microscopy images,” Bioinformatics 30(17), i587–i593 (2014). [CrossRef]  

25. A. E. Carpenter, T. R. Jones, and M. R. Lamprecht, “CellProfiler: image analysis software for identifying and quantifying cell phenotypes,” Genome Biol. 7(10), R100 (2006). [CrossRef]  

26. J. Ma, Z. Shen, Y. C. Yu, and S. H. Shi, “Neural lineage tracing in the mammalian brain,” Curr. Opin. Neurobiol. 50(1), 7–16 (2018). [CrossRef]  

27. C. Gil-Sanz, “Lineage Tracing Using Cux2-Cre and Cux2-CreERT2 Mice,” Neuron 86(4), 1091–1099 (2015). [CrossRef]  

28. L. He, L. R. Long, S. Antani, and G. R. Thoma, “Histology image analysis for carcinoma detection and grading,” Comput. Methods Programs Biomed. 107(3), 538–556 (2012). [CrossRef]  

29. M. T. McCann, J. A. Ozolek, C. A. Castro, B. Parvin, and J. Kovačević, “Automated Histology Analysis: Opportunities for signal processing,” IEEE Signal Process. Mag. 32(1), 78–87 (2015). [CrossRef]  
