Increasing a microscope’s effective field of view via overlapped imaging and machine learning

Open Access

Abstract

This work demonstrates a multi-lens microscopic imaging system that overlaps multiple independent fields of view on a single sensor for high-efficiency automated specimen analysis. Automatic detection, classification and counting of various morphological features of interest is now a crucial component of both biomedical research and disease diagnosis. While convolutional neural networks (CNNs) have dramatically improved the accuracy of counting cells and sub-cellular features from acquired digital image data, the overall throughput is still typically hindered by the limited space-bandwidth product (SBP) of conventional microscopes. Here, we show both in simulation and experiment that overlapped imaging and co-designed analysis software can achieve accurate detection of diagnostically-relevant features for several applications, including counting of white blood cells and the malaria parasite, leading to a multi-fold increase in detection and processing throughput with minimal reduction in accuracy.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Automatic analysis of cells, microorganisms, or other subcellular features within microscope images is essential for a wide range of biomedical and diagnostic applications. Over the past several years, the application of convolutional neural networks (CNNs) has dramatically improved the performance of counting tasks in various scenarios, including cancer and tumor diagnosis [1,2], infectious disease detection [3,4], and computerized automation of complete blood cell (CBC) counting [5]. However, the overall efficiency of automated microscope-based cell counting is still constrained by the limited field of view (FOV) of conventional microscopes, especially for tasks that require high-resolution images [6]. This limitation is compounded by the fact that, for many applications, the diagnostically-relevant features of interest are highly sparse. For example, to diagnose infection with the malaria parasite (Plasmodium falciparum) [7,8], a trained expert often needs to examine a blood smear with a 100$\times$ microscope objective across several hundred fields of view [9], with the aim of visually identifying just one or a few parasites [6], which can lead to a serious bottleneck within the diagnostic pipeline.

A core reason why we cannot simultaneously obtain high spatial resolution over a large FOV stems from lens aberrations. The higher the desired spatial resolution, the more difficult it is for lens designers to correct for aberrations at off-axis positions. As a result, most standard microscopes are limited to a space-bandwidth product (SBP) of tens of megapixels [10]. Current solutions to the limited-SBP problem include whole slide imaging (WSI) scanners [11,12], Fourier ptychographic microscopy (FPM) [13-15], and multi-aperture systems [16,17]. However, WSI requires high-precision mechanical scanning of the sample or the imaging system [18], which in turn may require repeated focus adjustments, making the entire process time consuming and expensive [19].

Digital super-resolution techniques have also been proposed that can be applied to enhance the SBP of imaging systems. These include classical pixel-super-resolution approaches that involve solving an inverse problem based on multiple low-resolution frames [20], as well as more recently proposed data-driven, deep learning-based single-image super-resolution techniques [21-28], including dataset-free approaches [29,30]. These deep-learning-based approaches have also been recently applied to microscopy [31,32]. While such single-shot techniques can improve interpretability, they do not inherently increase the information content captured within a lower-resolution image.

In this work, we propose a new solution that overlaps multiple microscope images from different FOVs onto a single image sensor. After acquiring a single snapshot, a machine learning algorithm then extracts relevant information over the larger effective FOV for cell counting tasks. Our method takes advantage of two qualities of many cell counting tasks. First, the targets themselves are often sparse and thus have a low probability of overlapping with one another in the composite image. Second, the positions of the cells are not important, and thus, unlike previous overlap-based multiplexed imaging approaches [33-35], we do not need to reconstruct the extended FOV; rather, the CNN can be trained and deployed on image patches [36]. Thus, in forming superimposed images, our approach is efficient not only in data capture but also in computational analysis. We demonstrate our approach experimentally on the specific clinical tasks of white blood cell (WBC) [37,38] and malaria parasite counting [39].

2. Proposed method

2.1 Principle and design of overlapped imaging

Figure 1 presents an overview of the general principle of overlapped imaging for rapid classification and counting. Instead of relying on a single objective lens, our goal here is to use an array of sub-lenses, each of which images a unique, independent FOV onto a common sensor. With $n$ sub-lenses, our approach can in principle capture light from an $n$ times larger FOV as compared with a standard microscope, albeit from disjoint slide areas (i.e., FOVs that are not necessarily directly adjacent). While the individual sub-images overlap and thus reduce image contrast, experimental results show this approach is still effective for tasks where the goal is to search for sparsely distributed features across a relatively uniform or repetitive background, such as detecting WBCs or malaria parasites from blood smears.


Fig. 1. (a) Pipeline of overlapped microscope imaging. Multiple independent sample FOVs are illuminated by LEDs and then imaged onto a common sensor through a multi-lens array to form an overlapped image. A CNN-based post-processing framework is then employed to identify target features or objects within the overlapped image. (b) i. Two single-FOV images (FOV1, FOV2) of a hematological sample, ii. the lens array, and iii. the overlapped image ($n$ = 2) for the WBC counting task. (c) Overlapped imaging setup. Left: imaging geometry with the sample imaged by three lenses and overlapped on a single image sensor. Right: FOVs of 7 lenses marked as gray circles at the sample plane, where FOVs of diameter $a_o$ are separated by lens pitch $w$ and do not overlap. At the image plane, FOVs denoted by red circles of diameter $a_i$ have significant overlap. Marked variables are listed in Table 1.


For most of our experiments, we used $n=7$ sub-lenses (Fig. 1(b)(ii)) to image up to seven unique FOVs, as this number of lenses leads to an efficient hexagonal packing geometry and offers a good balance between the increase in effective FOV and the proportional decrease in dynamic range. In our imaging configuration, all lenses are placed in a plane parallel to the image sensor (Sony IMX477R) to ensure all the FOVs are approximately in focus at the same plane. The sub-lens size, spacing, object distance, and image distance (detailed in Table 1) were chosen to ensure that the individual image from every sub-lens covers the majority of the sensor. While WBC and malaria parasite detection tasks are traditionally implemented with microscopes of more than 40$\times$ magnification, here we used a slightly lower resolution (corresponding to 25$\times$ magnification), based upon findings in our prior work [40].


Table 1. Table of parameters, overlapped microscope

Specifically, we set the working distance to the microscope slide, $d_o$, to 1.2 mm and the image distance, $d_i$, to 30 mm, creating $M=25\times$ magnification sub-images. The diameter of each sub-FOV at the object plane, $a_o$, and its associated diameter at the image plane, $a_i$, are set by the selected lens and obey the relationship $a_i = M a_o$. In our experiments, $a_o$ = 0.84 mm and $a_i$ = 21 mm. The distance between each lens within our lens array, $w = 3.7$ mm, defines the distance between the center of the FOV of each sub-lens at the sample plane. Each sub-FOV of diameter $a_o$ = 0.84 mm was thus separated from adjacent sub-FOVs by approximately $w = 3.7$ mm as well (Fig. 1(b)(i,iii)). Using $n$ = 7 lenses in total leads to a total array diameter of approximately $3w = 11$ mm, which approximately matches the width of a typical blood smear slide (typically around 1.5 cm $\times$ 2 cm). Pinhole illumination arrays are also used to prevent cross-talk of light from individual FOVs. A complete list of the parameters used in our initial experiments is presented in Table 1.
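As a quick numerical check of these design values, the magnification and sub-FOV relations above can be evaluated directly; the short Python sketch below is an illustration, not code used in this work:

d_o, d_i = 1.2, 30.0     # object and image distances, in mm
w, a_o = 3.7, 0.84       # lens pitch and sub-FOV diameter at the sample plane, in mm

M = d_i / d_o            # magnification (ratio of image to object distance): 25
a_i = M * a_o            # sub-FOV diameter at the image plane: 21 mm
array_diameter = 3 * w   # a 7-lens hexagonal array spans roughly 3 lens pitches: ~11 mm

print(f"M = {M:.0f}x, a_i = {a_i:.0f} mm, array diameter ~ {array_diameter:.1f} mm")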

Although overlapped imaging increases the effective FOV of our microscope, the general approach suffers from two disadvantages. First, the dynamic range of each sub-image, $D$, decreases with an increase in the image overlap number, $n$. Image sensors exhibit a limited total dynamic range, $D_t$. Typically, $D_t=256$ grayscale values per pixel for an 8-bit sensor, so to avoid sensor saturation, these values must be divided amongst all sub-images. On average, we can expect the sub-image dynamic range to be $D = D_t/n$. For example, with $n$ = 7 overlapped images, an 8-bit sensor can only dedicate 36 - 37 grayscale values to each sub-image. For tasks where grayscale variations are important for accurate classification, it is beneficial to perform overlapped imaging with either a high-dynamic-range sensor or a high-dynamic-range capture strategy on a standard sensor. Second, the signal-to-noise ratio (SNR) of each sub-image decreases with an increase in the image overlap number, $n$. Assuming that the image sensor can capture $m$ photons before saturation when detecting just a single non-overlapped image ($n$ = 1), there are on average $m/n$ photons per sub-image. Assuming shot noise as the dominant noise source, the SNR per sub-image scales with the square root of the number of photons per sub-image, i.e., it drops as $1/\sqrt{n}$.
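These two penalties can be tabulated directly; the short sketch below (an illustration, not code used in this work) lists the expected per-sub-image dynamic range $D = D_t/n$ and the relative shot-noise-limited SNR $1/\sqrt{n}$ for an 8-bit sensor:

import numpy as np

D_t = 256                       # total grayscale levels for an 8-bit sensor
for n in range(1, 8):
    D = D_t / n                 # average grayscale levels available per sub-image
    rel_snr = 1 / np.sqrt(n)    # shot-noise-limited SNR relative to the n = 1 case
    print(f"n = {n}: ~{D:.0f} levels per sub-image, relative SNR = {rel_snr:.2f}")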

As we will see, these disadvantages nonetheless may not dramatically impact the accuracy of current deep learning-based classification for a wide selection of tasks. Images often contain a high amount of redundancy, especially when the end goal is a global classification decision. To verify this idea, we show that WBCs and malaria parasites can still be accurately detected from overlapped images with extended FOVs, both in simulated data with a realistic noise model and in experimental data from our prototype microscope.

2.2 Deep learning-based cell detection

In this work, we adopted a 10-layer VGG-like [41] framework with He-normal initialization [42] and leaky ReLU activations to detect target objects from acquired overlapped image data. We use four sets of convolutional filters, with each set containing two 2D convolutions, the second of which uses a stride greater than one to reduce the spatial dimensions of the tensor. The architecture of the CNN remained nearly the same across both tasks. The full details of the network are provided in Appendix C.
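For illustration, a minimal Keras sketch of such a network is shown below. The filter counts and dense-layer width here are placeholders, not the values used in our experiments; the exact layer dimensions are listed in Table 2 (Appendix C):

import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(input_shape=(96, 96, 3), base_filters=16):
    # Sketch of a VGG-like binary classifier in the spirit of Sec. 2.2:
    # four sets of two convolutions, the second of each set strided,
    # He-normal initialization and leaky ReLU activations throughout.
    x_in = layers.Input(shape=input_shape)
    x = x_in
    for i in range(4):
        f = base_filters * 2 ** i
        x = layers.Conv2D(f, 3, padding="same",
                          kernel_initializer="he_normal")(x)
        x = layers.LeakyReLU()(x)
        x = layers.Conv2D(f, 3, strides=2, padding="same",   # strided conv halves H and W
                          kernel_initializer="he_normal")(x)
        x = layers.LeakyReLU()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(64, kernel_initializer="he_normal")(x)
    x = layers.LeakyReLU()(x)
    out = layers.Dense(1, activation="sigmoid")(x)            # target present / absent
    return models.Model(x_in, out)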

In a number of learning-based tasks, annotation of regions of interest (ROIs), such as bounding boxes and segmentation masks, is required. To avoid the relatively tedious process of segmentation mask annotation, and to keep our model as widely applicable as possible to different input data types without requiring specialized annotations, we adopted a binary-classification framework for image post-analysis. During training, we utilized image patches as input. During testing and experimental use across full-FOV image data, we then generated classification heatmaps via a standard sliding window approach [43-45], which is widely used for spatial density prediction. When generating the whole-FOV heatmaps on the test images, we selected image patches using a sliding window across the whole FOV with a single-pixel stride. Each image patch is fed into the trained CNN, which outputs a single prediction. The resulting heatmap displays this prediction probability at each location, and is thus a pixel-wise probability of the presence of the target object across the whole FOV of the overlapped image, from which various statistics, including counts, are subsequently derived.
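A minimal sketch of this sliding-window heatmap generation is shown below. It is a simplification: the stride is left configurable here, whereas a single-pixel stride was used in practice, and `model` is assumed to be a trained Keras classifier that maps a batch of patches to per-patch probabilities:

import numpy as np

def classification_heatmap(image, model, patch=96, stride=1):
    # Slide a patch-sized window over the full-FOV image and record the
    # CNN's predicted probability of the target at each window position.
    H, W = image.shape[:2]
    n_rows = (H - patch) // stride + 1
    n_cols = (W - patch) // stride + 1
    heat = np.zeros((n_rows, n_cols))
    for i, r in enumerate(range(0, H - patch + 1, stride)):
        patches = np.stack([image[r:r + patch, c:c + patch]
                            for c in range(0, W - patch + 1, stride)])
        heat[i] = model.predict(patches, verbose=0)[:, 0]   # one row of probabilities
    return heat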

3. Simulation procedures and datasets

3.1 Simulation of overlapped images with correct noise statistics

We first demonstrated our method in simulation by digitally overlapping images (Fig. 2). For a given number of lenses $n$, we created synthetic overlapped imaging datasets by digitally averaging $n$ images, yielding $X_{avg}$. To counteract the resulting artifactual $\sqrt {n}$ improvement in SNR, which is expected when averaging $n$ independent measurements, we added Gaussian noise $Z$, whose standard deviation was scaled based on the averaged value at each pixel location. Finally, the resulting image was quantized to 8 bits, simulating sensor digitization. See Appendix A for noise model details.
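For concreteness, the following Python sketch illustrates this digital-overlap procedure. It is an illustration rather than our exact implementation, and the well-depth value used here is an assumed placeholder rather than a measured sensor parameter:

import numpy as np

def simulate_overlap(patches, n_bit=8, well_depth=10000, rng=None):
    # `patches` is a list of n single-FOV images (float arrays in digital units,
    # i.e., within [0, 2**n_bit)). The pixel-adaptive Gaussian noise restores the
    # shot-noise statistics of a physically overlapped capture (Eq. (1) in Appendix A).
    rng = np.random.default_rng() if rng is None else rng
    n = len(patches)
    x_avg = np.mean(patches, axis=0)                       # digital average of n sub-FOVs
    sigma2 = x_avg * (1 - 1 / n) * 2 ** n_bit / well_depth # per-pixel noise variance
    noisy = x_avg + rng.normal(0.0, np.sqrt(sigma2))
    # Quantize to the sensor's bit depth, simulating digitization.
    return np.clip(np.round(noisy), 0, 2 ** n_bit - 1).astype(np.uint8)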


Fig. 2. Simulation of overlapped imaging for malaria parasite identification. (a) High-resolution, large-FOV images of malaria-infected thick blood smears (from SimuData) were cropped into $96\times 96$ patches, and labeled as positive or negative based on the presence of the malaria parasite. (b) We generate digitally overlapped images by averaging patches and adding pixel-adaptive Gaussian noise to compensate for an artificially inflated SNR. Malaria-positive overlapped patches consisted of 1 malaria-infected patch and $n-1$ negative patches.


For each cell-counting task, we dynamically overlapped small image patches drawn from the available FOVs. The overlapped images were assigned labels based on the presence of target objects in at least one of the sub-FOV images. Figure 2(b) illustrates this process for the $n$ = 3 case. Our synthetic approach allowed us to easily vary the number of overlapped images. We capitalized on this degree of freedom to characterize the performance of our system as a function of the number of overlapped images for each task. This method of synthetically creating overlapped images also serves to augment our data for CNN training – by varying which regions are overlapped, the number of unique overlapped images increases combinatorially with $n$.
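A compact sketch of this dynamic example generation is given below; the helper names and sampling details are illustrative assumptions, and `overlap_fn` stands in for a digital-overlap routine such as the simulate_overlap sketch in Sec. 3.1:

import numpy as np

def make_overlapped_example(pos_patches, neg_patches, n, positive, overlap_fn, rng):
    # A 'positive' overlapped example mixes one target-containing patch with
    # n-1 negative patches; a 'negative' example uses n negative patches.
    if positive:
        first = [pos_patches[rng.integers(len(pos_patches))]]
        rest = rng.choice(len(neg_patches), n - 1, replace=False)
    else:
        first = []
        rest = rng.choice(len(neg_patches), n, replace=False)
    chosen = first + [neg_patches[i] for i in rest]
    return overlap_fn(chosen), int(positive)   # (overlapped patch, binary label)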

3.2 Datasets for cell-counting tasks

3.2.1 SimuData

We first conducted a simulation experiment to study the effectiveness of our method for automatically detecting malaria parasites in thick blood smears. This task is based on an open-source dataset [46] that we term SimuData, in which thick blood smears were stained and imaged via a modified mobile phone camera. This dataset contains 1800 large-FOV images captured at 100$\times$ magnification, each with a pixel count of 3024$\times$4032, from 150 individual patients. Each image was labeled by an expert to indicate where in the FOV the malaria parasite was visible. This task represents an ideal scenario for overlapped imaging, given the high contrast and sparsity of the parasites. The patients were split into training, validation, and test sets ($70\%$, $15\%$, and $15\%$ of patients, respectively). For the training and validation datasets, square regions of 96$\times$96 pixels were extracted from the full FOVs (1400 for training, 600 for validation) and marked infected if the parasite annotation lay within the inner third (32$\times$32 pixels) of the image. The datasets were balanced, so that the ratio of infected to uninfected patches was 1:1.
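The patch-labeling rule can be expressed compactly as follows (an illustrative sketch; the coordinate convention for patches and annotations is our assumption):

def label_patch(patch_row, patch_col, parasite_coords, patch=96, inner=32):
    # Mark a patch as infected if any annotated parasite falls within its
    # central `inner` x `inner` region; (patch_row, patch_col) is the patch's
    # top-left corner, and parasite_coords lists (row, col) annotations.
    r0 = patch_row + (patch - inner) // 2
    c0 = patch_col + (patch - inner) // 2
    return any(r0 <= r < r0 + inner and c0 <= c < c0 + inner
               for (r, c) in parasite_coords)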

3.2.2 DukeData

Using the proposed overlapped imaging system (Fig. 1) with the physical parameters specified in Table 1, we collected images from Wright-stained human peripheral blood smear slides (Carolina) to form the DukeData dataset. The task of interest was to automatically identify and count WBCs within images acquired with different amounts of overlap. For experimental data collection, we first imaged peripheral blood smear slides with the seven sub-lenses of our microscope individually, using the illumination provided by a single white LED for each captured image. This produced a set of seven non-overlapped images per specimen position. Subsequently, from the seven captured “sub-images”, we cropped and resized 500 WBC and 500 RBC patches to be $201\times 201$ pixels in size to form a “single-lens” dataset. This dataset was used to create synthetically overlapped image data using the procedure described above for SimuData, with which we trained CNNs for a variable number of overlaps $n$. The simulated datasets were split into training and validation sets with a ratio of 7:3.

Finally, $n$ = 2-7 sub-lenses were used to simultaneously image and capture physically overlapped images to form the experimental datasets. We captured 35 groups of data, where one “group” of data includes 7 non-overlapped, single-FOV images and 6 overlapped images with different levels of overlap ($n$ = 2-7, obtained by blocking sub-apertures). The non-overlapped, single-FOV images were used to provide accurate annotations for the locations of WBCs in the corresponding overlapped images. In these 35 groups, a total of 43 WBCs were observed. A sliding window approach with a 10-pixel step size was used to split the whole image into $201\times 201$-pixel patches. Patches containing whole WBCs were labeled as positive, while patches containing only RBCs and background were labeled as negative. The CNNs trained on synthetically overlapped data were applied to these experimentally-overlapped data to evaluate performance.

4. Results

4.1 Simulation results

We first investigated the impact of overlapped imaging (i.e., the resulting reduced contrast and SNR) on classification accuracy in simulation with a malaria parasite counting task (SimuData). We characterized classification performance across a wide range of $n$ by digitally adding images and noise as discussed above and in Appendix A (Fig. 3). Figure 3(a) shows how the number of overlapped images impacts the classification task performance. In particular, we can see that although performance degrades roughly linearly with the number of overlapped images for detecting the malaria parasite, a detection accuracy above 80% is still maintained at $n$ = 7 for both the training and validation sets.


Fig. 3. Results of overlapped malaria parasite counting task (SimuData). (a) Task performance for all non-overlap and overlap conditions. (b) ROC curves for all overlap conditions.


A receiver operating characteristic (ROC) curve for classifying the malaria parasite for $n$ = 1 - 7 overlapped images is shown in Fig. 3(b), with the area under the curve (AUC) for each curve displayed in the legend. The ROC gives a different perspective on task performance, showing that a high true positive rate can be achieved at a relatively low false positive rate, even for the highly overlapped condition of $n$ = 7. The simulation results thus show that, even with a considerable degree of overlap, the CNN model can still identify targets with relatively high accuracy.

4.2 Overlapped imaging system characterization

We characterized the resolution of our overlapped imaging system by capturing non-overlapped images of a USAF target (Fig. 4(a)-(c)). First, we used an iris to restrict the incident light to illuminate the sample beneath just one sub-lens at a time. This allowed us to effectively capture each of the seven sub-images, one at a time, by simply moving the position of the iris. An example segment of one image of the resolution target, positioned beneath and imaged through the center sub-lens, is shown in Fig. 4(a). Here, we can resolve group 8 element 6, demonstrating a maximum full-pitch resolution of approximately 2 $\mathrm {\mu }$m. We note here that the illumination source (a single white LED placed 10 mm away) provides spatially coherent illumination to the sample, suggesting that it may be possible to improve image resolution using a lower-coherence source. At the same time, our source has a spatial coherence length less than $w$, thus ensuring that the overlapped image is an incoherent superposition of all sub-images.

With the resolution target in the same position, we then opened the iris to illuminate the entire resolution target slide, imaging through all 7 sub-lenses and capturing an $n$ = 7 overlapped image (Fig. 4(a), bottom). As our USAF target slide only contained features across a 0.5 mm diameter area, most of the other sub-images that contribute to this overlapped image do not contain any obvious features and simply decrease the image contrast, as expected. We quantified the decrease in image contrast by taking traces through the resolution target at similar locations (group 8, element 1, shown as the colored horizontal line in each image). These trace values are plotted in Fig. 4(d) (averaged over 20 pixel rows). We used the maximum difference between peak and valley in each trace curve to define the contrast in the corresponding image. Here, a normalized contrast of 0.9 for the single non-overlapped image (Fig. 4(d), top) dropped to approximately 0.15 for the $n$ = 7 overlapped image (Fig. 4(d), bottom), roughly as expected. Deviations from the expected image contrast drop of 7$\times$ may be attributed to a slightly non-uniform brightness of each sub-image across the image plane. We repeated this resolution target imaging experiment for all 7 of the sub-lenses, shifting both the resolution target (to lie beneath each sub-lens) and the iris (to selectively illuminate the sample) to 7 unique positions. At each position, we captured a non-overlapped image and then opened the iris to capture an $n$ = 7 overlapped image. Results from performing this experiment with two other sub-lenses in our lens array are shown in Fig. 4(b)-(c). We can see that the resolution is approximately constant over the entire sub-FOV of each sub-lens, with a cutoff resolution of approximately 2 $\mathrm {\mu }$m (see blue boxes). Furthermore, the contrast drop for each sub-lens is approximately constant at $6\times$. However, the sub-lenses do contain some non-uniform intensity variations, which we attribute to imperfections in the mounting process, as well as to non-uniformities across the sample.
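For clarity, the contrast metric used above can be written as a short function (a sketch; normalizing by the sensor's 8-bit full scale is our assumption for how the contrast values were normalized):

import numpy as np

def normalized_contrast(trace, full_scale=255.0):
    # Maximum peak-to-valley difference of an intensity trace, normalized to the
    # sensor full scale; the trace is assumed to already be averaged over 20 pixel
    # rows, as in Fig. 4(d).
    trace = np.asarray(trace, dtype=float)
    return (trace.max() - trace.min()) / full_scale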


Fig. 4. Overlapped images of a resolution target determine system resolution and contrast. The resolution target is positioned beneath one sub-lens at a time, with all other sub-lenses blocked, for non-overlapped image capture. Example non-overlapped images from (a) the center sub-lens, (b) the left-bottom sub-lens, and (c) the right-top sub-lens all exhibit an approximately 2-$\mathrm {\mu }$m full-pitch resolution (see blue boxes). $n$ = 7 overlapped images (bottom row), captured by illuminating all sub-FOVs at each resolution target position, demonstrate that the sample is still visible. (d) Traces through group 8 element 1 (colored lines, averaged over 20 rows) show approximately 6$\times$ higher contrast for the non-overlapped images (top) versus the corresponding overlapped images (bottom).


4.3 Experimental results

As a preliminary investigation of automated cell counting with our experimental microscope for overlapped imaging, we collected and processed the DukeData dataset (see Sec. 3.2), with results shown in Figs. 5-7. When only the center lens is used, the system has a resolution comparable to that of a standard 25$\times$ microscope, allowing our proposed system to maintain crucial morphological features of different types of WBCs (see Fig. 5(a)).


Fig. 5. WBC classification accuracy from digitally overlapped experimental data. (a) Different types of WBCs in DukeData, imaged by the central lens of our proposed microscope. (b) Example digitally overlapped images from experimentally acquired DukeData, varying from $n$ = 1 - 10. (c) Classification accuracy versus number of overlapped images.



Fig. 6. Results for the WBC counting task using experimentally-overlapped data, reported as ROC curves (a) and confusion matrices (b-h) aggregated under various degrees of overlap ($n$ = 1-7). Solid circles on each ROC curve denote the true positive rate (TPR) and false positive rate (FPR) corresponding to the threshold at which the confusion matrices were calculated, which was chosen to maximize the geometric mean of TPR and true negative rate (TNR).



Fig. 7. Classification heatmaps for experimentally-overlapped images using CNNs trained on digitally-overlapped data. (a) Example single-FOV images captured by the proposed setup. (b) Overlapped images ($n$ = 1-4) captured by the proposed setup. The contributing FOVs from (a) for each overlapped image in (b) are tagged in the upper left corners. The bottom row shows the classification heatmaps generated by the CNNs overlaid on top of the overlapped images. Dotted circles identify the ground-truth locations of WBCs.


As a first test, we digitally overlapped DukeData images and examined WBC classification accuracy as a function of $n$ (see Fig. 5). As shown in Fig. 5(b), the distinction between a WBC and red blood cells (RBCs) or other background material becomes visually unclear at $n$ = 3 or 4 overlapped images. However, our CNN classifier still maintains 95% accuracy at $n$ = 5, with accuracy decreasing monotonically with increasing overlap (Fig. 5(c)), as expected. Due to dynamic augmentation during training, the validation accuracies in this series of tests are slightly higher than the training accuracies. These trends are consistent with those observed in the malaria parasite counting task (Fig. 3).

Next, we attempted to automatically identify WBCs in experimentally-overlapped images, using CNN models pre-trained on digitally-overlapped data (as described in Sec. 3.2). A unique CNN model was used for each value of $n$. Each model was independently trained 3 times with different random seeds, with their predictions averaged. Results are reported as both ROC curves (Fig. 6(a)) and confusion matrices (Fig. 6(b)-(h)) for each overlap condition ($n$ = 1-7). For the confusion matrices, we chose the threshold for the CNN outputs that maximizes the geometric mean of the true positive rate and the true negative rate [47], which accounts for dataset imbalance. In this experiment, we obtained the following aggregate detection accuracies for $n$ = 1 - 7: 96.7%, 89.7%, 81.7%, 68.9%, 71.1%, 64.0%, and 59.6%.
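The threshold-selection rule can be implemented in a few lines, as sketched below using scikit-learn (an illustration, not our exact analysis code):

import numpy as np
from sklearn.metrics import roc_curve

def gmean_threshold(y_true, y_score):
    # Choose the operating point that maximizes the geometric mean of the
    # true positive rate and the true negative rate, sqrt(TPR * (1 - FPR)),
    # which is robust to class imbalance.
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    gmeans = np.sqrt(tpr * (1 - fpr))
    return thresholds[np.argmax(gmeans)]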

Models obtained relatively high true positive rates for low overlap ($n$ = 1 - 4: 94.0%, 84.5%, 74.5%, 74.0%), with overall performance generally decreasing with $n$, which is consistent with our previous simulation results. The ROC curves show similar trends (Fig. 6(a)). We also observed that the detection accuracies in our experimentally overlapped imaging results were lower compared to the performance with digitally overlapped experimental image data (Fig. 5). This could be attributed to non-uniform brightness and imperfections in the optical assembly across the different sub-lenses in our first experimental prototype, and to the fact that training was only performed on digitally overlapped imagery, so that our CNN did not adapt to such imperfections specific to our setup. While we did take measures to mitigate the effect of these optical imperfections by ensuring consistency of contrast and resolution for each lens (Fig. 4), in the future we aim to improve upon our optical hardware and optimize other aspects of the system, such as the illumination.

In a third set of experiments, we applied our trained CNNs to entire images via a sliding window to generate co-registered classification heatmaps (Fig. 7). Figure 7(a) shows the non-overlapped images collected by the 7 individual sub-lenses of the proposed system, which were used for ground-truth WBC annotations (dotted circles). Figure 7(b) shows the overlapped images experimentally captured by the proposed microscope, with the classification heatmaps overlaid in the bottom row. These results qualitatively confirm that the models trained with digitally overlapped data can still identify the WBCs under up to $n$ = 4 overlap for subsequent counting, despite decreased contrast, albeit at reduced accuracy. In future work, we aim to investigate post-processing strategies to facilitate high-accuracy cell counting from such acquired overlapped image heatmaps.

5. Discussion and conclusion

In this work, we have demonstrated a new imaging system that can capture and overlap images of multiple independent FOVs on a common detector, which may offer significant speed-ups for tasks requiring analysis of large FOVs. For the malaria parasite and WBC counting tasks, we investigated the relationship between CNN-based classification accuracy and the number of overlapped images. For current CMOS image sensors, we showed that it is in principle possible to overlap up to 4 images while maintaining over 90% accuracy on standard 8-bit detectors. We then presented initial results from our prototype hardware system, which can capture 2-$\mathrm {\mu }$m resolution images over several square millimeters of area using a set of seven small lenses. The resolution and imaging FOV of one sub-lens of our first prototype are comparable to those of a standard 25$\times$ objective lens. We also showed that CNNs trained entirely on synthetically overlapped data are able to generalize to experimentally overlapped data for the WBC counting task.

It is worth comparing the proposed overlapped imaging system to conventional microscopes that image at lower resolution but with a larger FOV, similar to an overlapped microscope’s total effective FOV. For most diagnostic tasks, resolution is critical for identifying the specific distinguishing biomarkers or subcellular morphological features. For example, the malaria-counting task featured in our work often requires imaging systems with 100$\times$ magnification to identify the malaria parasites [9]. High resolution is also important for accurate WBC counting [48], and even more so for differentiating the WBC type, which is essential for diagnosing certain diseases [49]. Put another way, in our overlapped imaging system, the high spatial frequencies comprising the target of interest, though mixed with other information, are nonetheless still present and can potentially be extracted for accurate classification. Even though high SNR is important in many cases [50], the reduction of contrast and SNR from overlapping images can be compensated for by leveraging neural networks that are robust to noise, or by using high-dynamic-range image capture strategies. Finally, although there are high-SBP, albeit expensive, objectives that can cover the equivalent FOV without loss in resolution, our approach offers a low-cost alternative.

It is also necessary to further analyze the various factors that influence detection accuracy. Intuitively, the detection accuracy on overlapped images should be negatively correlated with the number of overlapping fields of view, the expected size of the target of interest, and the target density, due to the increasing probability of spatially overlapped targets in the final overlapped image. In Appendix B, we simulate the probability that WBCs at different distribution densities remain non-overlapped with one another under ideal imaging conditions. The simulation result (shown in Fig. 8) suggests that one should aim to capture fewer than approximately 10 WBCs in a final overlapped image to minimize the chance of spatially overlapped targets. In our experiments, we observed that the number of WBCs captured per snapshot rarely surpassed 4 in the final overlapped image ($n$ = 2 - 7). In future work, we plan to further explore this relationship by quantifying the effects of the above-mentioned factors on the detection accuracy, and to attempt to improve the robustness of the CNN to overlapping targets by including them during network training.

We also note that there is a gap in performance between our experimental results (Fig. 6 and Fig. 7) and simulation-based results (Fig. 3 and Fig. 5), in that the classification performance drops more rapidly as a function of $n$ for the former than for the latter. Further, the false positive rates reported in Fig. 6 for higher values of $n$ are relatively high, leaving room for improvement before our technique can be deployed within a diagnostic device. To improve the performance of our initial proof-of-concept prototype and decrease false positive rates, we aim to employ higher-quality lenses and introduce focus-tuning mechanisms in future designs, making it easier to ensure that the focal planes of all sub-lenses coincide. By optimizing the hardware, we expect to close the gap in performance between the simulation results and the experimental results. Furthermore, the sliding window approach that we used is subject to a trade-off between spatial resolution and accuracy, since the smaller the window size, the less context the CNN has for classification. In future work, we will address this issue by casting these counting tasks as segmentation tasks and training models to predict pixel-wise segmentations. We also plan to take advantage of computational microscopy techniques for optimizing the illumination parameters and improving the detection accuracy [40,51-54].

There are a number of avenues for future work that extend our proof-of-principle experiments. Apart from exploring different CNN architectures, we could consider other types of annotation appropriate for cell-counting tasks, such as global counts [55]. In addition to sparse cell-counting tasks, we envision our technique being applied to other similar tasks, for example identifying defects in otherwise pristine surfaces such as semiconductor wafers. Alternative hardware implementations may also improve performance for such tasks. For example, while our current prototype uses brightfield illumination, it may be advantageous to use darkfield illumination, which would substantially reduce the background and thus improve the contrast of the overlapped images. Further, using higher-dynamic-range sensors could compensate for the contrast lost due to overlapping. Another potential direction is to use the recently proposed random access parallel microscopy setup [56], which images multiple FOVs sequentially with a single parabolic mirror as a common tube lens. Such a setup has the advantage that all sub-images overlap completely on the sensor.

We are hopeful that our initial demonstration will encourage additional exploration into the various benefits of overlapped imaging, in particular when coupled with machine learning to automatically process the acquired data. As machine learning techniques continue to hit impressive benchmarks, we believe that our approach can leverage successive advances and inspire new research into high-throughput imaging devices that do not necessarily clearly resolve an entire scene or sample, but can still excel at specific tasks.

Appendix A

Here, we provide additional details about the noise model used to ensure our synthetically overlapped image data exhibits an experimentally accurate SNR. We prove that adding the Gaussian random variable,

$$Z[i,j]\sim \mathcal{N}(\mu=0,\sigma^{2}=X_{avg}[i,j](1-1/n)2^{n_{bit}}/v),$$
pixel-wise (indexed by $i$ and $j$) to the digitally superimposed images produces an image with the correct noise statistics in the shot noise limit. The factor of $2^{n_{bit}}/v$ in Eq. (1), where $v$ is the pixel well depth, simply converts a photoelectron count to an $n_{bit}$-bit digital value.

Let $x_q\sim {Pois}(\lambda_q)$ be the number of photons coming from the $q^{\text{th}}$ FOV, which follows a Poisson distribution with an unknown rate parameter $\lambda_q$. Then the total number of photons $x_{real}$ detected from all $n$ fields of view is $x_{real}\sim {Pois}(\Lambda)$, where $\Lambda =\sum_{q=1}^{n} \lambda_q$. Thus, $E(x_{real})={Var}(x_{real})=\Lambda$, which are the desired target statistics. However, in the case of our digitally simulated overlapped images, we collect $n$ single-FOV images with $n$ times larger illumination intensity, such that $x_q' \sim {Pois}(n\lambda_q)$. Then, as described in the main text, the simulated value is the average, $x_{sim}=\frac{1}{n}\sum_{q=1}^{n} x_q'$, where we ignore discretization effects for now. Thus, $E(x_{sim})=\Lambda$, as desired, but ${Var}(x_{sim})=\Lambda/n$. We thus require a new transformed variable $x_{sim}'=f(x_{sim})$ such that $x_{sim}'$ has the correct mean and variance. We posit one possible function:

$$x_{sim}'=x_{sim}+Z$$
$$Z|x_{sim}\sim \mathcal{N}(\mu=0,\sigma^{2}=kx_{sim})$$
where $k$ is a constant that does not depend on $\Lambda$ (as it is unknown). Because $Z$ is zero-mean, $x_{sim}'$ has the same (correct) mean of $\Lambda$. However, we desire a value of $k$ such that
$${Var}(x_{sim}')={Var}(x_{sim})+{Var}(Z)+2{Cov}(x_{sim},Z)=\Lambda,$$
as required by Poisson statistics. Of these three terms, we know only the first, $Var(x_{sim})=\Lambda /n$. The second and third terms can be computed from the joint distribution,
$$P(Z,x_{sim})=P(Z|x_{sim})P(x_{sim}).$$

Unfortunately, Gaussian and Poisson distributions are not conjugate distributions, meaning further analysis would not permit analytical solutions. From Bayesian statistics, if we have a Gaussian likelihood with a known mean and unknown variance, an inverse-gamma distribution on the variance is a conjugate prior. Thus, we approximated the distribution of $x_{sim}$ with an inverse-gamma distribution that has the same mean and variance (respectively, $\Lambda$ and $\Lambda /n$):

$$x_{sim}\sim {InvGam}(\alpha=\Lambda n+2,\beta=\Lambda(\Lambda n+1))$$

Then the joint distribution is a normal-inverse-gamma distribution:

$$Z,x_{sim}\sim {NormInvGam}(\mu=0,\lambda=1/k, \alpha=\Lambda n+2,\beta=\Lambda(\Lambda n+1))$$

From this joint distribution, we know that ${Var}(x_{sim})=\Lambda/n$, ${Var}(Z)=k\Lambda$, and ${Cov}(Z,x_{sim})=0$. Evaluating Eq. (4), we obtain

$$k=1-1/n$$
which ensures that ${Var}(x_{sim}')=\Lambda$, thus justifying Eq. (1).
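A simple Monte Carlo check of this result is given below (our own numerical verification, working directly in photoelectron counts so that the $2^{n_{bit}}/v$ conversion factor in Eq. (1) is unity):

import numpy as np

# Averaging n bright Poisson draws and adding Z ~ N(0, k * x_sim) with k = 1 - 1/n
# should restore the variance of a single Poisson draw with total rate Lambda.
rng = np.random.default_rng(0)
n, lam_q, trials = 7, 20.0, 200_000
Lambda = n * lam_q                                    # total rate over n sub-FOVs

x_real = rng.poisson(Lambda, trials)                  # physically overlapped capture
x_sim = rng.poisson(n * lam_q, (trials, n)).mean(1)   # digital average of n bright frames
x_sim_prime = x_sim + rng.normal(0, np.sqrt((1 - 1 / n) * x_sim))

print(np.var(x_real), np.var(x_sim), np.var(x_sim_prime))
# Expected: approximately Lambda, Lambda/n, and Lambda, respectively.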

Appendix B

To provide a quantitative analysis of the factors that influence detection accuracy, we simulated the probability that a target feature (here, a WBC) does not overlap with another WBC, as a function of target distribution density, under ideal imaging conditions. Figure 8 plots the results of a Monte Carlo simulation using our experimental setup parameters (a 188 $\times$ 252 $\mathrm {\mu }$m$^2$ FOV at the sample plane, assuming a WBC has a 15-$\mathrm {\mu }$m diameter on average). The total number of WBCs is the number of WBCs in each FOV multiplied by the number of overlapped lenses, assuming all the WBCs are independently and identically distributed. Due to the central limit theorem, the estimated probability of no overlapping targets at a given density level follows a normal distribution across trials. This suggests that, to avoid overlapped targets, the imaging system should be designed such that there are around 10 or fewer total WBCs per captured snapshot. In our experiments, there were rarely more than 3 WBCs in the final overlapped images.
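An illustrative re-implementation of this Monte Carlo estimate is sketched below; the uniform placement of cell centers and the overlap criterion (two centers closer than one cell diameter) are our assumptions:

import numpy as np

rng = np.random.default_rng(0)
fov_w, fov_h, d_wbc, trials = 252.0, 188.0, 15.0, 5000   # FOV in um, WBC diameter in um

for k in (2, 5, 10, 20):                                 # total WBCs in the overlapped image
    clear = 0
    for _ in range(trials):
        xy = rng.uniform([0, 0], [fov_w, fov_h], size=(k, 2))   # random cell centers
        dists = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        np.fill_diagonal(dists, np.inf)
        clear += dists.min() >= d_wbc                    # no pair closer than one diameter
    print(f"{k:2d} WBCs: P(no overlap) ~ {clear / trials:.2f}")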


Fig. 8. Increasing the expected number of WBCs in the overlapped image increases the probability of at least one overlapping WBC event.


Appendix C

Here we provide the detailed structures of the CNN models used in this work.


Table 2. Structures of CNN models in the experiments

Funding

Duke University; National Institutes of Health (1RF1NS113287-01).

Acknowledgments

We would like to thank Eric Wahlstedt for his assistance with the experiments and Yihui Du for helping with the data processing.

Disclosures

RH: Ramona Optics Inc. (I,S).

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. Y. Liu, K. Gadepalli, M. Norouzi, G. E. Dahl, T. Kohlberger, A. Boyko, S. Venugopalan, A. Timofeev, P. Q. Nelson, G. S. Corrado, J. D. Hipp, L. Peng, and M. C. Stumpe, Detecting cancer metastases on gigapixel pathology images, arXiv preprint arXiv:1703.02442 (2017).

2. M. N. Kashif, S. E. A. Raza, K. Sirinukunwattana, M. Arif, and N. Rajpoot, Handcrafted features with convolutional neural networks for detection of tumor cells in histology images, in 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), (IEEE, 2016), pp. 1029–1032.

3. D. R. Loh, W. X. Yong, J. Yapeter, K. Subburaj, and R. Chandramohanadas, “A deep learning approach to the screening of malaria infection: Automated and rapid cell counting, object detection and instance segmentation using mask r-cnn,” Computerized Medical Imaging and Graphics 88, 101845 (2021). [CrossRef]  

4. R. Horstmeyer, R. Y. Chen, B. Kappes, and B. Judkewitz, Convolutional neural networks that teach microscopes how to image, arXiv preprint arXiv:1709.07223 (2017).

5. M. Habibzadeh, A. Krzyżak, and T. Fevens, White blood cell differential counts using convolutional neural networks for low resolution images, in International Conference on Artificial Intelligence and Soft Computing, (Springer, 2013), pp. 263–274.

6. D. K. Das, R. Mukherjee, and C. Chakraborty, “Computational microscopic imaging for malaria parasite detection: a systematic review,” J. Microsc. 260(1), 1–19 (2015). [CrossRef]  

7. M. Kotepui, D. Piwkham, B. PhunPhuech, N. Phiwklam, C. Chupeerach, and S. Duangmano, “Effects of malaria parasite density on blood cell parameters,” PLoS One 10(3), e0121057 (2015). [CrossRef]  

8. P. Mangal, S. Mittal, K. Kachhawa, D. Agrawal, B. Rath, and S. Kumar, “Analysis of the clinical profile in patients with plasmodium falciparum malaria and its association with parasite density,” J Global Infect Dis 9(2), 60 (2017). [CrossRef]  

9. World Health Organization, Malaria microscopy quality assurance manual, version 2 (World Health Organization, 2016), pp. 12–13.

10. G. Zheng, X. Ou, R. Horstmeyer, J. Chung, and C. Yang, “Fourier ptychographic microscopy: A gigapixel superscope for biomedicine,” Optics and Photonics News 25(4), 26–33 (2014). [CrossRef]  

11. N. Kumar, R. Gupta, and S. Gupta, “Whole slide imaging (wsi) in pathology: current perspectives and future directions,” J Digit Imaging 33(4), 1034–1040 (2020). [CrossRef]  

12. A. D. Borowsky, E. F. Glassy, W. D. Wallace, N. S. Kallichanda, C. A. Behling, D. V. Miller, H. N. Oswal, R. M. Feddersen, O. R. Bakhtar, A. E. Mendoza, D. P. Molden, H. L. Saffer, C. R. Wixom, J. E. Albro, M. H. Cessna, B. J. Hall, I. E. Lloyd, J. W. Bishop, M. A. Darrow, D. Gui, K.-Y. Jen, J. A. S. Walby, S. M. Bauer, D. A. Cortez, P. Gandhi, M. M. Rodgers, R. A. Rodriguez, D. R. Martin, T. G. McConnell, S. J. Reynolds, J. H. Spigel, S. A. Stepenaskie, E. Viktorova, R. Magari, J. Wharton, A. Keith, J. Qiu, and T. W. Bauer, “Digital whole slide imaging compared with light microscopy for primary diagnosis in surgical pathology a multicenter, double-blinded, randomized study of 2045 cases,” Archives of pathology & laboratory medicine 144(10), 1245–1253 (2020). [CrossRef]  

13. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

14. P. C. Konda, L. Loetgering, K. C. Zhou, S. Xu, A. R. Harvey, and R. Horstmeyer, “Fourier ptychography: current applications and future promises,” Opt. Express 28(7), 9603–9630 (2020). [CrossRef]  

15. G. Zheng, C. Shen, S. Jiang, P. Song, and C. Yang, “Concept, implementations and applications of fourier ptychography,” Nat. Rev. Phys. 3(3), 207–223 (2021). [CrossRef]  

16. D. J. Brady, M. E. Gehm, R. A. Stack, D. L. Marks, D. S. Kittle, D. R. Golish, E. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486(7403), 386–389 (2012). [CrossRef]  

17. J. Fan, J. Suo, J. Wu, H. Xie, Y. Shen, F. Chen, G. Wang, L. Cao, G. Jin, Q. He, T. Li, G. Luan, L. Kong, Z. Zheng, and Q. Dai, “Video-rate imaging of biological dynamics at centimetre scale and micrometre resolution,” Nat. Photonics 13(11), 809–816 (2019). [CrossRef]  

18. G. Bueno, O. Déniz, M. D. M. Fernández-Carrobles, N. Vállez, and J. Salido, “An automated system for whole microscopic image acquisition and analysis,” Microsc. Res. Tech. 77(9), 697–713 (2014). [CrossRef]  

19. A. J. Evans, T. W. Bauer, M. M. Bui, T. C. Cornish, H. Duncan, E. F. Glassy, J. Hipp, R. S. McGee, D. Murphy, C. Myers, D. G O’Neill, A. V. Parwani, A. Rampy B., M. E. Salama, and L. Pantanowitz, “Us food and drug administration approval of whole slide imaging for primary diagnosis: a key milestone is reached and new questions are raised,” Archives of pathology & laboratory medicine 142(11), 1383–1387 (2018). [CrossRef]  

20. S. Farsiu, M. Robinson, M. Elad, and P. Milanfar, “Fast and robust multiframe super resolution,” IEEE Trans. on Image Process. 13(10), 1327–1344 (2004). [CrossRef]  

21. W. Yang, X. Zhang, Y. Tian, W. Wang, J.-H. Xue, and Q. Liao, “Deep learning for single image super-resolution: A brief review,” IEEE Trans. Multimedia 21(12), 3106–3121 (2019). [CrossRef]  

22. C. Dong, C. C. Loy, K. He, and X. Tang, Learning a deep convolutional network for image super-resolution, in European conference on computer vision, (Springer, 2014), pp. 184–199.

23. C. Dong, C. C. Loy, and X. Tang, Accelerating the super-resolution convolutional neural network, in European conference on computer vision, (Springer, 2016), pp. 391–407.

24. J. Kim, J. K. Lee, and K. M. Lee, Accurate image super-resolution using very deep convolutional networks, in Proceedings of the IEEE conference on computer vision and pattern recognition, (2016), pp. 1646–1654.

25. W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network, in Proceedings of the IEEE conference on computer vision and pattern recognition, (2016), pp. 1874–1883.

26. W. S. Lai, J. B. Huang, N. Ahuja, and M.-H. Yang, Deep laplacian pyramid networks for fast and accurate super-resolution, in Proceedings of the IEEE conference on computer vision and pattern recognition, (2017), pp. 624–632.

27. C. You, G. Li, Y. Zhang, X. Zhang, H. Shan, M. Li, S. Ju, Z. Zhao, Z. Zhang, W. Cong, M. W. Vannier, P. K. Saha, E. A. Hoffman, and G. Wang, “Ct super-resolution gan constrained by the identical, residual, and cycle learning ensemble (gan-circle),” IEEE Trans. Med. Imaging 39(1), 188–203 (2019). [CrossRef]  

28. Y. Yuan, S. Liu, J. Zhang, Y. Zhang, C. Dong, and L. Lin, Unsupervised image super-resolution using cycle-in-cycle generative adversarial networks, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, (2018), pp. 701–710.

29. D. Ulyanov, A. Vedaldi, and V. Lempitsky, Deep image prior, in Proceedings of the IEEE conference on computer vision and pattern recognition, (2018), pp. 9446–9454.

30. R. Heckel and P. Hand, Deep decoder: Concise image representations from untrained non-convolutional networks, arXiv preprint arXiv:1810.03982 (2018).

31. Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4(11), 1437–1443 (2017). [CrossRef]  

32. C. Guo, S. Jiang, L. Yang, P. Song, T. Wang, X. Shao, Z. Zhang, M. Murphy, and G. Zheng, “Deep learning-enabled whole slide imaging (deepwsi): oil-immersion quality using dry objectives, longer depth of field, higher system throughput, and better functionality,” Opt. Express 29(24), 39669–39684 (2021). [CrossRef]  

33. V. Treeaporn, A. Ashok, and M. A. Neifeld, “Increased field of view through optical multiplexing,” Opt. Express 18(21), 22432–22445 (2010). [CrossRef]  

34. R. Horisaki and J. Tanida, “Multi-channel data acquisition using multiplexed imaging with spatial encoding,” Opt. Express 18(22), 23041–23053 (2010). [CrossRef]  

35. R. H. Shepard, Y. Rachlin, V. Shah, and T. Shih, “Design architectures for optically multiplexed imaging,” Opt. Express 23(24), 31419–31435 (2015). [CrossRef]  

36. A. Khan, S. Gould, and M. Salzmann, Deep convolutional neural networks for human embryonic cell counting, in European conference on computer vision, (Springer, 2016), pp. 339–348.

37. C. Zhang, X. Xiao, X. Li, Y.-J. Chen, W. Zhen, J. Chang, C. Zheng, and Z. Liu, “White blood cell segmentation by color-space-based k-means clustering,” Sensors 14(9), 16128–16147 (2014). [CrossRef]  

38. S. N. M. Safuan, M. R. M. Tomari, and W. N. W. Zakaria, “White blood cell (wbc) counting analysis in blood smear images using various color segmentation methods,” Measurement 116, 543–555 (2018). [CrossRef]  

39. M. Poostchi, I. Ersoy, K. McMenamin, E. Gordon, N. Palaniappan, S. Pierce, R. J. Maude, A. Bansal, P. Srinivasan, L. Miller, K. Palaniappan, G. Thoma, and S. Jaeger, “Malaria parasite detection and cell counting for human and mouse using thin blood smear microscopy,” J. Med. Imag. 5(04), 1 (2018). [CrossRef]  

40. A. Muthumbi, A. Chaware, K. Kim, K. C. Zhou, P. C. Konda, R. Chen, B. Judkewitz, A. Erdmann, B. Kappes, and R. Horstmeyer, “Learned sensing: jointly optimized microscope hardware for accurate image classification,” Biomed. Opt. Express 10(12), 6351–6369 (2019). [CrossRef]  

41. K. Simonyan and A. Zisserman, Very deep convolutional networks for large-scale image recognition, arXiv preprint arXiv:1409.1556 (2014).

42. K. He, X. Zhang, S. Ren, and J. Sun, Delving deep into rectifiers: Surpassing human-level performance on imagenet classification, in Proceedings of the IEEE international conference on computer vision, (2015), pp. 1026–1034.

43. Z. Ma, L. Yu, and A. B. Chan, Small instance detection by integer programming on object density maps, in Proceedings of the IEEE conference on computer vision and pattern recognition, (2015), pp. 3689–3697.

44. A.-C. Woerl, M. Eckstein, J. Geiger, D. C. Wagner, T. Daher, P. Stenzel, A. Fernandez, A. Hartmann, M. Wand, W. Roth, and S. Foersch, “Deep learning predicts molecular subtype of muscle-invasive bladder cancer from conventional histopathological slides,” Eur. Urol. 78(2), 256–264 (2020). [CrossRef]  

45. J.-Y. Lee, N. C. Sadler, R. G. Egbert, C. R. Anderton, K. S. Hofmockel, J. K. Jansson, and H.-S. Song, “Deep learning predicts microbial interactions from self-organized spatiotemporal patterns,” Computational and structural biotechnology journal 18, 1259–1269 (2020). [CrossRef]  

46. F. Yang, M. Poostchi, H. Yu, Z. Zhou, K. Silamut, J. Yu, R. J. Maude, S. Jaeger, and S. Antani, “Deep learning for smartphone-based malaria parasite detection in thick blood smears,” IEEE J. Biomed. Health Inform. 24(5), 1427–1438 (2019). [CrossRef]  

47. C. Xie, R. Du, J. W. Ho, H. H. Pang, K. W. Chiu, E. Y. Lee, and V. Vardhanabhuti, “Effect of machine learning re-sampling techniques for imbalanced datasets in 18 f-fdg pet-based radiomics model on prognostication performance in cohorts of head and neck cancer patients,” Eur. J. Nucl. Med. Mol. Imaging 47(12), 2826–2835 (2020). [CrossRef]  

48. J. Chung, X. Ou, R. P. Kulkarni, and C. Yang, “Counting white blood cells from a blood smear using fourier ptychographic microscopy,” PLoS One 10(7), e0133489 (2015). [CrossRef]  

49. J. Thachil and I. Bates, “Approach to the diagnosis and classification of blood cell disorders,” Dacie and Lewis Practical Haematology 137, 497–510 (2017). [CrossRef]  

50. M. Habibzadeh, A. Krzyzak, T. Fevens, and A. Sadr, “Counting of rbcs and wbcs in noisy normal blood smear microscopic images,” Proc. SPIE 7963, 79633I (2011). [CrossRef]  

51. N. Guo, L. Zeng, and Q. Wu, “A method based on multispectral imaging technique for white blood cell segmentation,” Comput. Biol. Med. 37(1), 70–76 (2007). [CrossRef]  

52. P. Lebel, R. Dial, V. N. Vemuri, V. Garcia, J. DeRisi, and R. Gómez-Sjöberg, “Label-free imaging and classification of live p. falciparum enables high performance parasitemia quantification without fixation or staining,” PLoS Comput. Biol. 17(8), e1009257 (2021). [CrossRef]  

53. A. Tareef, Y. Song, D. Feng, M. Chen, and W. Cai, Automated multi-stage segmentation of white blood cells via optimizing color processing, in 2017 IEEE 14th international symposium on Biomedical imaging (ISBI 2017), (IEEE, 2017), pp. 565–568.

54. L. Tian, L. H. Yeh, R. Eckert, and L. Waller, Computational microscopy: illumination coding and nonlinear optimization enables gigapixel 3d phase imaging, in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), (IEEE, 2017), pp. 6225–6229.

55. Y. Xue, N. Ray, J. Hugh, and G. Bigras, Cell counting by regression using convolutional neural network, in European Conference on Computer Vision, (Springer, 2016), pp. 274–290.

56. M. Ashraf, M. Sharika, B. R. Sim, A. Tam, K. Rahemipour, D. Brousseau, S. Thibault, A. D. Corbett, and G. Bub, “Random access parallel microscopy,” eLife 10, e56426 (2021). [CrossRef]  
