Abstract

Rapid cell identification is achieved in a compact and field-portable system employing single random phase encoding to record opto-biological signatures of living biological cells of interest. The lensless, 3D-printed system uses a diffuser to encode the complex amplitude of the sample; the encoded signal is then recorded by a CMOS image sensor for classification. Removing the lenses from this 3D sensing system removes the restrictions on field of view, numerical aperture, and depth of field normally imposed by objective lenses in comparable microscopy systems, enabling robust 3D capture of biological volumes. Opto-biological signatures for two classes of animal red blood cells, situated in a microfluidic device, are captured and input into a convolutional neural network for classification, wherein the AlexNet architecture, pretrained on the ImageNet database, is used as the deep learning model. Video data was recorded of the opto-biological signatures for multiple samples, and each frame was treated as an input image to the network. The pretrained network was fine-tuned and evaluated using a dataset of over 36,000 images. The results show improved performance in comparison to a previously studied Random Forest classification model using statistical features extracted from the opto-biological signatures. The system is further compared to, and outperforms, a similar shearing-based 3D digital holographic microscopy system for cell classification. In addition to improvements in classification performance, the use of convolutional neural networks in this work is demonstrated to provide improved performance in the presence of noise. Red blood cell identification as presented here may serve as a key step toward lensless pseudorandom phase encoding applications in rapid disease screening. To the best of our knowledge, this is the first report of lensless cell identification using single random phase encoding and convolutional neural networks.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Imaging through diffusive media is a pervasive issue for many imaging tasks, with important potential applications for imaging in turbid conditions or imaging biological tissues. While the scattering effects of diffusive media can be problematic for many imaging and visualization tasks, recent studies in this area have utilized the properties of diffusers in several promising approaches for lensless imaging systems [1–4]. These approaches focus on recovering the object behind a diffuser using correlation-based, deconvolution-based, or deep learning frameworks. In addition to these lensless approaches, it should not go without mention that there exist several promising lens-based strategies for imaging through scattering, including the recently presented confocal diffuse tomography [5]. However, in this work, we are primarily interested in lensless arrangements aimed at cell identification.

The removal of lenses from an imaging system can result in several key benefits, including a larger field of view, removal of one limitation to the system’s numerical aperture, and increased depth of field, which may enable better inspection of large 3D volume samples in comparison to lens-based systems [6,7]. Absent a finite NA imposed by an imaging lens, lensless imaging systems are capable of providing a large space-bandwidth product [8,9]. Moreover, the resolution of lensless imaging systems is theoretically limited by the evanescent decay of high-frequency components, the pixel size of the detector, and the refractive index of the medium between the sample and sensor [10].

In addition to studies aimed at recovering an object behind or through diffusive media, previous studies have also shown that substantial information about a sample under study can be obtained directly from the recorded signature [8,11], allowing for object classification without removing the effects of the diffuser for visualization purposes. These approaches instead use a laser source to probe the 3D volume under inspection, then examine the encoded complex amplitude generated by the interaction between the sample and a random phase mask. Lensless cell identification based on pseudorandom phase encoding was first presented in [8] and considered a single diffuser and a single cell within the field of view. The diffuser pseudo-randomly encodes the complex amplitude of a sample, which includes information pertaining to its 3D structure and composition, then the opto-biological signature is recorded by an imaging sensor. Variations in opto-biological signatures arise due to differences in 3D morphology, size, shape, complex sub-cellular structures, material properties, refractive indices, etc. These variations in the recorded signatures between classes of samples allow for fast and efficient classification of samples without lenses. This technology has also been extended to double random phase encoding [11–15] arrangements for classification of multiple cells within the field of view [11]. Both single random phase encoding (SRPE) and double random phase encoding (DRPE) systems have been presented for cell identification based on statistical features extracted from the opto-biological signatures recorded at the sensor plane. Here, we provide continued development of the lensless pseudo-random phase encoding systems by considering classification using convolutional neural networks.

Convolutional Neural Networks (CNNs) are multi-layer representation learning networks within the field of machine learning that learn features for classification directly from an input [16]. This is accomplished through concatenated layers wherein the complexity of features grows as the depth of the layers increases. Convolutional neural networks, in particular, use convolutional layers that connect each unit of a current layer to patches of the previous layer, providing a strong ability to build upon local features in the generation of increasingly complex features and to identify motifs that recur in different regions of an image [16]. These abilities have resulted in remarkable success for CNNs within the image processing fields, especially in applications of image recognition and object detection [16].

In this paper, we present a compact lensless imaging system for 3D capture and cell identification based on the single random phase encoding configuration using convolutional neural networks. The system is used to classify between live cow and horse RBCs. Samples of red blood cells are loaded into a custom-made microfluidic device, then the opto-biological signatures are recorded and considered for classification. This paper is organized as follows: in section 2, we outline the system design and experimental procedure. Section 3 provides the experimental results for classification and the associated discussions. Finally, the conclusions are presented in section 4.

2. Methods

2.1 System design

Data was recorded using a compact, low-cost, and field-portable 3D-printed device containing a laser diode, a sample loaded into a microfluidic device, a diffuser, a beam splitter, and a CMOS image sensor. The 3D printed instrument has dimensions of 50 mm x 125 mm x 160 mm. The beam splitter allows an imaging arm of the device to be used for monitoring samples during system calibration and to ensure only cells of interest are present in the field of view prior to recording the opto-biological signatures; it serves no purpose beyond confirmation of the system calibration. A custom-made microfluidic device is attached to a standard glass microscopy slide, filled with red blood cells (RBCs), and input into the lensless cell identification system.

The microfluidic device was prepared using standard methods of photolithography and soft photolithography, as previously described with slight modifications [17]. A 2D rendering of the device was created using computer-aided design software (AutoCAD). The design contained a central channel with large side chambers for ease of sample investigation with minimal flow. A negative master mold was created from a 4-inch diameter Si wafer (Nova Electronic Materials, test grade), spin-coated with SU-8 2025 (Kayaku Advanced Materials) to an approximate height of 65 ± 2 µm. The SU-8 layer was then crosslinked by exposure to UV light through a chrome-on-glass photolithography mask (Advanced Reproductions). Features were revealed using SU-8 developer (Kayaku Advanced Materials), which removed the uncrosslinked SU-8. Replicate devices were cast from the master in polydimethylsiloxane (PDMS, Sylgard 184). The PDMS microdevices were then cut from the wafer and removed from the master mold. A 4 mm biopsy punch was used to create four perimeter input wells. Each well passed through a 200 µm wide channel before entering the 16.7 mm x 16.7 mm central chamber. During data acquisition, a single PDMS microfluidic device was affixed to a clean glass slide, feature side down, by non-specific adhesion, as this allowed the device to be repositioned should an air bubble become trapped during filling. The microfluidic device was inserted into the 3D printed system such that the source beam passed through the center of the device’s central channel during experiments. The use of a reusable microfluidic device in this work enables consistency in the volume of cells examined across trials, reducing the number of unknown variables which may contribute to the resulting opto-biological signatures. Following data acquisition, the microfluidic device can be easily detached from the glass slide and cleaned with isopropyl alcohol between imaging of different samples.

As the source laser passes through the cells of interest, it probes the 3D volume of the cells under inspection; the complex amplitude of the illuminated cells is then encoded by the diffuser placed in close contact after the microfluidic device. The light exiting the diffuser propagates to and is recorded by the CMOS sensor, and the recorded pattern is referred to as the opto-biological signature. A schematic representation and depiction of the imaging system are shown in Fig. 1. During experiments, a 1.2 mW 635 nm laser diode (CPS635R, Thorlabs) is used along with a Thorlabs DCC1545M CMOS image sensor. The sensor provides 1280 × 1024-pixel dimensions with a square pixel size of 5.2 µm.

Fig. 1. (a) Schematic representation of a single random phase encoding cell identification system. (b) Experimental setup for SRPE cell identification. Inset of (a) shows the complex amplitude being randomly scattered by the diffuser prior to recording by the CMOS image sensor, allowing for capture of 3D information of the cells under inspection. SRPE: Single random phase encoding. The 3D printed instrument has dimensions of 50 mm x 125 mm x 160 mm.


The samples considered for classification are cow RBCs and horse RBCs. Both blood samples were washed by the manufacturer and packed at 10% volume in phosphate buffered saline (PBS) solution. The animal red blood cells were further diluted in PBS solution down to a concentration of 2% RBCs by volume. Blood was then loaded into a custom-made, reusable microfluidic device to provide consistency between trials, and the microfluidic device was placed into the imaging system for data recording. All data was taken within 1 week of the samples arriving to ensure cell vitality, according to the manufacturer's recommendations. Video opto-biological signatures (OBSs) were recorded at 20 fps for 10 seconds with an exposure time of 0.46 ms.

2.2 Mathematical formulation

The process of recording an opto-biological signature can be mathematically described by considering the Fresnel diffraction and propagation of a sample’s complex amplitude through the SRPE system [8,10,11]. We can define the complex field of the input object as ${u_1}({x,y} )= |{{A_{obj}}({x,y} )} |\exp [{j{\phi_{obj}}({x,y} )} ]$ where ${A_{obj}}({x,y} )$ is the amplitude, and ${\phi _{obj}}({x,y} )$ is the phase of our microscopic sample under study. The sample is placed as close as feasible to the diffuser, at a distance $z_1$, such that the complex field at the diffuser plane (ξ, η) can be described as

$${u_2}({\xi ,\eta } )= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} {u_1}({x,y} )\,{h_1}({\xi - x,\eta - y} )\,dx\,dy,$$
where the convolution kernel is given as
$${h_1}({\xi ,\eta } )= \frac{{{e^{jk{z_1}}}}}{{j\lambda {z_1}}}\exp \left[ {\frac{{jk}}{{2{z_1}}}({{\xi^2} + {\eta^2}} )} \right].$$
In these equations, k is the wave number equal to 2π/λ, and λ represents the vacuum wavelength of the source, which is 635 nm. This complex amplitude is multiplied by the random phase of the diffuser such that the field just exiting the diffuser plane may be given as [18]
$${u_3}({\xi ,\eta } )= {u_2}({\xi ,\eta } )\exp [{j{\phi_{D1}}({\xi ,\eta } )} ].$$

This complex amplitude is then propagated to the sensor plane (α, β), a distance $z_2$ away, as follows:

$${u_4}({\alpha ,\beta } )= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} {u_3}({\xi ,\eta } )\,{h_2}({\alpha - \xi ,\beta - \eta } )\,d\xi\,d\eta ,$$
where the convolution kernel is given as
$${h_2}({\alpha ,\beta } )= \frac{{{e^{jk{z_2}}}}}{{j\lambda {z_2}}}\exp \left[ {\frac{{jk}}{{2{z_2}}}({{\alpha^2} + {\beta^2}} )} \right].$$
Noting that the CMOS image sensor records only the intensity, the magnitude squared of the complex amplitude $I = {|{{u_4}({\alpha ,\beta } )} |^2}$ is recorded as the opto-biological signature. Notably, while only the final intensity is recorded at the sensor, the complex amplitude arriving at the sensor plane is affected by the 3D distribution and composition of the cells under inspection.
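For concreteness, the forward model above can be simulated numerically. The sketch below uses FFT-based Fresnel propagation in NumPy; the grid size, propagation distances, diffuser phase, and toy phase object are illustrative assumptions, with only the wavelength and pixel pitch following the reported system.

```python
# Minimal numerical sketch of the SRPE forward model; grid size, distances,
# diffuser, and the test object are illustrative, not the experimental values.
import numpy as np

def fresnel_propagate(u, wavelength, z, dx):
    """Propagate a complex field u by distance z using the Fresnel
    (paraxial) transfer function evaluated in the frequency domain."""
    n = u.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength
    H = np.exp(1j * k * z) * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u) * H)

wavelength = 635e-9        # source wavelength (as in the paper)
dx = 5.2e-6                # sensor pixel pitch (as in the paper)
n = 256                    # illustrative grid size
z1, z2 = 1e-3, 10e-3       # illustrative sample-diffuser / diffuser-sensor gaps

# u1: toy phase object standing in for the cell's complex amplitude
y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
u1 = np.exp(1j * 0.5 * np.exp(-(x**2 + y**2) / (2 * 20**2)))

u2 = fresnel_propagate(u1, wavelength, z1, dx)      # field at the diffuser
rng = np.random.default_rng(0)
phi_d = rng.uniform(0, 2 * np.pi, (n, n))           # random diffuser phase
u3 = u2 * np.exp(1j * phi_d)                        # field exiting the diffuser
u4 = fresnel_propagate(u3, wavelength, z2, dx)      # field at the sensor
obs = np.abs(u4)**2                                 # recorded intensity (OBS)
```

Because only the intensity `obs` is kept, the sketch mirrors the fact that the sensor discards the phase of $u_4$ while the recorded pattern still depends on the full complex amplitude.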

Following recording of video opto-biological signatures, all video frames are converted to images to serve as inputs to the convolutional neural network. For each class, data was recorded in 3 sets of 30 videos over a 3-day period. Data was collected in this way to ensure the system does not overfit to external environmental factors or internal system noise during the data recording process and to confirm that the system is able to classify on testing data that was taken at a different date and time from the training data. After converting all video frames to images, the dataset consisted of 36,188 images. Because the cells are living biological micro objects, the opto-biological signature should be expected to vary over time. Furthermore, intrinsic system noise such as sensor read noise may randomly affect each image frame. Thus, by recording video data then converting to image frames, the dataset can be efficiently increased in size. This process was important for this work as the training of neural networks typically requires substantial amounts of data, and standard image data augmentation strategies may not have been appropriate for this dataset. However, because of this batched nature of data collection, images from the same video will have similarities with each other and cannot be considered as truly independent data points.

To deal with this relationship, cross-validation is used during classification. In the cross-validation procedure, all images from a given video set are grouped together and assigned entirely to either the testing or the training group. The testing group is then changed to a different video set, and the process is repeated until every video set has served as the test group, with the results combined. This method ensures no images from a given video set are simultaneously used in both training and testing, which could provide an overly optimistic assessment of the system performance. In short, this alleviates issues of the network overfitting to external factors, such as environmental noise, which may be specific to the individual video being considered. The diffuser was kept stationary throughout the experiments, and no further data augmentation was applied to the dataset.
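The leave-video-out grouping above can be sketched with scikit-learn's `GroupKFold`, assuming each frame carries a group label equal to its source video; the frame counts and feature vectors below are illustrative stand-ins, not the paper's dataset.

```python
# Sketch of video-grouped cross-validation: all frames from one video stay
# in the same fold. Counts and features here are illustrative.
import numpy as np
from sklearn.model_selection import GroupKFold

n_videos, frames_per_video = 6, 10
X = np.random.rand(n_videos * frames_per_video, 4)         # dummy feature vectors
y = np.repeat([0, 1], n_videos * frames_per_video // 2)    # class label per frame
groups = np.repeat(np.arange(n_videos), frames_per_video)  # video ID per frame

gkf = GroupKFold(n_splits=3)  # the paper trains on 2/3 and tests on 1/3 per fold
for train_idx, test_idx in gkf.split(X, y, groups):
    # No video contributes frames to both training and testing
    assert set(groups[train_idx]).isdisjoint(groups[test_idx])
```

The in-loop assertion is exactly the guarantee the text describes: frames from the same video never appear on both sides of a split.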

2.3 CNN-based classification

In this work, we use the AlexNet architecture [19], pretrained on the ImageNet database [20], as our initial model; it was chosen because it provides a suitable balance of accuracy and speed. AlexNet is an 8-layer deep convolutional neural network with five convolutional layers followed by three fully connected layers, with max pooling layers following the first, second, and fifth convolutional layers. The rectified linear unit (ReLU) is used as the activation function for each layer up until the classification output, which has a softmax activation function. The convolutional layers use sliding convolutional kernels to extract feature maps, whereas the pooling layers aggregate information using a maximum operation within local neighborhoods. Furthermore, the network employs two dropout layers with a dropout rate of 0.5 following the first two fully connected layers; dropout layers are a form of regularization that set a percentage of neurons to zero during training to help prevent overfitting.

Due to its relatively shallow depth in comparison to more recent deep learning networks, this network can be trained efficiently on new tasks. Pretraining on ImageNet initializes the weights of the network and allows it to adapt to new tasks more quickly, which is especially helpful when data for the target task is limited or expensive to obtain. Here, while 36,188 images may be sufficient to train a model from scratch, the data is taken from only 90 video sequences per class; therefore, we feel it is more appropriate to use a pretrained network in this classification task.

To fine-tune the CNN for the target task, the classification and final fully connected layers of the network were removed and replaced with new layers having the correct number of outputs (i.e., two in this binary classification task). During each fold of the cross-validation procedure, the network was trained on 2/3 of the data, and the remaining 1/3 of the data was used as the test set. The network was optimized using stochastic gradient descent with a learning rate of 0.003; however, the learning rate for the two newly added layers was set at 0.006 to compensate for these layers being randomly initialized. Using a minibatch size of 24, the network was trained for 3 epochs, taking approximately six and a half hours on an NVIDIA Quadro P4000 GPU. Testing the network for classification of 12,065 images takes approximately 50 minutes, which averages to less than 1 minute to classify all image frames from a single recorded video. The network was implemented in MATLAB using the neural network toolbox. The proposed CNN-based classification model is directly compared against a Random Forest [21] classifier using statistical features extracted from the same dataset. The statistical features used in the Random Forest classifier were the mean, variance, skewness, kurtosis, and entropy of the OBS in both the spatial and frequency domains, as well as the correlation to a reference OBS of each class, computed in the spatial domain [11].

2.4 Comparison to classification using shearing-based 3D digital holographic microscopy

We additionally compare the performance of our system to that of a comparable 3D digital holographic microscopy system for cell identification. Digital holographic microscopy is a non-destructive quantitative amplitude and phase imaging modality with robust applications in biological cell imaging, analysis, and identification [22–27]. 3D digital holographic microscopy relies on the principles of interferometry to recover the complex amplitude of samples under inspection for further analysis. Here, we use a previously reported compact and field-portable 3D digital holographic microscopy system [25] to provide a comparison with the presented system. The 3D digital holographic microscopy system uses the same laser diode as a source and the same image sensor as the presented SRPE system. The digital holographic system differs by the removal of the diffuser and the inclusion of a shear plate to induce interference [25–27], as well as a 40x (0.65 NA) microscope objective. As with the presented lensless system, 10 s videos were recorded at 20 frames per second (fps). In total, 707 red blood cells were segmented from the digital holographic microscopy video dataset for classification. Each video frame for every segmented RBC was reconstructed using the scalar diffraction integral [10,26] and considered as a single data point, resulting in over 142,000 video frames of 3D reconstructed RBCs for classification. As was done with the opto-biological signature data, cross-validation is performed in all classification models to ensure no video frames from the same video are used simultaneously in both training and testing. For a full comparison between the SRPE and 3D digital holographic systems, we consider the same CNN architecture trained on the 3D reconstructed digital holographic data for classification, using the 2D optical path length maps as the input to the CNN.
For comparison of performance in the Random Forest classifier, morphological features are extracted from the 3D reconstructed cells for input to a Random Forest classifier. The morphological features used for this comparative study were mean optical path length, coefficient of variation, projected area, optical volume, cell thickness skewness, cell thickness kurtosis, cell perimeter, cell circularity, cell elongation, cell eccentricity, and cell thickness entropy. Depictions of the data acquired from each RBC class in both systems are shown in Fig. 2.

Fig. 2. Depiction of data acquired in SRPE (a, c) and 3D digital holographic microscopy systems (b, d). (a)-(b) Results from Cow RBC samples. (c)-(d) Results of Horse RBC samples. Red rectangles show an enlarged view of the opto-biological signatures. SRPE: Single random phase encoding. RBC: Red blood cell.


3. Results and discussions

The confusion matrices for the proposed CNN-based classification method and the feature-based classification using the Random Forest classifier in SRPE, as well as for the 3D digital holographic microscopy system, are presented in Table 1. Furthermore, the receiver operating characteristic (ROC) curves are given in Fig. 3. The performance is assessed using classification accuracy, area under the curve (AUC), and the Matthews correlation coefficient (MCC). Area under the curve represents the probability that a randomly chosen positive data point ranks higher than a randomly chosen negative data point, and MCC represents a correlation between the true and predicted classes. AUC ranges from 0 to 1, with 0.5 equivalent to a random guess, whereas MCC ranges from −1 to 1, with 0 representing the equivalent of a random guess. From Fig. 3, we note the proposed method provided the best receiver operating characteristic curve. Furthermore, Table 2 indicates that the proposed method provided the best performance in each of the metrics considered.
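The three metrics above can be computed with scikit-learn as sketched below; the labels and scores are illustrative, and the resulting values are not the paper's results.

```python
# Sketch of the three performance metrics on illustrative predictions.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score, matthews_corrcoef

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])                   # true classes
scores = np.array([0.1, 0.3, 0.2, 0.6, 0.8, 0.7, 0.4, 0.9])   # class-1 scores
y_pred = (scores >= 0.5).astype(int)                          # thresholded labels

acc = accuracy_score(y_true, y_pred)     # fraction correctly classified
auc = roc_auc_score(y_true, scores)      # threshold-free ranking quality
mcc = matthews_corrcoef(y_true, y_pred)  # -1 to 1; 0 is chance level
```

Note that AUC is computed from the continuous scores (it sweeps over thresholds), while accuracy and MCC are computed from the thresholded labels.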

Fig. 3. Receiver Operating Characteristic (ROC) curves for each classifier comparing CNN-based and Random Forest classifiers for both the single random phase encoding system and the shearing-based 3D digital holographic system. SRPE: Single random phase encoding; CNN: Convolutional neural network; RF: Random Forest. Shearing: Shearing-based digital holographic system.


Table 1. Confusion matrices for classification of animal RBCs

Table 2. Classification summary for animal RBCs using SRPE lensless cell identification system

From Table 2, the proposed lensless system using convolutional neural networks provides improved performance over the same system using statistical features extracted from the opto-biological signatures. Additionally, the presented system outperforms a comparable shearing-based 3D digital holographic system in terms of classification performance by all metrics considered.

The system was further tested in the presence of added noise to assess the robustness of the classification system. In particular, we are interested in the effect of noise added to the test set that was not present in the training set. If the noise model is known a priori, data augmentation may be used during training to improve the classification performance; however, here we have considered the more challenging case in which the noise model is unknown during training. To accomplish this, two-thirds of the data were used for training, without any modification. After the model was trained, the test set was modified with additional noise and the performance was recorded. We have tested the system under both partial obstruction of the recorded signal and additive Gaussian noise. Both forms of noise were tested at four levels of severity. For partial obstruction, a randomly positioned rectangular area of the opto-biological signature was set to zero, with the size of the rectangle increasing at more severe levels of occlusion. The levels tested corresponded to blocking of 1%, 4%, 25%, and 64% of the recorded signal. These percentages were chosen by selecting a proportionally sized rectangle with dimensions of 10%, 20%, 50%, and 80% of the original dimensions. For additive Gaussian noise, each OBS of the testing set was modified by the addition of zero-mean Gaussian noise, with the variance increased to control the severity. We have tested Gaussian noise using variances of 0.01, 0.02, 0.05, and 0.08. The various levels of added noise are visualized in Fig. 4.
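A minimal sketch of the two test-time degradations, assuming signatures normalized to [0, 1]; the function names and the synthetic image below are illustrative, not the paper's implementation.

```python
# Sketch of the two test-set degradations: a randomly placed zeroed rectangle
# covering a fixed fraction of the frame, and additive zero-mean Gaussian noise.
import numpy as np

def occlude(img, frac_side, rng):
    """Zero out a random rectangle whose side lengths are frac_side of the
    image's (e.g. frac_side=0.5 blocks 25% of the pixels)."""
    out = img.copy()
    h, w = img.shape
    bh, bw = int(h * frac_side), int(w * frac_side)
    top = rng.integers(0, h - bh + 1)
    left = rng.integers(0, w - bw + 1)
    out[top:top + bh, left:left + bw] = 0.0
    return out

def add_gaussian(img, variance, rng):
    """Add zero-mean Gaussian noise of the given variance, clipped to [0, 1]."""
    noisy = img + rng.normal(0.0, np.sqrt(variance), img.shape)
    return np.clip(noisy, 0.0, 1.0)

rng = np.random.default_rng(0)
obs = rng.random((128, 128))          # synthetic stand-in for a recorded OBS
occluded = occlude(obs, 0.5, rng)     # 50% of each dimension -> 25% blocked
noisy = add_gaussian(obs, 0.02, rng)  # variance-0.02 Gaussian noise
```

Squaring the side fraction gives the blocked area fraction, which is how 10%, 20%, 50%, and 80% side lengths map to 1%, 4%, 25%, and 64% obstruction.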

Fig. 4. Visualization of various computationally added noises. (a) Shows the original recorded opto-biological signature for a sample of cow RBCs measured using the reported SRPE system. (b-e) Show the recorded signal under partial obstruction, corresponding to 1%, 4%, 25%, and 64% of the signature area being obstructed, respectively. (f-i) Show the recorded signature with added zero-mean Gaussian noise having variances of 0.01, 0.02, 0.05, and 0.08, respectively.


To further characterize the noise levels considered, we additionally provide the peak signal-to-noise ratio (PSNR) between the original recorded opto-biological signature and the opto-biological signatures under simulated noisy conditions. The PSNR values for the partially obstructed data in Figs. 4(b)–4(e) were 28.73 dB, 22.85 dB, 14.93 dB, and 10.91 dB, respectively. The PSNR values in the case of additive Gaussian noise (i.e., Figs. 4(f)–4(i)) were 20.04 dB, 17.17 dB, 13.70 dB, and 12.11 dB, respectively.
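For reference, PSNR for images normalized to [0, 1] (peak value 1) can be computed as below; the inputs here are synthetic, not the recorded signatures.

```python
# Sketch of the PSNR figure of merit for [0, 1]-scaled images.
import numpy as np

def psnr(reference, degraded, peak=1.0):
    """Peak signal-to-noise ratio in dB between two equal-sized images."""
    mse = np.mean((reference - degraded) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
deg = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)
value = psnr(ref, deg)   # higher PSNR means less degradation
```

Note the rectangle-obstruction PSNRs quoted above fall as the blocked area grows, consistent with the mean squared error scaling with the obstructed fraction.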

Figures 5(a) and 5(c) show the ROC curves for classification under partial obstruction of the signal and with added Gaussian noise, respectively. Figures 5(b) and 5(d) show the relationship between the level of degradation and the computed performance metrics (accuracy and AUC) for both the CNN and RF classifiers, corresponding to obstructive noise and additive noise, respectively. From Fig. 5, it is evident that the proposed CNN-based classification strategy offers more robust classification in the presence of noise not considered in the training dataset. Tabulated results are provided in Table 3.

Fig. 5. (a) Receiver operating characteristic (ROC) curves for classification under partial obstruction of the recorded signal. (b) Plots of accuracy and area under the curve (AUC) values as a function of the percent of signal obstruction. (c) ROC curves for classification in the presence of additive Gaussian noise. (d) Plots of accuracy and AUC values as a function of the variance of the additive Gaussian noise. CNN: Convolutional neural network; RF: Random Forest.


Table 3. SRPE classification performance in the presence of noise

These results in the presence of noise demonstrate that the proposed CNN-based classification strategy for lensless pseudorandom cell identification provides a more robust system than the previously reported statistical feature-based classification strategy. Notably, the CNN-based approach was nearly unaffected by small amounts of additive Gaussian noise, whereas the Random Forest classifier quickly dropped to 50% accuracy and 0.5 AUC. Similarly, the Random Forest classifier's accuracy dropped to 50% with the addition of slight levels of obstruction to the recorded OBS. While the AUC did not immediately drop to 0.5 in the case of signal obstruction, the CNN-based classifier consistently outperforms the Random Forest classifier in terms of AUC. Moreover, the partial obstruction presented in Fig. 4 is helpful for demonstrating two potential benefits of a diffuser-based lensless classification system. First, the high classification accuracy under partial obstruction indicates that the information is spread across the sensor in the recorded pattern, as has been suggested by other works dealing with perturbations to the recorded pattern in pseudo-random phase encoding systems [28]. Second, this demonstration may show that the proposed strategy offers some robustness to one-pixel or few-pixel attacks and similar cybersecurity attacks, which have been shown to have severely negative effects on some CNN-based image classifiers [29].

While these results demonstrate superior performance in comparison to the similar 3D digital holographic system, we must note that the 3D digital holographic system provides several advantages in terms of visualization and the ability to segment individual cells of interest. We also note that the digital holographic system has been shown to improve performance when including individual cell motility information [25], which has not been considered here for the sake of comparison to the lensless pseudorandom system. Lastly, we note that the holographic system investigated individually segmented cells, whereas the proposed system considers all cells within the field of view. Future work looks to examine the presented SRPE system at varied cell concentrations and to find the optimal concentrations for classification tasks. Despite these differences, we find it informative that the proposed system outperforms a previously studied 3D imaging system for static RBC classification. In addition to improvements in classification performance, the presented system also offers reduced cost and form factor by removing several optical components in comparison to the shearing 3D digital holographic system. Furthermore, the removal of lenses from the system and their replacement by a diffusive element removes one limitation to the numerical aperture of the system, allowing higher spatial frequencies to be captured by the sensor, and enables large field of view and large depth of field sensing of the 3D volume. The most notable limitations of the presented lensless imaging system in terms of field of view and resolution are the beam size at the sample plane (2.9 mm diameter) and the diffraction between the sample and the encoding diffuser, respectively. Future work will be devoted to providing a more precise characterization of the imaging properties of lensless pseudorandom phase encoding systems.

4. Conclusions

In conclusion, we have presented experimental verification of lensless cell identification using convolutional neural networks in a single random phase encoding (SRPE) system. Cells were loaded into a custom-made microfluidic device, which was then input into the 3D-printed SRPE system. A laser source probes the 3D volume of cells under inspection, and the signal generated by the interaction of the complex amplitude of the sample with a random diffuser is examined for classification. For each sample of cells, a video was recorded at 20 fps for 10 s, then each image frame was treated as an input to a convolutional neural network. Cross-validation was performed to allow all data to be used for testing and to ensure no images from the same video sequence were included simultaneously in both training and testing. The proposed SRPE method achieved 89.99% accuracy with an area under the curve (AUC) of 0.9649 and a Matthews correlation coefficient of 0.8024. Each metric showed improved performance over classification using statistical features from the opto-biological signatures (OBS) and over the comparable 3D digital holographic microscopy system. The lensless nature of the 3D capture system removes limitations imposed by lenses on the numerical aperture, field of view, and depth of field to enable inspection of large 3D volumes. Additionally, the SRPE system offers advantages in compactness, time efficiency, and cost. To the best of our knowledge, this is the first report of convolutional neural network-based cell identification in lensless pseudorandom phase encoding systems. Future work includes detailed analysis of the system limitations in terms of resolution, stability, etc. Additional future work may focus on optimization of the system and microfluidic design for use in single-cell studies and on providing precise control over cell density.
Ultimately, future applications in this area entail classification of the health state of biological cells and disease screening using lensless pseudorandom phase encoding systems.
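The scalar performance metrics quoted in the conclusion can be reproduced from a binary confusion matrix. A minimal sketch, using illustrative cell counts rather than the actual per-class values from Table 1:

```python
import math

def binary_metrics(tp, tn, fp, fn):
    """Accuracy and Matthews correlation coefficient from a 2x2 confusion matrix."""
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total
    # MCC denominator; defined as 0 when any marginal sum is zero.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = ((tp * tn) - (fp * fn)) / denom if denom else 0.0
    return accuracy, mcc

# Illustrative counts only (a 36,000-image two-class split, not the paper's Table 1).
acc, mcc = binary_metrics(tp=16500, tn=15900, fp=2100, fn=1500)
print(f"accuracy={acc:.4f}, MCC={mcc:.4f}")
```

With the actual confusion matrix entries substituted, the same two lines recover the reported accuracy and Matthews correlation coefficient.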

Acknowledgments

We would like to thank Adam Markman for fruitful discussions. T. O’Connor acknowledges support via the GAANN fellowship through the Department of Education.

Disclosures

The authors declare no conflicts of interest.

References

1. S. Li, M. Deng, J. Lee, A. Sinha, and G. Barbastathis, “Imaging through glass diffusers using densely connected convolutional networks,” Optica 5(7), 803 (2018). [CrossRef]  

2. Y. Li, Y. Xue, and L. Tian, “Deep speckle correlation: a deep learning approach toward scalable imaging through scattering media,” Optica 5(10), 1181–1190 (2018). [CrossRef]  

3. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8(10), 784–790 (2014). [CrossRef]  

4. N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5(1), 1 (2018). [CrossRef]  

5. D. B. Lindell and G. Wetzstein, “Three-dimensional imaging through scattering media based on confocal diffuse tomography,” Nat. Commun. 11(1), 4517 (2020). [CrossRef]  

6. S. B. Kim, H. Bae, K. Koo, M. R. Dokmeci, A. Ozcan, and A. Khademhosseini, “Lens-Free Imaging for Biological Applications,” J. Lab. Autom. 17(1), 43–49 (2012). [CrossRef]  

7. R. Corman, W. Boutu, A. Campalans, P. Radicella, J. Duarte, M. Kholodtsova, L. Bally-Cuif, N. Dray, F. Harms, G. Dovillaire, S. Bucourt, and H. Merdji, “Lensless microscopy platform for single cell and tissue visualization,” Biomed. Opt. Express 11(5), 2806–2817 (2020). [CrossRef]  

8. B. Javidi, S. Rawat, S. Komatsu, and A. Markman, “Cell identification using single beam lensless imaging with pseudo-random phase encoding,” Opt. Lett. 41(15), 3663 (2016). [CrossRef]  

9. A. Stern and B. Javidi, “Random Projections Imaging With Extended Space-Bandwidth Product,” J. Display Technol. 3(3), 315–320 (2007). [CrossRef]  

10. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1968).

11. B. Javidi, A. Markman, and S. Rawat, “Automatic multicell identification using a compact lensless single and double random phase encoding system,” Appl. Opt. 57(7), B190 (2018). [CrossRef]  

12. P. Refregier and B. Javidi, “Optical image encryption based on input plane and Fourier plane encoding,” Opt. Lett. 20(7), 767 (1995). [CrossRef]  

13. A. Carnicer, M. Montes-Usategui, S. Arcos, and I. Juvells, “Vulnerability to chosen-cyphertext attacks of optical encryption schemes based on double random phase keys,” Opt. Lett. 30(13), 1644–1646 (2005). [CrossRef]  

14. O. Matoba, T. Nomura, E. Perez-Cabre, M. Millan, and B. Javidi, “Optical techniques for information security,” Proc. IEEE 97(6), 1128–1148 (2009). [CrossRef]  

15. Y. Rivenson, A. Stern, and B. Javidi, “Single exposure super-resolution compressive imaging by double phase encoding,” Opt. Express 18(14), 15094 (2010). [CrossRef]  

16. Y. LeCun, Y. Bengio, and G. Hinton, “Deep Learning,” Nature 521(7553), 436–444 (2015). [CrossRef]  

17. J. Deng, A. Dhummakupt, P. C. Samson, J. P. Wikswo, and L. M. Shor, “Dynamic dosing assay relating real-time respiration responses of Staphylococcus aureus biofilms to changing microchemical conditions,” Anal. Chem. 85(11), 5411–5419 (2013). [CrossRef]  

18. M. Haskel and A. Stern, “Modeling optical memory effects with phase screens,” Opt. Express 26(22), 29231–29243 (2018). [CrossRef]  

19. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in NIPS, 2012.

20. J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition.

21. L. Breiman, “Random Forests,” Mach. Learn. 45(1), 5–32 (2001). [CrossRef]  

22. U. Schnars and W. Jueptner, Digital Holography: Digital Hologram Recording, Numerical Reconstruction, and Related Techniques (Springer, 2005).

23. A. Anand, I. Moon, and B. Javidi, “Automated Disease Identification With 3-D Optical Imaging: A Medical Diagnostic Tool,” Proc. IEEE 105(5), 924–946 (2017). [CrossRef]  

24. I. Moon and B. Javidi, “Shape tolerant three-dimensional recognition of biological microorganisms using digital holography,” Opt. Express 13(23), 9612–9622 (2005). [CrossRef]  

25. B. Javidi, A. Markman, S. Rawat, T. O’Connor, A. Anand, and B. Andemariam, “Sickle cell disease diagnosis based on spatio-temporal cell dynamics analysis using 3D printed shearing digital holographic microscopy,” Opt. Express 26(10), 13614–13627 (2018). [CrossRef]  

26. A. Anand, V. Chhaniwal, and B. Javidi, “Tutorial: Common path self-referencing digital holographic microscopy,” APL Photonics 3(7), 071101 (2018). [CrossRef]  

27. A. S. Singh, A. Anand, R. A. Leitgeb, and B. Javidi, “Lateral shearing digital holographic imaging of small biological specimens,” Opt. Express 20(21), 23617–23622 (2012). [CrossRef]  

28. F. Goudail, F. Bollaro, B. Javidi, and P. Réfrégier, “Influence of a perturbation in a double phase-encoding system,” J. Opt. Soc. Am. A 15(10), 2629–2638 (1998). [CrossRef]  

29. J. Su, D. V. Vargas, and S. Kouichi, “One pixel attack for fooling deep neural networks,” arXiv:1710.08864 (2017).

Figures (5)

Fig. 1. (a) Schematic representation of a single random phase encoding cell identification system. (b) Experimental setup for SRPE cell identification. Inset of (a) shows the complex amplitude being randomly scattered by the diffuser prior to recording by the CMOS image sensor, allowing capture of 3D information of the cells under inspection. SRPE: single random phase encoding. The 3D-printed instrument has dimensions of 50 mm x 125 mm x 160 mm.

Fig. 2. Depiction of data acquired in the SRPE (a, c) and 3D digital holographic microscopy (b, d) systems. (a)-(b) Results from cow RBC samples. (c)-(d) Results from horse RBC samples. Red rectangles show an enlarged view of the opto-biological signatures. SRPE: single random phase encoding; RBC: red blood cell.

Fig. 3. Receiver operating characteristic (ROC) curves comparing CNN-based and Random Forest classifiers for both the single random phase encoding system and the shearing-based 3D digital holographic system. SRPE: single random phase encoding; CNN: convolutional neural network; RF: Random Forest; Shearing: shearing-based digital holographic system.

Fig. 4. Visualization of various computationally added noises. (a) Original recorded opto-biological signature for a sample of cow RBCs measured using the reported SRPE system. (b-e) Recorded signal under partial obstruction, corresponding to 1%, 4%, 25%, and 64% of the signature area being obstructed, respectively. (f-i) Recorded signature with added zero-mean Gaussian noise having variances of 0.01, 0.02, 0.05, and 0.08, respectively.

Fig. 5. (a) Receiver operating characteristic (ROC) curves for classification under partial obstruction of the recorded signal. (b) Accuracy and area under the curve (AUC) as a function of the percentage of signal obstruction. (c) ROC curves for classification in the presence of additive Gaussian noise. (d) Accuracy and AUC as a function of the variance of the additive Gaussian noise. CNN: convolutional neural network; RF: Random Forest.
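The degradations depicted in Fig. 4 are straightforward to apply to any recorded signature. A minimal numpy sketch, assuming the obstruction is a centered square covering the stated fraction of the image area and that intensities are normalized to [0, 1] (the obstruction geometry is an assumption, not taken from the paper):

```python
import numpy as np

def obstruct(img, area_fraction):
    """Zero out a centered square covering `area_fraction` of the image area."""
    out = img.copy()
    h, w = img.shape
    side_h = int(round(h * np.sqrt(area_fraction)))
    side_w = int(round(w * np.sqrt(area_fraction)))
    top, left = (h - side_h) // 2, (w - side_w) // 2
    out[top:top + side_h, left:left + side_w] = 0.0
    return out

def add_gaussian_noise(img, variance, rng=None):
    """Add zero-mean Gaussian noise; intensities assumed normalized to [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = img + rng.normal(0.0, np.sqrt(variance), size=img.shape)
    return np.clip(noisy, 0.0, 1.0)

# Example on a random stand-in for an opto-biological signature.
rng = np.random.default_rng(0)
obs = rng.random((512, 512))
for frac in (0.01, 0.04, 0.25, 0.64):   # obstruction levels of Fig. 4(b-e)
    _ = obstruct(obs, frac)
for var in (0.01, 0.02, 0.05, 0.08):    # noise variances of Fig. 4(f-i)
    _ = add_gaussian_noise(obs, var, rng)
```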

Tables (3)

Table 1. Confusion matrices for classification of animal RBCs^a

Table 2. Classification summary for animal RBCs using the SRPE lensless cell identification system

Table 3. SRPE classification performance in the presence of noise

Equations (5)

$$u_2(\xi,\eta) = \iint u_1(x,y)\, h_1(\xi - x,\ \eta - y)\, dx\, dy, \tag{1}$$

$$h_1(\xi,\eta) = \frac{e^{jkz_1}}{j\lambda z_1}\exp\!\left[\frac{jk}{2z_1}\left(\xi^2 + \eta^2\right)\right]. \tag{2}$$

$$u_3(\xi,\eta) = u_2(\xi,\eta)\exp\!\left[\,j\phi_{D1}(\xi,\eta)\,\right]. \tag{3}$$

$$u_4(\alpha,\beta) = \iint u_3(\xi,\eta)\, h_2(\alpha - \xi,\ \beta - \eta)\, d\xi\, d\eta, \tag{4}$$

$$h_2(\alpha,\beta) = \frac{e^{jkz_2}}{j\lambda z_2}\exp\!\left[\frac{jk}{2z_2}\left(\alpha^2 + \beta^2\right)\right]. \tag{5}$$
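These equations describe Fresnel propagation from the sample to the diffuser, phase encoding by the diffuser, and Fresnel propagation to the sensor. A numerical sketch of this forward model, using the transfer-function (Fourier-domain) form of the Fresnel convolution; the grid size, wavelength, and propagation distances below are hypothetical placeholders, not parameters from the paper:

```python
import numpy as np

def fresnel_propagate(u, wavelength, z, dx):
    """Fresnel propagation of a square field over distance z via the
    transfer-function method (Fourier-domain equivalent of convolving
    with the impulse response h of the equations above)."""
    n = u.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength
    H = np.exp(1j * k * z) * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u) * H)

# Hypothetical parameters for illustration only.
n, dx = 512, 2e-6            # grid size, 2 um sampling pitch
wavelength = 633e-9          # assumed He-Ne wavelength
z1, z2 = 5e-3, 10e-3         # sample-to-diffuser and diffuser-to-sensor distances

rng = np.random.default_rng(1)
u1 = np.exp(1j * rng.random((n, n)))                   # stand-in complex amplitude of the cells
u2 = fresnel_propagate(u1, wavelength, z1, dx)         # Eqs. (1)-(2)
u3 = u2 * np.exp(1j * 2 * np.pi * rng.random((n, n)))  # Eq. (3): diffuser phase encoding
u4 = fresnel_propagate(u3, wavelength, z2, dx)         # Eqs. (4)-(5)
obs = np.abs(u4)**2                                    # intensity recorded by the CMOS sensor
```

The transfer-function propagator is mathematically equivalent to convolving with the impulse responses h1 and h2, subject to adequate sampling of the quadratic phase on the chosen grid.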
