The layered peptide array (LPA) system enables multiplex screening of biomarkers [1-3]. One of the main challenges of the LPA system is the screening of the stack of layered membranes. Currently, each membrane is imaged separately using conventional fluorescence imaging, a process that is time consuming and requires extensive manual interaction. This paper describes a general solution for optical imaging of a layered grid medium using photogrammetric methods. The suggested method enables visualization of the LPA membrane stack using only two images of the stack. This study is a proof of concept of the suggested solution, using a MATLAB simulation and phantom experiments.
© 2011 OSA
1. Introduction
Tumor pathologic analysis is based on changes in cellular morphology, combined with a specific tumor profile that combines different molecular markers and can be detected by immunohistochemistry (IHC). Traditional chromogen-based IHC is limited to visualizing one or two target molecules per tissue section; because the stain is opaque, those markers cannot spatially overlap. More modern multiplex fluorescent probe labeling methods extend the range of detection to several fluorescent markers per tissue section, and since the stain is transparent, the markers' positions may spatially overlap. However, spectral overlap causes label cross-talk (bleed-through), which still presents a major technical barrier to multiplexed analysis. Seamless integration of tissue staining, digital data acquisition and data analysis typifies the state of the art [1, 2]. Moreover, the number of proteins that may be assayed has grown tremendously; hundreds are recognized as potential biomarkers of cancer and can be used in new diagnostic tests.
For scarce tissue samples, such as core needle biopsies, a thorough diagnosis based on consecutive sections is a challenge, even though such samples might enable personalized drug treatments. Thus, a technology is needed that integrates standard IHC with “multiplexing” of the data present in the section, for a robust diagnosis of multiple markers.
A team from the National Cancer Institute (NCI) in Bethesda, MD, USA has been working since 2003 on the layered peptide array (LPA) method [1–3]. LPA obtains multiple molecular profiles from a standard tissue section by incorporating thin layers on top of the section. The technique enables multiple parallel measurements on a multi-layered stack of membranes and operates with up to 50 layers. The LPA system can transform life science platforms such as tissue micro-arrays (TMA) or multi-well plates into 3D arrays by adding a third, molecular, array dimension.
Membranes a few microns thick are coated with peptides specific to antigens of interest (Fig. 1). Tissue sections are then incubated with a cocktail of antibodies against target proteins. After washing, the antibodies are released from the section and passed through the analysis layers while maintaining their two-dimensional position. The antibodies are specifically captured by their target peptides and subsequently detected using standard secondary antibody-based methods.
The imaging of the antibodies on the different layers is currently performed after separating the layers. Thus, the main issue to be addressed is the screening of the multi-layered set of membranes with the captured antibodies. Currently, each layer is imaged separately using conventional fluorescence imaging; this process is time consuming, cumbersome and requires extensive manual interaction.
In this paper, we suggest a simple and low-cost solution for screening the multi-layered stack without separating it. The suggested solution incorporates photogrammetric methods and requires initial attachment of fluorescent dyes to the antibodies passing through the membrane stack, so that each captured antibody is marked with a fluorescent dye. The sample is then imaged from two different directions. These images are processed and a 3D reconstruction of the imaged volume is performed. The 3D reconstruction enables visualization and separation of the fluorescent dyes embedded in different layers.
Photogrammetric measurements are widely used in diverse medical applications [4]. They have many advantages, mainly related to quick, non-contact recording. Photogrammetric applications usually assume that the imaged object is opaque; in contrast, this study images a transparent medium containing small light sources (fluorescent dyes) in different layers, where the sources may lie one above the other.
In this paper, we focus on imaging a 3D array pattern created when using the LPA system with TMA or multi well plates. Our study shows a proof of concept of the suggested solution using MATLAB simulation and phantom experiments. The proposed technique will enable efficient and quick detection of markers when combined with the LPA technique.
Specifically, the membranes in the original layered peptide array are several microns thick. The current paper deals with a general concept; gels were used to simulate the carbon membranes of the original publication. The gels constructed for the current set of experiments were prepared with a constant thickness of 6 mm.
2. Methods
Our proposed technique includes three stages: a pre-imaging stage, an imaging stage and a processing stage. The pre-imaging stage includes the attachment of fluorescent dyes to the antibodies, and camera adjustment and calibration. The method requires camera calibration, in which the camera positions and the camera parameters are found. This stage is essential for reconstructing the 3D positions of the captured antibodies.
In the imaging stage, a laser illuminates the whole sample, which contains the captured antibodies. The laser light excites the fluorescent dyes attached to the captured antibodies. A fluorescent camera collects the fluorescent light and two images are captured from two different camera positions.
The final stage is the processing of the captured images, which includes several sub-stages. First, the two images are processed to extract the position of each captured antibody in the image. The extracted positions are then matched between the two images using a matching algorithm. This sub-stage is crucial: the 3D reconstruction can be performed only when the position of each captured antibody is known in both fluorescent images. After the matching, the 3D reconstruction of the medium is performed. The final step is layer separation, which is done using the depth coordinates of the reconstructed 3D points of the captured antibodies; each reconstructed point is then assigned to the nearest real grid point. Figure 2 presents a flowchart summarizing the suggested method for finding fluorescence points (x, y, z coordinates) in a grid structure:
2.1 Pre-imaging stage: Camera adjustment and calibration
Proper positioning of the cameras with respect to the 3D grid sample helps simplify the matching algorithm and improves the method's resolution. The camera adjustment and calibration are done only once, when constructing the imaging system.
The cameras' orientations were adjusted to image the sample from one side. Each camera axis was set along one of the main diagonals in the horizontal plane (angles of 45° and −45°), with a small angle with respect to the z (vertical) direction. A schematic drawing of the setup is shown in Fig. 3. The chosen orientation has several advantages: points with the same x-y coordinates lie across the diagonal, resulting in better separation between the points in the image plane, which improves the depth separation between the points. Moreover, if only one camera is used, this orientation requires translation in only one direction, which simplifies the system's requirements.
After adjusting the cameras to the proper orientation, the system needs to be calibrated in order to find the precise extrinsic and intrinsic parameters of the cameras. The extrinsic parameters are the rotation and translation of each camera with respect to the world reference frame, and the rotation and translation between the two cameras. The intrinsic parameters are the focal length, the principal point (the geometric center of the image) and the pixel size. The principal point was assumed to be the middle of the camera's sensor and the pixel size was taken from the camera manual. The focal length and all of the extrinsic parameters were calculated during the calibration process. The calibration was performed using the Camera Calibration Toolbox for MATLAB [5, 6] with a 1 mm calibration grid. The grid was positioned in different orientations (20 in total) and two images were taken for each orientation, one per camera. Examples of the calibration images are shown in Fig. 4. The calibration images were loaded into the Camera Calibration Toolbox, which requires that the chessboard grid be marked manually in each image and that its dimensions be entered into the program. The toolbox processes each camera's set of images to extract that camera's position and focal length. In addition, the toolbox uses the paired set of images to find the common camera parameters and to refine each camera's position.
2.2 Processing stage
2.2.1 Extracting dye points' positions from the images
The antibody area imaged when using the LPA system with TMA or multi-well plates is a round area rather than a single point. Therefore, a preceding process is needed before the matching algorithm, in order to extract each fluorescent dot's position. The image-processing algorithm for extracting the center-of-mass position of the captured antibodies needs to deal with non-uniform field illumination and with the fact that each fluorescent dot does not necessarily have a unit intensity peak. A brief description of this algorithm is given in the next paragraphs, and a detailed description is given in .
The algorithm includes three main stages. The first is noise reduction of the laser light: when the laser light is not fully filtered before the detector, its diffused part, especially from the sample boundaries, may appear in the image. The second is a binarization stage that divides the image into background and foreground (fluorescent dots). The triangle threshold, an automatic threshold, is applied to the image [8,9]. Adjacent fluorescent dots may be joined by the binary threshold; however, if a higher threshold were used, dots with lower intensity would not be detected.
The last stage separates adjacent fluorescent dots. This stage assumes that each fluorescent dot area is round. The shape is taken into account using repeated correlation between each binary element and disks of different sizes. An initial disk diameter is used for the first correlation. Then, by counting the number of regional maxima in the correlation image and determining each binary element's orientation, area and major axis, it is possible to decide whether the global threshold is sufficient or a local threshold should be used. The area and major axis of the binary elements divide the elements into three groups: elements with one fluorescent dot, elements with two fluorescent dots and elements with three fluorescent dots. This stage assumes that large areas contain more than one dye point. For the last two groups, the local threshold is increased until the number of regional maxima equals the number of assumed fluorescent dots. When two fluorescent dots become connected due to a low threshold or a large correlation disk, they may have only one regional maximum; therefore, for each threshold, different disk sizes are checked before the threshold is further increased.
The purpose of this stage is to create a correlation image in which each fluorescent dot has one regional maximum. The complement of this image is the input to the watershed algorithm . The watershed algorithm treats the image as a topographic surface, where the height of a pixel is determined by its intensity: higher intensity corresponds to higher altitude, so dots in the image appear as hills with summits. The watershed lines of the topographic surface are found by gradually “filling” the complemented topographic image with water. These lines divide the image into segments. Since the integrated correlation image has one regional maximum for each fluorescent point, the number of segments returned by the watershed algorithm should equal the number of detected points.
If a simple threshold were used rather than the watershed algorithm, it would be impossible to detect dots with diverse grayscale intensities: diverse intensities produce diverse shapes in the binary image, which result in a different intensity for each dot in the correlation image. After the segmentation into separated fluorescent dots, the center of mass of each component area is calculated and is taken as the image point of the 3D fluorescent dot in space.
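As a simplified illustration of this extraction step, the sketch below thresholds a fluorescence image, labels connected components and returns each dot's intensity-weighted center of mass. It is written in Python with NumPy/SciPy rather than the MATLAB used in this study, and it replaces the triangle-threshold and watershed stages with a plain global threshold, so it only separates dots that do not touch.

```python
import numpy as np
from scipy import ndimage

def extract_dot_centers(img, threshold):
    """Return the (row, col) center of mass of each fluorescent dot.

    A plain global threshold stands in for the triangle-threshold and
    watershed stages described above, so touching dots are not split.
    """
    binary = img > threshold
    labels, n_dots = ndimage.label(binary)        # connected components
    # Intensity-weighted centers, one per labeled component
    return ndimage.center_of_mass(img, labels, range(1, n_dots + 1))

# Synthetic image with two well-separated dots of different intensities
img = np.zeros((64, 64))
img[10:13, 10:13] = 1.0
img[40:43, 50:53] = 0.6
centers = extract_dot_centers(img, 0.3)
```

Because the centers are intensity-weighted, dots with different peak intensities (as produced by the correlation stage) are still localized consistently.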
2.2.2 Matching algorithm
The matching algorithm matches corresponding points in the two images: each point in the left image is assigned to the point in the right image that resulted from imaging the same point in space. The matching is performed using only geometric constraints, such as the epipolar constraint . In contrast to most photogrammetric applications, in which intensity features are used, here the intensity feature is not used because of its poor discriminatory power among candidate points. The epipolar constraint restricts the search for a corresponding point to a line in the other image, known as the epipolar line, reducing the search from two dimensions to one.
The matching process begins with image rectification . Rectification of a stereo image pair finds a transformation of each image plane such that the epipolar lines of the transformed images are parallel to one of the image axes (commonly the horizontal one). Rectification thus reduces the search for stereo correspondences to a search along a horizontal line. In practice, due to noise, the algorithm searches for corresponding points not only on the epipolar line but also in a small region surrounding it. The search is further restricted to a window that limits the possible disparity (the distance between the same point's positions in the two cameras); the center of the search window is taken as the column coordinate of the search point plus the average disparity of all the points. The possible matches are then checked for validity, and mismatches are eliminated by keeping only matches that are consistent across all candidate points. A detailed description of this stage is also given in .
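The row-wise search described above can be sketched as follows. This is an illustrative Python stand-in (the study's processing was done in MATLAB) with hypothetical tolerance and disparity-window parameters, and a greedy nearest-row assignment in place of the full consistency check.

```python
def match_points(left_pts, right_pts, row_tol=3.0, disp_min=0.0, disp_max=60.0):
    """Match dots between rectified images along near-horizontal epipolar lines.

    Points are (row, col) in rectified coordinates. A right-image point is a
    candidate if its row differs by at most row_tol (the small region around
    the epipolar line) and the disparity (left col - right col) lies inside
    [disp_min, disp_max]. Each left point greedily takes the candidate with
    the smallest row difference; row_tol and the window are assumed values.
    """
    matches, used = [], set()
    for i, (r, c) in enumerate(left_pts):
        best_j, best_dr = None, None
        for j, (r2, c2) in enumerate(right_pts):
            if j in used:
                continue
            dr = abs(r - r2)
            if dr > row_tol or not (disp_min <= c - c2 <= disp_max):
                continue
            if best_j is None or dr < best_dr:
                best_j, best_dr = j, dr
        if best_j is not None:
            matches.append((i, best_j))
            used.add(best_j)
    return matches

# Two dots whose rows nearly agree and whose disparities fall in the window
left = [(10.0, 100.0), (20.0, 150.0)]
right = [(10.5, 80.0), (19.0, 120.0)]
pairs = match_points(left, right)
```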
2.2.3 Structure reconstruction and layer separation algorithm
A 3D reconstruction algorithm based on least-squares estimation is used to find the 3D coordinates of the points. This well-known algorithm reconstructs the 3D coordinates up to a scale factor (for further details of this algorithm see reference ). Using a known calibration point, this factor is calculated and used for extracting the depth coordinates.
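A standard linear formulation of such a least-squares reconstruction is the direct linear transform (DLT): each image point contributes two homogeneous equations in the unknown 3D point, and the stacked system is solved by SVD. The sketch below (Python/NumPy rather than the MATLAB used here) shows the idea under the assumption that the 3x4 projection matrices are known from calibration; the toy cameras at the end are arbitrary examples, not this study's geometry.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2 are 3x4 projection matrices; x1, x2 are (u, v) image points.
    Each view contributes two rows of A with A @ X_h = 0; the homogeneous
    solution is the right singular vector of the smallest singular value.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X_h = Vt[-1]
    return X_h[:3] / X_h[3]            # back to inhomogeneous coordinates

# Two toy cameras: identity intrinsics, second camera shifted along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 5.0])
x1 = (P1 @ np.append(X_true, 1.0))[:2] / X_true[2]
x2 = (P2 @ np.append(X_true, 1.0))[:2] / X_true[2]
X_rec = triangulate(P1, P2, x1, x2)
```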
To find the real 3D coordinates, a refraction correction is needed, because the rays from the fluorescent dyes to the camera pass through two optical media with different refractive indices. As a result, the reconstructed coordinates of the fluorescent dyes appear closer to the camera than they really are. The estimated points can be corrected using geometrical optics. The position of the planar gel/air interface must be found in order to apply this correction; this can be done during the camera calibration stage, using a reconstructed point that lies on the air-medium interface .
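Under a paraxial (small-angle) approximation, this correction reduces to scaling the apparent depth below the interface by the ratio of refractive indices. The study applies the full geometrical-optics calculation; the sketch below, with an assumed gel index, only illustrates the direction and magnitude of the effect.

```python
def correct_depth(z_apparent, z_interface, n_gel=1.34, n_air=1.0):
    """Paraxial refraction correction at a flat air/gel interface.

    A fluorescent dot reconstructed at apparent depth d below the interface
    actually lies at d * n_gel / n_air below it, since refraction makes
    points in the denser medium appear closer to the camera.
    n_gel = 1.34 is an assumed value for a water-based agarose gel.
    """
    d_apparent = z_apparent - z_interface
    return z_interface + d_apparent * (n_gel / n_air)

# A dot that appears 9 mm below the interface actually sits deeper
z_real = correct_depth(z_apparent=9.0, z_interface=0.0)
```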
Layer separation is performed by applying the K-means algorithm to the depth coordinates in the reconstructed world frame . The number of layers is found by requiring that the standard deviation of each output cluster be less than a specific constant: the algorithm runs with an increasing number of layers until this condition is met. Finally, the layers are sorted by height so that the lowest layer is layer number one.
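This layer-separation loop can be sketched as a one-dimensional K-means over the depth coordinates. The Python version below (the study used MATLAB) uses a deterministic quantile initialization and a hypothetical standard-deviation threshold.

```python
import numpy as np

def separate_layers(z, max_std=0.5, k_max=10, n_iter=100):
    """Cluster depth coordinates into layers with 1D k-means.

    k is increased until every cluster's standard deviation is below
    max_std (a hypothetical threshold); layers are then numbered from
    the bottom up, so the lowest layer is layer 1.
    """
    z = np.asarray(z, dtype=float)
    for k in range(1, k_max + 1):
        centers = np.quantile(z, np.linspace(0.0, 1.0, k))  # deterministic init
        for _ in range(n_iter):
            labels = np.argmin(np.abs(z[:, None] - centers[None, :]), axis=1)
            new = np.array([z[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
            if np.allclose(new, centers):
                break
            centers = new
        stds = [z[labels == j].std() for j in range(k) if np.any(labels == j)]
        if max(stds) <= max_std:
            order = np.argsort(centers)           # sort layers by height
            rank = np.empty_like(order)
            rank[order] = np.arange(k)
            return rank[labels] + 1, np.sort(centers)
    raise ValueError("no k <= k_max satisfied the std criterion")

# Depths from three layers 6 mm apart, with small reconstruction noise
z = [0.0, 0.1, -0.1, 6.0, 6.1, 5.9, 12.0, 12.1, 11.9]
layers, heights = separate_layers(z)
```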
2.3 MATLAB simulation
A MATLAB simulation was written to validate the suggested algorithm; the cameras' extrinsic parameters were tested for different camera orientations. First, the simulation defines a 3D grid structure in space; each fluorescent marker/tissue core is simulated by a specific point in space. To simulate the refraction, the simulation corrects the points' locations and shifts them in each layer: the original 3D location is shifted to a new position such that the pinhole camera model can be applied using the object coordinates of the shifted point. This correction is performed using the algorithm mentioned in . After the refraction correction, simulated images of the 3D shifted locations are generated and additive Gaussian noise is added to them, separately for each dimension. Finally, the matching algorithm and the 3D reconstruction of the image points are performed. The simulation was used twice: once for initial verification of the method, and then with the experimental camera orientation in order to generalize the experimental results.
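The image-generation step of such a simulation amounts to a pinhole projection plus per-axis Gaussian noise. A minimal Python/NumPy sketch is shown below (the study's simulation was in MATLAB, and the identity camera used for the sanity check is an arbitrary example, not the calibrated geometry):

```python
import numpy as np

def project_points(K, R, t, X):
    """Pinhole projection of world points X (N x 3) to image coordinates."""
    Xc = (R @ X.T).T + t            # world frame -> camera frame
    uv = (K @ Xc.T).T               # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]   # perspective divide

def simulate_image_points(K, R, t, X, noise_std=0.0, seed=0):
    """Project a 3D grid and add independent Gaussian noise per axis."""
    rng = np.random.default_rng(seed)
    uv = project_points(K, R, t, X)
    return uv + rng.normal(0.0, noise_std, uv.shape)

# Noise-free sanity check with an identity camera at the origin
K, R, t = np.eye(3), np.eye(3), np.zeros(3)
X = np.array([[1.0, 2.0, 5.0]])
uv = simulate_image_points(K, R, t, X)
```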
2.4 Experimental setup
An experimental setup was constructed in order to verify the proposed approach (Fig. 5). The setup included a diode laser driver (Thorlabs ITC510) with a diode laser emitting at 785 nm with 100 mW power (Thorlabs L785P100) for excitation of the fluorescent dye.
The laser illuminates the sample from the side and excites the whole sample at once. This was achieved using three lenses: one convex lens and two cylindrical lenses. The convex lens focuses the diode laser beam, which has a large divergence angle, and the two cylindrical lenses expand the beam in two directions.
A high-precision fluorescence camera (MicroMax, Roper Scientific, New Jersey) was placed above the sample to capture the emitted fluorescent light during the experiment. The camera was connected to a PC and the fluorescence images were recorded using the camera's WINVIEW software. The images were then converted to text files and analyzed in MATLAB.
The camera had two possible motions: translation in one dimension and rotation in the x-y plane. The motion was controlled manually using an optical bench and a rotation stage, and was monitored using a laser pointer connected to the camera. The laser pointer enabled angle adjustment finer than the rotation stage accuracy (one degree).
The fluorescent dye used was DyLight 800, which has an absorption peak at 770 nm and an emission peak at 794 nm. The laser light was filtered using two band-pass filters: the first centered at 835 nm with a 70 nm spectral width, and the second centered at 845 nm with a 60 nm spectral width (Omega Filters, Custom Design No. 835BP70 and No. 845BP60, respectively).
Two experimental phantoms were used: a Perspex phantom and an agar gel phantom (Agarose, Ultra Pure, GIBCO). Both phantoms included the fluorescent dye arranged in a 3D point array. The Perspex phantom had 3 layers separated by 6 mm in height, with nine holes in each layer, a distance of 6 mm between holes in the x and y directions, and a hole diameter of 1 mm. The agar gel phantom had a similar structure.
One major technical problem was fixing the fluorescent dye in the agar gel without diffusion. This obstacle was overcome by conjugating the fluorescent dye to micro-beads . Amino beads (amino-modified microspheres, Bangs Laboratories) were conjugated to the fluorescent dye through an amine-reactive group. The beads were kept refrigerated and were washed before use in order to separate them from the buffer.
The agarose gel layers were prepared using 0.1 g agarose (Ultra Pure, GIBCO) per 10 ml of purified water. The holes in each layer were created using a clamp, as shown in Fig. 6. The agar was poured into a plastic box and left to set with the rods in place for half an hour. Afterwards, the beads were dripped into the holes using a 0.5-10 µl pipette. Finally, the layers were stacked one above the other and imaged with the fluorescence camera. The positions of the fluorescent dyes in the gel were set by the distance between the centers of the rods and by the thickness of the agar layers: the rod centers were 6 mm apart and the gel layer thickness was set to 6 mm.
The camera focal length was 31 mm with f-number 32, and the pixel size was 6.8 µm in each direction. The camera's spatial resolution at the sample stage was approximately 30 µm, with a depth of field of 4.96 mm. A depth of field of 14.96 mm, larger than the 12 mm distance between the lowest and highest layers, is obtained for a circle of confusion of three pixel lengths. Therefore, the expected blurring due to defocus can be up to 90 µm.
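Under the common thin-lens approximation DOF ≈ 2·N·c·s²/f² (valid when the focus distance s is much larger than f), figures of this order can be reproduced; the focus distance of roughly 105 mm used below is a back-calculated assumption for illustration, not a value reported for this setup.

```python
def depth_of_field_mm(f_mm, f_number, coc_mm, s_mm):
    """Approximate depth of field, 2*N*c*s^2/f^2 (thin lens, s >> f)."""
    return 2.0 * f_number * coc_mm * s_mm ** 2 / f_mm ** 2

# f = 31 mm, f/32, pixel 6.8 um; s = 105 mm is an assumed focus distance
dof_1px = depth_of_field_mm(31.0, 32.0, 0.0068, 105.0)      # ~5 mm
dof_3px = depth_of_field_mm(31.0, 32.0, 3 * 0.0068, 105.0)  # ~15 mm
```

Note that the depth of field scales linearly with the circle of confusion, which is why relaxing the criterion from one pixel to three pixels roughly triples it.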
3. Results
3.1 Camera calibration results and evaluation
The calibration results from the MATLAB calibration toolbox showed a root mean square error (RMSE) of 0.0757 mm between the estimated and real coordinates. The differences between adjacent points were calculated and compared to the theoretical grid size of 1 mm. The mean horizontal difference error was 0.0283±0.0198 mm, with a maximum difference of 0.1014 mm; the mean vertical difference error was 0.0550±0.0366 mm, with a maximum difference of 0.1565 mm. The mean distance to the epipolar line was 1.972±1.328 pixels, with a maximum distance of 5.704 pixels. All of the mean errors were less than 10% of the grid size, which is relatively small for a manual experimental system. The rotation angle between the cameras was 81.93° and the translation vector in the world reference frame was T = (73.8057, 2.8716, 0.9266) mm, similar to the suggested theoretical orientation.
3.2 Phantom experimental results
The algorithm was tested on several configurations of fluorescent dots: agar gel phantoms with 4, 8, 10, 14 and 20 fluorescent dots and a Perspex phantom with 27 fluorescent dots. All of the fluorescent dots were detected properly in both types of phantoms. Results for two configurations are presented as examples (Fig. 7 and Fig. 8): a 3-layer agar gel phantom with 20 fluorescent dots and a 3-layer Perspex phantom with 27 fluorescent dots. Figures 7 and 8 show the captured images, the results of the point extraction from the images and the layer separation, for the agar gel phantom and the Perspex phantom respectively. In addition, Fig. 9 shows an example of the 3D reconstruction of the fluorescent points from the agar gel phantom shown in Fig. 7.
3.3 Results evaluation
3.3.1 Agar gel phantoms results
In the agar gel phantoms, the reconstructed coordinate system was fixed so that the minimum point coordinates in the x and y directions formed the origin. The minimum x and y coordinates were used as the origin, rather than the theoretical position of the first grid point, because of the difficulty of precisely assessing where the calibration grid origin lay with respect to this point in a gel structure. The error was estimated as the distance between the reconstructed coordinates and the real coordinates of the fluorescent dots. Only the distances in the x and y coordinates were calculated, for two reasons: the difficulty of accurately controlling the thickness of the gel layers, and the fact that the layer thickness changed slightly when the layers were stacked. The distances were calculated in two ways. In the first, the distance was calculated relative to the real dot center coordinates. In the second, the distance was calculated relative to the theoretical dot boundaries (the circumference of the fluorescent source), with the distances of reconstructed points lying within the dot boundaries taken as zero. In addition, the differences between the reconstructed points were compared to the real distances between the points. The results evaluation for the agar gel phantoms is summarized in Table 1.
After the layer separation, each reconstructed fluorescent point is attributed to the nearest theoretical point center. Therefore, the 2D distance from the dot's center indicates the feasibility of making the right detection. If the intensity peak of the point is closer to the dot's boundary than to its center, the error is increased; hence, the 2D distance from the dot's boundary is also calculated. Finally, the differences error is calculated, because this error discounts the systematic error of the reconstructed coordinate system's origin.
The mean 2D distance was only 7% higher than the dots' radius and the maximum error was less than 1 mm from the fluorescent dots' boundaries. The distances between the points and their epipolar lines were calculated in order to assess the system noise level. The mean distance to the epipolar lines was 9.1±5.01 pixels with a median maximum distance of 15.68 pixels.
3.3.2 Perspex phantom results
The second type of phantom was a Perspex phantom. The Perspex phantom is a rigid body; therefore, its dimensions can be measured more accurately than those of the gel phantom, and a full results evaluation, including the reconstructed depth coordinate, is possible. The main disadvantage of the Perspex phantom is the reflection of the fluorescence light from its boundaries, which requires complicated noise reduction.
The results for the Perspex phantom with 27 points are summarized in Table 2. The 3D distances between the real and reconstructed point coordinates were calculated, as well as the distances within each layer (without the depth coordinate; referred to as the 2D distance in the table). In this experiment, the origin of the grid points with respect to the calibration grid origin was evaluated as (3,5,0); this point was found by comparing the sample origin to the calibration grid origin. Therefore, the errors were calculated in two ways: first, by fixing the reconstructed coordinate system to the evaluated minimum grid point x and y coordinates, and second, by fixing it to the minimum reconstructed x and y coordinates. The distances between the points and their epipolar lines were calculated in order to assess the noise level in the experiments: the mean distance to the epipolar lines was 13.12±4.97 pixels, with a maximum distance of 22.45 pixels.
The 3D mean distance error was within the boundaries of the holes (which were cylinders with 0.5 mm radius and 1 mm height). The mean 2D distance was also relatively small and within the boundaries of the dots. The maximum distance was slightly larger than the dot boundaries, but still relatively small (less than the 1 mm calibration grid size). The use of an evaluated starting point decreased all the calculated errors by 0.2-0.3 mm. As expected, all the 2D distance errors were smaller than those in the agar gel phantom experiments, because the phantom's manufacturing errors were smaller.
The mean and median maximum distances to the epipolar line were higher than in the agar gel phantom experiments. This may result from small deviations from the calibration orientation, the different numbers of points in the experiments, and the fact that more experiments were performed with the agar gel phantoms.
3.4 Simulation results with the experimental orientation
Using the calibration results, the experimental camera orientation was simulated. First, the simulation was run with the theoretical coordinates, without adding Gaussian noise. Then, the simulation was run with additive Gaussian noise with standard deviations of 3 and 5, 100 times for each standard deviation. In rare cases (less than 1%), when the additive noise was higher than the theoretical limitation discussed in the theoretical section, the matching algorithm failed. The evaluation of the results is summarized in Table 3.
4. Discussion
This study shows the feasibility of using photogrammetric methods for layer separation in the LPA system. In the phantom model experiments, all of the dots were detected correctly, with a mean distance error between the reconstructed 3D points and their theoretical coordinates of less than 10% relative to the horizontal grid size (6 mm). In addition, the theoretical simulations show the stability of the method in the presence of Gaussian noise with a standard deviation of 5, and indicate that the experimental accuracy is maintained at these noise levels.
When the origin of the reconstructed coordinate system is set at the minimum point coordinates, the minimum point's error is added to all the points. Indeed, the differences errors were smaller than the absolute errors in the agar gel phantoms, and the errors in the Perspex phantom were about 20% smaller when the origin was fixed to the real starting point of the grid. In a real system, the origin of the sample with respect to the origin of the grid would need to be measured precisely in order to reduce this systematic error. The small mean differences error calculated in the agar gel experiments (0.0513 mm, less than 1% of the grid size) indicates that there is no systematic error in the differences between adjacent points.
One of the difficulties in the experimental setup was stabilizing the system: any small change in the camera position results in inconsistent left and right images, which leads to unsuccessful calibration. The use of two cameras, or of a motorized translation and rotation stage, should increase the accuracy of the camera calibration. Despite the manual control and the use of only one camera, the calibration errors were small relative to the calibration grid size (7.6% RMSE). For better accuracy, the calibration should be performed with a smaller grid size.
In the experiment, the median maximum distance (MMD) to the epipolar lines was similar to the MMD calculated for the simulation with noise of 5 standard deviations. Therefore, the noise level in the experiments is similar to the simulation noise level, and the errors in the points' positions in the experiments are expected to be similar to those in the simulation. Indeed, the 3D distance error of the Perspex phantom, 0.5438 mm, was within the standard deviation of the simulation 3D distance error, 0.4353±0.2860 mm. However, the distribution of the errors in the experiments differs from that of the simulations, because the noise level was higher in the diagonal direction along which the dots lay than in the orthogonal direction. This was a consequence of the need to separate the fluorescent dots from one another in this direction.
The study shows the feasibility of the suggested approach for 3 layers. The number of layers that can be used depends on the grid size in the x, y and z directions. The system uses the gaps between the dots in the x-y plane to resolve depth; therefore, there is a trade-off between the minimal dz of the grid and the minimal dx and dy. Furthermore, the performance of the method depends strongly on the camera orientation and magnification.
Using more than one type of fluorescent dye increases the number of layers that can be visualized and decreases the minimal distance between the layers. For example, using 3 different fluorescent dyes with a repeating order of layers, 9 layers can be imaged instead of three. In this way, the minimal distance between the layers equals a third of the minimal distance achievable with only one type of fluorescent dye.
The phantom layers were thicker than the LPA membranes. However, decreasing the layer thickness as suggested in the previous paragraph requires hyper-spectral imaging. The main advantage of the suggested method is that the separation between the layers does not rely on the use of different markers; combining the method with multiple fluorophores enables the visualization of thinner layers. Future work will include further experiments with different fluorophore types.
5 . Conclusions
The study presents a proof of concept for separating a grid structure using photogrammetric methods. The suggested algorithm solves the screening problem of the layered membranes and, combined with the LPA technique, enables efficient and rapid detection of markers. Both the simulation calculations and the experimental results demonstrate the feasibility of the suggested method, with mean errors of less than 10%. The method can also serve other applications that require screening of a 3D array structure, enabling quick and simple imaging of a 3D grid structure.
References and links
1. G. Gannot, M. A. Tangrea, H. S. Erickson, P. A. Pinto, S. M. Hewitt, R. F. Chuaqui, J. W. Gillespie, and M. R. Emmert-Buck, “Layered peptide array for multiplex immunohistochemistry,” J. Mol. Diagn. 9(3), 297–304 (2007). [CrossRef]
2. G. Gannot, M. A. Tangrea, J. W. Gillespie, H. S. Erickson, B. S. Wallis, R. A. Leakan, V. Knezevic, D. P. Hartmann, R. F. Chuaqui, and M. R. Emmert-Buck, “Layered peptide arrays: high-throughput antibody screening of clinical samples,” J. Mol. Diagn. 7(4), 427–436 (2005). [CrossRef]
3. G. Gannot, M. A. Tangrea, A. M. Richardson, M. J. Flaig, S. M. Hewitt, E. M. Marcus, M. R. Emmert-Buck, and R. F. Chuaqui, “Layered expression scanning: multiplex molecular analysis of diverse life science platforms,” Clin. Chim. Acta 376(1-2), 9–16 (2007). [CrossRef]
4. H. Mitchell and I. Newton, “Medical photogrammetric measurement: overview and prospects,” ISPRS J. Photogramm. Remote Sens. 56(5-6), 286–294 (2002). [CrossRef]
5. J. Bouguet, “Camera calibration toolbox for MATLAB” (2004).
6. J. Heikkilä and O. Silvén, “A four-step camera calibration procedure with implicit image correction,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1997), pp. 1106–1112.
7. R. Cohen, Device and Method for Imaging of Layered Medium for Multiplex Screening of Markers in Cancer Tissue Biopsies (Tel Aviv University, 2010).
8. P. Rosin, “Unimodal thresholding,” Pattern Recognit. 34(11), 2083–2096 (2001). [CrossRef]
11. F. Meyer and S. Beucher, “Morphological segmentation,” J. Vis. Commun. Image Represent. 1(1), 21–46 (1990). [CrossRef]
12. Y. Ma, S. Soatto, J. Košecká, and S. S. Sastry, An Invitation to 3-D Vision: From Images to Geometric Models (Springer-Verlag, 2004).
13. A. Fusiello, E. Trucco, and A. Verri, “A compact algorithm for rectification of stereo pairs,” Mach. Vis. Appl. 12(1), 16–22 (2000). [CrossRef]
14. R. Westaway, S. Lane, and D. Hicks, “Remote sensing of clear-water, shallow, gravel-bed rivers using digital photogrammetry,” Photogramm. Eng. Remote Sensing 67, 1271–1282 (2001).
15. J. MacQueen, “Some methods for classification and analysis of multivariate observations,” in Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability (University of California Press, 1967), pp. 281–297.
16. H. Maas, A. Gruen, and D. Papantoniou, “Particle tracking velocimetry in three-dimensional flows,” Exp. Fluids 15(2), 133–146 (1993). [CrossRef]
17. A. M. De Grand, S. J. Lomnes, D. S. Lee, M. Pietrzykowski, S. Ohnishi, T. G. Morgan, A. Gogbashian, R. G. Laurence, and J. V. Frangioni, “Tissue-like phantoms for near-infrared fluorescence imaging system assessment and the training of surgeons,” J. Biomed. Opt. 11(1), 014007 (2006). [CrossRef]