Abstract

Fluorescence imaging can reveal functional, anatomical or pathological features of high interest in medical interventions. We present a novel method to record and display multispectral color and fluorescence images at video rate over the visible and near infrared range. The fast acquisition in multiple channels is achieved through a combination of spectral and temporal multiplexing in a system with two standard color sensors. Accurate color reproduction and high fluorescence unmixing performance are experimentally demonstrated with a prototype system in a challenging imaging scenario. Through spectral simulation and optimization we show that the system is sensitive to all dyes emitting in the visible and near infrared region without changing filters and that the SNR of multiple unmixed components can be kept high if parameters are chosen well. We propose a sensitive per-pixel metric of unmixing quality in a single image based on noise propagation and present a method to visualize the high-dimensional data in a 2D graph, where up to three fluorescent components can be distinguished and segmented.

© 2017 Optical Society of America

1. Introduction

Clinical studies evaluating tumor resection of high and low grade gliomas suggest that a higher extent of resection can be associated with longer life expectancy of patients [1]. Imaging technology is needed to visualize tumor margins and support the surgeon to perform a complete removal of the tumor for an improved surgical outcome.

Fluorescence can aid surgeons by highlighting anatomical, functional or pathological tissue. As an example, administration of 5-aminolevulinic acid causes accumulation of the fluorescent marker protoporphyrin IX in epithelia and cancerous tissue [2]. This technique has been introduced into clinical practice for the resection of gliomas [1,2] and urothelial carcinomas [3,4]. Further technologies based on novel molecular targeting approaches are being translated into clinical practice [5–7].

The first generation of surgical fluorescence imaging systems allows the visualization of either the fluorescence or the color reflectance image, but not of both at once. These systems are designed to image only a single dye, and no further spectral analysis is employed. Imaging multiple dyes simultaneously would make it possible to highlight different tissue types, such as nerves and tumor, in the same image [7]. Also, spectroscopic evaluation of the fluorescent signals can improve the quality of fluorescence information [8].

Recent preclinical and clinical research has shown that multispectral fluorescence imaging systems improve the sensitivity during glioma resection [8]. It has also been shown in preclinical studies that the combined information from multiple contrast agents can improve the tumor to background contrast [9]. However, the currently available imaging methods and technologies are not ready to fully capture the potential of multispectral fluorescence imaging.

We have recently introduced an imaging system that can simultaneously image multiple fluorophores together with conventional color images and display them at video rate [10]. In this system, the optical path is split into two paths, each imaged by a color sensor. The two paths are filtered with two complementary multiband filters, allowing one sensor to record fluorescence and the other to record a color image. However, due to the spectral gaps, some color information is lost and certain fluorophores cannot be imaged.

Based on this combined path and spectrum splitting concept we present here a combined 6-channel fluorescence and 6-channel color imaging system whose sensitivity covers the entire VIS/NIR spectral range with only minimal spectral gaps. This is possible by combining spectral and temporal multiplexing in two phases.

In this paper, we describe this method and prototype in detail. We experimentally demonstrate accurate color reproduction and high fluorescence unmixing performance in a challenging imaging scenario. Through spectral simulation and optimization we show that the system is sensitive to all dyes emitting in the visible and near infrared region without changing filters, and that the SNR of multiple unmixed components can be kept high if parameters are chosen well. We further propose a sensitive per-pixel metric of unmixing quality in a single image based on noise propagation, and present a method to visualize the high-dimensional data in a 2D graph, in which up to three fluorescent components can be distinguished and segmented.

2. Methods

2.1. Setup and principle of operation

The optical setup is shown in Fig. 1(a). In the imaging path, the object is imaged with a combined infinity corrected objective lens and zoom system (OL) (Z6 with 0.5× objective lens, Leica Microsystems, Germany) and split into two separate paths using a neutral 50:50 beam splitter cube (BS). The two paths have complementary spectral detection characteristics. In the first path (to sensor S1), the light passes through the multibandpass filter F1 (Semrock, USA) with the transmission spectrum shown in Fig. 1(g) and is imaged by the camera objective lens L1 (0.63× camera objective, Leica Microsystems, Germany) to the sensor S1 (PCO AG, Germany). The light in the complementary second path is filtered by the complementary multiband filter F2 (Semrock, USA) and imaged by sensor S2 (PCO AG, Germany), with spectra shown in Figs. 1(f) and 1(h). Relative spectral quantum efficiency curves of the sensors are measured by a spectral scan. The sensor is illuminated with monochrome light, produced by a Tungsten Halogen light source (HL-200, Ocean Optics, USA) filtered by a monochromator (SP-CM 110, Spectral Products, USA) and a calibrated power meter (PM100D with S150C detector, Thorlabs, USA) is used to determine the power at each wavelength. The data is converted from power to photon numbers and scaled to match the peak quantum efficiency specified by the manufacturer.

 

Fig. 1 (a) Schematic of the optical setup. The object is illuminated alternately with either of the two spectrally complementary lights LA or LB. OL: objective lens, BS: beam splitter, F1 and F2: complementary multiband filters, L1 and L2: imaging lenses, S1 and S2: color sensors. (b) RGGB pixel pattern of the color sensors. (c) Normalized LED spectra of LA and (d) normalized LED spectra of LB. (e) Transmission t of multiband filter F1. (f) Transmission t of multiband filter F2. (g) Quantum efficiency s of the sensor S1 and (h) of the sensor S2. (i) Spectral and temporal multiplexing scheme. In phase A, LA illuminates the object, S1 (with F1) records the fluorescence image, and S2 (with F2) records the reflectance image. Phase B is the opposite: LB illuminates the object, S1 (with F1) records the complementary reflectance image and S2 (with F2) records the complementary fluorescence image.


Two multi LED light sources (LedHUB, Omicron-Laserage Laserprodukte GmbH, Germany) are coupled into 3 mm liquid light guides. For the experiment, LEDs with peak wavelengths at 455 nm, 470 nm, 625 nm, 655 nm and two broadband LEDs with peak emission at 555 nm were used. All LEDs, whose spectra are displayed in Figs. 1(c) and 1(d), are individually filtered such that their emission spectra match the transmission bands of the multiband filters F1 and F2 in order to prevent leakage. Spectra of the individual LEDs were obtained from the manufacturer. The light of both sources is finally merged into an 8 mm liquid light guide (Lumatec, Germany) and subsequently directed with an aspheric condenser lens (Thorlabs, USA) to the object plane.

In Fig. 1(i) the multiplexing scheme is illustrated. Acquisition proceeds in two alternating phases A and B. In phase A, the object is illuminated by light LA and both cameras detect an image during the integration time tA. The spectrum of illumination in phase A is completely blocked by the multiband filter F1, so only fluorescence light passes to sensor S1 and therefore the fluorescence image A1 is detected in phase A with sensor S1. In contrast, the light LA passes through filter F2 and thus sensor S2 records the reflectance color image A2. Fluorescence emission light can also pass through filter F2, but is outshone by the much stronger reflected light. In phase B, the object is illuminated with light LB and images are recorded with both sensors. The sensor S2 in the blue path now records a fluorescence image B2, because all the excitation light LB is blocked by filter F2. The other sensor S1 records a color image B1.

After each cycle consisting of the phases A and B, the corresponding images from the two sensors are combined to produce one six-channel fluorescence and one six-channel reflectance image exhibiting only small spectral gaps.

2.2. Color imaging

In the present system, color reproduction is detrimentally affected both by the two multiband filters and by the illumination, which is shaped individually for each phase from numerous filtered narrowband LED light sources and thus differs considerably from natural daylight. We propose a correction algorithm to transform the original colors recorded by the sensors into the desired colors as perceived by the human eye under the CIE standard illuminant D65, representing average daylight [11].

A transformation from the camera 6-channel space with intensities $d_c$, $c = 1 \ldots 6$, to CIE XYZ intensities $e$ with components $e_i$, $i \in \{X, Y, Z\}$, is described by the linear coefficients $K_{i,c}$ and the offset value $K_0$ as follows:

$$e_i = \sum_{c=1}^{6} K_{i,c}\, d_c + K_0.$$
The intensities $e$ in CIE XYZ space are converted to the CIE L*a*b* intensity $f$ with components $f_j$ and $j \in \{L^*, a^*, b^*\}$, assuming D65 illumination. The transformation can be summarized for experimentally recorded color values as:

$$d^{\mathrm{exp}} \xrightarrow{\;K_{i,c},\,K_0\;} e^{\mathrm{exp}} \xrightarrow{\;\text{D65 illumination}\;} f^{\mathrm{exp}}$$
To quantify the perceived difference between two color values, the metric ΔE00 is employed, which is designed for a homogeneous perceived difference over the color space [12]. The color difference between experimental values $f^{\mathrm{exp}}_{\mathrm{tile}}$ of individual color tiles of a calibration target and the ideal value as perceived by the human eye under D65 illumination, $f^{\mathrm{D65,human\,eye}}_{\mathrm{tile}}$, is calculated as

$$\Delta E_{00,\mathrm{tile}} = \Delta E_{00}\!\left(f^{\mathrm{exp}}_{\mathrm{tile}},\; f^{\mathrm{D65,human\,eye}}_{\mathrm{tile}}\right)$$

in CIE L*a*b* space.

Calibration images of a multi-tile color target (RezChecker, Image Science Associates, USA) were recorded with the camera system with an integration time of 5 ms. Using the color values $d^{\mathrm{exp}}_{\mathrm{tile}}$ extracted for each tile of the target from the calibration images, we seek the parameters $K_{i,c}$ and $K_0$ that minimize the average perceived color difference $\overline{\Delta E_{00}} = \frac{1}{30}\sum_{\mathrm{tile}=1}^{30} \Delta E_{00,\mathrm{tile}}$ between the corrected experimental intensities $f^{\mathrm{exp}}_{\mathrm{tile}}$ and the intensities $f^{\mathrm{D65,human\,eye}}_{\mathrm{tile}}$ ideally perceived by the human eye under D65 illumination in the CIE L*a*b* space [12]. The reference values $f^{\mathrm{D65,human\,eye}}_{\mathrm{tile}}$ for the individual tiles are provided by the manufacturer:

$$\underset{K_{i,c},\,K_0}{\arg\min}\;\left(\frac{1}{30}\sum_{\mathrm{tile}=1}^{30} \Delta E_{00,\mathrm{tile}}\!\left(d^{\mathrm{exp}}_{\mathrm{tile}} \xrightarrow{\;K_{i,c},\,K_0\;} f^{\mathrm{exp}}_{\mathrm{tile}};\; f^{\mathrm{D65,human\,eye}}_{\mathrm{tile}}\right)\right)$$
We solve the optimization problem using the GlobalSearch algorithm in combination with fmincon interior-point optimization from the Matlab Global Optimization Toolbox (Mathworks, USA) for the dual sensor system ($d^{\mathrm{exp}} = (d_1, \ldots, d_6)$) and for the two single sensor systems, recording only with sensor S1 ($d^{\mathrm{exp}} = (d_1, \ldots, d_3)$) or sensor S2 ($d^{\mathrm{exp}} = (d_4, \ldots, d_6)$). Corrected colors are calculated with the obtained parameters $K_{i,c}$ and $K_0$ according to Eq. (1) and converted to the RGB color space for display.

After applying the above transformation, we evaluate the residual errors ΔE00. Meaningful limits of ΔE00 depend on several factors, such as the observer viewing time, and therefore different thresholds are reported in the literature. Generally, ΔE00 > 7.0 indicates a clearly perceptible color difference, ΔE00 < 3.5 indicates colors that are perceived as only somewhat different, while ΔE00 < 1 indicates a largely imperceptible difference.
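The color correction of Eq. (1) is a per-pixel affine map that can be fitted from calibration tiles. The following Python/NumPy sketch replaces the paper's Matlab GlobalSearch/fmincon minimization of ΔE00 with a plain least-squares fit in XYZ space and evaluates the simpler Euclidean ΔE76 in L*a*b*; the per-channel offset generalizing the scalar K0 and all tile values are illustrative assumptions.

```python
import numpy as np

# D65 reference white (2 degree observer), standard CIE constants
WHITE_D65 = np.array([95.047, 100.0, 108.883])

def xyz_to_lab(xyz, white=WHITE_D65):
    """Convert CIE XYZ to CIE L*a*b* under the given reference white."""
    t = xyz / white
    d = 6.0 / 29.0
    f = np.where(t > d**3, np.cbrt(t), t / (3 * d**2) + 4.0 / 29.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def fit_color_correction(d_tiles, xyz_ref):
    """Least-squares fit of K (3x6) and a per-channel offset mapping
    6-channel camera tile values to CIE XYZ (a simplification of the
    DeltaE00 optimization described in the text)."""
    A = np.hstack([d_tiles, np.ones((len(d_tiles), 1))])  # augment with offset column
    coef, *_ = np.linalg.lstsq(A, xyz_ref, rcond=None)    # shape (7, 3)
    return coef[:6].T, coef[6]                            # K, K0

def apply_correction(d_tiles, K, K0):
    """Apply the fitted affine correction (Eq. (1)) to camera values."""
    return d_tiles @ K.T + K0

def mean_delta_e76(lab1, lab2):
    """Mean Euclidean color difference in L*a*b* (stand-in for DeltaE00)."""
    return float(np.linalg.norm(lab1 - lab2, axis=-1).mean())
```

In practice the linear fit would serve as the starting point for the ΔE00-based refinement, since ΔE00 weighs perceptual differences more faithfully than a Euclidean distance.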

2.3. Unmixing, noise and residuals

Let $M \in \mathbb{R}^{N_c \times N_f}$ be a mixing matrix relating the vector $i \in \mathbb{R}^{N_f}$ of fluorescent light intensities originating from $N_f$ fluorophores to the vector $d \in \mathbb{R}^{N_c}$ of counts detected in $N_c$ spectral channels recorded by the system:

$$d = M i,$$

where $N_f \le N_c$ and $\sum_{c=1}^{N_c} M_{c,f} = 1$. A solution $\tilde{i}$ for the fluorescence intensities $i$ can then be obtained with $U = M^+$, a generalized inverse of $M$:

$$\tilde{i} = M^+ d = U d.$$
The noise covariance matrix $\Sigma_{\tilde{i}} \in \mathbb{R}^{N_f \times N_f}$ is given by

$$\Sigma_{\tilde{i}} = U \Sigma_d U^\top,$$

where $\Sigma_d \in \mathbb{R}^{N_c \times N_c}$ is the measurement noise covariance, which we assume to be diagonal, meaning that the errors on different channels are uncorrelated. In contrast, $\Sigma_{\tilde{i}}$ is not generally diagonal. Its $j$-th diagonal element

$$\Sigma_{\tilde{i}\,j,j} = \sum_{c=1}^{N_c} U_{j,c}^2\, \Sigma_{d\,c,c} = \sum_{c=1}^{N_c} U_{j,c}^2\, \sigma_{d_c}^2$$
is the variance of the $j$-th unmixed fluorescence intensity $\tilde{i}_j$, where $\sigma_{(\cdot)}$ denotes the standard deviation. We can then define the signal to noise ratio (SNR) of the unmixed fluorescent images as the ratio of $\tilde{i}$ to the standard deviation of the noise on $\tilde{i}$. Taking $(\mathrm{diag}(\Sigma_{\tilde{i}}))^{1/2}$ as a diagonal matrix containing the standard deviations representing the noise of $\tilde{i}$, we define $\mathrm{SNR} \in \mathbb{R}^{N_f}$, a vector containing the SNR of each unmixed fluorophore, as

$$\mathrm{SNR} = \left(\mathrm{diag}(\Sigma_{\tilde{i}})\right)^{-1/2} \tilde{i}.$$
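The unmixing and noise propagation above translate directly into a few lines of linear algebra. The following sketch (Python/NumPy rather than the system's own implementation) unmixes one pixel via the pseudoinverse and propagates a diagonal channel-noise covariance into per-fluorophore SNRs; the example mixing matrix is synthetic.

```python
import numpy as np

def unmix(M, d):
    """Solve d = M i for the fluorophore intensities via U = M^+ (the
    Moore-Penrose pseudoinverse); returns the estimate and U."""
    U = np.linalg.pinv(M)
    return U @ d, U

def unmixed_snr(M, d, sigma_d_sq):
    """SNR of each unmixed fluorophore. For a diagonal measurement
    covariance Sigma_d, the diagonal of U Sigma_d U^T is simply
    (U**2) @ sigma_d_sq, the per-component unmixed variance."""
    i_tilde, U = unmix(M, d)
    var_i = (U ** 2) @ sigma_d_sq
    return i_tilde / np.sqrt(var_i)
```

For a well-conditioned 4-channel, 2-fluorophore system, `unmix` recovers noiseless mixtures exactly, while `unmixed_snr` quantifies how the channel noise spreads into each component.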

The plausibility of a particular unmixed fluorescence intensity $\tilde{i}$ may be evaluated by measuring the residual $r = d - \tilde{d}$, where we define

$$\tilde{d} = M \tilde{i} = M U d.$$
Its noise covariance $\Sigma_{\tilde{d}} \in \mathbb{R}^{N_c \times N_c}$ is given by

$$\Sigma_{\tilde{d}} = M \Sigma_{\tilde{i}} M^\top = M U \Sigma_d U^\top M^\top,$$
and the noise covariance of the residual is

$$\Sigma_r = \Sigma_d + M U \Sigma_d U^\top M^\top - M U \Sigma_d - \Sigma_d U^\top M^\top.$$

Analogous to Eq. 10, we define a vector of standardized residuals $\bar{r} \in \mathbb{R}^{N_c}$, calculated as the ratio of the residual to the standard deviation of the error on the residual:

$$\bar{r} = \left(\mathrm{diag}(\Sigma_r)\right)^{-1/2} r.$$

The noise of the channel measurements $\sigma_{d_c}^2$ has two chief sources: the Poisson noise $\sigma_{d_c,\mathrm{Poisson}}^2$ due to the photon nature of the signal, and the readout noise $\sigma_0^2$ of the sensor [13]:

$$\sigma_{d_c}^2 = \sigma_{d_c,\mathrm{Poisson}}^2 + \sigma_0^2.$$
The readout noise is specified by the manufacturer as $\sigma_0^2 = 2.79$. If all pixels are unmixed well, the deviation is only due to the described noise, and we additionally assume $d_c \sim \mathcal{N}(\mu_{d_c}, \mu_{d_c}) + \mathcal{N}(0, \sigma_0^2)$, where $\mu_{(\cdot)}$ denotes the mean and $\mathcal{N}(\mu, \sigma^2)$ denotes a normal distribution with mean $\mu$ and variance $\sigma^2$, then the squared norm of the vector of standardized residuals is $\chi^2$-distributed with $N_c - N_f$ degrees of freedom [14, pp. 106–9].

An accurate estimation of $\sigma_{d_c,\mathrm{Poisson}}^2$ requires multiple acquisitions of a still scene. For real time applications, when only a single measurement is available, we can approximate $\sigma_{d_c,\mathrm{Poisson}}^2 \approx d_c$, which we use for the residual values reported here.
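The standardized residuals can thus be computed per pixel from a single frame. A Python sketch under the approximation var(d_c) ≈ d_c + σ0²; note that the residual covariance collapses to (I − MU) Σ_d (I − MU)ᵀ, which is algebraically identical to the expanded four-term form given above.

```python
import numpy as np

def standardized_residuals(M, d, sigma0_sq=2.79):
    """Per-channel standardized residuals for a single pixel, using the
    single-frame noise approximation var(d_c) ~ d_c + sigma0^2."""
    U = np.linalg.pinv(M)
    i_tilde = U @ d
    r = d - M @ i_tilde                        # residual r = d - d_tilde
    Sigma_d = np.diag(np.maximum(d, 0.0) + sigma0_sq)
    A = np.eye(len(d)) - M @ U                 # Sigma_r = A Sigma_d A^T
    Sigma_r = A @ Sigma_d @ A.T
    sd = np.sqrt(np.clip(np.diag(Sigma_r), 1e-12, None))
    return r / sd
```

The squared norm of the returned vector can then be compared against a χ² distribution with N_c − N_f degrees of freedom to flag implausibly unmixed pixels.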

For pixels with sufficient fluorescence intensity, $\sigma_{d_c,\mathrm{Poisson}}^2 = \mu_{d_c} \gg \sigma_0^2$, so $\sigma_0^2$ can be ignored. In the simulations in section 2.7, we therefore simplify Eq. 8 as

$$\sigma_{\tilde{i}_f}^2 = \Sigma_{\tilde{i}\,f,f} \approx \sum_{c=1}^{N_c} U_{f,c}^2\, d_c = \sum_{c=1}^{N_c} \sum_{\varphi=1}^{N_f} U_{f,c}^2\, M_{c,\varphi}\, i_\varphi.$$

This describes the noise $\sigma_{\tilde{i}_f}$ of an unmixed fluorescent component $f$ as a function of the mixing and unmixing matrices and the (simulated) fluorescent intensities $i_\varphi$. It thus becomes clear that the noise $\sigma_{\tilde{i}_f}$ in a single unmixed channel $f$ depends on the intensities $i_\varphi$ of all the fluorophores $\varphi$.

2.4. System spectral sensitivity

To analyze the response of the system to different dyes in the VIS/NIR range we resort to numerical simulation. A schematic of the procedure with the results is presented in Fig. 3.

 

Fig. 2 (a) Uncorrected color image produced by averaging the color channels of the two sensors (for visibility, the black and the white tiles are scaled to match the ideal RGB average brightness). (b) Corrected image: best correction result using a linear transformation with an additional offset. (c) Comparison of color reproduction methods for all 30 tiles. For each tile, 5 different processing scenarios are shown: (A) Uncorrected colors (average of the two sensors; for visibility, the black and the white tiles are scaled to match the ideal RGB average brightness). (B) Optimal correction using the combined information from sensors S1 and S2 and transformation to XYZ color space. (C) Best color correction using only sensor S1. (D) Best color correction using only sensor S2. (E) Ideal value as perceived under D65 illumination by the human eye. (d) Boxplot of the CIEDE2000 errors ΔE00 after optimal correction using the color information from only sensor S1, only sensor S2 and both sensors. The dotted line at 1 marks the limit below which differences are imperceptible; the dashed line at 3.5 indicates the limit below which color differences are hardly distinguishable.


 

Fig. 3 Simulation of system sensitivity. (a) Fluorochrome emission spectrum (Gaussian distribution). (b) Combination of filter spectral transmission and sensor sensitivity. (c) Sensor signal intensity for the six channels, calculated from the spectra in (a) and (b). S1 and S2 simulate monochrome sensors by combining the channels R1, G1, B1 and R2, G2, B2, respectively.


The spectral sensitivity of the system is calculated by multiplying the transmission spectrum of the filters F1 and F2 with the quantum efficiency of the respective color channels as shown in Fig. 1. All spectral values are sampled on a numerical grid between 395 and 900 nm with a bin width of 1 nm.

To simulate the signal intensity in the sensor taking into account the spectral emission of the fluorochromes, a Gaussian spectral photon emission profile with parametric center wavelength λc = 450 . . . 850 nm and σ = 20 nm is assumed as visualized in Fig. 3(a).

Assuming a constant number of photons per dye, the transmission efficiency and the spectral signatures are calculated for each peak wavelength by multiplying the simulated emission light intensity with the spectral sensitivity of each channel.

The wavelength-dependent effective detection efficiency of the color sensors, defined as the ratio of detected photons to total photons, is calculated as a weighted average of the number of photons detected in each channel to account for the mosaic Bayer pattern (twice as many green as red and blue pixels).
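The simulation described above reduces to multiplying a Gaussian emission profile with the combined filter-and-QE curve of each path and summing over wavelength. The sketch below uses made-up rectangular passbands with small guard gaps and a flat 50% quantum efficiency purely for illustration; the measured curves of F1, F2 and the sensors from Fig. 1 would be substituted in practice.

```python
import numpy as np

wavelengths = np.arange(395, 901)  # 1 nm grid, as in the simulation

def comb_transmission(bands):
    """Idealized multiband filter: transmission 1 inside the passbands."""
    t = np.zeros_like(wavelengths, dtype=float)
    for lo, hi in bands:
        t[(wavelengths >= lo) & (wavelengths < hi)] = 1.0
    return t

# Assumed (illustrative) complementary passbands with small guard gaps
T1 = comb_transmission([(430, 455), (500, 540), (590, 630), (690, 830)])
T2 = comb_transmission([(460, 495), (545, 585), (635, 685), (835, 900)])
QE = 0.5 * np.ones_like(wavelengths, dtype=float)  # flat QE, an assumption

def gaussian_emission(center, sigma=20.0):
    """Gaussian photon emission spectrum, normalized to one photon."""
    g = np.exp(-0.5 * ((wavelengths - center) / sigma) ** 2)
    return g / g.sum()

def detection_efficiency(center):
    """Fraction of emitted photons detected by the dual-sensor system."""
    e = gaussian_emission(center)
    return float(np.sum(e * (T1 + T2) * QE))
```

Scanning `center` from 450 to 850 nm traces a sensitivity curve analogous to Fig. 3(c); the dips correspond to the guard gaps where neither filter transmits.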

To put the results in perspective, we also calculate the detection efficiency for the case when only one sensor is used, as in [10].

2.5. Fluorescent test targets

To evaluate the system's ability to unmix various fluorescent components, we conducted experiments in which a phantom consisting of six vials with solid agar preparations of three strongly spectrally overlapping dyes in various mixtures was imaged.

The fluorescent dyes Atto532, Atto565 and Atto610 (ATTO-TEC GmbH, Germany), whose peak emission wavelengths span less than 100 nm, were selected. Six vials with a total volume of 500 μl each were prepared, three containing a single dye and three containing equal concentrations of two dyes, as detailed in Table 1. A filler solution was prepared by bringing to a boil 1.2 g agar powder (Sigma-Aldrich, Germany) dissolved in 100 ml distilled water, to which 5 ml of 20% Intralipid solution (Sigma-Aldrich, Germany) were added during cooling. Each vial was filled up to a volume of 500 μl with the filler solution. Before solidifying, the solution in each vial was mixed and vortexed to ensure a homogeneous dye distribution. The vials A–F (cf. Table 1) were arranged clockwise in a bespoke circular holder under the imaging system.


Table 1. Peak absorption (λabs), peak fluorescence (λfl) and concentration of dyes in vials A to F.

2.6. Fluorescence image processing and evaluation

A temporal series of 100 fluorescence images was recorded with each sensor separately using the respective fluorescence excitation illumination with an integration time of 50 ms.

The raw fluorescence images were (1) background subtracted, (2) converted to electron counts with a conversion factor, (3) demosaiced and (4) registered. To relate camera counts to Poisson statistics, the camera signal must be calibrated to the number of electrons detected in each pixel [13].

Therefore, in step (2) the signals are multiplied with a camera specific conversion factor specified by the manufacturer. In the demosaicing procedure (3), each repeated basic 2-by-2 pixel mosaic pattern is collapsed onto a single pixel with 4 spectral channels in order to conserve the raw camera signal and avoid spatial smoothing. For the same reason, a nearest neighbor algorithm was selected for the registration process (4). Following registration, images of the individual fluorophores were unmixed according to Eq. 6 using the 8 fluorescence channels.
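Step (3), collapsing each 2×2 RGGB cell into one 4-channel pixel without interpolation, can be sketched as follows (Python/NumPy; assumes the top-left pixel of the mosaic is R):

```python
import numpy as np

def collapse_bayer(raw):
    """Collapse each 2x2 RGGB mosaic cell of a raw frame into a single
    pixel with 4 spectral channels (R, G, G*, B). No interpolation is
    performed, so the raw counts and their Poisson statistics are kept."""
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0, "expect an even-sized Bayer frame"
    return np.stack(
        [raw[0::2, 0::2],   # R
         raw[0::2, 1::2],   # G
         raw[1::2, 0::2],   # G*
         raw[1::2, 1::2]],  # B
        axis=-1,
    )
```

The output has half the spatial resolution of the raw frame but four spectral channels per pixel, matching the processing described above.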

The spectral signatures of the dyes are obtained by fitting the signal of the respective pure dye with an absolute least squares algorithm on the averaged image data. The data used for the fit was selected by manually marking the central region of the respective vials containing pure dye.

The angle α in channel space between two spectral signatures sA and sB is calculated from the inner product of the normalized signatures, cos(α) = 〈sA, sB〉 / (‖sA‖ ‖sB‖). It is an intuitive metric to estimate how spectrally different the dye signatures are.
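With normalized signatures this metric is essentially one line; a Python sketch:

```python
import numpy as np

def spectral_angle_deg(s_a, s_b):
    """Angle between two spectral signatures in channel space (degrees);
    0 deg means identical direction, 90 deg means orthogonal signatures."""
    s_a = np.asarray(s_a, dtype=float) / np.linalg.norm(s_a)
    s_b = np.asarray(s_b, dtype=float) / np.linalg.norm(s_b)
    return float(np.degrees(np.arccos(np.clip(np.dot(s_a, s_b), -1.0, 1.0))))
```

Larger angles indicate more distinguishable dyes; heavily overlapping dyes yield small angles and correspondingly noisier unmixing.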

A plausibility check of the unmixing is conducted by computing the standardized residual statistic for each pixel individually on the basis of a single frame.

To compare the dual sensor system with a system consisting only of sensor S1 or sensor S2, only the 4 channel data from sensor S1 (respectively S2) was processed in exactly the same way as for the dual sensor system.

2.7. Optimal dyes and filters

Using the SNR analysis it is possible to find

  1. the best possible SNR for $n_{\mathrm{dyes}}$ dyes and the optimal set of dyes for the fixed system configuration, and
  2. the best possible SNR for different bandpass filters given the combination of dyes selected before for the performance demonstration.

The SNR is calculated for all dyes using Eq. 15, assuming a Gaussian photon emission spectrum (σ = 20 nm) for each dye, sampled on the fixed grid ranging from 395 nm to 900 nm with a bin width of 1 nm as described in section 2.4.

It is assumed that each dye emits 1000 e⁻ per sensor channel per phase, i.e. 8000 potentially detectable photons in total, with the same number of photons in each phase. Finally, the fitness function of the optimization algorithm maximizes the inverted root mean square $\overline{|\mathrm{SNR}|}$ over all dyes $f$:

$$\overline{|\mathrm{SNR}|} = \left(\frac{1}{N_f} \sum_{f}^{N_f} \mathrm{SNR}_f^{-2}\right)^{-1/2}.$$

Calculating the inverted root mean square of the individual SNR values results in a small $\overline{|\mathrm{SNR}|}$ if any of the $\mathrm{SNR}_f$ is low, and puts an additional weight favoring similar $\mathrm{SNR}_f$ values.
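This fitness function can be written compactly; a Python sketch:

```python
import numpy as np

def combined_snr(snr_per_dye):
    """Inverted root mean square of per-dye SNR values: dominated by the
    worst dye, and favoring sets of dyes with similar SNRs."""
    s = np.asarray(snr_per_dye, dtype=float)
    return float(np.mean(s ** -2.0) ** -0.5)
```

For example, `combined_snr([10, 10])` returns 10, whereas a single weak dye pulls the combined value down toward its own SNR, which is exactly the behavior the optimization exploits.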

If all photons were already separated upon detection, the SNR of each dye would only be limited by Poisson statistics (sensor noise is negligible), leading to $\mathrm{SNR}_{\mathrm{ideal}} = \sqrt{n_{\mathrm{photons}}} = \sqrt{8000} \approx 89$. The results of the SNR calculation are reported as absolute SNR or relative to the ideal SNR in decibels: $\mathrm{SNR}\,[\mathrm{dB}] = 10 \log_{10}\!\left(\mathrm{SNR} / \mathrm{SNR}_{\mathrm{ideal}}\right)$.

To find the optimal set of $n_{\mathrm{dyes}}$ dyes for the given system, we employ the GlobalSearch algorithm in combination with the fmincon interior-point algorithm as implemented in Matlab's Global Optimization Toolbox (Mathworks, USA), with the constraint that the peak emission wavelengths of the dyes lie between 435 and 860 nm and are at least 10 nm apart. To find a good starting point for the optimization, an intuitive guess is made by testing all permutations of $n_{\mathrm{dyes}}$ dyes with emission peaks at (some of) the filter transmission band centers (440, 485, 521, 559, 607, 649, 694 and 809 nm) and picking the one producing the best SNR. Additionally, for the simplest case of $n_{\mathrm{dyes}} = 2$, the SNR over the complete parameter space of λ1 = 435...860 nm and λ2 = 435...860 nm is calculated to verify the optimization results.

To answer the second question, namely how good a system could be for the given set of dyes, an optimal filter design was sought by shifting the edges of the multiband filter pairs with up to four transmission bands each. The lowest band edge was restricted to 500 nm and the highest band edge to 800 nm, because outside this range the selected experimental set of dyes hardly emits, so the result of an optimization would not be meaningful. A spectral guard gap in which both filters block around each band edge is necessary to prevent bleedthrough in a real setup. This gap was set to 5 nm, with the edge wavelength varied in the optimization lying at the center of the gap. Additionally, the minimum spacing between two band edges was set to 20 nm, ensuring a width of a single band of at least 15 nm.

The optimization was implemented using the patternsearch, GlobalSearch and ga (genetic algorithm) algorithms of the Matlab Global Optimization Toolbox (Mathworks, USA) with standard parameters. The initial guess for the optimization was an equidistant distribution of the bands over the available spectral range with additional blocking regions on both sides.

2.8. Unmixing and segmentation in a low-dimensional space

In order to visualize and explore the spectral information in the fluorescence image data, it is projected to a lower dimensional subspace before unmixing or segmenting. If the subspace is chosen well, most of the relevant information and the resolving power of the system to separate dyes are maintained. Here we apply principal component analysis to find a suitable subspace basis.

The multispectral fluorescence images $d$ are normalized per pixel as $n = d / \sum_c d_c$. The normalized image $n$ is processed by unweighted principal component analysis (PCA) using singular value decomposition. The resulting PCA basis vectors ($p_1, p_2, \ldots$) form an orthonormal basis. Here the image data is analyzed in the low-dimensional subspace formed by the first two PCA components:

$$q = [p_1, p_2]^\top n$$
The image is unmixed into the fluorescent component intensities $i$ of the dyes A, B and C by solving

$$\begin{bmatrix} S_{A,1} & S_{B,1} & S_{C,1} \\ S_{A,2} & S_{B,2} & S_{C,2} \\ 1 & 1 & 1 \end{bmatrix} i = \begin{bmatrix} q_1 \\ q_2 \\ 1 \end{bmatrix} \left(\sum_c d_c\right).$$

The spectral signature matrix $S$ of the dyes in PCA space is obtained as $S = [p_1, p_2]^\top M$. The third row of Eq. 18 results from the normalization condition, as the image was normalized per pixel.

For segmentation, a mask for the pixels of $q$ matching user defined spectral conditions is created and the total intensity $\sum_c d_c$ of the masked pixels is displayed.
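The projection and three-component unmixing described in this section can be sketched end to end in Python/NumPy (a stand-in for the original implementation; the mixing matrix and pixel mixtures in the usage example are synthetic):

```python
import numpy as np

def pca_unmix(d_pixels, M):
    """Unmix up to three fluorophores in the 2D PCA subspace.
    d_pixels: (Npix, Nc) fluorescence counts per pixel.
    M: (Nc, 3) mixing matrix with columns normalized to sum to 1."""
    total = d_pixels.sum(axis=1, keepdims=True)          # sum_c d_c per pixel
    n = d_pixels / np.clip(total, 1e-12, None)           # per-pixel normalization
    # First two principal components of the normalized data (via SVD)
    _, _, Vt = np.linalg.svd(n - n.mean(axis=0), full_matrices=False)
    P = Vt[:2]                                           # (2, Nc) basis [p1, p2]^T
    q = n @ P.T                                          # projected pixels
    S = P @ M                                            # dye signatures in PCA space
    A = np.vstack([S, np.ones((1, 3))])                  # append normalization row
    rhs = np.hstack([q, np.ones((len(q), 1))]) * total   # [q1, q2, 1] * sum_c d_c
    return np.linalg.solve(A, rhs.T).T                   # (Npix, 3) intensities
```

For noiseless mixtures of three distinct signatures, the normalized pixels lie in a 2D plane, so the first two principal components capture the data exactly and the 3×3 system recovers the true intensities.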

3. Results and discussion

3.1. Color imaging

The evaluation of the system’s color imaging performance under illumination for fluorescence excitation is presented in Fig. 2, which shows that the corrected image exhibits much better color reproduction compared to the uncorrected average of the two sensors.

A direct comparison between the original, corrected and ideal colors is presented in Fig. 2(c), together with the results obtained when using only a single sensor. It can be seen that using both sensors allows close to ideal reproduction of the colors, while restricting to a single sensor yields a worse correction.

This is also reflected in the distribution of the ΔE00 color difference metric presented in Fig. 2(d) as a box plot. For the dual sensor approach, 75% of the tiles have an error ΔE00 ≤ 3.5, while for both single sensor systems more than 50% of the values show an error ΔE00 ≥ 3.5 with sensor S1 outperforming S2.

The proposed color correction method can be implemented easily in a real-time system because it only consists of a single matrix multiplication per pixel. The computationally expensive optimization using the metric ΔE00 to calculate the matrix elements is only performed once for each light and exposure configuration.

Both single sensor systems show inferior performance and a clear deviation between optimal and reproduced colors, since each of the color images exhibits major spectral gaps in which color information is lost. As the dual sensor system shows only minor spectral gaps throughout the visible range, the colors can be corrected satisfactorily. Looking at the gray tiles in Fig. 2, it seems that color tones are reproduced better than the brightness of the color calibration tiles. If this effect is caused by vignetting of the imaging system, it can easily be corrected by recording the individual tiles sequentially at the same location by moving the target. Otherwise, the correction of color tones and brightness can be split to improve the color correction. Moreover, color calibration targets with more, application specific color and gray scale values are expected to further improve the color correction for specific applications. The fitness function aims to match the measured color target values as closely as possible to the ideal impression as seen under standardized D65 illumination. Alternatively, the correction can be performed so that the colors are perceived as under surgical illumination. Additionally, tuning the illumination spectrum in spectral ranges that are not used for excitation of a fluorescent dye can be included in the optimization algorithm.

3.2. Spectral sensitivity analysis

Spectral sensitivity analysis for fluorescence imaging revealed that the dual-sensor system can detect any dye with a Gaussian emission spectrum (σ = 20 nm) and a peak wavelength between 450 and 850 nm. In this emission range, the total system sensitivity ranges between 3.9% and 22% as shown in Fig. 3(c). Sensitivity is best (above 15%) and relatively even between 498 and 607 nm. A shallow trough (still above 10%) at around 650 nm is caused by larger than usual gaps between the complementary bands of the two filters. Sensitivity is lowest for dyes with peak emission between 700 nm and 800 nm, where a blocking band in filter F1 is not matched by a complementary transmission band in filter F2. Further into the NIR, sensitivity reaches its maximum value of 22% for dyes with a peak around 811 nm, where a wide transmission band of F1 is matched by a blocking band of F2.

In contrast, the two single sensor systems have regions of low sensitivity throughout the spectrum, corresponding to the blocking bands of the respective filter, and have overall a much lower sensitivity than the dual sensor system. This restricts the use of the single sensor systems to certain emission wavelengths and prevents their use as a universal imaging system like the dual sensor system. However, because the single sensor systems have independent phases for fluorescence and reflectance imaging, the timing scheme can be adapted for longer fluorescence integration times to improve sensitivity.

The results of the simulation are in good agreement with the design requirements of the system to detect any dye within the VIS/NIR range.

3.3. Fluorescence imaging and unmixing

Raw and unmixed fluorescence images are shown in Fig. 4. The three fluorochromes shown in Fig. 4(c) emit in a spectral range narrower than 85 nm with strong overlap of excitation and emission spectra. Figure 4(b) shows the mixed raw fluorescence intensity as recorded by the system. Spectral unmixing successfully separates the fluorescent components of each dye, displayed in Fig. 4(d), with practically zero cross-talk from one component to another.

 

Fig. 4 Fluorescent images of the test vials containing the fluorescent dyes Atto532, Atto565 and Atto610. (a) Image of the sum of all channels (one acquisition). Vial labels A – F match with Table 1. (b) Images of each channel for both sensors (only one of the two G pixels is shown; individual acquisition). (c) Excitation spectra (dotted lines) and emission spectra (solid lines) of Atto532 (green), Atto565 (yellow) and Atto610 (red). (d) Spectrally unmixed fluorescent dye components using channels from both sensors (individual acquisition). (f) Standard deviation of 100 sequentially acquired and unmixed fluorescent images (as in (d)). (e) r values to assess the unmixing quality. The region in the white box is magnified for better visibility. (g) Unmixing results using only the sensor channels R1, G1, G1* and B1 of a single image and (h) unmixing results using only the sensor channels R2, G2, G2* and B2 of a single image. Images of (g) and (h) also contain strong negative values, but the scale is limited to 0 as the lower limit. (table) Maximum scaling value of the intensity images.


The noise of the unmixed components Atto532, Atto565 and Atto610, calculated as the standard deviation of 100 sequential acquisitions, is shown in Fig. 4(f). As expected, the noise is higher where the fluorescence intensity is higher, as predicted by Eq. (15).

Moreover, also in accordance with Eq. (15), noise transfers from one unmixed component to the others. This is evident in image areas containing a vial with pure dye, where noise is present not only in the corresponding unmixed component, but also in the other two.
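Equation (15) itself is not reproduced in this section; generically, shot noise propagates through the pseudo-inverse used for unmixing as Cov(c) = M⁺ Σ (M⁺)ᵀ, which both scales with intensity and couples the components. A minimal numpy sketch with illustrative numbers:

```python
import numpy as np

# Hypothetical 4x2 mixing matrix: rows are channels, columns are dye
# signatures (values illustrative, not the calibrated system ones).
M = np.array([
    [0.8, 0.3],
    [0.5, 0.6],
    [0.2, 0.7],
    [0.1, 0.4],
])
Mp = np.linalg.pinv(M)                # per-pixel unmixing operator

c_true = np.array([100.0, 50.0])      # dye intensities in photons
signal = M @ c_true                   # mixed channel intensities
Sigma = np.diag(signal)               # shot noise: variance = mean

# Linear error propagation through the unmixing: Cov(c) = Mp Sigma Mp^T.
# The off-diagonal terms are the noise transfer between components.
cov_c = Mp @ Sigma @ Mp.T
print(np.sqrt(np.diag(cov_c)))        # per-component noise (std. dev.)
```

The nonzero off-diagonal entries of `cov_c` are exactly the cross-component noise transfer described above.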

This is a fundamental limitation of the imaging scenario and system concept. However, monochrome cameras with single-band filters would not be able to separately detect such strongly overlapping fluorophores at all. If the angle in channel space between the dyes is sufficient, the dyes can be unmixed in the two-sensor system. In contrast, the single color sensor systems cannot unmix this combination of dyes, because the mixing is too strong. The comparison demonstrates that the additional multispectral information is worth more than just the sum of the photons, since the additional channel information allows separating dyes more reliably. It should be noted, however, that the comparison of component angles is between a four-dimensional and an eight-dimensional space.

Here, we use the fluorescent signal of pure samples to fit the signatures. In vivo fluorescence images can be very challenging to unmix because the spectral signature is often unknown. Pre-calibration procedures can help to interpret the data effectively. Blind source unmixing algorithms such as independent component analysis (ICA) [15,16], non-negative matrix factorization (NMF) [17] or phasor analysis [18] can help to find the spectral fingerprints in vivo. However, most of these approaches have drawbacks. For example, ICA may find dye signatures that are in reality combinations of the true ones, and it can yield negative signature coefficients. NMF, in contrast, enforces non-negative contributions, but as a consequence image noise causes a positive bias. Generally, unsupervised blind unmixing algorithms are problematic in real-time clinical imaging applications. Thus, extensive experimental data is needed for specific intraoperative scenarios to learn about the spectral signatures and their behavior in tissue.
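The positivity constraint of NMF mentioned above can be seen directly in the classic Lee–Seung multiplicative updates, sketched here on random illustrative data (this is a generic NMF sketch, not the unmixing used in the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data matrix V (channels x pixels), to be factorized as
# V ~ W H with non-negative signatures W and abundances H.
V = rng.uniform(0.1, 1.0, size=(8, 200))
W = rng.uniform(0.1, 1.0, size=(8, 3))
H = rng.uniform(0.1, 1.0, size=(3, 200))

err0 = np.linalg.norm(V - W @ H)      # initial reconstruction error

# Lee-Seung multiplicative updates: every factor stays non-negative by
# construction, which is why noise produces a positive bias.
eps = 1e-12
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(V - W @ H)       # error decreases monotonically
print(err0, err)
```

Because the updates are purely multiplicative, any noise that would push a coefficient negative is instead absorbed as a positive contribution, the bias noted above.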

Overall, the experiments show that individual images recorded with the system can be successfully unmixed, even if the dye emissions strongly overlap.

3.4. Standardized residuals

Figure 4(e) shows the standardized error of the residuals r for each pixel individually. If the assumed noise and unmixing models are correct, then r follows a χ²(5) distribution, which makes it a quality measure independent of image intensity. While a few pixels are expected to fall outside the scaling limits, any clusters or structures indicate a failure of the unmixing. For example, high r values are present on the edges of the vials. These might be caused by the sharp edges of the target in combination with the mosaic Bayer microfilter pattern on the sensor, or by mis-registration between the two sensors. This shows the high sensitivity of the analysis.

If pixels contain fluorescence from a dye that is not accounted for in the unmixing, r will also show high values. The analysis may fail, however, if an unknown spectral signature can be expressed as a linear combination of the signatures for which the data is unmixed. This is rather unlikely, but future research could identify the effect of known tissue absorbers and known autofluorescent molecules in tissue and estimate their effect on the unmixing of the dyes and on the analysis in general. The lower the number of dyes to be unmixed for, the more informative the r calculation; conversely, the higher the number of dyes, the greater the chance that an unknown disturbing dye falls into the subspace spanned by the spectral signatures of the known dyes. We have also assumed that both sensor noise and photon noise are normally distributed, which, strictly speaking, neither is. This only limits the interpretation of r as χ²-distributed; it does not conflict with r being a quality measure for unmixing.
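With 8 channels and 3 dyes, the weighted least-squares fit leaves 8 − 3 = 5 degrees of freedom, which is where χ²(5) comes from. A numerical sketch with a hypothetical signature matrix (the exact form of r in the paper may differ in normalization details):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8x3 signature matrix: 8 channels, 3 dyes, so the
# noise-normalized residual has 8 - 3 = 5 degrees of freedom.
M = rng.uniform(0.05, 1.0, size=(8, 3))

def standardized_residual(s, sigma):
    """Sum of squared, noise-normalized residuals of the unmixing fit."""
    W = np.diag(1.0 / sigma)
    c, *_ = np.linalg.lstsq(W @ M, W @ s, rcond=None)
    return float(np.sum(((s - M @ c) / sigma) ** 2))

# Pixels consistent with the model: r averages the degrees of freedom.
c_true = np.array([3.0, 1.0, 2.0])
sigma = 0.1 * np.ones(8)
r_vals = [standardized_residual(M @ c_true + sigma * rng.standard_normal(8), sigma)
          for _ in range(2000)]
print(np.mean(r_vals))   # close to 5, the chi-square mean
```

A pixel whose r lands far in the tail of χ²(5) is then flagged as one where the unmixing model does not hold.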

We conclude that the r analysis is a potent tool to evaluate the unmixing performance.

3.5. Comparison with single-sensor systems

The unmixing performance of the dual-sensor system is compared with the two respective single-sensor systems (presented in [10]) for the phantom scenario. The intensities of the fluorescent component images unmixed using the information recorded with a single sensor in Figs. 4(g) and 4(h) show dramatically higher noise compared to the dual-sensor approach.

The improvement in performance with respect to the single-sensor systems can be attributed to the higher number of detected photons and an increase in separation angles between the different dye signatures in channel space. These are calculated on the basis of the spectral signatures and presented in Table 2.
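The separation angle between two dyes is simply the angle between their signature vectors in channel space; a short sketch with hypothetical 4-channel (single-sensor) and 8-channel (dual-sensor) signatures:

```python
import numpy as np

def separation_angle(a, b):
    """Angle in degrees between two dye signatures in channel space."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical signatures of two dyes; values are illustrative only.
a_s1 = np.array([0.9, 0.4, 0.1, 0.0])                  # dye A, sensor S1
b_s1 = np.array([0.8, 0.5, 0.2, 0.1])                  # dye B, sensor S1
a_s12 = np.concatenate([a_s1, [0.1, 0.7, 0.3, 0.0]])   # dye A, S1 + S2
b_s12 = np.concatenate([b_s1, [0.6, 0.2, 0.1, 0.3]])   # dye B, S1 + S2

print(round(separation_angle(a_s1, b_s1), 1))    # single-sensor angle
print(round(separation_angle(a_s12, b_s12), 1))  # dual-sensor angle
```

The larger the angle, the better conditioned the mixing matrix and the less noise amplification in the unmixing.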


Table 2. Upper part: angle in degrees between the different dye spectral signatures in the dual sensor system (S1 + S2) and the two single sensor systems (sensor S1 or sensor S2). Lower part: the percentage of photons η originating from each dye and detected by each system.

A single-sensor system based on S2 exhibits a large separation angle between the signatures of Atto532 and Atto610 and can therefore resolve these two dyes, for which it also receives the lion's share of the photons, but it cannot resolve Atto565. The addition of sensor S1 confers the ability to also resolve Atto565, an ability that a single-sensor system based on S1 alone lacks, as evident from its small separation angles.

To summarize, this example shows that the dual sensor system not only covers the entire spectral VIS/NIR region for color and fluorescence imaging and thus collects more photons than a single sensor system, but also exhibits superior unmixing performance.

3.6. Optimization

The calculated SNR values for any combination of two dyes with peak emission wavelength between 435 and 860 nm and the present set of filters (cf. Figs. 1(c) and 1(d)) are presented in Fig. 5(a). Values are only displayed on the lower left triangle. The optimal emission center wavelengths are found to be 559 nm and 812 nm for two dyes.


Fig. 5 Optimization of filters and fluorescent compounds. (a) Optimal selection of two fluorescent dyes: relative SNR (in dB) of each of two Gaussian dyes and combined |SNR|. (b) Optimal selection of 2 – 6 fluorescent dyes: Visualization of the optimized center wavelengths for the given multiband filters F1 and F2. (c) Optimal design of filters: visualization of the optimized band structure for given dyes Atto532, Atto565 and Atto610 given sensor sensitivity curves, having the number of filter bands as a parameter.


This matches the optimal combination of two dyes identified with the optimization algorithm described in section 2.7 and presented in Fig. 5(b), together with the optimal solutions for sets of up to 6 dyes and the corresponding absolute SNR values.

Optimal dye emission wavelengths depending on the number of dyes, together with the corresponding relative SNR values, are collected in Table 3 and Fig. 5. Unsurprisingly, the center wavelengths of the identified dyes lie within the filters' transmission bands, and the maximum SNR falls with increasing number of dyes. Compared to the ideal case, the decrease in SNR is only 5.9 dB for two dyes emitting at 559 nm and 812 nm. This drop in SNR is expected: in our two-phase setup, photons are only recorded for half the time, causing an expected SNR drop of 1.5 dB compared to SNRideal. Figure 3 shows that the overall dye detection efficiency can realistically be up to 25%, leading to a drop of at least 3.0 dB. Adding these two effects, the SNR is expected to be at least 4.5 dB below the ideal SNR because of the phases, the emission filters and the quantum efficiency of silicon sensors. The presented result for two dyes shows that most of the decrease in SNR is due to a lower number of photons being detected, and not due to unmixing.
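This bookkeeping can be checked directly. Note that the quoted values imply the convention SNR_dB = 10 log10(SNR) with a shot-noise-limited SNR proportional to the square root of the detected photon number (our reading of the text, stated here as an assumption):

```python
import math

def snr_drop_db(photon_fraction):
    """SNR penalty in dB when only a fraction of the photons is detected.

    Assumes shot-noise-limited SNR ~ sqrt(N) and the convention
    SNR_dB = 10 * log10(SNR), as implied by the quoted values."""
    return 10 * math.log10(math.sqrt(photon_fraction))

drop_phases = snr_drop_db(0.5)          # photons recorded half the time
drop_qe = snr_drop_db(0.25)             # ~25% overall detection efficiency
print(round(drop_phases, 1))            # -1.5
print(round(drop_qe, 1))                # -3.0
print(round(drop_phases + drop_qe, 1))  # -4.5
```

The remaining gap to the observed 5.9 dB is then attributable to the unmixing itself.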


Table 3. Optimal choices of peak emission wavelengths of Nf dyes and the corresponding relative SNR values.

With an increasing number of dyes in the imaging scenario, the SNR deteriorates as expected. This has two reasons. First, a high number of dyes does not allow placing all of them at wavelengths where the system sensitivity is highest. Second, and more importantly, the higher the number of dyes, the stronger the mixing between the dyes and thus the larger the decrease in SNR.

Additionally, we have searched for the optimal filter band structure and optimal number of bands for the challenging example of the three Atto fluorochromes. Solutions were calculated for 2 to 8 complementary filter bands (see Fig. 5(c) and Table 4) as described in section 2.7. The optimal result is obtained for 3 bands, matching the number of dyes. This filter configuration improves the SNR by 2.75 dB over the current system with the F1 and F2 multiband filters. For a higher number of filter bands, the SNR deteriorates only slightly.


Table 4. Relative SNR in dB of the optimization results varying the alignment of the filter bands for the two sensors (number of bands Nb as a fixed parameter) for the algorithm that performed best (PS: PatternSearch, GS: GlobalSearch, GA: Genetic Algorithm). The reported filter bands characterize the optimal transmission bands (bands reported alternating between F1 and F2).

The optimal solution shows that it is best to align each filter band with the emission maximum of a dye and to align the edges of the bands with the intersections of the emission profiles. This corresponds to an intuitive choice. In contrast, the filter pair we used has four small gaps and one large gap in the emission wavelength range of the three dyes, because it was chosen as a versatile filter for any dye in the VIS/NIR range and not tailored to the specific set of dyes used.

Even though the spectral modeling of the system was performed carefully, it depends on numerous assumptions. To parameterize the emission profile of a real dye, a Gaussian-shaped spectrum with a fixed width was chosen. Real dye emission spectra have a nontrivial shape caused by intra- and inter-molecular interactions, and a more appropriate spectral emission model could improve the overall optimization results. Moreover, the number of photons emitted from each dye was assumed to be equal; should the numbers of emitted photons differ by orders of magnitude, this should be appropriately modeled.

The fitness function combines the individual SNR values into an absolute SNR. It favors similar SNR values and avoids optimization results in which the SNR of one dye is dramatically better than that of another. Alternative ways to combine the SNR values could take into account various combinations of detected photon numbers or a stability criterion with respect to dye emission or filter wavelengths. Favoring solutions where small wavelength changes hardly influence the SNR can be particularly beneficial when translating to real biomedical applications. Alternatively, the fitness function could seek a solution that minimizes the condition number of the mixing matrix for robust unmixing.

Overall, the optimization results show that the presented concept is suitable for combined fluorescence and reflectance imaging in intraoperative applications. Further improvements could potentially be made by additionally taking into account the excitation process in the different phases, thereby further separating the dyes in channel space.

3.7. Segmentation and unmixing using principal component analysis

PCA allows finding a good basis for projecting the dye signals onto a lower-dimensional subspace with good spectral separation. The projected data can be visualized in histograms, allowing intuitive exploration of the data. This visualization allows segmenting certain spectral signatures as well as unmixing up to three components.

The 2D histogram of the first two PCA components of a single fluorescence image is presented in Fig. 6(a). The pixels from vials containing only one fluorescent dye (pure dye) form the corners of the triangle (A, C, E), while the pixels from the vials containing mixtures of dyes (B, D, F) lie along its sides.


Fig. 6 Principal component analysis: (a) the 2D histogram with the first two PCA component vectors as basis (thresholded at 1% of the maximum signal). (b) Segmented images of the fluorescent signals (regions A to F as in the PCA space shown in (a)). (c) Fluorescent signals unmixed in the PCA space spanned by the first two component vectors. The pure dye signals spanning the unmixing triangle in (a) are set by projecting the pure dye signatures onto the first two PCA component vectors.


Segmented images of the vials A to F shown in Fig. 6(b) were obtained by selecting all pixels within the circles in the histogram of Fig. 6(a). The pure signatures A, C and E span a triangle which can also be used for unmixing, as shown in Fig. 6(c). Even though the spectral information is reduced by projecting into a 2D subspace, the resolving power of the system is still sufficient, with the advantage that the data can be visualized for the user.
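The projection and histogram step can be sketched in a few lines of numpy; the dye signatures and mixture data below are synthetic stand-ins for the recorded 8-channel pixels:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 5000 pixels x 8 channels, each pixel a random mixture
# of three hypothetical dye signatures plus a little noise.
M = rng.uniform(0.0, 1.0, size=(8, 3))          # dye signatures
weights = rng.dirichlet(np.ones(3), size=5000)  # per-pixel dye fractions
pixels = weights @ M.T + 0.01 * rng.standard_normal((5000, 8))

# PCA via SVD of the mean-centred data: rows of Vt are the components.
centred = pixels - pixels.mean(axis=0)
_, _, Vt = np.linalg.svd(centred, full_matrices=False)
proj = centred @ Vt[:2].T    # project onto the first two components

# 2D histogram of the projected pixels: pure dyes map to the corners of
# a triangle, mixtures fall along its sides and interior.
hist, xedges, yedges = np.histogram2d(proj[:, 0], proj[:, 1], bins=64)
print(proj.shape, hist.sum())
```

Thresholding and selecting regions in this histogram is exactly the segmentation step, and the triangle spanned by the projected pure signatures provides the 2D unmixing basis.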

The analysis demonstrates that differences in the emission spectrum of a fluorophore can be seen in the 2D histogram plot of a subspace of the multidimensional sensor space. If the subspace is appropriately chosen, most of the information is maintained and thus different fluorophores can be separated in real time. For subspace segmentation it is helpful to predefine the components that carry the significant information for the user. For example, changes in the spectral signature due to absorption by blood could be displayed in one histogram dimension, while autofluorescence is visualized in the second. Such a real-time histogram visualization of the multispectral fluorescence data can help the user to judge whether a signal originates from the molecule of interest, whether it originates from another autofluorescent molecule, or whether the light spectrum was altered by absorption.

4. Conclusion

We have presented a concept of using color sensors for multispectral fluorescence imaging and demonstrated its feasibility. The concept is designed for real-time intraoperative imaging applications. While current clinical systems record either fluorescence or color images, the presented method records a smooth image stream of both with only two alternating phases. The system can be easily adapted to surgical microscopes, rigid endoscopes, laparoscopes and cystoscopes. Combining state-of-the-art image quality with mechanical flexibility and small form factors allows realizing chip-on-the-tip endoscopes. Alternatively, the method would also be suitable for fast imaging applications in fluorescence microscopy.

The system is able to record the two phases quickly, as no mechanical components need to be moved, and thus allows smooth visual perception. The frame rate is essentially limited by the data transfer rate, the readout time of the sensor, and the fact that short integration times result in increased imaging noise. On the other hand, the temporal delay between two subsequent phases can result in movement artifacts, which can be overcome by image processing algorithms. In order to translate the technology to the clinic, further preclinical and clinical experiments are being planned.

Funding

Fraunhofer-Gesellschaft; German Ministry of Education and Research (BMBF) (031B0219).

Acknowledgements

We thank Mr. Vaibhav Dixit for measuring the spectral quantum efficiency of the sensors.

References and links

1. N. Sanai and M. S. Berger, “Glioma extent of resection and its impact on patient outcome,” Neurosurgery 62, 753–764 (2008).

2. W. Stummer, U. Pichlmeier, T. Meinel, O. D. Wiestler, F. Zanella, and H. J. Reulen, “Fluorescence-guided surgery with 5-aminolevulinic acid for resection of malignant glioma: a randomised controlled multicentre phase III trial,” Lancet Oncol. 7, 392–401 (2006).

3. M. Kriegmair, R. Baumgartner, R. Knüchel, H. Stepp, F. Hofstädter, and A. Hofstetter, “Detection of early bladder cancer by 5-aminolevulinic acid induced porphyrin fluorescence,” J. Urol. 155, 105–110 (1996).

4. D. Jocham, H. Stepp, and R. Waidelich, “Photodynamic diagnosis in urology: state-of-the-art,” Eur. Urol. 53, 1138–1150 (2008).

5. Q. T. Nguyen and R. Y. Tsien, “Fluorescence-guided surgery with live molecular navigation-a new cutting edge,” Nat. Rev. Cancer 13, 653–662 (2013).

6. A. L. Vahrmeijer, M. Hutteman, J. R. van der Vorst, C. J. H. van de Velde, and J. V. Frangioni, “Image-guided cancer surgery using near-infrared fluorescence,” Nat. Rev. Clin. Oncol. 10, 507–518 (2013).

7. M. A. Whitney, J. L. Crisp, L. T. Nguyen, B. Friedman, L. A. Gross, P. Steinbach, R. Y. Tsien, and Q. T. Nguyen, “Fluorescent peptides highlight peripheral nerves during surgery in mice,” Nat. Biotechnol. 29, 352–356 (2011).

8. P. A. Valdés, F. Leblond, V. L. Jacobs, B. C. Wilson, K. D. Paulsen, and D. W. Roberts, “Quantitative, spectrally-resolved intraoperative fluorescence imaging,” Sci. Rep. 2, 798 (2012).

9. K. M. Tichauer, K. S. Samkoe, K. J. Sexton, J. R. Gunn, and B. W. Pogue, “Improved tumor contrast achieved by single time point dual-reporter fluorescence imaging,” J. Biomed. Opt. 17, 066001 (2012).

10. N. Dimitriadis, B. Grychtol, L. Maertins, T. Behr, G. Themelis, and N. C. Deliolanis, “Simultaneous real-time multicomponent fluorescence and reflectance imaging method for fluorescence-guided surgery,” Opt. Lett. 41, 1173 (2016).

11. CIE, “Colorimetry - part 2: CIE standard illuminants CIE S 014-2/E:2008 (ISO 11664-2:2007),” Tech. rep., International Organization for Standardization (2008).

12. M. R. Luo, G. Cui, and B. Rigg, “The development of the CIE 2000 colour-difference formula: CIEDE2000,” Color Res. Appl. 26, 340–350 (2001).

13. S. B. Mondal, S. Gao, N. Zhu, R. Liang, V. Gruev, and S. Achilefu, “Real-time fluorescence image-guided oncologic surgery,” Adv. Cancer Res. 124, 171–211 (2014).

14. R. J. Barlow, Statistics: A Guide to the Use of Statistical Methods in the Physical Sciences (John Wiley & Sons, 2008).

15. J. Glatz, N. C. Deliolanis, A. Buehler, D. Razansky, and V. Ntziachristos, “Blind source unmixing in multi-spectral optoacoustic tomography,” Opt. Express 19, 3175–3184 (2011).

16. A. Hyvärinen, J. Karhunen, and E. Oja, “Independent component analysis,” Analysis 26, 505 (2002).

17. R. A. Neher, M. Mitkovski, F. Kirchhoff, E. Neher, F. J. Theis, and A. Zeug, “Blind source separation techniques for the decomposition of multiply labeled fluorescence images,” Biophys. J. 96, 3791–3800 (2009).

18. F. Fereidouni, A. N. Bader, and H. C. Gerritsen, “Spectral phasor analysis allows rapid and reliable unmixing of fluorescence microscopy spectral images,” Opt. Express 20, 12729 (2012).

[Crossref] [PubMed]

Pogue, B. W.

K. M. Tichauer, K. S. Samkoe, K. J. Sexton, J. R. Gunn, and B. W. Pogue, “Improved tumor contrast achieved by single time point dual-reporter fluorescence imaging,” J. Biomed. Opt. 17, 066001 (2012).
[Crossref] [PubMed]

Razansky, D.

Reulen, H. J.

W. Stummer, U. Pichlmeier, T. Meinel, O. D. Wiestler, F. Zanella, and H. J. Reulen, “Fluorescence-guided surgery with 5-aminolevulinic acid for resection of malignant glioma: a randomised controlled multicentre phase III trial,” Lancet Oncol. 7, 392–401 (2006).
[Crossref] [PubMed]

Rigg, B.

M. R. Luo, G. Cui, and B. Rigg, “The development of the CIE 2000 colour-difference formula: CIEDE2000,” Color Res. Appl. 26, 340–350 (2001).
[Crossref]

Roberts, D. W.

P. A. Valdés, F. Leblond, V. L. Jacobs, B. C. Wilson, K. D. Paulsen, and D. W. Roberts, “Quantitative, spectrally-resolved intraoperative fluorescence imaging,” Sci. Rep. 2, 798 (2012).
[Crossref] [PubMed]

Samkoe, K. S.

K. M. Tichauer, K. S. Samkoe, K. J. Sexton, J. R. Gunn, and B. W. Pogue, “Improved tumor contrast achieved by single time point dual-reporter fluorescence imaging,” J. Biomed. Opt. 17, 066001 (2012).
[Crossref] [PubMed]

Sanai, N.

N. Sanai and M. S. Berger, “Glioma extent of resection and its impact on patient outcome,” Neurosurgery 62, 753–764 (2008).
[Crossref] [PubMed]

Sexton, K. J.

K. M. Tichauer, K. S. Samkoe, K. J. Sexton, J. R. Gunn, and B. W. Pogue, “Improved tumor contrast achieved by single time point dual-reporter fluorescence imaging,” J. Biomed. Opt. 17, 066001 (2012).
[Crossref] [PubMed]

Steinbach, P.

M. A. Whitney, J. L. Crisp, L. T. Nguyen, B. Friedman, L. A. Gross, P. Steinbach, R. Y. Tsien, and Q. T. Nguyen, “Fluorescent peptides highlight peripheral nerves during surgery in mice,” Nat. Biotechnol. 29, 352–356 (2011).
[Crossref] [PubMed]

Stepp, H.

D. Jocham, H. Stepp, and R. Waidelich, “Photodynamic diagnosis in urology: state-of-the-art,” Eur. Urol. 53, 1138–1150 (2008).
[Crossref]

M. Kriegmair, R. Baumgartner, R. Knüchel, H. Stepp, F. Hofstädter, and A. Hofstetter, “Detection of early bladder cancer by 5-aminolevulinic acid induced porphyrin fluorescence,” J. Urol. 155, 105–110 (1996).
[Crossref] [PubMed]

Stummer, W.

W. Stummer, U. Pichlmeier, T. Meinel, O. D. Wiestler, F. Zanella, and H. J. Reulen, “Fluorescence-guided surgery with 5-aminolevulinic acid for resection of malignant glioma: a randomised controlled multicentre phase III trial,” Lancet Oncol. 7, 392–401 (2006).
[Crossref] [PubMed]

Theis, F. J.

R. A. Neher, M. Mitkovski, F. Kirchhoff, E. Neher, F. J. Theis, and A. Zeug, “Blind source separation techniques for the decomposition of multiply labeled fluorescence images,” Biophys. J. 96, 3791–3800 (2009).
[Crossref] [PubMed]

Themelis, G.

Tichauer, K. M.

K. M. Tichauer, K. S. Samkoe, K. J. Sexton, J. R. Gunn, and B. W. Pogue, “Improved tumor contrast achieved by single time point dual-reporter fluorescence imaging,” J. Biomed. Opt. 17, 066001 (2012).
[Crossref] [PubMed]

Tsien, R. Y.

Q. T. Nguyen and R. Y. Tsien, “Fluorescence-guided surgery with live molecular navigation-a new cutting edge,” Nat. Rev. Cancer 13, 653–662 (2013).
[Crossref] [PubMed]

M. A. Whitney, J. L. Crisp, L. T. Nguyen, B. Friedman, L. A. Gross, P. Steinbach, R. Y. Tsien, and Q. T. Nguyen, “Fluorescent peptides highlight peripheral nerves during surgery in mice,” Nat. Biotechnol. 29, 352–356 (2011).
[Crossref] [PubMed]

Vahrmeijer, A. L.

A. L. Vahrmeijer, M. Hutteman, J. R. van der Vorst, C. J. H. van de Velde, and J. V. Frangioni, “Image-guided cancer surgery using near-infrared fluorescence,” Nat. Rev. Clin. Oncol. 10, 507–518 (2013).
[Crossref] [PubMed]

Valdés, P. A.

P. A. Valdés, F. Leblond, V. L. Jacobs, B. C. Wilson, K. D. Paulsen, and D. W. Roberts, “Quantitative, spectrally-resolved intraoperative fluorescence imaging,” Sci. Rep. 2, 798 (2012).
[Crossref] [PubMed]

van de Velde, C. J. H.

A. L. Vahrmeijer, M. Hutteman, J. R. van der Vorst, C. J. H. van de Velde, and J. V. Frangioni, “Image-guided cancer surgery using near-infrared fluorescence,” Nat. Rev. Clin. Oncol. 10, 507–518 (2013).
[Crossref] [PubMed]

van der Vorst, J. R.

A. L. Vahrmeijer, M. Hutteman, J. R. van der Vorst, C. J. H. van de Velde, and J. V. Frangioni, “Image-guided cancer surgery using near-infrared fluorescence,” Nat. Rev. Clin. Oncol. 10, 507–518 (2013).
[Crossref] [PubMed]

Waidelich, R.

D. Jocham, H. Stepp, and R. Waidelich, “Photodynamic diagnosis in urology: state-of-the-art,” Eur. Urol. 53, 1138–1150 (2008).
[Crossref]

Whitney, M. A.

M. A. Whitney, J. L. Crisp, L. T. Nguyen, B. Friedman, L. A. Gross, P. Steinbach, R. Y. Tsien, and Q. T. Nguyen, “Fluorescent peptides highlight peripheral nerves during surgery in mice,” Nat. Biotechnol. 29, 352–356 (2011).
[Crossref] [PubMed]

Wiestler, O. D.

W. Stummer, U. Pichlmeier, T. Meinel, O. D. Wiestler, F. Zanella, and H. J. Reulen, “Fluorescence-guided surgery with 5-aminolevulinic acid for resection of malignant glioma: a randomised controlled multicentre phase III trial,” Lancet Oncol. 7, 392–401 (2006).
[Crossref] [PubMed]

Wilson, B. C.

P. A. Valdés, F. Leblond, V. L. Jacobs, B. C. Wilson, K. D. Paulsen, and D. W. Roberts, “Quantitative, spectrally-resolved intraoperative fluorescence imaging,” Sci. Rep. 2, 798 (2012).
[Crossref] [PubMed]

Zanella, F.

W. Stummer, U. Pichlmeier, T. Meinel, O. D. Wiestler, F. Zanella, and H. J. Reulen, “Fluorescence-guided surgery with 5-aminolevulinic acid for resection of malignant glioma: a randomised controlled multicentre phase III trial,” Lancet Oncol. 7, 392–401 (2006).
[Crossref] [PubMed]

Zeug, A.

R. A. Neher, M. Mitkovski, F. Kirchhoff, E. Neher, F. J. Theis, and A. Zeug, “Blind source separation techniques for the decomposition of multiply labeled fluorescence images,” Biophys. J. 96, 3791–3800 (2009).
[Crossref] [PubMed]

Zhu, N.

S. B. Mondal, S. Gao, N. Zhu, R. Liang, V. Gruev, and S. Achilefu, “Real-time fluorescence image-guided oncologic surgery,” Adv. Cancer Res. 124, 171–211 (2014).
[Crossref] [PubMed]

Adv. Cancer Res. (1)

S. B. Mondal, S. Gao, N. Zhu, R. Liang, V. Gruev, and S. Achilefu, “Real-time fluorescence image-guided oncologic surgery,” Adv. Cancer Res. 124, 171–211 (2014).
[Crossref] [PubMed]

Analysis (1)

A. Hyvärinen, J. Karhunen, and E. Oja, “Independent component analysis,” Analysis 26, 505 (2002).

Biophys. J. (1)

R. A. Neher, M. Mitkovski, F. Kirchhoff, E. Neher, F. J. Theis, and A. Zeug, “Blind source separation techniques for the decomposition of multiply labeled fluorescence images,” Biophys. J. 96, 3791–3800 (2009).
[Crossref] [PubMed]

Color Res. Appl. (1)

M. R. Luo, G. Cui, and B. Rigg, “The development of the CIE 2000 colour-difference formula: CIEDE2000,” Color Res. Appl. 26, 340–350 (2001).
[Crossref]

Eur. Urol. (1)

D. Jocham, H. Stepp, and R. Waidelich, “Photodynamic diagnosis in urology: state-of-the-art,” Eur. Urol. 53, 1138–1150 (2008).
[Crossref]

J. Biomed. Opt. (1)

K. M. Tichauer, K. S. Samkoe, K. J. Sexton, J. R. Gunn, and B. W. Pogue, “Improved tumor contrast achieved by single time point dual-reporter fluorescence imaging,” J. Biomed. Opt. 17, 066001 (2012).
[Crossref] [PubMed]

J. Urol. (1)

M. Kriegmair, R. Baumgartner, R. Knüchel, H. Stepp, F. Hofstädter, and A. Hofstetter, “Detection of early bladder cancer by 5-aminolevulinic acid induced porphyrin fluorescence,” J. Urol. 155, 105–110 (1996).
[Crossref] [PubMed]

Lancet Oncol. (1)

W. Stummer, U. Pichlmeier, T. Meinel, O. D. Wiestler, F. Zanella, and H. J. Reulen, “Fluorescence-guided surgery with 5-aminolevulinic acid for resection of malignant glioma: a randomised controlled multicentre phase III trial,” Lancet Oncol. 7, 392–401 (2006).
[Crossref] [PubMed]

Nat. Biotechnol. (1)

M. A. Whitney, J. L. Crisp, L. T. Nguyen, B. Friedman, L. A. Gross, P. Steinbach, R. Y. Tsien, and Q. T. Nguyen, “Fluorescent peptides highlight peripheral nerves during surgery in mice,” Nat. Biotechnol. 29, 352–356 (2011).
[Crossref] [PubMed]

Nat. Rev. Cancer (1)

Q. T. Nguyen and R. Y. Tsien, “Fluorescence-guided surgery with live molecular navigation-a new cutting edge,” Nat. Rev. Cancer 13, 653–662 (2013).
[Crossref] [PubMed]

Nat. Rev. Clin. Oncol. (1)

A. L. Vahrmeijer, M. Hutteman, J. R. van der Vorst, C. J. H. van de Velde, and J. V. Frangioni, “Image-guided cancer surgery using near-infrared fluorescence,” Nat. Rev. Clin. Oncol. 10, 507–518 (2013).
[Crossref] [PubMed]

Neurosurgery (1)

N. Sanai and M. S. Berger, “Glioma extent of resection and its impact on patient outcome,” Neurosurgery 62, 753–764 (2008).
[Crossref] [PubMed]

Opt. Express (2)

Opt. Lett. (1)

Sci. Rep. (1)

P. A. Valdés, F. Leblond, V. L. Jacobs, B. C. Wilson, K. D. Paulsen, and D. W. Roberts, “Quantitative, spectrally-resolved intraoperative fluorescence imaging,” Sci. Rep. 2, 798 (2012).
[Crossref] [PubMed]

Other (2)

CIE, “Colorimetry - part 2: CIE standard illuminants CIE S 014-2/E:2008 (ISO 11664-2:2007),” Tech. rep., International Organization for Standardization (2008).

R. J. Barlow, Statistics: A Guide to the Use of Statistical Methods in the Physical Sciences (John Wiley & Sons, 2008).



Figures (6)

Fig. 1 (a) Schematic of the optical setup. The object is illuminated alternately with one of the two spectrally complementary lights LA and LB. OL: objective lens, BS: beam splitter, F1 and F2: complementary multiband filters, L1 and L2: imaging lenses, S1 and S2: color sensors. (b) RGGB pixel pattern of the color sensors. (c) Normalized LED spectra of LA and (d) normalized LED spectra of LB. (e) Transmission t of multiple bandpass filter F1. (f) Transmission t of multiple bandpass filter F2. (g) Quantum efficiency s of the sensor S1 and (h) of the sensor S2. (i) Spectral and temporal multiplexing scheme. In phase A, LA illuminates the object, S1 (with F1) records the fluorescence image, and S2 (with F2) records the reflectance image. Phase B is the opposite: LB illuminates the object, S1 (with F1) records the complementary reflectance image and S2 (with F2) records the complementary fluorescence image.
Fig. 2 (a) Uncorrected color image produced by averaging the color channels of the two sensors (for visibility, the black and white tiles are scaled to match the ideal RGB average brightness). (b) Corrected image: best correction result using a linear transformation with an additional offset. (c) Comparison of color reproduction methods for all 30 tiles. For each tile, five processing scenarios are shown: (A) uncorrected colors (average of the two sensors, with the black and white tiles scaled as in (a)); (B) optimal correction using combined information from both sensors and transformation to XYZ color space; (C) best color correction using only the first sensor; (D) best color correction using only the second sensor; (E) ideal value as perceived by the human eye under D65 illumination. (d) Boxplot of the CIEDE2000 errors ΔE00 after optimal correction using the color information from only the first sensor, only the second sensor, and both sensors. The dotted line at 1 marks the absolute distinction limit; the dashed line at 3.5 indicates the limit below which color differences can hardly be distinguished.
Fig. 3 Simulation of system sensitivity. (a) Fluorochrome emission spectrum (Gaussian distribution). (b) Combination of filter spectral transmission and sensor sensitivity. (c) Sensor signal intensity for the six channels, calculated from the convolution of (a) and (b). S1 and S2 simulate monochrome sensors by combining channels R1, G1, B1 and R2, G2, B2, respectively.
Fig. 4 Fluorescence images of the test vials containing the fluorescent dyes Atto532, Atto565 and Atto610. (a) Image of the sum of all channels (one acquisition). Vial labels A – F match with Table 1. (b) Images of each channel for both sensors (only one of the two G pixels is shown; individual acquisition). (c) Excitation spectra (dotted lines) and emission spectra (solid lines) of Atto532 (green), Atto565 (yellow) and Atto610 (red). (d) Spectrally unmixed fluorescent dye components using channels from both sensors (individual acquisition). (e) r values to assess the unmixing quality; the region in the white box is magnified for better visibility. (f) Standard deviation of 100 sequentially acquired and unmixed fluorescence images (as in (d)). (g) Unmixing results using only the sensor channels R1, G1, G1* and B1 of a single image and (h) unmixing results using only the sensor channels R2, G2, G2* and B2 of a single image. The images in (g) and (h) also contain strong negative values, but the scale is clipped at 0 as the lower limit. (table) Maximum scaling value of the intensity images.
Fig. 5 Optimization of filters and fluorescent compounds. (a) Optimal selection of two fluorescent dyes: relative SNR (in dB) of each of two Gaussian dyes and combined |SNR|. (b) Optimal selection of 2–6 fluorescent dyes: visualization of the optimized center wavelengths for the given multiband filters F1 and F2. (c) Optimal design of filters: visualization of the optimized band structure for the given dyes Atto532, Atto565 and Atto610 and the given sensor sensitivity curves, with the number of filter bands as a parameter.
Fig. 6 Principal component analysis: (a) 2D histogram with the first two PCA component vectors as basis (thresholded at 1% of the maximum signal). (b) Segmented images of the fluorescent signals (regions A to F as marked in the PCA space of (a)), unmixed in the PCA space spanned by the first two component vectors. The pure dye signals spanning the unmixing triangle in (a) are set by projecting the pure dye signatures onto the first two PCA component vectors.

Tables (4)

Table 1 Peak absorption (λabs), peak fluorescence (λfl) and concentration of dyes in vials A to F.

Table 2 Upper part: angle in degrees between the different dye spectral signatures in the dual sensor system (S1 + S2) and the two single sensor systems (sensor S1 or sensor S2). Lower part: the percentage of photons η originating from each dye and detected by each system.

Table 3 Optimal choices of peak emission wavelengths of Nf dyes and the corresponding relative SNR values.

Table 4 Relative SNR in dB of the optimization results varying the alignment of the filter bands for the two sensors (number of bands Nb as a fixed parameter) for the algorithm that performed best (PS: PatternSearch, GS: GlobalSearch, GA: Genetic Algorithm). The reported filter bands characterize the optimal transmission bands (bands reported alternating between F1 and F2).

Equations (19)

\[ e_i = \sum_{c=1}^{6} K_{i,c} d_c + K_0. \]

\[ d^{\text{exp}} \xrightarrow{\;K_{i,c},\,K_0\;} e^{\text{exp}} \xrightarrow{\;\text{D65 illumination}\;} f^{\text{exp}} \]

\[ \Delta E_{00,\text{tile}} = \Delta E_{00}\!\left( f^{\text{exp}}_{\text{tile}},\, f^{\text{D65, human eye}}_{\text{tile}} \right) \]

\[ \operatorname*{arg\,min}_{K_{i,c},\,K_0} \left( \frac{1}{30} \sum_{\text{tile}=1}^{30} \Delta E_{00,\text{tile}}\!\left( d^{\text{exp}}_{\text{tile}} \xrightarrow{\;K_{i,c},\,K_0\;} f^{\text{exp}}_{\text{tile}};\; f^{\text{D65, human eye}}_{\text{tile}} \right) \right) \]
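The affine color-correction model (six measured channel values d_c mapped to a corrected color e_i by a 3×6 matrix K_{i,c} plus an offset K_0) can be fitted by ordinary least squares. The sketch below uses random stand-in data for the 30 tiles and a plain least-squares target rather than the paper's ΔE00 minimization, so it illustrates the model structure only, not the full optimization.

```python
import numpy as np

# Hypothetical data: 30 tiles, 6 measured sensor channels each,
# and stand-in target tristimulus values under D65.
rng = np.random.default_rng(0)
d = rng.random((30, 6))            # measured channels d_c per tile
e_ideal = rng.random((30, 3))      # target color values (stand-in)

# Affine model e_i = sum_c K[i,c] d_c + K0: append a constant 1
# so the offset K0 is fitted jointly with the 6x3 matrix K.
D = np.hstack([d, np.ones((30, 1))])            # (30, 7)
K_full, *_ = np.linalg.lstsq(D, e_ideal, rcond=None)
K, K0 = K_full[:6], K_full[6]                   # (6, 3) and (3,)

e_fit = d @ K + K0                              # corrected colors
print(np.allclose(e_fit, D @ K_full))           # True
```

A ΔE00-based fit as in the paper would replace the least-squares objective with the perceptual color difference and use a nonlinear optimizer, since ΔE00 is not quadratic in K.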
\[ d = M i, \]

\[ \tilde{i} = M^{+} d = U d. \]

\[ \Sigma_{\tilde{i}} = U \Sigma_d U^{\top}, \]

\[ \Sigma_{\tilde{i},\,j,j} = \sum_{c=1}^{N_c} U_{j,c}^2 \, \Sigma_{d,\,c,c} = \sum_{c=1}^{N_c} U_{j,c}^2 \, \sigma_{d_c}^2 \]

\[ \left( \operatorname{diag}(\Sigma_{\tilde{i}}) \right)^{1/2} \]

\[ \mathrm{SNR} = \left( \operatorname{diag}(\Sigma_{\tilde{i}}) \right)^{-1/2} \tilde{i}. \]

\[ \tilde{d} = M \tilde{i} = M U d. \]

\[ \Sigma_{\tilde{d}} = M \Sigma_{\tilde{i}} M^{\top} = M U \Sigma_d U^{\top} M^{\top}, \]

\[ \Sigma_r = \Sigma_d + M U \Sigma_d U^{\top} M^{\top} - M U \Sigma_d - \Sigma_d U^{\top} M^{\top}. \]

\[ \hat{r} = \left( \operatorname{diag}(\Sigma_r) \right)^{-1/2} r. \]

\[ \sigma_{d_c}^2 = \sigma_{d_c,\text{Poisson}}^2 + \sigma_0^2. \]

\[ \sigma_{\tilde{i}_f}^2 = \Sigma_{\tilde{i},\,f,f} \approx \sum_{c=1}^{N_c} U_{f,c}^2 \, d_c = \sum_{c=1}^{N_c} \sum_{\varphi=1}^{N_f} U_{f,c}^2 \, M_{c,\varphi} \, i_\varphi. \]

\[ |\mathrm{SNR}| = \left( \frac{1}{N_f} \sum_{f=1}^{N_f} \mathrm{SNR}_f^2 \right)^{1/2}. \]
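The unmixing and noise-propagation chain (pseudo-inverse unmixing ĩ = M⁺d, covariance Σ_ĩ = U Σ_d Uᵀ, per-component SNR) can be sketched in a few lines of NumPy. The sensing matrix, dye intensities and read-noise level below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Assumed 6-channel system (N_c = 6) with 3 dyes (N_f = 3).
# M holds the spectral signature of each dye in each channel.
rng = np.random.default_rng(1)
M = np.abs(rng.random((6, 3)))

i_true = np.array([100.0, 50.0, 20.0])   # true dye intensities
d = M @ i_true                            # noise-free channel signals

U = np.linalg.pinv(M)                     # M^+ : unmixing matrix
i_est = U @ d                             # i~ = M^+ d

# Per-channel variance: Poisson term (variance = signal) plus a
# constant read-noise floor sigma_0 (assumed here to be 2 counts).
var_d = d + 2.0 ** 2
Sigma_i = U @ np.diag(var_d) @ U.T        # Sigma_i~ = U Sigma_d U^T
snr = i_est / np.sqrt(np.diag(Sigma_i))   # SNR per unmixed component

print(np.allclose(i_est, i_true))         # pseudo-inverse recovers i
```

Because M has full column rank, the pseudo-inverse recovers the noise-free intensities exactly; with real, noisy data, i_est scatters around i_true with the covariance Σ_ĩ computed above.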
\[ q = [p_1, p_2]^{\top} n \]

\[ \begin{bmatrix} S_{A,1} & S_{B,1} & S_{C,1} \\ S_{A,2} & S_{B,2} & S_{C,2} \\ 1 & 1 & 1 \end{bmatrix} i = \begin{bmatrix} q_1 \\ q_2 \\ 1 \end{bmatrix} \left( \sum_c d_c \right). \]
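The last two equations project a pixel's channel signature n onto the first two PCA component vectors and solve a barycentric 3×3 system in which the pure-dye projections span the unmixing triangle. The sketch below uses made-up pure-dye signatures and simplifies to unit-sum mixing fractions, dropping the total-intensity scaling Σ_c d_c of the original formulation.

```python
import numpy as np

# Made-up pure-dye signatures (6 channels, 3 dyes A, B, C).
rng = np.random.default_rng(2)
S = np.abs(rng.random((6, 3)))              # columns: pure dyes

# First two principal components of a cloud of mixed-pixel signatures.
mix = np.abs(rng.random((3, 500)))
mix /= mix.sum(axis=0)                       # unit-sum mixtures
pixels = S @ mix
P = np.linalg.svd(pixels, full_matrices=False)[0][:, :2]   # p1, p2

S2 = P.T @ S                                 # triangle corners in 2D
w_true = np.array([0.2, 0.3, 0.5])           # assumed mixing fractions
q = P.T @ (S @ w_true)                       # q = [p1, p2]^T n

# [S2 ; 1 1 1] w = [q1, q2, 1]^T  ->  barycentric coordinates w
A = np.vstack([S2, np.ones(3)])
w = np.linalg.solve(A, np.array([q[0], q[1], 1.0]))
print(np.allclose(w, w_true))                # True
```

Pixels whose solved coordinates are all non-negative fall inside the triangle and can be segmented by their dominant component, as in the regions of Fig. 6.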
