Optica Publishing Group

Context-free hyperspectral image enhancement for wide-field optical biomarker visualization

Open Access

Abstract

Many well-known algorithms for the color enhancement of hyperspectral measurements in biomedical imaging are based on statistical assumptions whose outputs vary greatly with the proportions of the different pixel types that appear in a given image, which may thwart their application in a surgical environment. This article attempts to explain why this occurs with SVD-based enhancement methods, and proposes the separation of spectral enhancement from analysis. The resulting method, termed affinity-based color enhancement, or ACE for short, achieves multi- and hyperspectral image colorization and contrast enhancement based on spectral affinity metrics that can physically relate spectral data to a particular biomarker. This produces tunable, real-time results analogous to those of current state-of-the-art algorithms, without suffering any of their inherent context-dependent limitations. Two example applications of this method are shown: vein contrast enhancement and high-precision chromophore concentration estimation.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Color reconstruction of multi- and hyperspectral images (MSI/HSI) is, without a doubt, a mature field of research. As could be expected, given the vast amount of information contained in HSI data, translating this inherently physical content into color pictures has always been a fundamental objective. There are several ways to achieve this, but two are particularly frequent, namely (1) generating a false-color image from an a priori calculated magnitude and (2) using Color Matching Functions (CMFs) –as defined by the International Commission on Illumination (CIE)– on carefully modified spectra so that only some pixels with particular spectral properties turn to a different color. The origins of both techniques date back to the 1970s [1], and were typically employed in the analysis of multispectral remote sensing measurements [2–9]. In these methods, color reconstructions are obtained by integrating spectra with the CMFs, with varying degrees of success in improving contrast in such representations.

Highlighting abnormal spectral variations within an image has been of particular interest in the past decade in Biomedical Optics research, and thus much work has been dedicated to translating these ideas to biomedical images. The main focus of these methods is to make spectral anomalies significantly more visible in color reconstructions of MSI/HSI images, obtaining an intuitive, direct visual outlier and anomaly inspection and detection framework. Examples include changing RGB channels from entropy measures of spectral information to enhance contrast in gastrointestinal HSI images [10], or extracting sublingual veins from hyperspectral images [11], as well as vein identification via blending NIR and VisNIR data [12]. However, the most significant references are the results and findings of M. Mitsui et al. [13] and N. Hashimoto et al. [14], in which abnormal variations across different wavelengths can be enhanced or turned into a different color. This procedure –highlighting or enhancing multispectral image abnormalities on different wavelengths– provides stunning visual results that implicitly relate to the physical properties of the biological material being sampled. As a consequence, these methods have been tested in ex vivo samples of breast tissue tumorectomies [15], palm vein visualization [16], multispectral images of Hematoxylin-and-Eosin (H&E) stained histological sections [17,18], as well as in vivo fluorescence-guided surgery in animal models [19]. Other methods, less related to colorimetry or biomedical optics, use the same methodology as a core component in anomaly detection for HSI remote sensing images [20], detecting bruises in apples in the Vis-NIR range [21], or for background segmentation for human detection algorithms using HSI images of the ocean [22].

In these applications, spectra are arranged in a matrix as column vectors, and Principal Component Analysis (PCA) is used to find a small set of basis vectors that can explain most of their variance, under the assumption that high-dimensional spectra actually lie in a low-dimensional manifold that can be inferred linearly. This effectively serves as a lossy compression algorithm, in which any spectrum can be specified with a few coordinates. If there are spectral features that differentiate a particular spectrum from more typical signatures, such features are exaggerated in the spectrum, producing significant contrast once the image is converted to RGB.

Unfortunately, this procedure is deeply context-dependent, as PCA relies on a well-known dimensionality reduction tool from Linear Algebra, the Singular Value Decomposition (SVD) of a matrix. The SVD is uniquely determined by its input matrix and, therefore, in the context of multi- and hyperspectral imaging of malignant tissue in surgical applications, its exploitation can be problematic: since the SVD yields a different result when given different spectra in different proportions, the output singular vectors will not be the same across images, and the enhancement results cannot be systematically repeatable. Such notions are mentioned in previous work [9,23], but are not studied or analyzed further. Similar problems may affect other methods that use context information to obtain results, such as ICA [24]. There is, thus, a need for a colorimetric enhancement method that highlights only the desired information, in a controllable and constrained fashion.

This work attempts to understand the aforementioned issues and to improve upon current techniques. First, we describe the theoretical context and framework for hyperspectral enhancement as a spectral anomaly enhancement method, and provide proofs and examples that explain the fundamental issues observed in our experiments. Second, we propose a solution based on a much simpler color enhancement framework, supported by a set of affinity or similarity functions that provide the same results independently of image context. We then define tunable contrast, gain and threshold parameters for color enhancement with any particular affinity function, keeping in mind its use in a fast-paced surgical environment. Many state-of-the-art spectral optical quantification and classification algorithms could exploit the proposed method to enhance images in real time, so two applications (vein visualization and chromophore quantification) are presented. These novel solutions cannot be approached by the previously described enhancement methods, supporting the practical applicability of the proposed methodology in a clinical setting.

2. Materials and methods

2.1 Imaging equipment, preprocessing and reflectance calculations

The imaging system used for hyperspectral image capture is a custom-built hyperspectral scanning system with an embedded computer for storage and control [25]. A general description is given in Table 1. Its fundamental optical components are an ImSpector V10E spectrograph (Specim, Spectral Imaging Ltd., Finland) paired with a generic C-mount objective lens and a CMOS camera (Mako G-223B NIR, Allied Vision Technologies GmbH, Germany). Hot pixels due to long exposure times were corrected via dark image subtraction, as well as a spatial median filter of size 2. In order to obtain reflectance images, each measurement was calibrated with a white reference image at the same distance and lighting conditions. Reflectance $R(\lambda )$, therefore, was calculated with the well-known equation

$$R(\lambda) = \frac{I_{\textrm{s}}(\lambda)- I_{\textrm{bg}}(\lambda)}{I_{\textrm{w}}(\lambda)- I_{\textrm{bg}}(\lambda)},$$
where $I_{\textrm {s}}(\lambda )$ is the sample intensity measurement, $I_{\textrm {w}}(\lambda )$ is a white reference image under identical illumination conditions, and $I_{\textrm {bg}}(\lambda )$ is the background intensity due to hot pixels in the sensor. Throughout the manuscript we will only employ reflectance images, in which this calibration operation has been performed, compensating for light source variability. Wavelength calibration was achieved through linear least-squares on a set of 3 known absorption peaks from a Spectralon Wavelength Calibration Standard (WCS), namely WCS-MC-020 (Labsphere, Inc., New Hampshire, USA) [26].
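As a concrete sketch, Eq. (1) can be applied per pixel and per wavelength with numpy; the function name, array layout, and the division guard below are illustrative assumptions, not part of the original acquisition software.

```python
import numpy as np

def reflectance(sample, white, dark, eps=1e-9):
    """Per-pixel, per-wavelength reflectance, Eq. (1).

    sample, white, dark: arrays of shape (rows, cols, wavelengths) holding
    the raw sample, white-reference, and dark/background intensities
    captured under identical illumination conditions.
    """
    num = sample.astype(np.float64) - dark
    den = white.astype(np.float64) - dark
    # Guard against division by zero in dead or saturated pixels.
    return np.clip(num / np.maximum(den, eps), 0.0, None)

# Synthetic sanity check: a sample halfway between dark and white -> R = 0.5.
dark = np.full((2, 2, 5), 10.0)
white = np.full((2, 2, 5), 110.0)
sample = np.full((2, 2, 5), 60.0)
R = reflectance(sample, white, dark)
```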


Table 1. Specifications of the custom-built imaging instrument.

2.2 Color reconstruction

Spectra will be transformed into RGB images by using well-known methods in colorimetry. More specifically, we will use CIE 1931 Standard Observer Color Matching Functions (CMFs), which describe retinal perception of color as the sum of three stimuli. Given a sampled spectrum $r\in \mathbb {R}^l$, with $l$ being the number of sampled channels and/or wavelengths, the CIE 1931 tristimulus values $(X, Y, Z)$ can be approximated by Riemann sums as follows [27]:

$$X \approx k \left(r\odot s \right)\bar{x}^T\Delta\lambda, \hspace{1em} Y \approx k \left(r\odot s \right)\bar{y}^T\Delta\lambda, \hspace{1em} Z \approx k \left(r\odot s \right)\bar{z}^T\Delta\lambda,$$
where $r=(r_1, \dots , r_l)^T$ is the reflectance vector, $s=(s_1, \dots , s_l)^T$ is the power spectral density of a CIE standard light source, $\bar {x}, \bar {y}, \bar {z} \in \mathbb {R}^l$ are the CMFs, applied to the colorimetric stimulus of a secondary light source, i.e. the element-wise (Hadamard) product $\phi = r\odot s$, and $\Delta \lambda$ is the spectral resolution. Finally, $k$ is a normalization constant, which for secondary light sources is defined as
$$k = \frac{100}{\sum_{i=1}^{l}{s_i \bar{y}_i \Delta\lambda}}.$$
Conversion from XYZ to RGB space was achieved with skimage.color, a Python 3 library.
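The pipeline of Eqs. (2)–(3) can be sketched in a few lines of numpy. The Gaussian CMFs and flat illuminant below are synthetic stand-ins purely for illustration, and the XYZ-to-linear-RGB matrix is the standard sRGB/D65 transform, which is essentially what skimage.color.xyz2rgb applies (up to gamma correction).

```python
import numpy as np

def tristimulus(r, s, xbar, ybar, zbar, dlam):
    """Riemann-sum approximation of the tristimulus values, Eqs. (2)-(3)."""
    k = 100.0 / np.sum(s * ybar * dlam)   # normalization constant, Eq. (3)
    phi = r * s                           # colorimetric stimulus (element-wise)
    return k * dlam * np.array(
        [np.sum(phi * xbar), np.sum(phi * ybar), np.sum(phi * zbar)])

# XYZ (scaled by 100) -> linear sRGB: the standard sRGB/D65 matrix.
M = np.array([[ 3.2406, -1.5372, -0.4986],
              [-0.9689,  1.8758,  0.0415],
              [ 0.0557, -0.2040,  1.0570]])

def xyz_to_rgb(xyz):
    return np.clip(M @ (xyz / 100.0), 0.0, 1.0)  # gamma omitted for brevity

# Toy check with synthetic Gaussian CMFs and a flat illuminant: a perfect
# reflector (r = 1) must yield Y = 100 by construction of k.
lam = np.linspace(400.0, 700.0, 31)
dlam = lam[1] - lam[0]
s = np.ones_like(lam)
xbar = np.exp(-0.5 * ((lam - 600.0) / 40.0) ** 2)
ybar = np.exp(-0.5 * ((lam - 550.0) / 40.0) ** 2)
zbar = np.exp(-0.5 * ((lam - 450.0) / 40.0) ** 2)
xyz = tristimulus(np.ones_like(lam), s, xbar, ybar, zbar, dlam)
rgb = xyz_to_rgb(xyz)
```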

2.3 Existing enhancement methods

We now briefly describe the algorithms that exploit PCA/SVD; in particular, we address the methods in [13] and [14,28] for multispectral enhancement. Both algorithms treat spectra in the same fashion. For the following discussion, let us arrange our $n$ spectra as column vectors in a matrix $R\in \mathbb {R}^{l\times n}$, i.e. $R = (r_1,\dots , r_n)$, where $r_i$ corresponds to a single spectrum with $l$ channels (or sampled wavelengths) in total. For a single spectrum $r_i$, its enhanced spectrum is calculated via

$$r_{e, i} = W(r_i - s_i) + r_i,$$
where $W\in \mathbb {R}^{l\times l}$ is defined as the enhancement matrix, $r_i$ is the original pixel, and $s_i$ is an $m$-rank approximation of spectrum $r_i$, namely
$$s_i = \sum_{j=1}^{m}{\alpha_{i,j} u_j} + \bar{r},$$
and here $\{u_j\}$ are the left singular vectors of matrix $R$, $\alpha _{i,j}=\langle r_i-\bar {r}, u_j\rangle$ is the projection of $r_i-\bar {r}$ onto the $j$-th left singular vector, and $\bar {r}= \frac {1}{n} \sum _{i=1}^{n}{r_i}$ is the average spectrum. These methods can be rewritten by noting that every spectrum is in fact assumed to be the sum of four terms, namely
$$r_i = s_i + d_i + \bar{r} + n_i = \left(\sum_{j=1}^{m}{\alpha_{i,j} u_j}\right) + \left(\sum_{j=m+1}^{rank(R)}{\alpha_{i,j} u_j}\right) + \bar{r} + n_i,$$
where $s_i$ is the $m$-rank approximation of spectrum $r_i$, $d_i$ is the vector of differences among spectra that we (ideally) wish to highlight, and $n_i$ is additive Gaussian noise ($\bar {r}$ is still the average spectrum).

The methods differ, therefore, by choosing $W$ wisely, resulting in different enhancement images. Mitsui et al.’s method [13] has the objective of highlighting uncommon variations of spectra with respect to the average spectrum and to its $m$-rank approximation, and then exaggerating those differences selectively, depending on the wavelength we wish to enhance. In particular, $W$ is a diagonal matrix with all of its entries set to zero except for a single channel: $W=\textrm {diag}(w)$, with $w_i = 0, \forall i\neq q$, and $w_q = k\in \mathbb {R}^{+}$. The term $k$ usually receives the name of enhancement gain. Hashimoto et al.’s method [14], on the other hand, adds color to the spectrum as a function of this same residual variation, and therefore chooses to create $W$ as a set of column vectors, $W=(w_1,\dots ,w_l)$, where every column is a vector of zeroes, $w_i=0$, except for the $q$-th channel, the channel whose variation we wish to highlight: $w_q = k\cdot c$, where $c\in \mathbb {R}^l$ is a particular desired color, synthesized at the wavelengths described by the $l$ channels. Variable $k$ is still the enhancement gain, a positive real number.
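Both variants of Eqs. (4)–(6) can be sketched with numpy; the function signature, parameter defaults, and the toy data below are illustrative assumptions, not the authors' original implementation.

```python
import numpy as np

def svd_enhance(R, m, q, k=30.0, color=None):
    """Sketch of the SVD-based enhancement of Eqs. (4)-(6).

    R: (l, n) matrix with spectra as columns; m: rank of the approximation;
    q: channel whose residual drives the enhancement; k: enhancement gain.
    With color=None, W is diagonal (Mitsui-style); otherwise the (l,) vector
    color paints the residual of channel q (Hashimoto-style).
    """
    l, n = R.shape
    rbar = R.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(R - rbar, full_matrices=False)
    # m-rank approximation s_i of every spectrum, Eq. (5)
    S = U[:, :m] @ (U[:, :m].T @ (R - rbar)) + rbar
    W = np.zeros((l, l))
    if color is None:
        W[q, q] = k                       # diagonal enhancement matrix
    else:
        W[:, q] = k * np.asarray(color)   # color the q-th residual channel
    return W @ (R - S) + R                # Eq. (4)

# Toy rank-2 data: when m equals the rank of the centered data, the
# residual vanishes and the "enhanced" image equals the original one.
a, b = np.arange(1.0, 6.0), np.arange(5.0, 0.0, -1.0)
R = np.outer(a, np.linspace(0, 1, 10)) + np.outer(b, np.sin(np.linspace(0, 3, 10)))
```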

2.4 Singular value decomposition of a matrix

The Singular Value Decomposition of a matrix $A\in \mathbb {R}^{m\times n}$ is the factorization

$$A = U\Sigma V^T,$$
where the orthogonal column vectors of $U=(u_1,\dots , u_r)$ are known as the left singular vectors of $A$, $\Sigma =\textrm {diag}(\sigma _1,\dots , \sigma _r)$ is a diagonal matrix whose main diagonal values are called $A$’s singular values, and the columns of $V=(v_1,\dots , v_r)$ are its right singular vectors. Finally, $r=\textrm {rank}(A)$ is the rank of the matrix. The SVD is used in many classification problems as a means of dimensionality reduction of large vectors. This is due to the fact that the SVD can be understood as a lossy compression algorithm, where $\tilde {A}_k = U_k \Sigma _k V_k^T$, with $U_k = (u_1,\dots , u_k)$, $\Sigma _k = \textrm {diag}(\sigma _1, \dots , \sigma _k)$, $V_k = (v_1,\dots ,v_k)$, and $k\ll r$, is known as the $k$-rank approximation of matrix $A$, an approximation with a known theoretical error [29], namely $\left \Vert A - \tilde {A}_k\right \Vert _2 = \sigma _{k+1}$.
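This error bound (in the spectral norm) is easy to verify numerically; a minimal self-contained check:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 12))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 3
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # k-rank approximation of A

# Eckart-Young: the spectral-norm error equals the (k+1)-th singular value.
err = np.linalg.norm(A - A_k, ord=2)
```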

Another way of understanding the SVD of a matrix is by describing it as a quadratic optimization problem. The following definitions will be helpful in the coming sections:

Definition 1 (First singular value of a matrix as an optimization problem) The first left singular vector of matrix $A\in \mathbb {R}^{m\times n}$, $u_1$, is the solution of the following optimization problem:

$$\begin{aligned} \underset{u}{\textrm{maximize}} \hspace{2em} & \left\Vert u^T A\right\Vert_F, \\ \textrm{subject to} \hspace{2em} & u^T u= 1, \end{aligned}$$
and its corresponding singular value and right singular vector are defined by
$$\sigma_1(A) = \left\Vert u_1^T A\right\Vert,$$
$$v_1^T = \frac{1}{\sigma_1(A)} u_1^T A.$$

This first definition simply states that the first left singular vector is the unit-norm vector that is most 'similar' (in terms of dot product magnitude) to all the column vectors in the matrix. In statistical terms, this is equivalent to finding the vector that maximizes the explained variance with respect to all the columns in $A$ (or, alternatively, the leading eigenvector of the scatter matrix $A A^T$). In other words, $u_1$ is the vector that best explains the columns in $A$ with the smallest possible error. The error in this approximation is $\left \Vert A-\tilde {A}_1 \right \Vert _2 = \left \Vert A-u_1\sigma _1 v_1^T\right \Vert _2 = \sigma _2$, i.e. the next singular value.

Definition 2 (Singular values of a matrix as an optimization problem) The $i$-th left singular vector of matrix $A\in \mathbb {R}^{m\times n}$, $u_i, i=2,\dots ,rank(A)$, can be obtained recursively:

$$\begin{aligned} \underset{u}{\textrm{maximize}} \hspace{2em} & \left\Vert u^T \left(A-B_i\right)\right\Vert_F, \\ \textrm{subject to} \hspace{2em} & u^T u= 1, \end{aligned}$$
where
$$ B_i = \sum_{k=1}^{i-1}{u_k \sigma_k v_k^T}, $$
and its corresponding singular value and right singular vector are, respectively,
$$\begin{aligned} \sigma_i(A)& = \left\Vert u_i^T A\right\Vert, \\ v_i^T &= \frac{1}{\sigma_i(A)} u_i^T (A - B_i). \end{aligned}$$

The proofs of these two statements are given in [30]. Once described in this manner, the SVD of $A$ can be calculated. The solution, $A=U\Sigma V^T$, has a set of appealing properties, since the most frequent and relevant features appear in the first singular vectors. Unfortunately, these definitions are rather opaque which, combined with the recognizable features visible in the output of the numerical methods, may tempt one to assume that a specific singular vector is associated with a specific feature, or that for every $A$ the number of singular vectors needed to explain all the variance is always the same. Both of these assumptions are refuted by the following theorem:

Theorem 1 (Uniqueness of the SVD of a matrix) Every matrix $A\in \mathbb {C}^{m\times n}$ has a singular value decomposition. Its singular values $\{\sigma _j\}$ are uniquely determined and, if $A$ is square and the $\sigma _j$ are distinct, the left and right singular vectors $\{u_j\}$ and $\{v_j\}$ are uniquely determined up to complex signs.

The proof of this theorem, given in Refs. [29–31], is well beyond the scope of this article; the statement alone, however, is sufficient to reject methods that use the SVD on a case-by-case basis. Despite this fact, it can be insightful to construct some example problems and study their properties, in order to better understand the effects that the magnitude and frequency of appearance of the measured spectra produce on the output of the SVD.

2.5 Understanding why the SVD is context-dependent

The following exercise shows how PCA/SVD provides different results depending on how well-balanced the input dataset is. We will work with a matrix $A$ with specific properties in its columns, and will show what the SVD returns by obtaining its singular vectors and values by hand. Measured spectra will be represented by column vectors in a matrix. This example consists of a matrix with three different types of spectra repeated over the columns.

Proposition 1 Let $\alpha , \beta ,\gamma \in \mathbb {N}-\{0\}$, and let $a_1, a_2, a_3 \in \mathbb {R}^m$ be three pairwise linearly independent column vectors with $\langle a_i, a_j\rangle >0, \forall i\neq j$. Let $A\in \mathbb {R}^{m\times n}$, with $n = \alpha +\beta +\gamma$, be a matrix with the following structure:

$$ A = (\mathop {\underbrace{a_1,\dots, a_1}\limits_{\alpha}} \vert \mathop {\underbrace{a_2,\dots, a_2}\limits_{\beta}} \vert \mathop {\underbrace{a_3,\dots, a_3}\limits_{\gamma}} ). $$
The left singular vectors $u_1, u_2, u_3\in \mathbb {R}^m$ of matrix $A$ are:
$$u_1 = \frac{\alpha a_1 + \beta a_2 + \gamma a_3}{\left\Vert \alpha a_1 + \beta a_2 + \gamma a_3 \right\Vert}, \hspace{1em} u_2 = \frac{\alpha b_1 + \beta b_2 + \gamma b_3}{\left\Vert \alpha b_1 + \beta b_2 + \gamma b_3 \right\Vert}, \hspace{1em} u_3 = \frac{\alpha c_1 + \beta c_2 + \gamma c_3}{\left\Vert \alpha c_1 + \beta c_2 + \gamma c_3 \right\Vert},$$
with
$$\begin{aligned} b_j &= a_j - u_1 u_1^T a_j, & j=1, 2, 3.\\ c_j &= a_j - u_1 u_1^T a_j - u_1 u_2^T a_j, & j=1, 2, 3.\\ \end{aligned}$$
Proof. We only need to rewrite the optimization problem from Definition 1 by introducing our new description for $A$. By using Lagrange multipliers, the optimization problem becomes
$$\underset{u}{\textrm{maximize}} \hspace{2em} \mathcal{L}(u,\lambda) = \alpha (u^T a_1)^2 + \beta (u^T a_2)^2 + \gamma (u^T a_3)^2 -\lambda u^T u,$$
and its solution can be found by means of the gradient
$$\nabla_u \mathcal{L}(u,\lambda) = 2\alpha(u^T a_1) a_1 + 2\beta (u^T a_2) a_2 + 2\gamma (u^T a_3) a_3 - 2\lambda u = 0$$
or, equivalently, solving the eigenvalue problem
$$\left(\alpha \left(a_1 a_1^T\right) + \beta \left(a_2 a_2^T\right) + \gamma \left(a_3 a_3^T\right)\right)u = Q_1 u = \lambda u$$
which, given that $\textrm {rank}(Q_1)=1$ and that $Q_1 = (\alpha a_1 +\beta a_2 + \gamma a_3)(\alpha a_1 +\beta a_2 + \gamma a_3) ^T$, implies that the only possible solution is $(\alpha a_1 +\beta a_2 + \gamma a_3)$ with multiplicity $1$:
$$u_1 = \frac{\alpha a_1 + \beta a_2 + \gamma a_3 }{\left\Vert \alpha a_1 + \beta a_2 + \gamma a_3 \right\Vert }.$$
The subsequent singular vectors can be obtained by applying Definition 2 directly. For the second singular vector, the problem can be rewritten, noting that $u_1 \sigma _1 v_1^T = u_1 u_1^T A$ by virtue of Eq. (9), as
$$\begin{aligned} \underset{u}{\textrm{maximize}} \hspace{2em} & \left\Vert u^T (I-u_1 u_1^T) A\right\Vert^2_F, \\ \textrm{subject to} \hspace{2em} & u^T u= 1, \end{aligned}$$
which, given the properties of $A$, allows us to write
$$\underset{u}{\textrm{maximize}} \hspace{2em} \mathcal{L}(u,\lambda) = \alpha (u^T b_1)^2 + \beta (u^T b_2)^2 + \gamma (u^T b_3)^2 -\lambda u^T u,$$
via the following change of variable:
$$ b_1 = a_1 - u_1 u_1^T a_1, \hspace{1em} b_2 = a_2 - u_1 u_1^T a_2, \hspace{1em} b_3 = a_3 - u_1 u_1^T a_3. $$
The solution is, again, solving the eigenvalue problem
$$\left(\alpha \left(b_1 b_1^T\right) + \beta \left(b_2 b_2^T\right) + \gamma \left(b_3 b_3^T\right)\right)u = Q_2 u = \lambda u.$$
Therefore, the solution for the second left singular vector $u_2$ is analogous to the first one, namely
$$u_2 = \frac{\alpha b_1 + \beta b_2 + \gamma b_3 }{\left\Vert \alpha b_1 + \beta b_2 + \gamma b_3 \right\Vert }.$$
The rest of the singular vectors up to $u_r$ with $r=\textrm {rank}(A)$ can be obtained by induction and using the same change of variable. $\square$

In summary, when applying the SVD to a set of spectra belonging to three similar (yet different) categories, the first singular vector (the vector that explains most spectra) will be dominated by the most frequent spectrum. The second singular vector will be similar to the most frequent remaining column in $A$, after subtracting what was explained by the first singular vector. This goes on recursively until there is no information left to explain. Additionally, if vectors are proportional to one another, similar undesired dependencies on the column proportions appear.

A practical example of Proposition 1 is shown in Fig. 1. Each row represents a different selection of column vectors for $A$. The leftmost subfigure in each row shows the column vectors that compose a given matrix; the number of column vectors of each type in the matrix is described in the legend. The other three plots show the first, second, and third left singular vectors, respectively, of a matrix with such composition. The blue and green plots in each subfigure represent the results obtained numerically via numpy.linalg.svd(), i.e. $\pm u_i$, due to the fact that the numerical method outputs each basis vector with sign indeterminacy. The black dotted plot overlaid on either $+u_i$ or $-u_i$ is the theoretical solution given by Proposition 1. For this example, we have used $a = 0.5(1+\cos (2\pi (0.1)x))$, $b = 0.5\cos (2\pi (0.35)x)$, and $c=\frac {1}{\sigma \sqrt {2\pi }} e^{-\frac {1}{2}\left (\frac {x-\mu }{\sigma }\right )^2}$ with $\mu =-4$ and $\sigma =4$ as a small Gaussian peak anomaly in the 'spectra'. For the first hypothesis (first row of the figure), $\alpha =\beta =\gamma =50$ will be used, with $a_1 = a$, $a_2 = a+b$, and $a_3 = a + b + c$. Since none of the three vectors can be expressed as a combination of the other two, $\textrm {rank}(A)=3$. As expected, $u_1$ is most similar to $a$, $u_2$ is most similar to $b$, and $u_3$ is most similar to $c$. The second row shows a different hypothesis, where $a_1=a_2=a$ and $a_3 =a + c$. Here, $\textrm {rank}(A)=2$, and $u_1$ is mostly similar (dot product-wise) to $a$, while $u_2$ tends to look like $c$. The third left singular vector, both in the theoretical and numerical cases, merely represents quantization noise due to double floating point precision, not representing anything significant (as expected).


Fig. 1. Testing the proposition empirically with some sample spectra. Top row: case with $\alpha =\beta =\gamma =50$, $a_1 = a$, $a_2 = a+b$ and $a_3 = a+b+c$; from left to right: example ’spectra’ used to generate the matrix and number of column vectors per class, and first, second and third singular vectors, respectively, obtained numerically (in color) and theoretically (in black dotted line). Bottom row: case with $\alpha =\beta =\gamma =50$, $a_1=a_2=a$, $a_3=a+c$. Curves $a, b$, and $c$ are defined in Section 2.5.

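The context dependence studied in this section can also be checked directly with numpy.linalg.svd; the two curves below are simplified stand-ins for the 'spectra' of the example, and the helper names are illustrative.

```python
import numpy as np

x = np.arange(281, dtype=float)
a = 0.5 * (1 + np.cos(2 * np.pi * 0.01 * x))     # common baseline 'spectrum'
c = np.exp(-0.5 * ((x - 200.0) / 10.0) ** 2)     # small Gaussian anomaly

def first_left_singular_vector(cols):
    # Spectra as columns; U[:, 0] is the direction that explains most energy.
    U, _, _ = np.linalg.svd(np.stack(cols, axis=1), full_matrices=False)
    return U[:, 0]

def align(u, v):
    # |cosine| between unit vector u and v (the SVD returns u up to sign).
    return abs(np.dot(u, v)) / np.linalg.norm(v)

# Same two signatures, different proportions in the "image":
u_mostly_clean = first_left_singular_vector([a] * 90 + [a + c] * 10)
u_mostly_anom = first_left_singular_vector([a] * 10 + [a + c] * 90)
```

The leading singular vector follows whichever signature dominates the column population, so the very same anomaly `c` is absorbed differently depending on the composition of the matrix.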

In the context of image enhancement, we must assume some value of $m$ in Eq. (6). The information captured by the first $m$ singular vectors will be discarded, and the variations among spectra beyond those vectors will be amplified. The features captured by the first $m$ singular vectors will be representative of the most frequent spectra, and will therefore vary across hyperspectral images. The differences we wish to enhance, then, may be obscured partially or completely, depending on the image we are working with:

  • If $m$ is exactly right, then $s_i \approx r_i - (d_i + n_i)$, and we will be amplifying exactly the differences we wish to observe: $r_{e,i} = W(r_i - s_i) + r_i \approx W(d_i + n_i) + r_i \approx W d_i + r_i.$
  • If $m$ is lower than necessary, then $\lim_{m\rightarrow 1}(s_i) = r_i - (r_i' + d_i + n_i)$, where $r_i'$ is information that is common to a large number of spectra, thus obtaining a suboptimal enhancement, that is $r_{e,i} = W(r_i - s_i) + r_i \approx W(r_i' + d_i + n_i) + r_i \approx W(r_i' + d_i) + r_i,$ where $r_i'$ is shared by large numbers of spectra, and thus contrast is lost.
  • Finally, if $m$ is too large, then $\lim _{m\rightarrow n^{-}} (s_i) = r_i - n_i$ and we will only be amplifying noise: $r_{e,i} = W(r_i - s_i) + r_i \rightarrow W(r_i - r_i + n_i) + r_i = W n_i + r_i.$

2.6 Affinity-based color enhancement (ACE)

If we rearrange a hyperspectral image as a matrix $A$ whose columns correspond to hyperspectral pixels, the result obtained through the SVD is certainly dependent on the types of spectra present in the image. When working in a surgical environment, with artifacts and objects constantly appearing and disappearing from the field of view, any enhancement method that exploits the SVD of the data risks either compromising image contrast or producing unwanted enhancement results, depending on the types of spectra present in the image.

To overcome this problem, we must modify the response of the CIE CMFs in a consistent and systematic way. The following method can produce the same results shown in state-of-the-art work by separating diagnosis-related calculations from color enhancement. The only restriction is that the analytical calculation must highlight the same type of spectral responses, regardless of what is present in the image.

The proposed novel framework is summarized in Fig. 2. From now on, we will refer to references as vectors that resemble a specific spectral signature of interest, $w\in \mathbb {R}^{l}$. From a practical standpoint, for example, we could study oxy- and deoxyhaemoglobin, fat, or water extinction coefficients, sampled at $l$ different wavelengths. Secondly, we will use the name spectral affinity or spectral similarity for any mapping that compares two vectors of the same type ($a: \mathbb {R}^l \times \mathbb {R}^l \rightarrow \mathbb {R}$), or any mapping that provides some type of measure as a function of the type of spectrum and some internal parameters ($a: (\mathbb {R}^l ,\theta ) \rightarrow \mathbb {R}$). When an affinity function outputs high values when comparing two spectra, we will consider those spectra as similar.


Fig. 2. Proposal for the separation of affinity from color enhancement.


A method that exploits this separation between affinity functions and spectral enhancement is, in effect, a type of affinity-based color enhancement (ACE) algorithm. An ACE method combines a case-specific affinity metric with a fully customizable color enhancement method. This adds flexibility as well as reliability to the procedure, since a multitude of affinity metrics that are not context dependent can now be used. For the following examples, we will consider that there exists an affinity function that generates an affinity value $a_i = a(r_i)$ for every spectrum, thus providing an affinity map $\{a_i\}$. For this manuscript, we propose the following spectral enhancement function:

$$r_{e,i} = r_i \left( 1+ K\cdot c_e \cdot \tanh\left(k\left(\hat{a}_i + b\right)\right)\right),$$
where $r_{e,i}$ is the enhanced output, $r_i$ is the original spectrum, $K$ is the enhancement gain of the method, $c_e$ is the desired output color of the enhancement, $\tanh (.)$ is the hyperbolic tangent function, $k$ is a contrast gain, $b$ is a threshold value, and $\hat {a}_i$ is the affinity measure value of the spectrum when all the spectra are normalized to the range $[0, 1]$, namely $\hat {a}_i = \frac {a_i -a_{\textrm {min}}}{a_{\textrm {max}}-a_{\textrm {min}}}.$ The values $a_{\textrm {min}}$ and $a_{\textrm {max}}$ should be known beforehand, by selecting an appropriate affinity metric.
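The enhancement function above maps directly to a few lines of numpy. This is a sketch: the function name, the parameter defaults, and the toy spectra are illustrative; $a_{\textrm{min}}$ and $a_{\textrm{max}}$ are passed in as known bounds, as the text recommends.

```python
import numpy as np

def ace_enhance(R, a, c_e, K=1.0, k=10.0, b=-0.5, a_min=0.0, a_max=1.0):
    """Sketch of the ACE spectral enhancement function.

    R:   (l, n) spectra as columns      a: (n,) affinity map
    c_e: (l,) enhancement color synthesized over the l channels
    K, k, b: enhancement gain, contrast gain, and threshold
    a_min, a_max: known bounds of the chosen affinity metric
    """
    a_hat = (a - a_min) / (a_max - a_min)    # affinity normalized to [0, 1]
    gate = np.tanh(k * (a_hat + b))          # smooth per-spectrum switch
    return R * (1.0 + K * np.outer(c_e, gate))

# Toy example: two flat spectra, one with affinity 0 and one with affinity 1;
# only the channel selected by c_e is darkened or brightened.
R = np.ones((3, 2))
out = ace_enhance(R, np.array([0.0, 1.0]), c_e=np.array([1.0, 0.0, 0.0]))
```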

2.7 Affinity metrics

The final enhancement operation should be performed with an appropriate affinity map containing the diagnostic and/or quantification information we wish to highlight. The following methods provide a bounded scalar result obtained from different types of input vector data. These methods are well known and already used in various specific classification tasks, and could be used to provide a useful affinity map for the ACE method.

Spectral Angle Mapper (SAM). Given a reference spectrum $w\in \mathbb {R}^l$, the affinity of any spectrum $r_i$ with $w$ can be calculated by the normalized dot product $a_i = \frac{\langle w, r_i\rangle}{\left\Vert w\right\Vert \left\Vert r_i\right\Vert}$. Since this is the cosine of the angle between the two vectors, the function is bounded ($a_i \in [0, 1]$ for nonnegative spectra) [32].
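A vectorized sketch (the explicit normalization is what the cosine interpretation requires; the function name is illustrative):

```python
import numpy as np

def sam_affinity(R, w):
    """Cosine (SAM) similarity of each column of R against a reference w.

    For nonnegative reflectance spectra the result lies in [0, 1].
    """
    Rn = R / np.linalg.norm(R, axis=0, keepdims=True)
    return (w / np.linalg.norm(w)) @ Rn

# A column proportional to w has affinity 1 regardless of its magnitude,
# while a column orthogonal to w has affinity 0.
w = np.array([1.0, 2.0, 3.0])
R = np.stack([2.0 * w, np.array([3.0, 0.0, -1.0])], axis=1)
aff = sam_affinity(R, w)
```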

Nonparametric kernel density estimation (KDE). Kernel density estimation attempts to infer the probability density function (PDF) of an arbitrary multivariate distribution from a series of sample vectors $w_1, \dots , w_N \in \mathbb {R}^l$, which act as a reference [33].

Neural networks. This will be our preferred method, since it is more flexible than the other procedures. A neural network with a sigmoid output layer is well suited to extracting a smooth probability map. Instead of performing binary classification, the response of each output neuron can be used to encode color at various wavelengths and intensities.

  • For the vein enhancement example, a $6\times 100$ ELU feedforward network is prepared, with a $281\times 1$ input layer and a sigmoidal $3\times 1$ output layer. The three classes to classify are Background, Skin and Veins. A total of 600 spectra were manually picked, using an 850-nm reflectance image as reference, and the network is tested on the rest of the data and on other images.
  • In the case of chromophore concentration estimation, another $6\times 100$ ELU feedforward network is trained. This network also has a $281\times 1$ input layer, but an $8\times 1$ sigmoid output layer. The concentration $c_i$ for a given spectrum is easily interpolated from the output layer of the network via a simple expression, namely
    $$c_i = \sum_{j=0}^{7}{\hat{y}_j \cdot \left(\frac{j}{10}\right)},$$
    with $\hat {y}_0, \dots , \hat {y}_7$ being each of the output neurons, which will be trained to fire to detect red ink concentrations of $0\%, \dots , 70\%$, respectively. If we seek a specific objective concentration $c_{\textrm {obj}}$, then an affinity metric for a concentration estimate can be
    $$a_{i, \textrm{concentration}} = \sigma\left( -\log_{10}{\left(h^2\left(c_i - c_{\textrm{obj}}\right)^2\right)}\right),$$
    where $\sigma (.)$ is the sigmoid function $\sigma (x) = 1/(1+e^{-x})$, and $h \in \mathbb {R}^{+}$ serves as an affinity bandwidth, narrowing the allowed distances that can be considered proximal to the objective concentration $c_{\textrm {obj}}$ (the higher $h$ is, the closer the concentration must be to $c_{\textrm {obj}}$ to be considered similar). This function is bounded ($a_i \in [0, 1]$) and thus will be useful for concentration detection.
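The two expressions above can be sketched in a few lines of numpy; the toy values below (a one-hot firing of neuron 3 and a target concentration of 30%) are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def concentration(y_hat):
    """Interpolate the concentration estimate from the 8 output neurons."""
    return float(np.sum(y_hat * (np.arange(8) / 10.0)))

def concentration_affinity(c, c_obj, h=10.0):
    """Bounded affinity of an estimate c to the target c_obj, bandwidth h."""
    return sigmoid(-np.log10(h ** 2 * (c - c_obj) ** 2))

# A one-hot firing of neuron 3 decodes to a 30% concentration; the affinity
# falls off as the estimate moves away from the target concentration.
c_est = concentration(np.eye(8)[3])
near = concentration_affinity(0.31, 0.3)
far = concentration_affinity(0.70, 0.3)
```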

3. Results and discussion

The following experiments were carried out to show the sensitivity of the SVD to changes in the input image and its effect on typical color enhancement, as well as to showcase the enhancement capabilities of the ACE method. Once the context-dependent variability of PCA/SVD-based analysis has been demonstrated, we provide example uses of ACE that cannot be implemented by merely enhancing the differences that remain past the first singular vectors.

3.1 Susceptibility of current methods to changes in the environment

To compare the performance of current methodology against ACE, we have chosen to replicate the vein-enhancement results that exploited PCA [14,16,28], and then force the method to fail by changing the background. Given the considerations in Section 2.5, we can hypothesize that, by changing the background color of an HSI image of a hand, spectral enhancement will be degraded when the background is most similar to the diffuse spectral response of veins, and amplified when it is as different from the vein diffuse spectra as possible.

Such susceptibility is exactly what was found in the experiments. Figure 3 shows the behavior of current enhancement methods for vein visualization and enhancement as a function of $m$, i.e. the number of left singular vectors that are ignored in the enhancement. This result was obtained with $K=30$, enhancing the 600–620 nm wavelength range. As expected, the value of $m$ that enhances the veins varies with the background color. Given a black background (first row), the first enhanced features are the shadows within the fingers; then, veins are enhanced as blue. Deep veins appear blue due to the strong scattering properties of skin, and thus this feature is enhanced, as it is the least frequent of the four present (skin, background, shadows, and veins). With a blue background, blue spectra are no longer rare and are explained by the first left singular vectors, hindering vein contrast as $m$ increases; furthermore, areas that typically present more oxygenated hemoglobin content, such as the fingertips, are enhanced instead. Finally, given a red background, bluish spectra are the least frequent type of signature, so anything with blue-like spectral properties is enhanced, again losing vein contrast.
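The proportion dependence described above can be reproduced with a toy numerical experiment: two orthogonal three-band "signatures" standing in for skin and vein spectra are mixed in different pixel proportions, and the first left singular vector flips accordingly. This is a deliberately simplified sketch, not the actual imaging data:

```python
import numpy as np

def first_left_singular_vector(A):
    # The direction explaining the most energy in A; PCA/SVD-based
    # enhancement discards the first m such vectors as "common" content.
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, 0]

skin = np.array([1.0, 0.0, 0.0])  # toy 3-band signatures
vein = np.array([0.0, 1.0, 0.0])

# Same two signatures, different pixel proportions.
mostly_skin = np.column_stack([skin] * 90 + [vein] * 10)
mostly_vein = np.column_stack([skin] * 10 + [vein] * 90)

u_a = first_left_singular_vector(mostly_skin)  # aligns with skin
u_b = first_left_singular_vector(mostly_vein)  # aligns with vein
```

Changing only the proportions (not the signatures) swaps which direction is treated as "common", which is exactly why the value of $m$ that isolates the veins depends on the background.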


Fig. 3. Hyperspectral enhancement methods that use PCA/SVD can be affected by changes in background spectra. Each row represents an increase in $m$, the number of left singular vectors ignored in the enhancement, for a hyperspectral image of a hand with three different colored backgrounds. In the case of a blue background, deep veins (which have a blue scattering signature) cannot be enhanced adequately.


While this sort of behavior is not fundamentally flawed (it is the expected behavior of the algorithm), for in vivo imaging it implies that contrast will fluctuate as different sorts of spectra are presented to the camera in varying proportions. For example, in the case of margin delineation, tumors may be accentuated if surrounded by adipose tissue, or darkened if surrounded by normal or connective tissue (adipose and normal tissue can present lower and higher scattering coefficients and average reflectance than malignant tissue, respectively) [15,34]. Such inconsistencies would be problematic in a clinical setting; they motivate the sought-after improvement and illustrate why these techniques cannot be exploited for specific applications unless the dependency on the SVD is avoided.

3.2 Vein visualization and enhancement

Having observed how traditional methods are susceptible to changes unrelated to the task at hand, we repeat the experiment with the proposed methodology. Figure 4 shows how ACE can be applied to vein visualization, demonstrating its resilience against changes in context, background, and illumination. This example is shown for illustrative purposes; note that, to avoid overfitting, dropout with probability $p=0.1$ was also introduced during training. Figure 4(a) shows the points from the first image used to train the model. The affinity function that produces each of the presented maps is $\hat {y}_2-\hat {y}_1$, that is, the output unit for vein spectra minus the output unit for skin, evaluated for each spectrum presented to the network. This constrains the affinity map to $[-1, 1]$, which is then normalized to $[0, 1]$.
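A minimal sketch of this affinity map, assuming the network returns the three sigmoid responses (background, skin, vein) per pixel; the fixed affine rescale from $[-1,1]$ to $[0,1]$ is one plausible reading of the normalization step:

```python
import numpy as np

def vein_affinity(y_hat):
    """Map network outputs to a vein affinity in [0, 1].

    y_hat[..., :] holds the sigmoid responses for
    (background, skin, vein); the raw affinity y_vein - y_skin
    lies in [-1, 1] and is rescaled to [0, 1] before enhancement.
    """
    raw = y_hat[..., 2] - y_hat[..., 1]  # vein minus skin response
    return (raw + 1.0) / 2.0
```

Applied to an $H \times W \times 3$ array of per-pixel responses, this would yield an affinity map analogous to Fig. 4(c).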


Fig. 4. An example of context-resilient vein enhancement using ACE. Using the 850 nm response (a) of the hand as reference, a series of background, skin, and vein spectra are collected (b). A network trained with these samples provides an affinity map (c) that can be used to enhance the original RGB reconstruction (d) and provide a vein map via ACE (e–g).


The network never sees the other two images (blue and red backgrounds) during training, yet it highlights the same veins that are visible at 850 nm in the image with the black background. Although the background is slightly modified by the enhancement process, the technique shows no variability in the skin or vein regions. For the cases shown in Figs. 4(e)–(g), the parameter values were $K=-5.0$, $k=0.75$, and $b=-0.75$. Setting $K=-5$ reduces the brightness of the veins, $k=0.75$ avoids a very abrupt change in color, and $b=-0.75$ means that approximately the upper quartile of the affinity map is enhanced. These parameters were tuned manually to obtain the desired result. The selected enhancement color is a positive constant (equal to one) for the first 100 wavelengths, producing a general darkening of the vein regions that replicates the near-infrared images while still retaining color information.

To test the flexibility of the presented method, various values of $K$, $k$, and $b$, as well as of the enhancement wavelength, are also shown (Fig. 5). Being able to change not only the gain $K$, but also the contrast gain $k$, the thresholding value $b$, and the output enhancement wavelength, provides a simple 4-parameter method that can be adjusted in real time during surgery. Compared to other similar methods [11,12,16], our technique allows vein visualization regardless of the background data, with no motion artifacts from capturing Vis and NIR images separately, and it can be deployed on any HSI system, whatever its configuration. Furthermore, the neural network that performs the vein detection could be trained further to be resilient to noise, absorption- and scattering-based artifacts, or skin pigmentation, without requiring any changes to the algorithm or the imaging system.
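These four parameters enter through the enhancement rule of Section 2, $r_{e,i} = r_i\left(1 + K\,c_e \tanh\!\left(k(\hat{a}_i + b)\right)\right)$, which can be sketched per pixel as follows; the array shapes and default values (taken from the vein example) are illustrative assumptions:

```python
import numpy as np

def ace_enhance(r, a, c_e, K=-5.0, k=0.75, b=-0.75):
    """Affinity-based color enhancement of one spectrum.

    r   : (L,) measured reflectance spectrum
    a   : scalar affinity in [0, 1] for this pixel
    c_e : (L,) enhancement color (per-wavelength weights)
    K, k, b : enhancement gain, contrast, and threshold
    """
    return r * (1.0 + K * c_e * np.tanh(k * (a + b)))
```

With $K<0$ and $c_e$ set to one over the first 100 channels, high-affinity (vein) pixels are darkened in those channels, while channels with $c_e=0$ pass through unchanged, matching the behavior described for Fig. 4.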


Fig. 5. Testing the four ACE parameters: enhancement gain $K$ (first row), enhancement contrast $k$ (second row), enhancement threshold $b$ (third row) and enhancement wavelength color $c_e$ (bottom row). Affinity-based enhancement provides a framework that can replicate the results of state-of-the-art methods, but without depending on the SVD.


3.3 Chromophore concentration estimation

Now that we have a method that is resilient to changes in the background, we can test out other applications that cannot be solved with unsupervised color enhancement. This example shows an application of ACE in the area of optical property and chromophore concentration estimation. The experiment shown in Figs. 6 and 7 consists of a series of liquid phantoms prepared with mixtures of 10% Intralipid and red ink. More specifically, 0%, 10%, 20%, 30%, 40%, 50%, 60%, and 70% ink-to-Intralipid concentration ratios were obtained. Two phantoms per concentration ratio were prepared for training the network (lowest two rows of phantoms), and a validation set of 8 phantoms was obtained as well (top row, one phantom per concentration).


Fig. 6. Generating an affinity metric for precise chromophore quantification. A series of liquid phantoms with known concentrations are prepared (a); the dataset is divided into training (marked with a T) and validation (marked with a V) sets (b). The network outputs a complete concentration estimation map (c), which can be quantitatively evaluated (d). Once the concentration estimates are deemed accurate enough, the metrics devised in Section 2 allow for full control of the distance to a specific concentration (in this case, 40%) by varying the affinity bandwidth (e). This metric can now be used in ACE for any specific chromophore concentration.



Fig. 7. Hyperspectral enhancement for materials presenting a 40% red ink concentration. Affinity bandwidth is set to $h=30$. Left and center columns show the result of varying enhancement gain and enhancement contrast, respectively. The right column provides a selection of enhancement colors. For the two left columns, the baseline was $K=5.0$, $k=0.2$, $b=-0.4$, and $c_e=1$ for the first 50 of the 218 channels (380 to 450 nm), and zero for all the other channels. For the right column, $K=20.0$, and 30 channels (90 nm) are set to 1 in $c_e$; the center enhancement wavelength is specified in each image.


Subplots in Figs. 6(a) and (b) show a color reconstruction of the phantoms and the manually selected circular regions of interest (ROIs), respectively. Figure 6(c) presents the concentration estimates given by the trained neural network, and Fig. 6(d) shows the validation of the estimator. Finally, Fig. 6(e) shows how affinity contrast can be controlled via Eq. (20): the tolerance for what is considered a reflectance similar to the 40% concentration is tuned by varying the affinity bandwidth $h$. The higher $h$ is, the more stringent the affinity function becomes.

Once a suitable affinity function is found, hyperspectral enhancement can take place. Figure 7 shows the result of enhancing the 40% concentration estimates given by the affinity function. The first column shows a variation in enhancement gain $K$, and the second presents the response of the CMFs to the enhancement contrast $k$. The third column shows how the output enhancement color can be selected at any specific value of the visible electromagnetic spectrum. If more than one fluorophore is utilized in guided surgery, for example, this tool may allow for the selection of the desired enhancement wavelengths and output colors, perhaps improving the contrast already provided by the fluorescent substances.

Diagnostic-based contrast control shows potential applicability with other optical methods, such as fluorescence-guided surgery, SFDI/SSOP or qF-SSOP, as long as the methods can provide an affinity metric that is specific to the problem at hand (e.g., tissue oxygenation, burn severity, vasculature functionality, optical properties, fluorophore concentration, and fluorophore bleaching, among others) [35,36]. This type of control can be adjusted as shown, depending on the type of surgery being performed. Ideally, with an adequate optical property estimation system, the color reconstruction could be enhanced in real time by ACE, depending on the exact information that we wished to observe.

4. Conclusion

In this work, the main concerns regarding the sensitivity of current hyperspectral enhancement methods to contextual changes have been addressed, and a solution inspired by their original formulation has been proposed. With a series of adequate affinity functions, the current state-of-the-art methods can be improved in terms of consistency and repeatability.

The method, herein termed affinity-based color enhancement, or ACE for short, separates the similarity metric from the enhancement process. The results shown in this manuscript apply the method to two particular examples, namely chromophore quantification or detection and vein enhancement, but the possible uses of ACE are bounded only by the affinity metrics that can be found for a specific problem, and by whether or not a color reconstruction is desired.

Separating analysis from representation achieves three different objectives. First, the affinity function can be chosen from a series of possible candidates, and the one best suited to the specific task can be used with no loss of applicability. Second, it enables the use of hardware-accelerated software, offering promising potential for real-time applications of hyperspectral enhancement techniques. Third, and finally, it provides a framework that may allow clinicians and surgeons to control how visualization is carried out, and perhaps help them improve their performance in the operating room.

Funding

Spanish Ministry of Science, Innovation and Universities (FIS2010-19860, TEC2016-76021-C2-2-R); Spanish Ministry of Economy, Industry and Competitiveness and Instituto de Salud Carlos III (DTS15-00238, DTS17-00055); Instituto de Investigación Valdecilla (IDIVAL) (INNVAL16/02, INNVAL18/23); Spanish Ministry of Education, Culture, and Sports (FPU16/05705).

Acknowledgments

The authors would like to thank Dr. Luis M. Pardo (Universidad de Cantabria) for reviewing the notation and formal aspects of the mathematics in this work.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. D. Y. Tzeng and R. S. Berns, “A review of principal component analysis and its applications to color technology,” Color Res. Appl. 30(2), 84–98 (2005). [CrossRef]  

2. A. R. Gillespie, A. B. Kahle, and R. E. Walker, “Color enhancement of highly correlated images. Decorrelation and HSI contrast stretches,” Remote. Sens. Environ. 20(3), 209–235 (1986). [CrossRef]  

3. H. Cetin, T. A. Warner, and D. W. Levandowski, “Data classification, visualization, and enhancement using n-dimensional probability density functions (nPDF) - AVIRIS, TIMS, TM, and geophysical applications,” Photogramm. Eng. Remote Sens. 59, 1755–1764 (1993).

4. H. Zumsprekel and T. Prinz, “Computer-enhanced multispectral remote sensing data: a useful tool for the geological mapping of Archean terrains in (semi)arid environments,” Comput. Geosci. 26(1), 87–100 (2000). [CrossRef]  

5. J. S. Tyo, A. Konsolakis, D. I. Diersen, and R. C. Olsen, “Principal-components-based display strategy for spectral imagery,” IEEE Transactions on Geosci. Remote. Sens. 41(3), 708–718 (2003). [CrossRef]  

6. X. Wang, S. Kumar, F. Ramos, T. Kaupp, B. Upcroft, and H. Durrant-Whyte, “Probabilistic classification of hyperspectral images by learning nonlinear dimensionality reduction mapping,” (2006).

7. Y. Zhu, P. Varshney, and H. Chen, “Evaluation of ICA based fusion of hyperspectral images for color display,” (2007).

8. B. Zhang and X. Yu, “Hyperspectral image visualization using t-distributed stochastic neighbor embedding,” (2015).

9. Z. Mahmood and P. Scheunders, “Enhanced visualization of hyperspectral images,” IEEE Geosci. Remote. Sens. Lett. 8(5), 869–873 (2011). [CrossRef]  

10. X. Gu, Z. Han, L. Yao, Y. Zhong, Q. Shi, Y. Fu, C. Liu, X. Wang, and T. Xie, “Image enhancement based on in vivo hyperspectral gastroscopic images: A case study,” J. Biomed. Opt. 21(10), 101412 (2016). [CrossRef]  

11. Q. Li, Y. Wang, H. Liu, Y. Guan, and L. Xu, “Sublingual vein extraction algorithm based on hyperspectral tongue imaging technology,” Comput. Med. Imaging Graph. 35(3), 179–185 (2011). [CrossRef]  

12. F. P. Wieringa, F. Mastik, F. J. ten Cate, H. A. M. Neumann, and A. F. W. van der Steen, “Remote non-invasive stereoscopic imaging of blood vessels: first in-vivo results of a new multispectral contrast enhancement technology,” Ann. Biomed. Eng. 34(12), 1870–1878 (2006). [CrossRef]  

13. M. Mitsui, Y. Murakami, T. Obi, M. Yamaguchi, and N. Ohyama, “Color enhancement in multispectral image using the Karhunen-Loeve Transform,” Opt. Rev. 12(2), 69–75 (2005). [CrossRef]  

14. N. Hashimoto, Y. Murakami, P. A. Bautista, M. Yamaguchi, T. Obi, N. Ohyama, K. Uto, and Y. Kosugi, “Multispectral image enhancement for effective visualization,” Opt. Express 19(10), 9315–9329 (2011). [CrossRef]  

15. G. Fernández-Barreras, E. Real, A. M. Laughney, V. Krishnaswamy, K. D. Paulsen, J. M. López-Higuera, B. W. Pogue, and O. M. Conde, “Multispectral reflectance enhancement for breast cancer visualization in the operating room,” Proc. SPIE 9311, 931106 (2015). [CrossRef]  

16. M. Peng, C. Wang, T. Chen, and G. Liu, “A methodology for palm vein image enhancement and visualization,” 2016 IEEE Int. Conf. Online Analysis Comput. Sci. (ICOACS) (2016).

17. P. A. Bautista and Y. Yagi, “Multispectral enhancement method to increase the visual differences of tissue structures in stained histopathology images,” Anal. Cell. Pathol. 35(5-6), 407–420 (2012). [CrossRef]  

18. P. A. Bautista and Y. Yukako, “Digital simulation of staining in histopathology multispectral images: enhancement and linear transformation of spectral transmittance,” J. Biomed. Opt. 17(5), 056013 (2012). [CrossRef]  

19. J. Glatz, P. Symvoulidis, P. B. García-Allende, and V. Ntziachristos, “Robust overlay schemes for the fusion of fluorescence and color channels in biological imaging,” J. Biomed. Opt. 19(4), 040501 (2014). [CrossRef]  

20. A. Taghipour and H. Ghassemian, “Hyperspectral anomaly detection using spectral-spatial features based on the human visual system,” Int. J. Remote. Sens. 40, 1–22 (2019). [CrossRef]  

21. W. Luo, H. Zhang, and X. Liu, “Hyperspectral/multispectral reflectance imaging combining with watershed segmentation algorithm for detection of early bruises on apples with different peel colors,” Food Anal. Methods 12(5), 1218–1228 (2019). [CrossRef]  

22. L. Yan, N. Noro, Y. Takara, F. Ando, and M. Yamaguchi, “Using hyperspectral image enhancement method for small size object detection on the sea surface,” (2015).

23. N. P. Jacobson and M. R. Gupta, “Design goals and solutions for display of hyperspectral images,” IEEE Transactions on Geosci. Remote. Sens. 43(11), 2684–2692 (2005). [CrossRef]  

24. L. Nuffer, P. Medvick, H. Foote, and J. Solinsky, “Multispectral/hyperspectral image enhancement for biological cell analysis,” Cytometry, Part A 69A(8), 897–903 (2006). [CrossRef]  

25. J. A. Gutiérrez-Gutiérrez, A. Pardo, E. Real, J. M. López-Higuera, and O. M. Conde, “Custom Scanning Hyperspectral Imaging System for Biomedical Applications: Modeling, Benchmarking and Specifications,” Sensors 19(7), 1692 (2019). [CrossRef]  

26. J. Mirapeix, A. Cobo, A. M. Cubillas, O. M. Conde, and J. M. López-Higuera, “In-process automatic wavelength calibration for CCD spectrometers,” Proc. SPIE 7003, 700311 (2008). [CrossRef]  

27. J. Schanda, Colorimetry – Understanding the CIE System (Wiley-Interscience, 2007), 1st ed.

28. N. Hashimoto, Y. Murakami, M. Yamaguchi, N. Ohyama, K. Uto, and Y. Kosugi, “Application of multispectral color enhancement for remote sensing,” (2011).

29. J. W. Demmel, Applied Numerical Linear Algebra (SIAM, 1997), 1st ed.

30. A. Blum, J. Hopcroft, and R. Kannan, Foundations of Data Science (2018). Draft available at https://www.cs.cornell.edu/jeh/book.pdf.

31. W. Cheney and D. Kincaid, Numerical Mathematics and Computing (Brooks/Cole Cengage Learning, 2013), 7th ed.

32. P. E. Dennison, K. Q. Halligan, and D. A. Roberts, “A comparison of error metrics and constraints for multiple endmember spectral mixture analysis and spectral angle mapper,” Remote. Sens. Environ. 93(3), 359–367 (2004). [CrossRef]  

33. B. W. Silverman, Density Estimation for Statistics and Data Analysis (Chapman and Hall/CRC, 1986), 1st ed.

34. D. M. McClatchy, E. J. Rizzo, J. Meganck, J. Kempner, J. Vicory, W. A. Wells, K. D. Paulsen, and B. W. Pogue, “Calibration and analysis of a multimodal micro-CT and structured imaging system for the evaluation of excised breast tissue,” Phys. Med. Biol. 62(23), 8983–9000 (2017). [CrossRef]  

35. P. A. Valdes, J. P. Angelo, H. S. Choi, and S. Gioux, “qF-SSOP: real-time optical property corrected fluorescence imaging,” Biomed. Opt. Express 8(8), 3597–3605 (2017). [CrossRef]  

36. M. van de Giessen, J. Angelo, C. Vargas, and S. Gioux, “Real-time imaging of diffuse optical properties and surface profile using 3D-SSOP,” Proc. SPIE 9319, 931910 (2015). [CrossRef]  



