Abstract

In this work, we propose a novel three-dimensional compressive sensing (CS) approach for spectral domain optical coherence tomography (SD OCT) volumetric image acquisition and reconstruction. Instead of taking a spectral volume whose size is the same as that of the volumetric image, our method uses a subset of the original spectral volume that is under-sampled in all three dimensions, which reduces the amount of spectral measurements to less than 20% of that required by the Shannon/Nyquist theory. The 3D image is recovered from the under-sampled spectral data dimension-by-dimension using the proposed three-step CS reconstruction strategy. Experimental results show that our method can significantly reduce the sampling rate required for a volumetric SD OCT image while preserving the image quality.

© 2014 Optical Society of America

1. Introduction

Ultrahigh resolution 3D optical coherence tomography (OCT) imaging is a powerful and efficient technique for disease diagnosis and medical treatment of various tissues [1–4]. It is usually obtained by assembling consecutively acquired B-scans. 3D OCT provides volumetric visualization and quantification of tissue morphology features, reduction of the sampling error introduced by missing a sample location in 2D imaging, and reconstruction of en-face views of the tissue [5]. Nevertheless, 3D OCT imaging requires a long data acquisition time and a large amount of spectral measurements, which makes it susceptible to unavoidable motion artifacts.

Recently, compressive sensing (CS) [6,7] has been studied extensively for high-resolution OCT image reconstruction [8–17]. It has been shown that a high-quality OCT image can be reconstructed through CS from a fraction of the spectral measurements required by the Shannon/Nyquist theory, which reduces the data acquisition time as well as the spectral data size while preserving the image quality. Application of CS to OCT volumetric imaging was reported in [15]. That work acquires a subset of the B-scans required for full 3D imaging and uses them to recover the missing ones through CS reconstruction. Although under-sampled image data is used in the CS reconstruction, the sampling pattern is equivalent to sparsely sampling the spectral volume for the original 3D image in the slow-scanning lateral direction: the selected B-scans are reconstructed from 100% of the spectral measurements by the classical method before CS reconstruction. They demonstrated volumetric OCT image recovery from up to 75% missing data. In later studies by the same group, two similar sampling patterns are described: one consists of sparsely selected horizontal B-scans and vertical B-scans [16], while the other takes several radial B-scans [17]. These two sampling patterns can be considered as under-sampling the original spectral volume in both the fast-scanning and slow-scanning lateral directions. The sampling patterns in [15–17] all require 100% of the spectral measurements for the selected B-scans; the under-sampling is implemented in one or two lateral directions of the original spectral volume.

Most other CS OCT studies have used sampling patterns that reduce the number of spectral measurements for each A-scan, i.e. the under-sampling is done in the spectral domain directly. A-scans are reconstructed from the under-sampled spectral measurements, and several A-scans are then placed together along the fast-scanning direction to form a B-scan. A volumetric image can be generated by repeating the same process for all the B-scans in the volume. Usually, 40%–50% of the spectral data is needed to reconstruct a high-quality OCT image for biological samples with complex morphology.

The above sampling patterns, however, only consider under-sampling in one or two dimensions of the spectral data and use 100% of the measurements for the rest, which does not take full advantage of sparse sampling, especially for the reconstruction of volumetric data. The under-sampling rate can be further reduced by exploiting sparse sampling in all three dimensions of the volumetric spectral data.

In this work, we propose a novel sampling pattern that under-samples the spectral volume for the 3D OCT image in all three directions: axial, fast-scanning lateral (B-scan), and slow-scanning lateral (C-scan). This method can be considered a combination of previous sampling patterns that under-sample in either the axial direction or a lateral direction. The final under-sampling rate for the volumetric image is the product of the under-sampling rates in the three directions, which is much lower than any single one of them or the product of any two. Thus, the proposed sampling pattern has the potential to achieve a smaller under-sampling rate than current methods.

To recover the 3D image, we propose a three-step CS strategy that recovers the missing volumetric image data dimension-by-dimension; that is, the one-dimensional CS reconstruction procedure is applied separately to the recovery of the 3D image data in each direction. The proposed sampling pattern and CS reconstruction strategy can also be easily adapted to the reconstruction of 2D OCT images, reducing the size of the spectral measurements as well as the scanning time. To the best of our knowledge, this is also the first time that under-sampling in both dimensions of the B-scan has been explored for CS OCT imaging.

Our proposed sampling pattern and CS reconstruction strategy were tested on OCT data of in vivo biological tissues. The experimental results show that high-quality, high-resolution OCT images can be reconstructed from less than 20% of the spectral measurements.

2. Methods

2.1. CS OCT

The reconstruction in CS SD OCT from a subset of the measurements is obtained by solving a constrained optimization problem that maximizes the sparsity of the signal in the transform domain while preserving the fidelity of the signal in the measurement domain. The under-sampled measurements are denoted y_u and the desired image signal x; both are one-dimensional in this paper. Note that x is always in the spatial domain. The CS reconstruction can be formulated as:

min_x ‖Ψx‖_1   s.t.   ‖MFx − y_u‖_2^2 ≤ ε        (1)

where M is a sampling mask that encodes which data are acquired in y_u, F is the transformation between the measurement domain and the spatial domain, Ψ is the sparsifying operator that transforms x to a sparse representation, and ε is the parameter that controls the fidelity of the signal in the measurement domain. ‖·‖_1 denotes the ℓ1-norm and ‖·‖_2 the ℓ2-norm.

There are two different kinds of measurement domains used in this paper: spectral and spatial. For CS reconstruction in the axial direction (step 1 in Fig. 2), y_u is the under-sampled data in the spectral domain and F is the Fourier transformation matrix. Ψ is chosen as the identity matrix, since the OCT image has been shown to be sufficiently sparse in the spatial domain [8, 10, 11]. For CS reconstruction in either of the lateral directions (steps 2 and 3 in Fig. 2), y_u is the under-sampled data in the spatial domain and F is the identity matrix. Ψ can no longer be the identity matrix, since the measurement domain cannot coincide with the sparsifying domain. Instead, two different sparsifying operators are used: the wavelet transformation and the Fourier transformation.
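The step-1 axial recovery can be sketched with a plain iterative soft-thresholding (ISTA) loop. Note that the paper solves Eq. (1) with YALL1, so this is only an illustrative stand-in on the unconstrained Lagrangian form; the threshold `lam` and iteration count are arbitrary assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Complex soft-thresholding: shrink each magnitude by t."""
    mag = np.abs(v)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * v, 0.0)

def ista_axial(y_u, mask, lam=0.02, n_iter=300):
    """Recover one A-scan from under-sampled spectral data via ISTA on
    lam*||x||_1 + 0.5*||M F x - y_u||_2^2, where F is the unitary FFT
    and the sparsifying operator Psi is the identity (step 1)."""
    x = np.zeros(mask.size, dtype=complex)
    for _ in range(n_iter):
        # gradient step; the unitary FFT gives a Lipschitz constant of 1
        residual = mask * np.fft.fft(x, norm="ortho") - y_u
        x = x - np.fft.ifft(mask * residual, norm="ortho")
        # proximal step: promote sparsity in the spatial domain
        x = soft_threshold(x, lam)
    return x
```

For the lateral steps, the same loop would instead use an identity F and threshold wavelet or Fourier coefficients of x.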

Fig. 1 Schematic demonstration of the proposed sampling pattern which under-samples the original spectral volume in all three dimensions.

Fig. 2 Schematic demonstration of the proposed three-step CS reconstruction strategy.

2.2. Under-sampling pattern

The proposed sampling pattern, which under-samples the original spectral volume for the volumetric OCT image in all three directions, is schematically presented in Fig. 1. The under-sampling in the C-scan consists of randomly selected B-scans in the volume; a subset of the A-scans in each B-scan is randomly chosen to accomplish the under-sampling in the fast-scanning lateral direction; and for each A-scan, only a fraction of the k-space measurements is acquired. Figure 1 also compares the amount of data in the acquired under-sampled spectral volume with that of the original spectral volume for the 3D OCT image: the spectral data obtained using the proposed sampling pattern are scaled down in all three dimensions relative to the spectral measurements required by the Shannon/Nyquist rate.

Uniformly random under-sampling with a constraint on the maximum gap size is applied to both lateral directions, and the linear-k sampling mask proposed in [11] is used for under-sampling the k-space data in the axial direction.
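One plausible way to generate such a gap-constrained random mask is rejection sampling. The paper does not give its exact mask-generation algorithm, so this sketch, including the `max_gap` parameter name, is an assumption:

```python
import numpy as np

def random_mask_max_gap(n, rate, max_gap, seed=None):
    """Uniformly random 1D sampling mask whose longest run of skipped
    positions is capped at max_gap, built by rejection sampling.
    (Assumed implementation; the paper does not state its algorithm.)"""
    rng = np.random.default_rng(seed)
    k = int(round(n * rate))
    while True:
        idx = np.sort(rng.choice(n, size=k, replace=False))
        # gap lengths, counting the runs before the first and after
        # the last acquired position
        gaps = np.diff(np.concatenate(([-1], idx, [n]))) - 1
        if gaps.max() <= max_gap:
            mask = np.zeros(n, dtype=bool)
            mask[idx] = True
            return mask
```

At moderate sampling rates the rejection loop terminates quickly, since long empty runs are rare under uniform sampling.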

2.3. Volumetric data reconstruction

To reconstruct the 3D OCT image from the under-sampled spectral volume described in Sec. 2.2, we propose a three-step CS strategy, schematically illustrated in Fig. 2. In the first step, the selected A-scans in each B-scan are reconstructed individually from their under-sampled k-space measurements through CS reconstruction; the CS reconstructions of different A-scans are independent of each other. In the second step, CS reconstruction is applied to each selected B-scan row-by-row. The image data from the A-scans fully reconstructed in the first step serve as the under-sampled data (y_u in Eq. (1)), while the omitted A-scans are recovered by CS. The reconstructions of different rows in this step are also independent of each other. After this step, all A-scan image data for the selected B-scans are recovered. In the final step, the fully recovered B-scan images are used as the input data for the CS volume reconstruction that recovers the missing B-scan images. Note that in this stepwise process, the under-sampled measurements y_u in step 1 are in k-space, while in steps 2 and 3 they are image data in the spatial domain. YALL1 [18, 19] with the default parameters is used to solve the CS reconstructions.
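The three steps can be sketched as a wrapper that fixes only the order of the 1D reconstructions. Here `recon_1d` stands in for any 1D CS solver (the paper uses YALL1); all names are illustrative assumptions:

```python
import numpy as np

def reconstruct_volume(spec_u, masks, recon_1d):
    """Dimension-by-dimension sketch of the three-step strategy.
    spec_u: acquired spectral volume (k, fast, slow), zeros where unsampled.
    masks:  (m_k, m_fast, m_slow) 1D sampling masks.
    recon_1d(y_u, mask, domain): any 1D CS solver."""
    m_k, m_fast, m_slow = masks
    nz, nx, ny = m_k.size, m_fast.size, m_slow.size
    vol = np.zeros((nz, nx, ny))
    for iy in np.flatnonzero(m_slow):
        # Step 1: recover each acquired A-scan from its spectrum.
        for ix in np.flatnonzero(m_fast):
            vol[:, ix, iy] = recon_1d(spec_u[:, ix, iy], m_k, "spectral")
        # Step 2: fill in missing A-scans of this B-scan row-by-row
        # (only the displayed half of the depth rows is needed in practice).
        for iz in range(nz):
            vol[iz, :, iy] = recon_1d(vol[iz, :, iy] * m_fast, m_fast, "spatial")
    # Step 3: recover the missing B-scans along the slow direction.
    for iz in range(nz):
        for ix in range(nx):
            vol[iz, ix, :] = recon_1d(vol[iz, ix, :] * m_slow, m_slow, "spatial")
    return vol
```

Because every inner call is an independent 1D problem, the loops parallelize trivially, which is the property the Discussion exploits for GPU acceleration.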

3. Experimental results

3.1. System configuration

The proposed sampling pattern and CS reconstruction strategy are evaluated using spectral OCT data obtained from an in-house developed SD OCT system. The system uses a 12-bit, 70 kHz, 2048-pixel CCD line-scan camera (EM4, e2v, USA) as the detector array of the spectrometer. The spectrometer is designed for a superluminescent diode (SLED) light source with a 105 nm bandwidth centered at 845 nm. The axial resolution of the system is approximately 4 μm in air, and its transversal resolution is approximately 12 μm. The spectral volume used in the CS reconstructions is under-sampled from the pre-generated full-size k-space volume for the 3D OCT image. The original volume data contain 250 B-scans covering an area of 1 mm × 1 mm; the frame size of each B-scan is 2048 (axial) × 1000 (fast-scanning lateral). The camera covers a spectral range of 240 nm, which implies a sampling interval of 0.117 nm/pixel. All animal studies were conducted in accordance with the Johns Hopkins University Animal Care and Use Committee Guidelines.

The data processing was implemented in MATLAB R2014a on a desktop with an Intel Xeon CPU (E5-2687W, 3.1 GHz), 32 GB RAM, and the Windows 7 64-bit operating system. The CS programs were not optimized for speed and could be accelerated significantly if implemented in C++ or on a graphics processing unit (GPU) [20–22].

3.2. Image reconstruction of a B-scan of a mirror surface

We first studied the basic properties of the proposed method by reconstructing a B-scan of a mirror surface. The spectral data were under-sampled in both the axial and the fast-scanning lateral directions. As mentioned above, our proposed reconstruction strategy can achieve 2D OCT image reconstruction by taking only the first two steps.

First, we fixed the overall sampling rate at 5%, which can be shown to be too small if the under-sampling is implemented in only one dimension. The relative error is computed on CS reconstruction results obtained with various axial sampling rates ranging from 5% to 100%; the corresponding sampling rate in the fast-scanning lateral direction then ranges from 100% to 5%. The relative error is defined as:

e = ‖f_CS − f_ref‖_2 / ‖f_ref‖_2        (2)
where f_CS is the CS reconstruction result and f_ref is the reference image. Here, we use the original image as the reference, obtained by applying the classical method to 100% of the spectral measurements. Two different sparsifying operators were used for the CS reconstructions in the fast-scanning lateral direction: the 4-level Daubechies-4 wavelet transformation and the Fourier transformation. As can be seen in Fig. 3(a), the relative error is high at small axial sampling rates, even though the corresponding sampling rate in the fast-scanning lateral direction is 100%. This is because the axial reconstruction result is used as the input data for the CS reconstruction in the fast-scanning lateral direction; when it is inaccurate, the full sampling in the fast-scanning lateral direction is redundant and does not help the overall reconstruction. As the axial sampling rate increases, the relative error decreases until it reaches a minimum, which shows that a better reconstruction can be achieved by under-sampling in both dimensions of the B-scan. The relative error then increases again, implying that the sampling rate in the fast-scanning lateral direction has become too small while the axial sampling rate is more than sufficient. The relative error reaches its minimum at a 20% axial sampling rate and a 25% lateral sampling rate. The Fourier and wavelet transformations show similar performance.
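The relative-error metric above is a one-line computation; a minimal helper (function name is ours) makes the definition concrete:

```python
import numpy as np

def relative_error(f_cs, f_ref):
    """Relative l2 error between a CS reconstruction and the reference
    image reconstructed from 100% of the spectral measurements."""
    f_cs, f_ref = np.asarray(f_cs, dtype=float), np.asarray(f_ref, dtype=float)
    return np.linalg.norm(f_cs - f_ref) / np.linalg.norm(f_ref)
```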

Fig. 3 (a) Relative error vs. axial sampling rate. The overall sampling rate is fixed; (b) relative error vs. lateral sampling rate. The axial sampling rate is fixed.

Figure 3(b) shows the relative error computed on CS reconstruction results obtained using various lateral sampling rates and a fixed axial sampling rate (20%). Here, the relative error is computed only on the missing A-scans, i.e. those for which no spectral data were obtained. The relative error decreases at higher lateral sampling rates and converges to the error level of the axial CS reconstruction, which is computed on the axial reconstruction results and does not change because the axial sampling rate is fixed. A similar conclusion can be drawn from a comparison of CS reconstruction results with a fixed lateral sampling rate and a varying axial sampling rate.

3.3. Image reconstruction of a B-scan of human skin

We then evaluated the proposed method through the reconstruction of a B-scan of human skin. Its spectral data set was under-sampled in both the axial and the fast-scanning lateral directions, with an under-sampling rate of 50% in each. Both the Fourier transformation and the wavelet transformation were tested as the sparsifying operator in the lateral CS reconstruction. The reconstruction results are shown in Figs. 4(c) and 4(d), respectively. The original image obtained using 100% spectral sampling in both directions is displayed in Fig. 4(a) as a reference. The CS reconstruction result obtained using spectral data under-sampled only in the axial direction is shown in Fig. 4(b) as a comparison; each A-scan in Fig. 4(b) uses 25% of the spectral measurements, and the resulting A-scans are concatenated to form the B-scan. Thus, Figs. 4(b)–4(d) use the same amount of under-sampled spectral data. All the images are shown in the same dynamic range.

Fig. 4 Reconstruction results of human skin. (a) is obtained using 100% spectral data. (b) is obtained using 25% spectral data for each A-scan and no under-sampling is applied in the fast-scanning direction. (c) and (d) are obtained using the proposed method, with the wavelet transformation and Fourier transformation as sparsifying operators, respectively. The under-sampling rates for the axial direction and fast-scanning direction are both 50%. The scale bars represent 100 μm. The image size in pixels is 900×925.

As can be seen from Fig. 4, our proposed method achieved accurate reconstructions that are very similar to the reference image. In contrast, the CS reconstruction using the same amount of spectral data under-sampled only in the axial direction shows obvious reconstruction error and information loss, especially for the fine structures at large imaging depths. This is mainly because 25% of the spectral data is not enough for an accurate CS reconstruction of an A-scan with complex morphology. Our proposed method instead takes 50% of the spectral data for each selected A-scan, so these A-scans can be reconstructed with high accuracy; the missing A-scans are then recovered by CS reconstruction row-by-row using the image data from the accurately recovered A-scans. The under-sampling rate for the input data in each CS reconstruction of the fast-scanning direction is also 50%, which is usually high enough for an accurate reconstruction. Thus, our proposed method can significantly reduce the overall sampling rate by under-sampling in all dimensions while achieving a high-quality reconstruction. The peak signal-to-noise ratio (PSNR) is computed for Figs. 4(a)–4(d) for a quantitative comparison of the image quality, where the PSNR is defined as:

PSNR = 10 log_10( max(f(x))^2 / var )        (3)

where f(x) is the amplitude of the B-scan and var is the variance of the selected background regions outlined by the white dashed rectangles in Figs. 4(a)–4(d). The PSNR for Figs. 4(a)–4(d) is 72.60 dB, 69.92 dB, 75.49 dB, and 75.57 dB, respectively. Figure 4(c) achieves 2.89 dB and 5.57 dB better PSNR than Figs. 4(a) and 4(b), respectively, while the PSNR improvements are 2.97 dB and 5.65 dB for Fig. 4(d) against Figs. 4(a) and 4(b), respectively. The CS reconstruction enhances the sparsity of the signal in the sparsifying domain while preserving the fidelity of the measurements, and has long been recognized to be good at reducing background noise [8–10]. For the same reason, the CS results tend to lose low-intensity features in the sparsifying domain.
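A direct implementation of the PSNR definition above, with the background region passed in explicitly (function and argument names are ours):

```python
import numpy as np

def psnr_db(image, background):
    """PSNR in dB: peak squared amplitude of the B-scan over the
    variance of a user-selected background (noise-only) region."""
    image = np.abs(np.asarray(image, dtype=float))
    return 10.0 * np.log10(image.max() ** 2 / np.var(background))
```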

3.4. Volumetric image reconstruction

We then evaluated the proposed method using a volumetric OCT image. The tested spectral volume was under-sampled from the fully acquired spectral data in all three dimensions using the proposed sampling pattern, and the three-step strategy of Sec. 2.3 was then applied to the under-sampled spectral data to reconstruct the volumetric image.

The experiment was implemented using a 3D OCT data set of a mouse cornea. The under-sampling rates of 50%, 60% and 60% are applied to the axial direction, fast-scanning lateral (B-scan) direction and slow-scanning lateral (C-scan) direction of the original 3D spectral volume, respectively. The Fourier transformation and the 4-level Daubechies4 wavelet transformation are tested separately as the sparsifying operator. The same sparsifying operator is used in the second and third steps of the CS reconstructions in one experiment. More specifically, the images in the second columns of Figs. 5, 6 and 7 are obtained with wavelet transformation as the sparsifying operator in both step 2 and step 3 during the reconstruction, while the Fourier transformation is used for reconstructing the images in the third columns of these figures. The reconstructed images are post-processed for volumetric visualization using the free software ImageJ.

Fig. 5 First row: volumetric visualization by ray-casting; second row: orthogonal cross-sectional display; first column: image obtained with 100% spectral data; second column: image obtained using the proposed method with the wavelet transformation; third column: image obtained using the proposed method with the Fourier transformation.

Fig. 6 Recovered slices in the reconstructed volumetric images. Rows (a) and (b) are two en-face slices at the position of 160 μm and 800 μm below the surface, respectively. The image size in pixels is 925×250. Rows (c) and (d) are slices in the slow-scanning direction. The image size in pixels is 900×250. Rows (e) and (f) are B-scans in the fast-scanning direction. The image size in pixels is 900×925. The first column is the image obtained using 100% data. The second and third columns are images obtained using the proposed method with the wavelet and the Fourier transformation, respectively.

Fig. 7 First row: representative slices obtained using 100% spectral measurements (first column) and under-sampled spectral volume using the wavelet transformation (second column) and the Fourier transformation (third column). Second row: zoom-in of the green rectangle areas in the first row.

Figure 5 shows a ray-casting view of the mouse cornea. The sub-images in the first row show an isotropic view of the cornea sample, and those in the second row are orthogonal slice displays of the volume that reveal its inner structure. The first column in Fig. 5 presents the original 3D image, obtained from 100% of the spectral measurements, as a reference. The images in the second column are the CS reconstruction results using the wavelet transformation, while those in the third column use the Fourier transformation. As can be seen, the CS reconstruction results with both transformations are very close to the reference image. Anatomical structures such as the cornea, lens, and iris can be clearly visualized and are marked in the image.

Figure 6 shows representative slices extracted from the 3D mouse cornea image in different directions, two per direction. The reference images using 100% of the spectral measurements are shown in the first column, with the positions of the selected slices marked. The slices from the 3D images obtained using the wavelet transformation and the Fourier transformation are shown in the second and third columns, respectively. To demonstrate that the proposed method can recover the missing image data, the slices in both lateral directions shown in Fig. 6 are taken from the missing positions, i.e. no spectral data were acquired for these slices; they are recovered entirely by the proposed reconstruction method. As can be seen, the missing image data are reconstructed with high accuracy and are nearly indistinguishable from the original image using 100% of the spectral volume.

One representative en-face slice of the reconstruction result is shown in Fig. 7 to closely examine the images obtained using the two sparsifying operators: the wavelet transformation and the Fourier transformation. The green rectangle areas in the first row of Fig. 7 are magnified in the second row. As can be seen, the difference between the two results is small. The wavelet transformation achieves a better overall image quality with sharper structure, while the Fourier transformation yields a smoother image. The smoothness arises mainly because the signal in the Fourier domain has low intensity at high-frequency positions, which tends to be suppressed by the sparsity enhancement during CS reconstruction; the process is similar to low-pass filtering. Based on our experience, the wavelet transformation usually achieves better image quality. However, the Fourier transformation offers a better balance between image quality and reconstruction speed, since a fast implementation through the fast Fourier transform (FFT) is available. A quantitative evaluation of different sparsifying transformations is still under study.

4. Discussion

In the proposed three-step reconstruction strategy, the order of the second and third steps can be interchanged, since the CS reconstructions in each step use only 1D input data and there is no overlap between the input and output data of the two steps. These two steps are independent of each other; therefore, changing their order does not alter the reconstruction result, as long as an accurate reconstruction is achieved in each step.

In the second step of the proposed reconstruction strategy, the CS reconstruction only needs to be applied to half of the rows of each selected B-scan (rows here indicate the 1D image data in the fast-scanning lateral direction), since in regular SD OCT only the first half of the B-scan is displayed. The same holds for the third step: only the first half (i.e. the top half in Fig. 2) of the volumetric image data needs to be reconstructed. In addition, the number of rows incorporated in step 2 can be reduced even further for samples with a small imaging depth. Reducing the row number saves reconstruction time and reduces the system memory required.

Our proposed reconstruction strategy applies 1D CS reconstruction separately to the volumetric data in each direction: in step 1, to the under-sampled spectral data of each chosen A-scan of the selected B-scans; in step 2, to each row of the selected B-scans; in step 3, to each row in the slow-scanning lateral direction of the 3D image. Although 3D transformations have also been used in the CS reconstruction of volumetric OCT images [15–17], our decomposition-based strategy has several benefits:

  • (1) It is easy to change parameters when reconstructing data with different sparsity and noise levels. Usually, a bigger ε is desired when reconstructing the rows corresponding to small imaging depths, to reduce the noise, while a smaller ε is needed for those with large imaging depths, to preserve the low-intensity features. This also gives the developer more choices when tuning the parameters for OCT data with complex morphology.
  • (2) It saves memory during the reconstruction. For instance, the full volumetric image used in the experiments has 2048×1000×250 voxels. The data size is approximately 3.81 GB assuming the data type is double (the default in MATLAB), and doubles to 7.63 GB when the Fourier transformation is used, because the data type becomes complex double. Usually, more than one copy of the volumetric data is stored in RAM during the CS reconstruction to hold the interim data for the transformations between the sparsifying domain and the image domain or between the measurement domain and the spatial domain. A 3D transformation therefore places a heavy burden on memory usage, and the situation deteriorates when implementing GPU acceleration, since most GPUs do not have enough on-board memory to hold more than one such large array; complicated data decomposition would be necessary. In comparison, assuming a 50% under-sampling rate in all dimensions and the double data type, the 3D CS OCT data size in the proposed method is 954 MB in steps 1 and 2 and 1.91 GB in step 3; these numbers double for complex double at the same under-sampling rate. Our algorithm thus requires less memory. In addition, all the CS reconstructions in our algorithm use 1D under-sampled data, so the size of the interim data is very small and our algorithm can be easily implemented on most regular GPUs.
  • (3) It is easier to implement GPU acceleration with the proposed CS reconstruction method. Besides the memory advantage mentioned above, it is straightforward to accelerate the computation by applying data decomposition to the volumetric data, since the 1D CS reconstructions in each step are independent of each other. A large number of 1D CS reconstructions can be executed simultaneously, provided the GPU memory is not exhausted; GPUs favor a large number of simple computations over a small number of complex ones. More than one GPU can also be used for greater acceleration. Examples of accelerating multiple independent CS reconstructions of SD OCT data on a triple-GPU architecture can be found in [20–22].

CS tries to maximize the sparsity of the signal in the sparsifying domain; higher sparsity means that more signal components are zero or close to zero. CS has thus been shown to be good at reducing background noise, but, for the same reason, it usually causes the loss of low-intensity features. With the Fourier transformation this smooths the image, while the reconstruction using the wavelet transformation often loses low-intensity image details outright.

Many studies have reported accurate CS reconstruction of an A-scan using 40%–50% of the spectral data [8, 10, 11, 14, 20, 21]. In addition, the studies in [15–17] demonstrate volumetric CS reconstruction from a high percentage of missing data in one or both of the lateral directions. However, we found that the under-sampling rate cannot be made too small in all three directions simultaneously: we have not obtained satisfactory reconstruction results when applying the proposed method to a spectral volume with less than a 50% sampling rate in all three directions. The main problem is the loss of low-intensity features and details in the sparsifying domain, for two main reasons. First, CS reconstruction tends to lose low-intensity features, as mentioned earlier; recovering them accurately requires a very high sampling rate, and reducing the sampling rate harms low-intensity features more than high-intensity ones. Second, the proposed method uses three rounds of CS reconstruction, each of which can further degrade low-intensity features in the sparsifying domain; preserving them requires a sufficiently accurate reconstruction in every round.

A higher sampling rate yields better image quality but increases the size of the spectral measurements as well as the data acquisition time. The optimal sampling rate for each direction is case-specific; based on our experience so far, setting each of them to 50%–60% is usually enough to guarantee a high-quality reconstruction.

Since the CS reconstruction results in the axial direction (step 1) are used as input data for the CS reconstructions in the lateral directions (steps 2 and 3), their accuracy determines the minimum error achievable in the lateral directions. As can be seen from Fig. 3(b), the relative error converges to the error level of the axial CS reconstruction as the lateral sampling rate increases. The CS reconstruction in the lateral direction introduces more error at smaller lateral sampling rates.

Another interesting observation from Fig. 3(a) is that, for a fixed overall sampling rate, a small axial sampling rate results in a bigger relative error than an equally small lateral sampling rate. For example, 5% axial and 100% lateral sampling results in a bigger error than 100% axial and 5% lateral sampling. Thus, given a limited overall sampling budget, it is more worthwhile to implement the under-sampling in the lateral directions.

5. Conclusions

In summary, we described high-quality volumetric SD OCT image reconstruction from significantly reduced spectral data, obtained by under-sampling the original spectral volume in all three dimensions. The reconstruction applies 1D CS reconstruction dimension-by-dimension to the under-sampled spectral volume. Our method can reduce the overall sampling rate to less than 20% and can be easily adapted to the reconstruction of 2D and 3D CS OCT images.

Acknowledgments

This work was supported in part by an NIH grant 1R01EY021540-01A1.

References and links

1. D.C. Adler, Y. Chen, R. Huber, J. Schmitt, J. Connolly, and J.G. Fujimoto, “Three-dimensional endomicroscopy using optical coherence tomography,” Nat. Med. 12(12), 1429–1433 (2007).

2. M. Wojtkowski, V. Srinivasan, J.G. Fujimoto, T. Ko, J.S. Schuman, A. Kowalczyk, and J.S. Duker, “Three-dimensional retinal imaging with high-speed ultrahigh-resolution optical coherence tomography,” Ophthalmology 112(10), 1734–1746 (2005). [CrossRef]   [PubMed]  

3. T. Schmoll, C. Kolbitsch, and R.A. Leitgeb, “Ultra-high-speed volumetric tomography of human retinal blood flow,” Opt. Express 17(5), 4166–4176 (2009). [CrossRef]   [PubMed]  

4. M. Gargesha, M.W. Jenkins, A.M. Rollins, and D.L. Wilson, “Denoising and 4D visualization of OCT images,” Opt. Express 16(16), 12313–12333 (2008). [CrossRef]   [PubMed]  

5. W. Wieser, B.R. Biedermann, T. Klein, C.M. Eigenwillig, and R. Huber, “Multi-megahertz OCT: high quality 3D imaging at 20 million A-scans and 4.5 GVoxels per second,” Opt. Express 18(14), 14685–14704 (2010). [CrossRef]   [PubMed]  

6. D.L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). [CrossRef]  

7. E.J. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory 52(2), 489–509 (2006). [CrossRef]  

8. X. Liu and J.U. Kang, “Compressive SD-OCT: the application of compressed sensing in spectral domain optical coherence tomography,” Opt. Express 18(21), 22010–22019 (2010). [CrossRef]   [PubMed]  

9. L. Fang, S. Li, Q. Nie, J.A. Izatt, C.A. Toth, and S. Farsiu, “Sparsity based denoising of spectral domain optical coherence tomography images,” Biomed. Opt. Express 3(5), 927–942 (2012). [CrossRef]   [PubMed]  

10. D. Xu, N. Vaswani, Y. Huang, and J.U. Kang, “Modified compressive sensing optical coherence tomography with noise reduction,” Opt. Lett. 37(20), 4209–4211 (2012). [CrossRef]   [PubMed]  

11. N. Zhang, T. Huo, C. Wang, T. Chen, J. Zheng, and P. Xue, “Compressed sensing with linear-in-wavenumber sampling in spectral domain optical coherence tomography,” Opt. Lett. 37(15), 3075–3077 (2012). [CrossRef]   [PubMed]  

12. C. Liu, A. Wong, K. Bizheva, P. Fieguth, and H. Bie, “Homotopic, non-local sparse reconstruction of optical coherence tomography imagery,” Opt. Express 20(9), 10200–10211 (2012). [CrossRef]   [PubMed]  

13. S. Schwartz, C. Liu, A. Wong, D.A. Clausi, P. Fieguth, and K. Bizheva, “Energy-guided learning approach to compressive sensing,” Opt. Express 21(1), 329–344 (2013). [CrossRef]   [PubMed]  

14. D. Xu, Y. Huang, and J.U. Kang, “Compressive sensing with dispersion compensation on non-linear wavenumber sampled spectral domain optical coherence tomography,” Biomed. Opt. Express 4(9), 1519–1532 (2013). [CrossRef]   [PubMed]  

15. M. Young, E. Lebed, Y. Jian, P.J. Mackenzie, M.F. Beg, and M.V. Sarunic, “Real-time high-speed volumetric imaging using compressive sampling optical coherence tomography,” Biomed. Opt. Express 2(9), 2690–2697 (2011). [CrossRef]   [PubMed]  

16. E. Lebed, P.J. Mackenzie, M.V. Sarunic, and M.F. Beg, “Rapid volumetric OCT image acquisition using compressive sampling,” Opt. Express 18(20), 21003–21012 (2010). [CrossRef]   [PubMed]  

17. E. Lebed, S. Lee, M.V. Sarunic, and M.F. Beg, “Rapid radial optical coherence tomography image acquisition,” J. Biomed. Opt. 18(3), 036004 (2013). [CrossRef]   [PubMed]  

18. J. Yang and Y. Zhang, “Alternating direction algorithms for ℓ1-problems in compressive sensing,” SIAM J. Sci. Comput. 33(1–2), 250–278 (2011). [CrossRef]  

19. D. Xu, Y. Huang, and J.U. Kang, “Assessment of robust reconstruction algorithms for compressive sensing spectral-domain optical coherence tomography,” Proc. SPIE 8589, 85890C (2013). [CrossRef]  

20. D. Xu, Y. Huang, and J.U. Kang, “Real-time compressive sensing spectral domain optical coherence tomography,” Opt. Lett. 39(1), 76–79 (2014). [CrossRef]  

21. D. Xu, Y. Huang, and J.U. Kang, “GPU-accelerated non-uniform fast Fourier transform-based compressive sensing spectral domain optical coherence tomography,” Opt. Express 22(12), 14871–14884 (2014). [CrossRef]   [PubMed]  

22. D. Xu, Y. Huang, and J.U. Kang, “Real-time dispersion-compensated image reconstruction for compressive sensing spectral domain optical coherence tomography,” J. Opt. Soc. Am. A 31(9), 2064–2069 (2014). [CrossRef]  



Figures (7)

Fig. 1 Schematic demonstration of the proposed sampling pattern, which under-samples the original spectral volume in all three dimensions.
Fig. 2 Schematic demonstration of the proposed three-step CS reconstruction strategy.
Fig. 3 (a) Relative error vs. axial sampling rate, with the overall sampling rate fixed; (b) relative error vs. lateral sampling rate, with the axial sampling rate fixed.
Fig. 4 Reconstruction results of human skin. (a) Obtained using 100% spectral data. (b) Obtained using 25% of the spectral data for each A-scan, with no under-sampling in the fast-scanning direction. (c) and (d) Obtained using the proposed method, with the wavelet and Fourier transformations as sparsifying operators, respectively. The under-sampling rates in the axial and fast-scanning directions are both 50%. Scale bars: 100 μm. Image size: 900×925 pixels.
Fig. 5 First row: volumetric visualization by ray-casting; second row: orthogonal cross-sectional display. First column: image obtained with 100% spectral data; second and third columns: images obtained using the proposed method with the wavelet and Fourier transformations, respectively.
Fig. 6 Recovered slices in the reconstructed volumetric images. Rows (a) and (b): two en-face slices 160 μm and 800 μm below the surface, respectively (925×250 pixels). Rows (c) and (d): slices in the slow-scanning direction (900×250 pixels). Rows (e) and (f): B-scans in the fast-scanning direction (900×925 pixels). First column: image obtained using 100% of the data; second and third columns: images obtained using the proposed method with the wavelet and Fourier transformations, respectively.
Fig. 7 First row: representative slices obtained using 100% of the spectral measurements (first column) and from the under-sampled spectral volume using the wavelet transformation (second column) and the Fourier transformation (third column). Second row: zoom-in of the green rectangular areas in the first row.

Equations (3)


$$\min_{x} \|\Psi x\|_{1} \quad \text{s.t.} \quad \|MFx - y_{u}\|_{2}^{2} \le \varepsilon$$
$$e = \|f_{\mathrm{CS}} - f_{\mathrm{ref}}\|_{2} \, / \, \|f_{\mathrm{ref}}\|_{2}$$
$$\mathrm{PSNR} = 10\log_{10}\!\left(\max^{2}(f(x)) / \mathrm{var}\right)$$
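The two image-quality metrics above translate directly into code. The sketch below uses our own helper names and interprets the variance term in the PSNR formula as the mean squared error between reconstruction and reference, which is one common convention; the paper does not spell out its exact definition.

```python
import numpy as np

def relative_error(f_cs, f_ref):
    # e = ||f_CS - f_ref||_2 / ||f_ref||_2
    return np.linalg.norm(f_cs - f_ref) / np.linalg.norm(f_ref)

def psnr(f, f_ref):
    # PSNR = 10 log10( max^2(f) / var ), taking var as the mean
    # squared error between reconstruction and reference (assumption)
    mse = np.mean((f - f_ref) ** 2)
    return 10.0 * np.log10(f_ref.max() ** 2 / mse)

# Toy example: a reconstruction with a uniform 0.1 offset
f_ref = np.linspace(1.0, 2.0, 100)
f_cs = f_ref + 0.1
e = relative_error(f_cs, f_ref)   # small relative error
p = psnr(f_cs, f_ref)             # 10*log10(4 / 0.01) ≈ 26.02 dB
```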
