How to optimize OCT image

Open Access

Abstract

Quantization, which maps the real values of raw data to a series of fixed gray levels, is an inevitable step in Optical Coherence Tomography (OCT) image formation. Three new quantization methods, Minimum Distortion, Information Expansion and Maximum Entropy, are applied to this specific problem. Quantization results for a capillary filled with milk and the femoralis of a rabbit are shown in this paper. Comparisons with the current logarithm-based methods show that a suitable quantization method significantly increases the contrast, SNR and visual fineness of the final image and effectively reduces the quantization error. The applicability of the different quantization methods is also discussed.

©2001 Optical Society of America

1. Introduction

Optical coherence tomography (OCT) is a novel noninvasive tomographic imaging technique with micron-scale resolution [1]. It has been widely used in many fields, such as biology, medicine and materials science [2]. However, the quality of OCT images and the accuracy of their micrometer-level information have not been fully validated. When OCT became a research hotspot, most researchers were interested in its physical mechanisms, instrumentation and practical applications. As the research progresses, more people are turning to image processing to solve real-world problems, realizing that some of these problems may not be easily solved by physical methods.

Quantization, which maps the real values of raw data to a series of fixed gray levels, is an inevitable step in OCT image formation. Image quantization is usually used for three purposes. The first is image compression, transmission, storage, etc. [3]. The second is to enhance images by adapting them to the visual properties of the human eye [3, 4]. In this situation, visual effect is more important than absolute distortion. For example, for an image that has few gray levels, the dithering technique can make the image look smooth by adding random noise without changing the number of gray levels [3]. This kind of quantization is not concerned with the real information in the image but with the human psychological visual impression; it is, in effect, a "visual perceptual deceit". The third purpose is data visualization or pixel-level transformation [5]. For example, quantization methods that map raw data to image scale levels are employed to obtain images from FFT transformations, X-ray, MRI, ultrasound and OCT. In this case, a distortion function related to the real information in the raw data should be kept to a minimum. Researchers have paid more attention to the first two purposes. For the third, researchers in digital image processing have proposed some quantization methods for pixel-level transformation [5]. However, most researchers perform physical data visualization by using the logarithm to compress the dynamic range and convert raw data to image scale levels [6]. Few papers have described other quantization methods in the physics domain.

At the heart of the OCT system is a Michelson interferometer illuminated by a broadband light source. A photodiode detects the interference signal, which occurs only when the optical path difference between the two beams of light reflected by the sample and the reference mirror is within the coherence length of the light source. Using the heterodyne detection method, the signal is amplified at the modulation frequency by a band-pass filter and an amplifier. An A/D converter transforms the analog signal into digital data, and a quantization method maps each raw datum to an image scale level. A sample's cross-sectional information is obtained by performing repeated axial measurements at different transverse positions as the optical beam is scanned across the sample. The signals constitute a two-dimensional map of the backscattering or reflectance from the internal structure of the sample. After the interference signal of each position is converted to raw data and then transformed to an image scale value, an OCT image is formed. The raw data usually have a significantly different distribution from that of the image gray levels, and real values cannot be displayed on screen directly. Therefore, it is necessary to quantize the raw data; the quantization is scalar quantization. This procedure inevitably introduces quantization error, which affects the image quality and may misrepresent detail information hidden in the raw data.

OCT images are usually pseudo-color, and pseudo-color may conceal the low-contrast nature of the images and their detailed structural information. We investigate standard 8-bit gray images in this paper; all methods can easily be generalized to images with other gray levels or to pseudo-color images. In our experiments, a 1 mm diameter capillary filled with milk and the femoralis of a rabbit were used as samples. The cross-section of each sample was scanned at a fixed angle. The resulting images should show the samples and the shadows due to the side-band spectral distribution of the light source. All of these reveal the detail information in the raw data.

2. Quantization methods commonly used in digital image processing and in OCT image formation

In digital image processing, quantization is a monotonically increasing point operator by which each intensity value in a digital image is assigned a new value from a given finite set of quantized values. The values of both the original and the quantized image are integers and can be displayed on a computer. There are four commonly used quantization methods in this area: equal-interval, equal-probability, minimum-variance and histogram hyperbolization [5].

Equal-interval is a simple linear transformation from the original image level range to the new image level range. Equal-probability is sometimes referred to as histogram equalization, which makes the frequency of occurrence of each quantized value in the quantized digital image equal [4]. In minimum-variance quantization, the range of image values is divided into contiguous intervals, whose number is the number of quantized values, such that the weighted sum of the variances of the quantized intervals is minimized. Histogram hyperbolization takes into account the nonlinearity of the human visual system: it first obtains the histogram-equalized image and then applies to it the inverse of a model function of the human visual system [5]. Models of the nonlinear characteristic of the human visual system are usually chosen to fit data from psychophysics experiments that attempt to measure the relative sensitivity of subjective brightness to image luminance. A typical model can be found in [7]. A baseline sketch of the equal-interval method, which we also test in Section 4, follows.
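As a concrete baseline, here is a minimal NumPy sketch of the equal-interval map (the function name and array conventions are our own assumptions):

```python
import numpy as np

def ei_quantize(raw, levels=256):
    """Equal-interval (EI): plain linear map from the data range to gray levels."""
    x = raw.astype(np.float64)
    x = (x - x.min()) / (x.max() - x.min())          # normalize to [0, 1]
    return np.round(x * (levels - 1)).astype(np.uint8)
```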

Logarithm-based methods are currently the most common in OCT image formation. The simplest is the Direct Logarithm (DL) method: the logarithm of the raw data is computed directly and then converted to 0–255 using a linear function [6].

Some researchers employ a Truncation Logarithm (TL) method that considers both dynamic range determination and noise reduction [8]. In this method, an appropriate threshold is chosen to eliminate noise and obtain a predetermined dynamic range. A detailed procedure is (a sketch of both log-based methods follows this list):

i. Convert raw data linearly to [0,1].

ii. Set a threshold t, based on a predetermined dynamic range. All values less than t are set to t.

iii. Calculate logarithm of all values in [t, 1] and then convert them to 256 gray levels linearly.
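A minimal NumPy sketch of the DL and TL mappings as described above (the function names, the small offset guarding log(0), and the default conventions are our own assumptions):

```python
import numpy as np

def direct_log(raw, levels=256):
    """Direct Logarithm (DL): take the log of the raw data, then map linearly."""
    logged = np.log(raw.astype(np.float64) + 1e-12)   # small offset guards log(0)
    lo, hi = logged.min(), logged.max()
    return np.round((logged - lo) / (hi - lo) * (levels - 1)).astype(np.uint8)

def truncation_log(raw, t, levels=256):
    """Truncation Logarithm (TL), steps i-iii; threshold t must lie in (0, 1)."""
    x = raw.astype(np.float64)
    x = (x - x.min()) / (x.max() - x.min())   # i.  linear map to [0, 1]
    x = np.maximum(x, t)                      # ii. values below threshold t set to t
    logged = np.log(x)                        # iii. log of [t, 1] ...
    scaled = (logged - np.log(t)) / (0.0 - np.log(t))  # ... then linear map
    return np.round(scaled * (levels - 1)).astype(np.uint8)
```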

Although the threshold may reduce noise, it also degrades image quality, and the determination of the threshold is not automatic. The main reason for choosing logarithm-based algorithms is to compress the dynamic range and to agree with the exponential law by which light attenuates in scattering materials. However, this explanation does not take into account the quantization error, or the fact that log-based methods often result in poor contrast or loss of detail information.

3. Applying new quantization methods in OCT image formation

Functions other than the logarithm for compressing the dynamic range have been reported [9]. However, no detailed explanation is provided, and the quality of the resulting images is still not good. None of those methods considers minimizing a distortion function of the raw data. Since quantization introduces quantization noise, which greatly affects subsequent image processing, it should be investigated thoroughly. We propose three new quantization methods here.

3.1 Minimum Distortion (MD) and Truncation MD (TMD) Methods

The Minimum Distortion (MD) method is based on the minimum distortion principle that has been thoroughly discussed in rate distortion theory [10]. If a mean-square measure is used as the measure of distortion, the MD method reduces to the minimum-variance method. In this paper we consider only the mean-square measure. Under the mean-square error measure, for an input signal x with probability density function p(x), the optimal quantization output levels q1,…,qN and internal breakpoints Z1,…,ZN+1 of minimum distortion satisfy the following conditions [11]:

$$Z_k = \frac{1}{2}\left(q_{k-1} + q_k\right), \qquad q_k = \frac{\int_{Z_k}^{Z_{k+1}} x\,p(x)\,dx}{\int_{Z_k}^{Z_{k+1}} p(x)\,dx}$$

where N is the number of output levels; k runs from 1 to N for q_k and from 2 to N for Z_k, since the endpoints Z_1 and Z_{N+1} are known a priori.

For quantization in OCT image formation, N usually equals 256. Regardless of the real value q_k, each output level is mapped to a fixed gray level sequentially after quantization; i.e., the smallest output level is mapped to gray level 0, the second smallest to gray level 1, and so on. This differs from the common quantization procedure and is a particularity of OCT data quantization.

An iterative method for computing the exact quantizer parameters is presented in [11]. Because of the sensitivity to initial conditions and the computational complexity of that iterative method, we used a clustering method instead.

Let a_i and a_{i+1} be the ith and (i+1)th internal breakpoints of the raw data. The number of output levels is 256, and i runs from 0 to 255. The ith output level d_i and the distortion function J_e are defined as follows:

$$d_i = \frac{\sum_{y=a_i}^{a_{i+1}-1} y\,n(y)}{\sum_{y=a_i}^{a_{i+1}-1} n(y)}, \qquad J_e = \sum_{i=0}^{255}\ \sum_{y=a_i}^{a_{i+1}-1} n(y)\,\left(y - d_i\right)^2$$

where y is a value of the raw data, n(y) is the number of raw data with value y, and a_0 and a_{256} − 1 are the minimum and maximum values of the raw data, respectively.

All d_i can be determined by minimizing the distortion function, which is similar to the c-means clustering method in pattern recognition [12]. Since y is scalar, it is not necessary to examine all clusters to decide whether J_e is reduced; a comparison between adjacent clusters is sufficient. All data with the same value y must be moved between clusters simultaneously. The common c-means algorithm can therefore be modified to execute the MD method as follows [13] (a compact sketch follows the list):

i. Set the initial clusters using simple quantization methods such as logarithm-based methods or linear methods.

ii. Suppose samples with value y are in γ_i, where γ_i is the ith cluster, in which all data will be mapped to the ith image gray level (i = 0,…,255). Calculate ρ_j as follows:

$$\rho_j = \begin{cases} \dfrac{N_j\,n(y)}{N_j + n(y)}\,\left\|y - m_j\right\|^2, & j = i-1,\ i+1 \\[1.5ex] \dfrac{N_j\,n(y)}{N_j - n(y)}\,\left\|y - m_j\right\|^2, & j = i \end{cases}$$

where m_j is the center of the jth cluster, n(y) is the number of samples with value y, and N_j is the total number of samples in the jth cluster.

iii. If ρ_i > ρ_j, move y from γ_i to γ_j.

iv. Calculate the new m_i, m_j and J_e.

v. Return to step ii and repeat the above procedure until J_e is small enough or remains unchanged.
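The move-based procedure above needs careful bookkeeping of cluster membership. As a compact illustration of minimizing the same objective J_e, the sketch below substitutes a Lloyd-style alternation of the two optimality conditions over the value histogram (a named substitute for the move-based variant; the initialization and iteration count are our own assumptions):

```python
import numpy as np

def md_quantize(raw, levels=256, iters=50):
    """Minimum-distortion (mean-square) quantizer on the raw-data histogram,
    alternating the two optimality conditions of the Max quantizer."""
    vals, cnts = np.unique(raw.ravel(), return_counts=True)
    q = np.linspace(vals.min(), vals.max(), levels)   # initial output levels
    for _ in range(iters):
        z = (q[:-1] + q[1:]) / 2.0                    # breakpoints: midpoints of levels
        idx = np.searchsorted(z, vals, side="right")  # cluster index of each value
        for k in range(levels):
            sel = idx == k
            if cnts[sel].sum() > 0:                   # levels: conditional means
                q[k] = np.average(vals[sel], weights=cnts[sel])
    z = (q[:-1] + q[1:]) / 2.0
    # map each raw datum to the gray level of its cluster, sequentially 0..255
    return np.searchsorted(z, raw.astype(np.float64), side="right").astype(np.uint8)
```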

To reduce the effect of raw data with very large values, a Truncation Minimum Distortion (TMD) method can be used. In this method, all data are sorted, and the values of a predetermined percentage of the largest data are set to the value of the largest remaining datum before applying the common clustering procedure.
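A sketch of the truncation step (the default percentage and the rounding convention are assumptions):

```python
import numpy as np

def truncate_largest(raw, pct=0.01):
    """TMD preprocessing: set the largest `pct` fraction of the data to the
    value of the largest remaining datum, then quantize as in MD."""
    flat = np.sort(raw.ravel())
    keep = int(np.floor((1.0 - pct) * len(flat)))  # number of data left untouched
    cut = flat[max(keep - 1, 0)]                   # largest remaining value
    return np.minimum(raw, cut)

# usage: image = md_quantize(truncate_largest(raw, pct=0.01))
```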

3.2 Information Expansion (IE) Method

The Information Expansion (IE) method takes into account the fact that the probability density function of OCT raw data usually has sharp peaks. Although the sharpness of the peaks depends on the sample, the facts that raw data have concentrated densities and that OCT images have inferior contrast are common [9]. Therefore, in the IE method, the raw data are quantized to image gray values evenly. This is a close analogy to histogram equalization in image processing, also called the equal-probability method [4, 5]. However, equalization of raw data is different from equalization of image gray levels: it can be proved that the entropy of the raw data remains unchanged during equalization, while equalization of image gray levels often reduces the image entropy. Equalizing before quantization is better, because it introduces no distortion error, whereas equalizing after quantization introduces additional errors. The detailed IE algorithm is presented as follows (a sketch follows the list):

i. Count the number of levels of the raw data and the number of data at each level.

ii. Calculate the accumulative real-value histogram of the raw data.

iii. Perform raw-data histogram equalization as is done for images [4]. In this procedure, it is not necessary to round the resulting data to integers as is done in image equalization.

iv. Convert the results to image gray levels linearly.
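A minimal sketch of steps i–iv (np.interp is our assumed way of applying the cumulative histogram to real-valued data):

```python
import numpy as np

def ie_quantize(raw, levels=256):
    """Information Expansion (IE): equalize the raw-data histogram, keeping
    real values (no rounding), then map linearly to gray levels."""
    vals, cnts = np.unique(raw.ravel(), return_counts=True)  # i.  levels and counts
    cdf = np.cumsum(cnts) / cnts.sum()                       # ii. cumulative histogram
    eq = np.interp(raw.astype(np.float64), vals, cdf)        # iii. equalized, real-valued
    eq = (eq - eq.min()) / (eq.max() - eq.min())             # iv.  linear map
    return np.round(eq * (levels - 1)).astype(np.uint8)
```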

Since equalized raw data can be obtained as above, a technique similar to histogram hyperbolization in digital image processing, which we call Information Hyperbolized Expansion (IHE), can also be applied. The only difference between IE and IHE is that in step iii, after the raw-data histogram equalization is done, the inverse of a model function is applied to the equalized raw data. Details about the model and the operating procedure can be found in [7].
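A sketch of the IHE variant. We assume Frei's logarithmic brightness model here, under which the inverse transformation is exponential in the equalized data and the output gray-level density is hyperbolic; the exact model and constants in [7] may differ:

```python
import numpy as np

def ihe_quantize(raw, levels=256, g_min=1.0):
    """Information Hyperbolized Expansion (IHE): equalize the raw-data
    histogram, then apply the inverse of an assumed visual-system model."""
    vals, cnts = np.unique(raw.ravel(), return_counts=True)
    cdf = np.cumsum(cnts) / cnts.sum()
    p = np.interp(raw.astype(np.float64), vals, cdf)  # equalized data in (0, 1]
    g_max = float(levels - 1)
    g = g_min * (g_max / g_min) ** p                  # assumed hyperbolization model
    out = (g - g_min) / (g_max - g_min) * (levels - 1)
    return np.round(out).astype(np.uint8)
```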

3.3 Maximum Entropy (ME) Method

The Maximum Entropy (ME) method is concerned with preserving the information hidden in the raw data. From the point of view of information theory, OCT quantization transfers the structural information of a sample from the raw data to the digital image and can be viewed as an information channel; the information is the uncertainty of the data. Preserving more information should be the essential purpose of quantization in OCT image formation. According to information theory [10], when the mutual information of the data before and after quantization reaches its maximum, the loss of information is reduced to its minimum. Since the quantization function is deterministic, maximizing the mutual information is equivalent to maximizing the entropy of the image data after quantization.
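The equivalence follows in one line: for a deterministic quantizer Q, the conditional entropy of the output given the input vanishes, so

$$I\bigl(X;\,Q(X)\bigr) \;=\; H\bigl(Q(X)\bigr) - H\bigl(Q(X)\mid X\bigr) \;=\; H\bigl(Q(X)\bigr),$$

and maximizing the mutual information reduces to maximizing the output entropy H(Q(X)).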

In view of the properties of entropy, the entropy of the image data reaches its maximum if and only if the probability of each image gray level is identical. Thus the method should make the probability of each image level essentially identical: the raw data are mapped to gray levels from small to large, keeping the number of data in each gray level as close as possible to the average number. The detailed algorithm is presented as follows (a sketch follows the list):

i. Count and sort the raw data.

ii. Calculate the average number of data in each image gray level.

iii. Map the raw data to gray levels (0–255) from small values to large values, making the number of data in each gray level as close to the average number as possible. Raw data with the same value must be assigned to the same gray level.
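A greedy sketch of steps i–iii (the overshoot test is our own concrete choice for "as close to the average as possible"):

```python
import numpy as np

def me_quantize(raw, levels=256):
    """Maximum Entropy (ME): map sorted raw-data values to gray levels 0..255,
    keeping each level's population as close to the average as possible.
    All data sharing a raw value go to the same gray level."""
    vals, cnts = np.unique(raw.ravel(), return_counts=True)   # i.  count and sort
    avg = cnts.sum() / levels                                 # ii. average per level
    gray = np.empty(len(vals), dtype=np.int64)
    level, filled = 0, 0
    for k, c in enumerate(cnts):                              # iii. fill levels in order
        # start a new level if adding c overshoots the average by more than
        # the current shortfall (and a next level is still available)
        if level < levels - 1 and filled > 0 and (filled + c - avg) > (avg - filled):
            level, filled = level + 1, 0
        gray[k] = level
        filled += c
    idx = np.searchsorted(vals, raw)        # look up each datum's value index
    return gray[idx].astype(np.uint8)
```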

4. Experiments and Results

Two samples were used in the experiments: a 1 mm diameter capillary filled with milk and the femoralis of a rabbit. We scanned one cross-section of each sample at a fixed angle. Besides the methods proposed above, we also tested the performance of the equal-interval (EI) method, i.e., a simple linear transformation, for comparison [5]. Using the different quantization methods, 8 images of the same cross-section of each sample were constructed from the raw data, as shown in Fig. 1 and Fig. 2.

Since the final outputs of an OCT system are images, we should use criteria defined for images. To evaluate the image quality obtained with the different quantization methods, we used three objective criteria and a subjective visual criterion, listed in Table 1:


Table 1. Criteria for image evaluation

In the above table, µ_o is the mean value of the object region, µ_b is the mean value of the background region, and σ_n is the standard deviation of the noise in the background region. S_e is the mean energy of the object region and n_e is the mean energy of the background region:

$$S_e = \frac{1}{N_o}\sum_{(i,j)\in\mathrm{obj}} \mathrm{pixel}(i,j)^2, \qquad n_e = \frac{1}{N_b}\sum_{(i,j)\in\mathrm{bg}} \mathrm{pixel}(i,j)^2$$

N_o and N_b are the numbers of pixels in the object region and the background region, respectively, and pixel(i,j) is the gray value of the point (i,j). In our experiments, the object and background regions were determined manually, since the sample shapes are known in advance.
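A sketch of how these statistics might be computed. The exact contrast, CNR and SNR formulas of Table 1 are not reproduced in this text, so the combinations below (Michelson-style contrast, (µ_o − µ_b)/σ_n for CNR, a dB ratio of the mean energies for SNR) are assumptions based on common usage:

```python
import numpy as np

def image_metrics(img, obj_mask, bg_mask):
    """Region statistics for image evaluation; obj_mask/bg_mask are boolean
    arrays marking the manually chosen object and background regions."""
    obj = img[obj_mask].astype(np.float64)
    bg = img[bg_mask].astype(np.float64)
    mu_o, mu_b = obj.mean(), bg.mean()
    sigma_n = bg.std()             # noise std in the background region
    S_e = np.mean(obj ** 2)        # mean energy of the object region
    n_e = np.mean(bg ** 2)         # mean energy of the background region
    return {
        "contrast": (mu_o - mu_b) / (mu_o + mu_b),  # assumed definition
        "CNR": (mu_o - mu_b) / sigma_n,             # assumed definition
        "SNR_dB": 10.0 * np.log10(S_e / n_e),       # assumed definition
    }
```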

The resulting images are shown in Fig. 1 and Fig. 2. All comparisons of the different quantization methods are listed in Table 2 and Table 3.


Table 2. Image quality of capillary with milk

Fig. 1. Resulting images of the capillary with milk: (a) Direct Logarithm, (b) Truncation Logarithm, (c) Minimum Distortion, (d) Truncation Minimum Distortion, (e) Information Expansion, (f) Information Hyperbolized Expansion, (g) Maximum Entropy, (h) Equal Interval.



Table 3. Image quality of the femoralis of a rabbit

Fig. 2. Resulting images of the femoralis of a rabbit: (a) Direct Logarithm, (b) Truncation Logarithm, (c) Minimum Distortion, (d) Truncation Minimum Distortion, (e) Information Expansion, (f) Information Hyperbolized Expansion, (g) Maximum Entropy, (h) Equal Interval.


It can be seen from the figures and tables that the logarithm-based methods give relatively inferior contrast, CNR and SNR. DL obtains better contrast, CNR and SNR than TL, but loses more detail. There is a tradeoff between better detail preservation and higher contrast or SNR, which implies that it is hard to obtain a comprehensively good result.

The MD and TMD methods both reduce noise significantly and obtain high contrast and SNR. MD loses most details of the samples, while TMD reveals some details. If the truncation threshold is chosen appropriately, TMD can reveal most details without loss of contrast.

The IE method gives the most abundant details, at the cost of low contrast and high noise. Since it reveals the most details, it can be used as a detail-preservation benchmark. However, it is not suitable for actual quantization because of its low contrast and SNR. The IHE method is an extension of the IE method. It also retains abundant details, and it improves contrast and SNR to some extent by modifying the raw-data histogram rather than strictly equalizing it. However, the contrast and SNR of IHE are still not high enough, and the improvement may depend on the model selection.

The ME method yields high contrast and sufficiently low noise. It also preserves most details. Moreover, there is no parameter to choose, which makes it more convenient than the TMD method.

Although the EI method obtains considerable contrast and SNR, it cannot reveal any details. Thus it is of no use in OCT image formation.

All experiments, including those using other samples not listed here, such as mouse brains, yielded similar results. The log-based quantization methods are not the best quantization methods in OCT image formation; they often result in low contrast, low SNR or detail loss and cannot produce overall good OCT images. The other quantization methods are better than the logarithm-based ones to some extent. By applying a suitable quantization method, without any modification of the OCT system, the quality of the final images can be improved greatly. Log-based methods are usually faster than methods that require sorting or iteration. However, if data sorting is performed line by line during scanning, methods using sorting can also be very fast.

5. Conclusion and Discussion

OCT image degradation has three main kinds of causes: physical causes such as multiple scattering, equipment causes such as electronic noise, and data (image) processing causes such as quantization and filtering. Improvements of quantization methods address the third kind. The goal is to preserve the most primitive information hidden in the raw data and to minimize the quantization error. Quantization deals only with the value of each pixel and is independent of the pixel's position. Since some kinds of noise are position-dependent, even a good quantization method cannot eliminate all kinds of noise. However, a good method will preserve more structural information about the sample and introduce less quantization noise, which is very important for subsequent image processing. Appropriate selection of the quantization method can also reduce the impact of low-precision equipment to some extent.

Experiments show that log-based methods are not the best quantization methods: they often lose structural information or lead to poor contrast. Their advantage is low computational complexity. The MD method is especially good at improving contrast and reducing quantization noise. The IE method is extremely useful for revealing detail information. The IHE method improves on the IE method to some extent by raising contrast and SNR while preserving abundant details, although the improvement is limited and model-dependent. The TMD and ME methods are compromises between the IE and MD methods; both can obtain satisfactory contrast and details. The TMD method needs a predetermined truncation percentage, while the ME method runs automatically.

The MD and TMD methods take a long time for quantization, and the selection of the truncation threshold is not automatic, so they are not suitable for real-time imaging. These drawbacks limit the use of MD-based quantization methods. The EI method cannot effectively reveal detail information, which makes it useless in OCT image formation.

The new quantization methods in OCT image formation share ideas with the quantization methods used in digital image processing: MD with minimum-variance, IE with equal-probability, and IHE with histogram hyperbolization. However, because the raw data in OCT image formation are real-valued and highly compactly distributed, subtle changes must be made to the image processing methods, as described above.

The reason that different quantization methods yield different image quality is that the raw data are scalar signals of finite precision whose probability density functions (PDFs) differ greatly from those of natural-scene images. Most physical data visualization tasks, such as those in MRI, ultrasound and so on, face similar problems.

Two factors affect the choice of quantization method: the PDF and the precision of the raw data. The PDF indicates whether, and how strongly, a special quantization method should expand or compress the raw data. A simple linear function or its variations are often good quantization methods if the PDF is similar to that of natural-scene images. If the PDF is compact, a nonlinear function such as the logarithm or square root [9] should be employed. When the PDF is too compact, the IE method should be employed first to reveal all the details, and then to determine what nonlinear function should be employed.

The precision of the raw data determines the number of distinct values that can be obtained and indicates what kind of function should be employed to quantize them. If the precision is very high, the rounding error will be small and an arithmetical function can be employed; in this situation the ME and IE methods obtain similar results. If the precision is not high, as in the OCT system, functions based on the relative relationships of the raw data, such as ME and MD, are a better choice.

Acknowledgement

This work was supported by the Chinese Natural Science Foundation under Grants No. 39770227 and 69908004, and by the "985" Research Fund (THSJZ) of Tsinghua University.

References and links

1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, "Optical coherence tomography," Science 254, 1178 (1991).

2. J. M. Schmitt, "Optical coherence tomography (OCT): a review," IEEE J. Sel. Top. Quantum Electron. 5, 1205 (1999).

3. K. R. Castleman, Digital Image Processing (Prentice Hall, 1996).

4. J. S. Lim, Two-Dimensional Signal and Image Processing (Prentice Hall, Englewood Cliffs, NJ, 1990).

5. R. M. Haralick and L. G. Shapiro, Computer and Robot Vision (Addison-Wesley, Reading, MA, 1993).

6. J. P. Dunkers, R. S. Parnas, C. G. Zimba, R. C. Peterson, K. M. Flynn, J. G. Fujimoto, and B. E. Bouma, "Optical coherence tomography of glass reinforced polymer composites," Compos. Pt. A: Appl. Sci. Manuf. 30, 139 (1999).

7. W. Frei, "Image enhancement by histogram hyperbolization," Comput. Graph. Image Process. 6, 286 (1977).

8. Y. Tao, Experimental Research of OCT System, master's thesis (Tsinghua University, 1998).

9. H. Ishikawa, R. Gurses-Ozden, S. T. Hoh, H. L. Dou, J. M. Liebmann, and R. Ritch, "Grayscale and proportion-corrected optical coherence tomography images," Ophthal. Surg. Lasers 31, 223 (2000).

10. J. C. A. van der Lubbe, Information Theory (Cambridge University Press, 1997).

11. J. Max, "Quantizing for minimum distortion," IEEE Trans. Inf. Theory IT-6, 7 (1960).

12. R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis (Wiley, New York, 1973).

13. M. Friedman and A. Kandel, Introduction to Pattern Recognition: Statistical, Structural, Neural, and Fuzzy Logic Approaches (World Scientific, River Edge, NJ, 1999).
