
Double-shot depth-resolved displacement field measurement using phase-contrast spectral optical coherence tomography

Open Access

Abstract

We describe a system for measuring sub-surface displacement fields within a scattering medium using a phase contrast version of spectral Optical Coherence Tomography. The system provides displacement maps within a 2-D slice extending into the sample with a sensitivity of order 10 nm. The data for a given deformation state is recorded in a single image, potentially allowing sub-surface displacement and strain mapping of moving targets. The system is based on low cost components and has no moving parts. The theoretical basis for the system is presented along with experimental results from a simple well-controlled geometry consisting of independently-tilting glass sheets. Results are validated using standard two-beam interferometry. A modified system was used to measure through-the-thickness phase changes within a porcine cornea due to displacements produced by an increase in the intraocular pressure.

©2006 Optical Society of America

1. Introduction

Since the advent of the laser, research in structural integrity and mechanics of materials has made extensive use of optical non-destructive techniques such as Holography, Electronic Speckle Pattern Interferometry (ESPI) and Moiré interferometry [1]. These techniques, however, only provide surface displacement fields. Internal damage or defects in a test sample, as well as internal elastic constants, have to be obtained from the measured displacements indirectly, usually by solving an inverse problem [2]. A broad range of methods to measure internal displacement fields and structure have been developed in the last few decades such as X-ray diffraction, photoelastic tomography (PT) [3, 4], Phase Contrast Magnetic Resonance Imaging (PCMRI)[5, 6], 3-D Digital Image Correlation (DIC) using data acquired with X-ray Computed Tomography (XCT) [7] and Optical Coherence Tomography (OCT) elastography[8, 9]. Each technique has a restricted range of materials to which it can be applied: PT, for example, is suitable only for materials that exhibit photo-elasticity; PCMRI requires significant water or fat content in the sample, and OCT is appropriate for studying weakly scattering translucent materials. For many technologically- and medically-important materials (adhesives, polymers, composites, skin and ocular tissues) the existing techniques are often either non-applicable, have insufficient spatial resolution or else are too insensitive to measure displacement fields within their volume.

OCT is an exciting technique that provides depth-resolved microstructure images primarily for medical applications. It is based on a Michelson interferometer and a low temporal coherence broadband source and is usually implemented in the time domain, in which case a reference mirror is scanned to provide cross-sections of the sample. It can also be realised in the spectral domain (SOCT) where all the information of a slice inside the material is registered simultaneously by using a spectrometer, an area photo detector array and no scanning devices. A 2-D interferogram is recorded with depth encoded as spatial frequency along the wavelength axis of the spectrometer, rather than as a function of time [10]. The microstructure is then extracted from the spectral magnitude of the Fourier transform along the wave-number axis. Following a demonstration of SOCT to measure intraocular distances [11], phase-contrast SOCT has been successfully applied in microscopy to study cellular motion and structure [12, 13]. However, the main applications of SOCT are in the field of ophthalmic imaging for inspection of different eye structures and tissues such as the cornea [14, 15] and retina [16, 17]. Recent advances in SOCT have resulted in very high-resolution [18, 19] and high-speed [20–22] systems that outperform their time domain counterparts. This is mainly due to the increased signal-to-noise ratio obtained with SOCT systems, which makes them more power-efficient and therefore capable of working at higher framing rates [23–26]. SOCT has also been used to map spatial variations in sample birefringence by using polarization sensitive systems [27, 28].

Although correlation methods can be used to quantify displacement fields from before- and after-deformation OCT microstructure images, they are computationally intensive and the displacement sensitivity is limited by the depth resolution of typically 1–10µm. This constrains the ability to detect and quantify deformation fields due to small loads (mechanical, thermal or chemical) even for compliant materials.

A recent trend in whole-field optical metrology has been to merge concepts from two beam interferometry and OCT so as to provide depth-resolved displacement fields from measurements of changes in the optical phase [29–31]. In low coherence ESPI, displacement fields are obtained from phase measurements across en-face slices inside the material [29] and different depths can be probed by scanning a reference mirror to reposition the coherence gate. Wavelength scanning interferometry (WSI) [30, 31] relies on a tuneable light source to provide through-the-thickness displacement fields within the sample. As with low coherence ESPI, it requires the object to be static to within a fraction of a wavelength during the scanning period of several seconds, which severely limits its applicability for in-vivo investigations. In order to speed up acquisition, one of the spatial axes can be traded for a wavelength axis, as in SOCT. Schwider et al. used this concept to measure 1-D surface shape profiles of a solid object in a single shot [32].

The present work is focused on using optical phase information obtained from a SOCT system to determine depth-resolved displacement fields with a sensitivity of some tens of nanometers. In Section 2 the principle and theoretical background of the technique are presented. In Section 3 the proposed approach is demonstrated and validated with a proof-of-principle experiment in which the tilt angles of two thin glass cover slips are obtained and compared against two beam interferometry measurements. In Section 4 the method is used to measure phase changes through-the-thickness of an ex-vivo porcine cornea, arising from changes in the intraocular pressure (IOP). The suitability of the technique to study dynamic events is illustrated with movies of depth-resolved wrapped phase distributions in the cornea.

2. Phase-contrast spectral optical coherence tomography

SOCT can be considered as a parallel form of WSI. Whereas in WSI the interference signal is modulated along the time axis, in SOCT it is modulated along the spatial axis of a spectrometer that samples a range of wavelengths simultaneously. A broadband source of bandwidth Δλ, emitting wavelengths λ centered on λc, is used to illuminate both a reference surface and a narrow sheet of scatterers within the object (see Fig. 1(a)). Following recombination of the two beams at a beam splitter, a diffraction grating spatially separates the different wavelength components of the object and reference beams, which form an interference pattern on a two-dimensional photo detector array.

2.1 Phase measurement and displacement sensitivity

Consider the optical setup shown in Fig. 1(a), which is used to illuminate a sample whose cross section is illustrated in Fig. 1(b). It is convenient to consider the sample as consisting of a set of ‘slices’, each of thickness equal to the depth resolution of the system, δz. At a given point in the image plane, the phase difference between the reference beam and light back-scattered from the j-th slice of the object can be expressed as:

$$\phi_j(\lambda) = \phi_{j0} + \frac{4\pi}{\lambda}\, z_j \qquad (1)$$

where λ ranges from λc − Δλ/2 to λc + Δλ/2, ϕj0 is the phase change introduced by reflection at the j-th slice and zj is the optical path difference between the reference surface and the j-th slice. Equation (1) can be expressed in terms of the wave-number k = 2π/λ, reducing to the linear relationship

$$\phi_j(k) = \phi_{j0} + 2 k z_j \qquad (2)$$

from which the frequency fk along the k-axis is given by:

$$f_k = \frac{1}{2\pi}\frac{\partial \phi_j}{\partial k} = \frac{z_j}{\pi} \qquad (3)$$

In the following analysis we use a k-space, rather than λ-space, representation because fk is independent of k, whereas the equivalent frequency derived from Eq. (1), fλ, would vary with λ. Sampling the interferogram intensity data uniformly along the k-axis will therefore ultimately provide better depth resolution than sampling uniformly along the λ-axis. In references [30, 31], where a tunable laser source was used, the wavelength range swept was under 0.12 nm and therefore small enough for these non-linearities to be ignored.

Fig. 1. Schematic of a SOCT system consisting of a broadband source BBS, beam splitter BS, object O, reference mirror R (apparent position also shown as a dashed line), diffraction grating G, lens L of focal length f, and two-dimensional photo detector array D. θ and βc denote, respectively, the incident angle of the broadband beam and the diffraction angle for the central wavelength λc.

Neglecting the influence of multiple reflections or multiple scattering within the material (one of the normal assumptions for OCT), and representing the medium by M independent slices, the intensity distribution along the k-axis of the spectrometer can be expressed in general form as:

$$I(k) = I_0 + 2\sum_{j=1}^{M}\sqrt{I_R I_j}\,\cos[\phi_j(k)] + 2\sum_{i=1}^{M}\sum_{j=i+1}^{M}\sqrt{I_i I_j}\,\cos[\phi_i(k) - \phi_j(k)] \qquad (4)$$

I0 accounts for the dc term, and IR and Ij for the intensity of the beams coming from the reference surface and the j-th slice, respectively. The second term on the right hand side of Eq. (4) represents the interference between the reference beam and light from all slices within the object. This term corresponds to intensity modulation along the k-axis with frequencies fkj = zj/π that are directly proportional to the optical path difference between the j-th slice within the object and the reference surface. The double summation term in Eq. (4) represents the autocorrelation or auto-interference of light from within the object, with ϕi(k) − ϕj(k) the phase difference between light scattered from the i-th and the j-th slices. ϕi(k) − ϕj(k) can be expressed in terms of frequencies in k-space as ϕi(k) − ϕj(k) = 2π(fki − fkj)k. This term therefore contributes to I(k) with modulation frequencies fki − fkj = (zi − zj)/π that depend on the optical path difference between the corresponding slices within the object.

Fourier transformation of I(k) leads to a spectrum with three main components, associated with the three terms on the right hand side of Eq. (4). The correlation component, associated with the second term, maps the internal structure or scattering potential of the object under study and has to be separated from the autocorrelation component so that there is no cross-talk between them. This can be done by increasing the distance from the reference surface to the object, with the drawback of halving the depth range. Alternatively, phase shifting can be used to eliminate the auto-correlation terms without loss of depth range, as was implemented in differential Fourier Domain SOCT [16].
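To make the structure of Eq. (4) and of its Fourier transform concrete, the following minimal sketch (illustrative only; the slice positions, intensities and source parameters are assumed, not taken from the experiment) simulates I(k) for two slices and locates the cross-correlation and auto-correlation peaks in the magnitude spectrum.

```python
import numpy as np
from scipy.signal import find_peaks

# Assumed parameters for illustration
N = 640                                   # pixels along the k-axis
lam_c, d_lam = 840e-9, 24e-9              # centre wavelength and bandwidth (m)
k = np.linspace(2*np.pi/(lam_c + d_lam/2), 2*np.pi/(lam_c - d_lam/2), N)

z = np.array([0.8e-3, 1.3e-3])            # optical path differences z_j (m), assumed
I_R, I_s = 1.0, np.array([0.2, 0.1])      # reference and slice intensities, assumed

I = I_R + I_s.sum()                                         # dc term I_0
for Ij, zj in zip(I_s, z):                                  # reference/object cross terms
    I = I + 2*np.sqrt(I_R*Ij)*np.cos(2*k*zj)                # phi_j(k) = 2*k*z_j (phi_j0 = 0)
I = I + 2*np.sqrt(I_s[0]*I_s[1])*np.cos(2*k*(z[0] - z[1]))  # auto-interference term

spec = np.abs(np.fft.rfft((I - I.mean())*np.hanning(N)))    # magnitude spectrum along k
depth = np.fft.rfftfreq(N, d=k[1] - k[0])*np.pi             # f_k converted to depth, Eq. (3)
peaks, _ = find_peaks(spec, height=0.1*spec.max())
print(np.round(depth[peaks]*1e3, 2))      # ~[0.5, 0.8, 1.3] mm: auto term, then cross terms
```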

If the j-th slice within the object moves to a new position so that the optical path difference is now zj + wj, then the phase ϕj in Eq. (1) will change to ϕj + Δϕj, thereby modifying the intensity distribution I(k). The complex nature of the Fourier transform of I(k), Ĩ(fk), allows not only the magnitude spectrum to be computed to obtain the structure of the object, as is commonly done in SOCT, but also the phase change Δϕj occurring at the frequency component fkj. If the interferograms recorded along the k-axis before and after deformation are denoted by I1(k) and I2(k), then the wrapped phase difference Δϕj can be obtained using the difference-of-phases equation [33]:

$$\Delta\phi_j = \tan^{-1}\left\{\frac{\mathrm{Re}[\tilde{I}_1(f_{kj})]\,\mathrm{Im}[\tilde{I}_2(f_{kj})] - \mathrm{Im}[\tilde{I}_1(f_{kj})]\,\mathrm{Re}[\tilde{I}_2(f_{kj})]}{\mathrm{Im}[\tilde{I}_1(f_{kj})]\,\mathrm{Im}[\tilde{I}_2(f_{kj})] + \mathrm{Re}[\tilde{I}_1(f_{kj})]\,\mathrm{Re}[\tilde{I}_2(f_{kj})]}\right\} \qquad (5)$$

where Ĩ1(fkj) and Ĩ2(fkj) are the Fourier transforms of I1(k) and I2(k), respectively, evaluated at fk = fkj, and Re and Im represent the real and imaginary parts of a complex number. The calculated value of Δϕj lies within the range −π to π, and will usually require unwrapping. Eq. (5) evaluates the difference of the phases before and after deformation in a single step, rather than requiring each phase to be evaluated separately and then subtracted. A further advantage of Eq. (5) is that the numerator and denominator terms can be convolved with top hat kernels to reduce speckle noise when studying scattering materials and to obtain a maximum likelihood estimate of the phase difference [33].
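As an illustration of how Eq. (5) can be evaluated in practice, the sketch below (the array layout is assumed: rows are positions along the light sheet, columns are uniformly sampled wave numbers; function and variable names are illustrative) computes the wrapped phase change at a chosen spectral bin, with optional top-hat smoothing of the numerator and denominator.

```python
import numpy as np

def wrapped_phase_change(I1, I2, fkj_bin, smooth=None):
    """Difference-of-phases formula of Eq. (5) at spectral bin fkj_bin.

    I1, I2 : 2-D arrays (n_y, N_k), interferograms before and after deformation,
             uniformly sampled along the k-axis.
    smooth : optional width (in pixels along y) of a top-hat smoothing kernel.
    """
    F1 = np.fft.rfft(I1 - I1.mean(axis=-1, keepdims=True), axis=-1)[:, fkj_bin]
    F2 = np.fft.rfft(I2 - I2.mean(axis=-1, keepdims=True), axis=-1)[:, fkj_bin]
    num = F1.real*F2.imag - F1.imag*F2.real      # numerator of Eq. (5)
    den = F1.imag*F2.imag + F1.real*F2.real      # denominator of Eq. (5)
    if smooth:                                   # smooth before taking the arctangent
        box = np.ones(smooth)/smooth
        num = np.convolve(num, box, mode='same')
        den = np.convolve(den, box, mode='same')
    return np.arctan2(num, den)                  # wrapped to (-pi, pi]
```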

The change in the optical path difference between the reference surface and the j-th slice within the medium, wj , can be obtained from Eq. (1) as

$$w_j = \frac{\lambda_c}{4\pi}\,\Delta\phi_j \qquad (6)$$

If we assume constant refractive index n within the sample, and constant refractive index n0 outside it, wj can be expressed as

$$w_j = (n_0 - n)\,d_1 + n\,d_j \qquad (7)$$

where d1 is the out-of-plane displacement of slice 1 (the object surface), and dj the corresponding displacement of the j-th slice. d1 = w1/n0 is obtained by evaluating the phase change Δϕ1 at the object surface (j = 1) using fk1 in Eq. (5). The out-of-plane displacement dj is then obtained from Eq. (7). For more complex distributions of known refractive index within the object, Eq. (7) will include an integral over the range [d1, dj) and therefore the displacement of shallow slices should be calculated first and used to correct the displacement of the deeper ones. As the displacement sensitivity is defined as the minimum displacement that can be measured with the system, it follows from Eq. (6) that it depends on the minimum phase change measurable by the system, and will therefore be limited by the phase noise.
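A minimal sketch of the conversion from unwrapped phase change to displacement, Eqs. (6) and (7), under the uniform-refractive-index assumption described above (the default numerical values and names are illustrative):

```python
import numpy as np

def displacements(dphi_surface, dphi_slice, lam_c=840e-9, n0=1.0, n=1.5):
    """Unwrapped phase changes -> out-of-plane displacements, Eqs. (6)-(7)."""
    w1 = lam_c/(4*np.pi)*dphi_surface        # optical path change at the surface, Eq. (6)
    wj = lam_c/(4*np.pi)*dphi_slice          # optical path change at slice j, Eq. (6)
    d1 = w1/n0                               # displacement of the surface (slice 1)
    dj = (wj - (n0 - n)*d1)/n                # Eq. (7) rearranged for d_j
    return d1, dj
```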

The analysis above considered I(k) to be a 1-dimensional signal, encoding the depth of scatterers along a line parallel to the illumination direction. The use of a 2-D array of photo detectors allows I(k) to be recorded independently at each of a series of points along a single spatial axis. The sample is illuminated with a sheet of light, and 1-D Fourier transforms carried out parallel to the k-axis (along the row direction in the example shown in the next section) of the interference pattern recorded on the 2-D array. The resulting image represents data measured within a plane extending into the sample, defined by the region illuminated by the light sheet.
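In practice each recorded frame is therefore processed row by row; a minimal sketch (same assumed array layout as above, with a Hanning window as one possible choice) is:

```python
import numpy as np

def depth_resolved_field(frame_k):
    """1-D FFT of a 2-D spectrometer frame (n_y, N_k) along the k-axis.

    The magnitude of the result gives the microstructure image; its phase at
    the bins f_kj is what enters Eq. (5).
    """
    ac = frame_k - frame_k.mean(axis=1, keepdims=True)       # suppress the dc term
    window = np.hanning(frame_k.shape[1])                    # window choice is illustrative
    return np.fft.rfft(ac*window, axis=1)                    # complex (n_y, N_k//2 + 1) array
```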

2.2 Depth range and depth resolution

In a SOCT system, the maximum optical path difference Δz between a slice within the object and the reference surface – known as the depth range – is limited by the spectrometer resolution. The modulation frequency fk in Eq. (3) is limited by the Nyquist frequency fNy = N/(2Δk), with N the number of pixels used to sample I(k) over the wave-number range Δk = 2πΔλ/λc². Therefore:

$$\Delta z = \frac{N\lambda_c^2}{4\,\Delta\lambda} \qquad (8)$$

The depth resolution, or the minimum axial optical path between two points inside the medium whose corresponding interference signals can be fully resolved in the frequency domain, is given by

$$\delta z = \gamma\,\frac{\lambda_c^2}{\Delta\lambda} \qquad (9)$$

where γ is a constant that depends on the windowing function used to sample the interferograms I(k) in the spectrometer. If a rectangular window is used, γ = 1 or γ = 0.603 depending on whether the resolution criterion is the distance between first zeroes or the FWHM of the resulting sinc function in the frequency domain, respectively. If a Hanning window is used instead, then γ = 2 if the resolution criterion is the distance between first zeroes, or γ = 1 if the FWHM is considered. Another way to evaluate the depth resolution is from the so-called round trip coherence length of the light source [9, 16], which gives a theoretical lower limit (γ = 2 ln 2/π ≈ 0.44) for a source with a Gaussian spectral profile.
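Both expressions are easily evaluated numerically. As a sketch, with the values quoted later in Section 3 (N = 640, λc = 840 nm, Δλ = 24 nm) the functions below reproduce the ~4.7 mm depth range and, for γ = 2 (Hanning window, first-zeroes criterion), the ~58.8 µm depth resolution.

```python
def depth_range(N, lam_c, d_lam):
    """Maximum optical path difference, Eq. (8)."""
    return N*lam_c**2/(4*d_lam)

def depth_resolution(lam_c, d_lam, gamma=1.0):
    """Axial resolution, Eq. (9); gamma depends on window and resolution criterion."""
    return gamma*lam_c**2/d_lam

print(depth_range(640, 840e-9, 24e-9)*1e3)               # ~4.7 (mm)
print(depth_resolution(840e-9, 24e-9, gamma=2.0)*1e6)    # ~58.8 (um)
```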

3. Proof-of-principle and validation experiment

Figure 2 shows a schematic of the optical setup, which is based on a Michelson interferometer with two sources of illumination: a super luminescent diode (Superlum Diodes Ltd., HP1) with a central wavelength of 840 nm, bandwidth of 50 nm and optical power of 15 mW; and a CW laser (Lightwave 142) with a wavelength of 532 nm. The beams coming from each of the sources were first collimated, then brought into alignment using a 50:50 cube beam splitter CBS, and then split into object and reference beams by means of an 80:20 wedge beam splitter WBS. Test objects S1 and S2 (in this case, microscope cover slips) were oriented normal to the z-axis within the depth range of the system. The reference beam consisted of a plane wave reflected from the reference mirror R. Upon recombination at the wedge beam splitter WBS, the reference and object beams were diffracted by a transmission grating G with 1200 lines/mm and optimized efficiency at 840 nm. NF is a neutral density filter used to optimize the speckle contrast upon interference of the reference and the object beams.

Fig. 2. Optical setup for simultaneous SOCT and two-beam interferometry.

The interference pattern from the SLD was recorded by the camera C1 (Trust 380 Spacecam OS webcam 640×480 pixels, 8 bits), and the one from the laser by camera C2 (Vosskühler GmbH HCC-1000 CMOS 1024×1024 pixels, 8 bits). The distances between diffraction grating and image planes were the same for both sources. The wedged beam splitter avoided spurious fringes due to multiple reflections from the laser and allowed the setup to be used with both SLD and laser at the same time. The first order diffracted beams from the two sources were spatially separated due to the difference in wavelength between them.

To illuminate a narrow line on the sample (S1 and S2), three cylindrical lenses were used [34]. CLy1 and CLy2 were oriented with their axes parallel to the y-axis, and CLx parallel to the x-axis. The lens orientations are shown separately in the xz and yz planes in Fig. 3(a) and 3(b) respectively. The beams from the SLD and laser were initially collimated, so CLy1 generated a line parallel to the y-axis on the object at a distance equal to its focal length (fy1 = 100 mm). Light reflected from the object re-entered the interferometer through the same lens. The combined object and reference beams passed through CLx, which was positioned at twice its focal length (fx = 160 mm) from the object. The SLD interference pattern was diffracted by the grating and CLy2 focused it onto C1 at a distance equal to its focal length of 160 mm. The distance between C1 and CLx was 320 mm (see Fig. 3(b)). The laser interference pattern was diffracted directly onto C2; no cylindrical lens was used here, so that the light from the monochromatic source spread over the full 2-D array.

Fig. 3. (a) x- and (b) y-axis image formation.

In order to allow independent tilts to be applied to both cover slips, S1 and S2 were mounted in tilting stages whose angles were controlled by, respectively, three fine-pitch screws and three piezo-electric lead zirconate titanate (PZT) actuators. The latter device was a highly repeatable tilting stage that made it possible to apply a tilt to S2 and then return it to its previous state, which was necessary for the measuring process described below.

The following six-step recording process was performed so as to measure the displacement of the two surfaces both at the same time (using SOCT) and independently (using standard two-beam interferometry for validation of the SOCT results) [31]. During the entire process the reference beam was present, as shown in Fig. 4. Subscripts a and b indicate states before and after tilting, respectively: (a) with S2 in its reference state, a laser interference pattern was recorded for S2, denoted Ia(S2); (b) after applying a tilt to S2, a second laser interference pattern was recorded for S2, denoted Ib(S2); (c) with S2 moved back to its reference state (a), S1 was inserted in its reference state and a first SLD interference pattern with both S1 and S2 in place was recorded, denoted Ia(S1 S2); (d) a black screen was placed between S1 and S2 and a laser interference pattern was recorded for S1, denoted Ia(S1); (e) after applying a tilt to S1, a second laser interference pattern was recorded for S1, denoted Ib(S1); and (f) after removing the screen, a second SLD interference pattern with both S1 and S2 in place was recorded, denoted Ib(S1 S2).

The angles θ1 and θ2 in Fig. 4 are the tilts applied to each surface about the x-axis. Use of the black screen between S1 and S2 in steps (d) and (e) avoided the need for a repeatable motion stage for S1 and thereby simplified the mechanical arrangement.

Fig. 4. Experimental procedure for measuring tilt on S1 and S2 using SOCT, and validation using two-beam interferometry.

Simple tests were performed on a sample consisting of two thin glass sheets to assess the performance of the proposed SOCT system. The objects were microscope cover slips, one behind the other, with a thickness of ~300 µm. One cover slip was painted black on one side to suppress reflections from that interface, as shown in Fig. 5.

Fig. 5. Sample geometry consisting of two microscope cover slips. The rear surface of the second cover slip was painted black to suppress the back reflections from it.

The known thickness and refractive index (n = 1.5) of the cover slips were used to calibrate the system, i.e., to find the wavelength bandwidth Δλ falling onto camera C1 through the use of Eq. (3). The bandwidth of the illumination falling onto the sensor was 24 nm. From Eqs. (8) and (9), the resulting depth range and depth resolution (using a Hanning window and 640 pixels to sample the k-axis) were ~4.7 mm and ~58.8 µm, respectively.
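A minimal sketch of this calibration step (the peak-bin value used below is illustrative): the known optical thickness n·t of a cover slip, together with the spectral bin at which its auto-interference peak appears, fixes the bandwidth Δλ falling on the sensor, since from Eq. (8) each bin corresponds to an optical depth of λc²/(2Δλ).

```python
def bandwidth_from_peak(peak_bin, n=1.5, t=300e-6, lam_c=840e-9):
    """Bandwidth on the sensor from the bin index of the S1-S1' auto-interference peak."""
    # peak_bin * lam_c**2 / (2 * d_lam) = n * t  =>  solve for d_lam
    return peak_bin*lam_c**2/(2*n*t)

print(bandwidth_from_peak(31)*1e9)    # ~24 nm for a peak near bin 31 (illustrative value)
```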

Figure 6(a) shows a raw fringe pattern from C1 where the λ-axis gives the range of detected wavelengths for the sensor. Each image of this type was re-sampled along the λ-axis (assuming uniformly spaced λ values), so as to give uniformly-sampled points along the k-axis. Although the variation of diffracted wavelength with position in the spectrometer detector array is strictly speaking nonlinear [22], the nonlinear effects are quite small. For example, for a bandwidth Δλ = 50 nm centered at λc = 840 nm, incidence angle θ = π/4 rad, an imaging lens of focal length f = 160 mm, a detector size of 6.4 mm along the wavelength axis, and a grating with 1200 lines/mm, the maximum non-linear error is below 10⁻³ nm across the range of Δλ. This means that with those parameters the wavelength-position relationship can be safely considered as linear.
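A minimal sketch of the re-sampling step, assuming the pixels sample wavelength uniformly across the calibrated bandwidth (interpolation scheme and names are illustrative choices, not the authors' implementation):

```python
import numpy as np

def resample_to_k(row, lam_c=840e-9, d_lam=24e-9):
    """Interpolate one spectrometer row from uniform lambda onto a uniform k grid."""
    N = row.size
    lam = np.linspace(lam_c - d_lam/2, lam_c + d_lam/2, N)   # wavelength at each pixel
    k = 2*np.pi/lam                                          # corresponding (non-uniform) k
    k_uniform = np.linspace(k.min(), k.max(), N)             # target uniform k grid
    return np.interp(k_uniform, k[::-1], row[::-1])          # np.interp needs ascending x
```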

Fig. 6. (a) Interference pattern with both cover slips present; (b) spectrum of the intensity along the line y = 1.9 mm, with frequency converted to optical depth using Eq. (3) (linear scale).

Figure 6(b) is the spectrum of the pattern along one line parallel to the fk-axis where frequency has been converted to optical path length through Eq. (3). Peaks RS1 and RS2 are due to the interference between beams reflected at the reference surface R and surfaces S1 and S2 respectively. RS1’ is the interference peak coming from the reference and the back surface of the first cover slip, S1’. The auto interference between S1 and S1’ generates the first peak in Fig. 6(b), located at an apparent depth of 440 µm, the same as the distance between RS1 and RS1’. These values nearly match the optical thickness of the cover slip. The depth resolution of the system is consistent with the expected theoretical value: the average full width at half maximum (FWHM) of the peaks shown in Fig. 6(b) is 52.2 µm.

Applying Eq. (5) to the interference patterns Ia(S1 S2) and Ib(S1 S2) from steps (c) and (f) of Section 3, the wrapped phase difference map is obtained for the entire medium, as shown in Fig. 7(a). For clarity, regions containing no useful phase information have been masked out by applying a binary mask derived from the magnitude image. RS1 and RS1’ have the same phase difference, as expected since they both come from the same cover slip undergoing a rigid body rotation around an axis parallel to its surface. The phase difference measured at the auto interference peak (situated at 440 µm) is also very close to zero for the same reason. Figure 7(b) shows the unwrapped phase map where the tilt is clearly observed for each cover slip interface. RS2 has a smaller tilt than RS1 and RS1’ due to the limited range of the PZT driven tilting stage. The unwrapped phase map is then converted to optical path change at each interface using Eqs. (6) and (7).

Figure 8 shows a comparison of the displacement profiles measured at each surface by means of (i) phase-contrast SOCT, and (ii) two beam interferometry using images Ia (S2), Ib (S2), Ia (S1) and Ib (S1) acquired by camera C2 during steps (a), (b), (d) and (e). Two beam interference images contained a narrow band of fringes due to the monochromatic light source and were converted to phase using the Takeda Fourier transform method applied along the y direction [35].

Tilt angles measured with phase-contrast SOCT for S1 and S2 were 1,070.9 µrad and 116.69 µrad respectively, compared to 1,074.8 µrad and 117.21 µrad obtained with two beam interferometry. The respective errors of 0.36% and 0.44% for the two surfaces can be regarded as very good agreement. The phase noise of the system was estimated by evaluating the standard deviation between the measured phase profiles and their best linear fit, and was equivalent to a displacement of 8.2 nm. It can be shown that the optical path variation along the illumination/observation direction due to the cover slip tilt is less than 0.1 nm, well below the phase noise, and can therefore be neglected in these measurements.
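A minimal sketch of how a tilt angle and a phase-noise figure of this kind can be obtained from an unwrapped phase profile along y at the front surface (the names and the small-angle treatment are illustrative, not the authors' exact processing):

```python
import numpy as np

def tilt_and_noise(y, dphi_unwrapped, lam_c=840e-9, n0=1.0):
    """Tilt angle (rad) and rms displacement noise from a surface phase profile.

    y : positions along the illuminated line (m); dphi_unwrapped : unwrapped
    phase change at the front surface at those positions (rad).
    """
    d = lam_c/(4*np.pi)*dphi_unwrapped/n0        # surface displacement, Eqs. (6)-(7)
    slope, offset = np.polyfit(y, d, 1)          # best linear fit d(y) = slope*y + offset
    tilt = np.arctan(slope)                      # tilt about the x-axis
    noise = np.std(d - (slope*y + offset))       # rms deviation from the fit -> phase noise
    return tilt, noise
```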

Fig. 7. (a) Wrapped and (b) unwrapped phase difference maps representing the out-of-plane displacement field at the interfaces shown in Fig. 6(b) as RS1, RS1’ and RS2.

The use of specular surfaces rather than a scattering medium simplifies the validation experiment, and the results remain valid for partially scattering surfaces, as was demonstrated for WSI in reference [30] (specular surfaces) and reference [31] (scattering surfaces). In terms of the mathematical description of the problem, the only difference in the case of scattering media would be that the term ϕj0 in Eq. (1) would include a random phase responsible for the speckle from different depths within the material. That random phase would be subtracted, however, when the phase difference is evaluated, as long as the speckle does not decorrelate. The effect on the final result will be a reduction in the signal-to-noise ratio due to multiply scattered light and the reduction in backscattered power. As a consequence, the displacement sensitivity, which ultimately depends on the rms phase noise, will be degraded.

Fig. 8. Displacement field profiles measured using phase-contrast SOCT (solid lines) and standard two-beam interferometry (dashed lines). An arbitrary offset of 0.1 µm has been added for clarity.

4. Phase change measurements within a porcine cornea

Porcine corneal trephinates were mounted on a pressure cell and loaded with hydrostatic pressure at IOP levels by using a column of saline solution. A SOCT system similar to the one shown in Fig. 2 was used, the differences being that camera C2 was not present and that a blazed reflection grating (1200 lines/mm) was used instead of the transmission grating in order to increase light efficiency. To improve both dynamic range and depth range, a 12-bit camera (Vosskühler GmbH CCD 1300 QBF) with 1024×1024 pixels was used as camera C1. The pressure in the cell was increased from 2.06 kPa to 2.16 kPa at time t = 0. At t = 10 seconds a sequence of m interferograms was recorded at t = t1, t2, t3, … tm, using the first interferogram at t1 as the reference state. Figures 9(a) and 9(b) show a magnitude image and the wrapped phase change map obtained between the reference state at t1 and a loaded state recorded at t = t1 + 9.1 seconds. One fringe corresponds to 420 nm out-of-plane displacement in the axial direction. The depth range and depth resolution were 3.6 mm and 28 µm respectively, with 1024 pixels sampling a total bandwidth of 50 nm.

The aim of this experiment was to demonstrate that depth-resolved phase changes due to deformation of a weakly scattering sample can be obtained with high sensitivity using a SOCT setup. Displacement fields can be obtained by unwrapping the modulo 2π phase and multiplying the result by the sensitivity of 420 nm for every 2π phase change. The system was not optimized for axial resolution, and therefore dispersion effects reduce the depth resolution as one gets deeper into the tissue. Dispersion compensation could be implemented by using plate compensators in the reference arm to account for the extra optical elements in the sample arm of the interferometer, and also for corneal dispersion. This approach, however, does not correct for depth dependent dispersion in the cornea. A better solution, and one that is particularly easy to implement in SOCT, is numerical dispersion compensation, which allows depth dependent compensation as well as higher orders of dispersion to be corrected simultaneously [18]. The effects of dispersion on phase change measurements remain to be investigated. As dispersion introduces an extra wavelength dependent phase term in Eq. (1), as long as that term does not change when a load is applied it will disappear when the phase difference is evaluated between the loaded and reference states. In this case, dispersion will only reduce the axial resolution, and the measured phase will correspond to an average over a larger resolution cell (defined by the axial and lateral resolutions). It must be noted that a swept source WSI system would be affected by dispersion in a similar way [18].

Fig. 9. (a) Magnitude and (b) phase difference map from a slice through the porcine cornea. One fringe corresponds to a displacement of 420 nm.

The movie in Fig. 10(a) shows the wrapped phase change measured through the thickness of the corneal trephinate after the pressure in the cell was increased from 2.06 kPa to 2.16 kPa at t = 0. The reference interferogram was recorded at t1 = 10 seconds after the pressure change, and then a sequence of interferograms was recorded at 3.6 frames per second with an exposure time of 60 ms. Figure 10(b) shows a similar case, in which the pressure was changed from 2.06 kPa to 2.45 kPa. Note that the anterior layers of the cornea move less (as indicated by a smaller phase change during the movie sequence) than the posterior ones, indicating a slight compression of the tissue.

Fig. 10. Movies of the wrapped phase change through-the-thickness of a porcine cornea trephinate after a change in the hydrostatic pressure from (a) 2.06 to 2.16 kPa (Media 1, 411 KB) and (b) 2.06 to 2.45 kPa (Media 2, 453 KB). Framing rate: 3.6 fps; exposure time: 60 ms.

The high displacement sensitivity makes the system vulnerable to motion of the sample in the axial direction. The condition for successful temporal phase unwrapping (unwrapping the phase along the time axis in the movie) is that the phase change at a pixel between two successive frames must be less than π (corresponding to a displacement of 210 nm). In the presence of external motion, or of eye motion in an in-vivo application, the framing rate has to be set so as to satisfy that condition. The effect of the exposure time is mainly related to the visibility of the interference fringes recorded by the camera in the spectrometer. The interferogram (a speckle interferogram in the case of a scattering material) needs to stay stationary along the wavelength axis during the exposure time in order to record a high-modulation signal from which the microstructure and phase can be evaluated. The minimum exposure time necessary to obtain high modulation interferograms that span the full dynamic range of the sensor will be limited by the power of the light source, the numerical aperture of the objective lens, the backscattering properties of the sample in the spectral region used, the diffraction grating efficiency, the spectral sensitivity of the CCD sensor array and other power losses in the interferometer.
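A minimal sketch of the temporal unwrapping step for a sequence of wrapped phase maps dphi[t, y, z] (illustrative names; valid only while successive frames differ by less than π at every pixel, i.e. by less than 210 nm here):

```python
import numpy as np

def unwrap_in_time(dphi_sequence, fringe=420e-9):
    """Unwrap along the time axis and convert to optical path change (one fringe = 2*pi)."""
    return np.unwrap(dphi_sequence, axis=0)*fringe/(2*np.pi)
```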

Motion in the lateral direction, in the plane of the microstructure image, will result in the phase difference being evaluated at different points in the sample rather than at the same point, and therefore the phase difference will become sensitive to the derivative of the displacement, as in a shearing interferometer. This can easily be avoided by re-registering the magnitude images prior to phase evaluation.
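A minimal sketch of such a re-registration step, using scikit-image's phase_cross_correlation as one possible (assumed) choice of registration tool applied to the magnitude images before the phase difference is evaluated; sub-pixel refinement is omitted:

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def register_fields(field_ref, field_mov):
    """Align the complex depth-resolved field of the deformed state to the reference."""
    shift, _, _ = phase_cross_correlation(np.abs(field_ref), np.abs(field_mov))
    aligned = np.roll(field_mov, tuple(shift.astype(int)), axis=(0, 1))  # integer-pixel shift
    return aligned, shift
```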

For a WSI system to have the same depth range as a SOCT system with N pixels across the wavelength axis in the spectrometer, it would need to record N frames. This clearly limits the applicability of WSI in dynamic applications, even using high speed cameras [31]. Another advantage of phase-contrast SOCT is that it can achieve higher depth resolutions at a lower cost than WSI. This is because WSI requires wavelength swept sources that can scan a broad spectral band without mode hopping. Fibre ring, Ti:sapphire or dye lasers are expensive and prone to mode hopping, which leads to speckle decorrelation and thus prevents phase evaluation. Phase-contrast SOCT, however, provides phase distributions only across 2-D slices within the material, whereas WSI provides full 3-D data volumes.

A practical inconvenience of our system is that the spectral data are obtained a posteriori, once the interferograms have been saved. This makes it difficult to adjust experimental parameters such as the beam ratio, exposure time, framing rate and the position of the reference mirror that places the spectral information within the usable bandwidth. A real time Fourier transform implementation (1-D, along the wave-number axis) would facilitate these tasks considerably.

5. Conclusions

Optical phase information extracted from interferograms recorded with a Spectral Optical Coherence Tomography system was used for the first time to measure depth-resolved displacement fields through-the-thickness of a sample. A displacement sensitivity of a few tens of nm is achievable, some two to three orders of magnitude better than the depth resolution of state-of-the-art OCT systems. The method was demonstrated with a proof-of-principle experiment and validated against two beam interferometry. The suitability of the method to study dynamic events was illustrated with movies of wrapped phase change distributions within the stromal tissue of an ex-vivo porcine cornea after a change in the intraocular pressure. Even though dispersion compensation could be implemented numerically to increase the axial resolution of the system, the effects of dispersion on the optical phase change remain to be investigated.

Acknowledgments

Manuel De la Torre-Ibarra thanks the Consejo Nacional de Ciencia y Tecnología (CONACYT, grant 42971) and also the Engineering and Physical Sciences Research Council (EPSRC, grant GR/T25040/01) for partially supporting this research. Jonathan M Huntley is grateful to the Royal Society and Wolfson Foundation for a Royal Society-Wolfson Research Merit Award. We also thank Dr. Tim Wess, School of Optometry and Vision Sciences, Cardiff University, for supplying the corneal samples.

References and links

1. P. Rastogi, Optical Measurement Techniques and Applications (The Artech house publishers, 1997).

2. A. Marañon, A. D. Nurse, J. M. Huntley, and P. D. Ruiz, “A low population genetic algorithm applied to characterization of sub-surface delamination”, in Proceedings of the International Conference on Computational Intelligence for Modeling, Control and Automation (CIMCA 2004), M. Mohammadian ed. (ISBN 1740881885, 2004) pp. 12–14.

3. T. Abe, Y. Mitsunaga, and H. Koga, “Photoelastic computer tomography: a novel measurement method for axial residual stress profile in optical fibers,” J. Opt. Soc. Am. A 3, 133–138 (1986). [CrossRef]  

4. H. Aben, A. Errapart, L. Ainola, and J. Anton, “Photoelastic tomography for residual stress measurement in glass,” Opt. Eng. 44, 93601 (2005). [CrossRef]  

5. M. T. Draney et al., “Quantification of Vessel Wall Cyclic Strain Using Cine Phase Contrast Magnetic Resonance Imaging,” Ann. Biomed. Eng. 30, 1033–1045 (2002). [CrossRef]  

6. D. D. Steele, T. L. Chenevert, A. R. Skovoroda, and S. Y. Emelianov, “Three-dimensional static displacement stimulated echo NMR elasticity imaging,” Phys. Med. Biol. 45, 1633–1648 (2000). [CrossRef]   [PubMed]  

7. B. K. Bay, T. S. Smith, D. P. Fyhrie, and M. Saad, “Digital volume correlation: three-dimensional strain mapping using X-ray tomography,” Exp. Mech. 39, 217–226 (1999). [CrossRef]  

8. J. Schmitt, “OCT elastography: imaging microscopic deformation and strain of tissue,” Opt. Express 3, 199–211 (1998), http://www.opticsinfobase.org/abstract.cfm?URI=oe-3-6-199 [CrossRef]   [PubMed]  

9. A. F. Fercher, W. Drexler, C. K. Hitzenberger, and T. Lasser, “Optical coherence tomography -principles and applications,” Rep. Prog. Phys. 66, 239–303 (2003). [CrossRef]  

10. T. Dresel, G. Hausler, and H. Venzke, “Three-dimensional sensing of rough surfaces by coherence radar,” Appl. Opt. 31, 919–925 (1992). [CrossRef]   [PubMed]  

11. A. F. Fercher, C. K. Hitzenberger, G. Kamp, and S. Y. El-Zaiat, “Measurement of intraocular distances by backscattering spectral interferometry,” Opt. Commun. 117, 43–48 (1995). [CrossRef]  

12. M. A. Choma, A. K. Ellerbee, C. Yang, T. L. Creazzo, and J. A. Izatt, “Spectral-domain phase microscopy,” Opt. Lett. 30, 1162–1164 (2005). [CrossRef]   [PubMed]  

13. C. Joo, T. Akkin, B. Cense, B. H. Park, and J. F. de Boer, “Spectral-domain optical coherence phase microscopy for quantitative phase-contrast imaging,” Opt. Lett. 30, 2131–2133 (2005). [CrossRef]   [PubMed]  

14. Y. Yasuno, S. Makita, T. Endo, G. Aoki, H. Sumimura, M. Itoh, and T. Yatagai, “One-shot-phase-shifting Fourier domain optical coherence tomography by reference wavefront tilting,” Opt. Express 12, 6184–6191 (2004), http://www.opticsinfobase.org/abstract.cfm?URI=oe-12-25-6184 [CrossRef]   [PubMed]  

15. B. Grajciar, M. Pircher, A. Fercher, and R. Leitgeb, “Parallel Fourier domain optical coherence tomography for in vivo measurement of the human eye,” Opt. Express 13, 1131–1137 (2005), http://www.opticsinfobase.org/abstract.cfm?URI=oe-13-4-1131 [CrossRef]   [PubMed]  

16. M. Wojtkowski, R. Leigeb, A. Kowalczyk, T. Bajraszewski, and A. Fercher, “In vivo human retinal imaging by Fourier domain optical coherence tomography,” J. Biomed. Opt. 7, 457–463 (2002). [CrossRef]   [PubMed]  

17. S. Jiao, R. Knighton, X. Huang, G. Gregori, and C. Puliafito, “Simultaneous acquisition of sectional and fundus ophthalmic images with spectral-domain optical coherence tomography,” Opt. Express 13, 444–452 (2005), http://www.opticsinfobase.org/abstract.cfm?URI=oe-13-2-444 [CrossRef]   [PubMed]  

18. M. Wojtkowski, V. Srinivasan, T. Ko, J. Fujimoto, A. Kowalczyk, and J. Duker, “Ultrahigh-resolution, high-speed, Fourier domain optical coherence tomography and methods for dispersion compensation,” Opt. Express 12, 2404–2422 (2004), http://www.opticsinfobase.org/abstract.cfm?URI=oe-12-11-2404 [CrossRef]   [PubMed]  

19. B. Cense, N. Nassif, T. Chen, M. Pierce, S. -H. Yun, B. Park, B. Bouma, G. Tearney, and J. de Boer, “Ultrahigh-resolution high-speed retinal imaging using spectral-domain optical coherence tomography,” Opt. Express 12, 2435–2447 (2004), http://www.opticsinfobase.org/abstract.cfm?URI=oe-12-11-2435 [CrossRef]   [PubMed]  

20. M. Wojtkowski, T. Bajraszewski, P. Targowski, and A. Kowalczyk, “Real-time in vivo imaging by high-speed spectral optical coherence tomography,” Opt. Lett. 28, 1745–1747 (2003). [CrossRef]   [PubMed]  

21. R. A. Leitgeb, L. Schmetterer, C. K. Hitzenberger, A. F. Fercher, F. Berisha, M. Wojtkowski, and T. Bajraszewski, “Real-time measurement of in vitro flow by Fourier-domain color Doppler optical coherence tomography,” Opt. Lett. 29, 171–173 (2004). [CrossRef]   [PubMed]  

22. B. Park, M. C. Pierce, B. Cense, S. -H. Yun, M. Mujat, G. Tearney, B. Bouma, and J. de Boer, “Real-time fiber-based multi-functional spectral-domain optical coherence tomography at 1.3 µm,” Opt. Express 13, 3931–3944 (2005), http://www.opticsinfobase.org/abstract.cfm?URI=oe-13-11-3931 [CrossRef]   [PubMed]  

23. M. Choma, M. Sarunic, C. Yang, and J. Izatt, “Sensitivity advantage of swept source and Fourier domain optical coherence tomography,” Opt. Express 11, 2183–2189 (2003), http://www.opticsinfobase.org/abstract.cfm?URI=oe-11-18-2183 [CrossRef]   [PubMed]  

24. J. F. de Boer, B. Cense, B. H. Park, M. C. Pierce, G. J. Tearney, and B. E. Bouma, “Improved signal-to-noise ratio in spectral-domain compared with time-domain optical coherence tomography,” Opt. Lett. 28, 2067–2069 (2003). [CrossRef]   [PubMed]  

25. A. V. Zvyagin, “Fourier-domain optical coherence tomography: optimization of signal-to-noise ratio in full space,” Opt. Commun. 242, 97–108 (2004). [CrossRef]  

26. R. Leitgeb, C. Hitzenberger, and A. Fercher, “Performance of fourier domain vs. time domain optical coherence tomography,” Opt. Express 11, 889–894 (2003), http://www.opticsinfobase.org/abstract.cfm?URI=oe-11-8-889 [CrossRef]   [PubMed]  

27. B. H. Park, M. C. Pierce, B. Cense, and J. F. de Boer, “Optic axis determination accuracy for fiber-based polarization-sensitive optical coherence tomography,” Opt. Lett. 30, 2587–2589 (2005). [CrossRef]   [PubMed]  

28. Y. Yasuno, S. Makita, Y. Sutoh, M. Itoh, and T. Yatagai, “Birefringence imaging of human skin by polarization-sensitive spectral interferometric optical coherence tomography,” Opt. Lett. 27, 1803–1805 (2002). [CrossRef]  

29. G. Gülker, K. D. Hinsch, and A. Kraft, “Low coherence ESPI in the investigation of ancient terracotta warriors,” in Proceedings of Speckle Metrology 2003, K. Gastinger, O. Løckberg, and S. Winther, eds., Proc. SPIE4933, 53–58 (2003).

30. P. D. Ruiz, Y. Zhou, J. M. Huntley, and R. D. Wildman, “Depth-resolved whole field displacement measurement using wavelength scanning interferometry,” J. Opt. A: Pure Appl. Opt. 6, 679–683 (2004). [CrossRef]  

31. P. D. Ruiz, J. M. Huntley, and R. D. Wildman, “Depth-resolved whole-field displacement measurements by wavelength-scanning electronic speckle pattern interferometry,” Appl. Opt. 44, 3945–3953 (2005). [CrossRef]   [PubMed]  

32. J. Schwider and L. Zhou, “Dispersive interferometric profilometer,” Opt. Lett. 19, 995–997 (1994). [CrossRef]   [PubMed]  

33. J. M. Huntley, “Automated Analysis of Speckle Interferograms,” in Proceedings of Digital Speckle Pattern Interferometry and Related Techniques, P. K. Rastogi ed., (Chichester, West Sussex, England, John Wiley & Sons., 2001) pp. 59–139.

34. Y. Teramura, M. Suekuni, and F. Kannari, “Two-dimensional optical coherence tomography using spectral domain interferometry,” J. Opt. A: Pure Appl. Opt. 2, 21–26 (2000). [CrossRef]  

35. M. Takeda, H. Ina, and S. Kobayashi, “Fourier-transform method of fringe pattern analysis for computer based topography and interferometry,” J. Opt. Soc. Am. 72, 156–160 (1982). [CrossRef]  

Supplementary Material (2)

Media 1: MPG (410 KB)     
Media 2: MPG (453 KB)     
