
On the quantification of spatial resolution for three-dimensional computed tomography of chemiluminescence

Open Access

Abstract

Three-dimensional computed tomography of chemiluminescence (CTC) for combustion diagnostics is attracting surging research interest due to recent progress in sensor technologies and the reduced cost of high-speed cameras. For example, it has been applied to recover the 3D distributions of intermediate chemical species such as CH* and OH*, heat release rate, and flame topology. Although these applications have proven successful, the technique still has a few drawbacks that have not been addressed. For example, to the best of the authors' knowledge, all the imaging models developed so far ignore camera imperfections such as lens distortion and skewness, which unavoidably introduces errors into the weight matrix. In addition, spatial resolution is a critical performance parameter of a CTC system, yet it has only been studied qualitatively and no quantitative method has been reported so far. This work aims to solve these problems by improving the imaging model and by developing a method based on the edge spread function for the quantification of spatial resolution. Although this work is conducted in the context of CTC for combustion diagnostics, it also provides useful insights for other tomographic modalities such as volumetric laser-induced fluorescence and tomographic laser-induced incandescence.

© 2017 Optical Society of America

Corrections

Tao Yu, Hecong Liu, and Weiwei Cai, "On the quantification of spatial resolution for three-dimensional computed tomography of chemiluminescence: erratum," Opt. Express 26, 2557-2557 (2018)
https://opg.optica.org/oe/abstract.cfm?uri=oe-26-3-2557

1. Introduction

Optical techniques have become the major tools that combustion scientists use for flame diagnostics [1–3]. A variety of such methods are available and should be chosen according to the application. For example, for a steady laminar flame, the thermodynamic properties of the flame remain stable over time. In this case there is no special requirement on temporal resolution, and a point measurement technique combined with a mechanical scanning mechanism is sufficient to resolve potential non-uniformities within the flame [4–6]. However, for the measurement of turbulent flows or transient combustion phenomena such as ignition and flashback, both high spatial and high temporal resolution are required, necessitating a rapid imaging technique.

So far, three categories of imaging techniques have been developed and applied for combustion diagnostics: planar imaging [7–11], computed tomography (CT) [12–14], and light field imaging [15]. The first category is two-dimensional in nature and typically requires a thin excitation laser sheet to illuminate a selected cross-section of the target flame. The ensuing signal on the illumination plane is then imaged using an array detector. Light field imaging is three-dimensional in nature and can achieve 3D spatial resolution (SR) with only a single camera equipped with a micro-lens array in addition to a normal imaging lens [16]. The disadvantage of this method is the trade-off between its field of view and SR [17]. In contrast to the two types of methods mentioned above, CT can be either two- or three-dimensional [18–22]. As indicated by its name, CT is an indirect imaging method that recovers the field of interest from its projections, i.e. the integrals of the field. Depending on the number of views used in the measurement, CT can be further divided into multi-angular CT [23] and optical sectioning tomography (OST) [24]. The latter sub-category takes measurements from only one view; the depth information of the flame is recorded by refocusing, either with a tunable lens or by translating the camera, so as to obtain multiple projections. OST is typically limited in both spatial and temporal resolution compared with multi-angular CT. A comprehensive overview of CT modalities for gas dynamics and reactive flows can be found in [25].

Among all these methods, computed tomography of chemiluminescence (CTC) [26–28] is of special interest due to its ease of implementation, its cost-effectiveness since no laser source is required, and its capability to measure multiple important intermediate flame species such as CH* and OH* [29–31], which are markers of the flame front and heat release. Despite these successful demonstrations and applications, the technique is not flawless. For example, to the best of the authors' knowledge, all the imaging models developed so far ignore camera imperfections such as lens distortion and skewness [21, 22, 28, 32], which unavoidably introduces errors into the weight matrix. In addition, SR is the most critical performance parameter of a CTC system. Nevertheless, the quantification of SR is an extremely complicated process as it is determined by many factors such as the noise level in the measurements, the reconstruction fidelity, and the accuracy of the imaging model. Moreover, due to non-uniform spatial sampling, SR varies across the reconstruction volume. Though a few qualitative methods have been proposed to analyze the SR, none of them provides a spatially resolved quantification of SR within the region of interest (ROI) [28, 33]. Thus, this work aims to develop a technique that produces an SR map, which can aid experimental design, e.g. optimizing the SR at a specific location within the ROI. In this technique, an artificially made phantom was used to generate sharp edges at multiple locations within the reconstruction volume. After the reconstruction, edge spread functions (ESFs) were extracted and used to calculate modulation transfer functions (MTFs), from which the SR map can be obtained [34–36].

The remainder of this paper is organized as follows: Section 2 introduces the mathematical formulation of a typical CTC problem, a new imaging model that accounts for camera imperfections, the inversion algorithm, and the method for the quantification of SR; Section 3 details the experimental design and calibration process; Section 4 discusses the results of the quantified SR; and the final section summarizes this work.

2. Theory

2.1 Mathematical formulation of the CTC problem

A 3D chemiluminescent field can be denoted as a continuous function f(x, y, z), and its projection on the imaging plane can be described as:

$$p(x_c, y_c) = \int_V f(x,y,z)\,\dot{w}(x,y,z,x_c,y_c)\,\mathrm{d}V, \tag{1}$$
where p(x_c, y_c) represents the signal intensity at the point (x_c, y_c) on the imaging plane; ẇ(x, y, z, x_c, y_c) is a continuous function representing the fractional contribution of f(x, y, z) to the point (x_c, y_c); and V is the integration volume. When the reconstruction volume is discretized into cubic voxels, the discrete approximation of Eq. (1) can be expressed as:
$$p_{s,t} = \sum_{i=1}^{I} f(x_i,y_i,z_i)\,\dot{w}(x_i,y_i,z_i,s,t)\,\Delta V = \sum_{i=1}^{I} f(x_i,y_i,z_i)\,w(x_i,y_i,z_i,s,t), \tag{2}$$
where i and I are the voxel index and the total number of voxels, respectively; (x_i, y_i, z_i) is the center of the i-th voxel; p_{s,t} is the measured signal on the s-th pixel of the t-th projection; and w is the so-called weight matrix, each element of which represents the fraction of the luminescence intensity of the i-th voxel that reaches the s-th pixel of the t-th projection. Repeating Eq. (2) for every pixel of all projections yields a total of S×q linear equations, where S is the number of pixels per projection and q is the total number of projections. These equations can be formulated in matrix form as:
$$\mathbf{p} = W\,\mathbf{f}, \tag{3}$$
As can be seen from Eq. (3), the projection of the luminescent field is a linear transformation of the field itself, and the weight matrix W essentially acts as a linear imaging model that approximates the complicated practical imaging process. Thus, how the imaging model is formulated is of paramount importance.
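
To make the matrix formulation concrete, the following minimal sketch (in Python with NumPy/SciPy; the dimensions and the random weight entries are illustrative placeholders, not values from this work) assembles a sparse weight matrix W from (pixel, voxel, weight) triplets and evaluates the forward projection of Eq. (3):

```python
import numpy as np
from scipy.sparse import coo_matrix

# Hypothetical dimensions: I voxels, S pixels per projection, q projections.
I, S, q = 1000, 64, 8

# In practice the (row, col, weight) triplets come from the imaging model;
# random placeholders are used here purely for illustration.
rng = np.random.default_rng(0)
rows = rng.integers(0, S * q, size=5000)   # pixel index across all views
cols = rng.integers(0, I, size=5000)       # voxel index
vals = rng.random(5000)                    # weight coefficients

# Sparse weight matrix W of shape (S*q, I): one row per pixel of every view.
W = coo_matrix((vals, (rows, cols)), shape=(S * q, I)).tocsr()

f = rng.random(I)   # discretized chemiluminescent field
p = W @ f           # forward projection, Eq. (3): p = W f
```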

2.2 Development of the imaging model

So far, a few imaging models have been developed in the context of CTC. The major difference between these models lies in how the projection of a voxel on the imaging plane is approximated. Figure 1(a) illustrates the three coordinate systems (world, camera, and image coordinates) used for the development of imaging models. Three approaches are summarized in Figs. 1(b)-1(d). In the first approach, a voxel is simplified as a point light source positioned at the voxel center, and its projection is assumed to be a blurry circle with a uniform intensity profile. The pixels in the camera are assumed to be circular, and the contribution of the voxel to a specific pixel, i.e. the weight coefficient, is assumed to be proportional to the distance between the centers of the two circles [1]. In the second approach, the blurry circle (filled with yellow) is replaced with a square of equal area (labeled in red), and the camera pixels are assumed to be squares [2]. The weight coefficient is determined by calculating the ratio between the intersection area and the area of the red square. In the third model, the camera pixels are considered to be squares and each pixel is further divided into a large number of sub-squares, so that the intersection areas between the pixels and the blurry circle can be approximated more accurately; a sketch of this counting idea follows below.
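
The sub-square counting of the third model can be sketched as follows (a hypothetical helper, assuming the blurry circle and the pixel geometry are already expressed in common imaging-plane coordinates):

```python
import numpy as np

def pixel_circle_overlap(pixel_corner, pixel_size, center, radius, n_sub=32):
    """Approximate the intersection area between a square pixel and the blurry
    circle by splitting the pixel into n_sub x n_sub sub-squares and counting
    the sub-square centers that fall inside the circle (third model above)."""
    offsets = (np.arange(n_sub) + 0.5) * pixel_size / n_sub
    X, Y = np.meshgrid(pixel_corner[0] + offsets, pixel_corner[1] + offsets)
    inside = (X - center[0])**2 + (Y - center[1])**2 <= radius**2
    return inside.mean() * pixel_size**2  # approximate intersection area

# The weight coefficient then follows as overlap / (np.pi * radius**2).
```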

Fig. 1 (a): Definition of three coordinate systems for the development of imaging models; (b), (c), and (d) illustrate three approaches that have been used to approximate the projection of a voxel.

However, these models suffer from a common limitation: camera imperfections such as lens distortion and skewness are not included. Fortunately, these effects can be quantified with the so-called intrinsic parameters of the camera, which are readily available from a calibration process using the Camera Calibration Toolbox for Matlab [37]. To facilitate the discussion, the coordinate systems defined in Fig. 1(a) will be used. Their origins are set at the top left corner of the reconstruction volume, the center of the lens, and the upper left corner of the camera chip, respectively. The projection of a point (x_w, y_w, z_w) on the focusing plane in the world coordinate system to the point (x_c, y_c, z_c) in the camera coordinate system can be described as:

$$\begin{pmatrix} x_c \\ y_c \\ z_c \end{pmatrix} = R_t \begin{pmatrix} x_w \\ y_w \\ z_w \end{pmatrix} + T_t, \tag{4}$$
where R_t is the rotation matrix and T_t the translation matrix for the t-th projection, both of which can be obtained from the calibration. The corresponding point coordinates (x_ima, y_ima, z_ima) in the camera coordinate system can be calculated using the thin lens formula and the principle of similar triangles, as described by Eqs. (5) and (6) respectively:
$$\frac{1}{z_{ima}} + \frac{1}{z_c} = \frac{1}{f_c}, \tag{5}$$

$$\frac{x_{ima}}{x_c} = \frac{y_{ima}}{y_c} = \frac{z_{ima}}{z_c}, \tag{6}$$
where f_c is the focal length, which can also be obtained from the calibration.

Next, the point (x_ima, y_ima, z_ima) in the camera coordinate system is normalized and projected onto the imaging plane according to:

$$\begin{pmatrix} x_n \\ y_n \end{pmatrix} = \begin{pmatrix} x_{ima}/z_{ima} \\ y_{ima}/z_{ima} \end{pmatrix}. \tag{7}$$
The position of the point (x_n, y_n) on the imaging plane can be better approximated by introducing lens distortion effects into the imaging model, and the new position can be calculated as:
$$\begin{pmatrix} x_d \\ y_d \end{pmatrix} = \left(1 + k_c(1)\,r^2 + k_c(2)\,r^4 + k_c(5)\,r^6\right) \begin{pmatrix} x_n \\ y_n \end{pmatrix} + \begin{pmatrix} dx \\ dy \end{pmatrix}, \tag{8}$$
where r² = x_n² + y_n², and k_c is a 5-vector containing both radial and tangential distortion coefficients. The second term on the right-hand side is the so-called tangential distortion vector, defined as:

$$\begin{pmatrix} dx \\ dy \end{pmatrix} = \begin{pmatrix} 2\,k_c(3)\,x_n y_n + k_c(4)\left(r^2 + 2x_n^2\right) \\ k_c(3)\left(r^2 + 2y_n^2\right) + 2\,k_c(4)\,x_n y_n \end{pmatrix}. \tag{9}$$

Considering the skewness, the final position of the projected point from the world coordinate system to the imaging plane can be calculated as:

$$\begin{cases} x_p = f_c\left(x_d + \alpha_c\, y_d\right) + cc(1) \\ y_p = f_c\, y_d + cc(2) \end{cases} \tag{10}$$
where α_c is the skew coefficient, which represents the angle between the horizontal and vertical pixel axes, and the 2-vector cc is the principal point of the camera on the imaging plane.
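
The projection chain of Eqs. (4)-(10) can be sketched as a single function (a minimal sketch following the toolbox's parameter naming; note that by Eq. (6) the normalized coordinates of Eq. (7) reduce to x_c/z_c and y_c/z_c, which is used directly here; the function name is illustrative):

```python
import numpy as np

def project_point(Xw, R, T, fc, cc, kc, alpha_c):
    """Project a world point onto the imaging plane following Eqs. (4)-(10).
    R (3x3) and T (3,) are the extrinsics of the view; fc (scalar focal
    length), cc (principal point), kc (5-vector distortion coefficients) and
    alpha_c (skew) are the intrinsics from the calibration."""
    Xc = R @ Xw + T                                    # Eq. (4)
    xn, yn = Xc[0] / Xc[2], Xc[1] / Xc[2]              # Eqs. (5)-(7)
    r2 = xn**2 + yn**2
    radial = 1 + kc[0]*r2 + kc[1]*r2**2 + kc[4]*r2**3  # Eq. (8), radial part
    dx = 2*kc[2]*xn*yn + kc[3]*(r2 + 2*xn**2)          # Eq. (9), tangential
    dy = kc[2]*(r2 + 2*yn**2) + 2*kc[3]*xn*yn
    xd, yd = radial*xn + dx, radial*yn + dy
    xp = fc*(xd + alpha_c*yd) + cc[0]                  # Eq. (10), with skew
    yp = fc*yd + cc[1]
    return np.array([xp, yp])
```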

However, a point light source in the world coordinate system is not always imaged as a point on the imaging plane: the limited size of the lens aperture gives rise to a blurry circle, the so-called circle of confusion (CoC). The diameter of the CoC can be calculated based on ray tracing and paraxial optics; a detailed schematic illustration of this process is depicted in Fig. 2(a). A point on the focusing plane projects to exactly a single point on the image plane, as discussed above. The depth of field (DoF) is the region within which the projected CoCs remain within the maximum acceptable size. According to geometric optics, the size of the CoC can be determined by:

$$D_{CoC} = \frac{f_c}{N}\cdot\frac{\left| d_o^{-1} - d_f^{-1} \right|}{f_c^{-1} - d_f^{-1}}, \tag{11}$$
where d_f is the distance from the focusing plane to the lens; d_o is the actual object distance; and N is the F-number of the lens system.
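
Eq. (11) translates directly into code (all arguments must share the same length unit):

```python
def coc_diameter(fc, N, d_o, d_f):
    """Diameter of the circle of confusion per Eq. (11): fc is the focal
    length, N the F-number, d_o the object distance, d_f the distance from
    the focusing plane to the lens."""
    return (fc / N) * abs(1/d_o - 1/d_f) / (1/fc - 1/d_f)
```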

Fig. 2 (a): An illustration for calculating the diameters of CoCs based on ray-tracing and paraxial optics; and (b): illustration on how the projection of a voxel is approximated in the new imaging model.

As mentioned earlier, how the projection of a voxel is approximated is critical for the determination of the weight matrix. In the new model, a voxel is treated as a large set of points whose projections can be calculated according to Eqs. (4)-(10); together, the projected points constitute the distorted projection of the voxel. The weight coefficient of a certain voxel to a certain pixel is the ratio between the projected area within the pixel and the total projected area, as shown in Fig. 3(a). When the voxel lies within the DoF, i.e. the reconstruction volume is small enough, the CoCs are within the acceptable size, and the weight coefficient can be estimated as the ratio between the number of points projected onto a specific pixel and the total number of projected points, as shown in Fig. 3(b). However, when the voxel is outside the DoF, the CoCs cannot be ignored in the projection. In this case, the weight coefficient is calculated as the ratio between the summed area of the CoCs falling within the pixel and the overall area of all CoCs. For simplicity, in this model, when the center coordinate of a pixel falls within a CoC, the pixel is assumed to be covered completely, as shown in Fig. 2(b), in which the green pixels are the ones considered to be covered by the CoC.
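
For a voxel within the DoF, the point-counting estimate of the weight coefficients can be sketched as follows (assuming a `project` callable such as `project_point` above with the camera parameters fixed; the sampling density is illustrative):

```python
import numpy as np

def voxel_weights(center, size, project, n_points=1000, seed=0):
    """Estimate the weight coefficients of one in-DoF voxel by projecting a
    cloud of points sampled uniformly inside it and counting the fraction
    that lands on each pixel. `project` maps a 3D point to (xp, yp) in
    pixel units."""
    rng = np.random.default_rng(seed)
    pts = center + (rng.random((n_points, 3)) - 0.5) * size
    pix = np.floor(np.array([project(p) for p in pts])).astype(int)
    weights = {}
    for rc in map(tuple, pix):
        weights[rc] = weights.get(rc, 0.0) + 1.0 / n_points
    return weights  # {(pixel column, pixel row): weight coefficient}
```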

Fig. 3 (a): illustration of the projected area and the area that falls within a specific pixel; and (b): the projected area approximated by the projection of many points.

2.3 Algebraic reconstruction technique

Once the weight matrix is obtained according to the process detailed above, Eq. (3) can be solved using a well-established tomographic inversion algorithm [38]. The algebraic reconstruction technique (ART) is adopted here due to its ease of implementation and its prevalence in previous demonstrations of 3D tomographic techniques applied to combustion diagnostics [39–45]. The iterative process of the ART algorithm can be described as:

$$\mathbf{f}^{\,h+1} = \mathbf{f}^{\,h} + \alpha\, \mathbf{w}_{s,t}\, \frac{p_{s,t} - \mathbf{w}_{s,t} \cdot \mathbf{f}^{\,h}}{\mathbf{w}_{s,t} \cdot \mathbf{w}_{s,t}}, \tag{12}$$
where h is the iteration index; w_{s,t} is a vector whose dot product with f produces the value of the s-th pixel of the t-th projection; and α is the relaxation factor that controls the convergence rate of the iteration.
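
A minimal dense-matrix sketch of the ART update of Eq. (12), using the relaxation factor and stopping criteria quoted later in Section 4.1 (the row sweep order and the dense storage are simplifications):

```python
import numpy as np

def art(W, p, alpha=1e-6, n_iter=500, tol=1e-6):
    """Algebraic reconstruction technique, Eq. (12): sweep over all pixel
    equations each iteration; stop on a small relative change in f or after
    n_iter iterations."""
    f = np.zeros(W.shape[1])
    for h in range(n_iter):
        f_prev = f.copy()
        for s in range(W.shape[0]):
            w = W[s]
            denom = w @ w
            if denom > 0:
                f = f + alpha * (p[s] - w @ f) / denom * w
        change = np.linalg.norm(f - f_prev) / max(np.linalg.norm(f_prev), 1e-12)
        if change < tol:
            break
    return f
```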

2.4 Method for the quantification of spatial resolution

There are many ways to define SR; a rigorous definition based on the modulation transfer function (MTF) is adopted in this work [34]. Since a phantom containing a sharp edge is easier to produce in an experiment than a point source, the MTF is obtained here by measuring the edge spread function (ESF). An example ESF (the response to a step signal) is shown in Fig. 4(a). The line spread function (LSF) shown in Fig. 4(b) is calculated by taking the first derivative of the ESF. The MTF is then obtained by performing a 1D discrete Fourier transform of the LSF, as shown in Fig. 4(c). In this example, a 10% cutoff threshold is applied to the MTF and the SR is determined to be 0.24 lp/mm.
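
The ESF-to-SR pipeline can be sketched as follows (the 10% cutoff matches the example above; the `spacing_mm` sample spacing and the function name are illustrative):

```python
import numpy as np

def sr_from_esf(esf, spacing_mm, cutoff=0.1):
    """Spatial resolution from an edge spread function: differentiate the
    ESF to obtain the LSF, take the 1D DFT of the LSF to obtain the MTF,
    normalize, and return the frequency (lp/mm) at which the MTF first
    drops below the cutoff threshold."""
    lsf = np.gradient(esf)                           # ESF -> LSF
    mtf = np.abs(np.fft.rfft(lsf))                   # LSF -> MTF (1D DFT)
    mtf /= mtf[0]                                    # normalize to 1 at DC
    freqs = np.fft.rfftfreq(len(lsf), d=spacing_mm)  # line pairs per mm
    below = np.nonzero(mtf < cutoff)[0]
    return freqs[below[0]] if below.size else freqs[-1]
```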

Fig. 4 (a): an example edge spread function; (b) the line spread function calculated from (a); (c) modulation transfer function which is obtained from 1D Fourier transform of the line spread function shown in Panel (b).

Figure 5 is a flowchart describing the complete procedure for quantifying the SR of a CTC system. In the first step, a phantom containing sharp edges is made; tomographic measurement and inversion are then performed to obtain the 3D reconstruction, from which the ESF of a ROI is extracted; the ESF is further processed to produce the LSF, whose 1D discrete Fourier transform produces the MTF; finally, the SR is obtained by applying a cutoff threshold to the MTF. By moving the phantom across the reconstruction volume, the SR can be quantified at any desired position.

Fig. 5 The flowchart that describes the method for the quantification of SR.

3. Experimental design and calibration

To illustrate the method detailed in the previous section, an example experiment was performed; the setup is shown in Fig. 6(a). As can be seen, eight sliding rails are positioned along a semicircular ring, at the center of which is the phantom to be measured. A burner is shown purely for illustrative purposes. The actual phantom was a cup [Fig. 6(b)] containing a fluid with fluorescence that is stable over time. The dimensions of the cup were measured with a vernier caliper and are provided in Fig. 6(b). The fluid contains two organic compounds, CPPO (C26H24O8Cl6) and hydrogen peroxide. The chemical reaction between them releases energy which is transferred to a dye, and the dye then produces fluorescence. As only one camera was available in our lab, the images from different angles had to be acquired sequentially by moving the camera from rail to rail. The camera used in this work was a Photron FASTCAM Mini AX100, with a pixel resolution of 1024 × 1024 and a pixel size of 20 μm × 20 μm. An AF Nikon lens (50 mm focal length, f/1.8) was used to collect the fluorescence. Since the calibration process and the registration of the projections had to be performed separately, eight fixtures were installed, one on each rail, to guarantee that the camera position could be repeated on each rail. It has to be pointed out that the calibration was conducted without the cup in place: as can be seen from Fig. 6(b), refraction by the cup would distort the calibration pattern. This effect introduces a certain error into the imaging model and eventually degrades the SR, which is determined by many factors such as the accuracy of the imaging model, the reconstruction fidelity, and the number of projections. In this work, all these effects are considered together and their influence is reflected in the final calibrated spatial resolution.

Fig. 6 (a): experimental setup: eight sliding rails were positioned as a semicircular ring and the phantom was arranged at the center. A camera was used to sequentially capture images from eight view angles; and (b): a photograph of the checkerboard and the cup.

The calibration was conducted three times to test whether the camera could be repositioned at exactly the same position on each rail. During the calibration, a checkerboard [shown in Fig. 6(b)] with 5 mm × 5 mm grids was placed at the position where the phantom was supposed to be. The coordinates of the intersections between the black and white squares in the image coordinate system were then extracted from the registered images. Using the coordinates of the intersections in both the world and image coordinate systems as inputs, both the extrinsic and intrinsic parameters were fitted according to the imaging model. The variations between the calibrated Euler angles, which were calculated from the rotation matrix, were within 0.06 degrees, indicating that good repeatability can be guaranteed during the experiment. The averaged extrinsic parameters for the eight views are summarized in Table 1 and were used later in the reconstruction process. The parameters Yaw, Pitch, and Roll are the Euler angles calculated from the rotation matrix, and Tx, Ty, and Tz are the three elements of the translation matrix.
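
A sketch of the Euler-angle extraction used for such a repeatability check (the ZYX yaw-pitch-roll convention is an assumption; the paper does not state which convention was used):

```python
import numpy as np

def euler_zyx(R):
    """Yaw, pitch, roll in degrees from a 3x3 rotation matrix, assuming the
    ZYX (yaw-pitch-roll) convention."""
    yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    pitch = np.degrees(np.arcsin(-R[2, 0]))
    roll = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    return yaw, pitch, roll
```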

Table 1. Summary of averaged extrinsic parameters from three calibration processes.

In addition to the extrinsic parameters, intrinsic parameters such as the skew and distortion coefficients also strongly affect the accuracy of the imaging model, as discussed earlier. To demonstrate this claim, Fig. 7 compares the exact and estimated intersection points on the 7th view according to the imaging models with and without consideration of lens distortion and skewness. Panels (a) and (b) show the results with and without consideration of these effects, and Panels (c) and (d) are zoomed versions of Panels (a) and (b) respectively. The red crosses and blue circles are the exact and estimated intersection points respectively. According to the results, the maximum pixel error was reduced from about three pixels to less than one by taking lens distortion and skewness into consideration, demonstrating the superiority of the newly proposed imaging model.

Fig. 7 (a) comparison between the exact and estimated intersection points on the 7th view according to the imaging model with consideration of lens distortion; (b) counterpart of (a) without consideration of lens distortion; (c) zoomed version of (a); and (d) zoomed version of (b). The red crosses and blue circles are the exact and estimated intersection points respectively.

The eight projections of the phantom are shown in Fig. 8. To better visualize the phantom, the figure is plotted in false color. As can be seen, at the top of the luminescent phantom the fluid is higher near the wall than in the middle due to fluid viscosity. To minimize the effects of measurement noise, images taken without the phantom in place were subtracted from the phantom projections, and a threshold was then applied to remove the remaining background. The processed projections were then used as the inputs for the tomographic reconstruction.
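
The preprocessing described here amounts to a two-step operation (the threshold value is experiment-specific and illustrative):

```python
import numpy as np

def preprocess(projection, background, threshold):
    """Subtract a phantom-free background image from a projection, then zero
    out residual background below a chosen threshold."""
    img = projection.astype(float) - background.astype(float)
    img[img < threshold] = 0.0
    return img
```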

Fig. 8 Eight projections of the phantom plotted in false color.

4. Results and discussion

4.1 Validation of reconstruction

In this study, the reconstruction volume was set to 30×60×60 mm³. The (x, y, z) coordinates of the upper left corner of the reconstruction volume closest to the calibration pattern are (35 mm, 4.6 mm, 11.6 mm). The volume was discretized into 120×120×120 voxels, resulting in a voxel size of 0.25 mm along the x axis and 0.5 mm along the y and z axes. Before the tomographic inversion, the measured projections were scaled so that the integrated intensity was consistent from view to view. The 3D structure of the phantom was then obtained by performing the tomographic reconstruction with the ART algorithm introduced in Section 2. The relaxation factor α was set to 10⁻⁶ to guarantee good convergence behavior in the presence of measurement noise. The iterative process was terminated when the relative change in f between two consecutive iterations was smaller than a small positive number, e.g. 10⁻⁶; an additional termination criterion capped the number of iterations, e.g. at 500, to avoid over-iteration.

A few representative layers of the reconstruction along the x, y, and z axes in the world coordinate system are plotted in the three rows of Fig. 9, respectively. Note that the layer index increases along the positive directions of the x, y, and z axes as plotted in Fig. 6(b); thus, Layer x50 is 12.5 mm (i.e. 50 × 0.25 mm) away from the top of the reconstruction volume. Since the reconstruction volume is well-defined in the coordinate system, the position of any layer can be inferred from its index. Note also that the color bar in Fig. 9 is unitless and only serves as a relative scale. The circular profile of the phantom is clearly captured in the top view (along the x axis); and the side views (along the y and z axes) show that at the top of the reconstruction the luminescent intensity is higher near the wall than in the middle, which agrees with the observation from the projections in Fig. 8.

To provide a more quantitative assessment of the reconstruction fidelity, the diameters of the reconstructed circles in a few sampled layers were calculated and compared with the actual diameters estimated from the dimensions of the cup. The results are shown in Fig. 10, from which we can see that the reconstructed diameters agree quite well with the estimated ones. Further, the actual height of the fluorescent fluid is 15.3 mm while the height estimated from the reconstruction is 15.5 mm, a relative error of 1.31%. Thus, both the qualitative and the quantitative analyses suggest good reconstruction fidelity.

The reconstruction was not flawless, however. There are some artefacts, mainly caused by the insufficient number of projections and by the error introduced into the imaging model by the refraction of the cup. To alleviate this problem, prior information should be included. As can be seen from Fig. 8, the background pixels contain essentially no signal from the phantom, meaning their values are exactly zero. Thus, the voxels that can contribute signal to these pixels are essentially non-fluorescent, i.e. zero-valued. This information can be included in the reconstruction as an effective regularization to remove the streak artefacts, as shown in Fig. 11.
Nevertheless, this artefact reduction is inappropriate when quantifying the SR, since the regularization intentionally sharpens the ESF, leading to an artificially improved SR. Thus, when quantifying the SR, the reconstruction without regularization was used instead.
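
A sketch of how the zero-background prior can be turned into a voxel mask (dense W for clarity; the `eps` noise floor is an assumption):

```python
import numpy as np

def zero_voxel_mask(W, p, eps=0.0):
    """Turn the zero-background prior into a voxel mask: any voxel with a
    nonzero weight to a pixel that measures no signal (p <= eps) must itself
    be non-fluorescent."""
    zero_rows = np.nonzero(p <= eps)[0]
    return (W[zero_rows] > 0).any(axis=0)  # True where the voxel is forced to 0
```

The flagged voxels can simply be held at zero during the ART sweeps, which is one way to realize the regularization described above.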

Fig. 9 A few representative layers from the reconstruction along the x, y, and z axes in the world coordinate system are plotted in each of the three rows respectively.

Fig. 10 Comparison between the reconstructed and actual diameters of the circles within the sampled layers. The range of the horizontal axis is selected to include all the layers shown in Fig. 9.

Fig. 11 Counterpart of Fig. 9 but for the regularized reconstruction.

4.2 Quantification of the spatial resolution

To illustrate how the SR was quantified according to the method detailed in the previous sections, the middle layer, i.e. Layer x60 along the x axis, from the reconstruction without regularization was selected as an example, as shown in Fig. 12(a); its circular shape provides the sharp edges. Eight tracks of interest, labeled as red lines, were selected along the radial directions, and the ESFs were extracted along them. The normalized ESF along Track 1 is plotted in Fig. 12(b). The corresponding LSF [shown in Fig. 12(c)] was obtained by taking the first derivative of the ESF with respect to voxel position along the radial direction. The LSF was then oversampled using spline interpolation, and the normalized MTF was finally calculated by taking the 1D DFT of the LSF, as shown in Fig. 12(d). A cutoff threshold of 10% was then applied and the SR was determined to be 0.67 lp/mm.
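
Extracting a radial ESF track from a reconstructed layer and oversampling it with a spline before differentiation might look like the following (nearest-voxel sampling and all names are illustrative simplifications):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def radial_esf(layer, center, angle_deg, n_samples, voxel_mm, oversample=8):
    """Sample a reconstructed 2D layer along a radial track to obtain an ESF,
    then oversample it with a cubic spline (as done before taking the LSF)."""
    th = np.radians(angle_deg)
    steps = np.arange(n_samples)
    rows = np.clip(np.round(center[0] + steps * np.sin(th)).astype(int),
                   0, layer.shape[0] - 1)
    cols = np.clip(np.round(center[1] + steps * np.cos(th)).astype(int),
                   0, layer.shape[1] - 1)
    esf = layer[rows, cols]
    t = steps * voxel_mm                      # physical position along track
    t_fine = np.linspace(t[0], t[-1], oversample * n_samples)
    return t_fine, CubicSpline(t, esf)(t_fine)
```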

Fig. 12 (a): Reconstruction of the 60-th layer along the x axis, eight tracks of interest are labeled as red lines; (b): extracted ESF along Track 1; (c): LSF calculated by differentiating the ESF shown in Panel (b); and (d): MTF obtained by performing 1D Fourier transform of the LSF shown in Panel (c).

The SR of the six tracks shown in Fig. 12(a) can be quantified using the procedure described in Figs. 12(b)-12(d), and the results are listed in Table 2. To distinguish the artefacts from the real sharp edge, no tracks were selected from areas where artefacts prevail. In addition, a red circle with the actual diameter is overlaid on the reconstruction to show where the sharp edge should be. As can be seen, the SR varies from position to position within the tomographic field due to factors such as non-uniform spatial sampling and measurement noise. The best SR occurs at Tracks 1 and 4, which are aligned with the viewing angle of the fifth camera, whose Yaw angle is about 180 degrees.

Table 2. The spatial resolutions quantified for the six tracks in the reconstruction of Layer x60.

The SR achievable with a CTC system depends on the number of projections and their arrangement, the accuracy of the physical model, the signal-to-noise ratio, the performance of the reconstruction algorithm, etc. All these effects are considered together and reflected in the final quantified SR. According to [34], the point spread function (PSF) of the camera and the size of its pixels are the two parameters that impose the fundamental limits on the SR of a camera. For example, when the PSF spans many pixels, a sharp edge, which features an infinitely small transition, is imaged as a smooth curve: the wider the PSF, the smoother the imaged curve. In this case, the SR is limited by the width of the PSF. On the other hand, when the PSF is narrower than the pixel size, a sharp edge is imaged by only one pixel, and the SR cannot be better than the pixel size. Similarly, for the CTC system developed here, due to factors such as the artifacts caused by the insufficient number of projections and the reconstruction error originating from measurement noise, the width of the PSF of the tomographic system is much larger than the voxel size. The ESF therefore spreads over several consecutive voxels, resulting in reduced SR. Thus, by increasing the number of projections and enhancing the signal-to-noise ratio, the SR of the CTC system can be improved.

A 3D map of SR was obtained by repeating the process illustrated in Figs. 12(b)-12(d) for all tracks and layers where a sharp edge exists. The obtained SR, in units of lp/mm, is plotted for a few sampled layers in Fig. 13. For each layer, the SR values of the different tracks are labeled with distinct numbers, and the colors of the circles indicate the corresponding SR values. By moving the phantom within the reconstruction volume, the SR at an arbitrary position can be obtained. This is especially useful in experimental design when the SR at a specific location needs to be optimized.

Fig. 13 The quantified 3D map of SR for the CTC system developed in this work. The units of the three axes are voxels, each of which has a dimension of 0.25×0.5×0.5 mm3.

5. Summary

In summary, this work proposed an improved imaging model that incorporates camera imperfections such as lens distortion and skewness. It was shown that the pixel error was reduced to within one pixel, demonstrating its superiority over existing models. An experiment was conducted to recover the 3D structure of a phantom featuring sharp edges using the new imaging model, and the results validated the model. In addition, a quantitative method for the quantification of spatial resolution was developed. The method relies on the measurement of the edge spread function, from which the modulation transfer function can be calculated and analyzed to obtain the spatial resolution. Based on this method, a map of spatial resolution was generated; such a map is expected to be extremely useful in experimental design when the spatial resolution at a specific location needs to be refined. Although this study was performed in the context of CTC for combustion diagnostics, it also provides useful insights for other tomographic modalities such as volumetric laser-induced fluorescence [14] and tomographic laser-induced incandescence [23].

Funding

This work was funded by the Chinese Government ‘Thousand Youth Talent Program’.

References and links

1. J. Wolfrum, T. Dreier, V. Ebert, and C. Schulz, “Laser-based combustion diagnostics,” Encyclopedia of Analytical Chemistry, 2118–2148 (2000).

2. F. Li, X. Yu, W. Cai, and L. Ma, “Uncertainty in velocity measurement based on diode-laser absorption in nonuniform flows,” Appl. Opt. 51(20), 4788–4797 (2012). [CrossRef]   [PubMed]  

3. C. Liu, L. Xu, J. Chen, Z. Cao, Y. Lin, and W. Cai, “Development of a fan-beam TDLAS-based tomographic sensor for rapid imaging of temperature and gas concentration,” Opt. Express 23(17), 22494–22511 (2015). [CrossRef]   [PubMed]  

4. S. Roy, J. R. Gord, and A. K. Patnaik, “Recent advances in coherent anti-Stokes Raman scattering spectroscopy: Fundamental developments and applications in reacting flows,” Prog. Energy Combust. Sci. 36(2), 280–306 (2010). [CrossRef]  

5. B. Williams, M. Edwards, R. Stone, J. Williams, and P. Ewart, “High precision in-cylinder gas thermometry using Laser Induced Gratings: Quantitative measurement of evaporative cooling with gasoline/alcohol blends in a GDI optical engine,” Combust. Flame 161(1), 270–279 (2014). [CrossRef]  

6. A. Morones Ruelas, “Turbulence Measurements in a Fan-Stirred Flame Bomb Using Laser Doppler Velocimetry,” (2015).

7. C. Schulz and V. Sick, “Tracer-LIF diagnostics: quantitative measurement of fuel concentration, temperature and fuel/air ratio in practical combustion systems,” Prog. Energy Combust. Sci. 31(1), 75–121 (2005). [CrossRef]  

8. L. Itani, G. Bruneaux, A. Lella, and C. Schulz, “Two-tracer LIF imaging of preferential evaporation of multi-component gasoline fuel sprays under engine conditions,” Proc. Combust. Inst. 35(3), 2915–2922 (2015). [CrossRef]  

9. O. M. Feroughi, H. Kronemayer, T. Dreier, and C. Schulz, “Effect of fluctuations on time-averaged multi-line NO-LIF thermometry measurements of the gas-phase temperature,” Appl. Phys. B 120(3), 429–440 (2015). [CrossRef]  

10. A. Charogiannis and F. Beyrau, “Laser induced phosphorescence imaging for the investigation of evaporating liquid flows,” Exp. Fluids 54(5), 1–15 (2013). [CrossRef]  

11. C. Abram, B. Fond, A. L. Heyes, and F. Beyrau, “High-speed planar thermometry and velocimetry using thermographic phosphor particles,” Appl. Phys. B 111, 1–6 (2013).

12. T. Yu, B. Tian, and W. Cai, “Development of a beam optimization method for absorption-based tomography,” Opt. Express 25(6), 5982–5999 (2017). [CrossRef]   [PubMed]  

13. T. Yu and W. Cai, “Benchmark evaluation of inversion algorithms for tomographic absorption spectroscopy,” Appl. Opt. 56(8), 2183–2194 (2017). [CrossRef]   [PubMed]  

14. B. R. Halls, D. J. Thul, D. Michaelis, S. Roy, T. R. Meyer, and J. R. Gord, “Single-shot, volumetrically illuminated, three-dimensional, tomographic laser-induced-fluorescence imaging in a gaseous free jet,” Opt. Express 24(9), 10040–10049 (2016). [CrossRef]   [PubMed]  

15. J. Sun, C. Xu, B. Zhang, M. M. Hossain, S. Wang, H. Qi, and H. Tan, “Three-dimensional temperature field measurement of flame using a single light field camera,” Opt. Express 24(2), 1118–1132 (2016). [CrossRef]   [PubMed]  

16. S. Shi, J. Wang, J. Ding, Z. Zhao, and T. H. New, “Parametric study on light field volumetric particle image velocimetry,” Flow Meas. Instrum. 49, 70–88 (2016). [CrossRef]  

17. T. W. Fahringer, K. P. Lynch, and B. S. Thurow, “Volumetric particle image velocimetry with a single plenoptic camera,” Meas. Sci. Technol. 26(11), 115201 (2015). [CrossRef]  

18. W. Cai and C. Kaminski, “A numerical investigation of high-resolution multispectral absorption tomography for flow thermometry,” Appl. Phys. B 119, 1–7 (2015).

19. W. Cai and C. F. Kaminski, “A tomographic technique for the simultaneous imaging of temperature, chemical species, and pressure in reactive flows using absorption spectroscopy with frequency-agile lasers,” Appl. Phys. Lett. 104(3), 034101 (2014). [CrossRef]  

20. W. Cai and C. F. Kaminski, “Multiplexed absorption tomography with calibration-free wavelength modulation spectroscopy,” Appl. Phys. Lett. 104(15), 154106 (2014). [CrossRef]  

21. W. Cai, X. Li, and L. Ma, “Practical aspects of implementing three-dimensional tomography inversion for volumetric flame imaging,” Appl. Opt. 52(33), 8106–8116 (2013). [CrossRef]   [PubMed]  

22. W. Cai, X. Li, F. Li, and L. Ma, “Numerical and experimental validation of a three-dimensional combustion diagnostic based on tomographic chemiluminescence,” Opt. Express 21(6), 7050–7064 (2013). [CrossRef]   [PubMed]  

23. T. R. Meyer, B. R. Halls, N. Jiang, M. N. Slipchenko, S. Roy, and J. R. Gord, “High-speed, three-dimensional tomographic laser-induced incandescence imaging of soot volume fraction in turbulent flames,” Opt. Express 24(26), 29547–29555 (2016). [CrossRef]   [PubMed]  

24. B. Zhou, S. Wang, C. Xu, and J. Zhang, “3-D flame temperature reconstruction in optical sectioning tomography,” in IEEE International Workshop on Imaging Systems and Techniques (IEEE, 2009), 313–318.

25. W. Cai and C. F. Kaminski, “Tomographic absorption spectroscopy for the study of gas dynamics and reactive flows,” Prog. Energy Combust. Sci. 59, 1–31 (2017). [CrossRef]  

26. J. Floyd and A. Kempf, “Computed tomography of chemiluminescence (CTC): high resolution and instantaneous 3-D measurements of a matrix burner,” Proc. Combust. Inst. 33(1), 751–758 (2011). [CrossRef]  

27. J. Floyd, P. Geipel, and A. Kempf, “Computed tomography of chemiluminescence (CTC): instantaneous 3D measurements and phantom studies of a turbulent opposed jet flame,” Combust. Flame 158(2), 376–391 (2011). [CrossRef]  

28. J. Floyd, “Computed tomography of chemiluminescence: a 3D time resolved sensor for turbulent combustion,” (Imperial College London, 2009).

29. N. A. Worth and J. R. Dawson, “Tomographic reconstruction of OH* chemiluminescence in two interacting turbulent flames,” Meas. Sci. Technol. 24(2), 024013 (2013). [CrossRef]  

30. N. B. Anikin, R. Suntz, and H. Bockhorn, “Tomographic reconstruction of 2D-OH*-chemiluminescence distributions in turbulent diffusion flames,” Appl. Phys. B 107(3), 591–602 (2013). [CrossRef]  

31. Y. Jin, Y. Song, X. Qu, Z. Li, Y. Ji, and A. He, “Three-dimensional dynamic measurements of CH* and C2* concentrations in flame using simultaneous chemiluminescence tomography,” Opt. Express 25(5), 4640–4654 (2017). [CrossRef]   [PubMed]  

32. Y. Jin, Y. Song, W. Wang, Y. Ji, Z. Li, and A. He, “An improved calculation model of weight coefficient for three-dimensional flame chemiluminescence tomography based on lens imaging theory,” Proc. SPIE 10026, 1002612 (2016).

33. X. Li and L. Ma, “Capabilities and limitations of 3D flame measurements based on computed tomography of chemiluminescence,” Combust. Flame 162(3), 642–651 (2015). [CrossRef]  

34. S. W. Smith, Chapter 25 - Special Imaging Techniques (Elsevier Inc., 2002).

35. F. Viallefont-Robinet and D. Léger, “Improvement of the edge method for on-orbit MTF measurement,” Opt. Express 18(4), 3531–3545 (2010). [CrossRef]   [PubMed]  

36. U. Neitzel, E. Buhr, G. Hilgers, and P. R. Granfors, “Determination of the modulation transfer function using the edge method: Influence of scattered radiation,” Med. Phys. 31(12), 3485–3491 (2004). [CrossRef]   [PubMed]  

37. J.-Y. Bouguet, “Camera Calibration Toolbox for Matlab,” http://www.vision.caltech.edu/bouguetj/calib_doc/.

38. T. Yu, H. Liu, M. Zhang, and W. Cai, “Demonstration of 3D computed tomography of chemiluminescence with a restricted field of view,” Appl. Opt. 56(29), 7107 (2017).

39. G. T. Herman, A. Lent, and S. W. Rowland, “ART: Mathematics and applications. A report on the mathematical foundations and on the applicability to real data of the algebraic reconstruction techniques,” J. Theor. Biol. 42(1), 1–32 (1973). [CrossRef]   [PubMed]  

40. W. Cai and L. Ma, “Comparison of approaches based on optimization and algebraic iteration for binary tomography,” Comput. Phys. Commun. 181(12), 1974–1981 (2010). [CrossRef]  

41. L. Ma, Q. Lei, Y. Wu, W. Xu, T. M. Ombrello, and C. D. Carter, “From ignition to stable combustion in a cavity flameholder studied via 3D tomographic chemiluminescence at 20 kHz,” Combust. Flame 165, 1–10 (2016). [CrossRef]  

42. L. Ma, Q. Lei, T. Capil, S. D. Hammack, and C. D. Carter, “Direct comparison of two-dimensional and three-dimensional laser-induced fluorescence measurements on highly turbulent flames,” Opt. Lett. 42(2), 267–270 (2017). [CrossRef]   [PubMed]  

43. L. Ma, Q. Lei, J. Ikeda, W. Xu, Y. Wu, and C. D. Carter, “Single-shot 3D flame diagnostic based on volumetric laser induced fluorescence (VLIF),” Proc. Combust. Inst. 36(3), 4575–4583 (2017). [CrossRef]  

44. L. Ma, Y. Wu, Q. Lei, W. Xu, and C. D. Carter, “3D flame topography and curvature measurements at 5 kHz on a premixed turbulent Bunsen flame,” Combust. Flame 166, 66–75 (2016). [CrossRef]  

45. Y. Wu, Q. Lei, and L. Ma, “Experimental demonstration of 4D imaging in two-phase flows based on computed tomography at 5 kHz,” Appl. Opt. 53(24), 5547–5553 (2014). [CrossRef]   [PubMed]  
