Abstract

A multi-resolution foveated laparoscope (MRFL) with autofocus and zooming capabilities was previously designed to address the limiting trade-off between spatial resolution and field of view during laparoscopic minimally invasive surgery. The MRFL splits incoming light into two paths enabling simultaneous capture of the full surgical field and a zoomed-in view of the local surgical site. A fully functional prototype was constructed to demonstrate and test the autofocus, zooming capabilities, and clinical utility of this new laparoscope. The test of the prototype in both dry lab and animal models was successful, but it also revealed several major limitations of the prototype. In this paper, we present a brief overview of the aforementioned MRFL prototype design and results, and the shortcomings associated with its optical and mechanical designs. We then present several methods to address the shortcomings of the existing prototype with a modified optical layout and redesigned mechanics. The performances of the new and old system prototypes are comparatively analyzed in accordance with the design goals of the new MRFL. Finally, we present and demonstrate a real-time digital method for correcting transverse chromatic aberration to further improve the overall image quality, which can be adapted to future MRFL systems.

© 2020 Optical Society of America

1. INTRODUCTION

Laparoscopic minimally invasive procedures have grown exceedingly popular, as they exhibit many advantages over their conventional open surgery counterparts [1,2]. Current industry-standard laparoscopes fall victim to an inherent trade-off between resolution and instantaneous field of view (FOV). Surgeons sacrifice instantaneous FOV in exchange for increased resolution when performing intricate procedures on specific regions of interest (ROIs), which can allow adverse events and complications, such as electrical arcing and tool collisions, to occur outside the FOV without the surgeon's knowledge [3,4].

To address the aforementioned FOV-resolution trade-off, Qin et al. developed a multi-resolution foveated laparoscope (MRFL) that simultaneously captures a low-resolution, wide-angle view and a high-resolution, zoomed-in view of the surgical site [5–8]. Figure 1 shows the optical layout of an MRFL prototype designed by Qin et al. [7,8]. The MRFL design consists of the main endoscopic optics, a beam-splitting and scanning assembly, a wide-angle imaging probe, and a zoomable high-magnification imaging probe. The main endoscopic optics comprise an objective, a relay lens, and a scanning lens, which are assembled into a rigid tube and inserted into the body through a trocar in the same manner as a conventional laparoscope. All light entering the MRFL propagates through the main endoscopic optics. The beam-splitting and scanning assembly, shown in the inset of Fig. 1, consists of a polarizing beam splitter (PBS), a quarter-wave plate (QWP), and a 2D scanning mirror. The PBS splits the light into the $s$ and $p$ orthogonal polarization states. The $p$-polarized light is transmitted toward the scanning mirror, whereas the $s$-polarized light is reflected to the wide-angle imaging probe. The QWP converts the $p$-polarized light to circular polarization and then, after reflection from the scanning mirror, to $s$ polarization. Upon conversion, the $s$-polarized light reflects off the PBS to the zoom-view probe. The zoom-view probe contains two electrically tunable lenses (ETLs), the optical power of each of which depends on the current driven through it. The optical powers of the ETLs are changed in a coordinated and calibrated manner such that the high-magnification probe acts as a zoom lens. The wide-angle probe images the entire surgical field, whereas the zoom-view probe images a sub-region of the surgical field.
The sub-region imaged by the zoom-view probe can be positioned anywhere in the surgical field by adjusting the azimuth and elevation angles of the 2D scanning mirror.

 

Fig. 1. Optical system layout of an existing MRFL prototype design by Qin et al. [8] with a magnified inset view of the beam splitting and scanning assembly.


Although the MRFL design above successfully addresses the resolution-FOV trade-off and a prototype was successfully built, preliminary in vivo tests with pig models revealed several significant shortcomings. Some of the limitations, namely the cumbersome form factor, chromatic aberration, and unbalanced light distribution to the two imaging probes, are attributed to the optical design, while others, such as the prototype's lack of robustness, may be attributed to inadequate consideration of tolerances during the opto-mechanical design process. In this paper, we present a new MRFL prototype with a modified optical layout and redesigned mechanics that address the drawbacks listed above. The paper is organized as follows. Section 2 presents the redesigned optical layout and discusses the changes made to address the faults of the previous MRFL system. Section 3 discusses key mechanical design challenges and analyzes the prototype system performance. Lastly, Section 4 presents and demonstrates a detailed method for digitally correcting transverse chromatic aberration in real time.

2. OPTICAL DESIGN CHANGES

Figures 2(a)–2(c) show a photograph of a pre-clinical testing setup of a prototype inserted in a porcine model and two example images captured by the system without any post-enhancement applied. Two significant limitations of the prototype can be recognized from the clinical testing setup and results shown in the figure. One problem is the inadequate light throughput toward the zoomed-in imaging probe. Another problem is the awkward and bulky form factor of the prototype, with the two imaging probes arranged laterally with respect to the endoscope tube, which makes it difficult to maneuver the scope and causes ergonomic interference with other surgical access. The goal of a modified MRFL optical design is to address the light distribution and form factor problems without degrading image quality.

 

Fig. 2. (a) Pre-clinical testing setup of a prototype inserted in a porcine model; (b) image captured by the wide-angle probe; and (c) image captured by the zoomed probe with its field of view corresponding to the blue box marked in the wide-angle image.


Both of the problems described above stem from the design of the beam-splitting and scanning assembly of the current MRFL prototype. As shown in Fig. 1, the beam-splitting and scanning assembly plays a key role in selecting the region of interest for the high-resolution imaging probe and in properly distributing the light captured by the shared endoscopic optics between the two separate imaging probes. Its design critically impacts both the resulting image quality and the form factor. In the design shown in Fig. 1, the PBS splits the light captured by the endoscopic optics evenly at a 50-50 ratio into the two arms. However, due to the significant difference in field coverage between the wide-angle and zoomed imaging probes, as illustrated by Fig. 3, the throughput of the zoom-view probe is inherently much smaller than that of the wide-angle probe. The throughput of the main endoscopic optics is calculated as

$${T_{\rm obj}} = {A_{\rm FOV}}{(N{A_{\rm obj}})^2}\pi ,$$
where ${NA_{\rm obj}}$ is the object-space numerical aperture and ${A_{\rm FOV}}$ is the area of the FOV in the object plane. Since both arms of the system share the endoscope optics, ${NA_{\rm obj}}$ is constant. The FOV, however, differs between the arms as shown in Fig. 3. The wide-view probe is designed to capture an area of about $A_{\rm FOV}^{\rm Wide} = 192\,\,{\rm cm^2}$, while the area captured by the zoom-view probe ranges from $A_{\rm FOV}^{Z\max} = 48\,\,{\rm cm^2}$ down to $A_{\rm FOV}^{Z\min} = 21.48\,\,{\rm cm^2}$ depending on the magnification applied. Consequently, the throughput of the zoom-view probe is between 11% and 25% of that of the wide-view probe. Ignoring Fresnel reflections, with a 50-50 PBS split, the wide-view sensor ideally receives 50% of all light captured by the endoscopic optics, whereas the zoom-view sensor receives only about 5.5%–12.5%. As a result, the wide-view image is easily visible, but the zoom-view image is too dark to see without adding light sources and increasing detector gain. The maximum amount of light available, however, is limited to that of current industry-standard laparoscopic light sources, and using multiple light sources is not feasible because introducing additional sources to the surgical site would subject the patient to further trauma.
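For concreteness, the throughput comparison implied by Eq. (1) can be sketched numerically as follows (a minimal sketch; the NA value is an assumed placeholder, since it cancels in the ratios):

```python
import math

def throughput(area_fov_cm2, na_obj):
    """Optical throughput per Eq. (1): T = A_FOV * (NA_obj)^2 * pi."""
    return area_fov_cm2 * na_obj**2 * math.pi

NA_OBJ = 0.02        # assumed object-space NA; cancels in the ratios below
A_WIDE = 192.0       # cm^2, wide-view field area
A_ZOOM_MAX = 48.0    # cm^2, zoomed field area at the 2x zoom ratio
A_ZOOM_MIN = 21.48   # cm^2, zoomed field area at the 3x zoom ratio

# Relative throughput of the zoom arm: ~25% at 2x, ~11% at 3x
ratio_2x = throughput(A_ZOOM_MAX, NA_OBJ) / throughput(A_WIDE, NA_OBJ)
ratio_3x = throughput(A_ZOOM_MIN, NA_OBJ) / throughput(A_WIDE, NA_OBJ)
```

Because ${NA_{\rm obj}}$ is shared, the zoom-to-wide throughput ratio reduces to the ratio of field areas.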
 

Fig. 3. The MRFL captures two FOVs simultaneously. The wide-view FOV (blue) is constant and large, which provides the wide-view probe with greater throughput. The adjustable magnification of the zoom-view probe allows the user to vary the zoom ratio between the wide view and the zoomed view from ${2} \times$ to ${3} \times$. At a ${2} \times$ (orange) zoom ratio, the linear field imaged by the zoom-view probe is half that of the wide-view probe; at a ${3} \times$ (green) zoom ratio, it is a third.


One approach to balancing the image brightness between the wide- and zoomed-view probes is to adjust the camera settings of the individual detectors, e.g., exposure time and gain. This approach would be valid if the light-distribution imbalance between the probes were small. Since the imbalance in the MRFL is large, however, the gain and shutter-speed adjustments necessary to correct it introduce undesirable effects such as lag and noise. Medical-grade 3-CMOS cameras with better low-light performance would reduce the gain needed to balance the images and mitigate the imbalance to some extent, but relying on exposure and gain controls to compensate for such a large difference remains problematic. Moreover, the MRFL system discussed in this paper is an early-stage prototype designed to prove feasibility, novelty, and usefulness; its budget and purpose do not warrant high-performance medical cameras at this time. An optical light-balancing solution is therefore paramount.

Besides the unbalanced light distribution between the two imaging arms, the beam-splitting sub-system folds the optical paths perpendicularly, resulting in a T-shaped system. Ideally, the optical paths of the two arms should run parallel to that of the endoscope optics in order to reduce the form factor and make the scope more ergonomically friendly in a clinical setting; the T-shaped layout proved to be an obvious obstacle during our clinical trial.

To address these issues, the new optical design layout shown in Fig. 4 features a simplified beam-splitting and field selection sub-system. All other sub-systems are identical to those in the design by Qin et al., but the spacings between the sub-systems were re-optimized. The new design replaces the 50-50 PBS used in the previous design (Fig. 1) with a 90-10 non-polarizing beam splitter (BS) and eliminates the QWP components. Furthermore, the new design mounts the BS on a 2D scanning stage and uses it as the scanning element. As such, the BS functions as the mechanism for both beam splitting and field selection. In the previous design, the PBS splits the light captured by the endoscopic optics into two paths while the azimuth and elevation angles of the scanning mirror are adjusted to select the sub-field reflected back through the system for the zoom-view probe. In the new design, the light transmitted by the BS is imaged by the wide-angle probe, while the scanning angles of the BS determine the sub-field reflected and captured by the zoom-view probe. The simplified beam splitting and scanning mechanism frees up enough optical path length between the BS and the zoom-view probe to accommodate a folding mirror, as shown in Fig. 4, which allows both arms of the system to run parallel to the endoscope optics and substantially improves the ergonomic fit of the system in a clinical setting.

 

Fig. 4. System layout of the new MRFL with a magnified inset view of the improved beam splitting and scanning assembly. The “h” shape of the system makes it compact and easier to maneuver.


Though the new design exhibits negligible changes in image quality, it yields a significantly better balance of light distribution between the two imaging arms. Ignoring Fresnel reflections, the scanning BS reflects 90% of incoming light to the zoom-view probe, and the remaining 10% is transmitted to the wide-view probe. Accounting for the throughput difference between the two imaging probes discussed earlier, the wide-view sensor receives 10% of all light that enters the MRFL, whereas the zoom-view sensor receives between 9.9% (at the ${3\times}$ zoom ratio) and 22.5% (at the ${2\times}$ zoom ratio).
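Under the same ideal no-loss assumption, the per-sensor light fractions of the old (50-50) and new (90-10) designs can be tabulated with a small sketch (the relative zoom-arm throughputs of 0.25 and 0.11 come from the field-area ratios in Section 2):

```python
# Zoom-arm throughput relative to the wide arm (from the field-area ratios)
ZOOM_THROUGHPUT = {"2x": 0.25, "3x": 0.11}

def sensor_fractions(split_to_zoom, zoom_ratio):
    """Fractions of all captured light reaching the wide and zoom sensors,
    ignoring Fresnel reflections and other losses."""
    wide = 1.0 - split_to_zoom
    zoom = split_to_zoom * ZOOM_THROUGHPUT[zoom_ratio]
    return wide, zoom

old_wide, old_zoom = sensor_fractions(0.5, "3x")  # 50% wide, ~5.5% zoom
new_wide, new_zoom = sensor_fractions(0.9, "3x")  # 10% wide, ~9.9% zoom
```

The sketch reproduces the quoted figures: the new design trades wide-view brightness for a roughly balanced split between the two sensors.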

3. PROTOTYPING AND PERFORMANCE

A. Opto-Mechanical Design and Prototype

The opto-mechanical design for an MRFL prototype based on the optical layout in Fig. 4 presents several challenges. First, the field-selection capability of the MRFL requires computer-controlled tip and tilt of the 90-10 BS about its center point on the optical axis. Driven by the movement speed, interval resolution, and angular range requirements, the T-OMG motorized gimbal mount by Zaber Technologies Inc. was selected to hold the BS. The size, orientation, and unique shape of the T-OMG, however, complicate the opto-mechanical design and limit how compactly the system can be packaged. A compact and robust reference frame is required that can effectively secure the main endoscopic optics unit, the scanning unit, the two imaging probes, and the motorized gimbal mount. To address this issue, a tightly toleranced box frame that affixes to the stationary T-OMG base was devised. All other structural components of the design are secured into slots in the box frame to ensure the system is compact and well aligned.

Second, in the new optical layout of Fig. 4, the T-OMG is oriented such that the BS is angled at ${45} \pm {2}^\circ$ with respect to the optical axis. At this angle, the component of the T-OMG that holds the BS is too thick and introduces non-rotationally symmetric vignetting into the wide-angle probe. Thus, we designed and fabricated a thinner, custom mounting piece out of aluminum to replace the T-OMG’s native optical mounting component. Figure 5 shows a comparison of ray bundle occlusion for the native and custom mounting pieces, while Fig. 6 shows a comparison between the packaging dimensions and form factors of the new and previous MRFL prototypes.

 

Fig. 5. Comparison of ray-bundle obstruction when using the (top) native and (bottom) custom T-OMG mounting pieces. The direction of light is into the page. The beam footprint shown in each is a cross section of the nearly collimated ray bundle taken at the beam splitter. The custom mounting piece transmits the beam without obstruction, whereas the native mounting piece obstructs one side of the beam, introducing non-rotationally symmetric vignetting.


 

Fig. 6. Comparison between the packaging dimensions and form factors of (top) the new MRFL prototype and (bottom) the prototype designed by Qin et al. [8]. The new prototype achieves a 32% decrease in overall packaging volume.


Third, the design must allow the pitch and yaw of the BS and T-OMG components to rotate $\pm {2^\circ}$ in order to scan through the entire surgical field. Due to the close proximity of optical components to the BS, there is great potential for the opto-mechanics of the scanning lens and wide-angle probe to encroach on the T-OMG components and restrict their rotation. To avoid this problem, the scanning lens housing tube and the custom BS mount described above were designed with non-symmetrical cutouts to provide clearance for the movement of the T-OMG components (Fig. 7).

 

Fig. 7. Blue arrows indicate the non-rotationally symmetrical cutouts that provide clearance for the beam splitter mount to move. Left: part of the scanning lens housing is sectioned out to prevent metal-on-glass collisions with the beam splitter. Right: the notch removed from the beam splitter mount prevents it from hitting the scanning lens housing.


 

Fig. 8. Images of I3A/ISO Resolution Test Chart taken with the old (left column) and new (right column) MRFL systems. Each picture in the left column was taken using the same working distance, lighting conditions, and camera settings as its corresponding adjacent image in the right column. Row 1: images captured with the wide-view probes. Row 2: ${2} \times$ zoomed-view images where the zoomed sub-region is centered in the surgical field. Row 3: ${2} \times$ zoomed-view images where the zoomed sub-region is positioned at the edge of the surgical field. Row 4: ${3} \times$ zoomed-view images where the zoomed sub-region is centered in the surgical field. Row 5: ${3} \times$ zoomed-view images where the zoomed sub-region is positioned at the edge of the surgical field.


Finally, the opto-mechanical design of the previous MRFL prototype does not account for typical CNC machining tolerances, relies on adhesives to hold the optics in place, and has no means of ensuring proper spacing between the endoscope optics and the scanning lens. Because of these shortcomings, the previous MRFL prototype is not robust and exhibits significant alignment errors. The new opto-mechanical design addresses these flaws through proper tolerance analysis of the optical system, fabrication accuracy matched to the machining process, and a proper spacing and assembly procedure.

B. Performance Evaluation

In this section, the performance of the new MRFL is evaluated comparatively with respect to its predecessor. The evaluation compares three metrics of performance: (1) image brightness, (2) slanted edge modulation transfer function (MTF), and (3) resolution. An Edmund Optics 1X–I3A/ISO Resolution Test Chart was used to compare image brightness and slanted edge MTFs. Both MRFL systems use the same image sensors, which simplifies the comparison. The old MRFL and new MRFL each captured three sets of images. For each set of images, the working distance, lighting conditions, and camera settings are identical for both MRFL systems. The first set includes the wide-view image of the full surgical field. The second set is a pair of ${2} \times$ zoomed-view images capturing different ROIs. The first image in the pair was taken with the zoomed sub-region located at the center of the surgical field (center ROI). The second image was taken with the zoomed sub-region located at the edge of the surgical field (edge ROI). The third set is also comprised of center ROI and edge ROI images but at ${3} \times$ zoom. These images are all shown in Fig. 8.

1. Relative Brightness

For a given image captured by the new MRFL and the matching image captured by its predecessor, relative image brightness was computed as the ratio of the average pixel value (APV) of the new MRFL's image to that of the old MRFL's image. Table 1 shows the average pixel values and relative brightness of each set of images as well as the overall average relative brightness (ARB) of the wide and zoomed views.
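This metric can be sketched as follows (the frame contents here are synthetic stand-ins for captured images, not measured data):

```python
import numpy as np

def average_pixel_value(img):
    """Mean over all pixels and RGB channels of an 8-bit image."""
    return float(np.mean(img))

def relative_brightness(img_new, img_old):
    """Ratio of the new MRFL image's APV to the old MRFL image's APV."""
    return average_pixel_value(img_new) / average_pixel_value(img_old)

# Synthetic flat frames at the sensor resolution quoted in Section 4
old_frame = np.full((964, 1288, 3), 50, dtype=np.uint8)
new_frame = np.full((964, 1288, 3), 132, dtype=np.uint8)
rb = relative_brightness(new_frame, old_frame)  # 2.64
```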


Table 1. Relative Brightness Analysis of the New MRFL with Respect to Its Predecessor Using Average Pixel Value Ratios

 

Fig. 9. Plot showing the measured wide-view MTF performances of the old and new MRFL systems along with the MTF curves of the design.


 

Fig. 10. Plots showing the measured zoomed-view MTF performances of the old and new MRFL systems along with the MTF curves of the design. Top: ${2} \times$ zoom ratio, bottom: ${3} \times$ zoom ratio.


 

Fig. 11. Comparison images of USAF 1951 resolution target taken with the old (left images) and new (right images) MRFL systems: (a) wide view; (b) ${2} \times$ zoomed view; (c) ${3} \times$ zoomed view.


The old MRFL design allocates 50% of incoming light to the wide view and 50% to the zoomed view. The new MRFL design allots 10% of incoming light to the wide view and 90% to the zoomed view. Thus, the new MRFL should ideally produce zoomed-view images 1.8 times brighter than those of the old MRFL and wide-view images 0.2 times as bright as those of the old MRFL. Table 1 shows, however, that the new MRFL zoomed-view images are on average 2.64 times brighter than the old MRFL images. As mentioned previously, the old MRFL suffers significant alignment problems due to its mechanical design. In addition, the lens cleaning techniques used while assembling the old MRFL were not as refined and efficient as those used for the new MRFL. These issues likely lower the brightness of the old MRFL images and contribute the majority of the gap between the ideal (1.8) and measured (2.64) zoomed-view relative brightness. Similarly, the measured 0.41 wide-view relative brightness is higher than the ideal 0.2. As seen in Fig. 8, the wide-view image taken with the old MRFL is saturated, while the wide-view image from the new MRFL is much less saturated. Thus, the clipped pixel values of 255 in the saturated regions are not representative of the larger true difference in brightness between the two MRFL systems. Overall, the new MRFL system yields well-balanced image brightness for both the wide-view and zoomed-view probes.
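The saturation caveat above can be checked programmatically; a simple sketch flags frames whose APV is unreliable because a large fraction of pixels is clipped at 255:

```python
import numpy as np

def saturated_fraction(img, level=255):
    """Fraction of pixels clipped at the sensor's maximum value."""
    return float(np.mean(np.asarray(img) >= level))

# A half-clipped synthetic frame: its APV understates the true brightness.
frame = np.zeros((100, 100), dtype=np.uint8)
frame[:50, :] = 255
frac = saturated_fraction(frame)  # 0.5
```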

2. Slanted Edge MTF

In addition to relative brightness, the images in Fig. 8 were used to measure the slanted edge MTF. The wide-view images cover a large FOV, and the MTF varies significantly with field angle. For each wide-view image, the MTF was measured at 14.5° and 30°. The resulting MTF curves are plotted with the designed MTF curves in Fig. 9. The 14.5° MTF measured for the old MRFL is higher than expected, likely because the 14.5° field falls within the saturated region of the image, creating a misleadingly sharp contrast between the white background and the black slanted edge. The slanted edge MTF for each of the zoomed-view images was also measured. The average resulting MTF curves are plotted with the designed MTF curves in Fig. 10. The results demonstrate noticeable improvements in the MTF performance of the zoomed views under both ${2\times}$ and ${3\times}$ zoom ratios.
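The measurement principle can be illustrated with a simplified, axis-aligned variant of the edge method (a full slanted-edge implementation, e.g., per ISO 12233, additionally projects pixels along the edge angle to super-resolve the edge profile; that step is omitted in this sketch):

```python
import numpy as np

def edge_mtf(edge_image):
    """Edge-based MTF estimate: average rows into an edge-spread function
    (ESF), differentiate into a line-spread function (LSF), then take the
    magnitude of the normalized discrete Fourier transform."""
    esf = edge_image.mean(axis=0)
    lsf = np.diff(esf)
    lsf = lsf / lsf.sum()               # normalize so that MTF(0) = 1
    freqs = np.fft.rfftfreq(lsf.size)   # cycles per pixel
    return freqs, np.abs(np.fft.rfft(lsf))

# Synthetic test edge: an ideal step blurred by a 5-pixel box filter;
# the boundary samples are cropped to avoid convolution edge effects.
step = np.r_[np.zeros(32), np.ones(32)]
blurred = np.convolve(step, np.ones(5) / 5, mode="same")
freqs, mtf = edge_mtf(np.tile(blurred[4:-4], (16, 1)))
```

For this box-blurred edge, the MTF starts at 1 and rolls off toward the Nyquist frequency, as expected for a sinc-like response.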

3. Resolution

A USAF 1951 resolution target was used to compare the image resolutions of the old and new MRFL systems. For both systems, the target was placed at the nominal working distance of 120 mm and lighting conditions were unchanged. Wide-view, ${2} \times$ zoomed-view, and ${3} \times$ zoomed-view images were captured using both MRFL systems. The camera settings were adjusted on a picture-by-picture basis to ensure that each image achieves the highest resolution possible. Figure 11 shows side-by-side comparisons of the resolution images taken with the old MRFL and the new MRFL, while Table 2 lists the smallest resolvable 1951 USAF resolution target elements and the associated spatial and angular object space resolutions for each system.


Table 2. Resolution Limits of the Old and New MRFL Systems

As seen in Table 2, the new MRFL system has slightly higher ${2} \times$ and ${3} \times$ spatial resolutions than those of the old MRFL. Conversely, the new MRFL exhibits lower wide-view spatial resolution than its predecessor. The decrease in wide-view spatial resolution from the old system to the new is likely due to the large disparity in the amount of light that each receives. The amount of light received by the wide-view probe in the new system is one fifth that of the old system. Therefore, in order to capture the images seen in Fig. 11(a), the gain was set significantly higher for the new MRFL than for the old system, which greatly increased image noise.
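For reference, the spatial frequency of a 1951 USAF target element follows a standard formula; a short sketch (the group and element values below are illustrative, not the measured limits from Table 2):

```python
def usaf_lp_per_mm(group, element):
    """Line pairs per mm of a 1951 USAF target element:
    frequency = 2 ** (group + (element - 1) / 6)."""
    return 2.0 ** (group + (element - 1) / 6.0)

# Example: group 2, element 3 is ~5.04 lp/mm, i.e. ~99 um wide lines
freq = usaf_lp_per_mm(2, 3)
line_width_um = 1000.0 / (2.0 * freq)
```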

4. CHROMATIC ABERRATION

The optical system of the original MRFL prototype suffered from severe lateral chromatic aberration (LCA) contributed by the objective and relay lenses, which caused a chromatic change of magnification (CCM). Because the new MRFL prototype was designed to improve performance without changing the overall lens design, it inherited the same CCM problem, which could not be corrected optically. A real-time digital correction was therefore implemented in the new MRFL system instead.

Both the wide-view and zoomed-view imaging probes use a Chameleon3 camera (CM3-U3-13S2C-CS) from FLIR Integrated Imaging Solutions, Inc. The CM3-U3-13S2C-CS has a resolution of ${1288} \times {964}$ pixels and uses an RGB Bayer color filter. Thus, an image produced by the camera is effectively a ${1288} \times {964} \times {3}$ matrix. Any red, green, or blue pixel intensity can then be referenced by its matrix indices $({x_i},{y_i},{z_i})$, where $({x_i},{y_i})$ specifies pixel location and ${z_i}$ specifies the RGB channel (see Fig. 12). In an ideal system, the wavefronts of all wavelengths emanating from a single point on the object converge to the same pixel on the detector. Conversely, in a system suffering only from LCA, the wavefronts of different wavelengths from a single object point are imaged to spatially displaced pixel locations, as illustrated in Fig. 12, denoted $({{x_r},{y_r},1})$, $({{x_g},{y_g},2})$, and $({{x_b},{y_b},3})$, respectively. The goal of the digital correction is to provide a mapping from $({{x_r},{y_r},1})$ and $({{x_b},{y_b},3})$ to $({x_r^{\rm New},y_r^{\rm New},1})$ and $({x_b^{\rm New},y_b^{\rm New},3})$ for every pixel of the image by shifting the pixels of the red and blue channels by small amounts $\Delta R$ and $\Delta B$, respectively, such that $x_r^{\rm New} = x_b^{\rm New} = {x_g}$ and $y_r^{\rm New} = y_b^{\rm New} = {y_g}$. The digital correction of LCA can be carried out in the same way as the well-understood digital correction of image distortion, except that LCA correction must be applied per color channel.

 

Fig. 12. Illustration of the effect of LCA on the wide-view camera sensor and method of digital correction. The magenta dome denotes the pixel location, $({{C_\textit{Wx}},{C_\textit{Wy}}})$, of the distortion center on the sensor, which is the intersection of the optical axis with the image sensor.


 

Fig. 13. Wide-view LCA profiling. (a) Image of a vertical line grid taken with the wide-view probe. Matching radial line profiles are extracted from the red, green, and blue channels along the path denoted by the red arrow. The path travels normal to the grid lines and extends from the distortion center to the edge of the image. (b) The RGB line profiles are superimposed on the same plot resulting in a series of peak triplets. Each triplet consists of a green intensity peak and its corresponding red and blue intensity peaks. (c) The displacements between the reference green peaks and their corresponding red or blue peaks are plotted against the radial distance of the green peak to the distortion center. Linear trendlines fitted to the data reveal the slope values, $M_\textit{Wr}^{\rm vert}$ and $M_\textit{Wb}^{\rm vert}$.


A. Wide-View Correction

Considering the rotationally symmetric nature of the wide-view imaging probe, the LCA ray error at the image plane increases linearly with field height, while the distortion ray error increases cubically with field height. The field height of a given pixel may be quantified by the radial distance of the pixel to the intersection point, ${C_W}$ $({{C_\textit{Wx}},{C_\textit{Wy}}})$, of the optical axis with the image sensor of the wide-view probe, which is often referred to as the radial distortion center or the principal point in the camera calibration literature [9,10]. Due to machining tolerances and assembly errors, the distortion center is shifted from the center of the image sensor; its pixel location on the sensor is illustrated by a magenta dome in Fig. 12. For the purpose of LCA correction, for a given field point ${Q}$ on the image, we chose the pixel location, $({{x_\textit{Wg}},{y_\textit{Wg}}})$, of its green channel in the image plane as the reference. As illustrated in Fig. 12, the corresponding pixel location of the red channel, $({{x_\textit{Wr}},{y_\textit{Wr}}})$, is displaced from the green reference, and the errors in pixel location, $({\Delta\! {R_\textit{Wx}},\Delta\! {R_\textit{Wy}}})$, for the associated red pixel change linearly with the radial distance of the reference pixel from the distortion center, ${C_W}$. The same is true for the pixel location errors of the blue channel, $({\Delta {B_\textit{Wx}},\Delta {B_\textit{Wy}}})$. The pixel errors for the red and blue channels may be expressed as

$$\left\{{\begin{array}{*{20}{c}}{\overrightarrow {\Delta\! {R_W}} = \left\langle {\Delta\! {R_\textit{Wx}},\quad \Delta\! {R_\textit{Wy}}} \right\rangle}\\[4pt]{\Delta\! {R_\textit{Wx}} = {M_\textit{Wr}}\centerdot \left({{x_\textit{Wg}} - {C_\textit{Wx}}} \right)}\\[4pt]{\Delta\! {R_\textit{Wy}} = {M_\textit{Wr}}\centerdot \left({{y_\textit{Wg}} - {C_\textit{Wy}}} \right)}\end{array}} \right.$$
and
$$\left\{{\begin{array}{*{20}{c}}{\overrightarrow {\Delta {B_W}} = \left\langle {\Delta {B_\textit{Wx}},\quad \Delta {B_\textit{Wy}}} \right\rangle}\\[4pt]{\Delta {B_\textit{Wx}} = {M_\textit{Wb}}\centerdot \left({{x_\textit{Wg}} - {C_\textit{Wx}}} \right)}\\[4pt]{\Delta {B_\textit{Wy}} = {M_\textit{Wb}}\centerdot \left({{y_\textit{Wg}} - {C_\textit{Wy}}} \right)}\end{array}} \right.,$$
where ${M_\textit{Wr}}$ and ${M_\textit{Wb}}$ are the linear rates at which $\Delta\! {R}$ and $\Delta {B}$ change with respect to the radial distance of the reference pixel, $({{x_\textit{Wg}},{y_\textit{Wg}}})$, from the center of distortion, $({{C_\textit{Wx}},{C_\textit{Wy}}})$. Consequently, LCAs may be corrected by applying the mappings shown in Eqs. (4) and (5) to every pixel in the red and blue channels:
$$\left\{{\begin{array}{*{20}{c}}{x_\textit{Wr}^{\rm New} = \frac{{{x_\textit{Wr}} - {C_\textit{Wx}}}}{{1 + {M_\textit{Wr}}}} + {C_\textit{Wx}}}\\[4pt]{y_\textit{Wr}^{\rm New} = \frac{{{y_\textit{Wr}} - {C_\textit{Wy}}}}{{1 + {M_\textit{Wr}}}} + {C_\textit{Wy}}}\end{array}} \right.,$$
$$\left\{{\begin{array}{*{20}{c}}{x_\textit{Wb}^{\rm New} = \frac{{{x_\textit{Wb}} - {C_\textit{Wx}}}}{{1 + {M_\textit{Wb}}}} + {C_\textit{Wx}}}\\[4pt]{y_\textit{Wb}^{\rm New} = \frac{{{y_\textit{Wb}} - {C_\textit{Wy}}}}{{1 + {M_\textit{Wb}}}} + {C_\textit{Wy}}}\end{array}} \right..$$
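A minimal sketch of applying the mappings in Eqs. (4) and (5) to an image, using nearest-neighbor resampling for simplicity (the function names are our own; a production implementation would use sub-pixel interpolation):

```python
import numpy as np

def rescale_channel(channel, center, m):
    """Radially rescale one color channel about the distortion center,
    implementing Eqs. (4)-(5) as an inverse map: each output pixel samples
    the input at (x - Cx) * (1 + m) + Cx (nearest neighbor)."""
    h, w = channel.shape
    cy, cx = center
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    src_x = np.clip(np.rint((xs - cx) * (1 + m) + cx), 0, w - 1).astype(int)
    src_y = np.clip(np.rint((ys - cy) * (1 + m) + cy), 0, h - 1).astype(int)
    return channel[src_y, src_x]

def correct_lca(img, center, m_r, m_b):
    """Shift the red and blue channels onto the green reference."""
    out = img.copy()
    out[..., 0] = rescale_channel(img[..., 0], center, m_r)  # red, M_Wr
    out[..., 2] = rescale_channel(img[..., 2], center, m_b)  # blue, M_Wb
    return out
```

For example, a red feature displaced outward from radius 20 to 22 pixels (i.e., $M = 0.1$) is pulled back to radius 20 by `rescale_channel`.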
The linear characterization of LCA in the wide-view probe amounts to locating ${C_W}$ and measuring ${M_\textit{Wr}}$ and ${M_\textit{Wb}}$. The radial distortion center, ${C_W}$, was found via a commonly used camera calibration process [9,10]. To measure ${M_\textit{Wr}}$ and ${M_\textit{Wb}}$, a target composed of a grid of evenly spaced, vertical, white lines on a black background was constructed via MATLAB and displayed on a 4 K monitor at the nominal working distance of the MRFL [see Fig. 13(a)]. The high pixel density of the 4 K monitor ensures that the resolution of the displayed target is well beyond that of the MRFL system and thus not a limiting factor. An RGB color image of the line-grid target was captured by the MRFL. For each RGB channel of the captured image, a pixel intensity line-profile was measured along the direction perpendicular to the grid lines starting at the distortion center, ${C_W}$, and ending at the edge of the image, as shown by the red arrow in Fig. 13(a). The intensity profiles of the RGB color channels were all plotted on the same graph in Fig. 13(b). Each of the periodically spaced green intensity peaks is accompanied by its associated red and blue intensity peaks. For a single RGB peak triplet, the pixel locations of the peaks along the intensity profile (displacement, in pixels, of the peaks from the distortion center) were recorded. The green peak location is designated the “reference pixel” location since all wavelengths would converge to that same location if the system were free of LCA. The displacements, $\overrightarrow {\Delta\! {R_W}}$ and $\overrightarrow {\Delta {B_W}}$, in pixels, between the red or the blue peak locations and the reference pixel were found for each RGB peak triplet in the line profile, and they were plotted in Fig. 13(c) with respect to the radial distance from the corresponding green peak location to the distortion center. The plots of $\Delta\! {R_W}$ and $\Delta {B_W}$ were fitted with linear regression lines that intersect the origin. The slopes of the $\Delta\! {R_W}$ and $\Delta {B_W}$ regression lines are noted as $M_\textit{Wr}^{\rm vert}$ and $M_\textit{Wb}^{\rm vert}$, respectively. To increase the characterization accuracy, the procedure was repeated for a grid of horizontal white lines and again for a grid of diagonal (tilted 45°) white lines. The resulting values were $M_\textit{Wr}^{\rm horiz}$, $M_\textit{Wb}^{\rm horiz}$, $M_\textit{Wr}^{\rm diag}$, and $M_\textit{Wb}^{\rm diag}$. The average of $M_\textit{Wr}^{\rm vert}$, $M_\textit{Wr}^{\rm horiz}$, and $M_\textit{Wr}^{\rm diag}$ was taken to be the linear rate ${M_\textit{Wr}} = 0.0105$. Similarly, ${M_\textit{Wb}} = - 0.0144$ was found by averaging $M_\textit{Wb}^{\rm vert}$, $M_\textit{Wb}^{\rm horiz}$, and $M_\textit{Wb}^{\rm diag}$. The results of the wide-view digital LCA correction using these averaged linear rates are shown in Fig. 14.
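A regression line forced through the origin has the closed-form slope $M = \sum r\Delta /\sum {r^2}$. The sketch below fits a linear rate to synthetic, noiseless peak data; the function name and numbers are illustrative, not the paper's script:

```python
import numpy as np

def fit_lca_rate(radii, displacements):
    """Least-squares slope of a zero-intercept line:
    M = sum(r * d) / sum(r^2), fitting peak displacement
    against radial distance from the distortion center."""
    r = np.asarray(radii, dtype=float)
    d = np.asarray(displacements, dtype=float)
    return float(np.sum(r * d) / np.sum(r * r))

# Synthetic red-channel peaks displaced at exactly 0.0105 px per px of
# radius; the fit should recover that rate.
r = np.arange(50, 900, 50, dtype=float)
d = 0.0105 * r
m_wr = fit_lca_rate(r, d)
```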
 

Fig. 14. Wide-view digital LCA correction results: (a) original image, (b) digitally corrected image.


B. Zoomed-View Correction

Although the same characterization procedure and equations as those used for profiling the wide-view probe can be generally applied to the zoomed-view probe, one challenge surfaced due to the scanning nature of the foveated probe. The LCA correction method given by Eqs. (2)–(5) works for a rotationally symmetric system where the distortion center is free from chromatic aberration. It further assumes a local coordinate frame of a captured image with the frame origin located at the top-left corner of the image. These assumptions are readily applicable to the wide-view probe, but not to the zoomed-view probe because the 2D scanning nature breaks the assumption of rotational symmetry. The distortion center of a zoomed view, ${C_Z}$, which is the intersection of the zoom probe’s optical axis with the image sensor, is no longer LCA free when the scanner deviates from the $({\theta = 0,\;\varphi = 0})$ scanning angle, where $\theta$ and $\varphi$ are the azimuth and elevation angles of the 2D scanner, respectively. It is unrealistic to capture and calibrate the zoomed-view probe for every possible 2D position of the scanner. To simplify the problem, we assume the overall LCA error of a foveated image is the cumulative effect of two sources. We will adopt a notation in which $({{x_\textit{Zr}},{y_\textit{Zr}}})$, $({{x_\textit{Zg}},{y_\textit{Zg}}})$, and $({{x_\textit{Zb}},{y_\textit{Zb}}})$ respectively mark the corresponding red, green, and blue pixel locations, ${P_r}$, ${P_g}$, and ${P_b}$, of the image point, $P$. The first source of error is contributed by the endoscope optics and is composed of the chromatic displacements, $({\overrightarrow {\Delta\! {R^{\theta ,\varphi}}} ,\overrightarrow {\Delta {B^{\theta ,\varphi}}}})$. The second error source is contributed by the zoom-probe optics and is composed of the additional chromatic displacements, $({\overrightarrow {\Delta\! {R_Z}} ,\overrightarrow {\Delta {B_Z}}})$.

Figure 15 illustrates the additive effects using the red channel as an example. The blue box represents the wide-view image with its distortion center, ${C_W}$, located at a fixed point in the wide-view coordinate system, $({{x_W},\;{y_W}})$. The zoomed-view image is represented by the pink box with its distortion center, ${C_Z}$, located at a fixed pixel position within the zoom-view coordinate system $({{x_Z},\;{y_Z}})$, and the pink box can be moved around within the blue box by adjusting the scanning mirror angle, ($\theta$, $\varphi$). Considering the optical magnification difference between the wide and zoomed views, the pixel distance measured in the wide view differs from that measured in the zoomed view by a factor equal to the magnification difference. To account for this factor, for a given zoom view under a given zoom ratio, we project the wide-view distortion center, ${C_W}$, into its corresponding position, $D_w^{\theta ,\varphi}({D_x^{\theta ,\varphi},D_y^{\theta ,\varphi}})$, with respect to the zoom-view coordinate system. The relative displacement, $\overrightarrow {{D^{\theta ,\varphi}}}$, defined as the directional deviation of the projected distortion center of the wide-view window from the origin of the zoomed-view window, changes as the foveated sub-region is moved around via the scanning mirror. For a pixel, ${P}$, in the zoomed-view image, the LCA contribution of the zoom probe, $\overrightarrow {\Delta\! {R_Z}}$, is linearly dependent on the displacement, $\overrightarrow {{r_Z}}$, from ${C_Z}$ to the green reference pixel location, ${P_g}$, and is unaffected by changes to the scanning mirror angle. By contrast, the contribution from the endoscope optics, $\overrightarrow {\Delta\! {R^{\theta ,\varphi}}}$, is greatly affected by changes to the scanning mirror angle because it is linearly proportional to the displacement, ${r^{\theta ,\varphi}}$, between the projected wide-view distortion center, $({D_x^{\theta ,\varphi},D_y^{\theta ,\varphi}})$, and the green reference pixel. The contributions of both sources, $\overrightarrow {\Delta\! {R_Z}}$ and $\overrightarrow {\Delta\! {R^{\theta ,\varphi}}}$, add together, resulting in a total shift, $\Delta R_Z^{\theta ,\varphi}$, of the red pixel, ${P_r}$, from the green reference location.

 

Fig. 15. Representation of the additive nature of the zoom probe and endoscope LCA contributions in the red channel.


The first error source is caused by the objective and relay lenses of the main scope and thus depends on the mirror scanning angle, while the second error source is caused by the imaging optics behind the scanning mirror of the foveated probe and is therefore independent of the scanner position. We can therefore model the magnitudes of $({\overrightarrow {\Delta\! {R^{\theta ,\varphi}}} ,\overrightarrow {\Delta {B^{\theta ,\varphi}}}})$ on the zoomed-view detector as being linearly proportional to the radial distance, ${r^{\theta ,\varphi}}$, between the green reference pixel, ${P_g}$, and the projected position, $\overrightarrow {{D^{\theta ,\varphi}}} = \langle {D_x^{\theta ,\varphi},D_y^{\theta ,\varphi}} \rangle$, of the main scope’s optical axis. The magnitudes of $({\overrightarrow {\Delta\! {R_Z}} ,\overrightarrow {\Delta {B_Z}}})$ can be modeled as being linearly proportional to the radial displacement, $\overrightarrow {{r_Z}}$, from the distortion center of the zoomed probe to the reference pixel, ${P_g}$. By modifying Eqs. (2) and (3), the pixel errors for the red and blue channels of the zoomed-view image may be expressed as

$$\left\{\!{\begin{array}{*{20}{c}}{\overrightarrow {\Delta R_Z^{\theta ,\varphi}} = \left\langle {\;\Delta R_\textit{Zx}^{\theta ,\varphi},\;\Delta R_\textit{Zy}^{\theta ,\varphi}\;} \right\rangle}\\[4pt]{\Delta R_\textit{Zx}^{\theta ,\varphi} = \Delta R_x^{\theta ,\varphi} + \Delta\! {R_\textit{Zx}} = {M_\textit{Wr}}\centerdot \left({{x_\textit{Zg}} - D_x^{\theta ,\varphi}} \right) + {M_\textit{Zr}}\centerdot \left({{x_\textit{Zg}} - {C_\textit{Zx}}} \right)}\\[4pt]{\Delta R_\textit{Zy}^{\theta ,\varphi} = \Delta R_y^{\theta ,\varphi} + \Delta\! {R_\textit{Zy}} = {M_\textit{Wr}}\centerdot \left({{y_\textit{Zg}} - D_y^{\theta ,\varphi}} \right) + {M_\textit{Zr}}\centerdot \left({{y_\textit{Zg}} - {C_\textit{Zy}}} \right)}\end{array}}, \right.$$
$$\begin{split}\left\{\!{\begin{array}{*{20}{c}}{\overrightarrow {\Delta B_Z^{\theta ,\varphi}} = \left\langle {\;\Delta B_\textit{Zx}^{\theta ,\varphi},\;\Delta B_\textit{Zy}^{\theta ,\varphi}\;} \right\rangle}\\[4pt]{\Delta B_\textit{Zx}^{\theta ,\varphi} = \Delta B_x^{\theta ,\varphi} + \Delta {B_\textit{Zx}} = {M_\textit{Wb}}\centerdot \left({{x_\textit{Zg}} - D_x^{\theta ,\varphi}} \right) + {M_\textit{Zb}}\centerdot \left({{x_\textit{Zg}} - {C_\textit{Zx}}} \right)}\\[4pt]{\Delta B_\textit{Zy}^{\theta ,\varphi} = \Delta B_y^{\theta ,\varphi} + \Delta {B_\textit{Zy}} = {M_\textit{Wb}}\centerdot \left({{y_\textit{Zg}} - D_y^{\theta ,\varphi}} \right) + {M_\textit{Zb}}\centerdot \left({{y_\textit{Zg}} - {C_\textit{Zy}}} \right)}\end{array}} ,\right.\end{split}$$
where ${M_\textit{Zr}}$ and ${M_\textit{Zb}}$ are the linear rates at which the zoom-probe contributions, $\overrightarrow {\Delta\! {R_Z}}$ and $\overrightarrow {\Delta {B_Z}}$, change with respect to the radial distance from the distortion center, ${C_Z}$, of the foveated probe to the reference pixel, ${P_g}$. The LCA correction can then be applied to the foveated image by combining the pixel errors of Eqs. (6) and (7) with the mappings of Eqs. (4) and (5).
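The additive model of Eqs. (6) and (7) reduces to two scaled offsets per channel. A minimal sketch for the red channel, with hypothetical coordinates and rates:

```python
import numpy as np

def zoom_red_shift(p_g, d_proj, c_z, m_wr, m_zr):
    """Total red-pixel shift in the zoomed view per Eq. (6): the
    endoscope term scales the offset from the projected wide-view
    center D, the probe term the offset from the zoom center C_Z."""
    p_g, d_proj, c_z = map(np.asarray, (p_g, d_proj, c_z))
    return m_wr * (p_g - d_proj) + m_zr * (p_g - c_z)

# Hypothetical numbers: green reference pixel at (600, 400), projected
# wide-view center at (-200, 100), zoom center at (512, 384).
shift = zoom_red_shift((600.0, 400.0), (-200.0, 100.0), (512.0, 384.0),
                       0.0105, 0.002)
```

The blue-channel shift of Eq. (7) follows the same form with ${M_\textit{Wb}}$ and ${M_\textit{Zb}}$ substituted.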
 

Fig. 16. Zoomed-view profiling rectification. (a) Image captured from wide-view feed. The blue box indicates the region being imaged by the zoomed-view probe. A green hash mark was created by clicking the point of intersection of a grid line with the overlaid yellow line. (b) Image captured from zoomed-view feed. The green hash mark appears in the zoomed view as well.


If the LCA introduced by the zoom probe is negligible compared to that of the objective and relay lenses, then the corresponding terms in Eqs. (6) and (7) are inconsequential and may be dropped. In such a case, calibration of ${C_Z}$, ${M_\textit{Zr}}$, and ${M_\textit{Zb}}$ is unnecessary. This is a preferable scenario, as it is far less computationally intensive and time consuming. To determine whether LCA contributed by the zoom probe can be considered negligible, the zoom probe is used to measure ${M_\textit{Wr}}$ and ${M_\textit{Wb}}$, the linear rates of change of $\Delta\! {R_W}$ and $\Delta {B_W}$ seen by the wide-view probe. To validate dropping the zoom-probe contributions in Eqs. (6) and (7), it was decided that the error between the ${M_\textit{Wr}}$ and ${M_\textit{Wb}}$ values measured using the zoom-view probe and those measured using the wide-view probe must result in pixel disparities at the edge of the wide-view FOV that are less than or equal to 3 pixels.
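This decision rule amounts to scaling each rate disagreement by the maximal radial distance in the wide view and comparing against the tolerance. A minimal sketch with hypothetical numbers (the 900 px radius is an assumed value for illustration):

```python
def zoom_lca_negligible(d_m_wr, d_m_wb, r_max, tol_px=3.0):
    """True when both rate disagreements, projected to the furthest
    wide-view pixel at radius r_max, stay within the tolerance."""
    return abs(d_m_wr) * r_max <= tol_px and abs(d_m_wb) * r_max <= tol_px

# Hypothetical values: a 0.003 px/px red-rate disagreement over a
# 900 px radius gives a worst-case disparity under the 3 px tolerance.
ok = zoom_lca_negligible(0.003, -0.002, 900.0)
```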

Once again, a grid of evenly spaced, vertical, white lines on a black background was constructed via MATLAB and displayed on a 4 K monitor at the nominal working distance of the MRFL (Fig. 16). Since the foveated sub-region can be moved to any location within the surgical field covered by the wide-angle probe, the displacements $\Delta\! {R_W}$ and $\Delta {B_W}$ must be measured across the entire radial path from the distortion center, ${C_W}$, to the edge of the wide field. Two foveated sub-region images were needed to cover the horizontal radial path from the distortion center to the edge of the surgical field. These two sub-regions are the red-boxed portions of the images at the top of Fig. 17. For a meaningful comparison to the wide-view measurements of ${M_\textit{Wr}}$ and ${M_\textit{Wb}}$, it must be assumed that the zoom probe is free of all chromatic aberration when it is used to measure ${M_\textit{Wr}}$ and ${M_\textit{Wb}}$. Other sources of error inherent to the zoom view, however, must be considered.

 

Fig. 17. Top: two zoomed-view sub-regions (indicated by the red boxes) were needed to span the full path from the distortion center to the edge of the surgical field. The RGB intensity profiles of each grid line in the two sub-regions were extracted and plotted separately to reveal an RGB peak triplet. Bottom: for a single grid line, the displacements of the red and blue peaks from the corresponding green peak are plotted against the radial distance of the green peak to the distortion center, resulting in one red and one blue data point on the graph. Repeating this process for every grid line in the two sub-regions results in the scatter plot above.


The zoomed-view camera calibrations exhibit far greater pixel errors than those of the wide-view probe due to the relatively high image noise. The main error source for calibrating the wide-view probe comes from the camera calibration for determining the distortion center. The inherently larger throughput of the wide-view probe produces an image with less noise, which facilitates accurate pattern detection during calibration and accurate distortion calibration with a pixel error of 0.1–0.3 pixels. The calibration error of the zoomed-view probe comes from both the camera calibration and scanning BS calibration. The lower throughput of the zoomed-view probe produces an image with greater noise, which reduces the accuracy of the camera calibration’s automatic pattern detection. In addition, the wide-view probe conforms closely to Brown’s lens distortion model used in the calibration, whereas the zoomed-view probe does not. The resulting pixel error of the zoomed-view camera calibration is 0.7–1.5 pixels. Calibration of the scanning BS requires the user to manually click the locations of features in the wide-view image that match those displayed in the zoomed-view image. Image degradation in the larger fields makes it difficult to match exact feature locations. The scanning BS calibration is, therefore, highly dependent on the user. Careful calibration, however, typically results in errors of 1–2 pixels.

Errors in camera and BS calibrations of the zoomed view perturb the geometrical mapping of features from the surgical field to the distortion-corrected zoomed-view image produced by the MRFL. Consequently, a straight line in the surgical field may not appear perfectly straight in the zoomed-view image. For this reason, it is unacceptable to take a line intensity profile across all vertical grid lines that fall on the horizontal radial path from the projected ${C_W}$ to the image edge. The more favorable solution is to digitally overlay a reference line in the wide-view image since the distortion center location of the wide view is known within $\pm {0.3}$ pixels. The sub-pixel error of the wide-view calibration is negligible in relation to the calibration error of the zoom view. Therefore, superimposing a straight line onto the distortion-corrected wide-view image is equivalent to drawing the reference line in the surgical field so long as the superimposed line runs perpendicular to the grid lines and intersects both the distortion center and image edge. Then, a separate set of RGB intensity profiles can be taken for each grid line at the point in the zoomed-view image that corresponds to the intersection point of the grid line with the superimposed line in the wide-view image.

A yellow reference line with a red crosshair denoting the distortion center was superimposed onto the real-time, distortion-corrected wide-view camera feed. The zoomed-view feed was displayed simultaneously in real time as well. The MATLAB script used to generate the grid was altered so that a green hash mark would be placed at any location within that grid that the user clicks on. A separate computer and monitor were used to generate and display the grid lines so that the computer cursor could be seen in the real-time MRFL camera feeds. As shown in Fig. 16, the intersection of each grid line with the superimposed reference line was marked by lining the cursor up with the intersection point in the wide-view feed and clicking. Upon clicking, a hash mark appears that is visible in both the wide view and zoomed view.

 

Fig. 18. Zoom-view distortion center calibration. (a) Circular grid generated via MATLAB script. (b) Circular grid displayed on 4 K monitor. Grid is positioned such that its center is aligned with the distortion center crosshairs overlaid on the wide-view feed. (c) The surgical field sampled by the zoom probe at nine reference locations arranged in a ${3 \times 3}$ grid. Each reference location corresponds to a known pair of azimuth and elevation parameters for the beam splitter.


Once all intersection points were marked, we captured zoomed-view images of the two foveated sub-regions needed to span the horizontal radial path from the distortion center to the edge of the surgical field. The $\Delta\! {R_W}$ and $\Delta {B_W}$ plots resulting from the two images were stitched together into one plot (Fig. 17). Both $\Delta\! {R_W}$ and $\Delta {B_W}$ were fit with trend lines of slopes ${M_\textit{Wr}} = 0.0074$ and ${M_\textit{Wb}} = - 0.0122$. The newly measured ${M_\textit{Wr}}$ and ${M_\textit{Wb}}$ rates differ from those measured in the wide view by $\Delta {M_\textit{Wr}} = 0.0031$ and $\Delta {M_\textit{Wb}} = 0.0022$. At the furthest pixel from ${C_W}$ in the wide view, $\Delta {M_\textit{Wr}}$ and $\Delta {M_\textit{Wb}}$ produce pixel disparities of around 2.73 pixels and 1.94 pixels, respectively. Thus, the LCA contributions of the zoom probe are negligible for our purposes and may be removed from Eqs. (6) and (7) to yield

$$\left\{{\begin{array}{*{20}{c}}{\Delta R_\textit{Zx}^{\theta ,\varphi} = \Delta R_x^{\theta ,\varphi} = {M_\textit{Wr}}\centerdot \left({{x_\textit{Zg}} - D_x^{\theta ,\varphi}} \right)}\\[4pt]{\Delta R_\textit{Zy}^{\theta ,\varphi} = \Delta R_y^{\theta ,\varphi} = {M_\textit{Wr}}\centerdot \left({{y_\textit{Zg}} - D_y^{\theta ,\varphi}} \right)}\end{array}} \right.$$
and
$$\left\{{\begin{array}{*{20}{c}}{\Delta B_\textit{Zx}^{\theta ,\varphi} = \Delta B_x^{\theta ,\varphi} = {M_\textit{Wb}}\centerdot \left({{x_\textit{Zg}} - D_x^{\theta ,\varphi}} \right)}\\[4pt]{\Delta B_\textit{Zy}^{\theta ,\varphi} = \Delta B_y^{\theta ,\varphi} = {M_\textit{Wb}}\centerdot \left({{y_\textit{Zg}} - D_y^{\theta ,\varphi}} \right)}\end{array}} \right..$$
As a result, LCA may be corrected in the zoom view by applying the mappings shown in Eqs. (10) and (11) to every pixel in the red and blue channels:
$$\left\{{\begin{array}{*{20}{c}}{x_\textit{Zr}^{\rm New} = \frac{{{x_\textit{Zr}} - D_\textit{Zx}^{\theta ,\varphi}}}{{1 + {M_\textit{Wr}}}} + D_\textit{Zx}^{\theta ,\varphi}}\\[4pt]{y_\textit{Zr}^{\rm New} = \frac{{{y_\textit{Zr}} - D_\textit{Zy}^{\theta ,\varphi}}}{{1 + {M_\textit{Wr}}}} + D_\textit{Zy}^{\theta ,\varphi}}\end{array}} \right.,$$
$$\left\{{\begin{array}{*{20}{c}}{x_\textit{Zb}^{\rm New} = \frac{{{x_\textit{Zb}} - D_\textit{Zx}^{\theta ,\varphi}}}{{1 + {M_\textit{Wb}}}} + D_\textit{Zx}^{\theta ,\varphi}}\\[4pt]{y_\textit{Zb}^{\rm New} = \frac{{{y_\textit{Zb}} - D_\textit{Zy}^{\theta ,\varphi}}}{{1 + {M_\textit{Wb}}}} + D_\textit{Zy}^{\theta ,\varphi}}\end{array}} \right..$$
Equations (10) and (11) exhibit the same structure as Eqs. (4) and (5). However, examination of Eqs. (10) and (11) reveals a new challenge that is unique to the zoomed-view pixel mappings. Unlike the wide-view color correction, which has no dependence on scanning angle, the zoom-view correction mappings depend on $\overrightarrow {{D^{\theta ,\varphi}}} = \langle {D_x^{\theta ,\varphi},D_y^{\theta ,\varphi}} \rangle$, which changes as the foveated view is moved around via the scanner. Every scanning angle $({\theta ,\;\varphi})$ corresponds to a unique projected distortion center, $({D_x^{\theta ,\varphi},D_y^{\theta ,\varphi}})$, and, by extension, a unique mapping solution given by Eqs. (10) and (11).
 

Fig. 19. (a) The grid is centered on the wide-view distortion center, ${C_W}$, so that all radial grid lines converge at $({{C_\textit{Wx}},{C_\textit{Wy}}})$ in the wide-view coordinate system. (b) The intersection point of the radial lines in the zoomed-view image is the projected location, $({D_x^{\theta ,\varphi},D_y^{\theta ,\varphi}})$, of the wide-view distortion center into the zoomed-view coordinate system. By this method, $({D_x^{\theta ,\varphi},D_y^{\theta ,\varphi}})$ can be calculated even if it lies outside of the zoomed-view image, as is illustrated above.


 

Fig. 20. Zoomed-view digital LCA correction results for edge ROI: (a) original image, (b) digitally corrected image.


Moreover, the foveated sub-region is often moved to positions in which the distortion center $({D_x^{\theta ,\varphi},D_y^{\theta ,\varphi}})$ may lie outside of the captured image. In order to apply the zoomed-view LCA correction equations, a robust calibration of $({D_x^{\theta ,\varphi},D_y^{\theta ,\varphi}})$ with respect to scanning angle is necessary to ensure successful real-time correction mappings.

Once again, the line and crosshairs were superimposed on the wide view to mark ${C_W}$. A circular grid was generated via MATLAB and displayed in a figure window on the 4 K monitor [Fig. 18(a)]. The figure window was positioned on the monitor such that the center of the grid was coincident with the red crosshairs and all radial lines of the grid converged at the distortion center, ${C_W}$ [Fig. 18(b)]. Thus, for any one scanning angle, $({\theta ,\;\varphi})$, the projected distortion center, $({D_x^{\theta ,\varphi},D_y^{\theta ,\varphi}})$, can be found by calculating the intersection point of the radial grid lines that appear in the zoomed-view image. Figure 19 illustrates this approach. Next, the surgical field was sampled by taking a zoomed-view image at nine different foveated sub-region positions [Fig. 18(c)].
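Because calibration errors keep the imaged radial lines from crossing at exactly one pixel, the intersection is naturally recovered as a least-squares fit. The sketch below shows one common formulation (a normal-equation solve over point-plus-direction lines); the function name and demo center are hypothetical, and the recovered point may lie well outside the zoomed image, as in Fig. 19(b):

```python
import numpy as np

def lines_intersection(points, directions):
    """Least-squares intersection of 2D lines, each given by a point
    and a direction: minimizes the summed squared perpendicular
    distances from the solution to every line."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(np.asarray(points, float), np.asarray(directions, float)):
        d = d / np.linalg.norm(d)
        proj = np.eye(2) - np.outer(d, d)   # projector onto the line normal
        A += proj
        b += proj @ p
    return np.linalg.solve(A, b)

# Radial lines through a hypothetical projected center (1500, -200),
# sampled at one point each; the fit should recover the center.
center = np.array([1500.0, -200.0])
angles = np.deg2rad([10.0, 40.0, 75.0, 120.0])
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
pts = center + 300.0 * dirs
est = lines_intersection(pts, dirs)
```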


Table 3. Measured Execution Times of Computer Operations

Each of the nine images was run through a MATLAB script that records the straight segments of the radial grid lines and outputs the average intersection coordinates of all of the line segments. Thus, $({D_x^{\theta ,\varphi},D_y^{\theta ,\varphi}})$ was found for nine different foveated sub-region reference positions. For each reference position image, Eqs. (10) and (11) were used to construct a lookup table of $x$ and $y$ pixel shifts for every pixel in the blue and red channels. These pixel shifts are $({\Delta {x_{i,j}},\Delta {y_{i,j}}})_{\rm Red}^{\rm ref}$ and $({\Delta {x_{i,j}},\Delta {y_{i,j}}})_{\rm Blue}^{\rm ref}$, where $i,j$ denote the row and column of the pixel and ref is an integer ($1 \le\, \text{ref}\, \le 9$) that denotes the reference position. When the foveated sub-region is moved to a position $({\psi ,\phi})$ that is not one of the nine reference positions, then $({\Delta {x_{i,j}},\Delta {y_{i,j}}})_{\rm Red}^{({\psi ,\phi})}$ and $({\Delta {x_{i,j}},\Delta {y_{i,j}}})_{\rm Blue}^{({\psi ,\phi})}$ are found by linearly interpolating the ${({\Delta {x_{i,j}},\Delta {y_{i,j}}})_{\rm Red}}$ and ${({\Delta {x_{i,j}},\Delta {y_{i,j}}})_{\rm Blue}}$ values of the four closest reference position images. The results of the zoomed-view digital LCA correction are shown in Fig. 20.
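The interpolation step can be sketched as a standard bilinear blend of the four surrounding reference tables. The data layout below (a dict of shift tables keyed by grid indices, with sorted reference angle arrays) is an assumption for illustration, not the authors' implementation:

```python
import numpy as np

def interpolate_shifts(ref_tables, ref_angles, query):
    """Bilinearly interpolate per-pixel shift tables between the four
    reference scanner positions surrounding the query angle.

    ref_tables: dict mapping (i, j) grid indices to (H, W, 2) arrays
        of (dx, dy) shifts; ref_angles: pair of sorted 1D arrays of the
        reference azimuth and elevation values; query: (theta, phi).
    """
    thetas, phis = ref_angles
    i = int(np.clip(np.searchsorted(thetas, query[0]) - 1, 0, len(thetas) - 2))
    j = int(np.clip(np.searchsorted(phis, query[1]) - 1, 0, len(phis) - 2))
    t = (query[0] - thetas[i]) / (thetas[i + 1] - thetas[i])
    u = (query[1] - phis[j]) / (phis[j + 1] - phis[j])
    return ((1 - t) * (1 - u) * ref_tables[i, j]
            + t * (1 - u) * ref_tables[i + 1, j]
            + (1 - t) * u * ref_tables[i, j + 1]
            + t * u * ref_tables[i + 1, j + 1])

# Toy tables on a 2x2 corner of the reference grid: a query midway
# between the reference angles blends all four tables equally.
tables = {(i, j): np.full((4, 4, 2), float(i + 10 * j))
          for i in range(2) for j in range(2)}
angles = (np.array([0.0, 1.0]), np.array([0.0, 1.0]))
mid = interpolate_shifts(tables, angles, (0.5, 0.5))
```

Precomputing the nine tables once and interpolating per frame keeps the per-frame cost to a few array operations, which suits the real-time GPU pipeline described below.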

For reference, Table 3 lists the measured execution time of all computer and software processes. All times were measured on a desktop PC with an Intel i7-8700 processor and NVIDIA RTX 2070 graphics card installed. The computer was running 64-bit Microsoft Windows 10 with 16 GB of RAM, and all image processing was performed on the GPU.

5. SUMMARY AND FUTURE WORK

In summary, this paper discusses the optical and mechanical design overhaul and realization of the multi-resolution foveated laparoscope. The redesigned system’s performance was evaluated on the metrics of image brightness, MTF, and resolution using the previous MRFL system as a basis for comparison. Additionally, a digital real-time method of LCA correction is detailed and demonstrated. In the future, we will conduct studies to test the objective and subjective feasibility of the redesigned MRFL for laparoscopic training and surgery. Furthermore, design has begun on the next version of the MRFL with a focus on improving performance and reducing size.

Funding

National Institutes of Health (1R01EB18921-01).

Disclosures

The authors declare no conflicts of interest.

REFERENCES

1. J. Rassweiler, O. Seemann, M. Schulze, D. Teber, M. Hatzinger, and T. Frede, “Laparoscopic versus open radical prostatectomy: a comparative study at a single institution,” J. Urol. 169, 1689–1693 (2003).

2. H. Alemzadeh, J. Raman, N. Leveson, Z. Kalbarczyk, and R. K. Iyer, “Adverse events in robotic surgery: a retrospective study of 14 years of FDA data,” PLoS One 11, e0151470 (2016).

3. M. P. Wu, C. S. Ou, S. L. Chen, E. Y. T. Yen, and R. Rowbotham, “Complications and recommended practices for electrosurgery in laparoscopy,” Am. J. Surg. 179, 67–73 (2000).

4. A. Madani and C. Mueller, “Fundamentals of energy utilization in the operating room,” in Fundamentals of General Surgery (Springer, 2018), pp. 129–136.

5. Y. Qin, H. Hua, and M. Nguyen, “Multiresolution foveated laparoscope with high resolvability,” Opt. Lett. 38, 2191–2193 (2013).

6. Y. Qin, H. Hua, and M. Nguyen, “Characterization and in-vivo evaluation of a multi-resolution foveated laparoscope for minimally invasive surgery,” Biomed. Opt. Express 5, 2548–2562 (2014).

7. Y. Qin and H. Hua, “Optical design and system engineering of a multiresolution foveated laparoscope,” Appl. Opt. 55, 3058–3068 (2016).

8. Y. Qin and H. Hua, “Continuously zoom imaging probe for the multi-resolution foveated laparoscope,” Biomed. Opt. Express 7, 1175–1182 (2016).

9. J. Heikkila and O. Silvén, “A four-step camera calibration procedure with implicit image correction,” in IEEE International Conference on Computer Vision and Pattern Recognition (IEEE, 1997), pp. 1106–1111.

10. J. Heikkila and O. Silvén, “Calibration procedure for short focal length off-the-shelf CCD cameras,” in International Conference on Pattern Recognition (ICPR) (1996), Vol. 1, pp. 166–170.




Figures (20)

Fig. 1. Optical system layout of an existing MRFL prototype design by Qin et al. [8] with a magnified inset view of the beam splitting and scanning assembly.

Fig. 2. (a) Pre-clinical testing setup of a prototype inserted in a porcine model; (b) image captured by the wide-angle probe; and (c) image captured by the zoomed probe with its field of view corresponding to the blue box marked in the wide-angle image.

Fig. 3. MRFL captures two FOVs simultaneously. The wide-view FOV (blue) is constant and large, which provides the wide-view probe with greater throughput. The adjustable magnification of the zoom-view probe allows the user to vary the ratio between the areas imaged by the wide-view probe and the zoomed view from 2× to 3×. At a 2× (orange) zoom ratio, the area in the surgical field imaged by the zoom-view probe is half that of the wide-view probe. At a 3× (green) zoom ratio, the area imaged by the zoom-view probe is a third of the area imaged by the wide-view probe.

Fig. 4. System layout of the new MRFL with a magnified inset view of the improved beam splitting and scanning assembly. The “h” shape of the system makes it compact and easier to maneuver.

Fig. 5. Comparison of ray-bundle obstruction when using the (top) native and (bottom) custom TOMG mounting pieces. Direction of light is into the paper. The beam footprint shown in each is a cross section taken at the beam splitter of the nearly collimated ray bundle. The custom mounting piece allows transmission of the beam without obstruction. Conversely, the native mounting piece obstructs one side of the beam, introducing non-rotationally symmetric vignetting.

Fig. 6. Comparison between the packaging dimensions and form factors of (top) the new MRFL prototype and (bottom) the prototype designed by Qin et al. [8]. The new prototype achieves a 32% decrease in overall packaging volume.

Fig. 7. Blue arrows indicate the non-rotationally symmetrical cutouts that provide clearance for the beam splitter mount to move. Left: part of the scanning lens housing is sectioned out to prevent metal-on-glass collisions with the beam splitter. Right: the notch removed from the beam splitter mount prevents it from hitting the scanning lens housing.

Fig. 8. Images of I3A/ISO Resolution Test Chart taken with the old (left column) and new (right column) MRFL systems. Each picture in the left column was taken using the same working distance, lighting conditions, and camera settings as its corresponding adjacent image in the right column. Row 1: images captured with the wide-view probes. Row 2: 2× zoomed-view images where the zoomed sub-region is centered in the surgical field. Row 3: 2× zoomed-view images where the zoomed sub-region is positioned at the edge of the surgical field. Row 4: 3× zoomed-view images where the zoomed sub-region is centered in the surgical field. Row 5: 3× zoomed-view images where the zoomed sub-region is positioned at the edge of the surgical field.

Fig. 9. Plot showing the measured wide-view MTF performances of the old and new MRFL systems along with the MTF curves of the design.

Fig. 10. Plots showing the measured zoomed-view MTF performances of the old and new MRFL systems along with the MTF curves of the design. Top: 2× zoom ratio; bottom: 3× zoom ratio.

Fig. 11. Comparison images of USAF 1951 resolution target taken with the old (left images) and new (right images) MRFL systems: (a) wide view; (b) 2× zoomed view; (c) 3× zoomed view.

Fig. 12. Illustration of the effect of LCA on the wide-view camera sensor and method of digital correction. The magenta dome denotes the pixel location, $(C_{Wx}, C_{Wy})$, of the distortion center on the sensor, which is the intersection of the optical axis with the image sensor.

Fig. 13. Wide-view LCA profiling. (a) Image of a vertical line grid taken with the wide-view probe. Matching radial line profiles are extracted from the red, green, and blue channels along the path denoted by the red arrow. The path travels normal to the grid lines and extends from the distortion center to the edge of the image. (b) The RGB line profiles are superimposed on the same plot, resulting in a series of peak triplets. Each triplet consists of a green intensity peak and its corresponding red and blue intensity peaks. (c) The displacements between the reference green peaks and their corresponding red or blue peaks are plotted against the radial distance of the green peak to the distortion center. Linear trendlines fitted to the data reveal the slope values, $M_{Wr}^{\mathrm{vert}}$ and $M_{Wb}^{\mathrm{vert}}$.

Fig. 14. Wide-view digital LCA correction results: (a) original image, (b) digitally corrected image.

Fig. 15. Representation of the additive nature of the zoom probe and endoscope LCA contributions in the red channel.

Fig. 16. Zoomed-view profiling rectification. (a) Image captured from the wide-view feed. The blue box indicates the region being imaged by the zoomed-view probe. A green hash mark was created by clicking the point of intersection of a grid line with the overlaid yellow line. (b) Image captured from the zoomed-view feed. The green hash mark appears in the zoomed view as well.

Fig. 17. Top: two zoomed-view sub-regions (indicated by the red boxes) were needed to span the full path from the distortion center to the edge of the surgical field. The RGB intensity profiles of each grid line in the two sub-regions were extracted and plotted separately to reveal an RGB peak triplet. Bottom: for a single grid line, the displacements of the red and blue peaks from the corresponding green peak are plotted against the radial distance of the green peak to the distortion center, resulting in a single red and a single blue data point on the graph. Repeating this process for every grid line in the two sub-regions results in the scatter plot above.

Fig. 18. Zoom-view distortion center calibration. (a) Circular grid generated via MATLAB script. (b) Circular grid displayed on a 4K monitor. The grid is positioned such that its center is aligned with the distortion center crosshairs overlaid on the wide-view feed. (c) The surgical field sampled by the zoom probe at nine reference locations arranged in a 3×3 grid. Each reference location corresponds to a known pair of azimuth and elevation parameters for the beam splitter.

Fig. 19. (a) The grid is centered on the wide-view distortion center, $C_W$, so that all radial grid lines converge at $(C_{Wx}, C_{Wy})$ in the wide-view coordinate system. (b) The intersection point of the radial lines in the zoomed-view image is the projected location, $(D_x^{\theta,\varphi}, D_y^{\theta,\varphi})$, of the wide-view distortion center in the zoomed-view coordinate system. By this method, $(D_x^{\theta,\varphi}, D_y^{\theta,\varphi})$ can be calculated even if it lies outside of the zoomed-view image, as is illustrated above.

Fig. 20. Zoomed-view digital LCA correction results for edge ROI: (a) original image, (b) digitally corrected image.
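The trendline fit described in the LCA profiling captions (Figs. 13 and 17) can be sketched in a few lines. This is a minimal sketch, assuming the peak displacements and their radial distances have already been extracted from the RGB peak triplets; the function name `fit_lca_slope` and the zero-intercept model (the chromatic displacement vanishes at the distortion center) are our assumptions, not the authors' implementation:

```python
import numpy as np

def fit_lca_slope(radii, displacements):
    """Least-squares slope M of the linear model d = M * r relating
    red/blue peak displacement d to radial distance r from the
    distortion center.

    A zero-intercept fit is assumed, with the closed-form solution
    M = sum(r*d) / sum(r*r).
    """
    r = np.asarray(radii, dtype=float)
    d = np.asarray(displacements, dtype=float)
    return float(np.dot(r, d) / np.dot(r, r))
```

Fitting the red- and blue-channel scatter data separately would yield the slope values $M_{Wr}$ and $M_{Wb}$ used by the correction equations.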

Tables (3)

Table 1. Relative Brightness Analysis of the New MRFL with Respect to Its Predecessor Using Average Pixel Value Ratios

Table 2. Resolution Limits of the Old and New MRFL Systems

Table 3. Measured Execution Times of Computer Operations

Equations (11)

$$T_{\rm obj} = {\rm AFOV}\,({\rm NA}_{\rm obj})^2\,\pi,$$

$$\begin{cases} \Delta R_W = \langle \Delta R_{Wx},\, \Delta R_{Wy} \rangle \\ \Delta R_{Wx} = M_{Wr}\,(x_{Wg} - C_{Wx}) \\ \Delta R_{Wy} = M_{Wr}\,(y_{Wg} - C_{Wy}) \end{cases}$$

$$\begin{cases} \Delta B_W = \langle \Delta B_{Wx},\, \Delta B_{Wy} \rangle \\ \Delta B_{Wx} = M_{Wb}\,(x_{Wg} - C_{Wx}) \\ \Delta B_{Wy} = M_{Wb}\,(y_{Wg} - C_{Wy}) \end{cases},$$

$$\begin{cases} x_{Wr}^{\rm New} = \dfrac{x_{Wr} - C_{Wx}}{1 + M_{Wr}} + C_{Wx} \\[6pt] y_{Wr}^{\rm New} = \dfrac{y_{Wr} - C_{Wy}}{1 + M_{Wr}} + C_{Wy} \end{cases},$$

$$\begin{cases} x_{Wb}^{\rm New} = \dfrac{x_{Wb} - C_{Wx}}{1 + M_{Wb}} + C_{Wx} \\[6pt] y_{Wb}^{\rm New} = \dfrac{y_{Wb} - C_{Wy}}{1 + M_{Wb}} + C_{Wy} \end{cases}.$$

$$\begin{cases} \Delta R_Z^{\theta,\varphi} = \langle \Delta R_{Zx}^{\theta,\varphi},\, \Delta R_{Zy}^{\theta,\varphi} \rangle \\ \Delta R_{Zx}^{\theta,\varphi} = \Delta R_x^{\theta,\varphi} + \Delta R_{Zx} = M_{Wr}\,(x_{Zg} - D_x^{\theta,\varphi}) + M_{Zr}\,(x_{Zg} - C_{Zx}) \\ \Delta R_{Zy}^{\theta,\varphi} = \Delta R_y^{\theta,\varphi} + \Delta R_{Zy} = M_{Wr}\,(y_{Zg} - D_y^{\theta,\varphi}) + M_{Zr}\,(y_{Zg} - C_{Zy}) \end{cases},$$

$$\begin{cases} \Delta B_Z^{\theta,\varphi} = \langle \Delta B_{Zx}^{\theta,\varphi},\, \Delta B_{Zy}^{\theta,\varphi} \rangle \\ \Delta B_{Zx}^{\theta,\varphi} = \Delta B_x^{\theta,\varphi} + \Delta B_{Zx} = M_{Wb}\,(x_{Zg} - D_x^{\theta,\varphi}) + M_{Zb}\,(x_{Zg} - C_{Zx}) \\ \Delta B_{Zy}^{\theta,\varphi} = \Delta B_y^{\theta,\varphi} + \Delta B_{Zy} = M_{Wb}\,(y_{Zg} - D_y^{\theta,\varphi}) + M_{Zb}\,(y_{Zg} - C_{Zy}) \end{cases},$$

$$\begin{cases} \Delta R_{Zx}^{\theta,\varphi} = \Delta R_x^{\theta,\varphi} = M_{Wr}\,(x_{Zg} - D_x^{\theta,\varphi}) \\ \Delta R_{Zy}^{\theta,\varphi} = \Delta R_y^{\theta,\varphi} = M_{Wr}\,(y_{Zg} - D_y^{\theta,\varphi}) \end{cases}$$

$$\begin{cases} \Delta B_{Zx}^{\theta,\varphi} = \Delta B_x^{\theta,\varphi} = M_{Wb}\,(x_{Zg} - D_x^{\theta,\varphi}) \\ \Delta B_{Zy}^{\theta,\varphi} = \Delta B_y^{\theta,\varphi} = M_{Wb}\,(y_{Zg} - D_y^{\theta,\varphi}) \end{cases}.$$

$$\begin{cases} x_{Zr}^{\rm New} = \dfrac{x_{Zr} - D_{Zx}^{\theta,\varphi}}{1 + M_{Wr}} + D_{Zx}^{\theta,\varphi} \\[6pt] y_{Zr}^{\rm New} = \dfrac{y_{Zr} - D_{Zy}^{\theta,\varphi}}{1 + M_{Wr}} + D_{Zy}^{\theta,\varphi} \end{cases},$$

$$\begin{cases} x_{Zb}^{\rm New} = \dfrac{x_{Zb} - D_{Zx}^{\theta,\varphi}}{1 + M_{Wb}} + D_{Zx}^{\theta,\varphi} \\[6pt] y_{Zb}^{\rm New} = \dfrac{y_{Zb} - D_{Zy}^{\theta,\varphi}}{1 + M_{Wb}} + D_{Zy}^{\theta,\varphi} \end{cases}.$$
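The corrective rescale of the form $x^{\rm New} = (x - C)/(1 + M) + C$, applied per color channel about the distortion center, can be sketched as an inverse-mapping resample. This is a minimal numpy sketch under our own assumptions (nearest-neighbor interpolation, an RGB image array, and the hypothetical helper names `rescale_channel` and `correct_wide_view_lca`); it is not the authors' real-time implementation:

```python
import numpy as np

def rescale_channel(ch, cx, cy, m):
    """Radially rescale one color channel about the distortion center
    (cx, cy) by the profiled LCA slope m.

    Implements x_new = (x - c)/(1 + m) + c via inverse mapping: each
    output pixel samples the input at c + (p - c)*(1 + m), here with
    nearest-neighbor interpolation for simplicity.
    """
    h, w = ch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    src_x = np.clip(np.rint(cx + (xs - cx) * (1.0 + m)).astype(int), 0, w - 1)
    src_y = np.clip(np.rint(cy + (ys - cy) * (1.0 + m)).astype(int), 0, h - 1)
    return ch[src_y, src_x]

def correct_wide_view_lca(img, cx, cy, m_r, m_b):
    """Shrink the red and blue channels of an H x W x 3 RGB image
    toward the green reference channel using slopes m_r and m_b."""
    out = img.copy()
    out[..., 0] = rescale_channel(img[..., 0], cx, cy, m_r)  # red
    out[..., 2] = rescale_channel(img[..., 2], cx, cy, m_b)  # blue
    return out
```

The same routine covers the zoomed view by substituting the projected distortion center $(D_{Zx}^{\theta,\varphi}, D_{Zy}^{\theta,\varphi})$ for $(C_{Wx}, C_{Wy})$; a real-time version would precompute the sampling maps once per calibration and reuse them every frame.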
