Optica Publishing Group

Predicting the Farnsworth–Munsell D15 and Holmes–Wright-A lantern outcomes with computer-based color vision tests

Open Access

Abstract

This study determined the AC1 agreement values between computer-based color vision tests and the Farnsworth–Munsell D-15 (F-D15) and the Holmes–Wright Type A lantern (HWA). The computer-based tests were the United States Air Force Cone Contrast Test (OCCT), Cambridge Color Vision Test, Innova Rabin Cone Contrast Test, Konan–Waggoner D15 (KWC-D15), and Color Assessment and Diagnosis (CAD). Sixty-eight color-vision-defective persons participated. The KWC-D15 had the highest AC1 with the F-D15 (${\rm AC1} = {0.88}$). Both the CAD and OCCT had the highest values with the HWA (${\rm AC1} \gt {0.96}$). The KWC-D15 would be the best substitute for the F-D15. Either the CAD or OCCT would be an appropriate substitute for the HWA.

© 2020 Optical Society of America

1. INTRODUCTION

The International Civil Aviation Organization (ICAO) is a specialized United Nations agency that works with member states and other stakeholders to develop international civilian aviation standards, practices, and policies. Member states agree to follow these standards and practices to ensure a safe and efficient aviation industry throughout the world. One of the many personnel licensing requirements established by the ICAO for an unrestricted pilot’s license is the color vision requirement. The ICAO standard is: “The applicant shall be required to demonstrate the ability to perceive readily those colors the perception of which is necessary for the safe performance of duties” [1].

Implementation of this requirement typically involves two stages of testing. In the first stage, the candidate is tested with a suitable pseudoisochromatic plate test. If the candidate fails the pseudoisochromatic plate test, then a subsequent stage of testing occurs. In the second stage, the candidate is assessed with a secondary test, such as a lantern test or anomaloscope, to determine whether the person could identify aviation red, green, and white [1,2]. Testing with pseudoisochromatic plates to screen for congenital red–green defects is nearly universal across civilian aviation agencies [3]. However, there is some variability in the tests used to evaluate candidates who fail the screening test at the second stage. The secondary test could be a lantern test, anomaloscope, Farnsworth D-15 (F-D15), or the Color Assessment and Diagnosis (CAD) test. Some states allow candidates who fail the secondary test to undergo a third level of testing, which could be a practical or operational test in an aircraft or simulator [3].

Canadian military and civilian aviation authorities follow the two-stage protocol. Civilian pilot candidates are tested first with one of the approved color vision screening tests. These tests are all pseudoisochromatic plate-type tests primarily used to screen for red–green defects. If candidates fail the screening test, then they must pass either the F-D15 or the Holmes–Wright Lantern Type A (HWA) to qualify for an unrestricted pilot’s license. Although the HWA is no longer manufactured, it is still in service at a few locations in Canada. The Royal Canadian Air Force (RCAF) follows a similar protocol. The screening tests are the 38 plate edition of the Ishihara color vision test and the Standard Pseudoisochromatic Plates-Part 2 (SPP2). The SPP2 is used primarily to screen for acquired blue–yellow defects. Individuals who fail either screening test are assessed further with the F-D15. If they pass the F-D15, they are considered as having met the color vision requirement to enter pilot training in the RCAF. The HWA is not an option for the RCAF pilot candidates.

Computer-based color vision tests have several advantages over the traditional printed tests. The first is that they have the potential to eliminate the problem of candidates memorizing the stimuli in advance of the test. Memorization of the printed pseudoisochromatic plate tests, particularly abbreviated versions, is a concern because the answers are posted on numerous Internet sites. Of course, the test administrator can counteract memorization by blanking out the page numbers and presenting the plates in random order. Nevertheless, this does not eliminate the possibility that the candidate could memorize the plates based on unique patterns that may be visible on each plate. Computer-based tests can eliminate this problem by generating unique or highly unpredictable stimuli for each test. Another advantage of computer-based testing is that there is less potential for administrator bias. Examples of administrator bias include recording incorrect or ambiguous responses as correct, or allowing an excessive amount of time to view the pseudoisochromatic plates or to arrange the colored F-D15 caps [4]. For computer-based tests, the stimulus duration is computer-controlled, and responses are recorded and scored automatically, thus eliminating bias. Two other advantages are that recording errors are eliminated, and the test result may integrate seamlessly with the electronic medical record. Finally, several of the computer tests measure chromatic thresholds for individuals with normal color vision (CVN) and individuals with color vision defects (CVDs). This capability can give a quantitative assessment of the severity of any CVD and allow for better monitoring of color discrimination throughout a pilot’s career.

The focus of this study was to determine the extent to which these new computer-based color vision tests could be used to replace the F-D15 and HWA color vision tests as the second stage of color vision testing for Canadian pilots. Specifically, we examined the capacity of these tests to predict the outcomes of the F-D15 and HWA. This study is also an extension of Cole and Vingrys’ work on using clinical tests to predict the outcomes on the HWA [5]. The study does not address the appropriateness of the current Canadian standards.

2. METHODS

A. Test Description and Procedures

The ColorDx color vision test (Konan Medical, Irvine, CA) consisted of a screening test, which used the digital version of the Waggoner (Waggoner Diagnostics, Rogers, AR) pseudoisochromatic plates in conjunction with a digital version of the F-D15. This digital version of the D15 will be referred to as the Konan–Waggoner Computerized D15 (KWC-D15).

In the KWC-D15 test, the individual test colors are presented as colored disks in the middle third of the monitor. The subject’s task is to select the colored disk that is most similar to the last filled rectangle at the top of the screen and drag it to the first empty rectangle. At a viewing distance of 50 cm, the angular subtense of the rectangles is 1.2° by 2.9°, and the diameter of the circles is 2.3°.
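As a check on these values, the angular subtense follows directly from the physical stimulus size and the viewing distance. The sketch below uses an assumed disk diameter of about 2.0 cm, which is not stated in the text, and reproduces the reported 2.3° figure:

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Angular subtense (degrees) of a stimulus of `size_cm` at `distance_cm`."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# An assumed disk diameter of ~2.0 cm at the 50 cm viewing distance
# yields the 2.3 deg subtense reported for the KWC-D15 circles.
print(round(visual_angle_deg(2.0, 50), 1))  # prints 2.3
```

The same function applied to the rectangle dimensions or to the other tests' stimuli gives their reported subtenses, within rounding.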

The test was displayed on a Microsoft Surface Pro (model number 1631) with the Windows 10 operating system. The monitor was calibrated every 30 days to a white reference of 6500 K correlated color temperature using a Spyder colorimeter (ver. 4.5.4; Datacolor, Lawrenceville, NJ). The average luminance of the rectangles and circles was ${17}\;{{\rm cd}/{\rm m}^2}$. Subjects viewed the test binocularly. The test was administered 3 times without feedback. The subjects had to pass two out of the three trials for an overall passing performance.

The CAD test measures chromatic thresholds for 16 different vectors that straddle the three dichromatic lines of confusion through the gray background [6]. The stimulus is a colored square that moves diagonally within the gray background. The stimulus subtends 1.6°, and the background subtends 2.9° at the 1.4 m viewing distance. Dynamic luminance contrast noise is added to both the stimulus and background. The small individual squares making up the background and the stimulus change their luminance every 50 ms so that the display looks as if it is scintillating. The subject’s task is to identify, using a keypad, in which of the four diagonal directions the square had moved. A four-alternative forced-choice procedure is used to determine the observer’s chromatic threshold. The 16 colors are presented randomly.

The CAD unit, supplied by City Occupational Ltd (London), was version 2.3.3. It was installed on a Toshiba laptop (model number TECRA R950-1EJ) with the Windows 8 Pro operating system. Stimuli were presented on a NEC monitor (242 W-BK). The average luminance of the gray background is ${24}\;{{\rm cd}/{\rm m}^2}$. An LMT photometer (GOSSEN, Germany) was used to calibrate the monitor every 30 days. The test was viewed binocularly.

The Cambridge Color Vision Test (CCVT) measures chromatic thresholds within a gray background [7]. There are two test protocols. The first is the TriVector (CCVTTri), which measures thresholds along protanopic, deuteranopic, and tritanopic lines of confusion through the gray background. The other test (CCVTEll) measures thresholds from the gray background in equally spaced intervals within the ${u^{\prime}v^{\prime}}$ chromaticity diagram. The program then calculates the best-fit discrimination ellipse. The stimulus is a Landolt ring. The subject’s task is to identify the location of the gap. Both the figure and background color vary randomly in luminance to ensure that only differences in hue are used to identify the target. A four-alternative forced-choice staircase procedure is used to determine the threshold.

Subjects viewed the monitor from a distance of 313 cm. At this distance, the Landolt ring subtended 4.3°, and the gap subtended 1.0°. The luminance noise ranged from 8 to ${18}\;{{\rm cd}/{\rm m}^2}$. The Landolt ring was presented for up to 4 s. The CCVTTri test was viewed monocularly with the right eye tested first, and the CCVTEll was viewed binocularly. Eight different vectors were selected for the CCVTEll test. The CCVTTri was performed before the CCVTEll. The program (ver. 2.3; Cambridge Research Systems, Ltd.) was installed on a PC (ASUS Intel Pentium 4) with the Windows XP operating system. The test figure was presented on a 53.3 cm cathode ray tube (CRT) monitor (Sony model GDM-F520). The monitor was calibrated every 30 days using a ColorCAL colorimeter (Konica Minolta Co., Ltd.).
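The CCVT's actual staircase parameters are not given here. Purely as an illustration of how a forced-choice staircase homes in on a chromatic threshold, the following sketch implements a generic 2-down/1-up rule with a simulated observer; the threshold, step size, trial count, and observer model are all assumptions, not the test's specification:

```python
import random

def run_staircase(true_threshold: float, trials: int = 60,
                  start: float = 0.5, step: float = 0.05, seed: int = 1) -> float:
    """Minimal 2-down/1-up staircase for a 4-alternative forced-choice task.

    The simulated observer responds correctly with high probability when the
    stimulus level is at or above `true_threshold` and guesses (p = 0.25,
    chance for 4-AFC) below it.  Illustrative only.
    """
    rng = random.Random(seed)
    level, correct_streak, reversals = start, 0, []
    going_down = True
    for _ in range(trials):
        p_correct = 0.95 if level >= true_threshold else 0.25
        if rng.random() < p_correct:
            correct_streak += 1
            if correct_streak == 2:          # two correct in a row -> harder
                correct_streak = 0
                if not going_down:
                    reversals.append(level)  # direction change: record it
                going_down = True
                level = max(level - step, 0.0)
        else:
            correct_streak = 0               # any error -> easier
            if going_down:
                reversals.append(level)
            going_down = False
            level += step
    # Threshold estimate: mean of the last few reversal levels
    tail = reversals[-6:] if len(reversals) >= 6 else reversals
    return sum(tail) / len(tail) if tail else level

print(f"estimated threshold ~ {run_staircase(true_threshold=0.2):.2f}")
```

With these assumed parameters the estimate settles near the simulated observer's true threshold of 0.2; the actual CCVT and OCCT adaptive procedures are more sophisticated (the OCCT uses the Ψ method), but the convergence principle is the same.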

The next color vision test was part of the Operational Based Vision Assessment program developed by the United States Air Force. It is a laboratory-based cone contrast test (OCCT) and a prototype of the Konan Medical ColorDx CCT-HD (Irvine, CA). The stimuli are Landolt rings, which subtend 1.4° with a gap of 0.3° at 1 m viewing distance. The subject’s task is to identify the location of the gap using the keyboard. Thresholds for each cone mechanism are determined using a 4-AFC with the Ψ adaptive method [8]. The Ψ adaptive method can estimate the threshold or estimate the threshold and the slope of the psychometric function.

The test was administered monocularly [OCCT(M)] and binocularly [OCCT(B)]. For the OCCT(M), the slope of the psychometric function was fixed at 2.6 for the L and M cone thresholds and 1.9 for the S-cone threshold. These values were based on data from the United States Air Force Operational Based Vision Assessment Laboratory. The monocular trials began with a practice session where suprathreshold stimuli were presented. There were eight presentations for each cone stimulus. Next, the contrast threshold for each cone was calculated after 20 additional presentations. The right eye was tested first. For the OCCT(B), the threshold and slope were calculated after 30 presentations. Monocular trials were performed before the binocular trials. The three different cone stimuli were presented randomly.

The OCCT program (ver. 1.1.0) was run on a desktop (Lenovo Intel Core i5) with the Windows 7 Professional operating system. The stimulus was presented on an NEC monitor (model 232 W-BK). The luminance of the gray background was ${69}\;{{\rm cd}/{\rm m}^2}$. The monitor was calibrated every 30 days using an X-Rite i1Display Pro colorimeter (model EODIS3).

The Rabin Cone Contrast Test (RCCT) was the commercial version 16.02.0 supplied by INNOVA Systems (Burr Ridge, IL). The RCCT estimates the cone contrast sensitivity of each of the L, M, and S cone mechanisms [9]. The stimuli are Sloan letters that subtend 0.4° at the 60 cm viewing distance. The letters are presented individually in the center of the computer screen, and the subject indicates which letter is presented by using a mouse to select the letter from the key displayed on the monitor. Cone contrast sensitivity is based on the number of correct responses. Each eye was tested separately, with the right eye tested first.

The RCCT test was displayed on a 27.9 cm Acer laptop using the Windows 7 operating system. This version of the program required that the monitor was calibrated weekly using a Spyder colorimeter (Express ver. 4.5.4). The white reference had a correlated color temperature of 6500 K. The luminance of the gray background was ${19}\;{{\rm cd}/{\rm m}^2}$.

The monitor warm-up time before collecting data or calibration was 15 min for the LCD/LED displays and 20 min for the CRT monitor.

The F-D15 started with all the loose caps removed from the box and arranged randomly on the table in front of the subject. Subjects were asked to select the loose cap most similar in color to the previous cap placed in the box and to place it next in the sequence. They were allowed to rearrange the caps once they were placed in the box. The test was administered 3 times without any feedback. Scoring was based on the Color Difference Vector analysis [10]. The subjects had to pass two out of the three trials for an overall passing performance. The light source was an Illuminant C fluorescent lamp (X-Rite, Grand Rapids, MI) with an average illuminance of 1400 lx on the plane of the table. The current Canadian standards do not specify a time limit. Accordingly, there was no time limit for completing either the KWC-D15 or F-D15. Nevertheless, subjects were encouraged to finish a trial within 5 min.

The HWA was viewed from 6 m. The test was illuminated at 180 lx in a plane parallel to the floor at table height [11]. The test started by showing examples of red, green, and white light at the DEMO brightness setting. Subjects could review the lights if they wished. The brightness was then changed to high, and 27 pairs of the test lights were presented. The starting positions for the second and third runs were randomly varied. There was no time limit for the presentation, but the subjects were encouraged to respond within 10 s. Misnaming any single light was counted as an error, and a failure was more than 2 errors on 27 pairs [12]. Room illumination for the computer-based tests was 1 lx (Konica Minolta T-1 Illuminance Meter, Ramsey, NJ) measured 1 m from the floor in a plane parallel to the floor.

The order of testing was determined using a random block design. For the monocular trials, the right eye was always tested first so that the sequence was consistent with the RCCT and OCCT monocular testing protocols. Time to complete each test was also recorded using a stopwatch or cell phone application. The time to complete did not include the instructions or any practice session. The time to complete a test could be a secondary factor to consider in selecting a test.

B. Participants

The study received ethics clearance through the Office of Research Ethics at the University of Waterloo (ORE 20996) and the Defence Research and Development Canada Human Research Ethics Committee (Protocol 2014-044). Sixty-eight civilian participants with congenital red–green CVD participated in the study. They were recruited through posters, social media, and newsletter advertisements. The average age of the CVD group was 27.6 years (${\rm sd} {\pm 10.6}$). Color vision was classified according to the Rayleigh color match using the Oculus HMC anomaloscope (Oculus Optikgeräte GmbH, Wetzlar, Germany) in the neutral adaptation mode. Eight deuteranopes, 32 deuteranomalous trichromats, 19 protanopes, and 9 protanomalous trichromats participated in the study. The CVD group was predominantly male (89.7% males and 10.3% females). Dichromacy was based on matching the entire range of red–green settings. Included in the deuteranomalous group was one subject who would be considered a “Pigmentfarbenamblyopie” case based on his normal settings on the anomaloscope but his failure of all the other color vision tests used in this study [13]. He was placed in the deuteranomalous group because his results on the other tests were typical of a deutan defect.


Table 1. Agreement Analyses for the Comparison between the F-D15 and Selected Tests

Fifty-five subjects with CVN also participated. Only their time to complete each test is presented. Their data are used as a benchmark for the CVD results because they would not be subject to the second level of color vision testing. The CVN participants were 50% females and 50% males, and the average age was 26.3 years (${\rm sd} \pm {9.4}$).

Tinted contact lenses or spectacles were not allowed. Ocular diseases were ruled out using a short questionnaire. Corrected or uncorrected visual acuities had to be at least 6/6 in the better eye and 6/9 in the other eye at 6 m, and 0.8 M in the better eye and 1.0 M in the other eye at 40 cm. The acuity criteria were based on the RCAF requirements, but the civilian Category 1 license criteria (i.e., commercial) are similar, with a minimum acuity of 6/9 in each eye and 6/6 binocularly.

C. Data Analysis

The computer test parameters evaluated were as follows: the KWC-D15 C-index; the CAD red–green threshold; the highest of the CCVTTri red–green thresholds and the average of the protan/deutan thresholds; the CCVTEll elliptical area; the highest of the L- or M-cone thresholds and the average of the L- and M-cone thresholds for both the OCCT(M) and OCCT(B); and the lowest of the RCCT L- or M-cone sensitivities and the average of the L- and M-cone sensitivities. The reason for examining both the individual L- and M-cone contrast results and the average of the L and M values was to be consistent with the CAD red–green Standard Normal Unit (SNU). The SNU is the average of the thresholds for the protan and deutan vectors. The monocular data from the CCVTTri, OCCT(M), and RCCT were averaged between eyes.

Except for the KWC-D15, receiver operating characteristic (ROC) analyses (SigmaPlot ver. 11.0; Systat Software Inc., Chicago, IL) were performed using each test parameter to determine the cutoff score that would give the maximum sum of the sensitivity and specificity with the F-D15 and HWA. If multiple cutoffs gave the same sum, the cutoff with the highest sensitivity was selected. The C index was used to determine whether the subject passed or failed both D15 tests. A C index $ {\gt} {1.78}$ was a failure [10]. This value corresponds to more than one major crossing.
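The cutoff-selection rule just described (maximize sensitivity plus specificity, breaking ties in favor of the higher sensitivity) can be sketched as follows; the scores below are hypothetical and are not the study's data:

```python
def best_cutoff(scores_fail, scores_pass):
    """Pick the cutoff maximizing sensitivity + specificity, breaking ties
    in favor of the higher sensitivity.  A score above the cutoff counts
    as a failure on the computer-based test.

    scores_fail / scores_pass: test-parameter values for subjects who
    failed / passed the standard test (F-D15 or HWA).
    """
    best = None
    for c in sorted(set(scores_fail) | set(scores_pass)):
        sens = sum(s > c for s in scores_fail) / len(scores_fail)
        spec = sum(s <= c for s in scores_pass) / len(scores_pass)
        key = (sens + spec, sens)            # tie-break on sensitivity
        if best is None or key > best[0]:
            best = (key, c, sens, spec)
    return best[1], best[2], best[3]

# Hypothetical chromatic thresholds (higher = worse discrimination)
fail_scores = [8.1, 9.4, 12.0, 15.3, 7.7]
pass_scores = [2.1, 3.5, 4.0, 6.8, 7.7]
cutoff, sens, spec = best_cutoff(fail_scores, pass_scores)
print(cutoff, sens, spec)  # prints 6.8 1.0 0.8
```

In this toy example the cutoffs 6.8 and 7.7 tie on sensitivity + specificity (1.8), and the rule selects 6.8 because it gives the higher sensitivity.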

In order to compare the agreement with the F-D15 and HWA across the computer tests, the AC1 coefficient of agreement and the predictive pass/fail values were calculated. Similar to the kappa coefficient of agreement, AC1 values can vary from $ - {1}$ to 1, with $ - {1}$ indicating complete disagreement, 0 meaning that any agreement is due to chance, and 1 indicating perfect agreement. One of the differences between the AC1 and kappa coefficients is that the AC1 value is less affected by large asymmetries in the marginal totals [14,15]. The agreement values were calculated using AgreeStat ver. 2013.2 (Advanced Analytics, Gaithersburg, MD).
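For two tests with two categories (pass/fail), Gwet's AC1 reduces to a short calculation on the 2×2 table: observed agreement corrected by a chance-agreement term built from the mean marginal "fail" proportion. The counts below are hypothetical, chosen only to illustrate the arithmetic:

```python
def gwet_ac1(a: int, b: int, c: int, d: int) -> float:
    """Gwet's AC1 for two pass/fail tests, given the 2x2 counts:
    a = fail both tests, b = fail computer test / pass standard test,
    c = pass computer test / fail standard test, d = pass both tests.
    """
    n = a + b + c + d
    pa = (a + d) / n                       # observed agreement
    # mean marginal proportion of "fail" across the two tests
    pi = ((a + b) / n + (a + c) / n) / 2
    pe = 2 * pi * (1 - pi)                 # chance agreement (2 categories)
    return (pa - pe) / (1 - pe)

# Hypothetical counts: 40 fail both, 3 disagree each way, 22 pass both
print(round(gwet_ac1(40, 3, 3, 22), 2))   # prints 0.84
```

Unlike kappa, the chance term depends on the averaged marginals rather than their product, which is why AC1 degrades more gracefully when nearly everyone falls in one category, as happens in the HWA comparison.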

In the comparisons with the computer-based tests, the F-D15 and the HWA were each considered the “standard” test. Although sensitivity (i.e., the proportion who failed the standard test and also failed the computer-based test) and specificity (i.e., the proportion who passed the standard test and also passed the computer-based test) can be used to compare the computer tests with each of the current tests, the predictive values of the computer tests are more useful to clinicians. The predictive value for passing (PreP) indicates how well a pass on the computer-based test identifies those who also passed the standard test (i.e., either the F-D15 or HWA); it is the proportion of individuals who passed the computer-based test and also passed the standard test. The predictive value for failing (PreF) indicates how well a failure on the computer-based test identifies those who failed either the F-D15 or HWA; it is the proportion of individuals who failed the computer-based test and also failed the standard test.
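All four indices are simple ratios over the 2×2 pass/fail counts; sensitivity and specificity condition on the standard-test outcome, while PreP and PreF condition on the computer-test outcome. A sketch with hypothetical counts (not the study's data):

```python
def diagnostic_indices(a: int, b: int, c: int, d: int) -> dict:
    """Indices from the 2x2 counts:
    a = fail both tests, b = fail computer test / pass standard test,
    c = pass computer test / fail standard test, d = pass both tests.
    """
    return {
        # of those who failed the standard test, fraction failing the computer test
        "sensitivity": a / (a + c),
        # of those who passed the standard test, fraction passing the computer test
        "specificity": d / (b + d),
        # of computer-test passes, fraction who also passed the standard test
        "PreP": d / (c + d),
        # of computer-test fails, fraction who also failed the standard test
        "PreF": a / (a + b),
    }

idx = diagnostic_indices(a=40, b=3, c=3, d=22)
print({k: round(v, 2) for k, v in idx.items()})
# prints {'sensitivity': 0.93, 'specificity': 0.88, 'PreP': 0.88, 'PreF': 0.93}
```

With symmetric disagreements (b = c), PreP mirrors specificity and PreF mirrors sensitivity; the values diverge as the marginal pass/fail rates of the two tests differ.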

3. RESULTS

A. Comparison with the F-D15

Table 1 lists the comparison indices with the various tests. The results are arranged in descending order according to the AC1 agreement value. The area under the ROC curve is also included as an indication of the relative accuracy of the various tests independent of the pass/fail criterion [16]. Given that both the ROC area and the AC1 index reflect the ability of the test to separate the CVD subjects who pass and fail the F-D15, the values follow a similar trend. The AC1 agreement values show that all the tests separated CVD subjects who passed or failed the F-D15 at a level better than chance. Nevertheless, the KWC-D15 was significantly better than all the rest of the tests based on the 95% confidence interval. The CAD was second best. The remaining tests had similar results, with moderate levels of agreement.

In general, the PreF values were good-to-excellent for all tests, with the KWC-D15 having a perfect PreF that was significantly higher than those of the rest of the tests. This result means that if a person fails the KWC-D15, he or she would almost certainly fail the F-D15. High PreF values also indicate that the specificity of the test was excellent. The PreP was highest for the KWC-D15. Based on the 95% confidence interval, the KWC-D15 PreP was significantly higher than the values for the OCCT(B) average of L- and M-cone thresholds, the CCVTEll, and the monocular test values. The KWC-D15 PreP, however, was not statistically different from the CAD PreP or the PreP values for the OCCT(M) and OCCT(B) maximum thresholds.

Nevertheless, our results indicate that the KWC-D15 was slightly less sensitive using the same failure criterion on both tests. The agreement values can be improved slightly by changing the KWC-D15 failure criterion to $ {\gt} {1.6}$. This increases the AC1 agreement value to 0.91 (95% CI, 0.82–1.0), the PreP to 0.90 (95% CI, 0.71–0.95) and the PreF remains at 1.0 (95% CI, 0.90–1.0). Lowering the KWC-D15 C index further resulted in a reduction of the PreF value, with a slight increase in the number of between-test discrepancies.

The lower levels of agreement, combined with the lower PreF and PreP, indicate that the chromatic threshold distributions of the subjects who passed or failed the F-D15 overlap to varying degrees. Contributing to this overlap is the fact that we pooled the protans and deutans into one group. There were two reasons for pooling the groups. The first was to be consistent with current requirements, which are based only on a pass/fail result without considering the type of defect. The second was that, when the subjects were subdivided, the area under the ROC curve for the protan group was not significantly better than chance for many of the tests. The CAD and OCCT(B) were exceptions: for both tests, the area under the ROC curve was significantly greater than chance for the protans as well as the deutans.

Figures 1–3 show the dot histograms of the CAD, OCCT(B), and RCCT scores for those who passed and failed the F-D15. The RCCT test was selected to illustrate the results where the area under the protan ROC would not have been better than chance. In these figures, the type of defect is based on the outcome of the individual tests with color-normal test results and unclassified defects categorized as deutans. This classification system resulted in 100% agreement with the anomaloscope’s classification for each of these three tests.


Fig. 1. Dot histogram showing the distribution of CAD threshold values for those protan and deutan subjects who passed or failed the Farnsworth D15. The solid line is the CAD score, which maximizes the sum of the sensitivity and specificity when the subjects are pooled into one group. The short dashed line is the score that maximizes the sensitivity with a specificity value greater than 0 for the protans, and the long dashed line is the score that maximizes the sensitivity with a specificity value greater than 0 for the deutans.


Fig. 2. Dot histogram showing the distribution of OCCT(B) maximum (Max) of either the L- or M-cone threshold for those subjects who passed (solid circles and triangles) or failed (open circles and triangles) the Farnsworth D15. The solid line is the OCCT(B) score, which maximizes the sum of the sensitivity and specificity when the subjects are pooled into one group. The short dashed line is the score that maximizes the sensitivity with a specificity value greater than 0 for the protans, and the long dashed line is the score that maximizes the sensitivity with a specificity value greater than 0 for the deutans.


Fig. 3. Dot histogram showing the distribution of lowest sensitivity value of either the RCCT L or M cone for those subjects who passed (solid circles and triangles) or failed (open circles and triangles) the Farnsworth D15. The values are the averages of the individual monocular sensitivities. The solid line is the RCCT score, which maximizes the sum of the sensitivity and specificity when the subjects are pooled into one group.


Table 2. Agreement Analyses for the Comparison between the F-D15 and Selected Tests Using Separate Cutoff Values for Protans and Deutans


Table 3. Agreement Analyses for the Comparison between the HWA and Selected Tests

The figures show several general findings that are representative of the other test results. First, regardless of the type of defect, the majority of threshold values for those who failed the F-D15 clustered at the higher values. In contrast, the threshold values for those who passed the F-D15 covered a larger range, and several individuals overlapped with the values of those who failed, especially on the RCCT. Second, the protan thresholds clustered at higher threshold values (lower sensitivity) on all three tests.

The short dashed lines in Figs. 1 and 2 are the cutoff scores for the protans, and the long dashed lines are the cutoff scores for the deutans, based on maximizing the sum of the sensitivity and specificity within each CVD group for each test. Table 2 shows the agreement indices for the separate cutoff scores. There was a marginal increase in the AC1 coefficient of agreement, an increase in the PreP values, and a corresponding decrease in the PreF values. The increase in the PreP values was statistically significant, based on the values in Table 2 falling outside the 95% confidence intervals listed in Table 1. The increase in the PreP was due to a decrease in the false negatives, who were all deutans. There was also a relatively smaller increase in the false positives, which reduced the PreF values.

B. Comparison with the HWA

Table 3 lists the comparison indices of each test relative to the HWA. The F-D15 is included to show the level of agreement between the two standard tests used by the civil aviation authorities. The comparisons with the D15 tests were based on the failure criterion of a C index $ {\gt} {1.78}$. Although only 7.3% (${n}={5}$) of the CVD subjects passed the HWA, the levels of agreement were statistically indistinguishable from 1.0 for most of the computer tests. Separate analyses for the protans and deutans were not done because none of the protans passed the lantern. The agreement values for the D15 tests were significantly lower than the other values, but better than chance based on the 95% confidence intervals. The ROC results parallel the AC1 values. The PreP was equal to 1.0 for several tests, although the precision of the value was low because of the small number of subjects who passed the HWA. Because of the low statistical power, only the PreP values for the maximum threshold of the CCVTTri, F-D15, and KWC-D15 were significantly lower. On the other hand, the PreF values approached 1.0 for all tests, which indicates that nearly all the people who failed these clinical color vision tests also failed the HWA.


Table 4. Average Time in Minutes to Complete Each of the Tests for Both CVN and CVD Subjects

C. Completion Times

Table 4 lists the average times to complete each test for both CVN and CVD subjects. The HWA took the shortest time to complete. The two versions of the D15 and the RCCT had similar completion times, whereas the threshold tests took longer to complete. Relative to the CVN group, the CVD group took a similar amount of time to complete the OCCT(M) and CCVTTri. However, the CVD group required relatively less time to complete the OCCT(B) and CCVTEll and more time to complete the HWA, both versions of the D15, and the CAD. Note that the time to complete the D15 tests does not include instructions, but the time does include recording or saving the results of each arrangement. Within the D15 tests, the mean completion times for the CVD subjects who passed the F-D15 and KWC-D15 were 4.54 min (${\rm sd}= \pm {2.32}$) and 4.66 min (${\rm sd}= \pm {2.02}$), respectively. The mean completion times for the CVD subjects who failed the F-D15 and KWC-D15 were 5.33 min (${\rm sd}= \pm {2.53}$) and 6.17 min (${\rm sd}= \pm {2.40}$), respectively. The maximum amount of time required to complete three arrangements of either one of the D15s was 15 min. Two subjects who took this long (one on each D15 test) failed the respective test on all three arrangements.

Analysis of variance with the Dunnett T3 post hoc test for unequal variances was performed using IBM SPSS (ver. 25.0.2 for Windows; IBM Corp., Armonk, NY). The times to complete the F-D15 for the CVN subjects, the CVD subjects who passed the F-D15, and the CVD subjects who failed the F-D15 were significantly different (${{F}_{2,117}}={25.5}$, ${p} \lt {0.001}$). Both CVD groups were significantly slower than the CVN group (${p} = {0.011}$ for the CVD subjects who passed; ${p} \lt {0.001}$ for the CVD subjects who failed). The difference in completion times between the two CVD groups did not reach statistical significance (${p}={0.084}$). For the KWC-D15, the main effect of color vision group was also statistically significant (${{F}_{2,117}}={15.18}$, ${p} \lt {0.001}$). The CVD group who failed the KWC-D15 was significantly slower than the CVN group (${p} \lt {0.001}$) and the CVD group who passed (${p} = {0.023}$). The difference in completion times between the CVN subjects and the CVD subjects who passed did not reach statistical significance (${p}={0.34}$). The CVN group took more time to complete the KWC-D15 than the F-D15, which is why their completion times were not significantly different from those of the CVD group who passed the KWC-D15.

4. DISCUSSION

This study aimed to determine which of the newer computerized tests would be a suitable replacement for the F-D15 and HWA tests currently used by the Canadian civilian and military aviation authorities. The F-D15 is currently used at the second testing stage by both civilian and military authorities to determine whether a person with a CVD has adequate color discrimination to operate an aircraft safely [1]. The Canadian civilian aviation authorities use the HWA as an alternative to the F-D15. The primary indices for determining whether the newer tests would be suitable replacements were the level of agreement between the newer and current tests and the predictive fail and predictive pass values of the new tests. If more than one of the newer tests had an equivalent level of agreement with the F-D15 and HWA, then test selection would be based on other factors, such as whether the test could also be used for screening, whether it can classify the severity of the defect for occupational purposes, or the time to complete the test [6,9,17–19].

A. Agreement with the F-D15

The KWC-D15, which is a computer analog of the F-D15, had the highest level of agreement with the F-D15, the highest PreP, and a PreF that equaled 100%. These results indicate that the digital reproduction of the test colors was excellent and that the two tests are nearly equivalent. Nevertheless, the agreement was not perfect in that the PreP was 0.87 rather than 1.0. This PreP indicates that 13% of the CVD subjects who passed the KWC-D15 would fail the F-D15; that is, the KWC-D15 is slightly less sensitive than the F-D15. A possible reason for the lower sensitivity is that the circles on the KWC-D15 are twice the angular size of the circles on the F-D15. Some CVD subjects do show improved color discrimination on the D15 and anomaloscope when the field size is increased [20–22], although protans are more likely to show improvement on the D15 than deutans [21]. Practice in learning how to interpret the information provided by the larger stimuli may play a role in whether improvement occurs [20]. It was not possible to confirm this finding because of the small number of discrepancies: in this study, two deuteranomalous subjects and one protanomalous subject failed the F-D15 and passed the KWC-D15. Another possible reason for the difference in pass rates is that the KWC-D15 colors are a metameric match to the Munsell papers for the standard observer, and so the colors may not be metameric matches for anomalous trichromats. The lower sensitivity of the KWC-D15 could be counterbalanced by lowering the C index criterion for a failure on the KWC-D15 to 1.6. This change marginally increased the agreement and PreP values and reduced to 8% the percentage of CVD subjects who passed the KWC-D15 but failed the F-D15.

The C index was used as the parameter of comparison rather than the number of crossings for two reasons. First, there is no examiner bias in interpreting the arrangements when using the C index. Second, the C index takes into account both crossings and transpositions [10], and its value is highly correlated with the number of crossings [23]. However, the Total Error Score (TES) might be the better index in cases where there is more randomness in the arrangements and the scatter index would be reduced, as could occur with multiple transpositions and only one major crossing. Our results are inconclusive as to which index to use. First, none of the subjects in this study had just one crossing with multiple transpositions. Second, there were no discrepancies between the C index and the TES in scoring the results as pass/fail because the two values were highly correlated with each other in our study ($r = 0.99$). Nevertheless, if the pass/fail criterion for the D15 tests were any major crossing, then the TES may be the better index for evaluating results with multiple transpositions.
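The Vingrys–King-Smith scoring works from the color-difference vectors between successively placed caps [10]. The sketch below is a much-simplified confusion-index-style score, not the full published method (which also yields an angle and a scatter index), and the cap coordinates are placeholder values rather than the actual D15 chromaticities.

```python
import numpy as np

# Placeholder cap coordinates on a hue circle (NOT the published D15
# chromaticities; illustrative only). Index 0 is the pilot cap.
CAPS = np.array([[np.cos(t), np.sin(t)]
                 for t in np.linspace(0, 1.8 * np.pi, 16)])

def confusion_index(order):
    """RMS length of the color-difference vectors of an arrangement,
    normalized so a perfect arrangement scores 1.0; crossings produce
    long vectors and drive the score above 1."""
    def rms(seq):
        d = np.diff(CAPS[list(seq)], axis=0)
        return np.sqrt(np.mean(np.sum(d ** 2, axis=1)))
    return rms(order) / rms(range(16))
```

An arrangement with major crossings (caps repeatedly placed from opposite sides of the hue circle) produces a score well above 1, mirroring how the C index separates mild from severe defects.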

Without separate scores for the protans and deutans, the agreement of the F-D15 with the other computer-based tests ranged from moderate to good, with the CAD having the highest level of agreement. The PreP and PreF of the other computer-based tests showed a corresponding decrease relative to the KWC-D15 values. Nevertheless, the PreF values for the computer-based tests remained higher than the PreP values, indicating that the computer-based tests were better at predicting who would fail the F-D15 than who would pass it. Between 18% and 45% of the candidates who would pass one of these computer tests could fail the F-D15, and between 18% and 24% of those who failed a computer-based test could pass the F-D15.
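For a binary pass/fail comparison between two tests, Gwet's AC1 and the predictive values reduce to closed forms [14,15]. A minimal sketch, assuming the 2×2 cell counts named in the comments (the layout is an assumption; the paper's actual tables are not reproduced here):

```python
def agreement_stats(a, b, c, d):
    """AC1 agreement and predictive values from a 2x2 pass/fail table.
    a = fail both tests, b = fail candidate test only,
    c = fail reference test only, d = pass both tests."""
    n = a + b + c + d
    pa = (a + d) / n                       # observed agreement
    q = ((a + b) / n + (a + c) / n) / 2    # mean marginal 'fail' proportion
    pe = 2 * q * (1 - q)                   # chance agreement (Gwet 2008)
    ac1 = (pa - pe) / (1 - pe)
    pre_f = a / (a + b)                    # predictive fail value (PreF)
    pre_p = d / (c + d)                    # predictive pass value (PreP)
    return ac1, pre_f, pre_p
```

Unlike Cohen's kappa, AC1 stays stable when the marginal pass/fail proportions are extreme, which matters here because most CVD subjects fail the HWA.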

The increase in the PreP values and the reduction in the number of false negatives to 2.5% for both the CAD and OCCT(B) justify having separate cutoff scores for protans and deutans on these two tests. The separate criteria for protans and deutans maximized the sensitivity of the CAD and OCCT(B). The degree of overlap on the other tests between those who passed and those who failed the F-D15, as illustrated in Fig. 3 for the RCCT, indicates that there is no advantage to using separate cutoff scores on those tests.
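The cutoff scores behind these comparisons were chosen in the ROC sense of maximizing the sum of sensitivity and specificity [16]. A minimal sketch of that selection (Youden's J), using made-up threshold data in which a higher score means poorer discrimination:

```python
import numpy as np

def best_cutoff(scores, failed_ref):
    """Return the score cutoff (score >= cutoff predicts a reference-test
    failure) that maximizes sensitivity + specificity (Youden's J)."""
    scores = np.asarray(scores, dtype=float)
    failed_ref = np.asarray(failed_ref, dtype=bool)
    best_c, best_j = None, -np.inf
    for c in np.unique(scores):
        predicted_fail = scores >= c
        sensitivity = np.mean(predicted_fail[failed_ref])
        specificity = np.mean(~predicted_fail[~failed_ref])
        if sensitivity + specificity > best_j:
            best_c, best_j = c, sensitivity + specificity
    return best_c

# Made-up threshold scores and F-D15 outcomes (illustrative only).
print(best_cutoff([1, 2, 3, 10, 11, 12],
                  [False, False, False, True, True, True]))  # -> 10.0
```

Running the same selection separately on the protan and deutan subgroups is what yields the two cutoff lines drawn in Figs. 1 and 2.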

We are uncertain as to why the protan thresholds were higher on the tests. It could be a result of not recruiting a sufficient number of protanomalous individuals with a milder defect. However, it is unlikely that luminance artifacts contributed to the differences because the threshold difference is present both on the CAD, which masks luminance cues, and on the OCCT and RCCT, which do not.

Of the threshold color vision tests, the CAD had the highest level of agreement with the F-D15. There are several differences between the CAD and the other tests. In particular, the CAD featured dynamic luminance masking, whereas the CCVT used only spatial masking, and the OCCT had no masking. Another key difference is the number and location of the colored vectors that were sampled. The CAD measured thresholds in 12 different directions near the red–green discrimination axis, whereas the others sampled in only two directions, or in directions at equal intervals in a color-normal space, as was the case for the CCVT(Ell). The CAD threshold values for the 12 red–green vectors were averaged to give the final score for the red–green discrimination threshold. The agreement indices in Table 1 show that averaging thresholds from just two directions, or from directions based on CVN data, lowers the agreement with the F-D15. This result is expected because averaging compresses the range of threshold values and increases the overlap in values between individuals who passed or failed the F-D15.

Despite using the average red–green threshold, there was less overlap in the CAD threshold values for the two F-D15 outcomes relative to the other tests. This result likely occurred because the CAD sampled multiple vectors near the red–green discrimination axis and, therefore, was more likely to measure thresholds along the directions in color space where an individual's maximum threshold occurs, rather than along only two directions that were based on an assumed average observer and are not necessarily located where a given individual's color discrimination is worst. Also, there could be a ceiling effect on the OCCT due to luminance artifacts at the higher cone contrasts [24] that compressed the threshold range for individuals with more severe defects. A ceiling effect could explain the cluster of points in Fig. 2 for the subjects who failed the F-D15 and had a cone contrast threshold near $-0.8$ log units.

The combined results suggest that when using threshold tests to determine one’s ability to perform other color discrimination tasks, it is better to measure thresholds in several directions near the dichromatic lines of confusion and not rely on a value based on only two vectors in color space. It is difficult to determine the optimum number of vectors from this study, but based on the CAD performance relative to the cone contrast tests, the number of directions could be around 12. Of course, the trade-off for more accurate measurements is an increase in the time to complete the tests. The CAD takes more than twice the amount of time to complete compared with the other tests.

The CAD, OCCT, and RCCT cutoff values in Table 1 that resulted in good agreement with the F-D15 were more liberal (higher threshold or lower sensitivity) than the cutoff values accompanying each test for use in military or civilian aviation. The CAD cutoff values for civilian aviation are greater than 6 SNUs for deutans and 12 SNUs for protans; the commercial version of the OCCT uses a cutoff of greater than $-1.30$ log contrast, and the RCCT cutoff is less than 55 for United States Air Force and Navy pilots. Figures 1 and 2 show that this result was primarily due to pooling the higher threshold values of the protans with those of the deutans. Nevertheless, the CAD values in Table 2 are still higher than the CAD civilian aviation criterion even when separate cutoff values are used. This difference between the scores that produce good agreement with the F-D15 and the corresponding values used by other agencies indicates that the Canadian color vision requirement for civilian and military pilots of passing the F-D15 is more liberal than the other requirements.

Interestingly, the deutan cutoff score of the OCCT(B) is close to the $-1.30$ log contrast score currently used by the United States Air Force and Navy in the commercial version of the OCCT. However, their $-1.30$ value is based on converting the RCCT monocular data to log contrast values, whereas our value is based on binocular viewing, so the two cutoff scores are not for equivalent viewing conditions. If the $-1.30$ cutoff score is applied to our monocular data, the number of false positives (i.e., fail the OCCT but pass the F-D15) increases to 82%.
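Assuming the log contrast score is simply the base-10 logarithm of the threshold cone contrast (the standard convention for these tests), the $-1.30$ cutoff corresponds to roughly a 5% contrast threshold:

```python
# Convert a log10 cone-contrast cutoff to a linear contrast threshold.
# Assumes the standard log10 convention; lower (more negative) scores
# indicate better chromatic sensitivity.
cutoff_log = -1.30
cutoff_contrast = 10 ** cutoff_log
print(f"{cutoff_contrast:.4f}")  # about 0.0501, i.e. ~5% cone contrast
```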

Regardless of whether separate cutoff scores were used for protans and deutans, one would expect the monocular OCCT agreement indices to be similar to the binocular OCCT values for the pooled CVD subjects, although the cutoff score itself may differ due to binocular summation effects. However, the values in Table 1 show that the binocular trial performed significantly better than the monocular trials. We are uncertain as to why the OCCT(B) performed better than the OCCT(M). Likely, multiple factors were involved, including the lower number of presentations for the OCCT(M); the use of a fixed slope for the psychometric function in the OCCT(M), which was based on CVN results; the OCCT(M) always being performed first; monocular versus binocular viewing; or some combination of these factors.

The basis for the United States Air Force and Navy RCCT cutoff score of 55 is a task simulation study using displays encountered both outside and inside the cockpit [25]. If the RCAF elected to use the RCCT and harmonize its color vision standards with the United States Navy and Air Force, then 35% of the CVD pilot candidates who pass the F-D15 would be disqualified from the RCAF by the RCCT. The percentages of these candidates who would be disqualified based on the cutoff scores in Table 2 are 61% for the OCCT(B) and 28% for the CAD.

Of the threshold tests, the CAD would be the best option to replace the F-D15; however, it takes about twice as long to administer as three arrangements of the F-D15. Interestingly, the CVD group required less time to complete the OCCT than the CVN subjects. Their faster times could be a result of quickly entering a response when they could not easily resolve the gap, or of possible luminance artifacts in the higher-contrast stimuli that reduced their response times.

In terms of the D15 tests, the current Canadian standards do not specify a time limit for completing an arrangement, and this study was near completion when the recommendation of restricting the time for an arrangement to Farnsworth's 2 min limit instead of 5 min was published [26]. The completion times in Table 4 include the time required to record the results and set up the test for the next arrangement, and so we do not have the actual times for each arrangement. Nevertheless, even with the additional recording and setup time included, the average time to complete an F-D15 arrangement for the CVD subjects who passed the test was 1.5 min. This result supports Farnsworth's 2 min limit and indicates that a 5 min limit is too generous. The longer completion times on the KWC-D15, particularly for the CVN group, were likely due to the time required to set up the computer for the subject's next arrangement. The program did not allow multiple arrangements within the same session, and so the subject's information had to be re-entered before each attempt.

B. Agreement with the HWA

The HWA was designed to determine whether CVD individuals can identify aviation and maritime signal lights viewed from 1 nautical mile [11,27]. The test is challenging for most CVD subjects due to the combination of the test colors, small angular subtense of the test lights, and the intensity of the test lights. The low pass rate for the CVD group found in this study is consistent with previous studies showing that only individuals with mild defects are likely to pass the test, especially if multiple runs of the nine pairs of lights are presented [12,18,28,29].

The finding that the cutoff values for the CAD, OCCT, and RCCT (Table 3) were more conservative than the values used for pilots further supports the conclusion that only individuals with very mild or mild CVDs can pass the HWA. The CAD cutoff of 3.12 found in this study was reasonably close to the cutoff value of $>4$ SNU recommended by Rodriguez-Carmona and Barbur and the $>2.35$ SNU requirement adopted by the United Kingdom for air traffic controllers [18,30].

Note that the RCCT cutoff value of 66 was slightly less than the 75 used as a minimum score for CVN [9,17,31]. If the RCCT cutoff is raised to 75, the outcome changes for only two deuteranomalous subjects. One failed both tests with the stricter cutoff instead of just the HWA, and the other failed the RCCT but passed the HWA. This change in the cutoff score had only a marginal effect on the agreement and predictive values.

The small number of individuals who passed the HWA could be a result of our recruiting process. It is possible that individuals with a mild defect were either unaware of their CVD or were not interested in participating in the study. Nevertheless, the 7% pass rate in our study was within the 6% to 15% range reported by others [12,18,28,29]. One reason for the range in pass rates across studies could be the relative proportions of dichromats and anomalous trichromats in the various samples. Another reason is differences in testing and scoring procedures. The original procedure stops the test if the person obtains a perfect performance on the first series of nine pairs of test lights [11]. Several of the CVD subjects who achieved a perfect performance on the first series could not repeat this result either within a session or on a different day [12]. If multiple runs of the nine pairs are administered and the outcome is based on the total errors, then the pass rate drops to less than 10% of the CVD subjects, but the between-visits repeatability improves [12,18,29].

The F-D15 was designed to separate individuals with a mild CVD from those with a more severe CVD. As such, approximately 53% of CVD subjects pass if one major crossing is allowed [32], which would be equivalent to a C index of $>1.78$. Thus, the test is not as sensitive as the HWA, and accordingly, one would expect the agreement between the D15 tests and the HWA to be lower, as was evident in our results. Because the F-D15 is not as sensitive, the F-D15 PreP should be relatively low; the low PreP found in this study agreed with the values in Cole and Vingrys' study [5]. The KWC-D15 would not offer any improvement because it is comparable to the F-D15. Nevertheless, a failure on either D15 test indicates that the individual is nearly certain to fail the HWA.

The HWA is a legacy test for the Canadian civilian aviation authorities and is not widely used because of limited availability. Nevertheless, candidates who have failed the F-D15 view attempting the HWA as a possible avenue to obtaining an unrestricted pilot's license. Although they may be highly motivated to take the HWA, clinicians should advise these CVD individuals that the probability of passing the HWA is extremely low so that their expectations are realistic.

5. CONCLUSION

There are several factors to consider if the RCAF and civilian aviation authorities wish to replace the F-D15 with one of the computer-based tests. Based on the level of agreement and predictive values, the KWC-D15 would be the first choice. However, the KWC-D15 is no longer available, and the newer version (by Waggoner Diagnostics) has not, to our knowledge, been evaluated. The CAD, CCVT, and the commercial version of the OCCT are capable of measuring chromatic thresholds for CVN and CVD individuals, which may be useful in monitoring pilots' visual health throughout their careers. Of these three computer-based tests, the CAD would be the best option. However, it should be noted that the cutoff scores of 13.5 for deutans and 20.7 for protans are more liberal than the pass/fail criterion used by the United Kingdom civil aviation authorities. Although this criterion maximized agreement with the F-D15 and minimized the number of false negatives, it would disqualify 28% of the CVD candidates who are currently acceptable to the RCAF or who would qualify for an unrestricted civilian pilot's license.

The comparison with the HWA confirmed that the lantern test was more challenging than the F-D15 and that only individuals with a very mild color defect could pass it. Failing the F-D15 essentially guarantees that one would also fail the HWA. The newer computer tests had a reasonable level of agreement and high predictive values for failing the HWA because approximately 93% of the CVD subjects identified as such by the computer tests failed the lantern test.

Funding

Defence Research and Development Canada and the Canadian Institute for Military and Veteran Health Research (W7714-145967).

Acknowledgment

Dr. Almustanyir was supported by a scholarship from the College of Applied Medical Science Research Center and the Deanship of Scientific Research at King Saud University.

Disclosures

The authors declare no conflicts of interest.

REFERENCES

1. International Civil Aviation Organization, Convention on International Civil Aviation, International Standards and Recommended Practices–Personnel Licensing, 11th ed., Annex 1, Standard 6.2.4.2 (ICAO, 2017).

2. International Civil Aviation Organization, Manual of Civil Aviation Medicine, 3rd ed., Doc 8984 AN/895 (ICAO, 2012).

3. D. B. Watson, “Lack of international uniformity in assessing color vision deficiency in professional pilots,” Aviat. Space Environ. Med. 85, 148–159 (2014).

4. J. S. Ng and W. A. Morton, “Case report: invalidation of the Farnsworth D15 test in dichromacy secondary to practice,” Optom. Vis. Sci. 95, 272–274 (2018).

5. B. L. Cole and A. J. Vingrys, “Who fails lantern tests?” Doc. Ophthalmol. 55, 157–175 (1983).

6. J. L. Barbur, M. Rodriguez-Carmona, and A. Harlow, “Establishing the statistical limits of ‘normal’ chromatic sensitivity,” in CIE Expert Symposium, CIE Proceedings 75 Years of the Standard Colorimetric Observer (2006).

7. B. C. Regan, J. P. Reffin, and J. D. Mollon, “Luminance noise and the rapid determination of discrimination ellipses in colour deficiency,” Vis. Res. 34, 1279–1299 (1994).

8. L. L. Kontsevich and C. W. Tyler, “Bayesian adaptive estimation of psychometric slope and threshold,” Vis. Res. 39, 2729–2737 (1999).

9. J. Rabin, “Quantification of color vision with cone contrast sensitivity,” Vis. Neurosci. 21, 483–485 (2004).

10. A. J. Vingrys and P. E. King-Smith, “A quantitative scoring technique for panel tests of color vision,” Invest. Ophthalmol. Vis. Sci. 29, 50–63 (1988).

11. J. Holmes and W. Wright, “A new color-perception lantern,” Color Res. Appl. 7, 82–88 (1982).

12. J. K. Hovis, “Repeatability of the Holmes-Wright type A lantern color vision test,” Aviat. Space Environ. Med. 79, 1028–1033 (2008).

13. J. Pokorny, V. C. Smith, G. Verriest, and A. J. L. G. Pinckers, Congenital and Acquired Color Vision Defects (Grune and Stratton, 1979), Chap. 7.

14. K. L. Gwet, “Computing inter-rater reliability and its variance in the presence of high agreement,” Br. J. Math. Stat. Psychol. 61, 29–48 (2008).

15. N. Wongpakaran, T. Wongpakaran, D. Wedding, and K. L. Gwet, “A comparison of Cohen’s Kappa and Gwet’s AC1 when calculating inter-rater reliability coefficients: a study conducted with personality disorder samples,” BMC Med. Res. Methodol. 13, 61 (2013).

16. M. H. Zweig and G. Campbell, “Receiver-operating characteristic (ROC) plots: a fundamental evaluation tool in clinical medicine,” Clin. Chem. 39, 561–577 (1993).

17. I. W. Chay, S. W. Y. Lim, and B. B. C. Tan, “Cone contrast test for color vision deficiency screening among a cohort of military aircrew applicants,” Aerosp. Med. Hum. Perform. 90, 71–76 (2019).

18. M. Rodriguez-Carmona and J. L. Barbur, “Colour vision requirements in visually demanding occupations,” Br. Med. Bull. 122(1), 51–77 (2017).

19. D. V. Walsh, J. Robinson, G. M. Jurek, J. E. Capo-Aponte, D. W. Riggs, and L. A. Temme, “A performance comparison of color vision tests for military screening,” Aerosp. Med. Hum. Perform. 87, 382–387 (2016).

20. M. E. Breton and B. W. Tansley, “Improved color test results with large-field viewing in dichromats,” Arch. Ophthalmol. 103, 1490–1495 (1985).

21. J. M. Steward and B. L. Cole, “The effect of object size on the performance of colour ordering and discrimination tasks,” in Colour Vision Deficiencies IX: Proceedings of the Ninth Symposium of the International Research Group on Colour Vision Deficiencies, B. Drum and G. Verriest, eds. (Springer, 1989), pp. 79–88.

22. T. Motohashi, Y. Ohta, A. Hanabusa, and H. Shiraishi, “Comparative study between test results of 8-deg. large-field anomaloscope and large-size panel D15 test on dichromats,” in Colour Vision Deficiencies IX: Proceedings of the Ninth Symposium of the International Research Group on Colour Vision Deficiencies, B. Drum and G. Verriest, eds. (Springer, 1989), pp. 543–554.

23. D. A. Atchison, K. J. Bowman, and A. J. Vingrys, “Quantitative scoring methods for D15 panel tests in the diagnosis of congenital color vision deficiencies,” Optom. Vis. Sci. 68, 41–48 (1991).

24. J. Rabin, “Cone-specific measures of human color vision,” Invest. Ophthalmol. Vis. Sci. 37, 2771–2774 (1996).

25. H. Gao, M. D. Reddix, and C. D. Kirkendall, “Can operationally-relevant accuracy and reaction-time metrics guide the development of color-vision standards?” Aerosp. Med. Hum. Perform. 86, 265 (2015).

26. S. J. Dain, D. A. Atchison, and J. K. Hovis, “Limitations and precautions in the use of the Farnsworth-Munsell Dichotomous D-15 test,” Optom. Vis. Sci. 96, 695–705 (2019).

27. B. L. Cole and A. J. Vingrys, “A survey and evaluation of lantern tests of color vision,” Am. J. Optom. Physiol. Opt. 59, 346–374 (1982).

28. A. Vingrys and B. Cole, “Validation of the Holmes-Wright lanterns for testing color vision,” Ophthalmic Physiol. Opt. 3, 137–152 (1983).

29. J. Birch, “Performance of color-deficient people on the Holmes-Wright lantern (type A): consistency of occupational color vision standards in aviation,” Ophthalmic Physiol. Opt. 28, 253–258 (2008).

30. “UK CAA policy statement: colour vision in air traffic controllers,” 2017, https://www.caa.co.uk/WorkArea/DownloadAsset.aspx?id=4294985496.

31. J. Rabin, J. Gooch, and D. Ivan, “Rapid quantification of color vision: the cone contrast test,” Invest. Ophthalmol. Vis. Sci. 52, 816–820 (2011).

32. J. Birch, “Pass rates for the Farnsworth D15 colour vision test,” Ophthalmic Physiol. Opt. 28, 259–264 (2008).

Figures (3)

Fig. 1. Dot histogram showing the distribution of CAD threshold values for those protan and deutan subjects who passed or failed the Farnsworth D15. The solid line is the CAD score, which maximizes the sum of the sensitivity and specificity when the subjects are pooled into one group. The short dashed line is the score that maximizes the sensitivity with a specificity value greater than 0 for the protans, and the long dashed line is the score that maximizes the sensitivity with a specificity value greater than 0 for the deutans.
Fig. 2. Dot histogram showing the distribution of OCCT(B) maximum (Max) of either the L- or M-cone threshold for those subjects who passed (solid circles and triangles) or failed (open circles and triangles) the Farnsworth D15. The solid line is the OCCT(B) score, which maximizes the sum of the sensitivity and specificity when the subjects are pooled into one group. The short dashed line is the score that maximizes the sensitivity with a specificity value greater than 0 for the protans, and the long dashed line is the score that maximizes the sensitivity with a specificity value greater than 0 for the deutans.
Fig. 3. Dot histogram showing the distribution of lowest sensitivity value of either the RCCT L or M cone for those subjects who passed (solid circles and triangles) or failed (open circles and triangles) the Farnsworth D15. The values are the averages of the individual monocular sensitivities. The solid line is the RCCT score, which maximizes the sum of the sensitivity and specificity when the subjects are pooled into one group.

Tables (4)

Table 1. Agreement Analyses for the Comparison between the F-D15 and Selected Tests

Table 2. Agreement Analyses for the Comparison between the F-D15 and Selected Tests Using Separate Cutoff Values for Protans and Deutans

Table 3. Agreement Analyses for the Comparison between the HWA and Selected Tests

Table 4. Average Time in Minutes to Complete Each of the Tests for Both CVN and CVD Subjects
