Abstract

This study presents an approach for enhancing color images for color vision deficiencies. The proposed approach is separated into three stages. First, the type and severity of a color deficient observer (CDO) are evaluated. Next, the perceived color gamut is assessed using a physiologically-based color deficiency simulation model. Finally, images prepared for color normal observers (CNOs) are re-colored using a gamut mapping method that maps colors from the gamut of a CNO to that of a CDO. Two psychophysical experiments were carried out to validate this method, and the results suggest that it is a promising solution for CDOs. The unique feature of the present method is the use of gamut mapping to enhance color discrimination while preserving the perceived hue.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Color perception of a CNO depends on three types of photoreceptors, namely the L, M, and S cones, which are distinguished by their differing peak wavelengths [1]. Different types of photopigments are found in the cones, and they all contribute to the corresponding cone responses [2]. In a CDO, however, these photoreceptors differ from those of a CNO in some way, whether inherited or acquired. Protanomalous, deuteranomalous, and tritanomalous observers are the three types of anomalous trichromats, associated with a defective L, M, or S cone, respectively. Furthermore, if any of the cones is missing, the observer is referred to as a dichromat, having protanopia, deuteranopia, or tritanopia depending on whether the L, M, or S cone is absent.

According to prior research [3], roughly 7.9 percent of males and 0.42 percent of females have color vision deficiencies, totaling more than 200 million people worldwide. Helping to rebuild their color perception can therefore have a significant impact.

Many researchers have worked on this problem and developed numerous color enhancement methods [4–12] for CDOs viewing images on a monitor. According to the CIE technical report [13], these methods can be classified into three broad categories: recoloring, edge enhancement, and pattern superposition.

The most popular technique in this area is recoloring. Its goal is to replace easily confused colors with more distinguishable ones, thereby assisting CDOs in color discrimination. Lau et al. [14] created a smartphone application that allows CDOs to identify troublesome color contrasts. They used a color transform that shears the data along lines parallel to the dimension corresponding to the user's impacted cone sensitivity and stated that this application can help CDOs discern between colors such as red and green apples. When editing individual images, this method is highly successful and offers freedom to fine-tune the result. However, additional manual adjustments are unavoidable, and the approach cannot be extended to other applications such as video color enhancement. Sakamoto [15] presented a color enhancement approach that employs a color palette created specifically for protan and deutan deficiencies. He designed twenty distinct colors that are well perceived by CDOs, so that protan and deutan confusion colors can readily be substituted with those safe colors in any input image. This strategy is appropriate in cases where only a few colors are involved; for a typical color image, however, such alteration will undoubtedly degrade overall image quality. Machado et al. [16] introduced an image enhancement algorithm that enlarges the color contrast of an image. The main goal of their method was to discover a lightness-chroma plane that minimized contrast loss, which can be accomplished using principal component analysis (PCA). All colors in an image are first projected to that plane, then rotated to the dichromat's perceptual color gamut, which is also a lightness-chroma plane but with a different hue angle. Although their method is plausible when applied to a single image, it does not guarantee consistent color perception when a particular object appears in multiple photographs: a single color may be mapped to several colors in distinct images. As a result, observers have difficulty matching real objects to their memory colors.

The motivation behind edge enhancement techniques is different. They are intended not to help CDOs distinguish colors but to delineate color boundaries; hence, they are limited to certain purposes, such as map reading. Sakamoto [17] provided a representative example of this type of method. By limiting changes to small areas along boundaries, colors in the surrounding regions remain fairly stable.

Pattern superposition [18] is similar to the edge enhancement methods in certain ways. It does not stress the distinction between two similar colors, but rather adds additional information beyond colors. Different patterns will be applied to the original image colors, and CDOs will be able to quickly detect them. As a result, this technology was mainly designed to communicate color information, specifically for office papers and scientific illustrations.

According to the preceding review, most of the algorithms in the literature are applicable only in a few cases, and the majority of them are specifically intended for dichromats. Anomalous trichromats, however, can be affected to varying degrees, ranging from mild, where color perception is similar to that of a color-normal observer, to severe, where it is similar to that of a dichromat. As a result, the perceived color gamuts of these various types of CDOs vary. Algorithms designed for dichromats will almost certainly limit the available colors and affect the color perception of anomalous trichromats. Therefore, it is important to analyze their severity levels before developing an image enhancement technique.

In this study, a novel image enhancement method is proposed to address the aforementioned issues. First, the severity of a CDO was investigated to identify whether the observer has a mild, moderate, or severe color deficiency. The perceived color gamut can then be determined by adopting a simulation model for color deficiencies, so as to make the best use of the perceived gamut. The image enhancement problem can then be treated as a gamut mapping problem, i.e., transferring colors from a CNO's gamut to a CDO's gamut. Accordingly, a gamut mapping method was adopted to ensure a minimal color shift; in other words, the hue is preserved so as not to disturb a CDO's memory colors.

2. Gamut estimation

Estimating the color gamut of a CDO is an essential yet difficult task. An exact evaluation usually necessitates a thorough understanding of the color perception system as well as an accurate measurement of the visual responses. To understand how CDOs perceive colors, a physiologically-based model [19] was used in this study. Based on the stage theory [20], this model can handle normal color vision, anomalous trichromacy, and dichromacy in a unified way. It replicates color vision by combining a photoreceptor-spectral-response stage and an opponent-color stage determined by electrophysiological data [21].

To model the vision of a CDO, two physical parameters are required: the type of color deficiency and the wavelength shift of the corresponding cone spectral sensitivity function. In the simulation model, color deficiency is explained by a shift in the cone spectral functions. The L cone's spectral sensitivity is typically shifted to shorter wavelengths in comparison with the color-normal observer, resulting in a protanomalous observer. Similarly, the M cone's spectral sensitivity is typically shifted to longer wavelengths, resulting in a deuteranomalous observer. The tritanomalous observer can likewise be interpreted as a shift of the S cone towards longer wavelengths, although this interpretation is disputed by some researchers [2,22], who believe tritanomalous observers have an S cone with lower spectral sensitivity than the color-normal observer. This debate is beyond the scope of this article, and tritanopia affects only around 0.003 percent of the Caucasian population according to available data [2,3]. A larger wavelength shift corresponds to a more severe color deficiency. As a result, we can simulate the color vision of CDOs using an appropriate wavelength shift of the cone function.

In addition to these physical parameters, the spectral power distributions of the display primaries play a vital role in color perception. This makes sense because color perception is a response to the display's spectral radiance. Once all three parameters have been identified, the simulation model is complete, and we can learn how a CDO perceives colors.

A brief introduction to the simulation model is given here for completeness; readers are recommended to refer to the original paper [19] for more details. As suggested by the stage theory [20], the trichromatic theory is valid at the photoreceptor level, but the resulting signals are further processed at a later stage according to the opponent-color theory, which can be described by the suprathreshold transformation proposed by Ingling and Tsou in Eq. (1),

$$\left[ \begin{array}{c} WS(\lambda )\\ YB(\lambda )\\ RG(\lambda ) \end{array} \right] = T_{LMS2Opp}\left[ \begin{array}{c} L(\lambda )\\ M(\lambda )\\ S(\lambda ) \end{array} \right]$$
where $L(\lambda )$, $M(\lambda )$ and $S(\lambda )$ represent the spectral sensitivity functions of the L, M and S cones, respectively; $WS(\lambda )$, $YB(\lambda )$ and $RG(\lambda )$ represent the luminance channel and the two opponent chromatic channels, i.e., the Yellow-Blue and Red-Green channels, respectively. The matrix $T_{LMS2Opp}$ was given in Ref. [21].
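The opponent-stage transform of Eq. (1) can be sketched numerically. The coefficients below follow the Ingling-Tsou suprathreshold weights as commonly quoted in the simulation literature; treat them as illustrative stand-ins rather than the exact $T_{LMS2Opp}$ matrix of Ref. [21].

```python
import numpy as np

# Opponent-stage transform of Eq. (1). These coefficients follow the
# Ingling-Tsou suprathreshold matrix as commonly quoted; treat them as
# illustrative rather than the exact values of Ref. [21].
T_LMS2OPP = np.array([
    [0.600,  0.400,  0.000],   # WS: luminance channel
    [0.240,  0.105, -0.700],   # YB: yellow-blue opponent channel
    [1.200, -1.600,  0.400],   # RG: red-green opponent channel
])

def lms_to_opponent(lms):
    """Map cone responses (L, M, S) to opponent responses (WS, YB, RG)."""
    return T_LMS2OPP @ np.asarray(lms, dtype=float)

# An equal stimulus driving all three cones equally excites the
# luminance channel fully and leaves the red-green channel balanced.
ws, yb, rg = lms_to_opponent([1.0, 1.0, 1.0])
```

With these weights, WS sums only L and M (no S contribution to luminance) and RG balances L against M, which is why a shifted L or M cone function perturbs the red-green signal most.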

As discussed above, color deficiency is explained by a shift in the cone spectral functions, which happens at the retinal level. The simulation model therefore assumes that the neural connections linking the photoreceptors to the rest of the visual system are not affected. Hence, Eq. (1) applies to both CDOs and CNOs, except for a change of the $L(\lambda )$, $M(\lambda )$ and $S(\lambda )$ functions according to the type and severity of the color deficiency.

Afterwards, the transformation from an RGB color space to an opponent-color space can be achieved by projecting the spectral power distributions ${\varphi _R}(\lambda )$, ${\varphi _G}(\lambda )$ and ${\varphi _B}(\lambda )$ of the RGB primaries onto the set of basis functions $WS(\lambda )$, $YB(\lambda )$ and $RG(\lambda )$ of the opponent color space, which is described in Eq. (2).

$$\begin{array}{l} WS_R = \rho_{WS}\int \varphi_R(\lambda )\,WS(\lambda )\,d\lambda ,\quad WS_G = \rho_{WS}\int \varphi_G(\lambda )\,WS(\lambda )\,d\lambda ,\quad WS_B = \rho_{WS}\int \varphi_B(\lambda )\,WS(\lambda )\,d\lambda ,\\ YB_R = \rho_{YB}\int \varphi_R(\lambda )\,YB(\lambda )\,d\lambda ,\quad YB_G = \rho_{YB}\int \varphi_G(\lambda )\,YB(\lambda )\,d\lambda ,\quad YB_B = \rho_{YB}\int \varphi_B(\lambda )\,YB(\lambda )\,d\lambda ,\\ RG_R = \rho_{RG}\int \varphi_R(\lambda )\,RG(\lambda )\,d\lambda ,\quad RG_G = \rho_{RG}\int \varphi_G(\lambda )\,RG(\lambda )\,d\lambda ,\quad RG_B = \rho_{RG}\int \varphi_B(\lambda )\,RG(\lambda )\,d\lambda \end{array}$$
where ${\rho _{WS}}$, ${\rho _{YB}}$ and ${\rho _{RG}}$ are normalization factors that guarantee that achromatic colors have exactly the same coordinates in RGB and in all versions of the opponent-color spaces. This is key to the simulation algorithm.
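The projection of Eq. (2) can be sketched as a numerical integration. The Gaussian primaries and opponent basis functions below are hypothetical stand-ins for measured spectral data; only the structure of the computation (projection followed by achromatic normalization) follows the text.

```python
import numpy as np

lam = np.arange(380.0, 781.0, 1.0)  # wavelength grid, 1 nm steps

def gaussian(peak_nm, width_nm):
    """A hypothetical smooth spectral curve used in place of real data."""
    return np.exp(-0.5 * ((lam - peak_nm) / width_nm) ** 2)

# Hypothetical spectral power distributions of the R, G, B primaries.
phi = {"R": gaussian(610, 20), "G": gaussian(540, 25), "B": gaussian(450, 15)}
# Hypothetical opponent basis functions (illustrative shapes only).
basis = {
    "WS": gaussian(555, 80),
    "YB": gaussian(520, 90) - gaussian(440, 40),
    "RG": gaussian(620, 60) - gaussian(530, 40),
}

def project(phi_fn, basis_fn):
    """One entry of Eq. (2) before normalization: integral of phi * basis."""
    return float(np.sum(phi_fn * basis_fn) * 1.0)  # 1 nm grid spacing

tau = np.array([[project(phi[p], basis[b]) for p in "RGB"]
                for b in ("WS", "YB", "RG")])

# The rho factors of Eq. (2): scale each row so that an achromatic input
# (R = G = B) keeps identical coordinates in the opponent space.
tau = tau / tau.sum(axis=1, keepdims=True)
```

After normalization each row of $\tau$ sums to one, so an achromatic RGB triple maps to the same triple in the opponent space, which is the property the text calls key to the simulation algorithm.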

As a result, the transformation from an RGB color space to an opponent-color space can be simply defined as

$$\left[ {\begin{array}{c} {WS}\\ {YB}\\ {RG} \end{array}} \right] = {\tau _{3 \times 3}}\left[ {\begin{array}{c} R\\ G\\ B \end{array}} \right]$$
$${\tau _{3 \times 3}} = \left[ \begin{array}{ccc} {W{S_R}}&{W{S_G}}&{W{S_B}}\\ {Y{B_R}}&{Y{B_G}}&{Y{B_B}}\\ {R{G_R}}&{R{G_G}}&{R{G_B}} \end{array} \right]$$

The ${\tau _{3 \times 3}}$ matrix is fixed once the parameters, i.e., the type and severity of an observer, are given. Hence, the simulation model can be simplified to a three-by-three matrix, as illustrated in Eq. (5), which converts colors from the perspective of a CNO to the perspective of a CDO. The reverse transformation is given in Eq. (6). Both the inputs and outputs of the model are linear RGB values. As a result, once the input RGB gamut has been identified, the output RGB gamut can be calculated using this equation.

It is also stressed in the original paper that the simulation model is not tied to any particular stage model. The authors also developed a model at the retinal photopigment stage (LMS) and a model based on the three-stage theory proposed by Müller [23], but the two-stage model gave the best prediction accuracy. The main reason lies in the normalization factors that guarantee that achromatic colors appear neutral.

$$\left[ \begin{array}{c} R\\ G\\ B \end{array} \right]_{CDO} = \tau _{CDO}^{ - 1}\,{\tau _{CNO}}\left[ \begin{array}{c} R\\ G\\ B \end{array} \right]_{CNO}$$
$$\left[ \begin{array}{c} R\\ G\\ B \end{array} \right]_{CNO} = \tau _{CNO}^{ - 1}\,{\tau _{CDO}}\left[ \begin{array}{c} R\\ G\\ B \end{array} \right]_{CDO}$$
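Eq. (5) reduces the whole simulation to one matrix product per pixel. The sketch below uses two hypothetical row-normalized $\tau$ matrices in place of ones derived from Eqs. (2)–(4).

```python
import numpy as np

# Hypothetical tau matrices standing in for those of Eqs. (2)-(4);
# each row sums to one, mirroring the achromatic normalization.
tau_cno = np.array([[0.55, 0.40, 0.05],
                    [0.20, 0.60, 0.20],
                    [0.30, 0.50, 0.20]])
tau_cdo = np.array([[0.52, 0.43, 0.05],
                    [0.20, 0.60, 0.20],
                    [0.35, 0.45, 0.20]])

def simulate_cdo(img_lin_rgb):
    """Eq. (5): RGB_CDO = tau_CDO^-1 . tau_CNO . RGB_CNO, per pixel."""
    m = np.linalg.inv(tau_cdo) @ tau_cno
    out = np.einsum("ij,...j->...i", m, img_lin_rgb)
    return np.clip(out, 0.0, 1.0)  # keep the result displayable

img = np.random.default_rng(0).random((4, 4, 3))  # a linear-RGB test image
sim = simulate_cdo(img)
```

Because both $\tau$ matrices are row-normalized, neutral grays pass through unchanged, consistent with the normalization of Eq. (2).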

Figure 1 depicts the workflow for creating the perceived color gamut of a CDO. Firstly, the input display gamut was densely sampled in the RGB space, in this case as a 17 × 17 × 17 RGB cube. The simulation model was then used to convert all of these RGB values into the perceived color gamut of a CDO. After that, all of the colors in the RGB domain were transformed to XYZ values using the display colorimetric characterization model, and then to color appearance attributes. The CIELAB color space was adopted in this study, although more advanced color spaces, such as CAM02-UCS [24] and Jzazbz [25] for HDR applications, could be adopted for potentially better results. Finally, the Segment Maximum Gamut Boundary Descriptor (SMGBD) algorithm [26] was used to generate the gamut boundary descriptors.
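The sampling and color-space conversion steps of Fig. 1 can be sketched as follows; standard sRGB with a D65 white is assumed here in place of the characterized NEC display model used in the paper.

```python
import numpy as np

# Standard sRGB (D65) linear-RGB-to-XYZ matrix and white point.
M_RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]])
WHITE_D65 = np.array([0.95047, 1.00000, 1.08883])

def srgb_to_lab(rgb):
    """Convert nonlinear sRGB values in [0, 1] to CIELAB (D65 white)."""
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = lin @ M_RGB2XYZ.T / WHITE_D65
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

# Dense 17 x 17 x 17 sampling of the display RGB cube, as in Fig. 1.
grid = np.linspace(0.0, 1.0, 17)
cube = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1).reshape(-1, 3)
lab_samples = srgb_to_lab(cube)
```

The resulting `lab_samples` array is what a gamut boundary descriptor such as SMGBD would then be computed over.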

Fig. 1. The workflow to generate the perceived gamut of CNOs and CDOs.

Figure 2 depicts an example comparing the gamuts of CDOs and CNOs. Two CDO gamuts are shown, corresponding to wavelength shifts of 10 nm and 15 nm, respectively. A noticeable gamut discrepancy can be seen in most of the constant-hue planes, and its magnitude varies with the hue plane.

Fig. 2. The gamut comparison between CNOs and CDOs. Two protanomalous CDOs having 10 nm and 15 nm wavelength shifts are included.

It should be noted that the gamut obtained from the simulation model may extend outside the CNO's gamut, which corresponds to the display gamut. This means that, when viewed by a CNO, such colors cannot be reproduced by the display. This is caused either by inaccuracy of the simulation model or by the hue shift in the color perception of a CDO; hence, these colors are currently discarded. As a result, the gamut of the CDO is confined to lie inside the gamut of the CNO, as shown in Fig. 3.
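The discarding step described above can be expressed as a simple mask over the simulated samples; the three sample colors below are arbitrary illustrations.

```python
import numpy as np

# Keep only simulated colors the display can reproduce for a CNO:
# samples whose linear RGB leaves [0, 1] are discarded before the
# gamut boundary descriptor is built.
samples = np.array([
    [0.20, 0.50, 0.90],   # inside the display gamut
    [1.10, 0.40, 0.30],   # R channel exceeds the display maximum
    [0.00, -0.05, 0.60],  # negative G: not reproducible
])
inside = np.all((samples >= 0.0) & (samples <= 1.0), axis=1)
kept = samples[inside]
```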

Fig. 3. The correction for gamuts obtained using the simulation model. The original gamut is shown with a solid line, the gamut acquired from the simulation model with a dashed line, and the corrected gamut with a star-dashed line.

The determination of the wavelength shift is a challenge. In this study, severity was divided into three categories, mild, moderate, and severe, determined from a color vision test result, e.g., the Ishihara vision test [27]. Each category was assigned a fixed wavelength shift of 5 nm, 10 nm, and 15 nm, respectively. These values were chosen by observing the simulated images.
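The severity-to-shift rule above amounts to a small lookup. The bucketing of Ishihara errors into categories below is a hypothetical placeholder; the paper assigns categories directly from the test result.

```python
# Fixed wavelength shifts per severity category, as stated in the text.
SEVERITY_SHIFT_NM = {"mild": 5.0, "moderate": 10.0, "severe": 15.0}

def severity_from_ishihara(errors, total=14):
    """Hypothetical bucketing of Ishihara plate errors into severities."""
    ratio = errors / total
    if ratio < 0.3:
        return "mild"
    if ratio < 0.7:
        return "moderate"
    return "severe"

# Example: 5 misread plates out of 14 -> moderate -> 10 nm shift.
shift = SEVERITY_SHIFT_NM[severity_from_ishihara(5)]
```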

3. Gamut mapping

Once the perceived gamut is defined, an enhanced image is produced by performing a gamut mapping between the CNO and CDO gamuts. The gamut compression algorithm used in this study is a simplified version of the well-known SGCK (chroma-dependent sigmoidal lightness mapping followed by knee scaling toward the cusp) algorithm [26].

The gamut compression algorithm used in this study is depicted in Fig. 4. A nonlinear knee function was applied to all source colors. If the source color was within the 90% region of the destination gamut (called the core region), it was preserved; otherwise, it was mapped towards the focal point E, which has the same lightness as the original color. In other words, the line segment EPo was mapped to the line segment EPd, while colors in the core region remained unchanged. The function adopted to perform this mapping is given in Eq. (7).

$$\overline {EP'} = \left\{ \begin{array}{ll} \overline {EP} , & \overline {EP} \le 0.9\,\overline {E{P_d}} \\ 0.9\,\overline {E{P_d}} + \dfrac{\overline {EP} - 0.9\,\overline {E{P_d}} }{\overline {E{P_o}} - 0.9\,\overline {E{P_d}} } \cdot \dfrac{\overline {E{P_d}} }{10}, & \overline {EP} > 0.9\,\overline {E{P_d}} \end{array} \right.$$

In Eq. (7), E is the focal point, P is the source color in the original gamut, and P′ is the mapped output in the destination gamut. This nonlinear mapping is quite similar to the final stage of SGCK, which employs the same knee function as indicated above.
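Eq. (7) amounts to a one-dimensional knee function on the distance from the focal point E; in the sketch below, `ep`, `ep_o`, and `ep_d` denote the distances |EP|, |EPo|, and |EPd|.

```python
def knee_compress(ep, ep_o, ep_d, core=0.9):
    """Knee mapping of Eq. (7): colors within the core region (90% of
    the destination gamut) are preserved, and the remainder is squeezed
    into the outer band between the core and the boundary."""
    knee = core * ep_d
    if ep <= knee:
        return ep
    # For core = 0.9, (1 - core) * ep_d is the EPd/10 term of Eq. (7).
    return knee + (ep - knee) / (ep_o - knee) * (1.0 - core) * ep_d

# A color on the source gamut boundary lands exactly on the destination
# boundary, while a core color is left untouched.
mapped_boundary = knee_compress(ep=1.2, ep_o=1.2, ep_d=1.0)
mapped_core = knee_compress(ep=0.5, ep_o=1.2, ep_d=1.0)
```

The mapping is continuous at the knee and monotonic, so the ordering of colors along the mapping line is preserved.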

Fig. 4. Mapping towards the lightness axis. Po is the original color on the source gamut boundary, and Pd is the mapped color on the destination gamut boundary. E is the mapping centre on the lightness axis with the same lightness value as point Po. The length of EPs equals 90% of the length of EPd.

Once the gamut mapping was performed, all colors were inside the destination gamut, indicating that they can be accurately perceived by a CDO. This output image, however, is from the perspective of a CDO, and it has to be inversely transformed to produce a copy from the perspective of a CNO. This is simply accomplished by inverting the simulation model, i.e., Eq. (6). An example including both perspectives is given in Fig. 5. As shown, the original image is at the top left and its simulated version at the bottom left. The bottom right is the gamut mapped reproduction, which is expected to offer better color fidelity than the simulated one (bottom left). The top right is the enhanced reproduction generated using the inverse of the simulation model. It is specially generated for CDOs to preserve the same color appearance as the gamut mapped reproduction (bottom right). In other words, the color perception of CNOs viewing the gamut mapped reproduction corresponds to that of CDOs viewing the enhanced reproduction.

Fig. 5. The comparison between the original image and its enhanced version. The wavelength shift is set at 10 nm and the CDO is a protanomalous observer. The top row represents the views of the CNO and the bottom row simulates the views of the CDO.

4. Experiments

Psychophysical tests were carried out to validate the effectiveness of this method. They can be divided into two categories: the first uses CNOs to evaluate the simulated images, while the second uses CDOs. Both experiments were performed on a well-characterized NEC PA302w display with a reference white of D65 at 100 cd/m² under the CIE 1964 standard colorimetric observer. To ensure consistent white adaptation, the background was set to a neutral grey with CIELAB = [50, 0, 0]. All measurements were made with a Konica Minolta CS2000 tele-spectroradiometer.

In this study, three sets of test images were used. The first set included 14 images chosen from the Ishihara test book (see Fig. 6). All of them were carefully selected to contain colors on a confusion line and were used to determine whether the proposed method would preserve color discrimination. The second set, as shown in Fig. 7, was composed of ten normal images and can be divided into two sub-sets: natural images and scientific visualization images. They were designed to represent typical application scenarios. All of these images were processed using the procedure described at the end of Section 3, with a worked example given in Fig. 5.

Fig. 6. The Ishihara test images. All these images contain easily confused colors.

Fig. 7. The normal images, including natural images and scientific visualization images. Image (2) is actually an Ishihara test image.

The third set of test images is illustrated in Fig. 8. It consisted of a series of color patches adapted from the well-known FM-100 hue test [28] and is denoted the ZJU50 hue test. It was based on CAM16-UCS [29], one of the most uniform color spaces. The samples had fixed chroma (20) and lightness (60) values, and each neighboring patch pair had a 4.5° hue difference. These color patches were divided into two separate panels, ranging from orange to yellow-green and from cyan to purple. Observers were asked to arrange the color patches in the order of their hues. Because the two panels differ by distinct color differences, patches from different panels will never be confused with each other; as a result, each panel was sorted independently by observers. The arrangements of these patches were analyzed using a quantitative scoring technique [30], known as the C-index, to reveal an observer's ability of color discrimination. A higher C-index indicates poorer observer performance, while a value of unity indicates a perfect arrangement.
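As a hypothetical proxy for such a scoring technique, one can compute the mean absolute jump in true hue order between neighbouring patches in the observer's arrangement; a perfect arrangement scores exactly 1, and transpositions raise the score. The actual metric of Ref. [30] may differ in its details.

```python
def c_index(arrangement):
    """Hypothetical C-index proxy: the mean absolute jump in true hue
    order between neighbouring patches in the observer's arrangement.
    A perfect ordering yields exactly 1.0; errors raise the score."""
    jumps = [abs(a - b) for a, b in zip(arrangement, arrangement[1:])]
    return sum(jumps) / len(jumps)

perfect = c_index([1, 2, 3, 4, 5])  # every neighbouring jump is 1
swapped = c_index([1, 3, 2, 4, 5])  # one transposition inflates the score
```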

Fig. 8. Illustration of the abbreviated samples of the ZJU 50-hue test.

4.1 Experiment by CNO

Color perception of CDOs can be well simulated by the color deficiency simulation model. This means that CNOs viewing simulated images can stand in for CDOs of various types and severities viewing the corresponding originals. Hence, if CNOs judge the gamut mapped image (e.g., the bottom-right reproduction in Fig. 5) to be better than the simulated reproduction (e.g., the bottom-left image in Fig. 5), we can infer that CDOs will judge the enhanced reproduction (e.g., the top-right image in Fig. 5) to be better than the original image (e.g., the top-left image in Fig. 5). The inclusion of CNOs therefore complements the experimental results obtained with CDOs, and comparing the two sets of results also reflects the accuracy of the color deficiency simulation model. This is a unique feature of the present method.

In the first experiment, simulated images were evaluated by CNOs. Ten CNOs took part, all of whom passed the Ishihara color vision test. The paired-comparison method was adopted, and the experimental setup for the normal images is shown in Fig. 9. Three images were shown side by side on the screen, with the original image in the center and the two reproductions, a gamut mapped one and a simulated one, on either side. Observers were asked to vote on which reproduction had an appearance closer to the original.

Fig. 9. The experimental setup for the normal images. The original image is in the center, with the two reproductions on either side in random order.

The entire experimental setup for the Ishihara images was similar except for the removal of the original image. Observers were asked to determine which reproduction provided the best color discrimination.

Observers performed the color ordering experiment twice: the first run employed the simulated color patches, while the second employed the gamut mapped ones. A decrease in the C-index between the two runs represents an improvement in color discrimination.

All of the above experiments were carried out for two types of color deficiencies, protan and deutan, at three levels: mild (5 nm), moderate (10 nm), and severe (15 nm). In the color ordering experiment, half of the observers were treated as protan and the other half as deutan. Note that a pilot study was carried out to relate the severity of color deficiencies to the wavelength shift; its results suggested that the three levels above were sufficient for the task at hand, although more experiments should be conducted to further verify this.

4.2 Experiment by CDO

Although simulation on display can demonstrate the superiority of the proposed method, it is the responses of the CDOs that truly determine the applicability of this method. The Ishihara vision test was used to evaluate all five CDOs who took part in this experiment. Three of them were deemed moderate, while the remaining two were deemed severe. All of them were deuteranomalous trichromats.

There were two experiments. The first used the same experimental setup as the Ishihara image test in Section 4.1, and observers were asked to judge which reproduction provided better color discrimination. In this case, only the original and enhanced images were compared. Note that only five selected images were used here, i.e., the images from Figs. 7(1) to 7(5).

The second experiment was the color ordering experiment. Similarly, observers were asked to sort both the original and enhanced color patches.

5. Results

5.1 Results for the CNO experiment

Table 1 shows the results of testing with the Ishihara images. The percentage is calculated by dividing the number of gamut mapped image choices by that of simulated image choices, representing the probability that a CDO judges the enhanced images to be more discriminable than the original ones. The results showed that almost all of the observers thought the gamut mapped images were superior to the simulated ones, indicating the superiority of the proposed method. The experiment with normal images revealed the same trend as the Ishihara images, and its results are summarized in Table 2. In most cases, images processed with the gamut mapping algorithm were judged to look more like the originals than their simulated reproductions. Both experiments confirmed the effectiveness of the proposed method.

Table 1. Results using Ishihara images. The percentage is calculated by dividing the number of gamut-mapped image choices by that of simulated image choices. PA means the protanomalous trichromats and DA means the deuteranomalous trichromats.

Table 2. Results using normal images. The percentage has the same meaning as in Table 1.

It should be noted that the result for Image 5 was slightly worse than for the other images tested; its reproductions for DA with mild deficiency are therefore shown in Fig. 10. As can be seen, the gamut mapped reproduction has higher hue fidelity, whereas the simulated one is more chromatic. Since the observers were asked to choose the reproduction that looked closest to the original, they paid little attention to color discrimination within the image, resulting in an occasional preference for the simulated reproduction.

Fig. 10. The comparison between the simulated reproduction and the gamut mapped reproduction of Fig. 7(5).

The color patch arrangement experiment results are shown in Fig. 11 and Fig. 12. The C-index was fixed at unity when there was no wavelength shift, representing the result of a perfect arrangement. The C-index for the gamut mapped color patches was always lower than the simulated ones, indicating that the proposed image enhancement method improved observers’ color discrimination. Meanwhile, the length of the error bar decreased dramatically, indicating that observers had more consistent performance for the gamut mapped color patches.

Fig. 11. The result for the simulated images and the gamut mapped reproductions for PA.

Fig. 12. The result for the simulated images and their enhanced (gamut mapped) reproductions for DA. The error bar represents one standard deviation.

5.2 Results for the CDO experiment

Table 3 summarizes the results of the CDO experiments. The first observer did not participate in the color ordering experiment, so his result was omitted. It should be noted that the C-index decreased significantly, demonstrating that the proposed method is effective for all severity levels. Although the image results were still considered acceptable, they were slightly worse than those of the simulation experiments. This could be due to inaccuracy in the color deficiency simulation model or to observer preference on images, and more research should be conducted to validate the proposed method.

Table 3. Results for the CDO experiment. A check mark means the enhanced image was selected.

6. Discussion

Although the proposed method was found to be effective by the observers who took part in this study, its limitations should be discussed before conducting a large-scale experiment.

The color deficiency simulation model serves as the foundation of the proposed image enhancement method for CDOs, so its accuracy directly affects the enhancement performance. More recent models could be adopted for further investigation [31]. Furthermore, an accurate gamut estimation necessitates a precise measurement of the wavelength shift. It was roughly determined from the Ishihara test in the current study; a quantitative scoring technique, such as the Farnsworth D-15 [32], the FM-100 [28], or an anomaloscope [33], would be preferable. Moreover, selecting an appropriate gamut mapping algorithm is critical. Different gamut mapping algorithms were developed for various purposes, and the best one should be chosen based on actual needs; we have also proposed advanced gamut mapping algorithms [34,35] to achieve a vivid image appearance. Last but not least, additional experiments should be carried out to further validate the proposed method. The present study included only five deuteranomalous trichromats; more CDOs of all types are required to fully evaluate the proposed approach.

The other limitation of this approach stems from the severity of color deficiency. CDOs with different severities have different perceived color gamuts, and colors lying outside a CDO's gamut will be compressed to become less chromatic. This is in the nature of color deficiency and can be effectively mitigated by a well-designed gamut mapping algorithm. For color-normal observers, the enhanced images will appear somewhat more colorful, except for the neutral colors.

It is also worth noting that the proposed method is by no means perfect and can be improved with a better color deficiency simulation model or a better gamut mapping algorithm. As a result, it is not necessary to strictly adhere to all of the steps outlined in the paper.

7. Conclusion

This paper proposed an effective image enhancement method for color vision deficiencies, whose unique feature is the application of a gamut mapping technique. Two separate experiments using both CNOs and CDOs were carried out to verify its performance, and both yielded similar results, confirming the effectiveness of the method. Although the present method has proven promising, more experiments are still needed to further evaluate and improve its performance, and a more robust metric could improve it further.

Funding

Fundamental Research Funds for the Provincial Universities of Zhejiang (GK219909299001-019); National Natural Science Foundation of China (61775190).

Acknowledgement

The authors would like to acknowledge support from OPPO Guangdong Mobile Communications Co., Ltd.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. T. Rajalakshmi and S. Prince, “Physiological modeling for detecting degree of perception of a color-deficient person,” Proc Inst Mech Eng H 231(4), 276–285 (2017). [CrossRef]  

2. M. Rodriguez-Carmona, Genetics of Photoreceptors, Genetics and Color Vision Deficiencies, Genes of Cone Photopigments, Genes and Cones (Springer, 2015).

3. C. Rigden, “The eye of the beholder-designing for colour blind,” British Telecommunications Engineering 17, 291–295 (1999).

4. J. I. You and K. Park, “Image processing with color compensation using LCD display for color vision deficiency,” J. Display Technol. 12(6), 562–566 (2016). [CrossRef]  

5. G. R. Kuhn, M. M. Oliveira, and L. A. F. Fernandes, “An efficient naturalness-preserving image-recoloring method for dichromats,” IEEE Transactions on Visualization and Computer Graphics 14(6), 1747–1754 (2008). [CrossRef]  

6. M. G. Ribeiro and A. J. P. Gomes, “A skillet-based recoloring algorithm for dichromats,” in 15th International Conference on e-Health Networking (2013), pp. 702–706.

7. M. Meng and G. Tanaka, “Proposal of minimization problem based lightness modification for protanopia and deuteranopia,” in International Symposium on Intelligent Signal Processing and Communication Systems (2016), pp. 1–6.

8. C. Huang, K. Chiu, and C. Chen, “Temporal color consistency-based video reproduction for dichromats,” IEEE Trans. Multimedia 13(5), 950–960 (2011). [CrossRef]  

9. D. Miyazaki, S. Taomoto, and S. Hiura, “Extending the visibility of dichromats using histogram equalization of hue value defined for dichromats,” Int. J. Image Grap. 19(03), 1950016 (2019). [CrossRef]  

10. G. M. Culp, “Increasing accessibility for map readers with acquired and inherited colour vision deficiencies: a re-colouring algorithm for maps,” The Cartographic Journal 49(4), 302–311 (2012). [CrossRef]  

11. M. F. Hassan and R. Paramesran, “Naturalness preserving image recoloring method for people with red–green deficiency,” Signal Processing: Image Communication 57, 126–133 (2017). [CrossRef]  

12. J. T. Simon-Liedtke and I. Farup, “Evaluating color vision deficiency daltonization methods using a behavioral visual-search method,” Journal of Visual Communication and Image Representation 35, 236–247 (2016). [CrossRef]  

13. CIE, “Enhancement of images for colour-deficient observers,” in CIE 240:2020 (2020).

14. C. Lau, N. Perdu, C. Rodríguez-Pardo, S. Süsstrunk, and G. Sharma, “An interactive app for color deficient viewers,” in The International Society for Optical Engineering (2015), pp. 1–9.

15. T. Sakamoto, “Image color reduction method for color-defective observers using a color palette composed of 20 particular colors,” in The International Society for Optical Engineering (2015), pp. 1–6.

16. G. Machado and M. Oliveira, “Real-time temporal-coherent color contrast enhancement for dichromats,” Comp. Graph. Forum 29(3), 933–942 (2010). [CrossRef]  

17. T. Sakamoto, “Edge enhancement filter for people with protanopic and deuteranopic vision,” in Proceedings of AIC (2012), pp. 406–409.

18. P. Hung and N. Hiramatsu, “A colour conversion method which allows colourblind and normal-vision people share documents with colour content,” in Proceedings of the 27th Session of the CIE (2011), pp. 229–239.

19. G. Machado, M. Oliveira, and L. Fernandes, “A physiologically-based model for simulation of color vision deficiency,” IEEE Transactions on Visualization and Computer Graphics 15(6), 1291–1298 (2009). [CrossRef]  

20. D. B. Judd, “Fundamental studies of color vision from 1860 to 1960,” in Proceedings of the National Academy of Sciences of the United States of America (1966), pp. 1313–1330.

21. C. Ingling and B. Huong-Peng-Tsou, “Orthogonal combination of the three visual channels,” Vision Res. 17(9), 1075–1082 (1977). [CrossRef]  

22. T. T. Berendschot, J. van de Kraats, and D. van Norren, “Foveal cone mosaic and visual pigment density in dichromats,” The Journal of Physiology 492(1), 307–314 (1996). [CrossRef]  

23. D. B. Judd, “Response functions for types of vision according to the Müller theory,” J. Res. Natl. Bur. Std. 42(1), 1–16 (1949). [CrossRef]  

24. M. R. Luo, G. Cui, and C. Li, “Uniform colour spaces based on CIECAM02 colour appearance model,” Color Res. Appl. 31(4), 320–330 (2006). [CrossRef]  

25. M. Safdar, G. Cui, Y. J. Kim, and M. R. Luo, “Perceptually uniform color space for image signals including high dynamic range and wide gamut,” Opt. Express 25(13), 15131–15151 (2017). [CrossRef]  

26. CIE, “Guidelines for the evaluation of gamut mapping algorithms,” in CIE Pub. 156 (2003).

27. S. Ishihara, “Tests for Color Blindness,” Am. J. Ophthalmol. 1(5), 376 (1918). [CrossRef]  

28. D. Farnsworth, “The Farnsworth-Munsell 100-hue and dichotomous tests for color vision,” J. Opt. Soc. Am. 33(10), 568–578 (1943). [CrossRef]  

29. C. Li, Z. Li, Z. Wang, Y. Xu, M. R. Luo, G. Cui, M. Melgosa, M. H. Brill, and M. Pointer, “Comprehensive color solutions: CAM16, CAT16, and CAM16-UCS,” Color Res Appl 42(6), 703–718 (2017). [CrossRef]  

30. A. J. Vingrys and P. E. King-Smith, “A quantitative scoring technique for panel tests of color vision,” Invest. Ophthalmol. Visual Sci. 29, 50–63 (1988).

31. H. Yaguchi, J. Luo, M. Kato, and Y. Mizokami, “Computerized simulation of color appearance for anomalous trichromats using the multispectral image,” J. Opt. Soc. Am. A 35(4), B278–B286 (2018). [CrossRef]  

32. B. Foutch, J. Stringham, and V. Lakshminarayanan, “A new quantitative technique for grading Farnsworth D-15 color panel tests,” J. Mod. Opt. 58(19-20), 1755–1763 (2011). [CrossRef]  

33. S. Fmsa and S. Dain, “Clinical colour vision tests,” Clinical and Experimental Optometry 87(4-5), 276–293 (2004). [CrossRef]  

34. L. Xu, B. Zhao, and M. R. Luo, “Colour gamut mapping between small and large colour gamuts: Part I. gamut compression,” Opt. Express 26(9), 11481–11495 (2018). [CrossRef]  

35. L. Xu, B. Zhao, and M. R. Luo, “Color gamut mapping between small and large color gamuts: part II. gamut extension,” Opt. Express 26(13), 17335–17349 (2018). [CrossRef]  

References

  • View by:

  1. T. Rajalakshmi and S. Prince, “Physiological modeling for detecting degree of perception of a color-deficient person,” Proc Inst Mech Eng H 231(4), 276–285 (2017).
    [Crossref]
  2. M. Rodriguez-Carmona, Genetics of Photoreceptors, Genetics and Color Vision Deficiencies, Genes of Cone Photopigments, Genes and Cones (Springer, 2015).
  3. C. Rigden, “The eye of the beholder-designing for colour blind,” British Telecommunications Engineering 17, 291–295 (1999).
  4. J. I. You and K. Park, “Image processing with color compensation using LCD display for color vision deficiency,” J. Display Technol. 12(6), 562–566 (2016).
    [Crossref]
  5. G. R. Kuhn, M. M. Oliveira, and L. A. F. Fernandes, “An efficient naturalness-preserving image-recoloring method for dichromats,” IEEE Transactions on Visualization and Computer Graphics 14(6), 1747–1754 (2008).
    [Crossref]
  6. M. G. Ribeiro and A. J. P. Gomes, “A skillet-based recoloring algorithm for dichromats,” in 15th International Conference on e-Health Networking(2013), pp. 702–706.
  7. M. Meng and G. Tanaka, “Proposal of minimization problem based lightness modification for protanopia and deuteranopia,” in International Symposium on Intelligent Signal Processing and Communication Systems(2016), pp. 1–6.
  8. C. Huang, K. Chiu, and C. Chen, “Temporal color consistency-based video reproduction for dichromats,” IEEE Trans. Multimedia 13(5), 950–960 (2011).
    [Crossref]
  9. D. Miyazaki, S. Taomoto, and S. Hiura, “Extending the visibility of dichromats using histogram equalization of hue value defined for dichromats,” Int. J. Image Grap. 19(03), 1950016 (2019).
    [Crossref]
  10. G. M. Culp, “Increasing accessibility for map readers with acquired and inherited colour vision deficiencies: a re-colouring algorithm for maps,” The Cartographic Journal 49(4), 302–311 (2012).
    [Crossref]
  11. M. F. Hassan and R. Paramesran, “Naturalness preserving image recoloring method for people with red–green deficiency,” Signal Processing: Image Communication 57, 126–133 (2017).
    [Crossref]
  12. J. T. Simon-Liedtke and I. Farup, “Evaluating color vision deficiency daltonization methods using a behavioral visual-search method,” Journal of Visual Communication and Image Representation 35, 236–247 (2016).
    [Crossref]
  13. CIE, “Enhancement of images for colour-deficient observers,” in CIE240:2020(2020).
  14. C. Lau, N. Perdu, C. Rodríguez-Pardo, S. Süsstrunk, and G. Sharma, “An interactive app for color deficient viewers,” in The International Society for Optical Engineering(2015), pp. 1–9.
  15. T. Sakamoto, “Image color reduction method for color-defective observers using a color palette composed of 20 particular colors,” in The International Society for Optical Engineering(2015), pp. 1–6.
  16. G. Machado and M. Oliveira, “Real-time temporal-coherent color contrast enhancement for dichromats,” Comp. Graph. Forum 29(3), 933–942 (2010).
    [Crossref]
  17. T. Sakamoto, “Edge enhancement filter for people with protanopic and deuteranopic vision,” in Proceedings of AIC (2012), pp. 406–409.
  18. P. Hung and N. Hiramatsu, “A colour conversion method which allows colourblind and normal-vision people share documents with colour content,” in Proceedings of the 27th Session of the CIE(2011), pp. 229–239.
  19. G. Machado, M. Oliveira, and L. Fernandes, “A physiologically-based model for simulation of color vision deficiency,” IEEE Transactions on Visualization and Computer Graphics 15(6), 1291–1298 (2009).
    [Crossref]
  20. D. B. Judd, “Fundamental studies of color vision from 1860 To 1960,” in Proceedings of the National Academy of Sciences of the United States of America(1966), pp. 1313–1330.
  21. C. Ingling and B. Huong-Peng-Tsou, “Orthogonal combination of the three visual channels,” Vision Res. 17(9), 1075–1082 (1977).
    [Crossref]
  22. T. T. Berendschot, J. van de Kraats, and D. van Norren, “Foveal cone mosaic and visual pigment density in dichromats,” The Journal of Physiology 492(1), 307–314 (1996).
    [Crossref]
  23. D. B. Judd, “Response functions for types of vision according to the Muller,” J. Res. Natl. Bur. Std. 42(1), 1–16 (1949).
    [Crossref]
  24. M. R. Luo, G. Cui, and C. Li, “Uniform colour spaces based on CIECAM02 colour appearance model,” Color Res. Appl. 31(4), 320–330 (2006).
    [Crossref]
  25. M. Safdar, G. Cui, Y. J. Kim, and M. R. Luo, “Perceptually uniform color space for image signals including high dynamic range and wide gamut,” Opt. Express 25(13), 15131–15151 (2017).
    [Crossref]
  26. CIE, “Guidelines for the evaluation of gamut mapping algorithms,” in CIE Pub.156(2003).
  27. S. Ishihara and E. J, “Tests for Color Blindness,” Am. J. Ophthalmol. 1(5), 376 (1918).
    [Crossref]
  28. D. Farnsworth, “The Farnsworth-Munsell 100-hue and dichotomous tests for color vision,” J. Opt. Soc. Am. 33(10), 568–578 (1943).
    [Crossref]
  29. C. Li, Z. Li, Z. Wang, Y. Xu, M. R. Luo, G. Cui, M. Melgosa, M. H. Brill, and M. Pointer, “Comprehensive color solutions: CAM16, CAT16, and CAM16-UCS,” Color Res Appl 42(6), 703–718 (2017).
    [Crossref]
  30. A. J. Vingrys and P. E. King-Smith, “A quantitative scoring technique for panel tests of color vision,” Invest. Ophthalmol. Visual Sci. 29, 50–63 (1988).
  31. H. Yaguchi, J. Luo, M. Kato, and Y. Mizokami, “Computerized simulation of color appearance for anomalous trichromats using the multispectral image,” J. Opt. Soc. Am. A 35(4), B278–B286 (2018).
    [Crossref]
  32. B. Foutch, J. Stringham, and V. Lakshminarayanan, “A new quantitative technique for grading Farnsworth D-15 color panel tests,” J. Mod. Opt. 58(19-20), 1755–1763 (2011).
    [Crossref]
  33. S. Fmsa and S. Dain, “Clinical colour vision tests,” Clinical and Experimental Optometry 87(4-5), 276–293 (2004).
    [Crossref]
  34. L. Xu, B. Zhao, and M. R. Luo, “Colour gamut mapping between small and large colour gamuts: Part I. gamut compression,” Opt. Express 26(9), 11481–11495 (2018).
    [Crossref]
  35. L. Xu, B. Zhao, and M. R. Luo, “Color gamut mapping between small and large color gamuts: part II. gamut extension,” Opt. Express 26(13), 17335–17349 (2018).
    [Crossref]

2019 (1)

D. Miyazaki, S. Taomoto, and S. Hiura, “Extending the visibility of dichromats using histogram equalization of hue value defined for dichromats,” Int. J. Image Grap. 19(03), 1950016 (2019).
[Crossref]

2018 (3)

2017 (4)

C. Li, Z. Li, Z. Wang, Y. Xu, M. R. Luo, G. Cui, M. Melgosa, M. H. Brill, and M. Pointer, “Comprehensive color solutions: CAM16, CAT16, and CAM16-UCS,” Color Res Appl 42(6), 703–718 (2017).
[Crossref]

M. Safdar, G. Cui, Y. J. Kim, and M. R. Luo, “Perceptually uniform color space for image signals including high dynamic range and wide gamut,” Opt. Express 25(13), 15131–15151 (2017).
[Crossref]

T. Rajalakshmi and S. Prince, “Physiological modeling for detecting degree of perception of a color-deficient person,” Proc Inst Mech Eng H 231(4), 276–285 (2017).
[Crossref]

M. F. Hassan and R. Paramesran, “Naturalness preserving image recoloring method for people with red–green deficiency,” Signal Processing: Image Communication 57, 126–133 (2017).
[Crossref]

2016 (2)

J. T. Simon-Liedtke and I. Farup, “Evaluating color vision deficiency daltonization methods using a behavioral visual-search method,” Journal of Visual Communication and Image Representation 35, 236–247 (2016).
[Crossref]

J. I. You and K. Park, “Image processing with color compensation using LCD display for color vision deficiency,” J. Display Technol. 12(6), 562–566 (2016).
[Crossref]

2012 (1)

G. M. Culp, “Increasing accessibility for map readers with acquired and inherited colour vision deficiencies: a re-colouring algorithm for maps,” The Cartographic Journal 49(4), 302–311 (2012).
[Crossref]

2011 (2)

C. Huang, K. Chiu, and C. Chen, “Temporal color consistency-based video reproduction for dichromats,” IEEE Trans. Multimedia 13(5), 950–960 (2011).
[Crossref]

B. Foutch, J. Stringham, and V. Lakshminarayanan, “A new quantitative technique for grading Farnsworth D-15 color panel tests,” J. Mod. Opt. 58(19-20), 1755–1763 (2011).
[Crossref]

2010 (1)

G. Machado and M. Oliveira, “Real-time temporal-coherent color contrast enhancement for dichromats,” Comp. Graph. Forum 29(3), 933–942 (2010).
[Crossref]

2009 (1)

G. Machado, M. Oliveira, and L. Fernandes, “A physiologically-based model for simulation of color vision deficiency,” IEEE Transactions on Visualization and Computer Graphics 15(6), 1291–1298 (2009).
[Crossref]

2008 (1)

G. R. Kuhn, M. M. Oliveira, and L. A. F. Fernandes, “An efficient naturalness-preserving image-recoloring method for dichromats,” IEEE Transactions on Visualization and Computer Graphics 14(6), 1747–1754 (2008).
[Crossref]

2006 (1)

M. R. Luo, G. Cui, and C. Li, “Uniform colour spaces based on CIECAM02 colour appearance model,” Color Res. Appl. 31(4), 320–330 (2006).
[Crossref]

2004 (1)

S. Fmsa and S. Dain, “Clinical colour vision tests,” Clinical and Experimental Optometry 87(4-5), 276–293 (2004).
[Crossref]

1999 (1)

C. Rigden, “The eye of the beholder-designing for colour blind,” British Telecommunications Engineering 17, 291–295 (1999).

1996 (1)

T. T. Berendschot, J. van de Kraats, and D. van Norren, “Foveal cone mosaic and visual pigment density in dichromats,” The Journal of Physiology 492(1), 307–314 (1996).
[Crossref]

1988 (1)

A. J. Vingrys and P. E. King-Smith, “A quantitative scoring technique for panel tests of color vision,” Invest. Ophthalmol. Visual Sci. 29, 50–63 (1988).

1977 (1)

C. Ingling and B. Huong-Peng-Tsou, “Orthogonal combination of the three visual channels,” Vision Res. 17(9), 1075–1082 (1977).
[Crossref]

1949 (1)

D. B. Judd, “Response functions for types of vision according to the Muller,” J. Res. Natl. Bur. Std. 42(1), 1–16 (1949).
[Crossref]

1943 (1)

1918 (1)

S. Ishihara and E. J, “Tests for Color Blindness,” Am. J. Ophthalmol. 1(5), 376 (1918).
[Crossref]

Berendschot, T. T.

T. T. Berendschot, J. van de Kraats, and D. van Norren, “Foveal cone mosaic and visual pigment density in dichromats,” The Journal of Physiology 492(1), 307–314 (1996).
[Crossref]

Brill, M. H.

C. Li, Z. Li, Z. Wang, Y. Xu, M. R. Luo, G. Cui, M. Melgosa, M. H. Brill, and M. Pointer, “Comprehensive color solutions: CAM16, CAT16, and CAM16-UCS,” Color Res Appl 42(6), 703–718 (2017).
[Crossref]

Chen, C.

C. Huang, K. Chiu, and C. Chen, “Temporal color consistency-based video reproduction for dichromats,” IEEE Trans. Multimedia 13(5), 950–960 (2011).
[Crossref]

Chiu, K.

C. Huang, K. Chiu, and C. Chen, “Temporal color consistency-based video reproduction for dichromats,” IEEE Trans. Multimedia 13(5), 950–960 (2011).
[Crossref]

Cui, G.

C. Li, Z. Li, Z. Wang, Y. Xu, M. R. Luo, G. Cui, M. Melgosa, M. H. Brill, and M. Pointer, “Comprehensive color solutions: CAM16, CAT16, and CAM16-UCS,” Color Res Appl 42(6), 703–718 (2017).
[Crossref]

M. Safdar, G. Cui, Y. J. Kim, and M. R. Luo, “Perceptually uniform color space for image signals including high dynamic range and wide gamut,” Opt. Express 25(13), 15131–15151 (2017).
[Crossref]

M. R. Luo, G. Cui, and C. Li, “Uniform colour spaces based on CIECAM02 colour appearance model,” Color Res. Appl. 31(4), 320–330 (2006).
[Crossref]

Culp, G. M.

G. M. Culp, “Increasing accessibility for map readers with acquired and inherited colour vision deficiencies: a re-colouring algorithm for maps,” The Cartographic Journal 49(4), 302–311 (2012).
[Crossref]

Dain, S.

S. Fmsa and S. Dain, “Clinical colour vision tests,” Clinical and Experimental Optometry 87(4-5), 276–293 (2004).
[Crossref]

Farnsworth, D.

Farup, I.

J. T. Simon-Liedtke and I. Farup, “Evaluating color vision deficiency daltonization methods using a behavioral visual-search method,” Journal of Visual Communication and Image Representation 35, 236–247 (2016).
[Crossref]

Fernandes, L.

G. Machado, M. Oliveira, and L. Fernandes, “A physiologically-based model for simulation of color vision deficiency,” IEEE Transactions on Visualization and Computer Graphics 15(6), 1291–1298 (2009).
[Crossref]

Fernandes, L. A. F.

G. R. Kuhn, M. M. Oliveira, and L. A. F. Fernandes, “An efficient naturalness-preserving image-recoloring method for dichromats,” IEEE Transactions on Visualization and Computer Graphics 14(6), 1747–1754 (2008).
[Crossref]

Fmsa, S.

S. Fmsa and S. Dain, “Clinical colour vision tests,” Clinical and Experimental Optometry 87(4-5), 276–293 (2004).
[Crossref]

Foutch, B.

B. Foutch, J. Stringham, and V. Lakshminarayanan, “A new quantitative technique for grading Farnsworth D-15 color panel tests,” J. Mod. Opt. 58(19-20), 1755–1763 (2011).
[Crossref]

Gomes, A. J. P.

M. G. Ribeiro and A. J. P. Gomes, “A skillet-based recoloring algorithm for dichromats,” in 15th International Conference on e-Health Networking(2013), pp. 702–706.

Hassan, M. F.

M. F. Hassan and R. Paramesran, “Naturalness preserving image recoloring method for people with red–green deficiency,” Signal Processing: Image Communication 57, 126–133 (2017).
[Crossref]

Hiramatsu, N.

P. Hung and N. Hiramatsu, “A colour conversion method which allows colourblind and normal-vision people share documents with colour content,” in Proceedings of the 27th Session of the CIE(2011), pp. 229–239.

Hiura, S.

D. Miyazaki, S. Taomoto, and S. Hiura, “Extending the visibility of dichromats using histogram equalization of hue value defined for dichromats,” Int. J. Image Grap. 19(03), 1950016 (2019).
[Crossref]

Huang, C.

C. Huang, K. Chiu, and C. Chen, “Temporal color consistency-based video reproduction for dichromats,” IEEE Trans. Multimedia 13(5), 950–960 (2011).
[Crossref]

Hung, P.

P. Hung and N. Hiramatsu, “A colour conversion method which allows colourblind and normal-vision people share documents with colour content,” in Proceedings of the 27th Session of the CIE(2011), pp. 229–239.

Huong-Peng-Tsou, B.

C. Ingling and B. Huong-Peng-Tsou, “Orthogonal combination of the three visual channels,” Vision Res. 17(9), 1075–1082 (1977).
[Crossref]

Ingling, C.

C. Ingling and B. Huong-Peng-Tsou, “Orthogonal combination of the three visual channels,” Vision Res. 17(9), 1075–1082 (1977).
[Crossref]

Ishihara, S.

S. Ishihara and E. J, “Tests for Color Blindness,” Am. J. Ophthalmol. 1(5), 376 (1918).
[Crossref]

J, E.

S. Ishihara and E. J, “Tests for Color Blindness,” Am. J. Ophthalmol. 1(5), 376 (1918).
[Crossref]

Judd, D. B.

D. B. Judd, “Response functions for types of vision according to the Muller,” J. Res. Natl. Bur. Std. 42(1), 1–16 (1949).
[Crossref]

D. B. Judd, “Fundamental studies of color vision from 1860 To 1960,” in Proceedings of the National Academy of Sciences of the United States of America(1966), pp. 1313–1330.

Kato, M.

Kim, Y. J.

King-Smith, P. E.

A. J. Vingrys and P. E. King-Smith, “A quantitative scoring technique for panel tests of color vision,” Invest. Ophthalmol. Visual Sci. 29, 50–63 (1988).

Kuhn, G. R.

G. R. Kuhn, M. M. Oliveira, and L. A. F. Fernandes, “An efficient naturalness-preserving image-recoloring method for dichromats,” IEEE Transactions on Visualization and Computer Graphics 14(6), 1747–1754 (2008).
[Crossref]

Lakshminarayanan, V.

B. Foutch, J. Stringham, and V. Lakshminarayanan, “A new quantitative technique for grading Farnsworth D-15 color panel tests,” J. Mod. Opt. 58(19-20), 1755–1763 (2011).
[Crossref]

Lau, C.

C. Lau, N. Perdu, C. Rodríguez-Pardo, S. Süsstrunk, and G. Sharma, “An interactive app for color deficient viewers,” in The International Society for Optical Engineering(2015), pp. 1–9.

Li, C.

C. Li, Z. Li, Z. Wang, Y. Xu, M. R. Luo, G. Cui, M. Melgosa, M. H. Brill, and M. Pointer, “Comprehensive color solutions: CAM16, CAT16, and CAM16-UCS,” Color Res Appl 42(6), 703–718 (2017).
[Crossref]

M. R. Luo, G. Cui, and C. Li, “Uniform colour spaces based on CIECAM02 colour appearance model,” Color Res. Appl. 31(4), 320–330 (2006).
[Crossref]

Li, Z.

C. Li, Z. Li, Z. Wang, Y. Xu, M. R. Luo, G. Cui, M. Melgosa, M. H. Brill, and M. Pointer, “Comprehensive color solutions: CAM16, CAT16, and CAM16-UCS,” Color Res Appl 42(6), 703–718 (2017).
[Crossref]

Luo, J.

Luo, M. R.

Machado, G.

G. Machado and M. Oliveira, “Real-time temporal-coherent color contrast enhancement for dichromats,” Comp. Graph. Forum 29(3), 933–942 (2010).
[Crossref]

G. Machado, M. Oliveira, and L. Fernandes, “A physiologically-based model for simulation of color vision deficiency,” IEEE Transactions on Visualization and Computer Graphics 15(6), 1291–1298 (2009).
[Crossref]

Melgosa, M.

C. Li, Z. Li, Z. Wang, Y. Xu, M. R. Luo, G. Cui, M. Melgosa, M. H. Brill, and M. Pointer, “Comprehensive color solutions: CAM16, CAT16, and CAM16-UCS,” Color Res Appl 42(6), 703–718 (2017).
[Crossref]

Meng, M.

M. Meng and G. Tanaka, “Proposal of minimization problem based lightness modification for protanopia and deuteranopia,” in International Symposium on Intelligent Signal Processing and Communication Systems(2016), pp. 1–6.

Miyazaki, D.

D. Miyazaki, S. Taomoto, and S. Hiura, “Extending the visibility of dichromats using histogram equalization of hue value defined for dichromats,” Int. J. Image Grap. 19(03), 1950016 (2019).
[Crossref]

Mizokami, Y.

Oliveira, M.

G. Machado and M. Oliveira, “Real-time temporal-coherent color contrast enhancement for dichromats,” Comp. Graph. Forum 29(3), 933–942 (2010).
[Crossref]

G. Machado, M. Oliveira, and L. Fernandes, “A physiologically-based model for simulation of color vision deficiency,” IEEE Transactions on Visualization and Computer Graphics 15(6), 1291–1298 (2009).
[Crossref]

Oliveira, M. M.

G. R. Kuhn, M. M. Oliveira, and L. A. F. Fernandes, “An efficient naturalness-preserving image-recoloring method for dichromats,” IEEE Transactions on Visualization and Computer Graphics 14(6), 1747–1754 (2008).
[Crossref]

Paramesran, R.

M. F. Hassan and R. Paramesran, “Naturalness preserving image recoloring method for people with red–green deficiency,” Signal Processing: Image Communication 57, 126–133 (2017).
[Crossref]

Park, K.

Perdu, N.

C. Lau, N. Perdu, C. Rodríguez-Pardo, S. Süsstrunk, and G. Sharma, “An interactive app for color deficient viewers,” in The International Society for Optical Engineering(2015), pp. 1–9.

Pointer, M.

C. Li, Z. Li, Z. Wang, Y. Xu, M. R. Luo, G. Cui, M. Melgosa, M. H. Brill, and M. Pointer, “Comprehensive color solutions: CAM16, CAT16, and CAM16-UCS,” Color Res Appl 42(6), 703–718 (2017).
[Crossref]

Prince, S.

T. Rajalakshmi and S. Prince, “Physiological modeling for detecting degree of perception of a color-deficient person,” Proc Inst Mech Eng H 231(4), 276–285 (2017).
[Crossref]

Rajalakshmi, T.

T. Rajalakshmi and S. Prince, “Physiological modeling for detecting degree of perception of a color-deficient person,” Proc Inst Mech Eng H 231(4), 276–285 (2017).
[Crossref]

Ribeiro, M. G.

M. G. Ribeiro and A. J. P. Gomes, “A skillet-based recoloring algorithm for dichromats,” in 15th International Conference on e-Health Networking(2013), pp. 702–706.

Rigden, C.

C. Rigden, “The eye of the beholder-designing for colour blind,” British Telecommunications Engineering 17, 291–295 (1999).

Rodriguez-Carmona, M.

M. Rodriguez-Carmona, Genetics of Photoreceptors, Genetics and Color Vision Deficiencies, Genes of Cone Photopigments, Genes and Cones (Springer, 2015).

Rodríguez-Pardo, C.

C. Lau, N. Perdu, C. Rodríguez-Pardo, S. Süsstrunk, and G. Sharma, “An interactive app for color deficient viewers,” in The International Society for Optical Engineering(2015), pp. 1–9.

Safdar, M.

Sakamoto, T.

T. Sakamoto, “Image color reduction method for color-defective observers using a color palette composed of 20 particular colors,” in The International Society for Optical Engineering(2015), pp. 1–6.

T. Sakamoto, “Edge enhancement filter for people with protanopic and deuteranopic vision,” in Proceedings of AIC (2012), pp. 406–409.

Sharma, G.

C. Lau, N. Perdu, C. Rodríguez-Pardo, S. Süsstrunk, and G. Sharma, “An interactive app for color deficient viewers,” in The International Society for Optical Engineering(2015), pp. 1–9.

Simon-Liedtke, J. T.

J. T. Simon-Liedtke and I. Farup, “Evaluating color vision deficiency daltonization methods using a behavioral visual-search method,” Journal of Visual Communication and Image Representation 35, 236–247 (2016).
[Crossref]

Stringham, J.

B. Foutch, J. Stringham, and V. Lakshminarayanan, “A new quantitative technique for grading Farnsworth D-15 color panel tests,” J. Mod. Opt. 58(19-20), 1755–1763 (2011).
[Crossref]

Süsstrunk, S.

C. Lau, N. Perdu, C. Rodríguez-Pardo, S. Süsstrunk, and G. Sharma, “An interactive app for color deficient viewers,” in The International Society for Optical Engineering(2015), pp. 1–9.

Tanaka, G.

M. Meng and G. Tanaka, “Proposal of minimization problem based lightness modification for protanopia and deuteranopia,” in International Symposium on Intelligent Signal Processing and Communication Systems(2016), pp. 1–6.

Taomoto, S.

D. Miyazaki, S. Taomoto, and S. Hiura, “Extending the visibility of dichromats using histogram equalization of hue value defined for dichromats,” Int. J. Image Grap. 19(03), 1950016 (2019).
[Crossref]

van de Kraats, J.

T. T. Berendschot, J. van de Kraats, and D. van Norren, “Foveal cone mosaic and visual pigment density in dichromats,” The Journal of Physiology 492(1), 307–314 (1996).
[Crossref]

van Norren, D.

T. T. Berendschot, J. van de Kraats, and D. van Norren, “Foveal cone mosaic and visual pigment density in dichromats,” The Journal of Physiology 492(1), 307–314 (1996).
[Crossref]

Vingrys, A. J.

A. J. Vingrys and P. E. King-Smith, “A quantitative scoring technique for panel tests of color vision,” Invest. Ophthalmol. Visual Sci. 29, 50–63 (1988).

Wang, Z.

C. Li, Z. Li, Z. Wang, Y. Xu, M. R. Luo, G. Cui, M. Melgosa, M. H. Brill, and M. Pointer, “Comprehensive color solutions: CAM16, CAT16, and CAM16-UCS,” Color Res Appl 42(6), 703–718 (2017).
[Crossref]

Xu, L.

Xu, Y.

C. Li, Z. Li, Z. Wang, Y. Xu, M. R. Luo, G. Cui, M. Melgosa, M. H. Brill, and M. Pointer, “Comprehensive color solutions: CAM16, CAT16, and CAM16-UCS,” Color Res Appl 42(6), 703–718 (2017).
[Crossref]

Yaguchi, H.

You, J. I.

Zhao, B.

Am. J. Ophthalmol. (1)

S. Ishihara and E. J, “Tests for Color Blindness,” Am. J. Ophthalmol. 1(5), 376 (1918).
[Crossref]

British Telecommunications Engineering (1)

C. Rigden, “The eye of the beholder-designing for colour blind,” British Telecommunications Engineering 17, 291–295 (1999).

Clinical and Experimental Optometry (1)

S. Fmsa and S. Dain, “Clinical colour vision tests,” Clinical and Experimental Optometry 87(4-5), 276–293 (2004).
[Crossref]

Color Res Appl (1)

C. Li, Z. Li, Z. Wang, Y. Xu, M. R. Luo, G. Cui, M. Melgosa, M. H. Brill, and M. Pointer, “Comprehensive color solutions: CAM16, CAT16, and CAM16-UCS,” Color Res Appl 42(6), 703–718 (2017).
[Crossref]

Color Res. Appl. (1)

M. R. Luo, G. Cui, and C. Li, “Uniform colour spaces based on CIECAM02 colour appearance model,” Color Res. Appl. 31(4), 320–330 (2006).
[Crossref]

Comp. Graph. Forum (1)

G. Machado and M. Oliveira, “Real-time temporal-coherent color contrast enhancement for dichromats,” Comp. Graph. Forum 29(3), 933–942 (2010).
[Crossref]

IEEE Trans. Multimedia (1)

C. Huang, K. Chiu, and C. Chen, “Temporal color consistency-based video reproduction for dichromats,” IEEE Trans. Multimedia 13(5), 950–960 (2011).
[Crossref]

IEEE Transactions on Visualization and Computer Graphics (2)

G. R. Kuhn, M. M. Oliveira, and L. A. F. Fernandes, “An efficient naturalness-preserving image-recoloring method for dichromats,” IEEE Transactions on Visualization and Computer Graphics 14(6), 1747–1754 (2008).
[Crossref]

G. Machado, M. Oliveira, and L. Fernandes, “A physiologically-based model for simulation of color vision deficiency,” IEEE Transactions on Visualization and Computer Graphics 15(6), 1291–1298 (2009).
[Crossref]

Int. J. Image Grap. (1)

D. Miyazaki, S. Taomoto, and S. Hiura, “Extending the visibility of dichromats using histogram equalization of hue value defined for dichromats,” Int. J. Image Grap. 19(03), 1950016 (2019).
[Crossref]

Invest. Ophthalmol. Visual Sci. (1)

A. J. Vingrys and P. E. King-Smith, “A quantitative scoring technique for panel tests of color vision,” Invest. Ophthalmol. Visual Sci. 29, 50–63 (1988).

J. Display Technol. (1)

J. Mod. Opt. (1)

B. Foutch, J. Stringham, and V. Lakshminarayanan, “A new quantitative technique for grading Farnsworth D-15 color panel tests,” J. Mod. Opt. 58(19-20), 1755–1763 (2011).
[Crossref]

J. Opt. Soc. Am. (1)

J. Opt. Soc. Am. A (1)

J. Res. Natl. Bur. Std. (1)

D. B. Judd, “Response functions for types of vision according to the Muller,” J. Res. Natl. Bur. Std. 42(1), 1–16 (1949).
[Crossref]

Journal of Visual Communication and Image Representation (1)

J. T. Simon-Liedtke and I. Farup, “Evaluating color vision deficiency daltonization methods using a behavioral visual-search method,” Journal of Visual Communication and Image Representation 35, 236–247 (2016).
[Crossref]

Opt. Express (3)

Proc Inst Mech Eng H (1)

T. Rajalakshmi and S. Prince, “Physiological modeling for detecting degree of perception of a color-deficient person,” Proc Inst Mech Eng H 231(4), 276–285 (2017).
[Crossref]

Signal Processing: Image Communication (1)

M. F. Hassan and R. Paramesran, “Naturalness preserving image recoloring method for people with red–green deficiency,” Signal Processing: Image Communication 57, 126–133 (2017).

The Cartographic Journal (1)

G. M. Culp, “Increasing accessibility for map readers with acquired and inherited colour vision deficiencies: a re-colouring algorithm for maps,” The Cartographic Journal 49(4), 302–311 (2012).

The Journal of Physiology (1)

T. T. Berendschot, J. van de Kraats, and D. van Norren, “Foveal cone mosaic and visual pigment density in dichromats,” The Journal of Physiology 492(1), 307–314 (1996).

Vision Res. (1)

C. Ingling and B. Huong-Peng-Tsou, “Orthogonal combination of the three visual channels,” Vision Res. 17(9), 1075–1082 (1977).

Other (10)

CIE, “Guidelines for the evaluation of gamut mapping algorithms,” CIE Publication 156 (2003).

M. G. Ribeiro and A. J. P. Gomes, “A skillet-based recoloring algorithm for dichromats,” in 15th International Conference on e-Health Networking (2013), pp. 702–706.

M. Meng and G. Tanaka, “Proposal of minimization problem based lightness modification for protanopia and deuteranopia,” in International Symposium on Intelligent Signal Processing and Communication Systems (2016), pp. 1–6.

M. Rodriguez-Carmona, Genetics of Photoreceptors, Genetics and Color Vision Deficiencies, Genes of Cone Photopigments, Genes and Cones (Springer, 2015).

CIE, “Enhancement of images for colour-deficient observers,” CIE 240:2020 (2020).

C. Lau, N. Perdu, C. Rodríguez-Pardo, S. Süsstrunk, and G. Sharma, “An interactive app for color deficient viewers,” in The International Society for Optical Engineering (2015), pp. 1–9.

T. Sakamoto, “Image color reduction method for color-defective observers using a color palette composed of 20 particular colors,” in The International Society for Optical Engineering (2015), pp. 1–6.

D. B. Judd, “Fundamental studies of color vision from 1860 to 1960,” in Proceedings of the National Academy of Sciences of the United States of America (1966), pp. 1313–1330.

T. Sakamoto, “Edge enhancement filter for people with protanopic and deuteranopic vision,” in Proceedings of AIC (2012), pp. 406–409.

P. Hung and N. Hiramatsu, “A colour conversion method which allows colourblind and normal-vision people share documents with colour content,” in Proceedings of the 27th Session of the CIE (2011), pp. 229–239.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (12)

Fig. 1. The workflow to generate the perceived gamut of CNOs and CDOs.
Fig. 2. The gamut comparison between CNOs and CDOs. Two protanomalous CDOs with 10 nm and 15 nm wavelength shifts are included.
Fig. 3. The correction for gamuts obtained using the simulation model. The original gamut is shown as a solid line, the gamut acquired from the simulation model as a dashed line, and the corrected gamut as a star-dashed line.
Fig. 4. Mapping towards the lightness axis. Po is the original color on the source gamut boundary, and Pd is the mapped color on the destination gamut boundary. E is the mapping centre on the lightness axis, with the same lightness value as Po. The length of EPs equals 90% of the length of EPd.
Fig. 5. The comparison between the original image and its enhanced version. The wavelength shift is set at 10 nm and the CDO is a protanomalous observer. The top row represents the views from the CNO and the bottom row simulates the views from the CDO.
Fig. 6. The Ishihara test images. All these images contain easily confused colors.
Fig. 7. The normal images, including natural images and scientific visualization images. Image (2) is actually an Ishihara test image.
Fig. 8. Illustration of the abbreviated samples of ZJU 50-hue Test.
Fig. 9. The experimental setup for the normal images. The original image is in the center, with the two reproductions randomly assigned to either side.
Fig. 10. The comparison between the simulated reproduction and gamut mapped reproduction of Fig. 7(5).
Fig. 11. The comparison between the simulated images and the gamut-mapped reproductions for PA.
Fig. 12. The comparison between the simulated images and the enhanced (gamut-mapped) reproductions for DA. The error bar represents one standard deviation.

Tables (3)


Table 1. Results using Ishihara images. The percentage is the number of gamut-mapped image choices divided by the number of simulated image choices. PA denotes protanomalous trichromats and DA denotes deuteranomalous trichromats.


Table 2. Results using normal images. The percentage has the same meaning as in Table 1.


Table 3. Results for the CDO experiment. A check mark means the enhanced image was selected.

Equations (7)


$$\begin{bmatrix} WS(\lambda) \\ YB(\lambda) \\ RG(\lambda) \end{bmatrix} = T_{LMS2Opp}\begin{bmatrix} L(\lambda) \\ M(\lambda) \\ S(\lambda) \end{bmatrix} \tag{1}$$

$$\begin{aligned}
WS_R &= \rho_{WS}\int \varphi_R(\lambda)\,WS(\lambda)\,d\lambda, &
WS_G &= \rho_{WS}\int \varphi_G(\lambda)\,WS(\lambda)\,d\lambda, &
WS_B &= \rho_{WS}\int \varphi_B(\lambda)\,WS(\lambda)\,d\lambda,\\
YB_R &= \rho_{YB}\int \varphi_R(\lambda)\,YB(\lambda)\,d\lambda, &
YB_G &= \rho_{YB}\int \varphi_G(\lambda)\,YB(\lambda)\,d\lambda, &
YB_B &= \rho_{YB}\int \varphi_B(\lambda)\,YB(\lambda)\,d\lambda,\\
RG_R &= \rho_{RG}\int \varphi_R(\lambda)\,RG(\lambda)\,d\lambda, &
RG_G &= \rho_{RG}\int \varphi_G(\lambda)\,RG(\lambda)\,d\lambda, &
RG_B &= \rho_{RG}\int \varphi_B(\lambda)\,RG(\lambda)\,d\lambda
\end{aligned} \tag{2}$$

$$\begin{bmatrix} WS \\ YB \\ RG \end{bmatrix} = \tau_{3\times 3}\begin{bmatrix} R \\ G \\ B \end{bmatrix} \tag{3}$$

$$\tau_{3\times 3} = \begin{bmatrix} WS_R & WS_G & WS_B \\ YB_R & YB_G & YB_B \\ RG_R & RG_G & RG_B \end{bmatrix} \tag{4}$$

$$\begin{bmatrix} R \\ G \\ B \end{bmatrix}_{CDO} = \tau_{CDO}^{-1}\,\tau_{CNO}\begin{bmatrix} R \\ G \\ B \end{bmatrix}_{CNO} \tag{5}$$

$$\begin{bmatrix} R \\ G \\ B \end{bmatrix}_{CNO} = \tau_{CNO}^{-1}\,\tau_{CDO}\begin{bmatrix} R \\ G \\ B \end{bmatrix}_{CDO} \tag{6}$$

$$\overline{EP'} = \begin{cases} \overline{EP}, & \overline{EP} \le 0.9\,\overline{EP_d} \\[6pt] 0.9\,\overline{EP_d} + \dfrac{\overline{EP} - 0.9\,\overline{EP_d}}{\overline{EP_o} - 0.9\,\overline{EP_d}} \cdot \dfrac{\overline{EP_d}}{10}, & \overline{EP} > 0.9\,\overline{EP_d} \end{cases} \tag{7}$$
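To make the pipeline concrete, the sketch below implements the CDO simulation transform (a CNO's RGB mapped through the CNO opponent matrix and the inverse of the CDO one) and the 90% soft-clip gamut compression along the mapping line through the centre E. The τ matrices here are made-up placeholders; in the method they come from the opponent-response integrals over the display primaries' spectra.

```python
import numpy as np

# Placeholder tau matrices for illustration only; the real ones are built
# from integrals of the display primaries' spectral power distributions
# against the observer's WS/YB/RG opponent responses.
TAU_CNO = np.array([[0.60,  0.30, 0.10],
                    [0.40,  0.40, -0.20],
                    [0.30, -0.50, 0.05]])
TAU_CDO = np.array([[0.60,  0.30, 0.10],
                    [0.40,  0.40, -0.20],
                    [0.15, -0.30, 0.05]])   # weakened red-green row

def simulate_cdo(rgb_cno):
    """CDO simulation: the RGB that evokes, in the CDO's opponent space,
    the same response that rgb_cno evokes in a CNO."""
    return np.linalg.inv(TAU_CDO) @ TAU_CNO @ np.asarray(rgb_cno, float)

def compress_toward_centre(ep, ep_o, ep_d):
    """Soft-clip compression along the mapping line through E.

    ep   -- distance |EP| of the colour from the mapping centre E
    ep_o -- distance |EPo| to the source-gamut boundary point
    ep_d -- distance |EPd| to the destination-gamut boundary point
    """
    knee = 0.9 * ep_d
    if ep <= knee:
        return ep                        # inner 90% is left untouched
    # colours beyond the knee are squeezed linearly into the last 10%
    return knee + (ep - knee) / (ep_o - knee) * (ep_d / 10.0)
```

A colour exactly on the source boundary (ep equal to ep_o) lands exactly on the destination boundary, while everything inside 90% of the destination gamut is left unchanged, which is what preserves the perceived hue of already-reproducible colours.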