Abstract

Based on an extensive magnitude estimation experiment, a new color appearance model for unrelated self-luminous stimuli, CAM15u, has been designed. With the spectral radiance of the stimulus as its only input, the model predicts brightness, hue, colorfulness, saturation and amount of white. The main features of the model are the use of the CIE 2006 cone fundamentals, the inclusion of an absolute brightness scale and a very simple calculation procedure. CAM15u performs considerably better than existing models and has been confirmed in a separate validation experiment. The model is applicable to unrelated self-luminous stimuli with an angular extent of 10° and a photopic, but non-glare-inducing, luminance level.

© 2015 Optical Society of America

1. Introduction

The appearance of a stimulus as perceived by an observer is the result of complex and multistage processes associated with human color vision, such as the sensitivity of the cones, cone-compression, opponent modulation, non-linearity of the human visual system, the color and intensity of the background and surround, the intensity, size and context of the stimulus, etc. Physical measurements of the stimulus must be combined with measurements of the prevailing viewing conditions and models of human visual perception in order to make reasonable predictions of the perceptual attributes [1]. This is precisely the task a color appearance model (hereafter abbreviated as CAM) is designed for.

Unrelated colors are colors perceived to belong to areas seen in isolation from any other color [2]. A typical example of an unrelated color is a self-luminous stimulus surrounded by a dark background, like a marine or traffic signal light viewed during a dark night. Due to the absence of a real luminous background, the description of the perception of these stimuli can be considered as being rather simple and elementary. More than 15 years ago, a CAM for unrelated colors was designed: CAM97u [3, 4]. However, the model was not tested due to the lack of data [5]. In 2012, based on a visual experiment, some small improvements to CAM97u were introduced leading to the CAMFu model [6].

Recently, it has been shown that the unrelated models CAM97u and CAMFu were unable to accurately predict the perceived brightness of unrelated stimuli [7], mainly because they underestimate the Helmholtz-Kohlrausch effect (hereafter abbreviated as the H-K effect). The H-K effect refers to a perceived increase in brightness as the purity or colorfulness of a stimulus increases, even though its luminance is kept constant [2]. Withouck et al. [8] proposed a modified model, CAM97um, that substantially improved the brightness prediction of CAM97u simply by increasing the weight of the colorfulness contribution to brightness.

In this paper, a new CAM for unrelated self-luminous colors, CAM15u, is presented. The main features of the model are the use of the CIE 2006 cone fundamentals [9], the inclusion of an absolute brightness scale, the inclusion of the Helmholtz-Kohlrausch effect, the amount of white as an alternative perceptual attribute to saturation and colorfulness and a simplified calculation procedure. The model is applicable to unrelated self-luminous stimuli with a field of view (FOV) of 10° and a photopic, but non-glare-inducing luminance level. The model has been developed and validated using data obtained in a magnitude estimation experiment in which twenty observers have rated more than 150 unrelated self-luminous stimuli for three absolute perceptual attributes: brightness, hue and amount of white. This new CAM for unrelated colors, CAM15u, is shown to be accurate and to outperform existing models using the results of a validation experiment.

2. Experimental setup

A viewing room, 3 m wide by 5 m long by 3.5 m high, with black walls, a grey ceiling and a greyish black floor carpet (see Fig. 1 (a)) was created to generate the self-luminous unrelated stimuli for the experiment. In the center of one wall, surrounded by a dark surround, a circular stimulus with a diameter of 37 cm was created. Observers were seated at a distance of 211 cm to obtain a stimulus field of view of approximately 10°. The stimulus was produced by a number of red, green, blue and white light-emitting diodes (LEDs) mounted inside a white cylindrical cavity covered with a diffusor. By controlling the drive current of each LED, the color and luminance of the stimulus could be changed (see Fig. 1 (b)). A heat sink and active cooling ensured a sufficiently stable and reproducible light output. The same setup has also been used in previous experiments [7, 8].

 

Fig. 1 (a) Experimental setup. (b) Example of a stimulus under dark viewing conditions [8].


To create and validate a new CAM for unrelated self-luminous colors, a test set of 105 stimuli and a validation set of 52 stimuli were carefully selected (see Fig. 2). These stimuli, spectrally measured using a radiance detection head and a calibrated spectrograph, were chosen to cover a large portion of the chromaticity diagram. Their 10° luminance values were randomly selected from a 6 to 60 cd/m2 luminance range, which provides photopic stimulus viewing conditions while avoiding glare.

 

Fig. 2 CIE 1976 u′10, v′10 chromaticity coordinates of the 105 test stimuli (a) and the 52 validation stimuli (b).


3. Psychophysical experiment

3.1 Visual attributes

Generally, the color appearance of a scene is described by absolute attributes such as brightness, colorfulness, and hue and by relative attributes such as lightness, chroma, and saturation [2]. For self-luminous stimuli, brightness, hue, colorfulness and saturation are most relevant. In a series of preliminary experiments, the color terms attributed to the color appearance of unrelated colors were investigated using a panel of 10 naïve observers. Brightness (Dutch: “helderheid”) and hue (Dutch: “kleurtint”), or synonyms for them, were readily reported as perceptual attributes, and scaling these two attributes did not present any difficulties. Observers were, however, very unfamiliar with the term colorfulness (Dutch: “kleurigheid”) and were unable to evaluate this attribute without time-consuming training. To increase the relevance of a CAM for practical applications, correlates that can be easily assessed by naïve observers should be used. It turned out that instead of rating colorfulness, observers were more comfortable evaluating the “amount of white versus non-white” in a stimulus. This attribute is quite similar to the chromaticness of the Natural Colour System [10]. As a few observers also saw “grey” in some neutral stimuli, the amount of white was later extended to cover the concept “amount of neutral versus colored”.

3.2 Magnitude estimation method

Although several scaling methods can be used to evaluate a stimulus, only the magnitude estimation method permits observers to give simple, absolute values to familiar color attributes. In addition, these absolute perceptual values can be used directly to test existing color models or to develop a completely new CAM [11]. In a magnitude estimation experiment, observers are asked to quantify (e.g. numerically or graphically) their magnitude estimate of one or more perceptual attributes. It is essential that each observer clearly understands the perceptual attributes being scaled and that the observers are familiar with the scaling method. Therefore, naïve observers completed a straightforward exercise in which they were asked to rate the length of a line in comparison with a line of length 100, similar to a method described in the ASTM International standard test method for unipolar magnitude estimation of sensory attributes [12]. All observers also completed a training experiment with a set of stimuli similar to the ones used in the experiment, to help them become familiar with the rating technique as applied to colored stimuli and to make them aware of the color and luminance range.

3.3 Experiment procedure

Twenty observers, 9 female and 11 male, with ages ranging between 21 and 32 years (average 24.5) participated in the psychophysical experiment. All had normal color vision according to the Ishihara 24 plate Test for Color Blindness and the Farnsworth-Munsell 100 Hue Test (mean Total Error Score of 31, indicating all observers had an average or superior discrimination) [13]. Thirteen of them had participated in previous similar experiments, while the others were naïve with respect to the purpose of the experiment. Prior to the experiment, observers adapted to the dark viewing conditions.

To reduce the influence of fatigue in the experiment, the combined set (test and validation) of stimuli was presented in two sessions taking about 35 minutes each. In each session about 90 stimuli were presented: 10 control stimuli to estimate the intra-observer accuracy and about 80 randomly chosen stimuli from the test and validation sets. A break of about 15 minutes was offered between each session. For each session, the stimuli were randomly arranged in two series, each being evaluated by half of the observers to avoid possible bias due to the series sequence [14]. Also, as preliminary experiments had shown observers to have difficulty rating all three attributes at once, the brightness was rated separately from the hue and amount of white. About half of the observers started with scaling their two sessions for brightness, while the other half started with scaling the hue and amount of white. Each stimulus was presented to the observers for 15 seconds. Between these stimuli, the reference achromatic stimulus was shown for 5 seconds.

3.3.1 Brightness

When scaling brightness, the stimuli were rated in comparison with a 51 cd/m2 reference achromatic stimulus shown in temporal juxtaposition, to which a brightness value of 50 was attributed. The 10° luminance of this reference achromatic stimulus was chosen such that its perceived brightness (as calculated by the CAM97um model) lies approximately midway through the brightness range of all the stimuli in this experiment. The chromaticity of the reference stimulus (u′10, v′10 = 0.2111, 0.4750) was close to that of the equi-energy stimulus, SE (u′10, v′10 = 0.2105, 0.4737; ΔEu′,v′ = 0.0014). Preliminary experiments had shown that it is easier to rate the brightness immediately after the stimulus has disappeared [7]. Furthermore, by showing the reference after each stimulus presentation, errors due to memory effects were minimized. Total darkness never occurred, reducing the possibility of temporary blindness and afterimages. The following instructions were given to each observer when judging the brightness (translated from Dutch):

You will see 90 test stimuli. First a reference stimulus will be shown for 5 seconds. Each test stimulus will then be presented for 15 seconds. Between each of these 90 test stimuli, the reference stimulus will again be shown for 5 seconds. You are asked to give a value to the brightness of the test stimulus with respect to that of the reference immediately after the test stimulus disappears. The reference is assigned a brightness value of 50. A value of zero represents a dark stimulus without any brightness. There is no upper limit to the brightness value: a value of 100 represents a stimulus appearing twice as bright as the reference, a value of 25 is given to a stimulus appearing half as bright, and so on.

3.3.2 Hue and amount of white

When scaling the amount of white, observers were asked to assign a percentage of white versus colored contribution perceived in each stimulus. For hue, observers needed to identify the unique hues (red, green, yellow, blue) they could recognize in the stimulus, as well as their relative proportions: e.g. 60% red and 40% yellow for a particular orange stimulus. For the hue and amount of white, the following instructions were given, in Dutch, to the observers:

You will see 90 test stimuli. First a reference stimulus will be shown for 5 seconds. Each test stimulus is then presented for 15 seconds. Between each of these 90 test stimuli, the reference stimulus will be shown again for 5 seconds. Within the 15 seconds the test stimulus is visible, you have to provide an answer to the following questions:

How much white compared to non-white (or color) do you recognize in the stimulus? Give a percentage of the amount of white. Keep in mind that this amount of white represents the degree of neutrality. Grey or neutral stimuli can be assigned as white. Give 0% when there is only color visible in the stimulus, give 100% when there is no color present and thus only a white, a grey or a neutral stimulus is visible.

Do you see blue in the stimulus?

Do you see green in the stimulus?

Do you see red in the stimulus?

Do you see yellow in the stimulus?

When you see more than one hue, give a percentage to the proportion of each hue present in the stimulus: e.g. 60% red and 40% yellow for a particular orange stimulus.

In hue scaling experiments described in the literature, observers are typically restricted to reporting only one or two hues, with the combinations blue-yellow and red-green not allowed [11]. In our experiment, to obtain non-forced evaluations, observers were free to choose any number and combination of hues.

4. Observer data

For each attribute, the inter- and intra-observer variability was assessed using the coefficient of variation (CV), Eq. (1) [15]:

CV = 100\sqrt{\frac{1}{n}\sum_{i=1}^{n}\frac{(A_i - f B_i)^2}{\bar{A}^2}} \quad \text{with} \quad f = \frac{\sum_{i=1}^{n} A_i B_i}{\sum_{i=1}^{n} B_i^2}
where n indicates the number of data points, A the first data set and B the second data set.

For a perfect agreement between two sets of data, this CV value should be zero. The inter-observer agreement for each color attribute was assessed by calculating the CV values between each individual observer’s results and the average of all observers. For the intra-observer agreement the CV values between each individual observer’s results of the control stimuli, presented twice to each observer in a single session, were calculated. By taking the mean of all inter-observer and of all intra-observer results, the observer variability can be compared to those of other magnitude estimation experiments. Furthermore, the predictive performance of a model can be evaluated by comparing the CV coefficient between the average observer perception and the model prediction with that of the inter-observer agreement [6, 8].
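Equation (1) is straightforward to compute; below is a minimal sketch in Python (the function name is ours):

```python
import numpy as np

def cv_percent(A, B):
    """Coefficient of variation (%) of Eq. (1) between data sets A and B.

    The factor f = sum(A*B)/sum(B*B) first rescales B onto A, so CV
    measures the remaining relative scatter; identical or perfectly
    proportional data sets give CV = 0."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    f = np.sum(A * B) / np.sum(B * B)
    return 100.0 * np.sqrt(np.mean((A - f * B) ** 2) / np.mean(A) ** 2)
```

Because of the rescaling factor f, the measure is insensitive to a uniform scale difference between the two data sets, which is appropriate for magnitude estimation data.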

4.1 Brightness

The good average inter-observer CV values for the test and validation set, respectively 17% and 14%, as well as the small CV range (see Table 1), indicate that observers agreed well and had little difficulty in scaling brightness. The intra-observer variability had an average CV value of 20%. The mean CV values for inter-observer variability are slightly higher than those reported in Withouck et al. [7, 8] (respectively 11% and 13%), and better than the ones reported in Fu et al. [6] and in Koo and Kwak [16] (respectively 29% and 40%). All studies had similar conditions. The mean CV value for intra-observer agreement is slightly higher than the 15% repeatability obtained by Fu et al. [6] and the 11% short-term intra-observer agreement of Withouck et al. [8].


Table 1. Evaluation of inter- and intra-observer agreement for the test and validation set in terms of the coefficient of variation CV (%).

As proposed by ASTM International [12], the perceived brightness scaling for an average observer, Qavg,i, was obtained by calculating the geometric mean of all the observers’ brightness scaling Qobs,i for each stimulus i.

4.2 Hue (quadrature)

For hue, a quadrature scale was developed by transforming all the observers’ results onto a 0-400 scale [10, 17]: 0-100 for red-yellow, 100-200 for yellow-green, 200-300 for green-blue and 300-400 for blue-red. For example, an orange stimulus reported as 60% red and 40% yellow receives a hue quadrature value of 40. Stimuli with a median amount of white above 90 were excluded from the analysis, as most observers had difficulty recognizing hues, let alone their relative proportions, in these stimuli.
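The mapping from reported hue proportions to the 0-400 scale can be sketched as follows (a hypothetical helper; names are ours):

```python
# Start of each binary-hue segment on the 0-400 hue quadrature scale, as
# described in the text: red-yellow 0-100, yellow-green 100-200,
# green-blue 200-300 and blue-red 300-400.
SEGMENT_START = {("red", "yellow"): 0.0, ("yellow", "green"): 100.0,
                 ("green", "blue"): 200.0, ("blue", "red"): 300.0}

def reported_to_quadrature(first_hue, second_hue, pct_second):
    """Convert e.g. 60% red / 40% yellow (pct_second = 40) to H = 40."""
    return SEGMENT_START[(first_hue, second_hue)] + pct_second
```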

As mentioned above, observers were not restricted in the number or combination of perceived unique hues they could report. Although binary combinations of blue-yellow and red-green, or combinations of three or four hues, cannot be transformed onto a 0-400 scale, and were therefore excluded from the analysis, non-forced evaluation does provide interesting information about the actual perception of observers. For 16 of the 20 observers these cases almost never occurred: out of the 2512 answers, a red-green combination was reported once and a yellow-blue combination 21 times. However, four observers, i.e. 20%, showed very divergent responses under the non-forced evaluation, indicating that a hue quadrature scale may not be as representative of typical hue perception as commonly believed. These observers’ data were therefore excluded from the hue quadrature analysis: observers 5 and 20 indicated having trouble with scaling hue, as they were always thinking about mixing paints and considered themselves unreliable for scaling the attribute; observer 4 perceived a yellow hue in almost all the stimuli and thus often perceived blue and yellow together; and observer 13 often reported perceiving more than two hues in a presented stimulus. Although these four observers obtained good results in the Farnsworth-Munsell 100 Hue Test and were very dedicated to their task, their answers could not be used in this study, and all their hue related results were excluded from the analysis. The mean inter-observer CV values for the 16 other observers for the test and validation set were respectively 10% and 11%. The average intra-observer CV was 11%. These low CV values for all observers (see Table 1) indicate a good agreement. Several studies with similar experimental conditions reported comparable levels of agreement: 9% by Luo et al. [11], 12% by Koo and Kwak [16] and 15% by Fu et al. [6] for inter-observer agreement, and 6% by Fu et al. [6] for intra-observer agreement.

By calculating the arithmetic mean of all the observers’ hue quadrature responses Hobs,i for each stimulus i (with four outliers excluded) an average observer perceived hue quadrature Havg,i was obtained.

4.3 Amount of white

The CV values for the amount of white were calculated for only 18 observers (see Table 1), as two observers indicated having trouble with scaling the amount of white and their answers diverged substantially from those of the other observers. The mean inter-observer CV values (with these two outliers excluded) for the test and validation set are respectively 30% and 36%. The mean intra-observer CV was 44%. These inter-observer values are typical for this kind of attribute: e.g. values of 27% and 39% were found by Koo and Kwak [16] and by Fu et al. [6], respectively, for the colorfulness of unrelated colors. Although the amount of white may be a more familiar attribute than colorfulness, it does not generally lead to a more robust estimate. This is probably a result of the increased difficulty of quantifying the amount of white as the stimulus becomes more saturated. However, because of its familiarity and simplicity, amount of white was the preferred attribute in this experiment.

As the distribution of the ratings of the amount of white becomes more skewed near the fixed end points (0% and 100%), the median of the observers’ amount of white Wobs,i for each stimulus i (with two outliers excluded) was calculated to obtain an average observer perceived amount of white Wavg,i.

5. Development of CAM15u

Following the current understanding of human color perception, based on the results of the psychophysical experiment and inspired by other CAMs, such as CAM97u and CAMFu, a new parametrically simpler and more accurate model, CAM15u, to predict the color appearance of unrelated self-luminous colors has been developed. In what follows, the various steps of the model, as well as critical differences with previous CAMs are discussed.

5.1 Absolute, normalized cone excitations

Human color vision starts with light absorption by the photo-sensitive receptor cells in the retina. Two kinds of receptors can be distinguished: the rods and the cones. The rods, mainly responsible for scotopic vision, are sensitive to low intensity visible radiation (luminance below 5 cd/m2). The cones, dominating photopic vision, come in three different types, typically referred to as the ρ, γ and β cones, with peak sensitivities located around 569 nm, 541 nm and 448 nm, respectively. These cones are also denoted by other symbols, such as LMS or RGB, suggestive of long-, middle- and short-wavelength or red, green and blue sensitivities. In basic colorimetry, the color of a stimulus is usually specified in terms of the CIE tristimulus values XYZ. The latter are calculated from the CIE color matching functions (CMFs) obtained in psychophysical experiments using either 2° or 10° stimuli. They can be linearly transformed to LMS type CMFs, called cone fundamentals. The latter are the effective cone excitations, taking into account the spectral absorption characteristics of the ocular media and the macular pigment, and the self-screening in the outer segment of the photoreceptors. Recently, the CIE provided a set of cone fundamentals specifically suited to 10° stimuli [18]. These CIE 2006 cone fundamentals were derived from the best set of color-matching functions experimentally collected on a 10° field [19–21]. Although the use of the CIE 2006 cone fundamentals does not significantly change the results compared to the use of the CIE 1964 XYZ CMFs (see Appendix C), they are the most recent fundamentals proposed by the CIE. In the CAM15u model, they are used to calculate the fundamental cone excitations, ρ10, γ10, β10, of the stimulus:

\rho_{10}=k_\rho\int_{390}^{830}L_{e,\lambda}(\lambda)\,\bar{l}_{10}(\lambda)\,d\lambda \qquad \gamma_{10}=k_\gamma\int_{390}^{830}L_{e,\lambda}(\lambda)\,\bar{m}_{10}(\lambda)\,d\lambda \qquad \beta_{10}=k_\beta\int_{390}^{830}L_{e,\lambda}(\lambda)\,\bar{s}_{10}(\lambda)\,d\lambda

with λ the wavelength (390 nm to 830 nm), L_{e,λ}(λ) the spectral radiance of the stimulus and \bar{l}_{10}(λ), \bar{m}_{10}(λ) and \bar{s}_{10}(λ) the CIE 2006 10° cone fundamentals in terms of energy [18].

The coefficients kρ, kγ and kβ can be used for relative and absolute normalization of ρ10, γ10 and β10. The range of stimuli appearing neutral for dark adapted observers is quite large, from 4000K to 11000K and slightly below the black body locus [22]. The equi-energy stimulus, SE, which lies within this range and below the black body locus, is mostly used for normalization in CAMs [4]. To obtain an absolute photometric anchor for dark adapted self-luminous stimuli, in addition to the relative colorimetric normalization with respect to the equi-energy stimulus, the coefficients kρ, kγ and kβ were chosen such that all three cone excitations for SE are equal to the 10° luminance L10,SE:

\rho_{10,S_E}=\gamma_{10,S_E}=\beta_{10,S_E}=L_{10,S_E}

or

k_\rho\int_{390}^{830}\bar{l}_{10}(\lambda)\,d\lambda = k_\gamma\int_{390}^{830}\bar{m}_{10}(\lambda)\,d\lambda = k_\beta\int_{390}^{830}\bar{s}_{10}(\lambda)\,d\lambda = 683.6\int_{360}^{830}\bar{y}_{10}(\lambda)\,d\lambda

This yields the values k_\rho = 666.7, k_\gamma = 782.3 and k_\beta = 1444.6.

Using these constants and the absolute spectral radiance of the stimulus, the absolute normalized cone excitations can be calculated from Eq. (2). Note that in calculating 683.6\int\bar{y}_{10}(\lambda)\,d\lambda, changing the integration limits from 360–830 nm to 390–830 nm does not change the constants mentioned above.
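For sampled spectral data, the integrals of Eq. (2) reduce to a numerical quadrature. The sketch below assumes the cone fundamentals have been tabulated on the same wavelength grid as the measured radiance; function and variable names are ours:

```python
import numpy as np

# Sketch of Eq. (2) with the normalization constants of Eq. (4):
# absolute normalized cone excitations from a sampled spectral radiance.
# The arrays lbar, mbar, sbar are assumed to hold the tabulated CIE 2006
# 10-degree cone fundamentals on the same wavelength grid as the radiance.
K_RHO, K_GAMMA, K_BETA = 666.7, 782.3, 1444.6  # k_rho, k_gamma, k_beta

def _trapz(y, x):
    """Trapezoidal rule, written out to avoid NumPy version differences."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def cone_excitations(wl_nm, radiance, lbar, mbar, sbar):
    rho10 = K_RHO * _trapz(radiance * lbar, wl_nm)
    gamma10 = K_GAMMA * _trapz(radiance * mbar, wl_nm)
    beta10 = K_BETA * _trapz(radiance * sbar, wl_nm)
    return rho10, gamma10, beta10
```

In practice a finer sampling (e.g. 1 nm or 5 nm steps over 390–830 nm) is used, matching the tabulation of the fundamentals.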

5.2 Compressed cone responses

A non-linear response compression of the cone excitations [1, 23] is thought to be the first processing step in human vision. It compresses the large optical dynamic range into a rather compact range suitable for coding. Often, this compression is implemented using a sigmoidal curve [3, 4, 17, 23, 24]. The intermediate region of this sigmoidal curve (higher than the noise levels and lower than saturation phenomena) can be more or less modelled by a power function. Within the restrictions of the model, i.e. photopic stimuli without glare, the compressed cone responses ρc, γc and βc are therefore calculated from the cone excitations ρ10, γ10 and β10 as follows:

\rho_c=\rho_{10}^{\,c_p} \qquad \gamma_c=\gamma_{10}^{\,c_p} \qquad \beta_c=\beta_{10}^{\,c_p}
The constant cp will be determined by fitting the experimental data (see below).

5.3 Neural signals

The next stage in color vision is believed to be a transformation of the compressed responses (Eq. (5)) into three neural signals: the achromatic signal A, and two color difference signals a and b, respectively related to redness-greenness and yellowness-blueness perception [10, 25]. The achromatic signal is composed of a weighted summation of the three cone responses. The weights were taken in accordance with the estimated numerical distribution of the cones in the retina ρ:γ:β about 40:20:1 [1, 10, 26, 27]:

A = c_A\left(2\rho_c+\gamma_c+\tfrac{1}{20}\beta_c\right)
The color difference signals a and b are taken to be the same as proposed by Hunt [28] and used in other CAM’s [10, 29]:
a = c_a\left(\rho_c-\tfrac{12}{11}\gamma_c+\tfrac{1}{11}\beta_c\right)

b = c_b\left(\rho_c+\gamma_c-2\beta_c\right)
cA, ca and cb are constants which will be determined by fitting the experimental data (see below).

5.4 Hue correlate

It is believed that the ratio of the color difference signals a and b causes a hue sensation in our visual cortex [10, 25]. By taking the inverse tangent of a and b, the hue angle h can be calculated:

h = \frac{180}{\pi}\tan^{-1}\!\left(b/a\right)
To express hue in terms of a quadrature scale H, i.e. in terms of the proportions of the unique hues perceived to be present in the stimulus, the hue angle h is linearly transformed from the 0°–360° range to a 0–400 range:

H = H_i + \frac{100\,(h'-h_i)}{h_{i+1}-h_i}

with h_i the unique hue angles obtained from Hunt [3], H_i the unique hue quadratures, h' = h + 360 if h is less than h_1 and h' = h otherwise, and i chosen such that h_i ≤ h' < h_{i+1} (see Table 2).


Table 2. Overview of the unique hue data used for calculating the hue quadrature H [3].
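The interpolation of Eq. (10) can be written in a few lines. The unique hue angles used below are Hunt's commonly quoted values (as used in CIECAM02) and are an assumption on our part; Table 2 of the paper holds the exact CAM15u entries:

```python
# Hue quadrature interpolation of Eq. (10), without eccentricity factors.
# Unique hue data below (red, yellow, green, blue, red + 360) are assumed
# Hunt/CIECAM02 values, not necessarily the exact Table 2 entries.
H_I = [0.0, 100.0, 200.0, 300.0, 400.0]        # unique hue quadratures
h_i = [20.14, 90.00, 164.25, 237.53, 380.14]   # unique hue angles (deg)

def hue_quadrature(h):
    hp = h + 360.0 if h < h_i[0] else h
    i = next(k for k in range(4) if h_i[k] <= hp < h_i[k + 1])
    return H_I[i] + 100.0 * (hp - h_i[i]) / (h_i[i + 1] - h_i[i])
```

At each unique hue angle the function returns the corresponding quadrature exactly (e.g. pure yellow maps to 100), and intermediate angles are interpolated linearly within their segment.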

5.5 Colorfulness, brightness, saturation and amount of white correlates

The colorfulness, defined as the attribute according to which the perceived color of a stimulus appears to be more or less chromatic [2], can be represented by the strength of the color difference signals a and b [25]:

M = c_M\sqrt{a^2+b^2}
With cM a constant to anchor the colorfulness scale of CAM15u to the one used in CAM97u (see below).

A first estimate of the perceived brightness is given by the achromatic signal A (Eq. (6)) [25]. However, as discussed in [8], brightness perception does not depend on the weighted combination of the cone responses alone, but is also influenced by the strength of the color of the stimulus (cf. the H-K effect):

Q = A + c_{HK1}\,M^{\,c_{HK2}}
With cHK1 and cHK2 constant factors used to modulate the strength of the H-K effect and which will be determined by fitting the experimental data (see below).

Analogous to the CIE definition, saturation can be defined as the colorfulness M relative to the brightness Q [2]:

s = \frac{M}{Q}
The amount of white has, as far as we know, never been used or predicted before. Given its definition during the experiment, the amount of white should correlate well with the saturation s or the colorfulness M:

W = f_W(M \text{ or } s)
The function fW will be determined by comparing the amount of white perception with the CAM15u saturation and colorfulness.

5.6 Determination of the model parameters

In addition to the yet to be defined amount of white function fW, the model as proposed in the previous sections has only a few free parameters: cp, cA, ca, cb, cM, cHK1 and cHK2.

The parameter cp was determined by optimizing the predictive performance of the model’s brightness prediction for the largest available set of achromatic stimuli: 15 achromatic stimuli obtained in a magnitude estimation experiment described in [8]. These 15 stimuli, having a 10° FOV and luminances ranging from 6 to 300 cd/m2, were judged for their brightness by 20 observers. For achromatic stimuli the colorfulness signal is negligible and the brightness correlate is equal to the achromatic signal A (see Eq. (6) and Eq. (12)). By minimizing the mean of the squared residual errors between the observed brightness perception and the prediction of the achromatic signal, the optimal value for the parameter cp was found to be 0.332, which is very close to 1/3. Such a cube root function has often been used to relate physical stimulus quantities to visual sensation: e.g. Leloup [30], Bodmann et al. [31], the CIELAB color space [32] and Schuchard [33]. Instead of this cube root, a logarithmic compression has also been adopted by some authors [34]. However, the predictive performance of the CAM15u achromatic signal with a cube root compression was slightly better than that with a logarithmic compression function (coefficients of determination R2 of 0.99 and 0.94, respectively). Therefore the parameter cp was fixed at 1/3.
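A toy illustration of such a fit, using a simple grid search and synthetic data (the data, the grid and all names are ours, not the paper's): for achromatic stimuli ρ10 = γ10 = β10, so the achromatic signal reduces to a single power law of ρ10.

```python
import numpy as np

# Grid search for the compression exponent c_p: minimize the mean squared
# error between 'observed' brightness and the achromatic signal of
# achromatic stimuli (rho10 = gamma10 = beta10). Synthetic illustration.
def fit_cp(rho10, q_obs, c_A=3.22, grid=None):
    grid = np.linspace(0.1, 0.9, 801) if grid is None else grid
    gain = c_A * (2.0 + 1.0 + 1.0 / 20.0)  # sum of Eq. (6) cone weights
    errs = [np.mean((q_obs - gain * rho10 ** cp) ** 2) for cp in grid]
    return float(grid[int(np.argmin(errs))])
```

The paper's actual optimization was run on the observed brightness data of the 15 achromatic stimuli; this sketch merely shows the shape of the procedure.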

The value of the free parameter cA (Eq. (6)) was set to 3.22 by anchoring the achromatic signal A of this model to the achromatic signal of CAM97u using the same 15 achromatic stimuli [8]. Note that this anchor is limited to the luminance range of these achromatic stimuli, from 6 cd/m2 to 300 cd/m2. In Fig. 3 the achromatic signal of CAM97u for these 15 stimuli is plotted against that of CAM15u (Eq. (6)). The figure and a coefficient of determination R2 of 0.99 indicate a good correlation between the two.

 

Fig. 3 Achromatic signal predicted by CAM97u, ACAM97u, versus the one of CAM15u, ACAM15u, for 15 achromatic stimuli described in [8].


The parameters ca and cb were determined from the experimental hue quadrature data of the test set by minimizing the mean of the squared residual errors between the experimentally observed hue quadrature Havg and the predicted hue quadrature (Eq. (10)): ca = 1 and cb = 0.117. A coefficient of determination R2 of 0.99 and a Spearman correlation r of 1.00 (0.996) between the predicted and the observed hue quadrature indicate that Eq. (10) gives a good prediction of the hue, as illustrated in Fig. 4. In addition, the coefficient of variation between the model’s hue prediction and the average observer data (CV = 5%) is substantially lower than the inter-observer variability (CV = 10%), indicating the model performs adequately. Considering the unique hue angles, hi, as free parameters in the model did not substantially improve the hue quadrature prediction (R2 = 0.99).

 

Fig. 4 Average observed hue Havg with standard error bars versus the hue prediction HCAM15u, for the stimuli of the test set.


Note that Eq. (10) is a simplified transformation compared to the one used in CAM97u, CAMFu and CIECAM02, in that it eliminates the eccentricity factor. The eccentricity factor was introduced to compensate for the differences in strength of perceptual colorization that occur around the hue circle: for example, the perceptual saturation of a yellow stimulus can never be as high as that of a blue stimulus. The eccentricity factor for each unique hue was experimentally obtained in a cone excitation space, resulting in about 0.65 for red, 0.5 for yellow, 1.0 for green and 1.45 for blue [28]. In cube root (compressed) cone response space, these eccentricities take values of about 0.87, 0.79, 1.00 and 1.13, respectively. In addition, CAM97u takes the Bezold-Brücke effect into account in the hue quadrature equation by making the eccentricity factors of yellow and blue dependent on the luminance of the stimulus [3]. The decision to eliminate the eccentricity factor and the correction for the Bezold-Brücke effect was made after examining their effect on the colorfulness and hue predictions of the model, which was found to be negligible.

The parameter cM was set to 135.52 by anchoring the colorfulness factor M of the CAM15u model to the colorfulness scale used in CAM97u. In Fig. 5 the colorfulness of CAM97u for the stimuli of the test set is plotted against that of CAM15u. The figure and a coefficient of determination R2 of 0.92 indicate a good correlation between the two.

Fig. 5 Colorfulness predicted by CAM97u, MCAM97u, versus that predicted by CAM15u, MCAM15u, for the stimuli of the test set.

The parameters cHK1 and cHK2 (Eq. (12)) were determined by minimizing the mean squared residual error between the experimentally observed and the predicted brightness of the test set: cHK1 = 2.559 and cHK2 = 0.561. In Fig. 6 the observed brightness of the stimuli of the test set is plotted against the predicted CAM15u brightness (Eq. (12)). The figure, the coefficient of determination R2 of 0.90 and the Spearman correlation r of 0.95 indicate a very good correlation between the experiments and the model. In addition, the coefficient of variation of the brightness prediction (CV = 9%) is substantially lower than the inter-observer variability (CV = 17%).

Fig. 6 ‘Average observed’ brightness Qavg with standard error bars against the brightness prediction QCAM15u.

A function fW that predicts the amount of white was obtained by comparing the perceived amount of white of the stimuli of the test set with their CAM15u colorfulness and saturation predictions (Fig. 7).

Fig. 7 ‘Average observed’ amount of white Wavg with interquartile range bars against the CAM15u colorfulness prediction M (a) and the CAM15u saturation prediction s (b).

From the figure it is clear that both colorfulness and saturation exhibit a sigmoidal relationship with the observed amount of white (full line), with one horizontal asymptote towards 0% white and another towards 100% white. The large interquartile error bars in the figure reflect the large inter-observer variability of this attribute, as discussed above. The graphs and the Spearman correlation coefficients r between the observed amount of white and the predicted CAM15u colorfulness (r = −0.86) and saturation (r = −0.90) suggest that saturation is the best choice of independent variable to predict the amount of white. By minimizing the mean squared residual error between the experimentally observed amount of white and a sigmoidal function of the saturation, a prediction of the amount of white is obtained:

W = 100 / (1 + 2.29·s^2.68)    (15)
The coefficient of variation of the amount-of-white prediction (CV = 23%) is substantially lower than the inter-observer variability (CV = 30%), indicating that the model performs adequately. This is also visible in Fig. 8, where the observed amount of white is plotted against its prediction. The good agreement is also reflected in a high coefficient of determination (R2 = 0.87) and Spearman correlation coefficient (r = 0.90).
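As an illustration of this fitting step, the coefficients of the sigmoid in Eq. (15) can be recovered from (noise-free) data with a few lines of code. The sketch below is our own: whereas the parameters in the paper were obtained by least squares on the residuals in W directly, here, for compactness, the model is log-linearized first; all variable and function names are ours.

```python
import numpy as np

def fit_white_sigmoid(s, W):
    """Fit W = 100 / (1 + c1 * s**c2).
    Linearize: log(100/W - 1) = log(c1) + c2*log(s),
    then solve by ordinary least squares (np.polyfit)."""
    y = np.log(100.0 / np.asarray(W, float) - 1.0)
    x = np.log(np.asarray(s, float))
    c2, log_c1 = np.polyfit(x, y, 1)  # slope, intercept
    return np.exp(log_c1), c2

# Noise-free data generated with the paper's coefficients (2.29, 2.68)
s = np.linspace(0.05, 1.5, 50)
W = 100.0 / (1.0 + 2.29 * s**2.68)
c1, c2 = fit_white_sigmoid(s, W)  # recovers ~2.29 and ~2.68
```

On real observer data the log-linearized fit and the direct fit in W generally give slightly different coefficients, since they weight the residuals differently.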

Fig. 8 ‘Average observed’ amount of white Wavg with interquartile range bars against the predicted amount of white W (Eq. (15)).

6. Validation

The performance of the CAM15u model has been verified using the validation set described above and compared to that of three other CAMs for unrelated stimuli: CAM97u [3], CAMFu [6] and CAM97um [8]. Model performance was assessed by calculating the coefficient of determination R2, Spearman correlation coefficient r and coefficient of variation CV between the mean observer data and the model predictions. The model performance indicators for brightness, hue and amount of white are given in Table 3. Note that the amount of white could only be calculated for the CAM15u model.

Table 3. Model performance assessed by the coefficient of determination R2, Spearman correlation coefficient r and coefficient of variation CV between the mean observed magnitudes of the perceptual attributes obtained in the validation experiments and those predicted by the models.
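For reference, the three performance indicators can be computed as sketched below. This is our own code, under the usual definitions: Pearson correlation squared for R2, rank correlation for r, and root-mean-square error relative to the mean for CV (the definition commonly used in this literature, e.g. [15]); the rank computation assumes no tied values.

```python
import numpy as np

def performance_indicators(observed, predicted):
    """R2, Spearman r and CV (%) between observed and predicted data."""
    o = np.asarray(observed, float)
    p = np.asarray(predicted, float)
    R2 = np.corrcoef(o, p)[0, 1] ** 2           # coefficient of determination
    rank = lambda v: np.argsort(np.argsort(v))  # ranks (assumes no ties)
    r = np.corrcoef(rank(o), rank(p))[0, 1]     # Spearman rank correlation
    CV = 100.0 * np.sqrt(np.mean((o - p) ** 2)) / np.mean(o)
    return R2, r, CV
```

A perfect model gives R2 = 1, r = 1 and CV = 0; note that a systematic offset between prediction and observation leaves R2 and r unchanged but inflates CV, which is why all three indicators are reported together.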

For brightness, it is clear from the results in Table 3 that CAM15u performs best and explains 87% of the variance observed in the visual data. The next best model is the modified CAM97um model, which is almost identical to the original CAM97u except that the prediction of brightness has been modified. Surprisingly, the coefficient of determination of the much simpler and more direct CAM15u model is still 7% higher. The original CAM97u model and CAMFu perform rather poorly: both have low Spearman correlation coefficients and account for only 36% and 22% of the observed variance, respectively. The relative model performances as indicated by the coefficient of determination and the Spearman correlation are confirmed by both the CV values and the graphs in Fig. 9, where the perceived brightness has been plotted as a function of the model prediction.

Fig. 9 ‘Average observed’ brightness Qavg with standard error bars against the brightness predictions of CAM15u (a), CAM97u (b), CAM97um (c) and CAMFu (d) for the unrelated stimuli of the validation set.

For the hue quadrature, all models perform very similarly (Table 3). All have very high coefficients of determination and Spearman correlation coefficients, and the CV values are lower than the inter-observer variability (11%). The good hue quadrature prediction of all models can also be observed in Fig. 10. Note that the hue prediction of CAM97um is identical to that of CAM97u.

Fig. 10 ‘Average observed’ hue quadrature Havg with standard error bars against the hue predictions of CAM15u (a), CAM97u and CAM97um (b) and CAMFu (c) for the unrelated stimuli of the validation set.

Finally, the amount-of-white observer data of the validation set are found to be predicted fairly well by the CAM15u model. Although the Spearman correlation was not as high as for the brightness and hue predictions, the model still accounted for 76% of the variance in the visual data. In addition, the model prediction CV value (32%) was smaller than the inter-observer CV value (36%). The latter was substantially higher than those for the other attributes, indicating considerable inter-observer disagreement. This can also be observed from the rather large interquartile error bars in Fig. 11, where the perception of this attribute is plotted against its prediction.

Fig. 11 ‘Average observed’ amount of white Wavg with interquartile range bars versus the amount of white prediction of CAM15u for the unrelated stimuli of the validation set.

The performance of CAM15u has also been validated by comparing its predictions with the average observer ratings obtained in previous visual experiments on the color appearance of unrelated self-luminous stimuli [7, 8]. Similar values for the coefficient of determination R2, Spearman correlation coefficient r and coefficient of variation CV were found.

7. Conclusions

The brightness, hue and “amount of white” perception of a set of unrelated self-luminous stimuli was investigated in a magnitude estimation experiment with twenty observers. The amount of white is a new attribute that roughly corresponds to a layperson’s conception of attributes such as colorfulness, chroma or saturation. It was introduced following a pilot study which showed that laypersons often have difficulty understanding, and hence judging, the colorfulness of a stimulus in an experiment. A non-forced hue evaluation method revealed that the hue perception of a substantial fraction of the observers (20%) cannot be mapped onto a hue quadrature scale, which is commonly believed to be representative of typical hue perception.

Based on the obtained visual data, a new color appearance model for unrelated self-luminous stimuli, CAM15u, was developed. The main features of the model are the use of the CIE 2006 cone fundamentals, the inclusion of an absolute brightness scale and a simplified calculation procedure compared to existing models. Using the absolute spectral radiance of the stimulus as input, the model predicts the brightness, hue, colorfulness, saturation and the amount of white. The CAM15u model is restricted to unrelated stimuli having a field of view of 10° and providing a photopic viewing condition while avoiding glare.

An additional magnitude estimation experiment was carried out to validate the CAM15u model and to compare its predictive performance with that of other CAMs for unrelated colors: CAM97u, CAM97um and CAMFu. It was found that, despite its simplicity, CAM15u performs at least as well as, and in most cases better than, the existing CAMs.

Future plans are to extend the model’s luminance range, as well as to incorporate a self-luminous background and the effect of stimulus size.

Appendix A: Steps in using CAM15u

Input: Radiance Le,λ(λ) of the unrelated self-luminous stimulus

Step 1: Calculate the normalized ρ10, γ10 and β10 cone excitations directly

ρ10 = 666.7 ∫_{390}^{830} Le,λ(λ)·l̄10(λ) dλ
γ10 = 782.3 ∫_{390}^{830} Le,λ(λ)·m̄10(λ) dλ
β10 = 1444.6 ∫_{390}^{830} Le,λ(λ)·s̄10(λ) dλ
with l̄10(λ), m̄10(λ) and s̄10(λ) the CIE 2006 10° cone fundamentals in terms of energy with 1 nm spacing, available on the website http://www.cvrl.ac.uk. When the radiance is not available, the absolute 10° tristimulus values X10, Y10, Z10 of the stimulus can be used as input. Step 1 is then replaced by a conversion of these tristimulus values into an approximation of the normalized cone excitations ρ10, γ10, β10 (see Appendix C).
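As an illustration, Step 1 amounts to a straightforward numerical integration. The sketch below is our own code; it assumes all spectra are sampled on a common, uniform 1 nm wavelength grid and that the energy-based CIE 2006 10° cone fundamentals have already been loaded (e.g. from the cvrl.org tables, which are not bundled here) into the arrays lbar, mbar and sbar.

```python
import numpy as np

def cone_excitations(wl, Le, lbar, mbar, sbar):
    """Step 1: normalized cone excitations rho10, gamma10, beta10.
    wl:   wavelengths in nm on a uniform grid covering 390-830 nm
    Le:   spectral radiance Le,lambda at those wavelengths
    lbar, mbar, sbar: CIE 2006 10-deg cone fundamentals (energy-based),
                      sampled on the same grid."""
    dwl = wl[1] - wl[0]  # grid spacing (1 nm in the model definition)
    rho10   = 666.7  * np.sum(Le * lbar) * dwl
    gamma10 = 782.3  * np.sum(Le * mbar) * dwl
    beta10  = 1444.6 * np.sum(Le * sbar) * dwl
    return rho10, gamma10, beta10
```

As a sanity check, with a flat spectrum and (hypothetical) unit-valued fundamentals each integral reduces to the wavelength span times the corresponding normalization constant.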

Step 2: Calculate the compressed cone responses by taking the cube root of the cone excitations

ρc = ρ10^(1/3),  γc = γ10^(1/3),  βc = β10^(1/3)
Step 3: Calculate the achromatic signal and the color difference signals
A = 3.22·(2ρc + γc + βc/20)
a = ρc − (12/11)·γc + βc/11
b = 0.117·(ρc + γc − 2βc)
Step 4: Calculate the hue angle and hue quadrature
h = (180/π)·tan⁻¹(b/a)    (taken in the correct quadrant, so that 0° ≤ h < 360°)
H = Hi + 100·(h′ − hi)/(hi+1 − hi)
with h′ = h + 360° if h < h1 and h′ = h otherwise, and with i chosen such that hi ≤ h′ < hi+1. The unique hue angles hi and hue quadrature values Hi are listed in Table C1:

Table C1. Overview of the unique hue data used for calculating the hue quadrature H.
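The interpolation of Step 4 can be sketched as follows (our own code). The unique hue data hi and Hi must be taken from Table C1, whose values are not reproduced in this extraction; in the usage example below we substitute the CIECAM02 unique hue angles (20.14°, 90.00°, 164.25°, 237.53°, with H = 0, 100, 200, 300 and the first hue repeated at 380.14°/400) purely as placeholder values. These placeholders happen to reproduce the worked example of Appendix B (h = 204.57 maps to H = 255.02).

```python
import math

def hue_quadrature(a, b, h_i, H_i):
    """Step 4: hue angle h (degrees, 0-360) and hue quadrature H.
    h_i, H_i: unique-hue angles and quadrature values (Table C1),
    with the first hue repeated at +360 degrees as the last entry."""
    h = math.degrees(math.atan2(b, a)) % 360.0
    hp = h + 360.0 if h < h_i[0] else h
    # choose i such that h_i[i] <= hp < h_i[i+1]
    i = max(k for k in range(len(h_i) - 1) if h_i[k] <= hp)
    H = H_i[i] + 100.0 * (hp - h_i[i]) / (h_i[i + 1] - h_i[i])
    return h, H

# Placeholder unique hue data (CIECAM02 values, assumed here)
h_i = [20.14, 90.00, 164.25, 237.53, 380.14]
H_i = [0.0, 100.0, 200.0, 300.0, 400.0]
r, ang = 0.12, math.radians(204.57)  # hue angle of the worked example
h, H = hue_quadrature(r * math.cos(ang), r * math.sin(ang), h_i, H_i)
# h ~ 204.57, H ~ 255.02, matching Appendix B
```

Using atan2 rather than tan⁻¹(b/a) handles the quadrant adjustment automatically.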

Step 5: Calculate the colorfulness, brightness and saturation

M = 135.52·√(a² + b²)
Q = A + 2.559·M^0.561
s = M/Q
Step 6: Calculate the amount of white

W = 100 / (1 + 2.29·s^2.68)
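Steps 2 to 6 condense to a few lines of code. The sketch below is our own and is checked against the worked example of Appendix B; the hue quadrature H is omitted because it requires the unique hue data of Table C1.

```python
import math

def cam15u_correlates(rho10, gamma10, beta10):
    """Steps 2-6 of CAM15u: appearance correlates from the
    normalized cone excitations (hue quadrature H omitted)."""
    # Step 2: compressed cone responses (cube root)
    rc, gc, bc = rho10 ** (1/3), gamma10 ** (1/3), beta10 ** (1/3)
    # Step 3: achromatic signal and color difference signals
    A = 3.22 * (2 * rc + gc + bc / 20)
    a = rc - (12 / 11) * gc + bc / 11
    b = 0.117 * (rc + gc - 2 * bc)
    # Step 4: hue angle in degrees, 0-360
    h = math.degrees(math.atan2(b, a)) % 360.0
    # Step 5: colorfulness, brightness and saturation
    M = 135.52 * math.hypot(a, b)
    Q = A + 2.559 * M ** 0.561
    s = M / Q
    # Step 6: amount of white
    W = 100.0 / (1.0 + 2.29 * s ** 2.68)
    return {"A": A, "h": h, "M": M, "Q": Q, "s": s, "W": W}

# Worked example of Appendix B (30 cd/m2 sample)
out = cam15u_correlates(29.36, 33.07, 38.06)
# A ~ 30.75, h ~ 204.6, M ~ 16.5, Q ~ 43.1, s ~ 0.38, W ~ 85.1
```

The small residual differences with respect to the printed values of Appendix B arise because the cone excitations quoted there are themselves rounded.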

Appendix B: Worked example

The CAM15u model gives the following results for a 30 cd/m2 sample with a spectral radiance given in Fig. 12: ρ10 = 29.36, γ10 = 33.07, β10 = 38.06, ρc = 3.09, γc = 3.21, βc = 3.36, A = 30.75, a = −0.11, b = −0.05, h = 204.57, H = 255.02, M = 16.49, Q = 43.07, s = 0.38, W = 85.13.

Fig. 12 Spectral radiance of the sample used in the worked example.

Appendix C: Conversion from tristimulus values into cone excitations

When the spectral radiance of the stimulus is not available but the absolute tristimulus values X10, Y10, Z10 are, the normalized cone excitations can be approximated as:

⎡ρ10⎤   ⎡ 0.211831  0.815789  −0.042472⎤ ⎡X10⎤
⎢γ10⎥ = ⎢−0.492493  1.378921   0.098745⎥ ⎢Y10⎥    (C1)
⎣β10⎦   ⎣ 0.000000  0.000000   0.985188⎦ ⎣Z10⎦
For the worked example, mean differences of 0.0003% and 3.05% were found between the attributes computed directly from the spectral radiance and those obtained using Eq. (C1), with the tristimulus values calculated from the new CIE 2006 XYZ and the CIE 1964 XYZ color matching functions, respectively.

Acknowledgments

The authors would like to thank the Research Council of the KU Leuven for supporting this research project (OT/13/069). Author K.S. would also like to thank the Research Foundation Flanders for the support through a postdoctoral fellowship (12B4913N).

References and links

1. M. D. Fairchild, Color Appearance Models, 3rd ed. (Wiley-IS&T, 2013).

2. CIE, “International Lighting Vocabulary,” (CIE Central Bureau, 2011).

3. R. W. G. Hunt, Measuring Colour, 3rd ed. (Fountain Press, 1998), pp. 239–246.

4. R. W. G. Hunt, “Revised colour-appearance model for related and unrelated colours,” Color Res. Appl. 16(3), 146–165 (1991).

5. M. R. Luo and C. Li, “CIECAM02 and Its Recent Developments,” in Advanced Color Image Processing and Analysis, C. Fernandez-Maloigne, ed. (Springer, 2013).

6. C. Fu, C. Li, G. Cui, M. R. Luo, R. W. G. Hunt, and M. R. Pointer, “An investigation of colour appearance for unrelated colours under photopic and mesopic vision,” Color Res. Appl. 37(4), 238–254 (2012).

7. M. Withouck, K. A. G. Smet, W. R. Ryckaert, M. R. Pointer, G. Deconinck, J. Koenderink, and P. Hanselaer, “Brightness perception of unrelated self-luminous colors,” J. Opt. Soc. Am. A 30(6), 1248–1255 (2013).

8. M. Withouck, K. A. G. Smet, W. R. Ryckaert, G. Deconinck, and P. Hanselaer, “Predicting the brightness of unrelated self-luminous stimuli,” Opt. Express 22(13), 16298–16309 (2014).

9. CIE, “Supplementary System of Photometry,” (CIE Central Bureau, 2011).

10. R. W. G. Hunt and M. R. Pointer, Measuring Colour, 4th ed., Wiley-IS&T Series in Imaging Science and Technology (John Wiley & Sons Ltd, 2011).

11. M. R. Luo, A. A. Clarke, P. A. Rhodes, A. Schappo, S. A. R. Scrivener, and C. J. Tait, “Quantifying colour appearance. Part I. LUTCHI colour appearance data,” Color Res. Appl. 16(3), 166–180 (1991).

12. ASTM International, “Standard Test Method for Unipolar Magnitude Estimation of Sensory Attributes,” (2012).

13. D. Farnsworth, The Farnsworth-Munsell 100-Hue Test for the Examination of Colour Discrimination (Munsell Color Co., 1957).

14. S. A. Fotios and C. Cheal, “A comparison of simultaneous and sequential brightness judgements,” Lighting Res. Tech. 42(2), 183–197 (2010).

15. P. A. García, R. Huertas, M. Melgosa, and G. Cui, “Measurement of the relationship between perceived and computed color differences,” J. Opt. Soc. Am. A 24(7), 1823–1829 (2007).

16. B. Koo and Y. Kwak, “Color appearance and color connotation models for unrelated colors,” Color Res. Appl. 40(1), 40–49 (2015).

17. CIE, “A colour appearance model for colour management systems: CIECAM02,” (CIE Central Bureau, 2004).

18. CIE, “Fundamental chromaticity diagram with physiological axes – part 1,” (CIE, 2006).

19. A. Stockman and L. T. Sharpe, “The spectral sensitivities of the middle- and long-wavelength-sensitive cones derived from measurements in observers of known genotype,” Vision Res. 40(13), 1711–1737 (2000).

20. A. Stockman, L. T. Sharpe, and C. Fach, “The spectral sensitivity of the human short-wavelength sensitive cones derived from thresholds and color matches,” Vision Res. 39(17), 2901–2927 (1999).

21. W. S. Stiles and J. M. Burch, “N.P.L. Colour-matching Investigation: Final Report (1958),” Opt. Acta 6(1), 1–26 (1959).

22. K. A. G. Smet, G. Deconinck, and P. Hanselaer, “Chromaticity of unique white in object mode,” Opt. Express 22(21), 25830–25841 (2014).

23. T. Kunkel and E. Reinhard, “A neurophysiology-inspired steady-state color appearance model,” J. Opt. Soc. Am. A 26(4), 776–782 (2009).

24. J. M. Valeton and D. van Norren, “Light adaptation of primate cones: An analysis based on extracellular data,” Vision Res. 23(12), 1539–1547 (1983).

25. M. H. Kim, T. Weyrich, and J. Kautz, “Modeling human color perception under extended luminance levels,” in International Conference on Computer Graphics and Interactive Techniques: SIGGRAPH ’09 (ACM, 2009).

26. J. J. Vos and P. L. Walraven, “On the derivation of the foveal receptor primaries,” Vision Res. 11(8), 799–818 (1971).

27. J. Carroll, J. Neitz, and M. Neitz, “Estimates of L:M cone ratio from ERG flicker photometry and genetics,” J. Vis. 2(8), 531–542 (2002).

28. R. W. G. Hunt, “A model of colour vision for predicting colour appearance,” Color Res. Appl. 7(2), 95–112 (1982).

29. N. Moroney, M. D. Fairchild, R. W. G. Hunt, C. Li, M. R. Luo, and T. Newman, “The CIECAM02 Color Appearance Model,” in 10th Color Imaging Conference (IS&T/SID, 2002).

30. F. B. Leloup, M. R. Pointer, P. Dutré, and P. Hanselaer, “Luminance-based specular gloss characterization,” J. Opt. Soc. Am. A 28(6), 1322–1330 (2011).

31. G. Wyszecki and W. S. Stiles, Color Science, 2nd ed. (John Wiley & Sons Inc, 1982).

32. CIE, “Colorimetry,” (CIE Central Bureau, 2004).

33. R. A. Schuchard, “Review of colorimetric methods for developing and evaluating uniform CRT display scales,” Opt. Eng. 29(4), 378–384 (1990).

34. M. H. Brill and R. C. Carter, “Does lightness obey a log or a power law? Or is that the right question?” Color Res. Appl. 39(1), 99–101 (2014).

[Crossref]

Opt. Express (2)

Vision Res. (4)

A. Stockman and L. T. Sharpe, “The spectral sensitivities of the middle- and long-wavelength-sensitive cones derived from measurements in observers of known genotype,” Vision Res. 40(13), 1711–1737 (2000).
[Crossref] [PubMed]

A. Stockman, L. T. Sharpe, and C. Fach, “The spectral sensitivity of the human short-wavelength sensitive cones derived from thresholds and color matches,” Vision Res. 39(17), 2901–2927 (1999).
[Crossref] [PubMed]

J. M. Valeton and D. van Norren, “Light adaptation of primate cones: An analysis based on extracellular data,” Vision Res. 23(12), 1539–1547 (1983).
[Crossref] [PubMed]

J. J. Vos and P. L. Walraven, “On the derivation of the foveal receptor primaries,” Vision Res. 11(8), 799–818 (1971).
[Crossref] [PubMed]

Other (14)

N. Moroney, M. D. Fairchild, R. W. G. Hunt, C. Li, M. R. Luo, and T. Newman, “The CIECAM02 Color Appearance Model,” in 10th Color Imaging Conference, IS&T and SID, (Scottsdale, Arizona, 2002).

M. H. Kim, T. Weyrich, and J. Kautz, “Modeling human color perception under extended luminance levels,” in International Conference on Computer Graphics and Interactive Techniques: SIGGRAPH '09 (ACM New York, NY, USA, 2009)
[Crossref]

G. Wyszecki and W. S. Stiles, Color Science, 2nd ed. (John Wiley & Sons Inc, 1982).

CIE, “Colorimetry,” (CIE Central Bureau, 2004).

CIE, “A colour appearance model for colour management systems: CIECAM02,” (CIE Central Bureau, 2004).

CIE, “Fundamental chromaticity diagram with physiological axes - part 1,” (CIE, 2006).

ASTM International, “Standard Test Method for Unipolar Magnitude Estimation of Sensory Attributes,” (2012).

D. Farnsworth, The Farnsworth-Munsell 100-hue test for examination of colour discrimination, Munsell Color Co., Baltimore, 1957.

CIE, “Supplementary System of Photometry,” (CIE Central Bureau, 2011).

R. W. G. Hunt and M. R. Pointer, Measuring colour, 4th ed., Wiley-IS&T Series in Imaging Science and Technology (John Wiley & Sons Ltd, 2011).

M. R. Luo and C. Li, “CIECAM02 and Its Recent Developments,” in Advanced Color Image Processing and Analysis, F.-M. C., ed. (Springer, 2013).

M. D. Fairchild, Color Appearance Models, 3rd ed. (Wiley-IS&T, 2013).

CIE, “International Lighting Vocabulary,” (CIE Central Bureau, 2011).

R. W. G. Hunt, Measuring colour, 3rd ed. (Fountain Press, 1998), pp. 239–246.



Figures (12)

Fig. 1. (a) Experimental setup. (b) Example of a stimulus under dark viewing conditions [8].
Fig. 2. CIE 1976 u′10, v′10 chromaticity coordinates of the 105 test stimuli (a) and the 52 validation stimuli (b).
Fig. 3. Achromatic signal predicted by CAM97u, A_CAM97u, versus that of CAM15u, A_CAM15u, for the 15 achromatic stimuli described in [8].
Fig. 4. Average observed hue H_avg with standard error bars versus the hue prediction H_CAM15u for the stimuli of the test set.
Fig. 5. Colorfulness predicted by CAM97u, M_CAM97u, versus that of CAM15u, M_CAM15u, for the stimuli of the test set.
Fig. 6. ‘Average observed’ brightness Q_avg with standard error bars against the brightness prediction Q_CAM15u.
Fig. 7. ‘Average observed’ amount of white W_avg with interquartile range bars against the CAM15u colorfulness prediction M (a) and the CAM15u saturation prediction s (b).
Fig. 8. ‘Average observed’ amount of white W_avg with interquartile range bars against the predicted amount of white W (Eq. (15)).
Fig. 9. ‘Average observed’ brightness Q_avg with standard error bars against the brightness predictions of CAM15u (a), CAM97u (b), CAM97um (c) and CAMFu (d) for the unrelated stimuli of the validation set.
Fig. 10. ‘Average observed’ hue quadrature H_avg with standard error bars against the hue predictions of CAM15u (a), CAM97u and CAM97um (b) and CAMFu (c) for the unrelated stimuli of the validation set.
Fig. 11. ‘Average observed’ amount of white W_avg with interquartile range bars versus the amount of white prediction of CAM15u for the unrelated stimuli of the validation set.
Fig. 12. Spectral radiance of the sample used in the worked example.

Tables (4)

Table 1. Evaluation of inter- and intra-observer agreement for the test and validation sets in terms of the coefficient of variation CV (%).
Table 2. Overview of the unique hue data used for calculating the hue quadrature H [3].
Table 3. Model performance assessed by the coefficient of determination R², the Spearman correlation coefficient r and the coefficient of variation CV between the mean observed magnitudes of the perceptual attributes obtained in the validation experiment and those predicted by the models.
Table C1. Overview of the unique hue data used for calculating the hue quadrature H.

Equations (27)


$$CV = 100\sqrt{\frac{1}{n}\sum_{i=1}^{n}\frac{(A_i - f B_i)^2}{\bar{A}^2}} \quad\text{with}\quad f = \frac{\sum_{i=1}^{n} A_i B_i}{\sum_{i=1}^{n} B_i^2} \tag{1}$$
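As a minimal sketch, the CV formula above transcribes directly into Python; the function name `coefficient_of_variation` is ours, with `A` the observed and `B` the predicted values.

```python
import math

def coefficient_of_variation(A, B):
    """CV (%) between observed values A and predicted values B,
    using the least-squares scaling factor f = sum(A*B) / sum(B^2)."""
    n = len(A)
    f = sum(a * b for a, b in zip(A, B)) / sum(b * b for b in B)
    mean_A = sum(A) / n
    # root-mean-square residual of A against the rescaled predictions f*B,
    # expressed as a percentage of the mean observed value
    return 100 * math.sqrt(sum((a - f * b) ** 2 for a, b in zip(A, B)) / n) / mean_A
```

Because the factor f absorbs any overall scale difference, perfectly proportional data (B = c·A) gives CV = 0; only deviations from proportionality contribute.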
$$\rho_{10} = k_\rho \int_{390}^{830} L_{e,\lambda}(\lambda)\,\bar{l}_{10}(\lambda)\,\mathrm{d}\lambda \qquad \gamma_{10} = k_\gamma \int_{390}^{830} L_{e,\lambda}(\lambda)\,\bar{m}_{10}(\lambda)\,\mathrm{d}\lambda \qquad \beta_{10} = k_\beta \int_{390}^{830} L_{e,\lambda}(\lambda)\,\bar{s}_{10}(\lambda)\,\mathrm{d}\lambda \tag{2}$$

$$\rho_{10,EE} = \gamma_{10,EE} = \beta_{10,EE} = L_{10,EE} \tag{3}$$

$$k_\rho \int_{390}^{830} \bar{l}_{10}(\lambda)\,\mathrm{d}\lambda = k_\gamma \int_{390}^{830} \bar{m}_{10}(\lambda)\,\mathrm{d}\lambda = k_\beta \int_{390}^{830} \bar{s}_{10}(\lambda)\,\mathrm{d}\lambda = 683.6 \int_{360}^{830} \bar{y}_{10}(\lambda)\,\mathrm{d}\lambda \tag{4}$$

$$\rho_c = \rho_{10}^{\,c_p} \qquad \gamma_c = \gamma_{10}^{\,c_p} \qquad \beta_c = \beta_{10}^{\,c_p} \tag{5}$$

$$A = c_A\left(2\rho_c + \gamma_c + \tfrac{1}{20}\beta_c\right) \tag{6}$$

$$a = c_a\left(\rho_c - \tfrac{12}{11}\gamma_c + \tfrac{1}{11}\beta_c\right) \tag{7}$$

$$b = c_b\left(\rho_c + \gamma_c - 2\beta_c\right) \tag{8}$$

$$h = \frac{180}{\pi}\tan^{-1}(b/a) \tag{9}$$

$$H = H_i + 100\,\frac{h' - h_i}{h_{i+1} - h_i} \tag{10}$$

$$M = c_M\sqrt{a^2 + b^2} \tag{11}$$

$$Q = A + c_{HK1}\,M^{\,c_{HK2}} \tag{12}$$

$$s = \frac{M}{Q} \tag{13}$$

$$W = f_W(M \text{ or } s) \tag{14}$$

$$W = \frac{100}{1 + 2.29\,s^{2.68}} \tag{15}$$

$$\rho_{10} = 666.7 \int_{390}^{830} L_{e,\lambda}(\lambda)\,\bar{l}_{10}(\lambda)\,\mathrm{d}\lambda \qquad \gamma_{10} = 782.3 \int_{390}^{830} L_{e,\lambda}(\lambda)\,\bar{m}_{10}(\lambda)\,\mathrm{d}\lambda \qquad \beta_{10} = 1444.6 \int_{390}^{830} L_{e,\lambda}(\lambda)\,\bar{s}_{10}(\lambda)\,\mathrm{d}\lambda \tag{16}$$

$$\rho_c = \rho_{10}^{1/3} \qquad \gamma_c = \gamma_{10}^{1/3} \qquad \beta_c = \beta_{10}^{1/3} \tag{17}$$

$$A = 3.22\left(2\rho_c + \gamma_c + \tfrac{1}{20}\beta_c\right) \tag{18}$$

$$a = \rho_c - \tfrac{12}{11}\gamma_c + \tfrac{1}{11}\beta_c \tag{19}$$

$$b = 0.117\left(\rho_c + \gamma_c - 2\beta_c\right) \tag{20}$$

$$h = \frac{180}{\pi}\tan^{-1}(b/a) \tag{21}$$

$$H = H_i + 100\,\frac{h' - h_i}{h_{i+1} - h_i} \tag{22}$$

$$M = 135.52\sqrt{a^2 + b^2} \tag{23}$$

$$Q = A + 2.559\,M^{0.561} \tag{24}$$

$$s = \frac{M}{Q} \tag{25}$$

$$W = \frac{100}{1 + 2.29\,s^{2.68}} \tag{26}$$

$$\begin{bmatrix}\rho_{10}\\ \gamma_{10}\\ \beta_{10}\end{bmatrix} = \begin{bmatrix} 0.211831 & 0.815789 & -0.042472\\ -0.492493 & 1.378921 & 0.098745\\ 0 & 0 & 0.985188 \end{bmatrix} \begin{bmatrix} X_{10}\\ Y_{10}\\ Z_{10}\end{bmatrix} \tag{27}$$
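The worked-example equations (Eqs. (16)–(26), starting from the tristimulus transform of Eq. (27)) can be sketched in Python as follows. This is our own illustrative transcription, not code from the paper: the function name `cam15u_from_xyz` is hypothetical, and the hue quadrature H is omitted because it needs the unique-hue data of Table C1, which is not reproduced here.

```python
import math

# XYZ10 -> (rho10, gamma10, beta10) transform of Eq. (27)
M_XYZ_TO_LMS = [
    [0.211831, 0.815789, -0.042472],
    [-0.492493, 1.378921, 0.098745],
    [0.0, 0.0, 0.985188],
]

def cam15u_from_xyz(X10, Y10, Z10):
    """Brightness Q, colorfulness M, saturation s and amount of white W
    from 10-degree tristimulus values (hue quadrature H omitted)."""
    rho10, gamma10, beta10 = (
        row[0] * X10 + row[1] * Y10 + row[2] * Z10 for row in M_XYZ_TO_LMS
    )
    # cube-root cone compression, Eq. (17)
    rho_c, gamma_c, beta_c = (v ** (1 / 3) for v in (rho10, gamma10, beta10))
    # achromatic and opponent signals, Eqs. (18)-(20)
    A = 3.22 * (2 * rho_c + gamma_c + beta_c / 20)
    a = rho_c - (12 / 11) * gamma_c + beta_c / 11
    b = 0.117 * (rho_c + gamma_c - 2 * beta_c)
    h = math.degrees(math.atan2(b, a)) % 360   # hue angle, Eq. (21)
    M = 135.52 * math.hypot(a, b)              # colorfulness, Eq. (23)
    Q = A + 2.559 * M ** 0.561                 # brightness, Eq. (24)
    s = M / Q                                  # saturation, Eq. (25)
    W = 100 / (1 + 2.29 * s ** 2.68)           # amount of white, Eq. (26)
    return {"A": A, "h": h, "M": M, "Q": Q, "s": s, "W": W}
```

Note that `atan2` is used instead of the bare arctangent of Eq. (21) so that the hue angle lands in the correct quadrant. As a sanity check, for an equal-energy-like input (X10 = Y10 = Z10) the three cone signals are nearly equal, so the opponent signals a and b almost vanish: M is close to 0 and the amount of white W approaches 100, as expected for an achromatic stimulus.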
