Abstract

In this paper we present a parametric model for automatic color naming in which each color category is modeled as a fuzzy set with a parametric membership function. The parameters of the functions are estimated in a fitting process using data derived from psychophysical experiments. The name assignments obtained by the model agree with previous psychophysical experiments, so the high-level color-naming information it provides can be useful for different computer vision applications, where the use of a parametric model introduces interesting advantages in terms of implementation costs, data representation, model analysis, and model updating.

© 2008 Optical Society of America

1. INTRODUCTION

Color is a very important visual cue in human perception. Among the various visual tasks performed by humans that involve color, color naming is one of the most common. However, the perceptual mechanisms that rule this process are still not completely known [1]. Color naming has been studied from very different points of view. The anthropological study of Berlin and Kay [2] was a starting point that stimulated much research on the topic in the subsequent decades. They studied color naming in different languages and stated the existence of universal color categories. They also defined the set of 11 basic categories present in the most evolved languages: white, black, red, green, yellow, blue, brown, purple, pink, orange, and gray. Since then, several studies have confirmed and extended their results [3, 4, 5, 6].

In computer vision, color has been numerically represented in different color spaces from which, unfortunately, information about how humans name colors cannot easily be derived. Hence a computational model of color naming would be very useful for several tasks such as segmentation, retrieval, tracking, or human–machine interaction. Although some models based on a pure tessellation of a color space have been proposed [7, 8, 9], the most accepted framework has been to consider color naming as a fuzzy process; that is, any color stimulus has a membership value between 0 and 1 to each color category. Kay and McDaniel [10] were the first to propose a theoretical fuzzy model for color naming. Later, some approaches from the computer vision field adopted this point of view. Lammens [11] developed a fuzzy computational model where the membership to the color categories was modeled by a variant of the Gaussian function that was fitted to Berlin and Kay’s data. In recent years, more complex and complete models have been proposed. Mojsilovic [12] defined a perceptual metric derived from color-naming experiments and proposed a model that also takes into account other perceptual issues such as color constancy and spatial averaging. Seaborn et al. [13] have developed a fuzzy model based on the application of the fuzzy k-means algorithm to the data obtained from the psychophysical experiments of Sturges and Whitfield [14]. More recently, van den Broek et al. [15] have proposed a categorization method based on psychophysical data and the Euclidean distance. Apart from Lammens’ model, the rest are nonparametric models.

In this paper we present a fuzzy color-naming model based on the use of parametric membership functions whose advantages are discussed later in Section 2. The main goal of this model is to provide high-level color descriptors containing color-naming information useful for several computer vision applications [16, 17, 18, 19].

The paper is organized as follows. In Section 2, we explain the fuzzy framework and present our parametric approach. Next, in Section 3, we detail the process followed to estimate the parameters of the model. Section 4 is devoted to discussing the results obtained and, finally, in Section 5, we present the conclusions of this work.

2. PARAMETRIC MODEL

The essential contribution of this paper is to take a further step toward building computational engines that automate the color categorization task. As in previous works, such as those of Mojsilovic [12] and Seaborn et al. [13], we present the color-naming task as a decision problem formulated in the frame of fuzzy-set theory [20]. Whereas the first of these works uses a nearest-neighbor classifier, the second uses a fuzzy k-means algorithm. The essential difference of our proposal lies in the definition of a parametric model; that is, we propose a set of tunable parameters that analytically define the shape of the fuzzy sets representing each color category. Parametric models have been previously used to model color information [21], and the suitability of such an approach can be summed up in the following points:

Inclusion of prior knowledge. Prior knowledge about the structure of the data allows us to choose the best model in each case. However, this could turn into a disadvantage if an inappropriate function is selected for the model.

Compact categories. Each category is completely defined by a few parameters, and training data do not need to be stored after an initial fitting process. This implies lower memory usage and less computation time when the model is applied.

Meaningful parameters. Each parameter has a meaning in terms of the characterization of the data, which allows us to modify and improve the model by just adjusting the parameters.

Easy analysis. As a consequence of the previous point, the model can be analyzed and compared by studying the values of its parameters.

Considering the perceptual spaces derived from previous works and from psychophysical data, we have fitted color membership using a triple-sigmoid function [see Eq. (11)] for the eight basic chromatic categories (Red, Orange, Brown, Yellow, Green, Blue, Purple, and Pink). To this end, we have worked on the CIELab color space, since it is a quasi-perceptually-uniform color space where a good correlation between the Euclidean distance between color pairs and the perceived color dissimilarity can be observed. It is likely that other spaces could be suitable whenever one of the dimensions correlates with color lightness and the other two with chromaticity components. In this paper, we will denote any color point in such a space as s=(I,c1,c2), where I is the lightness and c1 and c2 are the chromaticity components of the color point.

Ideally, color memberships should be modeled by three-dimensional functions, i.e., functions of the form $\mathbb{R}^3 \rightarrow [0,1]$, but unfortunately it is not easy to infer precisely the way in which color-naming data are distributed in the color space, and hence finding parametric functions that fit these data is a very complicated task. For this reason, in our proposal the three-dimensional color space has been sliced into a set of NL levels along the lightness axis (see Fig. 1), obtaining a set of chromaticity planes over which membership functions have been modeled by two-dimensional functions. Therefore, any specific chromatic category will be defined by a set of functions, each one depending on a lightness component, as is expressed later in Eq. (12). Achromatic categories (Black, Gray, and White) will be given as the complementary function of the chromatic ones but weighted by the membership function of each one of the three achromatic categories. To go into the details of the proposed approach, we will first give the basis of the fuzzy framework, and afterward we will pose the considerations on the function shapes for the chromatic categories. Finally, the complementary achromatic categories will be derived.

2A. Fuzzy Color Naming

A fuzzy set is a set whose elements have a degree of membership. In a more formal way, a fuzzy set A is defined by a crisp set X, called the universal set, and a membership function, $\mu_A$, which maps elements of the universal set into the [0, 1] interval, that is, $\mu_A : X \rightarrow [0,1]$.

Fuzzy sets are a good tool to represent imprecise concepts expressed in natural language. In color naming, we can consider that any color category, Ck, is a fuzzy set with a membership function, μCk, which assigns, to any color sample s represented in a certain color space, i.e., our universal set, a membership value μCk(s) within the [0,1] interval. This value represents the certainty we have that s belongs to category Ck, which is associated with the linguistic term tk.

In our context of color categorization with a fixed number of categories, we need to impose the constraint that, for a given sample s, the sum of its memberships to the n categories must be unity:

$$\sum_{k=1}^{n} \mu_{C_k}(s) = 1 \quad \text{with} \quad \mu_{C_k}(s) \in [0,1], \quad k = 1, \dots, n.$$
In the rest of the paper, this constraint will be referred to as the unity-sum constraint. Although this constraint is not required by general fuzzy-set theory, it is interesting in our case because it allows us to interpret the memberships of any sample as the contributions of the considered categories to the final color sensation.

Hence, for any given color sample s, it will be possible to compute a color descriptor, CD, defined as

$$CD(s) = [\mu_{C_1}(s), \dots, \mu_{C_n}(s)],$$
where each component of this n-dimensional vector describes the membership of s to a specific color category.

The information contained in such a descriptor can be used by a decision function, N(s), to assign a color name to the stimulus s. The simplest decision rule we can derive is to choose the maximum of CD(s):

$$N(s) = t_{k_{\max}}, \qquad k_{\max} = \underset{k=1,\dots,n}{\arg\max}\,\{\mu_{C_k}(s)\},$$
where tk is the linguistic term associated with color category Ck.
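As an illustration, the descriptor and the maximum decision rule can be sketched in a few lines of Python. The membership values below are made up for the example; a real system would obtain them from the parametric model described in this section:

```python
import numpy as np

TERMS = ["red", "orange", "brown", "yellow", "green", "blue",
         "purple", "pink", "black", "gray", "white"]

def color_descriptor(memberships):
    """Return CD(s) = [mu_C1(s), ..., mu_Cn(s)] as a vector.

    `memberships` is any iterable of raw membership scores; they are
    renormalized here so the unity-sum constraint holds exactly.
    """
    cd = np.asarray(memberships, dtype=float)
    return cd / cd.sum()

def color_name(cd, terms=TERMS):
    """Decision rule N(s): the term of the category with maximum membership."""
    return terms[int(np.argmax(cd))]

# Hypothetical raw scores for a slightly purplish blue sample.
cd = color_descriptor([0, 0, 0, 0, 0.05, 0.80, 0.15, 0, 0, 0, 0])
print(color_name(cd))             # -> blue
print(round(float(cd.sum()), 6))  # -> 1.0 (unity-sum constraint)
```

The normalization step is only needed when the raw scores do not already sum to one; the fitted model of Section 3 enforces the constraint directly.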

In our case the categories considered are the basic categories proposed by Berlin and Kay, that is, n=11, and the set of categories is

$$C_k \in \{\text{Red}, \text{Orange}, \text{Brown}, \text{Yellow}, \text{Green}, \text{Blue}, \text{Purple}, \text{Pink}, \text{Black}, \text{Gray}, \text{White}\}.$$

2B. Chromatic Categories

According to the fuzzy framework defined previously, any function we select to model color categories must map values to the [0,1] interval, i.e., $\mu_{C_k}(s) \in [0,1]$. In addition, the observation of the membership values of psychophysical data obtained from a color-naming experiment [22] made us hypothesize about a set of necessary properties that membership functions for the chromatic categories should fulfill:

Triangular basis. Chromatic categories present a plateau, or area with no confusion about the color name, with a triangular shape and a principal vertex shared by all the categories.

Different slopes. For a given chromatic category, the slope of naming certainty toward the neighboring categories can be different on each side of the category (e.g., transition from blue to green can be different from that from blue to purple).

Central notch. The transition from a chromatic category to the central achromatic one has the form of a notch around the principal vertex.

In Fig. 2 we show a scheme of the preceding conditions on a chromaticity diagram where the samples of the color-naming experiment have been plotted.

After considering different membership functions [23, 24, 25] that fulfilled some of the previous properties, we have defined a new variant of them, the triple sigmoid with elliptical center (TSE), as a two-dimensional function, $TSE: \mathbb{R}^2 \rightarrow [0,1]$. The definition of the TSE starts from the one-dimensional sigmoid function:

$$S(x, \beta) = \frac{1}{1 + e^{-\beta x}},$$

where $\beta$ controls the slope of the transition from 0 to 1 [see Fig. 3a].

This can be extended to a two-dimensional sigmoid function, $S: \mathbb{R}^2 \rightarrow [0,1]$, as

$$S_i(\mathbf{p}, \beta) = \frac{1}{1 + \exp(-\beta\, \mathbf{u}_i \cdot \mathbf{p})}, \qquad i = 1, 2,$$

where $\mathbf{p} = (x, y)^T$ is a point in the plane and the vectors $\mathbf{u}_1 = (1, 0)$ and $\mathbf{u}_2 = (0, 1)$ define the axes along which the function is oriented [see Fig. 3b].

By adding a translation, t=(tx,ty), and a rotation, α, to the previous equation, the function can adopt a wide set of shapes. In order to represent the formulation in a compact matrix form, we will use homogeneous coordinates [26]. Let us redefine p to be a point in the plane expressed in homogeneous coordinates as p=(x,y,1)T, and let us denote the vectors u1=(1,0,0) and u2=(0,1,0). We define S1 as a function oriented in axis x with rotation α with respect to axis y, and S2 as a function oriented in axis y with rotation α with respect to axis x:

$$S_i(\mathbf{p}, \mathbf{t}, \alpha, \beta) = \frac{1}{1 + \exp(-\beta\, \mathbf{u}_i R_\alpha T_{\mathbf{t}}\, \mathbf{p})}, \qquad i = 1, 2,$$

where $T_{\mathbf{t}}$ and $R_\alpha$ are a translation matrix and a rotation matrix, respectively:

$$T_{\mathbf{t}} = \begin{pmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{pmatrix}, \qquad R_\alpha = \begin{pmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

By multiplying S1 and S2, we define the double-sigmoid (DS) function, which fulfills the first two properties proposed before:

$$DS(\mathbf{p}, \mathbf{t}, \theta_{DS}) = S_1(\mathbf{p}, \mathbf{t}, \alpha_y, \beta_y)\; S_2(\mathbf{p}, \mathbf{t}, \alpha_x, \beta_x),$$

where $\theta_{DS} = (\alpha_x, \alpha_y, \beta_x, \beta_y)$ is the set of parameters of the DS function. Functions $S_1$, $S_2$, and DS are plotted in Fig. 4.

To obtain the central notch shape needed to fulfill the third proposed property, let us define the elliptic-sigmoid (ES) function by including the ellipse equation in the sigmoid formula:

$$ES(\mathbf{p}, \mathbf{t}, \theta_{ES}) = \frac{1}{1 + \exp\left\{-\beta_e\left[\left(\frac{\mathbf{u}_1 R_\phi T_{\mathbf{t}}\, \mathbf{p}}{e_x}\right)^2 + \left(\frac{\mathbf{u}_2 R_\phi T_{\mathbf{t}}\, \mathbf{p}}{e_y}\right)^2 - 1\right]\right\}},$$

where $\theta_{ES} = (e_x, e_y, \phi, \beta_e)$ is the set of parameters of the ES function, $e_x$ and $e_y$ are the semiminor and semimajor axes, respectively, $\phi$ is the rotation angle of the ellipse, and $\beta_e$ is the slope of the sigmoid curve that forms the ellipse boundary. The function obtained is an elliptic plateau if $\beta_e$ is negative and an elliptic valley if $\beta_e$ is positive. The surfaces obtained can be seen in Fig. 5.

Finally, by multiplying the DS by the ES (with a positive βe), we define the TSE as

$$TSE(\mathbf{p}, \theta) = DS(\mathbf{p}, \mathbf{t}, \theta_{DS})\; ES(\mathbf{p}, \mathbf{t}, \theta_{ES}),$$

where $\theta = (\mathbf{t}, \theta_{DS}, \theta_{ES})$ is the set of parameters of the TSE.

The TSE function defines a membership surface that fulfills the properties defined at the beginning of Subsection 2B. Figure 6 shows the form of the TSE function.
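A minimal numpy sketch of this construction may clarify how the pieces compose. The parameters below (no rotation or translation, gentle slopes, a 10-unit elliptical notch) are toy values chosen only for illustration, not fitted values:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def T(t):
    """Homogeneous 2-D translation matrix T_t."""
    return np.array([[1.0, 0.0, t[0]], [0.0, 1.0, t[1]], [0.0, 0.0, 1.0]])

def R(a):
    """Homogeneous 2-D rotation matrix R_alpha."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

U1, U2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])

def DS(p, t, theta_ds):
    """Double sigmoid: an x-oriented times a y-oriented sigmoid."""
    ax, ay, bx, by = theta_ds
    s1 = sigmoid(by * (U1 @ R(ay) @ T(t) @ p))
    s2 = sigmoid(bx * (U2 @ R(ax) @ T(t) @ p))
    return s1 * s2

def ES(p, t, theta_es):
    """Elliptic sigmoid: a valley for beta_e > 0, a plateau for beta_e < 0."""
    ex, ey, phi, be = theta_es
    q = R(phi) @ T(t) @ p
    return sigmoid(be * ((q[0] / ex) ** 2 + (q[1] / ey) ** 2 - 1.0))

def TSE(p, t, theta_ds, theta_es):
    """Triple sigmoid with elliptical center: DS * ES."""
    return DS(p, t, theta_ds) * ES(p, t, theta_es)

# Toy parameters; p is a point in homogeneous coordinates (x, y, 1).
t0, th_ds, th_es = (0.0, 0.0), (0.0, 0.0, 0.5, 0.5), (10.0, 10.0, 0.0, 2.0)
inside = TSE(np.array([40.0, 40.0, 1.0]), t0, th_ds, th_es)  # on the plateau
center = TSE(np.array([0.0, 0.0, 1.0]), t0, th_ds, th_es)    # in the notch
print(round(float(inside), 3), round(float(center), 3))      # -> 1.0 0.03
```

The plateau point yields a membership near 1, while the principal vertex falls inside the elliptical valley and gets a membership near 0, reproducing the central-notch property.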

Hence, once we have the analytic form of the chosen function, the membership function for a chromatic category μCk is given by

$$\mu_{C_k}(s) = \begin{cases} TSE(c_1, c_2, \theta_{C_k}^{1}) & \text{if } I \le I_1 \\ TSE(c_1, c_2, \theta_{C_k}^{2}) & \text{if } I_1 < I \le I_2 \\ \qquad\vdots & \qquad\vdots \\ TSE(c_1, c_2, \theta_{C_k}^{N_L}) & \text{if } I_{N_L - 1} < I \end{cases}$$
where s=(I,c1,c2) is a sample on the color space, NL is the number of chromaticity planes, θCki is the set of parameters of the category Ck on the ith chromaticity plane, and Ii are the lightness values that divide the space into the NL lightness levels.

By fitting the parameters of the functions, it is possible to obtain the variation of the chromatic categories through the lightness levels. By doing this for all the categories, it will be possible to obtain membership maps; that is, for a given lightness level we have a membership value to each category for any color point s=(I,c1,c2) of the level. Notice that since some categories exist only at certain lightness levels (e.g., brown is defined only for low lightness values and yellow only for high values), on each lightness level not all the categories will have memberships different from zero for any point of the level. Figure 7 shows an example of the membership map provided by the TSE functions for a given lightness level, in which there exist six chromatic categories. The other two chromatic categories in this example would have zero membership for any point of the level.
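Selecting the chromaticity plane for a sample and evaluating the corresponding per-level function can be sketched as follows. The lightness boundaries, the three-level split, and the constant stand-in membership values are all assumptions for the example (the fitted model uses six planes and TSE functions):

```python
import numpy as np

# Hypothetical lightness boundaries I_1, I_2 defining N_L = 3 levels.
LEVELS = np.array([35.0, 70.0])

def plane_index(I, levels=LEVELS):
    """0-based index of the chromaticity plane whose level contains I."""
    return int(np.searchsorted(levels, I, side="left"))

def membership(s, per_plane_mu):
    """Evaluate mu_Ck(s) with the function fitted for the plane containing s.

    `per_plane_mu` holds one callable (c1, c2) -> [0,1] per lightness level;
    in the real model each would be a TSE with parameters theta_Ck^i.
    """
    I, c1, c2 = s
    return per_plane_mu[plane_index(I)](c1, c2)

# Toy stand-ins: a category (think Brown) defined only at low lightness.
per_plane = [lambda c1, c2: 0.8, lambda c1, c2: 0.2, lambda c1, c2: 0.0]
print(membership((20.0, 30.0, 25.0), per_plane))  # -> 0.8
print(membership((90.0, 30.0, 25.0), per_plane))  # -> 0.0
```

The last call illustrates the point made above: a category can have zero membership everywhere on a plane where it does not exist.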

2C. Achromatic Categories

The three achromatic categories (Black, Gray, and White) are first considered as a unique category at each chromaticity plane. To ensure that the unity-sum constraint is fulfilled (i.e., the sum of all memberships must be one), a global achromatic membership, μA, is computed for each level as

$$\mu_A^i(c_1, c_2) = 1 - \sum_{k=1}^{n_c} \mu_{C_k}^i(c_1, c_2),$$
where i is the chromaticity plane that contains the sample s=(I,c1,c2) and nc is the number of chromatic categories (in our case, nc=8). The differentiation among the three achromatic categories must be done in terms of lightness. To model the fuzzy boundaries among these three categories, we use one-dimensional sigmoid functions along the lightness axis:
$$\mu_{A_{Black}}(I, \theta_{Black}) = \frac{1}{1 + e^{\beta_b (I - t_b)}},$$

$$\mu_{A_{Gray}}(I, \theta_{Gray}) = \frac{1}{1 + e^{-\beta_b (I - t_b)}} - \frac{1}{1 + e^{-\beta_w (I - t_w)}},$$

$$\mu_{A_{White}}(I, \theta_{White}) = \frac{1}{1 + e^{-\beta_w (I - t_w)}},$$

where $\theta_{Black} = (t_b, \beta_b)$, $\theta_{Gray} = (t_b, \beta_b, t_w, \beta_w)$, and $\theta_{White} = (t_w, \beta_w)$ are the sets of parameters for Black, Gray, and White, respectively. With these definitions the three memberships sum to one for any lightness value. Figure 8 shows a scheme of this division along the lightness axis.
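These one-dimensional boundaries can be sketched directly. The thresholds and slopes below are illustrative assumptions on a 0-100 lightness axis, not the fitted values reported later; the key property is that the three functions partition the axis and sum to one, so weighting them by the global achromatic membership preserves the unity-sum constraint:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative thresholds and slopes (assumptions, not fitted values).
t_b, beta_b = 30.0, 0.5   # black/gray boundary
t_w, beta_w = 70.0, 0.5   # gray/white boundary

def mu_black(I):
    return sigmoid(-beta_b * (I - t_b))

def mu_gray(I):
    return sigmoid(beta_b * (I - t_b)) - sigmoid(beta_w * (I - t_w))

def mu_white(I):
    return sigmoid(beta_w * (I - t_w))

for I in (5.0, 50.0, 95.0):
    print(round(mu_black(I), 3), round(mu_gray(I), 3), round(mu_white(I), 3))
# -> 1.0 0.0 0.0
# -> 0.0 1.0 0.0
# -> 0.0 0.0 1.0
```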

Hence, the membership of the three achromatic categories on a given chromaticity plane is computed by weighting the global achromatic membership [Eq. (13)] with the corresponding membership in the lightness dimension [Eqs. (14, 15, 16)]:

$$\mu_{C_k}(s, \theta_{C_k}) = \mu_A^i(c_1, c_2)\; \mu_{A_{C_k}}(I, \theta_{C_k}), \qquad 9 \le k \le 11, \quad I_i < I \le I_{i+1},$$
where i is the chromaticity plane in which the sample is included and the values of k correspond to the achromatic categories [see Eq. (4)]. In this way we can assure that the unity-sum constraint is fulfilled on each specific chromaticity plane,
$$\sum_{k=1}^{11} \mu_{C_k}^i(s) = 1, \qquad i = 1, \dots, N_L,$$
where NL is the number of chromaticity planes in the model.

3. FUZZY-SETS ESTIMATION

Once we have defined the membership functions of the model, the next step is to fit their parameters. To this end, we need a set of psychophysical data, D, composed of a set of samples from the color space and their membership values to the 11 categories,

$$D = \{(s_i, m_1^i, \dots, m_{11}^i)\}, \qquad i = 1, \dots, n_s,$$
where si is the ith sample of the learning set, ns is the number of samples in the learning set, and mki is the membership value of the ith sample to the kth category.

Such data will be the knowledge basis for a fitting process to estimate the model parameters, taking into account our unity-sum constraint given in Eq. (18). In this case, the model will be estimated for the CIELab space, since it is a standard space with interesting properties. However, any other color space with a lightness dimension and two chromatic dimensions would be suitable for this purpose.

3A. Learning Set

The data set for the fitting process must be perceptually significant; that is, the judgements should be coherent with results from psychophysical color-naming experiments and the samples should cover all the color space. At present, there are no color-naming experiments providing fuzzy judgements. We proposed a fuzzy methodology for that purpose in [22], but the sampling of the color space is not large enough to fit the presented model.

Thus, to build a wide learning set, we have used the color-naming map proposed by Seaborn et al. in [13]. This color map has been built by making some considerations on the consensus areas of the Munsell color space provided by the psychophysical data from the experiments of Sturges and Whitfield [14]. Using such data and the fuzzy k-means algorithm, this method allows us to derive the memberships of any point in the Munsell space to the 11 basic color categories.

In this way, we have obtained the memberships of a wide sample set, and afterward we have converted this color sampling set to its corresponding CIELab representation. Our data set was initially composed of the 1269 samples of the Munsell Book of Color [27]. Their reflectances and CIELab coordinates, calculated by using the CIE D65 illuminant, are available at the Web site of the University of Joensuu in Finland [28]. In order to avoid problems in the fitting process due to the reduced number of achromatic and low-chroma samples, the set was completed with 18 achromatic samples (from value=1 to value=9.5 at steps of 0.5), 320 low-chroma samples (for values from 2 to 9, hue at steps of 2.5, and chroma=1), and 10 samples with value=2.5 and chroma=2 (hues 5YR, 7.5YR, 10YR, 2.5Y, 5Y, 7.5Y, 10Y, 2.5GY, 5GY, and 7.5GY). The CIELab coordinates of these additional samples were computed with the Munsell Conversion software (Version 6.5.10). Therefore, the total number of samples in our learning set is 1617. With such a data set we achieve the perceptual significance required for the learning set: by using Seaborn’s method we include the results of the psychophysical experiment of Sturges and Whitfield, and, in addition, the set covers an area of the color space that suffices for our purpose.

3B. Parameter Estimation

Before starting the fitting process, the number of chromaticity planes and the values that define the lightness levels [see Eq. (12)] must be set. These values depend on the learning set used and must be chosen taking into account the distribution of its samples. In our case, the number of planes that delivered the best results was found to be 6, and the values that define the levels were selected by choosing some local minima in the histogram of samples along the lightness axis. Figure 9 shows the samples' histogram and the values selected. However, if a more extensive learning set were available, a higher number of levels would possibly deliver better results.

For each chromaticity plane, the global goal of the fitting process is to find an estimate of the parameters, $\hat{\theta}^j$, that minimizes the mean squared error between the memberships from the learning set and the values provided by the model:

$$\hat{\theta}^j = \underset{\theta^j}{\arg\min}\; \frac{1}{n_{cp}} \sum_{i=1}^{n_{cp}} \sum_{k=1}^{n_c} \left(\mu_{C_k}^j(s_i, \theta_{C_k}^j) - m_k^i\right)^2, \qquad j = 1, \dots, N_L,$$

where $\hat{\theta}^j = (\hat{\theta}_{C_1}^j, \dots, \hat{\theta}_{C_{n_c}}^j)$ is the estimate of the parameters of the model for the chromatic categories on the jth chromaticity plane, $\theta_{C_k}^j$ is the set of parameters of the category $C_k$ for the jth chromaticity plane, $n_c$ is the number of chromatic categories, $n_{cp}$ is the number of samples of the chromaticity plane, $\mu_{C_k}^j$ is the membership function of the color category $C_k$ for the jth chromaticity plane, and $m_k^i$ is the membership value of the ith sample of the learning set to the kth category.
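As a scaled-down illustration of this least-squares objective, the sketch below fits the single slope parameter of a one-dimensional sigmoid boundary to noisy synthetic memberships. The real fitting runs over the full TSE parameter sets and uses the simplex method of [29]; the naive grid search here only demonstrates the objective:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic "learning set": memberships across a 1-D category boundary,
# generated with a known slope plus observation noise.
rng = np.random.default_rng(0)
x = np.linspace(-10.0, 10.0, 201)
beta_true = 0.8
m = sigmoid(beta_true * x) + rng.normal(0.0, 0.01, x.size)

def mse(beta):
    """Mean squared error between model memberships and learning data."""
    return float(np.mean((sigmoid(beta * x) - m) ** 2))

# Dense 1-D grid search over the single parameter.
betas = np.linspace(0.1, 2.0, 1901)
beta_hat = float(betas[np.argmin([mse(b) for b in betas])])
print(round(beta_hat, 2))  # close to the true slope 0.8
```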

The previous minimization is subject to the unity-sum constraint:

$$\sum_{k=1}^{11} \mu_{C_k}^j(s, \theta_{C_k}^j) = 1, \qquad \forall\, s = (I, c_1, c_2) \mid I_{j-1} < I \le I_j,$$
which is imposed on the fitting process through two assumptions. The first one is related to the membership transition from chromatic categories to achromatic categories:

Assumption 1: All the chromatic categories in a chromaticity plane share the same ES function, which models the membership transition to the achromatic categories. This means that all the chromatic categories share the set of estimated parameters for ES:

$$\theta_{ES}^{C_p, j} = \theta_{ES}^{C_q, j}, \qquad \mathbf{t}^{C_p, j} = \mathbf{t}^{C_q, j}, \qquad \forall\, p, q \in \{1, \dots, n_c\},$$
where nc is the number of chromatic categories.

The second assumption refers to the membership transition between adjacent chromatic categories:

Assumption 2: Each pair of neighboring categories, Cp and Cq, share the parameters of slope and angle of the DS function, which define their boundary:

$$\beta_y^{C_p} = \beta_x^{C_q}, \qquad \alpha_y^{C_p} = \alpha_x^{C_q} - \frac{\pi}{2},$$
where the superscripts indicate the category to which the parameters correspond.

These assumptions considerably reduce the number of parameters to be estimated. Hence, for each chromaticity plane, we must estimate 2 parameters for the translation, t=(tx,ty), 4 for the ES function, θES=(ex,ey,ϕ,βe), and a maximum of 2×nc for the DS functions, since the other two parameters of θDS=(αx,αy,βx,βy) can be obtained from the neighboring category [Eq. (23)].

Hence, following the two previous assumptions, the parameters of the chromatic categories at each chromaticity plane, $\hat{\theta}_{C_k}^j = (\hat{\mathbf{t}}^j, \hat{\theta}_{DS}^{C_k, j}, \hat{\theta}_{ES}^j)$, with $k = 1, \dots, n_c$, are estimated in two steps:

1. According to assumption 1, we estimate the parameters of a unique ES function, t̂j and θ̂ESj, for each chromaticity plane by minimizing:

$$(\hat{\mathbf{t}}^j, \hat{\theta}_{ES}^j) = \underset{\mathbf{t}^j,\, \theta_{ES}^j}{\arg\min}\; \frac{1}{n_{cp}} \sum_{i=1}^{n_{cp}} \left(ES(s_i, \mathbf{t}^j, \theta_{ES}^j) - \Big(1 - \sum_{k=9}^{11} m_k^i\Big)\right)^2,$$
where ncp is the number of samples from the learning set in the jth chromaticity plane and mki is the membership to the kth category of the ith sample for values of k between 9 and 11, which correspond to the achromatic categories according to Eq. (4).

2. Considering assumption 2 allows us to estimate the rest of the parameters, θ̂DSCkj, of each color category by minimizing the following expression for each pair of neighboring categories, Cp and Cq:

$$(\hat{\theta}_{DS}^{C_p, j}, \hat{\theta}_{DS}^{C_q, j}) = \underset{\theta_{DS}^{C_p, j},\, \theta_{DS}^{C_q, j}}{\arg\min}\; \sum_{i=1}^{n_{cp}} \left[\left(\mu_{C_p}^j(s_i, \theta_{C_p}^j) - m_p^i\right)^2 + \left(\mu_{C_q}^j(s_i, \theta_{C_q}^j) - m_q^i\right)^2\right],$$

where $\theta_{C_k}^j = (\hat{\mathbf{t}}^j, \theta_{DS}^{C_k, j}, \hat{\theta}_{ES}^j)$.

Once all the parameters of the chromatic categories have been estimated for all the chromaticity planes, the parameters used to differentiate among the three achromatic categories, $\hat{\theta}_A = (\hat{\theta}_{C_9}, \hat{\theta}_{C_{10}}, \hat{\theta}_{C_{11}})$, are estimated by minimizing the expression

$$\hat{\theta}_A = \underset{\theta_A}{\arg\min}\; \sum_{i=1}^{n_s} \sum_{k=9}^{11} \left(\mu_{C_k}(s_i, \theta_{C_k}) - m_k^i\right)^2,$$
where ns is the number of samples in the learning set and the values of k correspond to the three achromatic categories, that is, C9=Black, C10=Gray, and C11=White [see Eq. (4)].

All the minimizations to estimate the parameters are performed by using the simplex search method proposed in [29]. After the fitting process, we obtain the parameters that completely define our color-naming model and that are presented and discussed in the next section.

4. RESULTS AND DISCUSSION

The essential result of this work is the set of parameters of the color-naming model that are summarized in Table 1 .

The evaluation of the fitting process is done in terms of two measures. The first one is the mean absolute error (MAEfit) between the learning set memberships and the memberships obtained from the parametric membership functions:

$$MAE_{fit} = \frac{1}{n_s} \frac{1}{11} \sum_{i=1}^{n_s} \sum_{k=1}^{11} \left| m_k^i - \mu_{C_k}(s_i) \right|,$$

where $n_s$ is the number of samples in the learning set, $m_k^i$ is the membership of $s_i$ to the kth category, and $\mu_{C_k}(s_i)$ is the parametric membership of $s_i$ to the kth category provided by our model.
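This measure is straightforward to compute: it is the mean of the absolute entry-wise differences between the two membership tables. A small sketch with a toy two-category table (real evaluations use the full 11 categories):

```python
import numpy as np

def mae_fit(m, mu):
    """Mean absolute error between learning-set memberships `m` and
    model memberships `mu`, both arrays of shape (n_samples, n_categories)."""
    m, mu = np.asarray(m, dtype=float), np.asarray(mu, dtype=float)
    return float(np.mean(np.abs(m - mu)))

# Toy data: every model membership is off by 0.1.
m  = np.array([[0.9, 0.1], [0.2, 0.8]])
mu = np.array([[0.8, 0.2], [0.3, 0.7]])
print(round(mae_fit(m, mu), 6))  # -> 0.1
```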

The value of MAEfit is a measure of the accuracy of the model fitting to the learning data set, and in our case the value obtained was MAEfit=0.0168. This measure was also computed for a test data set of 3149 samples. To build the test data set, the Munsell space was sampled at hues 1.25, 3.75, 6.25, and 8.75; values from 2.5 to 9.5 at steps of 1 unit; and chromas from 1 to the maximum available with a step of 2 units. As in the case of the learning set, the memberships of the test set that were considered the ground truth were computed with Seaborn’s algorithm. The corresponding CIELab values to apply our parametric functions were computed with the Munsell Conversion software. The value of MAEfit obtained was 0.0218, which confirms the accuracy of the fitting and shows that the model provides membership values with very low error even for samples that were not used in the fitting process.

The second measure evaluates the degree of fulfillment of the unity-sum constraint. Considering as error the difference between the unity and the sum of all the memberships at a point, pi, the measure proposed is

$$MAE_{unitsum} = \frac{1}{n_p} \sum_{i=1}^{n_p} \left| 1 - \sum_{k=1}^{11} \mu_{C_k}(p_i) \right|,$$
where np is the number of points considered and μCk is the membership function of category Ck.
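A corresponding sketch for this measure, again with toy membership rows standing in for the model's sampled grid:

```python
import numpy as np

def mae_unitsum(mu_grid):
    """Mean |1 - sum_k mu_Ck(p_i)| over sampled points.

    `mu_grid` has shape (n_points, n_categories): one row of memberships
    per sampled point of a chromaticity plane.
    """
    mu_grid = np.asarray(mu_grid, dtype=float)
    return float(np.mean(np.abs(1.0 - mu_grid.sum(axis=1))))

# Toy rows: the first sums to one exactly, the second overshoots by 0.001.
mu_grid = np.array([[0.7, 0.2, 0.1], [0.5, 0.5, 0.001]])
print(round(mae_unitsum(mu_grid), 6))  # -> 0.0005
```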

To compute this measure, we have sampled each one of the six chromaticity planes with values from −80 to 80 at steps of 0.5 units on both the a and b axes, which means that np=153,600. The value obtained, MAEunitsum = 6.41 × 10^{-4}, indicates that the model fulfills that constraint to a high degree, making it consistent with the proposed framework.

Hence, for any point of the CIELab space we can compute the membership to all the categories and, at each chromaticity plane, these values can be plotted to generate a membership map. In Fig. 10 we show the membership maps of the six chromaticity planes considered, with the membership surfaces labeled with their corresponding color terms.

In many previous works on color naming, results have been evaluated in terms of the categorization of the Munsell space [2, 11, 13, 30]. To be able to compare our results to the previous ones, we will also categorize the Munsell space by applying the maximum criterion [Eq. (3)] as a decision rule to assign a color name to each chip of the Munsell data set.

To evaluate the plausibility of the model with psychophysical data, we compare our categorization to the results reported in two works of reference: the study of Berlin and Kay [2] and the experiments of Sturges and Whitfield [14]. Figure 11 shows the boundaries found by Berlin and Kay in their work, superimposed on our categorization. Samples inside these boundaries assigned with a different name by our model are marked with a cross. As can be seen, there are a total of 17 samples out of 210 inside Berlin and Kay’s boundaries with a different name. The errors are concentrated on certain boundaries, namely, green-blue, blue-purple, purple-pink, and purple-red.

The comparison to Sturges and Whitfield’s results is presented in Fig. 12 . In Sturges and Whitfield’s experiment the samples labeled with the same name by all the subjects defined the consensus areas for each category. Among these samples, the fastest-named sample for each category was its focus. These areas are superimposed over our categorization to show that all the consensus and focal samples from Sturges and Whitfield’s experiment are assigned the same name by our model.

The analysis applied to our TSE model (TSEM) was also performed on some previous categorizations: those obtained by Lammens’s Gaussian model (LGM) [11], an English speaker presented by MacLaury (MES) in [30], Seaborn’s fuzzy k-means model (SFKM) [13], and our previous TS model (TSM) [24]. The results are summarized in Table 2.

As can be seen in the table, the results of our TSEM equal the previous best results of Seaborn’s nonparametric model while adding the advantages of a parametric model discussed in Section 2. Notice that although the learning processes of both models were based on data derived from Sturges’s results, they are the most consistent with Berlin and Kay’s experiments and also outperform the English speaker’s categorization, which illustrates the variability of the problem, since any individual subject’s judgements will normally differ from those of a color-naming experiment.

5. CONCLUSIONS

In this paper we have proposed a parametric fuzzy model for color naming based on the definition of the TSE as a membership function. The use of a parametric model introduces several advantages with respect to previous nonparametric approaches. These advantages, which have been discussed in Section 2, include a reduction in the implementation costs in terms of memory and computation time; a compact data representation; and simplicity for model analysis, since each parameter has a meaning in terms of the characterization of the data and, consequently, the model can be easily updated by just tuning some of the parameters.

The model has been conceived for any color space with two chromatic dimensions and a lightness dimension, but in the present work the parameters have been estimated for the CIELab space. The estimation process includes some assumptions to assure the fulfillment of the unity-sum constraint, i.e., that the memberships of any point must sum to one. The result is the set of parameters that defines a model that achieves a low fitting error on both the learning and test data sets and also fulfills the unity-sum constraint. The evaluation of the model against previous results from the color-naming experiments of Berlin and Kay, and of Sturges and Whitfield, demonstrates that our model is plausible with respect to these psychophysical data.

Hence, the memberships to the 11 basic color categories can be obtained for any point in the CIELab space to provide a color-naming descriptor with meaningful information about how humans name colors. The results are promising and have many applications to different computer vision tasks, such as image description, indexing, and segmentation, among others, where inclusion of this high-level information might improve their performance. The proposed representation of color information could also be used as a more perceptual measure of similarity for color, instead of the Euclidean distance in color spaces.

However, it must be pointed out that the model has been fitted to data derived from psychophysical experiments where a homogeneous color area is shown to an observer who has been adapted to the scene illuminant, and therefore the name assignment has been done under ideal conditions where influences from neither the illuminant nor the surround of the observed area have any effect on the naming process. In practice, the color-name assignment is a content-dependent task, and therefore perceptual considerations about the surround influence must be taken into account. The model we have proposed is assumed to work on perceived images, that is, images where the effects of perceptual adaptation to the illuminant and to the surround have been previously considered in a preprocessing step. Hence, the application of a color constancy algorithm can provide images under a canonical illuminant, thus simulating an adaptation process to the illuminant [31, 32, 33]. On the other hand, induction operators take into account the influence of the color surrounds in the final color representations as proposed in [34, 35].

Another fact to consider is that, since Sturges and Whitfield's experiments were performed with physical color samples, the data used to fit the model reduce the space occupied by some categories (e.g., red) because of the limitations of producing certain colors with pigments. Hence, if the model is applied to other kinds of stimuli, e.g., lights, some errors could appear. This problem has already been detected in previous works [36].

One limitation of the model is the reduced vocabulary of color names considered. However, this vocabulary could easily be extended by using the fuzzy information provided by the model. Compound nouns could be used for samples with a membership of 0.5 to each of two categories (e.g., a sample with memberships of 0.5 to green and 0.5 to blue could be named blue-green), or the “-ish” suffix could be used for samples with a high membership to one category and a certain membership to another (e.g., a sample with memberships of 0.7 to green and 0.3 to blue could be named bluish green). Nonetheless, the 11 basic categories considered will normally suffice for most applications of the model, since psychophysical experiments have demonstrated that humans use basic terms more frequently, more consistently, and faster than nonbasic color terms [3, 4].
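The naming rules just described can be written down directly. The thresholds (0.5/0.5 for compounds, 0.7/0.3 for the “-ish” form) follow the examples in the text, while the function names and the crude suffixing rule are ours:

```python
def ish(name):
    # Crude "-ish" suffixing: drop a trailing 'e' ("blue" -> "bluish").
    return (name[:-1] if name.endswith("e") else name) + "ish"

def extended_name(memberships, names):
    """Return a basic, compound ("blue-green"), or "-ish" ("bluish green")
    color name from a membership vector, following the rules in the text."""
    ranked = sorted(zip(memberships, names), reverse=True)
    (m1, n1), (m2, n2) = ranked[0], ranked[1]
    if abs(m1 - 0.5) < 1e-6 and abs(m2 - 0.5) < 1e-6:
        return f"{n2}-{n1}"          # equal memberships: compound noun
    if m1 >= 0.7 and m2 >= 0.3:
        return f"{ish(n2)} {n1}"     # high + secondary: "-ish" modifier
    return n1                        # otherwise the dominant basic term

names = ["green", "blue"]
```

For the two examples in the text, `extended_name([0.5, 0.5], names)` yields "blue-green" and `extended_name([0.7, 0.3], names)` yields "bluish green".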

It would also be interesting to obtain a wider set of data from a fuzzy psychophysical experiment covering as wide an area of the color space as possible, thus avoiding undersampling problems. With such psychophysical data, the proposed model could be improved in several respects. First, it would be desirable to relax or even eliminate the first assumption made in the fitting process, allowing the membership transition from the chromatic categories to the achromatic center to differ for each category. Second, the division of the color space into discrete lightness levels should be removed. Observation of the membership maps of the TSE model (Fig. 10) reveals some tendencies in the evolution of the boundaries between color categories across lightness levels. Hence, the parameters of the membership functions could be interpolated along the levels defined in the current model to obtain the membership-function parameters for any given value of lightness.
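The interpolation suggested here could be as simple as linear interpolation of each parameter between the two nearest fitted lightness levels; the level positions and parameter values below are hypothetical, not the fitted values of Table 1:

```python
def interpolate_parameter(L, levels, values):
    """Linearly interpolate a membership-function parameter, fitted at a
    discrete set of lightness levels, to an arbitrary lightness L."""
    if L <= levels[0]:
        return values[0]
    if L >= levels[-1]:
        return values[-1]
    for i in range(len(levels) - 1):
        if levels[i] <= L <= levels[i + 1]:
            t = (L - levels[i]) / (levels[i + 1] - levels[i])
            return values[i] + t * (values[i + 1] - values[i])

# Hypothetical values of one parameter fitted at three lightness levels:
levels = [25.0, 50.0, 75.0]
betas = [0.8, 1.4, 0.9]
beta_mid = interpolate_parameter(37.5, levels, betas)   # halfway: 1.1
```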

However, to do this, the estimation of some parameters should be improved. We have noticed that, for some categories, the β parameters do not vary across lightness as might be expected. Intuitively, the values of β should be lower at high and low lightness, where colors are more easily confused and the transition from one color to another should therefore be smoother, and higher at intermediate lightness levels, where there is less uncertainty. However, several factors cause the evolution of the β values to deviate from this expectation. The consensus areas of Sturges and Whitfield's experiment (areas with no confusion between subjects) are assumed to have membership 1. Intuitively, these consensus areas should be larger at the intermediate lightness levels than at the extremes; however, this is not the case, and their extent is much more similar across lightness than one would expect. Moreover, the color solid of the CIELab space is wider at the central lightness levels than at the extremes, so the consensus areas of Sturges and Whitfield are more spread out there than at the lower and higher levels of the lightness axis. This causes the membership transitions between regions of membership 1 to be smoother at the central lightness levels than at the low and high ones. In addition, the slicing of the color space into levels can also distort the boundaries, since all the samples on each level are collapsed onto a chromaticity plane where memberships are modeled with our TSE functions. To address this, we are conducting new psychophysical experiments focused on the areas around the boundaries in order to better estimate the parameters that define the transitions between categories.
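For reference, β plays the usual role of a sigmoid slope (cf. Fig. 3): the larger β is, the sharper the transition between categories. A one-dimensional sketch, with illustrative values of our own choosing:

```python
import math

def sigmoid(x, beta):
    """One-dimensional sigmoid; beta controls the slope of the transition."""
    return 1.0 / (1.0 + math.exp(-beta * x))

# At the same distance from the boundary (x = 0), a larger beta gives a
# sharper, more confident category assignment.
soft = sigmoid(1.0, beta=1.0)    # ≈ 0.73
sharp = sigmoid(1.0, beta=5.0)   # ≈ 0.99
```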

Nonetheless, the final goal should be to define three-dimensional membership functions to model the color categories. If a larger fuzzy data set were available, the membership distributions over the whole color space could be analyzed to define the properties that three-dimensional functions should fulfill to model color categories accurately, in a way similar to what we have done here for the two-dimensional functions. Unfortunately, this does not seem to be an easy task.

ACKNOWLEDGMENTS

This work has been partially supported by projects TIN2004-02970, TIN2007-64577, and Consolider-Ingenio 2010 (CSD2007-00018) of the Spanish Ministry of Education and Science (MEC) and European Community (EC) grant IST-045547 for the VIDI-video project. Robert Benavente is funded by the “Juan de la Cierva” program (JCI-2007-627) of the Spanish MEC. The authors would also like to acknowledge Agata Lapedriza for her suggestions during this work.

Table 1. Parameters of the Triple-Sigmoid with Elliptical Center Model

Table 2. Comparison of Different Munsell Categorizations to the Results from Color-Naming Experiments of Berlin and Kay [2] and Sturges and Whitfield [14]

Fig. 1 Scheme of the model. The color space is divided into NL levels along the lightness axis.

Fig. 2 Desirable properties of the membership function for chromatic categories. In this case, on the blue category.

Fig. 3 (a) Sigmoid function in one dimension. The value of β determines the slope of the function. (b) Sigmoid function in two dimensions. Vector ui determines the axis in which the function is oriented.

Fig. 4 Two-dimensional sigmoid functions. (a) S1, sigmoid function oriented in the x-axis direction. (b) S2, sigmoid function oriented in the y-axis direction. (c) DS, product of two differently oriented sigmoid functions generates a plateau with some of the properties needed for the membership function.

Fig. 5 Elliptic-sigmoid function ES(p,t,θES). (a) ES for βe<0 and (b) ES for βe>0.

Fig. 7 TSE function fitted to the chromatic categories defined on a given lightness level. In this case, only six categories have memberships different from zero.

Fig. 8 Sigmoid functions are used to differentiate among the three achromatic categories.

Fig. 9 Histogram of the learning set samples used to determine the values that define the lightness levels of the model.

Fig. 10 Membership maps for the six chromaticity planes of the model.

Fig. 11 Comparison between our model’s Munsell categorization and Berlin and Kay’s boundaries. Samples named differently by our model are marked with a cross.

Fig. 12 Consensus areas and focus from Sturges and Whitfield’s experiment superimposed on our model’s categorization of the Munsell array.

REFERENCES

1. K. Gegenfurtner and D. Kiper, “Color vision,” Annu. Rev. Neurosci. 26, 181–206 (2003).

2. B. Berlin and P. Kay, Basic Color Terms: Their Universality and Evolution (University of California Press, 1969).

3. R. Boynton and C. Olson, “Salience of chromatic basic color terms confirmed by three measures,” Vision Res. 30, 1311–1317 (1990).

4. J. Sturges and T. Whitfield, “Salient features of Munsell color space as a function of monolexemic naming and response latencies,” Vision Res. 37, 307–313 (1997).

5. S. Guest and D. V. Laar, “The structure of colour naming space,” Vision Res. 40, 723–734 (2000).

6. N. Moroney, “Unconstrained Web-based color naming experiment,” Proc. SPIE 5008, 36–46 (2003).

7. S. Tominaga, “A color-naming method for computer color vision,” in Proceedings of IEEE International Conference on Cybernetics and Society (IEEE, 1985), pp. 573–577.

8. H. Lin, M. Luo, L. MacDonald, and A. Tarrant, “A cross-cultural colour-naming study. Part III—A colour-naming model,” Color Res. Appl. 26, 270–277 (2001).

9. Z. Wang, M. Luo, B. Kang, H. Choh, and C. Kim, “An algorithm for categorising colours into universal colour names,” in Proceedings of the 3rd European Conference on Colour in Graphics, Imaging, and Vision (Society for Imaging Science and Technology, IS&T, 2006), pp. 426–430.

10. P. Kay and C. McDaniel, “The linguistic significance of the meanings of basic color terms,” Language 54, 610–646 (1978).

11. J. Lammens, “A computational model of color perception and color naming,” Ph.D. thesis (State University of New York, 1994).

12. A. Mojsilovic, “A computational model for color naming and describing color composition of images,” IEEE Trans. Image Process. 14, 690–699 (2005).

13. M. Seaborn, L. Hepplewhite, and J. Stonham, “Fuzzy colour category map for the measurement of colour similarity and dissimilarity,” Pattern Recogn. 38, 165–177 (2005).

14. J. Sturges and T. Whitfield, “Locating basic colours in the Munsell space,” Color Res. Appl. 20, 364–376 (1995).

15. E. van den Broek, T. Schouten, and P. Kisters, “Modeling human color categorization,” Pattern Recogn. Lett. 29, 1136–1144 (2008).

16. G. Gagaudakis and P. Rosin, “Incorporating shape into histograms for CBIR,” Pattern Recogn. 35, 81–91 (2002).

17. P. KaewTrakulPong and R. Bowden, “A real time adaptive visual surveillance system for tracking low-resolution colour targets in dynamically changing scenes,” Image Vis. Comput. 21, 913–929 (2003).

18. A. Mojsilovic, J. Gomes, and B. Rogowitz, “Semantic-friendly indexing and querying of images based on the extraction of the objective semantic cues,” Int. J. Comput. Vis. 56, 79–107 (2004).

19. E. van den Broek, E. van Rikxoort, and T. Schouten, “Human-centered object-based image retrieval,” in Advances in Pattern Recognition, Vol. 3687 of Lecture Notes in Computer Science, S. Singh, M. Singh, C. Apte, and P. Perner, eds. (Springer-Verlag, 2005), pp. 492–501.

20. G. Klir and B. Yuan, Fuzzy Sets and Fuzzy Logic: Theory and Applications (Prentice Hall, 1995).

21. D. Alexander, “Statistical modelling of colour data and model selection for region tracking,” Ph.D. thesis (Department of Computer Science, University College London, 1997).

22. R. Benavente, M. Vanrell, and R. Baldrich, “A data set for fuzzy colour naming,” Color Res. Appl. 31, 48–56 (2006).

23. R. Benavente, F. Tous, R. Baldrich, and M. Vanrell, “Statistical modelling of a colour naming space,” in Proceedings of the 1st European Conference on Colour in Graphics, Imaging, and Vision (CGIV’2002) (Society for Imaging Science and Technology, IS&T, 2002), pp. 406–411.

24. R. Benavente and M. Vanrell, “Fuzzy colour naming based on sigmoid membership functions,” in Proceedings of the 2nd European Conference on Colour in Graphics, Imaging, and Vision (CGIV’2004) (Society for Imaging Science and Technology, IS&T, 2004), pp. 135–139.

25. R. Benavente, M. Vanrell, and R. Baldrich, “Estimation of fuzzy sets for computational colour categorization,” Color Res. Appl. 29, 342–353 (2004).

26. W. Graustein, Homogeneous Cartesian Coordinates. Linear Dependence of Points and Lines (Macmillan, 1930), Chap. 3, pp. 29–49.

27. Munsell Color Company, Munsell Book of Color—Matte Finish Collection (Munsell Color Company, 1976).

28. Spectral Database, University of Joensuu Color Group, last accessed May 27, 2008, http://spectral.joensuu.fi/.

29. J. Lagarias, J. Reeds, M. Wright, and P. Wright, “Convergence properties of the Nelder–Mead simplex method in low dimensions,” SIAM J. Optim. 9, 112–147 (1998).

30. R. MacLaury, “From brightness to hue: an explanatory model of color-category evolution,” Curr. Anthropol. 33, 137–186 (1992).

31. D. Forsyth, “A novel algorithm for color constancy,” Int. J. Comput. Vis. 5, 5–36 (1990).

32. G. Finlayson, S. Hordley, and P. Hubel, “Color by correlation: a simple, unifying framework for color constancy,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 1209–1221 (2001).

33. G. Finlayson, S. Hordley, and I. Tastl, “Gamut constrained illuminant estimation,” Int. J. Comput. Vis. 67, 93–109 (2006).

34. M. Vanrell, R. Baldrich, A. Salvatella, R. Benavente, and F. Tous, “Induction operators for a computational colour texture representation,” Comput. Vis. Image Underst. 94, 92–114 (2004).

35. X. Otazu and M. Vanrell, “Building perceived colour images,” in Proceedings of the 2nd European Conference on Colour in Graphics, Imaging, and Vision (CGIV’2004) (Society for Imaging Science and Technology, IS&T, 2004), pp. 140–145.

36. R. Boynton, “Insights gained from naming the OSA colors,” in Color Categories in Thought and Language, C. L. Hardin and L. Maffi, eds. (Cambridge U. Press, 1997), pp. 135–150.


Tominaga, S.

S. Tominaga, “A color-naming method for computer color vision,” in Proceedings of IEEE International Conference on Cybernetics and Society (IEEE, 1985), pp. 573-577.

Tous, F.

M. Vanrell, R. Baldrich, A. Salvatella, R. Benavente, and F. Tous, “Induction operators for a computational colour texture representation,” Comput. Vis. Image Underst. 94, 92-114 (2004).
[CrossRef]

R. Benavente, F. Tous, R. Baldrich, and M. Vanrell, “Statistical modelling of a colour naming space,” in Proceedings of the 1st European Conference on Colour in Graphics, Imaging, and Vision (CGIV'2002) (Society for Imaging Science and Technology, IS&T, 2002), pp. 406-411.

van den Broek, E.

E. van den Broek, T. Schouten, and P. Kisters, “Modeling human color categorization,” Pattern Recogn. Lett. 29, 1136-1144 (2008).
[CrossRef]

E. van den Broek, E. van Rikxoort, and T. Schouten, “Human-centered object-based image retrieval,” in Advances in Pattern Recognition, Vol. 3687 of Lecture Notes in Computer Science, S.Singh, M.Singh, C.Apte, and P.Perner (Springer-Verlag, 2005), pp. 492-501.

van Rikxoort, E.

E. van den Broek, E. van Rikxoort, and T. Schouten, “Human-centered object-based image retrieval,” in Advances in Pattern Recognition, Vol. 3687 of Lecture Notes in Computer Science, S.Singh, M.Singh, C.Apte, and P.Perner (Springer-Verlag, 2005), pp. 492-501.

Vanrell, M.

R. Benavente, M. Vanrell, and R. Baldrich, “A data set for fuzzy colour naming,” Color Res. Appl. 31, 48-56 (2006).
[CrossRef]

R. Benavente, M. Vanrell, and R. Baldrich, “Estimation of fuzzy sets for computational colour categorization,” Color Res. Appl. 29, 342-353 (2004).
[CrossRef]

M. Vanrell, R. Baldrich, A. Salvatella, R. Benavente, and F. Tous, “Induction operators for a computational colour texture representation,” Comput. Vis. Image Underst. 94, 92-114 (2004).
[CrossRef]

R. Benavente, F. Tous, R. Baldrich, and M. Vanrell, “Statistical modelling of a colour naming space,” in Proceedings of the 1st European Conference on Colour in Graphics, Imaging, and Vision (CGIV'2002) (Society for Imaging Science and Technology, IS&T, 2002), pp. 406-411.

R. Benavente and M. Vanrell, “Fuzzy colour naming based on sigmoid membership functions,” in Proceedings of the 2nd European Conference on Colour in Graphics, Imaging, and Vision (CGIV'2004) (Society for Imaging Science and Technology, IS&T, 2004), pp. 135-139.

X. Otazu and M. Vanrell, “Building perceived colour images,” in Proceedings of the 2nd European Conference on Colour in Graphics, Imaging, and Vision (CGIV'2004) (Society for Imaging Science and Technology, IS&T, 2004), pp. 140-145.

Wang, Z.

Z. Wang, M. Luo, B. Kang, H. Choh, and C. Kim, “An algorithm for categorising colours into universal colour names,” in Proceedings of the 3rd European Conference on Colour in Graphics, Imaging, and Vision (Society for Imaging Science and Technology, IS&T, 2006), pp. 426-430.

Whitfield, T.

J. Sturges and T. Whitfield, “Salient features of Munsell color space as a function of monolexemic naming and response latencies,” Vision Res. 37, 307-313 (1997).
[CrossRef] [PubMed]

J. Sturges and T. Whitfield, “Locating basic colours in the Munsell space,” Color Res. Appl. 20, 364-376 (1995).
[CrossRef]

Wright, M.

J. Lagarias, J. Reeds, M. Wright, and P. Wright, “Convergence properties of the Nelder-Mead simplex method in low dimensions,” SIAM J. Optim. 9, 112-147 (1998).
[CrossRef]

Wright, P.

J. Lagarias, J. Reeds, M. Wright, and P. Wright, “Convergence properties of the Nelder-Mead simplex method in low dimensions,” SIAM J. Optim. 9, 112-147 (1998).
[CrossRef]

Yuan, B.

G. Klir and B. Yuan, Fuzzy Sets and Fuzzy Logic: Theory and Applications (Prentice Hall, 1995).

References

K. Gegenfurtner and D. Kiper, "Color vision," Annu. Rev. Neurosci. 26, 181-206 (2003).

H. Lin, M. Luo, L. MacDonald, and A. Tarrant, "A cross-cultural colour-naming study. Part III--A colour-naming model," Color Res. Appl. 26, 270-277 (2001).

J. Sturges and T. Whitfield, "Locating basic colours in the Munsell space," Color Res. Appl. 20, 364-376 (1995).

R. Benavente, M. Vanrell, and R. Baldrich, "A data set for fuzzy colour naming," Color Res. Appl. 31, 48-56 (2006).

R. Benavente, M. Vanrell, and R. Baldrich, "Estimation of fuzzy sets for computational colour categorization," Color Res. Appl. 29, 342-353 (2004).

M. Vanrell, R. Baldrich, A. Salvatella, R. Benavente, and F. Tous, "Induction operators for a computational colour texture representation," Comput. Vis. Image Underst. 94, 92-114 (2004).

R. MacLaury, "From brightness to hue: an explanatory model of color-category evolution," Curr. Anthropol. 33, 137-186 (1992).

A. Mojsilovic, "A computational model for color naming and describing color composition of images," IEEE Trans. Image Process. 14, 690-699 (2005).

G. Finlayson, S. Hordley, and P. Hubel, "Color by correlation: a simple, unifying framework for color constancy," IEEE Trans. Pattern Anal. Mach. Intell. 23, 1209-1221 (2001).

P. KaewTrakulPong and R. Bowden, "A real time adaptive visual surveillance system for tracking low-resolution colour targets in dynamically changing scenes," Image Vis. Comput. 21, 913-929 (2003).

A. Mojsilovic, J. Gomes, and B. Rogowitz, "Semantic-friendly indexing and querying of images based on the extraction of the objective semantic cues," Int. J. Comput. Vis. 56, 79-107 (2004).

G. Finlayson, S. Hordley, and I. Tastl, "Gamut constrained illuminant estimation," Int. J. Comput. Vis. 67, 93-109 (2006).

D. Forsyth, "A novel algorithm for color constancy," Int. J. Comput. Vis. 5, 5-36 (1990).

P. Kay and C. McDaniel, "The linguistic significance of the meanings of basic color terms," Language 54, 610-646 (1978).

M. Seaborn, L. Hepplewhite, and J. Stonham, "Fuzzy colour category map for the measurement of colour similarity and dissimilarity," Pattern Recogn. 38, 165-177 (2005).

G. Gagaudakis and P. Rosin, "Incorporating shape into histograms for CBIR," Pattern Recogn. 35, 81-91 (2002).

E. van den Broek, T. Schouten, and P. Kisters, "Modeling human color categorization," Pattern Recogn. Lett. 29, 1136-1144 (2008).

N. Moroney, "Unconstrained Web-based color naming experiment," Proc. SPIE 5008, 36-46 (2003).

J. Lagarias, J. Reeds, M. Wright, and P. Wright, "Convergence properties of the Nelder-Mead simplex method in low dimensions," SIAM J. Optim. 9, 112-147 (1998).

R. Boynton and C. Olson, "Salience of chromatic basic color terms confirmed by three measures," Vision Res. 30, 1311-1317 (1990).

J. Sturges and T. Whitfield, "Salient features of Munsell color space as a function of monolexemic naming and response latencies," Vision Res. 37, 307-313 (1997).

S. Guest and D. V. Laar, "The structure of colour naming space," Vision Res. 40, 723-734 (2000).

B. Berlin and P. Kay, Basic Color Terms: Their Universality and Evolution (University of California Press, 1969).

S. Tominaga, "A color-naming method for computer color vision," in Proceedings of IEEE International Conference on Cybernetics and Society (IEEE, 1985), pp. 573-577.

Z. Wang, M. Luo, B. Kang, H. Choh, and C. Kim, "An algorithm for categorising colours into universal colour names," in Proceedings of the 3rd European Conference on Colour in Graphics, Imaging, and Vision (Society for Imaging Science and Technology, 2006), pp. 426-430.

J. Lammens, "A computational model of color perception and color naming," Ph.D. thesis (State University of New York, 1994).

E. van den Broek, E. van Rikxoort, and T. Schouten, "Human-centered object-based image retrieval," in Advances in Pattern Recognition, Vol. 3687 of Lecture Notes in Computer Science, S. Singh, M. Singh, C. Apte, and P. Perner, eds. (Springer-Verlag, 2005), pp. 492-501.

G. Klir and B. Yuan, Fuzzy Sets and Fuzzy Logic: Theory and Applications (Prentice Hall, 1995).

D. Alexander, "Statistical modelling of colour data and model selection for region tracking," Ph.D. thesis (Department of Computer Science, University College London, 1997).

X. Otazu and M. Vanrell, "Building perceived colour images," in Proceedings of the 2nd European Conference on Colour in Graphics, Imaging, and Vision (CGIV'2004) (Society for Imaging Science and Technology, 2004), pp. 140-145.

R. Boynton, "Insights gained from naming the OSA colors," in Color Categories in Thought and Language, C. L. Hardin and L. Maffi, eds. (Cambridge U. Press, 1997), pp. 135-150.

R. Benavente, F. Tous, R. Baldrich, and M. Vanrell, "Statistical modelling of a colour naming space," in Proceedings of the 1st European Conference on Colour in Graphics, Imaging, and Vision (CGIV'2002) (Society for Imaging Science and Technology, 2002), pp. 406-411.

R. Benavente and M. Vanrell, "Fuzzy colour naming based on sigmoid membership functions," in Proceedings of the 2nd European Conference on Colour in Graphics, Imaging, and Vision (CGIV'2004) (Society for Imaging Science and Technology, 2004), pp. 135-139.

W. Graustein, Homogeneous Cartesian Coordinates. Linear Dependence of Points and Lines (Macmillan, 1930), Chap. 3, pp. 29-49.

Munsell Color Company, Munsell Book of Color--Matte Finish Collection (Munsell Color Company, 1976).

Spectral Database, University of Joensuu Color Group, http://spectral.joensuu.fi/ (last accessed May 27, 2008).



Figures (12)

Fig. 1. Scheme of the model. The color space is divided into N_L levels along the lightness axis.

Fig. 2. Desirable properties of the membership function for chromatic categories, illustrated on the blue category.

Fig. 3. (a) Sigmoid function in one dimension; the value of β determines the slope of the function. (b) Sigmoid function in two dimensions; the vector u_i determines the axis along which the function is oriented.

Fig. 4. Two-dimensional sigmoid functions. (a) S_1, sigmoid function oriented in the x-axis direction. (b) S_2, sigmoid function oriented in the y-axis direction. (c) DS, the product of two differently oriented sigmoid functions, generates a plateau with some of the properties needed for the membership function.

Fig. 5. Elliptic-sigmoid function ES(p, t, θ_ES). (a) ES for β_e < 0 and (b) ES for β_e > 0.

Fig. 6. TSE function.

Fig. 7. TSE function fitted to the chromatic categories defined on a given lightness level. In this case, only six categories have memberships different from zero.

Fig. 8. Sigmoid functions are used to differentiate among the three achromatic categories.

Fig. 9. Histogram of the learning-set samples used to determine the values that define the lightness levels of the model.

Fig. 10. Membership maps for the six chromaticity planes of the model.

Fig. 11. Comparison between our model's Munsell categorization and Berlin and Kay's boundaries. Samples named differently by our model are marked with a cross.

Fig. 12. Consensus areas and foci from Sturges and Whitfield's experiment superimposed on our model's categorization of the Munsell array.

Tables (2)

Table 1. Parameters of the Triple-Sigmoid with Elliptical Center Model

Table 2. Comparison of Different Munsell Categorizations to the Results from the Color-Naming Experiments of Berlin and Kay [2] and Sturges and Whitfield [14]

Equations (29)


\[ \sum_{k=1}^{n} \mu_{C_k}(s) = 1 \quad \text{with} \quad \mu_{C_k}(s) \in [0,1],\; k = 1, \ldots, n. \]
\[ CD(s) = \left[ \mu_{C_1}(s), \ldots, \mu_{C_n}(s) \right], \]
\[ N(s) = t_{k_{\max}}, \qquad k_{\max} = \arg\max_{k=1,\ldots,n} \left\{ \mu_{C_k}(s) \right\}, \]
\[ C_k \in \{\text{Red}, \text{Orange}, \text{Brown}, \text{Yellow}, \text{Green}, \text{Blue}, \text{Purple}, \text{Pink}, \text{Black}, \text{Gray}, \text{White}\}. \]
\[ S_1(x, \beta) = \frac{1}{1 + \exp(-\beta x)}, \]
\[ S(\mathbf{p}, \beta) = \frac{1}{1 + \exp(-\beta\, \mathbf{u}_i \mathbf{p})}, \quad i = 1, 2, \]
\[ S_i(\mathbf{p}, \mathbf{t}, \alpha, \beta) = \frac{1}{1 + \exp(-\beta\, \mathbf{u}_i R_\alpha T_{\mathbf{t}}\, \mathbf{p})}, \quad i = 1, 2, \]
\[ T_{\mathbf{t}} = \begin{pmatrix} 1 & 0 & -t_x \\ 0 & 1 & -t_y \\ 0 & 0 & 1 \end{pmatrix}, \qquad R_\alpha = \begin{pmatrix} \cos\alpha & \sin\alpha & 0 \\ -\sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{pmatrix}. \]
\[ DS(\mathbf{p}, \mathbf{t}, \theta_{DS}) = S_1(\mathbf{p}, \mathbf{t}, \alpha_y, \beta_y)\, S_2(\mathbf{p}, \mathbf{t}, \alpha_x, \beta_x), \]
\[ ES(\mathbf{p}, \mathbf{t}, \theta_{ES}) = \frac{1}{1 + \exp\left\{ \beta_e \left[ \left( \dfrac{\mathbf{u}_1 R_\phi T_{\mathbf{t}} \mathbf{p}}{e_x} \right)^2 + \left( \dfrac{\mathbf{u}_2 R_\phi T_{\mathbf{t}} \mathbf{p}}{e_y} \right)^2 - 1 \right] \right\}}, \]
\[ TSE(\mathbf{p}, \theta) = DS(\mathbf{p}, \mathbf{t}, \theta_{DS})\, ES(\mathbf{p}, \mathbf{t}, \theta_{ES}), \]
\[ \mu_{C_k}(s) = \begin{cases} \mu_{C_k}^1 = TSE(c_1, c_2, \theta_{C_k}^1) & \text{if } I \le I_1 \\ \mu_{C_k}^2 = TSE(c_1, c_2, \theta_{C_k}^2) & \text{if } I_1 < I \le I_2 \\ \quad \vdots & \\ \mu_{C_k}^{N_L} = TSE(c_1, c_2, \theta_{C_k}^{N_L}) & \text{if } I_{N_L - 1} < I \end{cases} \]
\[ \mu_{A^i}(c_1, c_2) = 1 - \sum_{k=1}^{n_c} \mu_{C_k}^i(c_1, c_2), \]
\[ \mu_{A_{\mathrm{Black}}}(I, \theta_{\mathrm{Black}}) = \frac{1}{1 + \exp[\beta_b (I - t_b)]}, \]
\[ \mu_{A_{\mathrm{Gray}}}(I, \theta_{\mathrm{Gray}}) = \frac{1}{1 + \exp[-\beta_b (I - t_b)]} - \frac{1}{1 + \exp[-\beta_w (I - t_w)]}, \]
\[ \mu_{A_{\mathrm{White}}}(I, \theta_{\mathrm{White}}) = \frac{1}{1 + \exp[-\beta_w (I - t_w)]}, \]
\[ \mu_{C_k}(s, \theta_{C_k}) = \mu_{A^i}(c_1, c_2)\, \mu_{A_{C_k}}(I, \theta_{C_k}), \]
\[ 9 \le k \le 11, \qquad I_i < I \le I_{i+1}, \]
\[ \sum_{k=1}^{11} \mu_{C_k}^i(s) = 1, \quad i = 1, \ldots, N_L, \]
\[ D = \left\{ s_i, m_1^i, \ldots, m_{11}^i \right\}, \quad i = 1, \ldots, n_s, \]
\[ \hat{\theta}^j = \arg\min_{\theta^j} \frac{1}{n_{cp}} \sum_{i=1}^{n_{cp}} \sum_{k=1}^{n_c} \left( \mu_{C_k}^j(s_i, \theta_{C_k}^j) - m_k^i \right)^2, \quad j = 1, \ldots, N_L, \]
\[ \sum_{k=1}^{11} \mu_{C_k}^j(s, \theta_{C_k}^j) = 1, \quad \forall s = (I, c_1, c_2) \mid I_{j-1} < I \le I_j, \]
\[ \theta_{ES}^{C_p j} = \theta_{ES}^{C_q j}, \qquad \mathbf{t}^{C_p j} = \mathbf{t}^{C_q j}, \quad \forall p, q \in \{1, \ldots, n_c\}, \]
\[ \beta_y^{C_p} = \beta_x^{C_q}, \qquad \alpha_y^{C_p} = \alpha_x^{C_q} - \frac{\pi}{2}, \]
\[ \left( \hat{\mathbf{t}}^j, \hat{\theta}_{ES}^j \right) = \arg\min_{\mathbf{t}^j, \theta_{ES}^j} \frac{1}{n_{cp}} \sum_{i=1}^{n_{cp}} \left( ES(s_i, \mathbf{t}^j, \theta_{ES}^j) - \sum_{k=9}^{11} m_k^i \right)^2, \]
\[ \left( \hat{\theta}_{DS}^{C_p j}, \hat{\theta}_{DS}^{C_q j} \right) = \arg\min_{\theta_{DS}^{C_p j}, \theta_{DS}^{C_q j}} \sum_{i=1}^{n_{cp}} \left( \left( \mu_{C_p}^j(s_i, \theta_{C_p}^j) - m_p^i \right)^2 + \left( \mu_{C_q}^j(s_i, \theta_{C_q}^j) - m_q^i \right)^2 \right), \]
\[ \hat{\theta}_A = \arg\min_{\theta_A} \sum_{i=1}^{n_s} \sum_{k=9}^{11} \left( \mu_{C_k}(s_i, \theta_{C_k}) - m_k^i \right)^2, \]
\[ \mathrm{MAE}_{\mathrm{fit}} = \frac{1}{11\, n_s} \sum_{i=1}^{n_s} \sum_{k=1}^{11} \left| m_k^i - \mu_{C_k}(s_i) \right|, \]
\[ \mathrm{MAE}_{\mathrm{unitsum}} = \frac{1}{n_p} \sum_{i=1}^{n_p} \left| 1 - \sum_{k=1}^{11} \mu_{C_k}(p_i) \right|. \]
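As a concrete illustration of the membership functions listed above, the following is a minimal NumPy sketch of the oriented sigmoid S_i, the double sigmoid DS, the elliptic sigmoid ES, their product TSE, and the achromatic lightness memberships. All function names and parameter values are illustrative, and the homogeneous-coordinate sign conventions are an assumption; this is not the authors' implementation.

```python
import numpy as np

def sigmoid(x, beta):
    # One-dimensional sigmoid; beta controls the slope at the origin.
    return 1.0 / (1.0 + np.exp(-beta * x))

def T(t):
    # Homogeneous translation that moves the point t to the origin
    # (sign convention assumed).
    return np.array([[1.0, 0.0, -t[0]],
                     [0.0, 1.0, -t[1]],
                     [0.0, 0.0, 1.0]])

def R(alpha):
    # Homogeneous rotation by the angle alpha (sign convention assumed).
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c, s, 0.0],
                     [-s, c, 0.0],
                     [0.0, 0.0, 1.0]])

def oriented_sigmoid(p, t, alpha, beta, axis):
    # Sigmoid oriented along coordinate `axis` after translating the
    # chromaticity point p to t and rotating by alpha (the S_i functions).
    q = R(alpha) @ T(t) @ np.array([p[0], p[1], 1.0])
    return sigmoid(q[axis], beta)

def DS(p, t, alpha_y, beta_y, alpha_x, beta_x):
    # Double sigmoid: product of two differently oriented sigmoids,
    # which yields a quadrant-shaped plateau.
    return (oriented_sigmoid(p, t, alpha_y, beta_y, 0) *
            oriented_sigmoid(p, t, alpha_x, beta_x, 1))

def ES(p, t, phi, ex, ey, beta_e):
    # Elliptic sigmoid: a sigmoid over the signed "inside/outside" value
    # of an ellipse with semiaxes ex, ey rotated by phi and centered at t.
    q = R(phi) @ T(t) @ np.array([p[0], p[1], 1.0])
    r = (q[0] / ex) ** 2 + (q[1] / ey) ** 2 - 1.0
    return 1.0 / (1.0 + np.exp(beta_e * r))

def TSE(p, t, ds_params, es_params):
    # Triple sigmoid with elliptic center: DS carves out the angular
    # sector and ES (here with beta_e < 0) suppresses the central
    # achromatic region.
    return DS(p, t, *ds_params) * ES(p, t, *es_params)

def achromatic_memberships(I, beta_b, t_b, beta_w, t_w):
    # Black/gray/white memberships along the lightness axis I; with the
    # assumed signs the three values sum to one by construction.
    black = 1.0 / (1.0 + np.exp(beta_b * (I - t_b)))
    white = 1.0 / (1.0 + np.exp(-beta_w * (I - t_w)))
    gray = 1.0 - black - white
    return black, gray, white

# A chromaticity far from the gray center gets a high TSE value, while a
# point at the center is suppressed by the elliptic sigmoid.
far = TSE((5.0, 5.0), (0.0, 0.0), (0.0, 10.0, 0.0, 10.0), (0.0, 1.0, 1.0, -10.0))
center = TSE((0.0, 0.0), (0.0, 0.0), (0.0, 10.0, 0.0, 10.0), (0.0, 1.0, 1.0, -10.0))
```

Fitting the parameters of these functions to psychophysical naming data, as the paper does with a Nelder-Mead search, is a standard least-squares problem over the TSE parameter vector.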
