Abstract

Discriminating one color from many other similar-appearing colors, even when the colored objects show substantial variation or noise, is of obvious importance. We show how to accomplish this using a technique called Margin Setting. Margin Setting not only achieves very low error rates but also offers some control over the types of errors that do occur. Robust spectral filtering prior to spatial pattern recognition allows subsequent filtering to be based on conventional coherent optical correlation, which can be done monochromatically.

© 2007 Optical Society of America


References


  1. H. J. Caulfield, "Artificial Color," Neurocomputing 51, 463-465 (2003).
  2. H. J. Caulfield and K. Heidary, "Exploring Margin Setting for good generalization in multiple class discrimination," Pattern Recognition 38, 1225-1238 (2005).
  3. H. J. Caulfield, A. Karavolos, and J. E. Ludman, "Improving optical Fourier pattern recognition by accommodating the missing information," Information Sciences 162, 35-52 (2004).
  4. C. J. Burges, A Tutorial on Support Vector Machines for Pattern Recognition (Kluwer Academic Publishers, New York, 1998).
  5. L. G. Valiant, "A theory of the learnable," Communications of the ACM 27, 1134-1142 (1984).
  6. R. E. Schapire, "The strength of weak learnability," Machine Learning 5, 197-227 (1990).
  7. http://www.vision.caltech.edu/Image_Datasets/Caltech101/Caltech101.html
  8. R. C. Gonzalez and R. E. Woods, Digital Image Processing (Prentice Hall, New York, 2002).




Figures (12)

Fig. 1. Noiseless (top) and noisy (bottom) Mondrian using the 24 color categories of Table 1.

Fig. 2. Original (top) and filtered (bottom) Mondrian using a four-round classifier with a ten-percent margin.

Fig. 3. Pixels classified as the Color-15 class (top image) and the Color-20 class (bottom image) are colored white. Margin value = 10%.

Fig. 4. Pixels classified as the Color-15 class (top image) and the Color-20 class (bottom image) are colored white. Margin value = 25%.

Fig. 5. Pixels classified as the Color-15 class (top image) and the Color-20 class (bottom image) are colored white. Margin value = 40%.

Fig. 6. Mondrian with pixels classified as Color-15 in the original image colored white in the filtered image. From top left, clockwise, the margin values are ten, twenty-five, sixty, and forty percent. Clearly, 40% is better than either 25% or 60%.

Fig. 7. The top left image is the original. The top right image was obtained after one round, the bottom left after two rounds, and the bottom right after four rounds. No improvement is evident in going from two to four rounds.

Fig. 8. Original Mondrian (top left) and filtered images using one (top right), two (middle left), four (middle right), six (bottom left), and eight (bottom right) classification rounds. Color standard deviation is 0.05 and classifier margin is 25%.

Fig. 9. Original Mondrian (top left) and filtered images using one (top right), two (bottom left), and four (bottom right) classification rounds. Color standard deviation is 0.04 and classifier margin is zero percent.

Fig. 10. Filtered images using one (top left), two (top right), three (bottom left), and four (bottom right) classification rounds. Color standard deviation is 0.04 and classifier margin is ten percent.

Fig. 11. Original Mondrian (top left) and filtered images using one (top right), two (bottom left), and four (bottom right) classification rounds. Color standard deviation is 0.035 and classifier margin is thirty percent.

Fig. 12. The top image is the original. Second from top is the spectrally processed image using a filter trained with a 20% margin; the third image is the result of processing with a filter trained with a 10% margin. The bottom image is the result of applying a median filter to the spectrally filtered (third) image.
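The median-filter cleanup described for Fig. 12 can be sketched in a few lines. This is an illustrative implementation, not the paper's code; the window size and the toy 5x5 input are assumptions for demonstration.

```python
import numpy as np

# Hypothetical sketch of the post-processing step described for Fig. 12:
# a median filter applied to the classification map removes isolated
# misclassified pixels. Window size k is illustrative.

def median_filter_binary(img, k=3):
    """Median-filter a 2-D array with a k x k window (edge-padded)."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.median(padded[r:r + k, c:c + k])
    return out

noisy = np.zeros((5, 5), dtype=float)
noisy[2, 2] = 1.0                      # a single isolated "salt" pixel
clean = median_filter_binary(noisy)
print(clean.sum())                     # 0.0 -- the lone pixel is removed
```

Because each 3x3 window around the lone pixel contains at most one nonzero value, its median is zero everywhere, which is exactly why the filter suppresses isolated classification errors.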

Tables (1)


Table 1. Twenty-four mean colors used in the images. The actual pixel colors in each patch of the image under test were drawn stochastically from a normal distribution with one of those means and a standard deviation we varied.
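The stochastic patch generation described in the Table 1 caption can be sketched as follows. The mean color, patch size, and clipping to [0, 1] here are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

# Hypothetical sketch of generating a noisy test patch: pixel colors are
# drawn from a normal distribution centered on one of the mean colors,
# with a configurable standard deviation (cf. the 0.035-0.05 values used
# in the figures). The mean color and patch size below are illustrative.

def make_noisy_patch(mean_rgb, sigma, size=(32, 32), rng=None):
    """Draw an RGB patch whose pixels are N(mean_rgb, sigma^2), clipped to [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    patch = rng.normal(loc=mean_rgb, scale=sigma, size=size + (3,))
    return np.clip(patch, 0.0, 1.0)

patch = make_noisy_patch(mean_rgb=[0.8, 0.2, 0.1], sigma=0.05,
                         rng=np.random.default_rng(0))
print(patch.shape)   # (32, 32, 3)
```

Clipping is one plausible way to keep sampled values in a valid color range; the paper does not specify how out-of-range samples are handled.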

Equations (14)


\[ N(j+1) \le N(j), \qquad 1 \le j \le M \]
\[ X(j) = \bigcup_{i=1}^{N(1)} \{ X^{(i)}(j) \}; \qquad X^{(i)}(j) = \{\, x^{(i)}_k(j) \,;\; 1 \le k \le M^{(i)}(j) \,\} \]
\[ R^{(i)}_k(j) = \min_{l \ne i} \{\, \| x^{(i)}_k(j) - x^{(l)}_m(j) \| \,;\; 1 \le m \le M^{(l)}(j) \,\}, \qquad 1 \le k \le M^{(i)}(j) \]
\[ F^{(i)}_k(j) = n\!\left( \acute{X}^{(i)}_k(j) \right), \qquad 1 \le k \le M^{(i)}(j) \]
\[ \acute{X}^{(i)}_k(j) = \{\, x^{(i)}_l(j) \;:\; \| x^{(i)}_l(j) - x^{(i)}_k(j) \| < R^{(i)}_k(j),\; 1 \le l \le M^{(i)}(j) \,\} \]
\[ \acute{X}^{(i)}_k(j) \subseteq X^{(i)}(j) \]
\[ F^{(i)}(j) = \max_k \{ F^{(i)}_k(j) \} \]
\[ X^{(i)}(j) \rightarrow \bar{\bar{X}}^{(i)}(j) \]
\[ \bar{\bar{X}}^{(i)}(j) = \bigcup_{l=1}^{n(X^{(i)}(j))} \bar{X}^{(i)}_l(j) \]
\[ \bar{X}^{(i)}_l(j) = \{\, \bar{x}^{(i)}_l(j) \;:\; \bar{x}^{(i)}_l(j) \sim \mathcal{N}\!\left( \mu^{(i)}_l(j), \Sigma^{(i)} \right) \,\} \]
\[ \mu^{(i)}_l(j) = x^{(i)}_l(j) \]
\[ \Sigma^{(i)} = \begin{bmatrix} \sigma^2_{(i)\,1,1} & \cdots & \sigma^2_{(i)\,1,N_d} \\ \vdots & \ddots & \vdots \\ \sigma^2_{(i)\,N_d,1} & \cdots & \sigma^2_{(i)\,N_d,N_d} \end{bmatrix} \]
\[ f_{\bar{x}^{(i)}_l(j)}\!\left( \bar{x}^{(i)}_{l,1}(j), \ldots, \bar{x}^{(i)}_{l,N_d}(j) \right) = \frac{1}{(2\pi)^{N_d/2} \, \lvert \Sigma^{(i)} \rvert^{1/2}} \exp\!\left( -\tfrac{1}{2} \left( \bar{x}^{(i)}_l(j) - \mu^{(i)}_l(j) \right)^{\mathsf T} \Sigma^{(i)\,-1} \left( \bar{x}^{(i)}_l(j) - \mu^{(i)}_l(j) \right) \right) \]
\[ X^{(i)}(j) \rightarrow \bar{\bar{X}}^{(i)}(j) \]
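The per-exemplar computations in the equations above can be sketched as follows, under an assumed interpretation: for each exemplar of class i, the exclusion radius R is the distance to the nearest exemplar of any other class, a margin shrinks that radius, and F counts the same-class exemplars the radius captures; the exemplar with the largest F is the best prototype. The function name, the margin-as-fractional-shrinkage convention, and the toy 2-D data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Illustrative sketch of the exclusion-radius and figure-of-merit
# computations: R_k is the nearest foreign-class distance, reduced by the
# margin; F_k counts same-class exemplars inside that radius.

def best_prototype(class_sets, i, margin=0.0):
    """Return (index, F) of the class-i exemplar capturing the most
    same-class exemplars within its margin-reduced exclusion radius."""
    own = np.asarray(class_sets[i], dtype=float)
    others = np.vstack([np.asarray(class_sets[l], dtype=float)
                        for l in range(len(class_sets)) if l != i])
    best_k, best_F = -1, -1
    for k, x in enumerate(own):
        R = np.min(np.linalg.norm(others - x, axis=1))  # nearest foreign exemplar
        R *= (1.0 - margin)                             # margin shrinks the radius
        F = int(np.sum(np.linalg.norm(own - x, axis=1) < R))  # captured exemplars
        if F > best_F:
            best_k, best_F = k, F
    return best_k, best_F

classes = [[[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]],   # class 0 exemplars
           [[1.0, 1.0], [0.9, 1.0]]]               # class 1 exemplars
k, F = best_prototype(classes, i=0, margin=0.1)
print(k, F)   # 0 3 -- the first exemplar captures all three class-0 points
```

Raising the margin shrinks every exclusion radius, which is one way to see why larger margins trade coverage for more conservative classification, as the figure captions illustrate.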
