Abstract

We present an optical implementation of the Hamming net that can be used as an optimum image classifier or an associative memory. We introduce a modified Hamming net, in which the dynamic range requirement of the spatial light modulator can be relaxed and the number of iteration cycles in the second layer (or maxnet) can be reduced. Experimental demonstrations of the optical Hamming net are also given.

© 1992 Optical Society of America






Figures (7)

Fig. 1. Hamming net.

Fig. 2. Hybrid optical Hamming net. The output signal is fed back for a multilayer operation.

Fig. 3. Circular lenslet array.

Fig. 4. Area-modulation-encoded IWM in the Hamming layer: (a) transparent and opaque encoding for +1 and −1 in the positive IWM, (b) encoded pixel in the negative IWM.

Fig. 5. Exemplar set and encoded IWM’s in the Hamming layer: (a) 12 exemplars, (b) encoded positive IWM, and (c) encoded negative IWM.

Fig. 6. Encoded IWM’s in the maxnet: (a) pixel encoding for +1 and −1/16, (b) encoded positive IWM, and (c) encoded negative IWM.

Fig. 7. Experimental demonstrations: (a), (e) input patterns embedded in 20% random noise; (b), (f) encoded input patterns; (c), (g) outputs from the first layer; (d), (h) output results obtained from the maxnet after two iterations for A and three iterations for H, respectively.
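
The captions of Figs. 4–6 suggest that each bipolar interconnection weight matrix (IWM) is encoded as a pair of non-negative masks, a positive IWM and a negative IWM, since an incoherent intensity channel cannot represent negative transmittance directly. Below is a minimal sketch of that dual-rail reading, assuming the split $W = W^{+} - W^{-}$ with the bipolar result recovered by subtracting the two channels; the small weight matrix and input pattern are hypothetical, and the subtraction is done numerically rather than optically.

```python
# Dual-rail sketch: write a bipolar IWM as the difference of two
# non-negative masks, each displayable on an intensity-only device.
# The 4x4 matrix and the input below are hypothetical examples.
import numpy as np

W = np.array([[ 1, -1,  1, -1],
              [-1,  1,  1,  1],
              [ 1,  1, -1, -1],
              [-1, -1, -1,  1]], dtype=float)

W_plus = np.where(W > 0, W, 0.0)    # keep +1 entries (transparent pixels in the positive IWM)
W_minus = np.where(W < 0, -W, 0.0)  # keep magnitudes of -1 entries (negative IWM)

x = np.array([1, 0, 1, 1], dtype=float)   # non-negative (intensity) input pattern

# Each channel is a non-negative matrix-vector product; subtracting the
# two channel outputs recovers the bipolar product W @ x.
u = W_plus @ x - W_minus @ x
assert np.allclose(u, W @ x)
print(u)
```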

Equations (13)


$$W_{ij} = \frac{X_i^{(j)}}{2} \qquad (1 \le i \le N,\; 1 \le j \le M),$$

$$\theta_j = \frac{N}{2} \qquad (1 \le j \le M).$$

$$U_j(0) = \sum_{i=1}^{N} W_{ij} X_i + \theta_j \qquad (1 \le j \le M).$$

$$U_j(0) = N - \mathrm{HD} \qquad (1 \le j \le M),$$

$$W_{ij} = \frac{X_i^{(j)}}{2\alpha} \qquad (1 \le i \le N,\; 1 \le j \le M),$$

$$\theta_j = N\left(1 - \frac{1}{2\alpha}\right) \qquad (1 \le j \le M).$$

$$U_j(0) = f\!\left(\sum_{i=1}^{N} W_{ij} X_i + \theta_j\right) = \begin{cases} N - \dfrac{\mathrm{HD}}{\alpha}, & \mathrm{HD} < \alpha N \\ 0, & \text{otherwise} \end{cases} \qquad (1 \le j \le M),$$

$$f(x) = \begin{cases} x, & x > 0 \\ 0, & x \le 0. \end{cases}$$

$$W_{ij} = X_i^{(j)},$$

$$\theta_j = 0,$$

$$U_j(0) = f\!\left(\sum_{i=1}^{N} W_{ij} X_i\right) = \begin{cases} N - 2\,\mathrm{HD}, & \mathrm{HD} < \dfrac{N}{2} \\ 0, & \text{otherwise}. \end{cases}$$

$$t_{jk} = \begin{cases} 1, & j = k \\ -\epsilon, & j \ne k \end{cases} \qquad \left(\epsilon < \frac{1}{M},\; 1 \le j,\, k \le M\right),$$

$$U_k(n+1) = g\!\left[\sum_{j=1}^{M} t_{jk} U_j(n)\right] = g\!\left[U_k(n) - \epsilon \sum_{j \ne k} U_j(n)\right] \qquad (1 \le j,\, k \le M),$$
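
To make these relations concrete, here is a minimal numerical sketch (a simulation, not the optical system) of the modified Hamming layer followed by the maxnet iteration. The pattern length N, number of exemplars M, relaxation factor alpha, inhibition epsilon, the random exemplars, and the use of the same ramp nonlinearity for g are all illustrative assumptions rather than the values used in the experiment.

```python
# Numerical sketch of the modified Hamming net described by the equations
# above. All parameter choices here are illustrative, not experimental.
import numpy as np

rng = np.random.default_rng(0)

N, M = 64, 4                                  # pattern length, number of exemplars
exemplars = rng.choice([-1, 1], size=(M, N))  # bipolar exemplars X^(j) (random stand-ins)
alpha = 4                                     # dynamic-range relaxation factor
eps = 1.0 / (M + 1)                           # maxnet inhibition, must satisfy eps < 1/M

# Modified first (Hamming) layer:
#   W_ij = X_i^(j) / (2*alpha),  theta_j = N * (1 - 1/(2*alpha))
W = exemplars.T / (2 * alpha)                 # shape (N, M)
theta = N * (1 - 1 / (2 * alpha))

def ramp(x):
    """f(x) = x for x > 0, 0 otherwise."""
    return np.maximum(x, 0.0)

def hamming_layer(x):
    """U_j(0) = f(sum_i W_ij x_i + theta_j); equals N - HD/alpha when HD < alpha*N."""
    return ramp(x @ W + theta)

def maxnet(u, max_iter=50):
    """Iterate U_k(n+1) = g[U_k(n) - eps * sum_{j != k} U_j(n)] until one node survives.
    g is not defined in the extracted equations; the same ramp is assumed here."""
    for n in range(max_iter):
        u = ramp(u - eps * (u.sum() - u))
        if np.count_nonzero(u) <= 1:
            break
    return u, n + 1

# Classify a noisy version of exemplar 2: flip roughly 20% of its pixels,
# mirroring the 20% random noise mentioned in the Fig. 7 caption.
x = exemplars[2].copy()
x[rng.random(N) < 0.2] *= -1

u0 = hamming_layer(x)
u_final, iters = maxnet(u0)
print("matching scores:", u0)
print("winner:", int(np.argmax(u_final)), "after", iters, "maxnet iterations")
```

Running the script prints the first-layer matching scores and the index of the surviving maxnet node, which should correspond to the least-noisy exemplar match.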
