Abstract

A Hopfield model with multistate neurons is described. The model can solve multivalued problems, such as restoring a degraded gray-level image, with quality equivalent to that of the two-state model but with far fewer neurons and interconnections. Its performance is compared with that of the linear model, and it is concluded that the multistate neuron model converges more quickly. Finally, a hybrid system for implementing this model is discussed.

© 1991 Optical Society of America

References

  1. J. J. Hopfield, D. W. Tank, “Neural Computation of Decisions in Optimization Problems,” Biol. Cybern. 52, 141–152 (1985).
    [PubMed]
  2. M. Takeda, J. W. Goodman, “Neural Networks For Computation: Number Representations and Programming Complexity,” Appl. Opt. 25, 3033–3046 (1986).
    [CrossRef] [PubMed]
  3. Y.-T. Zhou, R. Chellappa, A. Vaid, K. Jenkins, “Image Restoration Using a Neural Network,” IEEE Trans. Acoust. Speech Signal Process. ASSP-36, 1141–1151 (1988).
    [CrossRef]
  4. G. Y. Sirat, A. D. Maruani, R. C. Chevallier, “Grey Level Neural Networks,” Appl. Opt. 28, 414–415 (1989).
    [CrossRef] [PubMed]
  5. D. W. Tank, J. J. Hopfield, “Simple Neural Optimization Networks: an A/D Converter, Signal Decision Circuit, and a Linear Programming Circuit,” IEEE Trans. Circuits Syst. CAS-33, 533–541 (1986).
    [CrossRef]
  6. E. Barnard, D. Casasent, “New Optical Neural System Architectures and Applications,” Proc. Soc. Photo-Opt. Instrum. Eng. 963, 537–544 (1988).
  7. E. Aarts, J. Korst, Simulated Annealing and Boltzmann Machines (Wiley, New York, 1989).
  8. D. E. Rumelhart et al., Parallel Distributed Processing (MIT Press, Cambridge, 1986).
  9. N. H. Farhat, D. Psaltis, A. Prata, E. Paek, “Optical Implementation of the Hopfield Model,” Appl. Opt. 24, 1469–1475 (1985).
    [CrossRef] [PubMed]
  10. J. Ohta, S. Tai, M. Oita, K. Koroda, K. Kyuma, K. Hamaraka, “Optical Implementation of an Associative Neural Network Model With a Stochastic Process,” Appl. Opt. 28, 2426–2428 (1989).
    [CrossRef] [PubMed]
  11. E. Barnard, D. Casasent, “Optical Neural Net for Matrix Inversion,” Appl. Opt. 28, 2499–2504 (1989).
    [CrossRef] [PubMed]


Figures (8)

Fig. 1

Network of the simple-sum scheme.

Fig. 2

Neural network model with multistate neurons: (a) the network and (b) structure of one unit.

Fig. 3

Examples of the nonlinear decision function.

Fig. 4

Energy decrease in the Hitchcock problem using different scales (curve 1, left; curve 2, right).

Fig. 5

Example of the image restoration.

Fig. 6

Energy decrease in the image restoration (curve 1 on the left scale; curve 2 on the right scale).

Fig. 7

Performance comparison of the linear model and the multistate model: type 1, the linear model; type 2, the multistate model with one neuron in each unit; type 3, the multistate model with ten neurons in one unit.

Fig. 8

Hybrid system for implementation of the multistate model.

Tables (2)

Table II. Flow matrix $f_{ij}$

Equations (25)

Equations on this page are rendered with MathJax.

$$u_i = \sum_{j=1}^{N} t_{ij} v_j + b_i, \qquad v_i = \begin{cases} 0 & \text{if } u_i \le 0, \\ 1 & \text{otherwise,} \end{cases}$$
$$E = -\frac{1}{2} \sum_i \sum_j t_{ij} v_j v_i - \sum_i b_i v_i$$
$$E = -\frac{1}{2} \sum_{ik} \sum_{jl} t_{ik;jl} v_{ik} v_{jl} - \sum_{ik} b_{ik} v_{ik}.$$
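As an illustrative sketch only (not code from the paper; the tiny network and all names are assumptions), the two-state update rule and energy above can be written as:

```python
import numpy as np

def energy(t, b, v):
    """Hopfield energy E = -1/2 * v^t t v - b^t v."""
    return -0.5 * v @ t @ v - b @ v

def update(t, b, v, i):
    """Asynchronous two-state update of neuron i: v_i = 1 if u_i > 0, else 0."""
    u = t[i] @ v + b[i]
    v[i] = 1.0 if u > 0 else 0.0
    return v

# Tiny example: two mutually excitatory neurons with positive bias
t = np.array([[0.0, 1.0], [1.0, 0.0]])
b = np.array([0.5, 0.5])
v = np.array([0.0, 0.0])
for i in [0, 1, 0, 1]:          # asynchronous sweep
    v = update(t, b, v, i)       # settles at v = [1, 1]
```

With a symmetric t and zero diagonal, each asynchronous update can only lower (or keep) E, which is why the network settles into a stable state.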
$$x_i = \sum_{k=1}^{M} v_{ik}, \qquad i = 1, 2, \ldots, N.$$
$$E = \sum_i \sum_j c_{ij} x_i x_j + \sum_i d_i x_i + f,$$
$$E = \sum_i \sum_j c_{ij} \Bigl(\sum_k v_{ik}\Bigr) \Bigl(\sum_l v_{jl}\Bigr) + \sum_i d_i \sum_k v_{ik} + f = \sum_{ij} \sum_{kl} c_{ij} v_{ik} v_{jl} + \sum_{ik} d_i v_{ik} + f.$$
$$t_{ik;jl} = -c_{ij}, \qquad b_{ik} = -d_i.$$
$$v_{ik}(t + 1 + \sigma_{ik}) = \begin{cases} v_{ik}(t) + \varepsilon\, s\!\left( \sum_{j=1}^{N} t_{ij} x_j(t + \sigma_{ik}) + b_i \right) & \text{if } 0 \le v_{ik}(t) \le v_M, \\ v_{ik}(t) & \text{otherwise,} \end{cases}$$
$$v_M \equiv x_M / L,$$
$$x_i(t) = \sum_{k=1}^{L} v_{ik}(t), \qquad i = 1, 2, \ldots, N.$$
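A minimal sketch of the simple-sum representation above (an illustration; the gray-level range is an assumed value): a unit's output x_i is simply the sum of its L neuron outputs, so L neurons with per-neuron ceiling v_M = x_M / L span the full gray scale.

```python
import numpy as np

L = 10               # neurons per unit (assumed)

def unit_value(v_unit):
    """Gray value of one unit: x_i = sum_k v_ik."""
    return v_unit.sum()

x_M = 255.0          # maximum gray level (assumed)
v_M = x_M / L        # ceiling of each neuron in the unit
v_unit = np.full(L, v_M / 2)   # a unit with every neuron at half its ceiling
x = unit_value(v_unit)          # -> 127.5, i.e. mid-gray
```

One unit thus represents x_M + 1 effective levels with only L analog neurons, instead of one two-state neuron per level.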
$$E = -\frac{1}{2} \sum_i \sum_j t_{ij} x_j x_i - \sum_i b_i x_i,$$
$$\Delta E = -\Bigl( \sum_j t_{ij} x_j + b_i \Bigr) \Delta v_{ik} - \frac{1}{2} t_{ii}\, \Delta v_{ik}^2,$$
$$u_i = \sum_{j=1}^{N} t_{ij} \sum_{l=1}^{L} v_{jl}, \qquad v_{il}(t+1) = v_{il}(t) + \eta \cdot \operatorname{sgn}(u_i)\, \Delta v_{il}, \qquad \Delta v_{il} \in \{0, 1\}, \qquad p(\Delta v_{il} = 1) = \frac{1}{1 + \exp\bigl( -\tfrac{1}{T} |u_i| \bigr)},$$
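The stochastic multistate update above can be sketched as follows (illustrative only; the unit size, temperature, and weights are assumptions, and the acceptance probability is the sigmoid from the last equation):

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_step(t, b, v, i, eta, T):
    """One update of unit i: each of its L neurons moves by eta*sgn(u_i)
    with probability 1 / (1 + exp(-|u_i| / T))."""
    x = v.sum(axis=1)                     # unit outputs x_j = sum_l v_jl
    u = t[i] @ x + b[i]
    p = 1.0 / (1.0 + np.exp(-abs(u) / T))
    dv = (rng.random(v.shape[1]) < p).astype(float)   # Bernoulli(p) per neuron
    v[i] = np.clip(v[i] + eta * np.sign(u) * dv, 0.0, 1.0)
    return v

# Toy case: 2 units of 3 neurons each, all starting at zero
t = np.array([[0.0, 1.0], [1.0, 0.0]])
b = np.array([1.0, 1.0])
v = np.zeros((2, 3))
v = stochastic_step(t, b, v, 0, eta=0.5, T=1.0)
```

Because every neuron of a unit fires in the same direction sgn(u_i) but only with probability p, larger |u_i| drives larger expected steps, which is the annealing-like behavior the model exploits.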
$$\sum_{i=1}^{m} \sum_{j=1}^{n} c_{ij} f_{ij} \rightarrow \min,$$
$$\sum_{j=1}^{n} f_{ij} = s_i; \qquad \sum_{i=1}^{m} f_{ij} = d_j.$$
$$E = \Bigl( \sum_i \sum_j c_{ij} f_{ij} \Bigr)^2 + \alpha \sum_i \Bigl( s_i - \sum_j f_{ij} \Bigr)^2 + \beta \sum_j \Bigl( d_j - \sum_i f_{ij} \Bigr)^2,$$
$$t_{ij;pq} = -\bigl\{ c_{ij} c_{pq} + \alpha\, \delta(i-p) + \beta\, \delta(j-q) \bigr\}, \qquad b_{ij} = \alpha s_i + \beta d_j,$$
$$\delta(i-j) = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{otherwise.} \end{cases}$$
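The mapping from the Hitchcock (transportation) energy to connection weights and biases can be sketched as follows (an illustration of the equations above; the cost, supply, and demand numbers are made up):

```python
import numpy as np

def hitchcock_weights(c, s, d, alpha, beta):
    """Build t_{ij;pq} = -(c_ij*c_pq + alpha*delta(i-p) + beta*delta(j-q))
    and b_ij = alpha*s_i + beta*d_j, flattening (i, j) into one index."""
    m, n = c.shape
    cf = c.reshape(-1)                       # flatten (i, j) -> single index
    t = -np.outer(cf, cf).astype(float)      # -c_ij * c_pq term
    idx_i = np.repeat(np.arange(m), n)       # source index of each flat unit
    idx_j = np.tile(np.arange(n), m)         # sink index of each flat unit
    t -= alpha * (idx_i[:, None] == idx_i[None, :])  # -alpha * delta(i-p)
    t -= beta * (idx_j[:, None] == idx_j[None, :])   # -beta * delta(j-q)
    b = (alpha * s[:, None] + beta * d[None, :]).reshape(-1)
    return t, b

# Toy 2x2 problem (assumed data)
c = np.array([[1.0, 2.0], [3.0, 4.0]])   # unit shipping costs
s = np.array([5.0, 5.0])                 # supplies
d = np.array([4.0, 6.0])                 # demands
t, b = hitchcock_weights(c, s, d, alpha=1.0, beta=1.0)
```

The resulting t is symmetric, as the Hopfield energy form requires, and each flow variable f_ij becomes one unit of the network.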
$$Y = AX + N,$$
$$\begin{cases} E = \dfrac{1}{2} \| Y - A\hat{X} \|^2 + \dfrac{1}{2} \lambda \| L\hat{X} \|^2, \\ 0 \le \hat{x}_i \le x_{\max}. \end{cases}$$
$$T = -(A^t A + \lambda L^t L), \qquad b = A^t Y.$$
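A sketch of the restoration-network construction and a clipped gradient descent on its energy (illustrative only; the identity "blur" is an assumed toy case, not the paper's kernel):

```python
import numpy as np

def restoration_net(A, Lreg, Y, lam):
    """T = -(A^t A + lam * L^t L), b = A^t Y, per the equations above."""
    T = -(A.T @ A + lam * Lreg.T @ Lreg)
    b = A.T @ Y
    return T, b

def descend(T, b, x0, eta, steps, x_max):
    """Gradient descent on E = -1/2 x^t T x - b^t x, clipped to [0, x_max]."""
    x = x0.copy()
    for _ in range(steps):
        grad = -(T @ x + b)                  # dE/dx
        x = np.clip(x - eta * grad, 0.0, x_max)
    return x

# Toy 1-D example (assumed): identity blur, no regularization
n = 4
A = np.eye(n)
Lreg = np.eye(n)
Y = np.array([1.0, 2.0, 3.0, 4.0])
T, b = restoration_net(A, Lreg, Y, lam=0.0)
x = descend(T, b, np.zeros(n), eta=0.5, steps=100, x_max=255.0)
# With lam = 0 and A = I, the minimizer is x = Y
```

In the multistate model, each pixel x_i would be realized as a unit of L neurons rather than a single analog variable, but the T and b construction is the same.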
$$f = \frac{1}{10} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix},$$
$$A = \frac{1}{10} \begin{bmatrix} 1\,1\,1\,1\,1\,0 & \cdots & 0 \\ 1\,1\,1\,1\,1\,1\,0 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 \cdots 0 & 1\,1\,1\,1\,1\,1\,1\,1\,1 & 0 \cdots 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 0\,0\,0\,1\,1\,1\,1\,1 \end{bmatrix}.$$
$$x_i(t+1) = x_i(t) - \eta\, \frac{\delta E}{\delta x_i},$$
$$x_i(t+1) = x_i(t) - \varepsilon\, s\!\left( \frac{\delta E}{\delta x_i} \right),$$
