Abstract

A system for analyzing single-layer optical thin films has been formulated with artificial neural networks. The training data sets are derived from computations with the physical model of thin films and are used to train the artificial neural network; once trained, the network returns values of the film parameters on a millisecond time scale. A fast backpropagation algorithm is employed during training, and the training results are presented.
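The physical model itself is not reproduced in this abstract. As an illustration only, the following minimal Python sketch shows how such a training set might be generated for a single non-absorbing layer at normal incidence, using the standard characteristic-matrix expression for reflectance; the wavelength grid, parameter ranges, and all names are illustrative assumptions rather than values taken from the paper.

    import numpy as np

    def single_layer_reflectance(n_film, d_film, wavelengths, n_inc=1.0, n_sub=1.52):
        # Standard characteristic-matrix result for one non-absorbing layer at
        # normal incidence; n_inc and n_sub are assumed incident-medium and
        # substrate indices.
        delta = 2.0 * np.pi * n_film * d_film / wavelengths    # phase thickness
        B = np.cos(delta) + 1j * (n_sub / n_film) * np.sin(delta)
        C = 1j * n_film * np.sin(delta) + n_sub * np.cos(delta)
        r = (n_inc * B - C) / (n_inc * B + C)                   # amplitude reflectance
        return np.abs(r) ** 2

    # Training set: inputs are sampled reflectance spectra, desired outputs are
    # the film refractive index and optical thickness that produced them.
    wavelengths = np.linspace(400e-9, 800e-9, 40)               # metres
    samples = []
    for n_film in np.linspace(1.35, 2.35, 50):
        for d_film in np.linspace(50e-9, 500e-9, 50):
            R = single_layer_reflectance(n_film, d_film, wavelengths)
            samples.append((R, n_film, n_film * d_film))        # optical thickness n*d

Each (spectrum, parameter) pair then plays the role of one training pattern for the networks described below.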

© 1996 Optical Society of America


Figures (3)

Fig. 1. Architecture of the ANN.

Fig. 2. Schematic of the neural network system for a single-layer thin film.

Fig. 3. Errors of the actual outputs of the trained neural networks: (a) errors of the outputs of the trained refractive-index net versus the desired output, (b) errors of the outputs of the trained thickness net versus refractive index, (c) errors of the refractive index obtained by the trained refractive-index net versus optical thickness, (d) errors of the optical thickness obtained by the trained thickness net versus the desired optical thickness, (e) errors of the outputs of the trained refractive-index net versus optical thickness, (f) errors of the outputs of the trained thickness net versus the desired optical thickness. In (a) and (b) the thickness is fixed and the refractive index is varied; in (c) and (d) the refractive index is fixed at a low value and the optical thickness is varied; in (e) and (f) the refractive index is fixed at a high value and the optical thickness is varied.

Equations (11)


$$S_{rj} = \sum_i w_{ji}\, o_{ri},$$
$$o_{rj} = f(S_{rj}),$$
$$f(S_{rj}) = \frac{1}{1 + \exp(-S_{rj})}.$$
$$E_r = \frac{1}{2} \sum_j (y_{rj} - o_{rj}^{a})^2.$$
$$E = \frac{1}{2} \sum_r \sum_j (y_{rj} - o_{rj}^{a})^2 = \sum_r E_r.$$
$$\Delta_r w_{ji} = -\eta\, \frac{\partial E_r}{\partial w_{ji}}.$$
$$\frac{\partial E_r}{\partial w_{ji}} = -\delta_{rj}\, o_{ri},$$
$$\delta_{rj} = (y_{rj} - o_{rj}^{a})\, f_j'(S_{rj}) = (y_{rj} - o_{rj}^{a})\, o_{rj}^{a} (1 - o_{rj}^{a}),$$
$$\delta_{rj} = o_{rj} (1 - o_{rj}) \sum_k \delta_{rk}\, w_{jk},$$
$$\Delta w_{ji} = \eta\, \delta_{rj}\, o_{ri},$$
$$\Delta w_{ji}(r) = \eta\, \delta_{rj}\, o_{rj}^{a} + \alpha\, \Delta w_{ji}(r-1).$$
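As a concrete illustration of Eqs. (1)-(11), a minimal NumPy sketch of a pattern-by-pattern update for one hidden layer of sigmoid units, with the weight change of Eq. (10) augmented by the momentum term of Eq. (11), might read as follows; the layer sizes, learning rate η, and momentum coefficient α are assumed values, not those used in the paper.

    import numpy as np

    def sigmoid(s):                           # Eq. (3): f(S) = 1 / (1 + exp(-S))
        return 1.0 / (1.0 + np.exp(-s))

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out = 40, 10, 1            # assumed layer sizes
    W1 = rng.normal(0.0, 0.1, (n_hid, n_in))  # input-to-hidden weights w_ji
    W2 = rng.normal(0.0, 0.1, (n_out, n_hid)) # hidden-to-output weights
    dW1_prev = np.zeros_like(W1)
    dW2_prev = np.zeros_like(W2)
    eta, alpha = 0.1, 0.9                     # assumed learning rate and momentum

    def train_pattern(x, y):
        """One backpropagation update for a single training pattern r."""
        global W1, W2, dW1_prev, dW2_prev
        h = sigmoid(W1 @ x)                   # Eqs. (1)-(2): hidden activations
        o = sigmoid(W2 @ h)                   # actual outputs o^a
        delta_out = (y - o) * o * (1.0 - o)   # Eq. (8): output-unit deltas
        delta_hid = h * (1.0 - h) * (W2.T @ delta_out)          # Eq. (9)
        dW2 = eta * np.outer(delta_out, h) + alpha * dW2_prev   # Eq. (10) + momentum
        dW1 = eta * np.outer(delta_hid, x) + alpha * dW1_prev
        W2 += dW2
        W1 += dW1
        dW2_prev, dW1_prev = dW2, dW1
        return 0.5 * np.sum((y - o) ** 2)     # Eq. (4): pattern error E_r

A complete training run would repeatedly call train_pattern over the training set until the total error E of Eq. (5) falls below a chosen tolerance.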
