Optica Publishing Group

Wavefront reconstruction with artificial neural networks

Open Access

Abstract

In this work, a new approach using artificial neural networks was applied to reconstruct the wavefront. First, the optimal structure of the neural networks was determined. Then, the networks were trained on both noise-free and noisy spot patterns. The results of the wavefront reconstruction with artificial neural networks were compared with those obtained through the least square fit and the singular value decomposition method.

©2006 Optical Society of America

1. Introduction

Measurements of wavefront aberrations are in general performed with a Hartmann-Shack wavefront sensor (HSS) [1]. To reconstruct a wavefront from spot displacements, two standard methods are used: the least square fit (LSF) and the singular value decomposition (SVD) [2, 3, 4].

Lately, simulated artificial neural networks (ANN) have become a very powerful tool in many fields of research. Modeled on the biological neurons of the human brain, artificial neural networks are capable of learning from examples and therefore of solving problems of function approximation or pattern classification [5].

Artificial neural networks have already been used for aberration measurements in several optical systems. Yasuno et al. applied neural networks to determine the spherical aberration coefficient of a confocal objective from an axial intensity response [6]. Barrett and Sandler reported on a neural network designed to estimate the optical aberrations of the Hubble Space Telescope [7]. Montera et al. used neural networks to predict wavefront sensor slope measurements, to perform parameter estimation, and to reduce the effect of wavefront sensor measurement noise [8]. However, until now there has been no systematic study of neural networks applied to the reconstruction of the wavefront of human eyes.

Here, back-propagation artificial neural networks were applied to study the possibility of reconstructing a wavefront from spot displacements measured by a CMOS-based Hartmann-Shack wavefront sensor. Different structures of neural networks were investigated to find the optimal network architecture. The evaluations done with the artificial neural networks, the least square fit, and the singular value decomposition were also compared to determine the performance of these three reconstruction methods.

2. Artificial neural networks

Figure 1 shows the model of a simple artificial neuron. The input p to the neuron is multiplied by its weight w, which represents the coupling strength, summed with the bias value b, and passed to the transfer function f. The output of the neuron is calculated as a = f(w·p + b).
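The computation of this neuron model can be sketched in a few lines of Python (an illustrative example, not the implementation used in this work; the sigmoid is the transfer function used later in Section 3):

```python
import math

def sigmoid(x):
    """Logistic sigmoid transfer function f(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(p, w, b, f=sigmoid):
    """Single artificial neuron: output a = f(w*p + b)."""
    return f(w * p + b)

# With zero input and zero bias, the sigmoid neuron outputs f(0) = 0.5.
a = neuron(0.0, 1.0, 0.0)
```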


Fig. 1. A simple neuron model.


An artificial neural network consists of simple neurons operating in parallel and organized in layers. The transfer functions used and the connections between the layers determine the function of the whole network. Artificial neural networks learn from given samples by modifying their weights and biases. After training, a network can accomplish the given task [9].

Here, back-propagation artificial neural networks were used. Such networks are multi-layer feed-forward networks trained with the back-propagation algorithm, the most widely used algorithm for network training. Given input vectors and targets, such networks can approximate a function or classify input vectors in a way defined by the user. Standard back-propagation is a gradient descent algorithm in which the network weights are moved along the negative of the gradient of the performance function: x_{k+1} = x_k - a_k g_k, where x_k is the vector of current weights and biases, g_k is the current gradient, and a_k is the learning rate.
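The weight update rule can be illustrated with a minimal sketch (a generic gradient descent example, not the training code used in this work):

```python
def gradient_descent_step(x, g, a):
    """One back-propagation weight update x_{k+1} = x_k - a_k * g_k,
    where x is the vector of weights and biases, g the gradient of the
    performance function, and a the learning rate."""
    return [xi - a * gi for xi, gi in zip(x, g)]

# Example: minimizing E(x) = x^2 (gradient 2x) from x = 1 with rate 0.25;
# each step halves x, so the iterate converges toward the minimum at 0.
x = [1.0]
for _ in range(20):
    x = gradient_descent_step(x, [2.0 * xi for xi in x], 0.25)
```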

3. Results

Here, a CMOS-based Hartman-Shack wavefront sensor designed by Nirmeier was used for the simulations [10]. The lens array of the Hartman-Shack sensor was assumed to consist of 8×8 sublenses. That corresponds to the 64 displacements in the x- and 64 displacements in the y-direction. Thus, the number of the variables of the input vector representing the spot displacements in x- and y-direction was fixed by 128 (i.e., 128 neurons). As the transfer function the sigmoid function was used. The output vector had 5 variables since only low-order Zernike coefficients (z1 and z2 - tilts, z3 and z5 - astigmatism, and z4 - defocus) were to be calculated.
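The described geometry can be sketched as a forward pass (a hypothetical illustration with random weights standing in for a trained network; the hidden-layer size of 90 is the one found optimal in Section 3.1):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 128 inputs (x- and y-displacements of 64 spots), one sigmoid hidden
# layer, and 5 linear outputs (the Zernike coefficients z1..z5).
n_in, n_hidden, n_out = 128, 90, 5
W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))
b2 = np.zeros(n_out)

def forward(p):
    """Forward pass: spot displacements -> low-order Zernike estimates."""
    h = sigmoid(W1 @ p + b1)
    return W2 @ h + b2

z = forward(rng.normal(size=n_in))
```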

The neural networks were to be applied to estimate the optical aberrations of human eyes. Therefore, training and validation patterns were generated with the Virtual Eye program supplied by Prof. Thibos [11]. The sets of Zernike coefficients randomly generated by this program cover all reasonable values of the optical aberrations of normal human eyes [12, 13, 14]. The displacements of the spots, representing the inputs to the neural networks, were calculated from the generated Zernike coefficients for the given geometry of the wavefront sensor. The training algorithm is shown schematically in Fig. 2.
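The step from Zernike coefficients to spot displacements can be sketched as follows. This is a hypothetical forward model (unit focal length, lenslet centres on a normalized 8×8 grid, and the simple polynomial forms of the low-order terms stated in the comments), not the exact sensor geometry of Ref. [10]:

```python
import numpy as np

def spot_displacements(z, focal_length=1.0):
    """Hypothetical forward model: given low-order Zernike coefficients
    z = (z1..z5), return the 128 spot displacements (64 dx, 64 dy) on an
    8x8 lenslet grid.  Each displacement is taken as the local wavefront
    slope times the focal length, with analytic slopes of the low-order
    terms assumed here as:
      tilts:       W = z1*x + z2*y
      astigmatism: W = z3*(x^2 - y^2) + z5*(2*x*y)
      defocus:     W = z4*(x^2 + y^2)
    """
    z1, z2, z3, z4, z5 = z
    c = np.linspace(-1, 1, 8)            # lenslet centres, normalized pupil
    x, y = np.meshgrid(c, c)
    dx = z1 + 2 * z3 * x + 2 * z4 * x + 2 * z5 * y
    dy = z2 - 2 * z3 * y + 2 * z4 * y + 2 * z5 * x
    return focal_length * np.concatenate([dx.ravel(), dy.ravel()])

# A pure x-tilt shifts every spot equally in x and not at all in y.
d = spot_displacements([0.1, 0.0, 0.0, 0.0, 0.0])
```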


Fig. 2. The principle of training.


To make the network less sensitive to noise in the spot patterns, the training was also performed on noisy patterns. The noise added to the spot displacements was Gaussian random noise. No dynamical changes of the optical aberrations were taken into account, and the algorithm was not optimized for a Kolmogorov spectrum.
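The corruption of the training patterns can be sketched in one helper (an illustrative function; the noise level is expressed, as in Section 3.2, as a fraction of the sublens diameter):

```python
import numpy as np

def add_noise(displacements, sigma, rng=None):
    """Corrupt spot displacements with zero-mean Gaussian random noise of
    standard deviation sigma (e.g. a fraction of the sublens diameter)."""
    if rng is None:
        rng = np.random.default_rng()
    return displacements + rng.normal(scale=sigma, size=displacements.shape)

clean = np.zeros(128)
noisy = add_noise(clean, 0.05, np.random.default_rng(0))
```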

To evaluate the performance of the wavefront reconstruction using the neural networks, three quantities were calculated: the root-mean-square (RMS) error of the reconstructed wavefront, the residual RMS (the difference between the RMS of the reconstructed and the generated wavefront), and the error of each Zernike coefficient.
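Reading the definition of the residual RMS literally (the difference between the two RMS values), the error measures can be sketched as:

```python
import numpy as np

def rms(wavefront):
    """Root-mean-square value of a sampled wavefront."""
    return np.sqrt(np.mean(np.square(wavefront)))

def residual_rms(reconstructed, generated):
    """Residual RMS, read literally as the difference between the RMS of
    the reconstructed and of the generated wavefront."""
    return abs(rms(reconstructed) - rms(generated))
```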

3.1. Network architecture

First, a two-layer feed-forward back-propagation network was implemented and trained on noise-free patterns. The goal was to establish whether artificial neural networks can correctly estimate the wavefront of human eyes at all (if a network cannot evaluate a noise-free sample correctly, it is unlikely to cope with the added complication of noise).

In the case of training on noise-free patterns, the number of neurons in the hidden layer is not important [15]. Figure 3 shows one result of the validation after training. As can be seen, the evaluation was satisfactory: the reconstructed wavefront showed only a small difference in RMS compared to the target. The residual RMS of the reconstructed wavefront was 0.136 nm (a relative error of about 0.2%). Thus, the neural networks were found to perform well for wavefront reconstruction from noise-free patterns.


Fig. 3. One example of the validation after training on the noise-free patterns. The generated noise-free and reconstructed wavefronts are shown.


For training on noisy patterns, the number of neurons in the hidden layer was more important. In general, networks with more neurons in the hidden layer should provide a better average performance than those with fewer. In our case, the number of hidden neurons critically influenced the performance and led to different accuracies even when the same target was presented. To find an optimal network architecture, a series of neural networks with different numbers of neurons in the hidden layer was investigated. Figure 4 shows the average error, calculated as RMS_residual/RMS_generated, after training on 500 patterns. The best evaluation, with an average error of less than 4%, was achieved by a neural network with 90 neurons in the hidden layer.
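An architecture scan of this kind can be sketched on toy data. As a lightweight stand-in for full back-propagation training, the sketch below fixes a random sigmoid hidden layer and fits only the output weights by least squares (an extreme-learning-machine-style shortcut, used here purely to illustrate the scan over hidden-layer sizes, not the training procedure of this work):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def avg_error(n_hidden, X, Z):
    """Average relative error RMS_residual / RMS_generated over a set of
    patterns, for a toy network with n_hidden random sigmoid hidden units
    whose output weights are fitted by least squares."""
    W1 = rng.normal(scale=0.5, size=(n_hidden, X.shape[1]))
    H = sigmoid(X @ W1.T)                       # hidden activations
    W2, *_ = np.linalg.lstsq(H, Z, rcond=None)  # fit output weights
    err = np.sqrt(np.mean((H @ W2 - Z) ** 2, axis=1))
    ref = np.sqrt(np.mean(Z ** 2, axis=1))
    return float(np.mean(err / ref))

# Scan candidate hidden-layer sizes on 500 toy patterns with a
# synthetic linear input-target relation.
X = rng.normal(size=(500, 128))
Z = X @ rng.normal(size=(128, 5))
best = min([10, 30, 60, 90], key=lambda n: avg_error(n, X, Z))
```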


Fig. 4. The average RMS errors demonstrating the effect of different network architecture on the performance of the wavefront reconstruction.


To test the performance of the neural network trained on the noisy patterns, diverse noisy sets of Zernike coefficients were presented to the network. Figure 5 shows two results of this test.

As can be seen, the network was also able to correctly reconstruct wavefronts to which Gaussian noise had been added.


Fig. 5. Two results of the validation after the training on the noisy patterns. The noisy patterns were presented to the network.


3.2. Comparison of the reconstruction methods

Three reconstruction methods - LSF, SVD, and ANN - were compared with each other. Figure 6 shows an example of the validation. Each of these methods was able to reconstruct the wavefront, although the method using the neural networks had the smallest error.

The reconstruction methods were also compared under different noise conditions. While the neural networks could easily be trained on noisy patterns (thanks to the adaptive learning of the networks), taking the added noise into account is more complicated for the other two reconstruction methods (LSF and SVD). A regularization term can be added when noise is present in a measurement to reconstruct the wavefront with LSF or SVD, but this requires knowledge of the measurement noise. In this work, no regularization terms were implemented.
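The two analytical methods can be sketched on a hypothetical linear forward model d = A·z, where A is a 128×5 matrix mapping the five low-order Zernike coefficients to the spot displacements (random entries here; the real A follows from the sensor geometry). The last lines illustrate the kind of regularization term mentioned above, with a hypothetical damping parameter lam:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linear forward model: d = A @ z.
A = rng.normal(size=(128, 5))
z_true = rng.normal(size=5)
d = A @ z_true

# Least square fit: solve the normal equations (A^T A) z = A^T d.
z_lsf = np.linalg.solve(A.T @ A, A.T @ d)

# Singular value decomposition: z = pinv(A) @ d, i.e. V S^+ U^T d.
z_svd = np.linalg.pinv(A) @ d

# A Tikhonov regularization term damps noise amplification at the cost
# of a small bias (lam is a hypothetical damping parameter).
lam = 1e-3
z_reg = np.linalg.solve(A.T @ A + lam * np.eye(5), A.T @ d)
```

In the noise-free case both LSF and SVD recover the generated coefficients exactly.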

Figure 7 demonstrates the accuracy of the three reconstruction methods as a function of the added noise. The x-axis is the dynamic magnitude of the added noise, ranging from 0 to 10% of the diameter of a sublens. The y-axis presents the error of each low-order Zernike coefficient, defined as the squared error between the output of the reconstruction method and the generated Zernike coefficient, σ_i^2 = (z_i* - z_i)^2, where z_i* and z_i are the Zernike coefficients recalculated by the reconstruction method and generated by the Virtual Eye model, respectively. As the noise increases, the errors of every single Zernike coefficient recalculated by LSF and SVD increase significantly, while the errors made by the neural network remain smaller for most of the Zernike coefficients.
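The per-coefficient error measure is a one-liner (an illustrative helper):

```python
import numpy as np

def coeff_errors(z_est, z_gen):
    """Per-coefficient squared error sigma_i^2 = (z_i* - z_i)^2 between
    the reconstructed (z_est) and generated (z_gen) Zernike coefficients."""
    return (np.asarray(z_est, float) - np.asarray(z_gen, float)) ** 2
```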


Fig. 6. An example of the wavefront validation with three different reconstruction methods: LSF, SVD, and ANN. The neural network has 90 neurons in the hidden layer.


4. Conclusion

A new method using artificial neural networks was applied to reconstruct wavefronts from spot displacements measured by a Hartmann-Shack wavefront sensor. The networks were trained on noise-free and noisy samples to calculate the low-order Zernike coefficients. In both cases the networks could estimate the static optical aberrations of normal human eyes.

The optimal architecture of the back-propagation networks was found by calculating the average RMS error after training on the noisy patterns. In our case the neural network that gave the smallest average error had 90 neurons in the hidden layer.

The validations showed that artificial neural networks are a very powerful tool: a simple back-propagation neural network could be used satisfactorily for wavefront reconstruction. For noise-free spot patterns, back-propagation networks reconstructed the wavefront accurately. For spot patterns with added Gaussian random noise, the accuracy decreased but remained acceptable.

The performances of three different methods for wavefront reconstruction were evaluated. Each of the reconstruction methods (LSF, SVD, ANN) was capable of reconstructing the wavefront from noisy spot displacements, but the reconstruction done with the neural networks was on average more accurate. We assume that the performance of the analytical methods (LSF and SVD) would improve if a regularization term were added.

Here, only the low-order Zernike coefficients (tilts, defocus, and astigmatism) were considered because they are the dominant optical aberrations of normal human eyes. However, the method using artificial neural networks can potentially be applied to estimate higher-order optical aberrations as well (such as coma, trefoil, or spherical aberration). This is the next step of the research.


Fig. 7. The comparison among the evaluation methods - LSF, SVD, and ANN. The x-axis corresponds to the dynamic magnitude of the noise, the y-axis shows the errors of each low-order Zernike coefficient. The numbering of Zernike coefficients: z1, z2 - tilt and tip, z3 and z5 - astigmatism, z4 - defocus.


One possible application of the neural networks is their implementation on a single chip for wavefront sensing. In this way, very fast wavefront reconstruction could be achieved without a host computer.

Acknowledgments

The author wishes to thank the Landesstiftung Baden-Wuerttemberg for the scholarship during the master's degree at the University of Heidelberg.

References and links

1. J. Liang, B. Grimm, S. Goelz, and J. Bille, “Objective measurement of wave aberrations of the human eye with the use of a Hartmann-Shack wave-front sensor,” J. Opt. Soc. Am. A 11, 1949–1957 (1994). [CrossRef]  

2. D. Fried, “Least-square fitting a wave-front distortion estimate to an array of phase-difference measurements,” J. Opt. Soc. Am. 67, 370–375 (1977). [CrossRef]  

3. R. Cubalchini, “Modal wave-front estimation from phase derivative measurements,” J. Opt. Soc. Am. 69, 972–977 (1979). [CrossRef]  

4. W. H. Southwell, “Wave-front estimation from wavefront slope measurements,” J. Opt. Soc. Am. 70, 998–1006 (1980). [CrossRef]  

5. C. Bishop, Neural Networks for Pattern Recognition (Clarendon Press, Oxford, 1995).

6. Y. Yasuno, T. Yatagai, T. F. Wiesendanger, A. K. Ruprecht, and H. J. Tiziani, “Aberration measurement from confocal axial intensity response using neural network,” Opt. Express 10, 1451–1457 (2002). [PubMed]  

7. T. K. Barrett and D. G. Sandler, “Artificial neural network for the determination of Hubble Space Telescope aberration from stellar images,” Appl. Opt. 32, 1720–1727 (1993). [CrossRef]   [PubMed]

8. D. A. Montera, B. M. Welsh, M. C. Roggemann, and D. W. Ruck, “Prediction of wave-front sensor slope measurements with artificial neural networks,” Appl. Opt. 36, 675–681 (1997). [CrossRef]   [PubMed]  

9. J. W. Clark, T. Lindenau, and M. L. Ristig, Scientific Applications of Neural Nets (Springer, 1998).

10. T. Nirmaier, G. Pudasaini, and J. Bille, “Very fast wave-front measurements at the human eye with a custom CMOS-based Hartmann-Shack sensor,” Opt. Express 11, 2704–2716 (2003). [CrossRef]   [PubMed]  

11. L. N. Thibos, A. Bradley, and X. Hong, “A statistical model of the aberration structure of normal, well-corrected eyes,” Ophthalmic Physiol. Opt. 22, 427–433 (2002). [CrossRef]   [PubMed]  

12. J. Porter, A. Guirao, I. G. Gox, and D. R. Williams, “Monochromatic aberrations of the human eyes in a large population,” J. Opt. Soc. Am. A 18, 1793–1803 (2001). [CrossRef]  

13. L. N. Thibos, X. Hong, A. Bradley, and X. Cheng, “Statistical variation of aberration structure and image quality in a normal population of healthy eyes,” J. Opt. Soc. Am. A 19, 2329–2348 (2002). [CrossRef]  

14. J. F. Castejón-Mochón, N. López-Gil, A. Benito, and P. Artal, “Ocular wave-front statistics in a normal young population,” Vis. Res. 42, 1611–1617 (2002). [CrossRef]   [PubMed]  

15. S. Walczak and N. Cerpa, “Heuristic principles for the design of artificial neural networks,” Information and Software Technology 41, 107–117 (1999). [CrossRef]  


