## Abstract

Absolute planarity measurements by interferometry are classically made using three flats, compared two by two in the course of four or more tests. Data reduction is performed with various analytical methods. Here we present instead a data-processing algorithm that converges to the solution numerically, by iteration. Examples are presented both on synthetic interferograms and on experimental data. The high accuracy and versatility of the approach are demonstrated.

©2007 Optical Society of America

## 1. Introduction

In recent years, several papers have appeared discussing the problem of the absolute testing of flats, aimed at improving the overall measuring accuracy. Physical methods include the liquid mirror approach, the interferometric three-flat test, and the deflectometric scanning technique. As to the first method, a liquid mirror is used as a flatness standard, and a glass plate is compared interferometrically with it [1–9]. Among the merits of such a method is that it directly relates the definition of planarity to basic physical laws. On the other hand, making accurate measurements with a liquid mirror is in practice quite difficult. Major problems are meniscus effects, surface contamination, vibration-induced waviness, and temperature gradients; also, the bending of the glass plate, which is held horizontally, needs to be considered. As to the three-flat test, measurements are taken by facing the flats two by two and processing the interference patterns that are produced. This is far simpler than operating with a liquid mirror, but it is limited in that it provides information only about a single diameter, owing to the inversion that takes place when one flat is flipped front to back to face its counterpart. To overcome this limitation, different methods introduce additional measurements, such as rotations and translations, so that information is also retrieved along other sections, covering in practice most of the surface [10–17]. A variant of this approach is to use a fitting procedure based upon Zernike polynomials or symmetry considerations to obtain the absolute figure of the whole surfaces, starting from four or more measurements [18–27]. As to the deflectometric technique, the map of the surface slope is obtained by laser scanning with a pentagon prism, using a single- or multiple-sensor system [28, 29]. The overall surface figure is then calculated by slope integration.
This method is looked at with interest as a future standard, since it does not rely upon a material surface, and it can be extended to shape measurement in general.

Presently, the most widespread method of measuring absolute planarity is the three-flat test. Whichever variant of the method is adopted, a great part of the work goes into setting up and validating a reliable procedure for data processing. Measurements are usually made with phase-shifting interferometers, producing maps of optical path differences that need to be properly handled by dedicated software. The programs are often quite involved, both as to surface reconstruction and as to the uncertainty balance. This is because the classical approaches, particularly those based on Zernike polynomials, aim at reconstructing the surfaces analytically. Profiting from the enormous power offered by modern personal computers, here we propose instead a data reduction procedure that achieves reconstruction numerically, by means of an iterative algorithm. Simply stated, assuming that the set of source interferograms contains sufficient information, the problem is to find three surfaces that, combined interferometrically, produce the given interferograms. Starting from three trial surfaces and generating trial interferograms, the surfaces are recursively modified until close coincidence between trial and given interferograms is achieved. We present an implementation of this concept, along with demonstrative processing of interferograms, on both simulated and experimental data. In the latter case, comparison with a conventional processing approach is provided, also addressing the problem of data uncorrelation and discussing the uncertainty balance [24, 30].

## 2. Principle of the iterative algorithm

Although it can be applied to other measuring approaches as well, to illustrate the iterative algorithm we refer here to Fritz's method described in Ref. [18]. The three flats are named K, L, M, the symbols denoting the surfaces to be tested for planarity. For each surface, a Cartesian coordinate system is considered, with the origin at the center and the *z*-axis coincident with the external normal. The flats are compared in pairs. Referring to a Fizeau interferometer, one flat must be flipped front to back, say about the *y*-axis. We indicate the flipping operation by a subscript F, meaning for example

$${K}_{F}(x,y)=K(-x,y)$$

Naturally, a double flip leads back to the unflipped surface.
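In matrix terms, the flip about the *y*-axis reverses each row of the surface map about its central element. As a minimal sketch (assuming the maps are stored as NumPy arrays, a convention of ours and not of the original work):

```python
import numpy as np

def flip(surface):
    """Flip a surface map front to back about the y-axis.

    Each row of the matrix is reversed about its central element,
    so the operation is exact and involutory: flip(flip(s)) == s.
    """
    return surface[:, ::-1]

# A double flip leads back to the unflipped surface, as stated above.
s = np.arange(9.0).reshape(3, 3)
double_flip_ok = np.array_equal(flip(flip(s)), s)
```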

At some point during the measuring procedure, one flat is rotated by a fixed angle about the *z*-axis; in our case, the flat that is rotated is M, and the angle is 54° [31]. We indicate such a rotation by a subscript R. Overall, four interferograms are produced according to

$${K}_{F}M={K}_{F}+M$$

$${L}_{F}M={L}_{F}+M$$

$${L}_{F}{M}_{R}={L}_{F}+{M}_{R}$$

$${L}_{F}K={L}_{F}+K$$
The algorithm we use operates on trial K, L, M surfaces, and compares the resulting synthetic interferograms with the interferograms from experiments. To indicate the latter, we use a subscript "exp". Prior to processing, the experimental interferograms are prepared by removing the best-fit plane in the least-squares sense. In detail, the iterative algorithm is as follows:

1. Select an initial set of trial surfaces K, L, M.
2. Compute the synthetic interference patterns ${K}_{F}M$, ${L}_{F}M$, ${L}_{F}{M}_{R}$, ${L}_{F}K$.
3. Evaluate the differences
$$\Delta \left({K}_{F}M\right)={\left({K}_{F}M\right)}_{\mathrm{exp}}-{K}_{F}M$$
$$\Delta \left({L}_{F}M\right)={\left({L}_{F}M\right)}_{\mathrm{exp}}-{L}_{F}M$$
$$\Delta \left({L}_{F}{M}_{R}\right)={\left({L}_{F}{M}_{R}\right)}_{\mathrm{exp}}-{L}_{F}{M}_{R}$$
$$\Delta \left({L}_{F}K\right)={\left({L}_{F}K\right)}_{\mathrm{exp}}-{L}_{F}K$$
4. Flip again or counter-rotate where necessary, scale down (here by a factor of 10), and share the above differences among K, L, M:
$${K}_{\mathrm{new}}=K+\frac{1}{2}\frac{\Delta {\left({K}_{F}M\right)}_{F}}{10}+\frac{1}{2}\frac{\Delta \left({L}_{F}K\right)}{10}$$
$${L}_{\mathrm{new}}=L+\frac{1}{3}\frac{\Delta {\left({L}_{F}K\right)}_{F}}{10}+\frac{1}{3}\frac{\Delta {\left({L}_{F}M\right)}_{F}}{10}+\frac{1}{3}\frac{\Delta {\left({L}_{F}{M}_{R}\right)}_{F}}{10}$$
$${M}_{\mathrm{new}}=M+\frac{1}{3}\frac{\Delta \left({K}_{F}M\right)}{10}+\frac{1}{3}\frac{\Delta \left({L}_{F}M\right)}{10}+\frac{1}{3}\frac{\Delta {\left({L}_{F}{M}_{R}\right)}_{-R}}{10}$$
In the above, the subscript "−R" denotes a counter-rotation of the same amount as "R" (i.e., −54°); being applied to an already rotated map, such a counter-rotation obviously leads back to the unrotated surface.
5. Substitute ${K}_{\mathrm{new}}$, ${L}_{\mathrm{new}}$, ${M}_{\mathrm{new}}$ for K, L, M, and go back to step 2. The cycle is repeated until the sum of the rms differences computed in step 3 reaches a minimum.
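The steps above can be sketched in code. The following is a minimal NumPy illustration under stated assumptions of ours: the maps are square matrices, piston and tilt removal is omitted, and an exact 90° rotation (np.rot90) stands in for the 54° rotation routine, so that the demonstration stays free of interpolation error; all function names are hypothetical, not from the original work.

```python
import numpy as np

def flip(s):
    """Flip front to back about the y-axis (reverse each row)."""
    return s[:, ::-1]

def reconstruct(KFM, LFM, LFMR, LFK, rot, unrot, scale=10.0, n_iter=500):
    """Iteratively find trial surfaces K, L, M accounting for the four
    given interferograms (steps 1-5 of the text).

    rot / unrot apply the fixed rotation R and the counter-rotation -R.
    """
    K = np.zeros_like(KFM)  # step 1: neutral guess (perfectly plane)
    L = np.zeros_like(KFM)
    M = np.zeros_like(KFM)
    for _ in range(n_iter):
        # steps 2-3: synthetic interferograms and their differences
        dKFM = KFM - (flip(K) + M)
        dLFM = LFM - (flip(L) + M)
        dLFMR = LFMR - (flip(L) + rot(M))
        dLFK = LFK - (flip(L) + K)
        # step 4: flip or counter-rotate, scale down, share among K, L, M
        K = K + (flip(dKFM) + dLFK) / (2 * scale)
        L = L + (flip(dLFK) + flip(dLFM) + flip(dLFMR)) / (3 * scale)
        M = M + (dKFM + dLFM + unrot(dLFMR)) / (3 * scale)
    return K, L, M

def rms(*maps):
    """Root-mean-square over a set of difference maps."""
    return np.sqrt(np.mean([np.mean(m ** 2) for m in maps]))

# Demo on synthetic data: generate three random surfaces, build the four
# interferograms, and check that the iteration drives the residuals down.
rng = np.random.default_rng(1)
K0, L0, M0 = rng.normal(size=(3, 32, 32))
rot = lambda m: np.rot90(m, 1)     # exact rotation for the demo
unrot = lambda m: np.rot90(m, -1)
data = (flip(K0) + M0, flip(L0) + M0, flip(L0) + rot(M0), flip(L0) + K0)
K, L, M = reconstruct(*data, rot=rot, unrot=unrot)
initial_rms = rms(*data)           # residual of the all-zero initial guess
final_rms = rms(data[0] - (flip(K) + M),
                data[1] - (flip(L) + M),
                data[2] - (flip(L) + rot(M)),
                data[3] - (flip(L) + K))
```

Note that the recovered K, L, M need not coincide pixel by pixel with the generating surfaces: components at azimuthal frequencies not constrained by the data (see Section 4) remain undetermined; the meaningful check is that the given interferograms are reproduced.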

Various initial sets of trial surfaces can be chosen in step 1. The simplest choice, which we have used in the demonstrative examples discussed below, consists of setting all the elements of the matrices representing the surfaces to zero. Such a choice is a neutral guess, meaning perfectly plane surfaces. The common dimensions of the matrices need to be the same as those of the interferograms to be analysed. In the examples reported below, we use circular maps with a diameter of 169 pixels.
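In practice, a circular map can be held in a square matrix together with a boolean mask selecting the pixels inside the circle; a minimal sketch (NumPy assumed, not part of the original work):

```python
import numpy as np

def circular_mask(diameter):
    """Boolean mask selecting the pixels of a circular map of the given
    diameter, inscribed in a diameter x diameter square matrix."""
    c = (diameter - 1) / 2.0                       # central element
    i, j = np.indices((diameter, diameter))
    return (i - c) ** 2 + (j - c) ** 2 <= (diameter / 2.0) ** 2

mask = circular_mask(169)        # 169-pixel diameter, as in the examples
trial = np.zeros((169, 169))     # step 1: neutral guess, perfectly plane
```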

As to data manipulation, two basic routines are utilised: flip and rotate. The flip routine simply performs the operation of Eq. (1), exchanging the matrix elements of each row about the central element of the row. For this reason, this is an exact and very fast mathematical operation. The rotate routine is somewhat more complex, since it requires a conversion to polar coordinates (ρ, θ), a rotation, and a return to Cartesian coordinates. In detail, with (i, j) the matrix indices of the current element and (i₀, j₀) the indices of the central element, we compute

$$\rho =\sqrt{{\left(i-{i}_{0}\right)}^{2}+{\left(j-{j}_{0}\right)}^{2}}$$

$$\theta ={\mathrm{tan}}^{-1}\frac{j-{j}_{0}}{i-{i}_{0}}$$

Then we perform the rotation to (ρ′, θ′) by the equations

$$\rho \prime =\rho$$

$$\theta \prime =\theta +\phi$$

where φ is the rotation angle (54° or −54°, according to the operation in point). The new indices (i′, j′) are determined by computing

$$i\prime =\mathrm{Int}(\rho \prime \mathrm{cos}\theta \prime )$$

$$j\prime =\mathrm{Int}(\rho \prime \mathrm{sin}\theta \prime )$$

where Int(x) is a function that returns the closest integer to x. In practice, however, to end up with a completely filled matrix of rotated elements, it is customary to perform the operation in reverse, looking back from the rotated elements to the source ones. As a further computational detail, so as not to miss points at the edge, prior to rotation the source matrix is extrapolated radially by one element, and the outer points are trimmed afterwards. While the results we obtain prove satisfactory, standard routines can likely be found that accomplish all the above operations automatically. The rotation operation may anyway produce minor errors in the data set; such errors are expected to become smaller if the pixel size can be reduced. Error propagation from one iteration to the next is kept under control thanks to cyclical reference to the experimental data (step 3). In fact, at each iteration the new synthetic interferograms are compared with the experimental data; at the end of the process, the rotation errors remain embedded in the residuals. A discussion in terms of high and low spatial frequency residuals is given in the next section.
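The reverse mapping just described can be sketched as follows (a NumPy illustration under conventions of our own; for brevity the edge is simply clipped, rather than extrapolated radially by one element as in the text):

```python
import numpy as np

def rotate_map(src, phi_deg):
    """Rotate a square map by phi_deg about its central element.

    Works in reverse, as described in the text: each destination pixel
    looks back to its source location, counter-rotated by phi, and takes
    the nearest-integer neighbour (the Int() function of the text).
    """
    n = src.shape[0]
    c = (n - 1) / 2.0                         # central element (i0, j0)
    phi = np.deg2rad(phi_deg)
    i, j = np.indices(src.shape)
    x, y = i - c, j - c                       # offsets from the center
    xs = np.cos(phi) * x + np.sin(phi) * y    # look back by -phi
    ys = -np.sin(phi) * x + np.cos(phi) * y
    isrc = np.clip(np.rint(xs + c).astype(int), 0, n - 1)
    jsrc = np.clip(np.rint(ys + c).astype(int), 0, n - 1)
    return src[isrc, jsrc]
```

Nearest-neighbour rounding makes the operation exact for rotations that map the grid onto itself (0°, 90°, ...), while for a generic angle such as 54° it introduces the minor errors discussed above.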

As to the scaling factor used in step 4, choosing 1/10 allows the process to proceed smoothly towards the end result; a smaller factor would increase the computation time, with negligible advantage in the final accuracy achieved. Strategies could also be implemented to speed up convergence by properly varying the scaling factor at each iteration. As an alternative to looking for a minimum of the rms value mentioned in step 5, iterations may be stopped, for instance, when such a value becomes smaller than a fixed amount. In fact, the above rms can be used to monitor the convergence of the algorithm. No case of failure has been detected so far. However, in principle one cannot exclude that, for reasons to be ascertained, the process becomes trapped in a "false" minimum. Such an occurrence would, though, be revealed by an anomalously high value of the final rms, as compared with the typical values that remain because of noise and decorrelation effects (see below).

## 3. Examples with synthetic and real maps

In order to validate our method, we have generated three synthetic circular surfaces of 169 pixels in diameter using random Zernike polynomials with zero piston and tilt. Then we have computed the corresponding interferograms ${K}_{F}M$, ${L}_{F}K$, ${L}_{F}M$ and ${L}_{F}{M}_{R}$. The latter were regarded as experimental data and used within the iterative algorithm described above. Using a computer with a 1500 MHz CPU, processing is completed in approximately 90 seconds; the number of iterations is 256. After processing, we have compared the reconstructed surfaces with the surfaces generated initially. The difference between initial and final maps is less than 0.1 nm Peak-to-Valley. In Fig. 1 the evolution of the trial surfaces during the iterations is displayed.

We have also tested the iterative approach on real interferograms. Archival data are available from the annual calibration of a set of three reference flats with a modified Fritz's method [24]. Such a method yields the Zernike maps of best fit to the surfaces K, L, M. Residuals, though, remain, of two different kinds: high spatial frequency and decorrelation. High spatial frequency residuals remain because of the limited bandwidth of the Zernike terms employed for fitting. Decorrelation residuals are low spatial frequency contributions that are not compatible with the interferometric data. The latter contributions result from different sources, such as mechanical strain and the thermal behavior of the flats in the course of the measurements [24, 30].

Using the iterative algorithm, we can also account for the high spatial frequency components, because this method is not limited by the Zernike polynomial bandwidth. The residuals that remain are still divided into "high frequency" and "low frequency" contributions using a Zernike fitting; this is done only for the purpose of comparison with Fritz's approach. Results on a typical set of archival data (2003) are shown in Figs. 2 and 3. As it appears, agreement on the K, L, M surfaces is achieved at the nanometer scale; the reconstruction given by the iterative algorithm contains, though, finer details, due to the wider spatial frequency bandwidth of the approach. An example of residuals of the interferometric data (${K}_{F}M$) is then shown in Figs. 4 and 5. As to high frequency residuals, it is found that the iterative algorithm reduces the Peak-to-Valley to approximately one fifth of that obtained with Fritz's approach. As to low frequency residuals, agreement is achieved to better than 0.1 nm Peak-to-Valley; this is particularly interesting, since both methods consistently point out the existence of decorrelation effects, both in shape and in amount, and validate each other in this respect.

## 4. Uncertainty estimate

Provided that the data set is completely correlated, the iterative algorithm appears capable of accounting for the given interferograms up to the digitisation limit. The data set, however, is subject to various sources of uncertainty, which show up as small variations at the single pixels when the measurement is repeated. Our archival data are in fact a collection of 40 measurements, which are used to obtain the average data set and the standard deviation at each pixel. In the previous analysis with Fritz's method, uncertainty was propagated to the end results analytically. In the present case, however, analytical propagation is inconvenient because of the recursive algorithm. We therefore evaluate the propagation numerically, repeating the computations for each of the 40 data sets, and obtaining the final average maps of K, L, M and their standard deviations. The latter are reported in Fig. 6, showing in all cases a Peak-to-Valley of the order of 0.1 nm. It is understood that the major source of uncertainty is given by the residuals (both high and low frequency), which with our data are anyway limited to approximately 0.7 nm Peak-to-Valley. In addition, one should consider whether other sources of uncertainty are present, for example deriving from the specifications of the interferometer, the properties of the relevant materials, and more ("type B" uncertainty contributions [32]).
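The numerical propagation amounts to repeating the full reconstruction on each of the repeated data sets and taking per-pixel statistics of the resulting maps. A minimal sketch follows; the reconstruction routine passed in is a hypothetical placeholder (here the identity), standing in for the full iterative algorithm of Section 2:

```python
import numpy as np

def propagate(datasets, reconstruct):
    """Numerical ('type A') uncertainty propagation.

    datasets    : sequence of repeated interferogram data sets
    reconstruct : routine returning the (K, L, M) maps for one data set
    Returns the per-pixel mean and sample standard deviation of the maps.
    """
    maps = np.array([reconstruct(d) for d in datasets])
    return maps.mean(axis=0), maps.std(axis=0, ddof=1)

# Demo with a trivial placeholder reconstruction, just to show how the
# 40 repeated measurements turn into per-pixel standard deviation maps.
rng = np.random.default_rng(2)
truth = rng.normal(size=(3, 16, 16))           # stand-in for K, L, M
datasets = [truth + 0.01 * rng.normal(size=truth.shape) for _ in range(40)]
mean_maps, std_maps = propagate(datasets, reconstruct=lambda d: d)
```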

It should be noted that a further source of uncertainty arises from the assumption that the set of source interferograms contains sufficient information for reconstruction. Such an assumption is common to Fritz's method, as well as to other approaches; however, its impact on the uncertainty balance is not directly quantifiable. In fact, the iterative method could produce a result even with fewer than four source interferograms, but the assumption of sufficient information could no longer be maintained. Conversely, with more than four source interferograms the available information increases; the iterative method could easily be tailored to the actual procedure selected, and conveniently used for data processing, still under the basic assumptions of the physical approach in point. Considering for example a data set of four measurements as with Fritz's method, and a rotation angle of 54°, it is known that the interferograms do not contain information about azimuthal frequencies such as 20θ. Such information cannot be reliably retrieved by any means, simply because it is missing from the data. This is peculiar to the physical method; other azimuthal frequencies would be missing if a different rotation angle were selected. The iterative algorithm described here intervenes at the data reduction stage, as a tool to find three surfaces accounting for the given interferograms, but it can do nothing about the above feature. Problems such as the choice of an optimum angle of rotation, or the measuring method itself, remain with the physical assessment of the experimental approach, as well as the estimate of the uncertainty to be associated with the missing information.

## 5. Discussion

A peculiar aspect of the approach presented here is that surface reconstruction is not achieved at once by means of a set of closed mathematical formulas, but through iterations that, cycle after cycle, get as close to the solution as time and numerical precision allow. Convergence is monitored by the residuals, acting as an error function to be minimized by the numerical process. The effectiveness of the algorithm lies in the fact that the end residuals are very small.

A major obstacle to achieving the solution analytically by exact mathematical formulas is that the source interferograms are sampled on a square grid of points instead of a polar one. Rotating a square grid (by 54° in our case) moves the sampling points to locations that generally do not coincide with those of the unrotated grid. This in principle does not allow writing a solvable set of equations as in Refs. [11, 12]. On the other hand, it can be noted that in modern interferometers the sampling operation is performed by a CCD camera over a very fine grid of data points; also, the wave aberration being dealt with in the case of high quality flats is quite low. As a consequence, the mapping of the interference patterns given by the source data can be conceptually regarded as a continuum; the use of square-grid sampling instead of polar is merely a matter of hardware, and does not prevent the extraction of the relevant information from the source interferograms. Rotation errors, though present, affect the iterative approach negligibly. Although such errors remain embedded in the residuals, the latter clearly make it possible to single out even slight decorrelation effects that may be present in the source data. Conversely, decorrelation phenomena could pose problems for a purely analytical approach, where procedures such as least-squares fitting (and in any case the use of an error function to be minimized) would further need to be taken into account.

## 6. Conclusions

An iterative algorithm for interferogram processing in measurements of absolute planarity has been presented. The approach proves effective in accurately accounting for synthetic interferograms, and produces results in full agreement with Fritz's method on experimental data. The algorithm can easily be extended to other measuring procedures as well, taking into due account the physical requirement that a sufficient amount of information be available to achieve surface reconstruction. The approach can also be extended to sphere testing.

As compared with other methods, the algorithm is particularly simple. The computer processing time of the source interferograms is entirely manageable, even in the absence of special optimisations. The approach could be implemented as an application in existing programmable interferometers, to facilitate the task of absolute planarity measurement.

## Acknowledgments

The authors wish to express their gratitude to J. C. Wyant for generous encouragement and invaluable suggestions on the presentation of this work.

## References and links

**1. **Lord Rayleigh, “Interference bands and their application,” Nature (London) **48**, 212–214 (1893). [CrossRef]

**2. **H. Barrell and R. Marriner, “Liquid surface interferometry,” Nature (London) **162**, 529–530 (1948). [CrossRef]

**3. **G. D. Dew, “The measurement of optical flatness,” J. Sci. Instrum. **43**, 409–415 (1966). [CrossRef] [PubMed]

**4. **R. Bünnagel, H. -A. Oehring, and K. Steiner, “Fizeau interferometer for measuring the flatness of optical surfaces,” Appl. Opt. **7**, 331–335 (1968). [CrossRef]

**5. **J. P. Marioge, B. Bonino, and M. Mullot, “Standard of flatness: its application to Fabry-Perot interferometers,” Appl. Opt. **14**, 2283–2285 (1975). [CrossRef] [PubMed]

**6. **K.-E. Elssner, A. Vogel, J. Grzanna, and G. Schulz, “Establishing a flatness standard,” Appl. Opt. **33**, 2437–2446 (1994). [CrossRef] [PubMed]

**7. **J. Chen, D. Song, R. Zhu, Q. Wang, and L. Chen, “Large-aperture high-accuracy phase-shifting digital flat interferometer,” Opt. Eng. **35**, 1936–1942 (1996). [CrossRef]

**8. **I. Powell and E. Goulet, “Absolute figure measurements with a liquid-flat reference,” Appl. Opt. **37**, 2579–2588 (1998). [CrossRef]

**9. **M. Vannoni and G. Molesini, “Validation of absolute planarity reference plates with a liquid mirror,” Metrologia **42**, 389–393 (2005). [CrossRef]

**10. **G. Schulz and J. Schwider, “Precise measurement of planeness,” Appl. Opt. **6**, 1077–1084 (1967). [CrossRef] [PubMed]

**11. **G. Schulz, J. Schwider, C. Hiller, and B. Kicker, “Establishing an optical flatness standard,” Appl. Opt. **10**, 929–934 (1971). [CrossRef] [PubMed]

**12. **J. Grzanna and G. Schulz, “Absolute testing of flatness standards at square-grid points,” Opt. Commun. **77**, 107–112 (1990). [CrossRef]

**13. **G. Schulz and J. Grzanna, “Absolute flatness testing by the rotation method with optimal measuring error compensation,” Appl. Opt. **31**, 3767–3780 (1992). [CrossRef] [PubMed]

**14. **G. Schulz, “Absolute flatness testing by an extended rotation method using two angles of rotation,” Appl. Opt. **32**, 1055–1059 (1993). [CrossRef] [PubMed]

**15. **J. Grzanna, “Absolute testing of optical flats at points on a square grid: error propagation,” Appl. Opt. **33**, 6654–6661 (1994). [CrossRef] [PubMed]

**16. **B. F. Oreb, D. I. Farrant, C. J. Walsh, G. Forbes, and P. S. Fairman, “Calibration of a 300-mm-aperture phase-shifting Fizeau interferometer,” Appl. Opt. **39**, 5161–5171 (2000). [CrossRef]

**17. **S. Sonozaki, K. Iwata, and Y. Iwahashi, “Measurement of profiles along a circle on two flat surfaces by use of a Fizeau interferometer with no standard,” Appl. Opt. **42**, 6853–6858 (2003). [CrossRef] [PubMed]

**18. **B. S. Fritz, “Absolute calibration of an optical flat,” Opt. Eng. **23**, 379–383 (1984).

**19. **C. Ai and J. C. Wyant, “Absolute testing of flats by using even and odd functions,” Appl. Opt. **32**, 4698–4705 (1993). [CrossRef] [PubMed]

**20. **C. J. Evans and R. N. Kestner, “Test optics error removal,” Appl. Opt. **35**, 1015–1021 (1996). [CrossRef] [PubMed]

**21. **P. Hariharan, “Interferometric testing of optical surfaces: absolute measurement of flatness,” Opt. Eng. **36**, 2478–2481 (1997). [CrossRef]

**22. **C. J. Evans, “Comment on the paper ‘Interferometric testing of optical surfaces: absolute measurement of flatness’,” Opt. Eng. **37**, 1880–1882 (1998). [CrossRef]

**23. **R. E. Parks, L.-Z. Shao, and C. J. Evans, “Pixel-based absolute topography test for three flats,” Appl. Opt. **37**, 5951–5956 (1998). [CrossRef]

**24. **V. Greco, R. Tronconi, C. Del Vecchio, M. Trivi, and G. Molesini, “Absolute measurement of planarity with Fritz’s method: uncertainty evaluation,” Appl. Opt. **38**, 2018–2027 (1999). [CrossRef]

**25. **K. R. Freischlad, “Absolute interferometric testing based on reconstruction of rotational shear,” Appl. Opt. **40**, 1637–1648 (2001). [CrossRef]

**26. **M. F. Küchel, “A new approach to solve the three flat problem,” Optik **112**, 381–391 (2001). [CrossRef]

**27. **U. Griesmann, “Three-flat test solutions based on simple mirror symmetry,” Appl. Opt. **45**, 5856–5865 (2006). [CrossRef] [PubMed]

**28. **W. Gao, P.S. Huang, T. Yamada, and S. Kiyono, “A compact and sensitive two-dimensional angle probe for flatness measurement of large silicon wafers,” Precision Engineering **26**, 396–404 (2002). [CrossRef]

**29. **M. Schulz and C. Elster, “Traceable multiple sensor system for measuring curved surface profiles with high accuracy and high lateral resolution,” Opt. Eng. **45**, 060503 (2006). [CrossRef]

**30. **V. Greco and G. Molesini, “Micro-temperature effects on absolute flatness test plates,” Pure Appl. Opt. **7**, 1341–1346 (1998). [CrossRef]

**31. **V. B. Gubin and V. N. Sharonov, “Absolute calibration of spherical surfaces,” Sov. J. Opt. Technol. **57**, 554–555 (1990).

**32. **International Bureau of Weights and Measures, International Electrotechnical Commission, International Federation of Clinical Chemistry, International Organization for Standardization, International Union of Pure and Applied Chemistry, International Union of Pure and Applied Physics, and International Organization of Legal Metrology, *Guide to the Expression of Uncertainty in Measurement* (International Organization for Standardization, Geneva, 1993).