In this work, a detailed analysis of the imaging of objects lying in a plane tilted with respect to the optical axis of a rotationally symmetrical optical system is performed by means of geometrical optics theory. It is shown that the fulfillment of the so-called Scheimpflug condition (Scheimpflug rule) does not guarantee a sharp image of the object, as is usually claimed: because the aberrations of real optical systems depend on the object distance, the image becomes blurred. The f-number of a given optical system also varies with the object distance. The influence of the above-mentioned effects on the accuracy of laser triangulation sensor measurements is shown. A detailed analysis of laser triangulation sensors, based on geometrical optics theory, is performed, and relations for the calculation of measurement errors and construction parameters of laser triangulation sensors are derived.
©2013 Optical Society of America
In photographic practice one is often faced with the problem of imaging objects that lie in a plane tilted by some angle with respect to the optical axis of a rotationally symmetrical optical system, i.e. in a plane that is not perpendicular to the optical axis. A typical example of an object lying in such a tilted plane is the case of taking pictures of tall buildings. In the field of metrology this situation occurs, for example, in laser triangulation sensors. This problem was first investigated by Jules Carpentier [1]; however, he did not describe the problem mathematically. A detailed analysis of the problem was performed later by Theodor Scheimpflug (1865-1911) [2], who proved mathematically that if the image of an object lying in a plane tilted with respect to the optical axis of a photographic lens is to be sharp, then the plane of the film (photographic plate, detector) has to be tilted with respect to the optical axis of the lens in such a way that it intersects the image principal plane at the same height as the object plane intersects the object principal plane, and it passes through the image of the axial point of the object. This condition is called the Scheimpflug condition (Scheimpflug rule) [2–4]. Several companies manufacture professional photographic cameras that make it possible to exploit the Scheimpflug condition, and various tilt-and-shift lenses are available for classical cameras [5,6].
Another field where one encounters the problem of imaging objects that lie in planes tilted with respect to the optical axis of a rotationally symmetrical optical system is that of laser triangulation sensors for distance or surface topography measurements [7–18]. Because Theodor Scheimpflug assumed in his analysis that the photographic lens is an ideal optical system, the results he obtained are not completely accurate for a real optical system. If one performs a more detailed analysis of this type of imagery for a real optical system, one finds that the optical system has a different f-number and different aberrations for different object points. Due to these two effects, the image generated by a real optical system will not be sharp. In order to reduce the negative influence of the above-mentioned effects as much as possible, one can reduce the aperture of the optical system, for example by setting the f-number higher than 11. To the best of our knowledge, an analysis of Scheimpflug imaging with respect to aberrations has not yet been described in the literature. In the following text we focus on a detailed analysis of laser triangulation sensors based on geometrical optics theory, and relations for the calculation of measurement errors and construction parameters of laser triangulation sensors are derived.
2. Imaging of objects lying in a plane tilted with respect to the optical axis of a rotationally symmetrical optical system
Let us now focus on the derivation of an equation for sharp imaging of points lying, in the object space, on a straight line tilted with respect to the optical axis of an ideal rotationally symmetrical optical system, i.e. an optical system that images a point as a point, a line as a line, and a plane as a plane.
Assume the imaging of two different points by the optical system shown in Fig. 1. The first point A lies on the optical axis of the optical system at the axial distance qA from the object focal point F, and the second point C lies at the perpendicular distance y from the optical axis and at the axial distance qB from the object focal point F. F' is the image focal point of the optical system. The image of the point A is the point A', and the image of the point C is the point C', which lies at the perpendicular distance y' from the optical axis. Points P and P' are the principal points of the optical system. For imaging by such an optical system in air, the Newtonian imaging equations hold [19,20] (Fig. 1). The straight line going in the object space through the points A and C makes an angle α with the optical axis (α is negative), for which we can write Eq. (2). Denoting the axial separations of the object and image points appropriately, Eqs. (4) and (5) hold according to Fig. 1, and combining Eqs. (4) and (5) one obtains the resulting imaging relation.
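The essential chain of relations can be summarized in Newton's variables; the following is a standard derivation consistent with the notation above (signs follow the usual convention of measuring distances from the focal points, and the symbol mA is introduced here for illustration):

```latex
% Newton's imaging equations for a system in air:
q_A\, q'_A = -f'^2, \qquad q_B\, q'_B = -f'^2 ,
% slope of the object line through A and C:
\tan\alpha = \frac{y}{q_B - q_A}, \qquad
% Newtonian magnification at C and the image-side slope:
y' = \frac{f'}{q_B}\, y, \qquad
\tan\alpha' = \frac{y'}{q'_B - q'_A} .
% Eliminating y', q'_A and q'_B yields the tilt relation
\tan\alpha' = \frac{q_A}{f'}\,\tan\alpha = \frac{\tan\alpha}{m_A},
\qquad m_A = \frac{f'}{q_A}.
```

Here mA is the transverse magnification at the axial point A; the image of the tilted object line is thus again a straight line, tilted by exactly the angle α' that the Scheimpflug condition prescribes.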
Let us now apply the previous equations to the problem of laser triangulation sensors [7–18]. Assume that the detector (e.g. a CCD matrix sensor) is placed in the image plane, which is tilted with respect to the optical axis of the optical system of the sensor, and that it enables us to measure the distance d of the image spot from the axial image point (d is positive and the detector tilt angle is negative). Now, if we want to determine the distance in the object space that corresponds to the value d measured by the detector, we proceed in the following way. From Eq. (4) we obtain Eq. (9), and from Eq. (9) we obtain for the distance d the formula in Eq. (10). From Eq. (10) we then have Eq. (11), which enables us to calculate the change of the position of the spot on the detector corresponding to a change of the quantity qB by a small value. If the position of the spot on the detector can be determined with a given accuracy, then the error of the measured distance follows from Eq. (11), and from Eqs. (4) and (9) one obtains the corresponding explicit formula. Figure 2 shows the situation, where ζ and ζ' denote the entrance and exit pupils of the optical system. The diameter of the entrance pupil is D and the diameter of the exit pupil is D'. The meaning of the other symbols is clear from Fig. 2.
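To make this error propagation concrete, the following sketch traces the object-to-detector mapping numerically for an ideal (aberration-free) system using Newton's imaging equation. The focal length, object distance, and beam tilt are assumed illustrative values, and the function names are ours; this reproduces the structure of the sensitivity relation, not the paper's exact Eqs. (10)–(11):

```python
import math

def image_point(q, y, fp):
    """Newtonian imaging in air, distances measured from the focal points:
    q*q' = -f'^2, transverse magnification m = f'/q."""
    return -fp**2 / q, (fp / q) * y

def spot_position(qB, qA, alpha, fp):
    """Signed distance d of the image of a point on the laser line from the
    axial image point A', measured along the tilted image line."""
    y = (qB - qA) * math.tan(alpha)          # point C on the laser line
    qpA, _ = image_point(qA, 0.0, fp)
    qpB, yp = image_point(qB, y, fp)
    return math.copysign(math.hypot(qpB - qpA, yp), qpB - qpA)

# assumed example parameters: f' = 50 mm, axial point 200 mm before F,
# laser line tilted by 30 degrees with respect to the optical axis
fp, qA, alpha = 50.0, -200.0, math.radians(30.0)

# sensitivity dd/dq of the spot position, by a central difference at q = qA
h = 1e-3
ddq = (spot_position(qA + h, qA, alpha, fp)
       - spot_position(qA - h, qA, alpha, fp)) / (2.0 * h)

# a 1 um uncertainty of the spot centroid maps to an object-distance error
delta_q = 1e-3 / abs(ddq)    # in mm
```

For these assumed numbers the spot moves roughly 0.16 mm per millimeter of object displacement, so a 1 μm centroid uncertainty corresponds to an object-distance error of several micrometers.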
Assume now that the optical system is diffraction limited (its aberrations are zero). For the angle between the edge meridional rays that emerge from the point C and pass through the edge of the entrance pupil, and for the angle between the two edge meridional rays that pass through the edge of the exit pupil and converge to the point C', one can derive the following relations.
Denoting by ω0 the angle between the two meridional rays emerging from the axial point A and passing through the edge of the entrance pupil, and by ω'0 the angle between the two meridional rays converging to the axial point A' and passing through the edge of the exit pupil, one can derive the following relations [20].
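The object-distance dependence of the aperture angle and the f-number can be summarized paraxially; this is a standard relation stated in our own notation (assuming a system in air and a pupil magnification close to unity), not a formula reproduced from the displayed equations:

```latex
% Newtonian magnification at the axial object point:
m = \frac{f'}{q_A}, \qquad
% image-side aperture half-angle for the image distance l' = (1-m)f':
\tan\frac{\omega'_0}{2} = \frac{D'}{2\,l'} \approx \frac{D'}{2\,(1-m)\,f'},
% working f-number compared with the infinity f-number:
\qquad N_w \approx (1-m)\,N_\infty, \qquad N_\infty = \frac{f'}{D}.
```

Since m = f'/qA varies with the object distance, both ω'0 and the working f-number Nw differ from point to point of a tilted object, which is the f-number variation referred to in the introduction.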
3. Change in aberrations with object position
Consider now the influence of a change in the object position on the imaging properties of a general rotationally symmetrical optical system. The problem will be analyzed using the theory of third-order aberrations [19,21–28], which makes it possible to obtain the solution in a simple analytical form. The aberration properties for light of a specific wavelength are given by the third-order aberration coefficients SI, SII, SIII, SIV, SV, and SVI, where SI is the coefficient of spherical aberration, SII is the coefficient of coma, SIII is the coefficient of astigmatism, SIV is the Petzval sum, SV is the coefficient of distortion, and SVI is the coefficient of spherical aberration in pupils. We obtain the following equations [21,22] for the transverse ray aberrations within the validity of the third-order aberration theory [26,27]. The coefficients SI, SII, SIII, SIV, SV, which describe the imaging properties of the optical system for an arbitrary position of the object (arbitrary transverse magnification m), can be expressed using the third-order aberration coefficients that characterize the imaging properties of the optical system for an object at infinity (transverse magnification m = 0) together with the coefficient of spherical aberration in pupils. The formulas for the aberration coefficients SI, SII, SIII, SIV, SV [19,21–28] can be rewritten, after a tedious derivation, into the matrix form of Eqs. (22)–(24). One can see from Eqs. (22)–(24) the advantage of the matrix form: the matrix B has to be calculated only once, and it can then be used for different values of the magnification. The matrix form is also very useful for zoom lens design.
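The dependence of aberrations on the object position can also be demonstrated independently of the third-order machinery by exact ray tracing. The following sketch traces meridional rays through a simple biconvex singlet (all lens data are assumed for illustration and are unrelated to the systems analyzed in this paper) and compares the longitudinal spherical aberration for an object at infinity and at a finite distance:

```python
import math

def refract(d, n, n1, n2):
    """Vector form of Snell's law; d, n are unit vectors, n oriented
    against the incoming ray (no total internal reflection assumed)."""
    cos_i = -(d[0] * n[0] + d[1] * n[1])
    r = n1 / n2
    cos_t = math.sqrt(1.0 - r * r * (1.0 - cos_i * cos_i))
    return (r * d[0] + (r * cos_i - cos_t) * n[0],
            r * d[1] + (r * cos_i - cos_t) * n[1])

def trace_surface(p, d, vertex, R, n1, n2):
    """Propagate the ray p + t*d to a spherical surface with the given
    axial vertex position and radius R, then refract it."""
    cz = vertex + R                          # centre of curvature on the axis
    oz, oy = p[0] - cz, p[1]
    b = oz * d[0] + oy * d[1]
    t = -b - math.copysign(math.sqrt(b * b - (oz * oz + oy * oy - R * R)), R)
    q = (p[0] + t * d[0], p[1] + t * d[1])
    nrm = ((q[0] - cz) / R, q[1] / R)
    if nrm[0] * d[0] + nrm[1] * d[1] > 0.0:  # orient normal against the ray
        nrm = (-nrm[0], -nrm[1])
    return q, refract(d, nrm, n1, n2)

def axis_crossing(z_obj, h):
    """Trace a ray of aperture height h through the singlet and return the
    axial position where it crosses the optical axis. z_obj is the object
    position relative to the first vertex; None means object at infinity."""
    if z_obj is None:
        p, d = (-50.0, h), (1.0, 0.0)                    # collimated input
    else:
        L = math.hypot(z_obj, h)
        p, d = (z_obj, 0.0), (-z_obj / L, h / L)         # from axial point
    p, d = trace_surface(p, d, 0.0, 60.0, 1.0, 1.5)      # biconvex singlet,
    p, d = trace_surface(p, d, 5.0, -60.0, 1.5, 1.0)     # n = 1.5, t = 5 mm
    return p[0] - p[1] * d[0] / d[1]

def lsa(z_obj):
    """Longitudinal spherical aberration: paraxial minus marginal focus."""
    return axis_crossing(z_obj, 0.01) - axis_crossing(z_obj, 8.0)

lsa_inf = lsa(None)     # object at infinity
lsa_fin = lsa(-200.0)   # object 200 mm in front of the first vertex
```

The two values differ by roughly a millimeter for this singlet, which is precisely the effect analyzed above: an objective corrected for infinity is no longer corrected at the finite distances at which a triangulation sensor operates.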
It can be shown that the previous formulas are generally valid (within the validity of the third-order aberration theory) and do not depend on the composition of the optical system [21–28]. The properties of the optical system are then fully specified by its focal length f′ and by the third-order aberration coefficients that characterize its imaging properties for an object at infinity. The aberration coefficients SI, SII, SIII, SIV, SV are calculated for the following input values.
Assume now that the laser triangulation sensor uses an optical system (objective lens) that has all third-order aberrations corrected for an object at infinity, so that the corresponding third-order aberration coefficients have zero values. The matrix B then simplifies to the form of Eq. (27), and using Eqs. (22)–(24) and Eq. (27) we can rewrite Eq. (20) as Eq. (29), which gives the aberration coefficients of an optical system whose aberration coefficients are zero for an object at infinity. The mean values of the transverse ray aberrations (the coordinates of the centroid of the spot diagram) and the diameter dc of the circle of confusion in the paraxial image plane can be expressed by formulas given in [21,22], and therefore we will not repeat their derivation here.
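The centroid (energetic centre) and a circle-of-confusion diameter can be computed directly from a spot diagram; the following is a minimal sketch with synthetic ray data, using the direct geometric definitions rather than the paper's third-order formulas:

```python
import math

def spot_centroid_and_diameter(points):
    """Centroid of a spot diagram and the diameter of the smallest circle
    centred on the centroid that contains all ray intersection points."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    r = max(math.hypot(x - cx, y - cy) for x, y in points)
    return (cx, cy), 2.0 * r

# synthetic, one-sided (coma-like) spot diagram in mm; assumed values
pts = [(0.0, 0.0), (0.02, 0.0), (0.04, 0.01), (0.02, -0.01)]
(cx, cy), dc = spot_centroid_and_diameter(pts)
```

The centroid lands to one side of the chief-ray intersection, which is exactly why a one-sided aberration blur shifts the measured spot position rather than merely enlarging it.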
Using Eq. (4) we obtain Eq. (33), and from Eq. (33) we obtain the subsequent relations. Figure 3 shows the imaging of points lying in the object plane η, tilted with respect to the optical axis, by the real optical system OS with aberrations. The point A' is the paraxial image of the point A, and the point C', lying in the paraxial image plane η', is the paraxial image of the point C. The planes η and η' satisfy the Scheimpflug condition. The paraxial image height is y'. The homocentric bundle of rays emerging from the point C passes through the entrance pupil ζ and is transformed by the optical system OS into a nonhomocentric bundle of rays whose energetic centre is located at the point C'', at some distance from the paraxial image point C'. The chief ray of this bundle intersects the plane η' at the point D'. According to Fig. 3 we obtain for this distance Eq. (36), which is sufficiently accurate for practical cases. Equation (36) enables us to calculate the shift of the energetic centre of the spot at the detector (lying in the plane η') with respect to the position of this centre in the case of an ideal optical system without aberrations. As one can see, if we use an optical system corrected for an object at infinity (e.g. a photographic lens) in a laser triangulation sensor, then the measurement of objects at a finite distance is affected by a measurement error.
For the distance measurement error caused by the dependence of the aberrations of the optical system on the object distance, we can then write a formula following from Eq. (11). A further contribution to the measurement error is caused by the roughness of the measured surface [7,8,14,29]. The properties of photodetectors are described in [30,31], and Ref. [32] is focused on the optomechanical properties of optoelectronic devices. By a suitable adjustment, many of the above-mentioned effects can be reduced to an acceptable level. The adjustment process of optical systems and devices is described in detail in the books [33–35].
Let us now show an example of the calculation of the parameters of a laser triangulation sensor using an objective lens corrected for an object at infinity. We choose, for example, dmax = 15 mm together with the other construction parameters of the sensor.
The results of the error calculation for different object positions are given in Table 1, which lists the error in the determination of the object position, the corresponding shift of the position of the spot on the detector, and the diameter of the circle of confusion in the plane ξ' centred at the point C''. The linear dimensions in Table 1 are given in millimeters. Table 1 presents two cases of objective lenses with different f-numbers. As one can see from Table 1, the change of the aberrations of the objective lens with the object distance causes an error of the measured distance over the 20 mm measuring range of the sensor. As can also be seen from Table 1, the error caused by the aberrations is higher than the error caused by the surface roughness of the measured object. By decreasing the aperture (increasing the f-number), the error of the objective lens is reduced.
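The trend in the last sentence can be quantified with two standard scaling laws (these are general geometric-optics scalings, not formulas or values from Table 1): the geometric defocus blur falls off as 1/Nw, while the transverse third-order spherical-aberration blur falls off as the cube, 1/Nw³, so stopping down suppresses the aberration-induced error even faster than the defocus blur:

```python
def defocus_blur(delta_z, n_w):
    """Geometric blur-circle diameter for an image-space defocus delta_z,
    small-angle approximation: d_c ~ delta_z / N_w."""
    return delta_z / n_w

def spherical_blur(k, n_w):
    """Transverse third-order spherical-aberration blur; it scales with the
    cube of the relative aperture, d_sph = k / N_w**3, where k is a
    system-dependent constant assumed here purely for illustration."""
    return k / n_w ** 3

# stopping down from N_w = 5.6 to N_w = 11 (illustrative values)
ratio_defocus = defocus_blur(0.2, 5.6) / defocus_blur(0.2, 11.0)
ratio_spherical = spherical_blur(1.0, 5.6) / spherical_blur(1.0, 11.0)
```

Stopping down by two stops thus reduces the defocus blur about twofold but the spherical-aberration blur almost eightfold, consistent with the recommendation in the introduction to work at f-numbers above 11.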
We performed a detailed theoretical analysis of the so-called Scheimpflug imaging condition, i.e. the problem of imaging objects lying in a plane tilted with respect to the optical axis of a rotationally symmetrical optical system. The analysis was performed within the validity of the third-order aberration theory. It was shown that the fulfillment of the Scheimpflug condition does not guarantee a sharp image of the object, as is usually claimed, because due to the dependence of the aberrations of real optical systems on the object distance the image becomes blurred. The influence of these effects on the accuracy of laser triangulation sensor measurements was analyzed, and formulas for the calculation of measurement errors and construction parameters of laser triangulation sensors were derived.
This work has been supported by the Czech Science Foundation grant 13-31765S.
References and links
1. J. Carpentier, Improvements in Enlarging or like Cameras, British Patent No. 1139 (1901).
2. T. Scheimpflug, Improved Method and Apparatus for the Systematic Alteration or Distortion of Plane Pictures and Images by Means of Lenses and Mirrors for Photography and for other Purposes, British Patent No. 1196 (1904).
3. L. Larmore, Introduction to Photographic Principles (Dover Publications, 1965).
4. S. F. Ray, Applied Photographic Optics (Focal Press, 2002).
8. R. Leach, Optical Measurement of Surface Topography (Springer, 2011).
9. K. Harding, Handbook of Optical Dimensional Metrology (Taylor & Francis, 2013).
10. K. Žbontar, M. Mihelj, B. Podobnik, F. Povše, and M. Munih, “Dynamic symmetrical pattern projection based laser triangulation sensor for precise surface position measurement of various material types,” Appl. Opt. 52(12), 2750–2760 (2013). [CrossRef] [PubMed]
11. H.-Y. Feng, Y. Liu, and F. Xi, “Analysis of digitizing errors of a laser scanning system,” Precis. Eng. 25(3), 185–191 (2001). [CrossRef]
12. R.-T. Lee and F.-J. Shiou, “Multi-beam laser probe for measuring position and orientation of freeform surface,” Measurement 44(1), 1–10 (2011). [CrossRef]
13. J. Liu, L. Tian, and L. Li, “Light power density distribution of image spot of laser triangulation measuring,” Opt. Lasers Eng. 29(6), 457–463 (1998). [CrossRef]
14. L. Shen, D. Li, and F. Luo, “A study on laser speckle correlation method applied in triangulation displacement measurement,” Optik (2013). [CrossRef]
15. H. Wang, “Long-range optical triangulation utilising collimated probe beam,” Opt. Lasers Eng. 23(1), 41–52 (1995). [CrossRef]
16. V. Lombardo, T. Marzulli, C. Pappalettere, and P. Sforza, “A time-of-scan laser triangulation technique for distance measurements,” Opt. Lasers Eng. 39(2), 247–254 (2003). [CrossRef]
17. G. Wang, B. Zheng, X. Li, Z. Houkes, and P. P. L. Regtien, “Modelling and calibration of the laser beam-scanning triangulation measurement system,” Robot. Auton. Syst. 40(4), 267–277 (2002). [CrossRef]
18. B. Muralikrishnan, W. Ren, D. Everett, E. Stanfield, and T. Doiron, “Performance evaluation experiments on a laser spot triangulation probe,” Measurement 45(3), 333–343 (2012). [CrossRef]
19. M. Born and E. Wolf, Principles of Optics, 7th ed. (Cambridge University, 1999).
20. H. Gross, Handbook of Optical Systems: Fundamentals of Technical Optics (Wiley, 2005).
22. A. Miks and J. Novak, “Dependence of camera lens induced radial distortion and circle of confusion on object position,” Opt. Laser Technol. 44(4), 1043–1049 (2012). [CrossRef]
23. A. Miks, Applied Optics (Czech Technical University, 2009).
24. H. A. Buchdahl, An Introduction to Hamiltonian Optics (Cambridge University, 1970).
25. W. T. Welford, Aberrations of the Symmetrical Optical System (Academic Press, 1974).
26. M. Herzberger, Modern Geometrical Optics (Interscience, 1958).
27. M. Herzberger, Strahlenoptik (Verlag von Julius Springer, Berlin, 1931).
28. C. G. Wynne, “Primary aberrations and conjugate change,” Proc. Phys. Soc. 65B, 429–437 (1952).
30. F. Träger, ed., Springer Handbook of Lasers and Optics (Springer, 2007).
31. B. E. A. Saleh and M. C. Teich, Fundamentals of Photonics (John Wiley & Sons, 2007).
32. P. E. Yoder, Jr., Opto-Mechanical Systems Design (CRC, 2006).
33. M. M. Rusinov, Adjustment of Optical Instruments (Nedra, 1969) [in Russian].
34. J. Picht, Meß- und Prüfmethoden der Optischen Fertigung (Akademie-Verlag, 1953).
35. F. Hansen, Justierung (VEB Verlag Technik, 1967).