Abstract

In this paper, we present a novel computational imaging system that uses a dual off-axis color filtered aperture (DCA) for distance estimation in a single-camera framework. The DCA consists of two off-axis apertures covered by red and cyan color filters. The two apertures generate misaligned color channels in which the amount of misalignment of a point in the image plane is a function of the distance of the corresponding point in the object plane from the camera. The primary contribution of this paper is the derivation of a mathematical model relating the color shift values to the distance of an object from the camera, given the camera parameters and the baseline distance between the two apertures of the DCA. The proposed computational imaging system can be implemented simply by inserting an appropriately sized DCA into any general optical system. Experimental results show that the DCA camera can estimate the distances of objects within a single-camera framework.
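
As a rough sketch of how this channel misalignment might be measured, the following Python fragment estimates the vertical shift between the red and cyan channels over an image patch by normalized cross-correlation. The function interface, the search range, and the use of the green channel as a stand-in for cyan are illustrative assumptions, not the registration method used in the paper.

    import numpy as np

    def estimate_color_shift(img, row, col, h, w, search=20):
        """Estimate the vertical misalignment (in pixels) between the red
        and cyan channels over a patch of a DCA image, by normalized
        cross-correlation. Assumes img is an HxWx3 RGB array and the patch
        lies at least `search` pixels from the image border. The green
        channel stands in for cyan here (an assumption)."""
        red = img[row:row + h, col:col + w, 0].astype(float)
        red -= red.mean()
        best_shift, best_score = 0, -np.inf
        for s in range(-search, search + 1):
            cyan = img[row + s:row + s + h, col:col + w, 1].astype(float)
            cyan -= cyan.mean()
            denom = np.sqrt((red**2).sum() * (cyan**2).sum())
            score = (red * cyan).sum() / denom if denom > 0 else -np.inf
            if score > best_score:
                best_score, best_shift = score, s
        return best_shift  # shift estimate, to feed into the distance model

The per-patch shift returned by such a routine is what the model below converts into a distance estimate.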

© 2013 OSA

References

1. Y. Lim, J. Park, K. Kwon, and N. Kim, “Analysis on enhanced depth of field for integral imaging microscope,” Opt. Express 20, 23480–23488 (2012).
2. V. Aslantas, “A depth estimation algorithm with a single image,” Opt. Express 15, 5024–5029 (2007).
3. Y. Frauel and B. Javidi, “Digital three-dimensional image correlation by use of computer-reconstructed integral imaging,” Appl. Opt. 41, 5488–5496 (2002).
4. T. Poon and T. Kim, “Optical image recognition of three-dimensional objects,” Appl. Opt. 38, 370–381 (1999).
5. U. Dhond and J. Aggarwal, “Structure from stereo-a review,” IEEE Trans. Syst., Man, Cybern. 19, 1498–1510 (1989).
6. D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” Int. J. Comput. Vis. 47, 7–42 (2002).
7. C. Tomasi and T. Kanade, “Shape and motion from image streams under orthography: a factorization method,” Int. J. Comput. Vis. 9, 137–154 (1992).
8. N. Asada, H. Fujiwara, and T. Matsuyama, “Edge and depth from focus,” Int. J. Comput. Vis. 26, 153–163 (1998).
9. P. Favaro and S. Soatto, “A geometric approach to shape from defocus,” IEEE Trans. Pattern Anal. Mach. Intell. 27, 406–417 (2005).
10. A. Pentland, “A new sense for depth of field,” IEEE Trans. Pattern Anal. Mach. Intell. 9, 523–531 (1987).
11. S. Zhuo and T. Sim, “On the recovery of depth from a single defocused image,” in Proceedings of the International Conference on Computer Analysis of Images and Patterns (Seville, Spain, 2011), pp. 889–897.
12. P. Axelsson, “Processing of laser scanner data-algorithms and applications,” ISPRS Journal of Photogrammetry and Remote Sensing 54, 138–147 (1999).
13. S. Nayar, M. Watanabe, and M. Noguchi, “Real-time focus range sensor,” IEEE Trans. Pattern Anal. Mach. Intell. 18, 1186–1198 (1996).
14. L. Zhang and S. Nayar, “Projection defocus analysis for scene capture and image display,” ACM Trans. Graphics 25, 907–915 (2006).
15. S. Nayar, “Computational cameras: redefining the image,” Computer 39, 30–38 (2006).
16. C. Zhou and S. Nayar, “Computational cameras: convergence of optics and processing,” IEEE Trans. Image Process. 20, 3322–3340 (2011).
17. C. Zhou, S. Lin, and S. Nayar, “Coded aperture pairs for depth from defocus,” in Proceedings of the IEEE International Conference on Computer Vision (Institute of Electrical and Electronics Engineers, Kyoto, 2009), pp. 325–332.
18. A. Levin, R. Fergus, F. Durand, and W. Freeman, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graphics 26, 70–79 (2007).
19. Q. Dou and P. Favaro, “Off-axis aperture camera: 3D shape reconstruction and image restoration,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, Anchorage, 2008), pp. 1–7.
20. V. Maik, D. Cho, J. Shin, D. Har, and J. Paik, “Color shift model-based segmentation and fusion for digital autofocusing,” J. Imaging Sci. Technol. 51, 368–379 (2007).
21. S. Kim, E. Lee, M. Hayes, and J. Paik, “Multifocusing and depth estimation using a color shift model-based computational camera,” IEEE Trans. Image Process. 21, 4152–4166 (2012).
22. S. Bae and F. Durand, “Defocus magnification,” Comput. Graph. Forum 26, 571–579 (2007).
23. S. Lee, J. Paik, and M. Hayes, “Distance estimation with a two or three aperture SLR digital camera,” in Proceedings of Advanced Concepts for Intelligent Vision Systems (Poznan, Poland, 2013).

Figures (13)

Fig. 1. A general imaging system using a thin lens with an aperture aligned with the optical axis. Image formation of a point that is (a) within the plane of focus of the camera, and (b) outside the plane of focus of the camera.

Fig. 2. An optical system with dual off-axis apertures.

Fig. 3. Image formation of a general thin lens.

Fig. 4. Image formation with an off-axis aperture.

Fig. 5. An imaging system with an aperture at c = (0, c_y, c_z) and an equivalent aperture at c^eq = (0, c_y^eq, 0).

Fig. 6. A plot of Δy versus the distance z of an object from the camera when the plane of focus is set at 20 meters.

Fig. 7. Geometry of the dual off-axis color filtered apertures.

Fig. 8. Distance resolution. (a) The resolution versus z and (b) the accuracy of a distance estimate as a percentage of the distance being estimated.

Fig. 9. Camera configuration using dual off-axis color filtered apertures. (a) Dual apertures in the DSLR camera lens, (b) the DCA camera with a Canon EF-S 18–55 mm lens and light source, and (c) the image acquired by the DCA camera with a Tamron AF 55–200 mm lens.

Fig. 10. The color shifted images of an object located between 40 and 500 cm using a 50 mm lens.

Fig. 11. The registered results of the color shifting between red and cyan channels.

Fig. 12. Comparison of the color shifting values that were found experimentally with those that are given theoretically for a 50 mm lens with a plane of focus set to 115 cm.

Fig. 13. Comparison of the color shifting values that were found experimentally with those that are given theoretically for a 180 mm lens with a plane of focus set to 10 meters.

Tables (1)

Table 1 Specifications for the DCA camera.

Equations (25)

\[ \frac{1}{v_0} + \frac{1}{z_0} = \frac{1}{f}. \tag{1} \]
\[ z_0 = \frac{f v_0}{v_0 - f} \tag{2} \]
\[ v = \frac{f z}{z - f}. \tag{3} \]
\[ b = \frac{d f}{z_0} \, \frac{|z - z_0|}{|z - f|}, \tag{4} \]
\[ \frac{c_y - \pi_y(v)}{v} = \frac{\pi_y(v) - \pi_y(v_0)}{v_0 - v}. \tag{5} \]
\[ \pi_y(v_0) = \frac{v_0}{v} \, \pi_y(v) + \left( 1 - \frac{v_0}{v} \right) c_y. \tag{6} \]
\[ \pi_y(v) = \frac{y}{z} \, v, \tag{7} \]
\[ \pi_y(v_0) = \frac{y}{z} \, v_0 + \left( 1 - \frac{v_0}{v} \right) c_y, \tag{8} \]
\[ \frac{c_y^{\mathrm{eq}} - \pi_y(v_0)}{v_0} = \frac{c_y - \pi_y(v_0)}{v_0 - c_z}, \tag{9} \]
\[ c_y^{\mathrm{eq}} = \frac{c_y v_0 - c_z \, \pi_y(v_0)}{v_0 - c_z}. \tag{10} \]
\[ \pi_y(v_0) = \frac{v y}{z} \left( \frac{v_0 - c_z}{v - c_z} \right) + \frac{v - v_0}{v - c_z} \, c_y. \tag{11} \]
\[ \pi_x(v_0) = \frac{v x}{z} \left( \frac{v_0 - c_z}{v - c_z} \right) + \frac{v - v_0}{v - c_z} \, c_x. \tag{12} \]
\[ \pi(\mathbf{p}) = \frac{v_0}{z} \, (x, y), \tag{13} \]
\[ \Delta y = \frac{v - v_0}{v - c_z} \, \Delta c_y. \tag{14} \]
\[ v_0 = \frac{f z_0}{z_0 - f} \quad \text{and} \quad v = \frac{f z}{z - f} \tag{15} \]
\[ \Delta y = \frac{f^2 (z_0 - z)}{(z_0 - f) \left[ f z - (z - f) c_z \right]} \, \Delta c_y, \tag{16} \]
\[ \alpha = \frac{W H}{N_1 N_2} \ \text{mm}. \tag{17} \]
\[ \alpha = 0.0052. \tag{18} \]
\[ \Delta y = \frac{f^2}{\alpha} \, \frac{z_0 - z}{(z_0 - f) \left[ f z - (z - f) c_z \right]} \, \Delta c_y. \tag{19} \]
\[ z = f \, \frac{f z_0 \, \Delta c_y - \alpha c_z (z_0 - f) \, \Delta y}{f^2 \, \Delta c_y + \alpha (z_0 - f)(f - c_z) \, \Delta y}. \tag{20} \]
\[ c_z = f \, \frac{z_1 (z_0 - f) \, \Delta y - f (z_0 - z_1) \, \Delta c_y}{z_1 (z_0 - f) \, \Delta y}. \tag{21} \]
\[ z \approx \frac{f}{\alpha} \, \frac{f \, \Delta c_y - \alpha c_z \, \Delta y}{(f - c_z) \, \Delta y}. \tag{22} \]
\[ \frac{d}{dz} \Delta y = \frac{f^2}{\alpha (z_0 - f)} \, \frac{c_z (z_0 - f) - f z_0}{\left[ f z - (z - f) c_z \right]^2} \, \Delta c_y, \tag{23} \]
\[ R(z, \Delta c_y) = \left| \frac{d}{dz} \Delta y \right|^{-1} = \frac{\alpha |z_0 - f|}{f^2} \, \frac{\left[ f z - (z - f) c_z \right]^2}{\left| c_z (z_0 - f) - f z_0 \right|} \, \frac{1}{|\Delta c_y|}, \tag{24} \]
\[ R(z, \Delta c_y) \approx \frac{\alpha z^2}{f^2} \, \frac{|f - c_z|}{\Delta c_y} \tag{25} \]
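
A minimal Python sketch of the key relations above follows. The focal length, focus distance, aperture offset c_z, and baseline Δc_y are assumed values chosen for illustration; only the pixel size follows Eq. (18). It evaluates the forward color shift of Eq. (19), recovers distance via Eq. (20), and computes the resolution of Eq. (24).

    # Numerical sketch of Eqs. (19), (20), and (24) with assumed parameters.
    # All lengths in millimeters; delta_y (the color shift) is in pixels.
    f = 50.0         # focal length (mm) -- assumed, cf. Fig. 12
    z0 = 1150.0      # plane of focus (mm), i.e. 115 cm -- assumed
    cz = 30.0        # axial position of the aperture plane (mm) -- assumed
    dcy = 10.0       # baseline between the two apertures (mm) -- assumed
    alpha = 0.0052   # pixel size (mm), Eq. (18)

    def color_shift(z):
        """Color shift (pixels) of an object at distance z, Eq. (19)."""
        return (f**2 / alpha) * (z0 - z) * dcy / ((z0 - f) * (f * z - (z - f) * cz))

    def distance(dy):
        """Object distance recovered from a measured shift dy, Eq. (20)."""
        num = f * z0 * dcy - alpha * cz * (z0 - f) * dy
        den = f**2 * dcy + alpha * (z0 - f) * (f - cz) * dy
        return f * num / den

    def resolution(z):
        """Distance resolution R(z, dcy), Eq. (24)."""
        return (alpha * abs(z0 - f) / f**2) \
               * (f * z - (z - f) * cz)**2 / (abs(cz * (z0 - f) - f * z0) * dcy)

    for z in (400.0, 800.0, 1150.0, 2000.0, 5000.0):
        dy = color_shift(z)
        print(f"z = {z:6.0f} mm  shift = {dy:+9.2f} px  "
              f"recovered z = {distance(dy):6.0f} mm  R = {resolution(z):8.1f} mm")

Since Eq. (20) is the exact inverse of Eq. (19), the recovered distances reproduce the inputs up to floating-point error, and R grows roughly quadratically with z, as Eq. (25) predicts.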
