Abstract

We propose a reconstruction method for the occluded region of a three-dimensional (3D) object that combines optical-flow-based depth extraction with triangular mesh reconstruction in integral imaging. The depth information of the sub-images generated from the acquired elemental image set is extracted using optical flow with sub-pixel accuracy, which alleviates the depth quantization problem. The extracted depth maps of the sub-image array are segmented by a depth threshold obtained from histogram-based segmentation and represented as point clouds. The point clouds are then projected to the viewpoint of the center sub-image and reconstructed by triangular mesh reconstruction. Experimental results support the validity of the proposed method, showing high peak signal-to-noise ratio and normalized cross-correlation in 3D image recognition.

© 2010 OSA




Supplementary Material (6)

» Media 1: MOV (545 KB)     
» Media 2: MOV (544 KB)     
» Media 3: MOV (671 KB)     
» Media 4: MOV (691 KB)     
» Media 5: MOV (606 KB)     
» Media 6: MOV (612 KB)     



Figures (17)

Fig. 1

Concept of the proposed method.

Fig. 2

Two types of pickup processes in integral imaging: (a) real-mode pickup with the central depth plane and another depth plane, (b) focal-mode pickup.

Fig. 3

An example of elemental-image-to-sub-image conversion: (a) computer-generated elemental image set and the center elemental image (56 × 55 lens array, 19 × 19 pixels/lens, f = 3.3), (b) sub-image array and the center sub-image (19 × 19 sub-image array, 56 × 55 pixels/sub-image).
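
For reference, the conversion illustrated in Fig. 3 is the usual pixel rearrangement in which sub-image (i, j) collects the (i, j)-th pixel of every elemental image. A minimal NumPy sketch of that rearrangement (assuming an ideal, rectified elemental image set with the dimensions quoted in the caption) is:

```python
import numpy as np

def elemental_to_subimages(eis, lens_px=19):
    """Convert an elemental image set into a sub-image array.

    eis: (H, W, C) array holding a rectified elemental image set,
         where each lens occupies lens_px x lens_px pixels.
    Returns an array of shape (lens_px, lens_px, ny, nx, C):
    sub-image (i, j) is formed by taking pixel (i, j) of every lens.
    """
    ny, nx = eis.shape[0] // lens_px, eis.shape[1] // lens_px
    # Separate lens indices from intra-lens pixel indices, then swap them:
    # the intra-lens pixel index becomes the sub-image index.
    grid = eis[:ny * lens_px, :nx * lens_px].reshape(ny, lens_px, nx, lens_px, -1)
    return grid.transpose(1, 3, 0, 2, 4)

# Example with the dimensions from Fig. 3: 56 x 55 lenses, 19 x 19 pixels per lens.
eis = np.random.rand(55 * 19, 56 * 19, 3)          # synthetic stand-in data
subimages = elemental_to_subimages(eis, lens_px=19)
print(subimages.shape)                              # (19, 19, 55, 56, 3)
```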

Fig. 4

Principles of depth extraction and the depth quantization problem in (a) the elemental-image-based method and (b) the sub-image-based method.

Fig. 5

The process of depth extraction using optical flows between the center sub-image and the x-directional sub-image sequence.

Fig. 6

Comparison of the depth maps of the center sub-image obtained using SSSD and optical flow: (a) depth map extracted using SSSD, (b) depth map extracted using optical flow, (c) comparison of the ground truth and the extracted depth maps from the two methods in the y-z plane (x = 22).

Fig. 7

Histogram of the number of pixels in the depth map of the center sub-image, with the depth threshold placed between the inflection points.
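
To illustrate the threshold selection sketched in Fig. 7, one possible realization smooths the depth histogram and places the threshold at the valley between its inflection points (sign changes of the second difference). The details below are an assumption made for illustration, not the authors' exact procedure:

```python
import numpy as np

def depth_threshold(depth_map, bins=64, sigma=2.0):
    """Pick a depth threshold between the inflection points of a smoothed
    depth histogram (one possible reading of the Fig. 7 idea)."""
    hist, edges = np.histogram(depth_map.ravel(), bins=bins)
    # Gaussian smoothing of the histogram to suppress spurious wiggles.
    x = np.arange(-3 * sigma, 3 * sigma + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    smooth = np.convolve(hist, kernel, mode="same")
    # Inflection points: sign changes of the second difference.
    curvature = np.diff(smooth, 2)
    inflections = np.where(np.diff(np.sign(curvature)) != 0)[0] + 1
    if len(inflections) < 2:
        return float(edges[len(edges) // 2])        # fallback: mid-range cut
    # Threshold at the histogram valley between the outer inflection points.
    lo, hi = inflections[0], inflections[-1]
    valley = lo + int(np.argmin(smooth[lo:hi + 1]))
    return float(0.5 * (edges[valley] + edges[valley + 1]))

# Synthetic bimodal depth map: occlusion around 40 mm, object around 90 mm.
depth = np.concatenate([np.random.normal(40, 3, 2000),
                        np.random.normal(90, 5, 8000)]).reshape(100, 100)
print(depth_threshold(depth))     # a cut between the two depth modes
```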

Fig. 8

Point cloud representation and depth segmentation of the center sub-image using the depth threshold.

Fig. 9

Reconstruction process of the occluded object in point cloud representation: (a) depth extraction and segmentation of the side sub-images, (b) reconstruction of the center sub-image with resolution enhancement using triangular mesh reconstruction.

Fig. 10

Triangular mesh reconstruction of the center sub-image: (a) reconstruction of a vertex in the occluded region of the center sub-image (blue point) using neighboring projected points (red points) in the refinement window (S = 4), (b) refinement of all other vertices by scanning the window, followed by triangular mesh reconstruction.
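
One plausible reading of the refinement step in Fig. 10 is that each vertex falling in the occluded region of the (upsampled) center-view grid is filled from the projected points inside its refinement window, for example by inverse-distance weighting. The helper below is a hypothetical sketch of that idea, not the paper's exact rule:

```python
import numpy as np

def refine_vertex(vertex_xy, points_xy, points_val, S=4):
    """Fill one occluded vertex from projected neighbors inside its
    refinement window (assumption: S is the window half-width) using
    inverse-distance weights.
    vertex_xy: (2,) grid position; points_xy: (N, 2); points_val: (N,)."""
    d = points_xy - vertex_xy
    inside = (np.abs(d[:, 0]) <= S) & (np.abs(d[:, 1]) <= S)
    if not inside.any():
        return np.nan                          # no support inside the window
    dist = np.hypot(d[inside, 0], d[inside, 1]) + 1e-6
    weights = 1.0 / dist
    return float(np.sum(weights * points_val[inside]) / np.sum(weights))

# Toy example: three projected points around a vertex at (10, 10).
pts = np.array([[9.2, 10.5], [11.1, 9.4], [13.0, 12.0]])
vals = np.array([0.8, 0.6, 0.9])
print(refine_vertex(np.array([10.0, 10.0]), pts, vals, S=4))
```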

Fig. 11

Triangular mesh reconstruction of the occluded region in the center sub-image: (a) triangular mesh representation of the occluded center sub-image, (b) triangular mesh reconstruction of the occluded region in the center sub-image with resolution enhancement (S = 10).

Fig. 12

Comparison of the occluded and reconstructed sub-image arrays at arbitrary viewpoints: (a) movie of the occluded sub-image array (Media 1), (b) movie of the reconstructed sub-image array with resolution enhancement (S = 10) (Media 2).

Fig. 13

Analysis of the reconstruction range in the occluded area using the proposed method (odd n and n_eff): w_oc, shown in red, is the maximum reconstruction range with an n_eff lens array, and the blue case corresponds to an n lens array with n smaller than n_eff.

Fig. 14

Experimental setup: (a) setup for the pickup process, (b) arrangement of the target object and the obstacle.

Fig. 15

Experimental results: (a) rectified elemental image set, (b) center sub-image, (c) depth map of the center sub-image from optical flow with sub-pixel accuracy, (d) reconstructed center sub-image, (e) reconstructed center sub-image with resolution enhancement (S = 5), (f) reconstructed depth map of the center sub-image with resolution enhancement (S = 5).

Fig. 16

Experimental results of triangular mesh reconstruction: (a) triangular mesh representation of the occluded center sub-image, (b) triangular mesh reconstruction of the occluded region in the center sub-image with resolution enhancement using the proposed method (S = 5).

Fig. 17

Experimental results at arbitrary viewpoints: (a) movie of the occluded sub-images of the box objects (Media 3), (b) movie of the reconstructed sub-images of the box objects with resolution enhancement (S = 5) (Media 4), (c) movie of the occluded sub-images of the hand object (Media 5), (d) movie of the reconstructed sub-images of the hand object with resolution enhancement (S = 5) (Media 6).

Tables (2)


Table 1 Specification of the experimental setup


Table 2 PSNR and NCC results of the reconstructed objects

Equations (9)


$$D_{CDP} = \frac{fg}{g - f}. \tag{1}$$
$$D_{d_l,\,d_p} = \frac{f\, d_l\, p_l}{d_p\, p_p}, \tag{2}$$
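
As a quick numerical check of Eqs. (1) and (2), the sketch below evaluates the central depth plane and the disparity-to-depth conversion. The focal length, gap, lens pitch, pixel pitch, and disparity used here are assumptions chosen only for illustration, not the paper's experimental values:

```python
f, g = 3.3, 3.5        # assumed focal length and gap between lens and display (mm)
p_l, p_p = 1.0, 0.05   # assumed lens pitch and pixel pitch (mm)

# Eq. (1): central depth plane of the integral imaging pickup.
D_cdp = f * g / (g - f)

# Eq. (2): depth recovered from a pixel disparity d_p observed between
# elemental images that are d_l lenses apart.
d_l, d_p = 3, 4
D = f * d_l * p_l / (d_p * p_p)

print(D_cdp, D)        # 57.75 mm and 49.5 mm with these example values
```
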
$$\frac{dI}{dt} = I_x \frac{dx}{dt} + I_y \frac{dy}{dt} + I_t = 0. \tag{3}$$
$$u = \frac{dx}{dt}, \qquad v = \frac{dy}{dt}. \tag{4}$$
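
Equations (3) and (4) are the standard brightness-constancy constraint whose solution is the dense flow field (u, v). The paper uses a sub-pixel optical flow estimator; as a stand-in, the sketch below calls OpenCV's Farnebäck routine, which is an assumption made only for illustration:

```python
import numpy as np
import cv2  # opencv-python

def dense_flow(img_a, img_b):
    """Dense per-pixel flow (u, v) from img_a to img_b, i.e. Eqs. (3)-(4).
    Farnebäck is used here purely as an illustrative sub-pixel estimator."""
    a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
    # Arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(a, b, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    return flow[..., 0], flow[..., 1]   # u (x-shift) and v (y-shift), in pixels

# Toy check: a bright square shifted by 2 pixels to the right.
img_a = np.zeros((64, 64, 3), np.uint8)
img_a[20:40, 20:40] = 255
img_b = np.roll(img_a, 2, axis=1)
u, v = dense_flow(img_a, img_b)
print(u[20:40, 20:40].mean())           # close to +2 pixels
```
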
$$D_{OF}^{(p_c, q_c)}(x, y) = \frac{f\, p_l}{2\, p_p} \left[ \frac{1}{n_x - 1} \sum_{p=1}^{n_x} \left| \frac{u_{(p_c,q_c),(p,q_c)}(x, y)}{p_c - p} \right| + \frac{1}{n_y - 1} \sum_{q=1}^{n_y} \left| \frac{v_{(p_c,q_c),(p_c,q)}(x, y)}{q_c - q} \right| \right], \tag{5}$$
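
Equation (5) converts the flows between the center sub-image and the horizontal and vertical sub-image sequences into a metric depth map by averaging the flow magnitude per unit index separation. A direct NumPy transcription with hypothetical parameter values is:

```python
import numpy as np

def depth_from_flows(u_maps, v_maps, p_c, q_c, f, p_l, p_p):
    """Eq. (5): depth map of the center sub-image (p_c, q_c).
    u_maps[p] is the x-flow from the center to sub-image (p, q_c);
    v_maps[q] is the y-flow from the center to sub-image (p_c, q).
    The p = p_c and q = q_c terms are skipped, matching the n - 1 normalization."""
    n_x, n_y = len(u_maps), len(v_maps)
    sum_u = sum(np.abs(u / (p_c - p)) for p, u in u_maps.items() if p != p_c)
    sum_v = sum(np.abs(v / (q_c - q)) for q, v in v_maps.items() if q != q_c)
    return (f * p_l / (2 * p_p)) * (sum_u / (n_x - 1) + sum_v / (n_y - 1))

# Hypothetical case: 19 x 19 sub-images, center index (10, 10), constant flows.
H, W = 55, 56
u_maps = {p: np.full((H, W), 0.3 * (10 - p)) for p in range(1, 20)}
v_maps = {q: np.full((H, W), 0.3 * (10 - q)) for q in range(1, 20)}
D = depth_from_flows(u_maps, v_maps, p_c=10, q_c=10, f=3.3, p_l=1.0, p_p=0.05)
print(D[0, 0])   # constant depth: 3.3 * 1.0 / (2 * 0.05) * (0.3 + 0.3) = 19.8
```
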
$$(x', y') = \left( x + \frac{p_p\,(p_c - p)\, D_{OF}(x, y)}{f\, p_l},\; y + \frac{p_p\,(q_c - q)\, D_{OF}(x, y)}{f\, p_l} \right), \tag{6}$$
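
Equation (6) then shifts each pixel by its depth-dependent disparity so that the segmented point clouds land on the center-view grid. A small sketch, reusing the same hypothetical parameters as above:

```python
import numpy as np

def project_to_center(xs, ys, depth, p, q, p_c, q_c, f, p_l, p_p):
    """Eq. (6): shift pixel coordinates (xs, ys) of sub-image (p, q) by the
    depth-dependent disparity so they land on the center-view grid."""
    shift = p_p * depth / (f * p_l)
    return xs + (p_c - p) * shift, ys + (q_c - q) * shift

# Hypothetical example: one pixel at (30, 12), depth 19.8 mm, sub-image (7, 10).
x_new, y_new = project_to_center(np.array([30.0]), np.array([12.0]),
                                 np.array([19.8]), p=7, q=10, p_c=10, q_c=10,
                                 f=3.3, p_l=1.0, p_p=0.05)
print(x_new, y_new)   # x shifted by 3 * 0.05 * 19.8 / 3.3 = 0.9; y unchanged
```
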
$$n_{eff} = \frac{D_{oc}}{f} + \frac{w}{p_l} + 1, \tag{7}$$
$$d_{p,eff} = \frac{f\left[(n - 1)\, p_l - w\right]}{2\, D_{oc}\, p_p}, \tag{8}$$
$$w_{oc} = \begin{cases} \dfrac{D_t\, p_l}{2f} - \dfrac{(n_{eff} - 1)\, p_l}{2} + \dfrac{w}{2}, & n \ge n_{eff} \\[1.5ex] \dfrac{D_t\, d_{p,eff}\, p_p}{f} - \dfrac{(n - 1)\, p_l}{2} + \dfrac{w}{2}, & n < n_{eff}. \end{cases} \tag{9}$$
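
Equations (7)-(9) bound how much of the occluded region an n × n lens array can recover. The helper below evaluates both branches for an assumed geometry (all lengths in millimetres, values chosen only for illustration):

```python
def reconstruction_range(n, f, p_l, p_p, w, D_oc, D_t):
    """Eqs. (7)-(9): effective lens number, effective pixel disparity, and the
    maximum recoverable width w_oc of the occluded region."""
    n_eff = D_oc / f + w / p_l + 1                                 # Eq. (7)
    if n >= n_eff:
        # Saturated branch of Eq. (9): the range no longer grows with n.
        return n_eff, None, D_t * p_l / (2 * f) - (n_eff - 1) * p_l / 2 + w / 2
    d_p_eff = f * ((n - 1) * p_l - w) / (2 * D_oc * p_p)           # Eq. (8)
    w_oc = D_t * d_p_eff * p_p / f - (n - 1) * p_l / 2 + w / 2     # Eq. (9)
    return n_eff, d_p_eff, w_oc

# Assumed geometry: f = 3.3, lens pitch 1.0, pixel pitch 0.05,
# occlusion of width 10 at D_oc = 40, target at D_t = 90.
for n in (15, 31):
    print(n, reconstruction_range(n, f=3.3, p_l=1.0, p_p=0.05,
                                  w=10.0, D_oc=40.0, D_t=90.0))
```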
