Abstract

The Standard Plenoptic Camera (SPC) is an innovation in photography that allows two-dimensional images focused at different depths to be acquired from a single exposure. In contrast to a conventional camera, the SPC places a micro lens array behind the main lens, and together these project virtual lenses into object space. For the first time, the present research provides an approach to estimate the distance and depth of refocused images extracted from captures obtained by an SPC. Furthermore, estimates for the position and baseline of the virtual lenses, which correspond to an equivalent camera array, are derived. On the basis of the paraxial approximation, a ray tracing model employing linear equations has been developed and implemented in Matlab. The optics simulation tool Zemax is used for validation. For a realistic SPC design, experiments demonstrate that a predicted image refocusing distance at 3.5 m deviates by less than 11% from the Zemax simulation, whereas baseline estimations show no significant difference. The proposed methodology offers an alternative to traditional depth map acquisition by disparity analysis.
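The deviation figure quoted above corresponds to a simple relative-error metric between predicted and simulated distances. A minimal sketch of that metric, using illustrative sample values rather than the paper's data:

```python
def relative_error_percent(prediction, simulation):
    # ERR = (prediction - simulation) / prediction * 100
    # Measures how far the Zemax simulation deviates from the
    # ray-tracing model's prediction, as a percentage.
    return (prediction - simulation) / prediction * 100.0

# Illustrative example: a predicted distance of 3.5 m against a
# simulated 3.2 m gives a deviation of about 8.6%.
err = relative_error_percent(3.5, 3.2)
```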

© 2014 Optical Society of America


References


  1. F. Ives, U.S. patent 725,567 (1903).
  2. G. Lippmann, “Épreuves réversibles donnant la sensation du relief,” J. Phys. Théor. Appl. 7, 821–825 (1908).
  3. E. H. Adelson and J. R. Bergen, “The plenoptic function and the elements of early vision,” in Computational Models of Visual Processing, M. S. Landy and J. A. Movshon, eds. (MIT Press, 1991), pp. 3–20.
  4. E. H. Adelson and J. Y. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 99–106 (1992).
  5. M. Levoy and P. Hanrahan, “Light field rendering,” in Proceedings of ACM SIGGRAPH (1996), pp. 31–42.
  6. A. Isaksen, L. McMillan, and S. J. Gortler, “Dynamically reparameterized light fields,” in Proceedings of ACM SIGGRAPH (2000), pp. 297–306.
  7. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford Tech. Report CTSR (2005), pp. 1–11.
  8. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” in Proceedings of ACM SIGGRAPH (2006), pp. 924–934.
  9. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21, 25418–25439 (2013).
  10. A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2009), pp. 1–8.
  11. T. Georgeiv, K. C. Zheng, B. Curless, D. Salesin, S. Nayar, and C. Intwala, “Spatio-angular resolution tradeoff in integral photography,” in Proceedings of Eurographics Symposium on Rendering (2006), pp. 263–272.
  12. C. Wu, M. McCormick, A. Aggoun, and S. Y. Kung, “Depth mapping of integral images through viewpoint image extraction with a hybrid disparity analysis algorithm,” J. Display Technol. 4, 101–108 (2008).
  13. T. E. Bishop, S. Zanetti, and P. Favaro, “Light field superresolution,” in IEEE International Conference on Computational Photography (2009).
  14. C. Perwass and L. Wietzke, “Single-lens 3D camera with extended depth-of-field,” in Human Vision and Electronic Imaging XVII, Proc. SPIE 8291, 829108 (2012).
  15. T. Georgiev, A. Lumsdaine, and S. Goma, “Plenoptic principal planes,” in Imaging and Applied Optics, OSA Technical Digest (CD) (Optical Society of America, 2011), paper JTuD3.
  16. C. Hahne and A. Aggoun, “Embedded FIR filter design for real-time refocusing using a standard plenoptic video camera,” in Digital Photography X, Proc. SPIE 9023, 902305 (2014).
  17. E. Hecht, Optics, 4th ed. (Addison Wesley, 2001).
  18. N. Konidaris, “Optical Prescriptions in Zemax” (2014), https://sites.google.com/site/nickkonidaris/prescriptions.
  19. Y. Pritch, M. Ben-Ezra, and S. Peleg, “Automatic disparity control in stereo panoramas (OmniStereo),” in Proceedings of IEEE Workshop on Omnidirectional Vision (2000), pp. 54–61.



Figures (10)

Fig. 1 Planes of irradiance (Ref. [16], Fig. 1).

Fig. 2 (a) Micro lens s_j and a chief ray m_i. (b) Collimated light rays traveling through the main lens.

Fig. 3 Ray tracing intersection model (Ref. [16], Fig. 2).

Fig. 4 Ray tracing intersection examples demonstrating Eqs. (7) and (8).

Fig. 5 Ray tracing intersection model indicating the DOF for FIS a = 1.

Fig. 6 (a–c) Tilt angles Φ_i of the virtual lenses F_i; (d) illustration of the baseline ΔB_g between virtual optical axes z′_i.

Fig. 7 Zemax screenshots: (a–b) intersecting light beams at distances d_a± considering the pixel pitch; (c) chief rays traveling through a Double Gauss objective, indicating the baseline ΔB_g.

Fig. 8 Matlab screenshots: (a) paraxial ray tracing based on refraction at principal planes; (b) close-up of rays under a micro lens; (c) close-up of the ray intersection in object space.

Fig. 9 1-D plot of (a) predicted and (b) simulated distances d.

Fig. 10 Impact of varying lens parameters on the refocusing distance d_a.

Tables (4)

Table 1 Objective lens parameters.

Table 2 Comparison of refocusing distances d_a and d_a± with respect to the image sensor.

Table 3 Refocusable distances d_a with different lens settings for f_s and b_U, f_U.

Table 4 Virtual lens positions where f_U = f_100, b_U1 = f_U and b_U2 = f_U + 20 mm. The predicted depth position F_U = 99.514 mm is given with respect to the principal plane H_1U.

Equations (37)


$$I_{b_U}(s,t) = \frac{1}{{b_U}^2} \iint L_{b_U}(s,t,U,V)\, A(U,V)\, \cos^4\theta \,\mathrm{d}U\,\mathrm{d}V$$
$$I_{b_U}(s) = \int L_{b_U}(s,U)\,\mathrm{d}U$$
$$I_{b_U}(s) = \int I_{f_s}(u,s)\,\mathrm{d}u$$
$$E_{b_U}(s) = \int E_{f_s}(u,s)\,\mathrm{d}u$$
$$\frac{1}{f_s} = \frac{1}{a_s} + \frac{1}{b_s}$$
$$0 = \lim_{a_s \to \infty}\left(\frac{1}{a_s}\right)$$
$$E_0[s_0] = E_{f_s}[u_2, s_0] + E_{f_s}[u_1, s_0] + E_{f_s}[u_0, s_0]$$
$$E_1[s_2] = E_{f_s}[u_2, s_2] + E_{f_s}[u_1, s_3] + E_{f_s}[u_0, s_4]$$
$$E_a[s_j] = \sum_{i=-c}^{c} E_{f_s}\!\left[u_{\hat{m}-1-c+i},\; s_{j+a(c-i)}\right], \quad a \in \mathbb{N}_0$$
$$k = j \times \hat{m} + c + i$$
$$E_0[s_0] = E_{f_s}[x_2] + E_{f_s}[x_1] + E_{f_s}[x_0]$$
$$E_1[s_2] = E_{f_s}[x_8] + E_{f_s}[x_{10}] + E_{f_s}[x_{12}]$$
$$m_i = \frac{\Delta u \times p_p}{f_s}$$
$$n_{i,j} = j \times p_{\hat{m}} + \frac{p_{\hat{m}}}{2} + \Delta u \times p_p$$
$$\hat{f}_{i,j}(z) = m_i \times z + n_{i,j}, \quad z \in [0, U]$$
$$U_{i,j} = m_i \times (f_s + b_U) + n_{i,j}$$
$$F_i = m_i \times f_U$$
$$q_{i,j} = \frac{F_i - U_{i,j}}{f_U}$$
$$\hat{f}_{i,j}(z) = q_{i,j} \times z + U_{i,j}, \quad z \in [U, \infty)$$
$$\hat{f}_{c,e}(z) = \hat{f}_{a-c,\,e-a}(z), \quad z \in [U, \infty)$$
$$d_a = f_s + \overline{H_{1s} H_{2s}} + b_U + \overline{H_{1U} H_{2U}} + z_a$$
$$(\Delta \ell)_{\min} \approx 1.22\, \frac{f \lambda}{A}$$
$$D_a = d_{a+} - d_{a-}$$
$$m_{i\pm} = \frac{\Delta u \times p_p \pm p_p/2}{f_s}$$
$$s_{i,j\pm} = j \times p_{\hat{m}} + p_{\hat{m}}/2 \pm p_{\hat{m}}/2$$
$$U_{i,j\pm} = m_{i\pm} \times b_U + s_{i,j\pm}$$
$$F_{i\pm} = m_{i\pm} \times f_U$$
$$q_{i,j\pm} = \frac{F_{i\pm} - U_{i,j\pm}}{f_U}$$
$$\hat{f}_{i,j\pm}(z) = q_{i,j\pm} \times z + U_{i,j\pm}, \quad z \in [U, \infty)$$
$$\hat{f}_{0,4\pm}(z) = \hat{f}_{2,2\pm}(z), \quad z \in [U, \infty)$$
$$d_{a\pm} = f_s + \overline{H_{1s} H_{2s}} + b_U + \overline{H_{1U} H_{2U}} + z_{a\pm}$$
$$E_{u_{c+i}}[s_j] = E[u_{c+i}, s_j]$$
$$\Phi_i = \arctan\left(\frac{m_i \times (b_U - f_U)}{f_U}\right)$$
$$\Delta B_g = |F_i - F_{i+g}|$$
$$R_s = f_s \times (n - 1)$$
$$d_s = f_s - \left(t_s - \overline{H_{1s} H_{2s}}\right)$$
$$\mathrm{ERR} = \frac{\text{prediction} - \text{simulation}}{\text{prediction}} \times 100.$$
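The linear ray model underlying these equations (chief-ray slope m_i behind a micro lens, intercept n_ij, object-side slope q_ij, and the refocusing plane where two object-side rays intersect) can be sketched as plain linear functions. The following Python functions mirror the symbols above but are an illustrative sketch, not the authors' Matlab implementation; all parameter values passed to them are assumptions.

```python
def chief_ray_slope(delta_u, p_p, f_s):
    # m_i = (Δu × p_p) / f_s : chief-ray slope behind a micro lens,
    # for pixel offset Δu, pixel pitch p_p, micro lens focal length f_s.
    return delta_u * p_p / f_s

def chief_ray_intercept(j, p_m, delta_u, p_p):
    # n_ij = j × p_m̂ + p_m̂/2 + Δu × p_p : ray height at the sensor side,
    # for micro lens index j and micro lens pitch p_m.
    return j * p_m + p_m / 2 + delta_u * p_p

def object_side_ray(m_i, n_ij, f_s, b_U, f_U):
    # U_ij = m_i × (f_s + b_U) + n_ij : ray height at the main lens.
    # F_i  = m_i × f_U               : ray height at the focal plane.
    # q_ij = (F_i − U_ij) / f_U      : slope of the refracted object-side ray,
    # so that f̂_ij(z) = q_ij·z + U_ij for z beyond the main lens.
    U_ij = m_i * (f_s + b_U) + n_ij
    F_i = m_i * f_U
    q_ij = (F_i - U_ij) / f_U
    return q_ij, U_ij

def intersection_z(q1, U1, q2, U2):
    # Solve q1·z + U1 = q2·z + U2 for z: the distance z_a at which two
    # object-side rays intersect, i.e. the refocusing plane.
    return (U2 - U1) / (q1 - q2)
```

The total refocusing distance d_a then follows by adding f_s, b_U, the principal-plane separations, and the intersection distance z_a, as in the corresponding equation above.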
