Abstract

In this paper, a new and convenient calibration algorithm is proposed for unsynchronized camera networks with a large capture volume. The proposed method provides a simple and accurate means of calibration using a small 3D reference object. Moreover, since manufacturing inaccuracy of the object is compensated simultaneously, the object can be fabricated at low cost. The extrinsic and intrinsic parameters are recovered simultaneously by capturing the object placed arbitrarily at different locations in the capture volume. The proposed method first solves the problem linearly by factorizing the projection matrices into camera and object pose parameters. The multi-view constraints embedded in this factorization enforce consistency of the rigid transformations among cameras and objects. These consistent estimates can then be refined by a non-linear optimization process. The proposed algorithm is evaluated in simulated and real experiments and shown to be more efficient than previous methods.

© 2012 OSA

References

1. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed. (Cambridge University Press, 2003).
2. Z. Zhang, "A flexible new technique for camera calibration," Tech. Rep. MSR-TR-98-71, Microsoft Corporation (1998).
3. T. Ueshiba and F. Tomita, "Plane-based calibration algorithm for multi-camera systems via factorization of homography matrices," in Proc. IEEE International Conference on Computer Vision, Nice, France (2003), pp. 966–973.
4. T. Svoboda, D. Martinec, and T. Pajdla, "A convenient multi-camera self-calibration for virtual environments," Presence: Teleop. Virt. Environ. 14, 407–422 (2005).
5. G. Kurillo, Z. Li, and R. Bajcsy, "Wide-area external multi-camera calibration using vision graphs and virtual calibration object," in Proc. ACM/IEEE International Conference on Distributed Smart Cameras (2008), pp. 1–9.
6. H. Medeiros, H. Iwaki, and J. Park, "Online distributed calibration of a large network of wireless cameras using dynamic clustering," in Proc. ACM/IEEE International Conference on Distributed Smart Cameras (2008), pp. 1–10.
7. X. Chen, J. Davis, and P. Slusallek, "Wide area camera calibration using virtual calibration objects," in Proc. IEEE International Conference on Computer Vision and Pattern Recognition, Hilton Head, SC, USA (2000), pp. 520–527.
8. S. N. Sinha and M. Pollefeys, "Camera network calibration and synchronization from silhouettes in archived video," Int. J. Comput. Vision 87, 266–283 (2010).
9. E. Boyer, "On using silhouettes for camera calibration," in Proc. Asian Conference on Computer Vision, Hyderabad, India (2006), pp. 1–10.
10. J. Kassebaum, N. Bulusu, and W.-C. Feng, "3-D target-based distributed smart camera network localization," IEEE Trans. Image Process. 19, 2530–2539 (2010).
11. C. Tomasi and T. Kanade, "Shape and motion from image streams under orthography: a factorization method," Int. J. Comput. Vision 9, 137–154 (1992).
12. P. Sturm and B. Triggs, "A factorization based algorithm for multi-image projective structure and motion," in Proc. European Conference on Computer Vision, Cambridge, UK (1996), pp. 709–720.
13. D. Jacobs, "Linear fitting with missing data: applications to structure from motion and to characterizing intensity images," in Proc. IEEE International Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico (1997), pp. 206–212.
14. D. Martinec and T. Pajdla, "Structure from many perspective images with occlusions," in Proc. European Conference on Computer Vision, Copenhagen, Denmark (2002), pp. 355–369.
15. P. Sturm, "Algorithms for plane-based pose estimation," in Proc. IEEE International Conference on Computer Vision and Pattern Recognition, Hilton Head Island, SC, USA (2000), pp. 706–711.
16. M. Wilczkowiak, P. Sturm, and E. Boyer, "Using geometric constraints through parallelepipeds for calibration and 3D modelling," IEEE Trans. Pattern Anal. Mach. Intell. 27, 194–207 (2005).
17. O. Faugeras, Three-Dimensional Computer Vision (The MIT Press, 1993).
18. K. S. Arun, T. S. Huang, and S. D. Blostein, "Least-squares fitting of two 3-D point sets," IEEE Trans. Pattern Anal. Mach. Intell. PAMI-9, 698–700 (1987).
19. A. Fitzgibbon, M. Pilu, and R. B. Fisher, "Direct least square fitting of ellipses," IEEE Trans. Pattern Anal. Mach. Intell. 21, 476–480 (1999).
20. R. Hartley, "In defence of the 8-point algorithm," in Proc. International Conference on Computer Vision, Sendai, Japan (1995), pp. 1064–1070.




Figures (13)

Fig. 1

Camera and 3D object pose parameters

Fig. 2

The distribution of the relative error of abc and ccc. The variables a, b, and c are the diagonal elements of the upper triangular matrix obtained through RQ-decomposition of H (= KRS).

Fig. 3

Rotation matrix R_f and translation vector t_f representing the inaccuracy of the reference object.

Fig. 4

The environments for the simulated experiments. (a) ENV1. (b) ENV2.

Fig. 5

The visibility map for the object positions in ENV2 when the number of object positions is 25. The horizontal and vertical axes represent the object position index and camera index, respectively.

Fig. 6

The magnitude of the singular values of the rescaled measurement matrix Ỹ for each environment of Table 2.

Fig. 7

(a) 3D reference object used in real image experiments. (b) Automatic feature extraction results for some circle pairs. The blue lines represent the bi-tangents. The extracted features are represented with red +’s.

Fig. 8

Two actual camera networks for evaluation of the proposed algorithm.

Fig. 9

Example of captured images for Network1.

Fig. 10

Visualization of the calibration results for Network1. (a) Top and (b) side views are shown, respectively.

Fig. 11

Example of captured images for Network2.

Fig. 12

The visibility map for the object positions in Network2. The horizontal and vertical axes represent the object position index and the camera index, respectively.

Fig. 13

Visualization of the calibration results for Network2. (a) Top and (b) side views are shown, respectively.

Tables (7)

Table 1 Results of the simulated experiments for performance evaluation. σ, W, and P represent the noise standard deviation in pixels, the object width in mm, and the number of object positions, respectively. RPE is the re-projection error in pixels. E_fu is the error in the focal length f_u. The rotation error E_R is the Frobenius norm of the difference between the true and estimated rotation matrices. E_t is the norm of the translation-vector error in mm.

Table 2 Results of the simulated experiments used to compare the methods in Sections 5.1 and 5.2. RPE1 is the re-projection error after the factorization step and RPE2 the error after the refinement step.

Table 3 Results of the simulated experiments on object inaccuracy and radial distortion. (θ̂_f, ‖t̂_f‖) represent the pose discrepancy of the object's faces from the original drawing: each face is rotated around a random axis by angle θ̂_f (°) and translated in a random direction t̂_f (mm). (k1, k2) are the lens distortion parameters. E_t̂f is the norm of the error in t̂_f. (E_k1, E_k2) are the errors in (k1, k2).

Table 4 Results of the simulated experiments for the method of [4] using the features detected for the proposed method. C and P represent the number of cameras and object positions, respectively. The case in which calibration results cannot be obtained from the method is indicated by "×".

Table 5 The specifications of the actual camera network environments. C represents the number of cameras used in the experiment. P represents the number of object positions.

Table 6 Results of the real image experiments for consistency evaluation. These results are obtained from the subgroups of the cameras, {0,2,4} and {1,3}. Compare these results with those of Table 7.

Table 7 Results of the real image experiments for consistency evaluation. These results are obtained from all the cameras. Compare these results with those of Table 6.

Equations (23)

Equations on this page are rendered with MathJax.

$$\mathbf{x} \sim K_i \left[\, R_i \mid \mathbf{t}_i \,\right] \mathbf{X} \equiv M_i \mathbf{X},$$
$$\mathbf{X} = \begin{bmatrix} S_j & \mathbf{v}_j \\ \mathbf{0}^T & 1 \end{bmatrix} \mathbf{X}_{\mathrm{ref}} = N_j \mathbf{X}_{\mathrm{ref}},$$
$$\mathbf{x} \sim M_i N_j \mathbf{X}_{\mathrm{ref}} = P_i^j \mathbf{X}_{\mathrm{ref}},$$
$$\begin{bmatrix} \mathbf{0}^T & -\mathbf{X}_{\mathrm{ref}}^{k\,T} & v\,\mathbf{X}_{\mathrm{ref}}^{k\,T} \\ \mathbf{X}_{\mathrm{ref}}^{k\,T} & \mathbf{0}^T & -u\,\mathbf{X}_{\mathrm{ref}}^{k\,T} \\ -v\,\mathbf{X}_{\mathrm{ref}}^{k\,T} & u\,\mathbf{X}_{\mathrm{ref}}^{k\,T} & \mathbf{0}^T \end{bmatrix} \mathbf{p} = \mathbf{0},$$
$$\Lambda \mathbf{p} = \mathbf{0}.$$
$$\sum_{k \in I_i^j} \left\| \tilde{\mathbf{x}}_i^{jk} - \hat{\mathbf{x}}_i^{jk}\!\left(P_i^j, \mathbf{X}_{\mathrm{ref}}^k\right) \right\|^2,$$
$$\lambda_i^j \tilde{P}_i^j = M_i N_j = P_i^j.$$
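For illustration, here is a minimal numpy sketch of the linear DLT step above: two constraint rows per point are stacked into Λ and Λp = 0 is solved via SVD. The function name and the input layout (homogeneous 4-vector reference points with observed pixels (u, v)) are assumptions for the sketch, not the authors' code.

```python
import numpy as np

def estimate_projection_dlt(X_ref, uv):
    """Direct linear transform: stack two constraint rows per point
    into Lambda and solve Lambda p = 0 by SVD (smallest singular value)."""
    rows = []
    for X, (u, v) in zip(X_ref, uv):
        X = np.asarray(X, dtype=float)        # homogeneous 4-vector
        rows.append(np.concatenate([np.zeros(4), -X, v * X]))
        rows.append(np.concatenate([X, np.zeros(4), -u * X]))
    Lambda = np.vstack(rows)                  # (2k x 12) constraint matrix
    _, _, Vt = np.linalg.svd(Lambda)
    return Vt[-1].reshape(3, 4)               # P up to scale, ||p|| = 1
```

In practice the coordinates should be normalized beforehand, in the spirit of [20], to keep Λ well conditioned.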
$$\tilde{W} = \begin{bmatrix} \tilde{P}_1^1 & \cdots & \tilde{P}_1^n \\ \vdots & \ddots & \vdots \\ \tilde{P}_m^1 & \cdots & \tilde{P}_m^n \end{bmatrix}.$$
$$\begin{bmatrix} \lambda_1^1 \tilde{P}_1^1 & \cdots & \lambda_1^n \tilde{P}_1^n \\ \vdots & \ddots & \vdots \\ \lambda_m^1 \tilde{P}_m^1 & \cdots & \lambda_m^n \tilde{P}_m^n \end{bmatrix} = \begin{bmatrix} P_1^1 & \cdots & P_1^n \\ \vdots & \ddots & \vdots \\ P_m^1 & \cdots & P_m^n \end{bmatrix} = \begin{bmatrix} M_1 \\ \vdots \\ M_m \end{bmatrix} \begin{bmatrix} N_1 & \cdots & N_n \end{bmatrix},$$
$$\tilde{Y} = \begin{bmatrix} \rho_1^1 \tilde{H}_1^1 & \cdots & \rho_1^n \tilde{H}_1^n \\ \vdots & \ddots & \vdots \\ \rho_m^1 \tilde{H}_m^1 & \cdots & \rho_m^n \tilde{H}_m^n \end{bmatrix} = \begin{bmatrix} a_1 K_1 R_1 \\ \vdots \\ a_m K_m R_m \end{bmatrix} \begin{bmatrix} S_1 & \cdots & S_n \end{bmatrix}.$$
$$\det(a_i K_i R_i) = 1,$$
$$\det(\rho_i^j \tilde{H}_i^j) = 1.$$
$$\rho_i^j = \det(\tilde{H}_i^j)^{-1/3}.$$
$$\tilde{Y} = U_{3m \times 3n}\, D_{3n \times 3n}\, V_{3n \times 3n}^T.$$
$$\bar{Y} = \bar{U}_{3m \times 3} \sqrt{\bar{D}} \left\{ \bar{V}_{3n \times 3} \sqrt{\bar{D}} \right\}^T = \hat{U}_{3m \times 3} \hat{V}_{3n \times 3}^T,$$
$$\bar{Y} = \left( \hat{U}_{3m \times 3}\, T \right) \left( T^{-1} \hat{V}_{3n \times 3}^T \right),$$
$$T\, T^T = \hat{V}_j^T \hat{V}_j.$$
$$\rho_i^j \tilde{\mathbf{h}}_i^j = \mu_i^j \left( K_i R_i \mathbf{v}_j + K_i \mathbf{t}_i \right),$$
$$\tilde{H}_i^j \sim \tilde{H}_i^l \left( \tilde{H}_k^l \right)^{-1} \tilde{H}_k^j,$$
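The factorization pipeline above can be sketched as follows, under simplifying assumptions: every block H̃_i^j is observed (missing blocks would need the techniques of [13, 14]), each block is rescaled to unit determinant, the stacked Ỹ is truncated to rank 3 by SVD, and the gauge T is recovered from T T^T = V̂_j^T V̂_j via a Cholesky factor. This is an illustrative reading of the equations, not the published implementation; the function name is hypothetical.

```python
import numpy as np

def factorize_homographies(H):
    """H[i][j]: 3x3 homography of object j in camera i (all observed).
    Returns the stacked left factor (a_i K_i R_i blocks) and the
    stacked right factor (S_1 ... S_n blocks)."""
    m, n = len(H), len(H[0])
    Y = np.zeros((3 * m, 3 * n))
    for i in range(m):
        for j in range(n):
            Hij = np.asarray(H[i][j], dtype=float)
            rho = 1.0 / np.cbrt(np.linalg.det(Hij))   # so det(rho * H) = 1
            Y[3*i:3*i+3, 3*j:3*j+3] = rho * Hij
    U, d, Vt = np.linalg.svd(Y)
    Uhat = U[:, :3] * np.sqrt(d[:3])                  # 3m x 3
    Vhat = Vt[:3].T * np.sqrt(d[:3])                  # 3n x 3
    # Gauge fixing: each block S_j = T^{-1} Vhat_j^T must be orthonormal,
    # i.e. T T^T = Vhat_j^T Vhat_j; average over j, take the Cholesky factor.
    G = sum(Vhat[3*j:3*j+3].T @ Vhat[3*j:3*j+3] for j in range(n)) / n
    T = np.linalg.cholesky(G)
    KR = Uhat @ T                                     # stacks a_i K_i R_i
    S = np.linalg.solve(T, Vhat.T)                    # 3 x 3n, stacks S_j
    return KR, S
```

K_i and R_i then follow from an RQ-decomposition of each 3×3 block of the left factor, as in Fig. 2's treatment of H (= KRS).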
$$\sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k \in I_i^j} \left\| \tilde{\mathbf{x}}_i^{jk} - \hat{\mathbf{x}}_i^{jk}\!\left(K_i, R_i, \mathbf{t}_i, S_j, \mathbf{v}_j, \mathbf{X}_{\mathrm{ref}}^k\right) \right\|^2,$$
$$\sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k \in I_i^j} \left\| \tilde{\mathbf{x}}_i^{jk} - \hat{\mathbf{x}}_i^{jk}\!\left(K_i, R_i, \mathbf{t}_i, S_j, \mathbf{v}_j, \mathbf{X}_{\mathrm{ref}}^k(R_{f(k)}, \mathbf{t}_{f(k)})\right) \right\|^2,$$
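The residual inside these cost functions, for a single camera-object pair, can be sketched as below (inhomogeneous N×3 reference points assumed; lens distortion and the per-face correction (R_f, t_f) are omitted). Stacking this vector over all pairs gives the objective that a non-linear least-squares solver such as scipy.optimize.least_squares would minimize.

```python
import numpy as np

def reprojection_residuals(K, R, t, S, v, X_ref, x_obs):
    """Residuals x_tilde - x_hat for one (camera i, object j) pair."""
    Xw = (S @ X_ref.T).T + v            # object frame -> world (N x 3)
    xc = K @ (R @ Xw.T + t[:, None])    # world -> homogeneous image (3 x N)
    x_hat = (xc[:2] / xc[2]).T          # N x 2 predicted pixels
    return (x_obs - x_hat).ravel()      # stacked residual vector
```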
$$x' = x + x \left[ k_1 (x^2 + y^2) + k_2 (x^2 + y^2)^2 \right]$$
$$y' = y + y \left[ k_1 (x^2 + y^2) + k_2 (x^2 + y^2)^2 \right],$$
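A direct transcription of this two-term radial distortion model, applied to normalized image coordinates (illustrative only):

```python
def apply_radial_distortion(x, y, k1, k2):
    """Two-term radial distortion in r^2 = x^2 + y^2, per the model above."""
    r2 = x * x + y * y
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * f, y * f                 # x*f = x + x*(k1*r2 + k2*r2**2)
```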
