Abstract

In recent years, many methods have been proposed to improve image matching between images taken from different viewpoints. However, these methods still fail to achieve stable results, especially under large viewpoint changes. In this paper, an image matching method based on affine transformation of local image areas is proposed. First, locally stable regions are extracted from the reference image and the test image and transformed into circular areas according to their second-order moments. Then, scale-invariant features are detected and matched within the transformed regions. Finally, an epipolar constraint based on the fundamental matrix is applied to eliminate wrong correspondences. The goal of our method is not to increase the invariance of the detector but to improve the final matching performance. The experimental results demonstrate that, compared with traditional detectors, the proposed method provides a significant improvement in robustness when matching different-viewpoint images of both 2D and 3D scenes. Moreover, its efficiency is greatly improved compared with affine scale-invariant feature transform (Affine-SIFT).
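The final filtering step described above, keeping only correspondences that satisfy the epipolar constraint of a fundamental matrix, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the fundamental matrix `F` (here for a synthetic pure horizontal translation, so epipolar lines are y' = y), the point lists, and the threshold `eps` are all assumed for the example; in practice `F` would be estimated robustly from the candidate matches.

```python
import numpy as np

def epipolar_inliers(pts1, pts2, F, eps):
    """Keep pairs (p, q) whose symmetric epipolar distance
    d^2(q, F p) + d^2(p, F^T q) is below eps^2."""
    def point_line_d2(pt, line):
        # squared distance from a 2D point to the line a*x + b*y + c = 0
        a, b, c = line
        return (a * pt[0] + b * pt[1] + c) ** 2 / (a * a + b * b)

    inliers = []
    for p, q in zip(pts1, pts2):
        ph = np.array([p[0], p[1], 1.0])  # homogeneous coordinates
        qh = np.array([q[0], q[1], 1.0])
        d2 = point_line_d2(q, F @ ph) + point_line_d2(p, F.T @ qh)
        if d2 < eps ** 2:
            inliers.append((p, q))
    return inliers

# Illustrative F for a pure horizontal translation: epipolar lines are y' = y.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
pts1 = [(10.0, 5.0), (20.0, 8.0)]
pts2 = [(14.0, 5.0), (25.0, 14.0)]   # second pair violates the constraint
matches = epipolar_inliers(pts1, pts2, F, eps=1.0)
# only the first pair survives the constraint
```

The symmetric distance (point-to-epipolar-line in both images) is used rather than the one-sided distance so that a match is rejected if it is inconsistent in either view.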

© 2012 Optical Society of America


References


  1. S. N. Sinha, J.-M. Frahm, M. Pollefeys, and Y. Genc, “Feature tracking and matching in video using programmable graphics hardware,” Mach. Vis. Appl. 22, 207–217 (2011).
  2. M. Brown and D. G. Lowe, “Automatic panoramic image stitching using invariant features,” Int. J. Comput. Vis. 74, 59–73 (2007).
  3. B. E. Kratochvil, L. X. Dong, L. Zhang, and B. J. Nelson, “Image-based 3D reconstruction using helical nanobelts for localized rotations,” J. Microsc. 237, 122–135 (2010).
  4. C. Harris and M. Stephens, “A combined corner and edge detector,” in Proceedings of the 4th Alvey Vision Conference (Plessey, 1988), pp. 147–152.
  5. K. Mikolajczyk and C. Schmid, “Scale and affine invariant interest point detectors,” Int. J. Comput. Vis. 60, 63–86 (2004).
  6. S. Smith and J. Brady, “SUSAN: a new approach to low-level image processing,” Int. J. Comput. Vis. 23, 45–78 (1997).
  7. D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vis. 60, 91–110 (2004).
  8. H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, “Speeded-up robust features (SURF),” Comput. Vis. Image Underst. 110, 346–359 (2008).
  9. K. Mikolajczyk, “Interest point detection invariant to affine transformations,” Ph.D. dissertation (Institut National Polytechnique de Grenoble, 2002).
  10. J. Matas, O. Chum, M. Urban, and T. Pajdla, “Robust wide-baseline stereo from maximally stable extremal regions,” Image Vis. Comput. 22, 761–767 (2004).
  11. T. Tuytelaars and L. V. Gool, “Matching widely separated views based on affine invariant regions,” Int. J. Comput. Vis. 59, 61–85 (2004).
  12. N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2005), pp. 886–893.
  13. A. Barla, F. Odone, and A. Verri, “Histogram intersection kernel for image classification,” in Proceedings of IEEE International Conference on Image Processing (IEEE, 2003), pp. III-513–III-516.
  14. J. M. Morel and G. Yu, “ASIFT: a new framework for fully affine invariant image comparison,” SIAM J. Imaging Sci. 2, 1–31 (2009).
  15. G. Yu and J. M. Morel, “A fully affine invariant image comparison method,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (IEEE, 2009), pp. 1597–1600.
  16. Y. Yu, K. Huang, W. Chen, and T. Tan, “A novel algorithm for view and illumination invariant image matching,” IEEE Trans. Image Process. 21, 229–240 (2012).
  17. A. Baumberg, “Reliable feature matching across widely separated views,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2000), pp. 774–781.
  18. K. Mikolajczyk, T. Tuytelaars, C. Schmid, A. Zisserman, J. Matas, F. Schaffalitzky, T. Kadir, and L. V. Gool, “A comparison of affine region detectors,” Int. J. Comput. Vis. 65, 43–72 (2005).
  19. P.-E. Forssen and D. G. Lowe, “Shape descriptors for maximally stable extremal regions,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2007), pp. 1–8.
  20. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge University, 2000).
  21. http://www.robots.ox.ac.uk/~vgg/research/affine
  22. http://www.ipol.im/pub/algo/my_affine_sift/
  23. http://cmp.felk.cvut.cz/~wbsdemo/demo/




Figures (6)

Fig. 1. Transformation between a pair of corresponding local areas.

Fig. 2. Two images with viewpoint change.

Fig. 3. The relationship among the general framework, ASIFT, ISIFT, and the proposed method.

Fig. 4. Reference images of the datasets used for the experimental evaluation.

Fig. 5. Number of correct matches for different-viewpoint images in the four image datasets with SIFT, Harris-Affine, Hessian-Affine, MSER, and MM-SIFT, respectively.

Fig. 6. Partial matching results of the proposed MM-SIFT among the four image groups. (a) Frontal view and 60 deg view in graf. (b) Frontal view and 80 deg view in magazine. (c) Reference image and test image with a viewpoint angle of 80 deg in Adam. (d) Result for wash.

Tables (3)

Table 1. Test Result of the Two Images in Fig. 2

Table 2. Average Numbers of Matches over the Image Pairs (m/n)

Table 3. Computation Times for ASIFT and MM-SIFT for the Datasets of Fig. 4 (s)

Equations (14)

(1) \( s\,[u, v, 1]^T = M_1 M_2\,[X, Y, Z, 1]^T \)

(2) \( \begin{cases} u_2 = a_{11} u_1 + a_{12} v_1 + b_1 \\ v_2 = a_{21} u_1 + a_{22} v_1 + b_2 \end{cases} \)

(3) \( A = H_\lambda R_1(\psi)\, T(\theta)\, R_2(\phi) \)

(4) \( \mu_L = B^T \mu_R B \)

(5) \( \mu_L = \lambda_L E, \quad \mu_R = \lambda_R E \)

(6) \( (\lambda_L / \lambda_R)\, E = B^T B \)

(7) \( B = H_\lambda R_1(\psi) \)

(8) \( [H(X - X_g)]^T [H(X - X_g)] = r^2 \)

(9) \( (X - X_g)^T \mu^{-1} (X - X_g) = 1 \)

(10) \( H = r \left[ \mu_{20}(\mu_{20}\mu_{02} - \mu_{11}^2) \right]^{-1/2} \begin{bmatrix} (\mu_{20}\mu_{02} - \mu_{11}^2)^{1/2} & 0 \\ -\mu_{11} & \mu_{20} \end{bmatrix} \)

(11) \( G(F) = \{ (p, q) \in S \mid d^2(q, Fp) + d^2(p, F^T q) < \varepsilon^2 \} \)

(12) \( \dfrac{1 + (|\Gamma_t| - 1)\, 180^\circ / 72^\circ}{K \times K} = \dfrac{1 + 5 \times 2.5}{3 \times 3} = 1.5 \)

(13) \( 6.5 \times \text{SIFT feature-computation} + 27.25 \times \text{SIFT feature-comparison} \)

(14) \( \text{SIFT feature-computation} + \text{SIFT feature-comparison} \)
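The moment-based normalization of Eqs. (8)–(10) can be checked numerically. The sketch below is an assumption-laden illustration, not the paper's code: it estimates the central second-order moments μ₂₀, μ₀₂, μ₁₁ of a 2D point cloud standing in for a detected region, builds the transform H of Eq. (10), and verifies that the transformed points have isotropic second moments r²E, i.e., the elliptical region becomes circular.

```python
import numpy as np

def moment_circle_transform(points, r=1.0):
    """Build H from central second-order moments (Eq. (10)) and apply it,
    mapping an anisotropic (elliptical) region to a circular one."""
    Xg = points.mean(axis=0)                 # centroid X_g
    d = points - Xg
    mu20 = np.mean(d[:, 0] ** 2)
    mu02 = np.mean(d[:, 1] ** 2)
    mu11 = np.mean(d[:, 0] * d[:, 1])
    det = mu20 * mu02 - mu11 ** 2            # determinant of the moment matrix
    H = r * (mu20 * det) ** -0.5 * np.array([[np.sqrt(det), 0.0],
                                             [-mu11, mu20]])
    return (H @ d.T).T, H

# Anisotropic, correlated point cloud (illustrative data).
rng = np.random.default_rng(0)
A = np.array([[3.0, 1.0], [0.5, 2.0]])
pts = rng.normal(size=(2000, 2)) @ A.T + np.array([4.0, -1.0])

normalized, H = moment_circle_transform(pts, r=2.0)
cov = normalized.T @ normalized / len(normalized)
# H mu H^T = r^2 * E holds by construction, so cov is (r^2) * identity
```

Because H is derived from the empirical moments themselves, the identity H μ Hᵀ = r²E holds exactly (up to floating-point error) for any input cloud, which is what makes matching scale-invariant features in the normalized regions viewpoint-tolerant.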
