Abstract
We present a method for three-dimensional (3D) tracking of a human finger from a monocular sequence of images. To recover the third dimension from the two-dimensional images, we use the fact that the motion of the human arm is highly constrained owing to the dependencies between elbow and forearm and the physical constraints on joint angles. We use these anthropometric constraints to derive a 3D trajectory of a gesticulating arm. The system is fully automated and does not require human intervention. The system presented can be used as a visualization tool, as a user-input interface, or as part of some gesture-analysis system in which 3D information is important.
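The core idea — recovering the lost depth dimension from known limb lengths — can be sketched as follows. This is a minimal illustration, not the authors' algorithm: it assumes orthographic projection and a known parent-joint 3D position, and the function name and parameters are hypothetical. The known limb length constrains the child joint's depth up to the usual monocular reflective ambiguity.

```python
import math

def recover_depth(p2d_joint, p3d_parent, limb_length):
    """Recover the depth (z) of a joint from its 2D image position.

    Illustrative sketch only. Assumes orthographic projection, so the
    image (x, y) equal the world (x, y), and that the parent joint's
    3D position and the connecting limb's length are known. Returns
    both depth candidates, reflecting the sign ambiguity inherent to
    monocular recovery.
    """
    dx = p2d_joint[0] - p3d_parent[0]
    dy = p2d_joint[1] - p3d_parent[1]
    planar_sq = dx * dx + dy * dy
    dz_sq = limb_length ** 2 - planar_sq
    if dz_sq < 0:
        raise ValueError("2D joint separation exceeds limb length")
    dz = math.sqrt(dz_sq)
    return (p3d_parent[2] + dz, p3d_parent[2] - dz)

# Example: shoulder at the origin, upper-arm length 30 (arbitrary
# units), elbow projecting to (18, 0) in image coordinates.
print(recover_depth((18.0, 0.0), (0.0, 0.0, 0.0), 30.0))
```

In a full system such as the one described, physical joint-angle limits would then prune one of the two depth candidates at each joint, which is how the anthropometric constraints resolve the ambiguity.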
© 2004 Optical Society of America