This article is part of the series Anthropocentric Video Analysis: Tools and Applications.

Open Access Research Article

Monocular 3D Tracking of Articulated Human Motion in Silhouette and Pose Manifolds

Feng Guo1 and Gang Qian1,2*

Author Affiliations

1 Department of Electrical Engineering, Arizona State University, Tempe, AZ 85287-9309, USA

2 Arts, Media and Engineering Program, Department of Electrical Engineering, Arizona State University, Tempe, AZ 85287-8709, USA

EURASIP Journal on Image and Video Processing 2008, 2008:326896 doi:10.1155/2008/326896

The electronic version of this article is the complete one and can be found online at: http://jivp.eurasipjournals.com/content/2008/1/326896


Received: 1 February 2007
Revisions received: 24 July 2007
Accepted: 29 January 2008
Published: 18 February 2008

© 2008 The Author(s).

This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper presents a robust computational framework for monocular 3D tracking of human movement. The main innovation of the proposed framework is to exploit the underlying data structure of the body silhouette and pose spaces by constructing low-dimensional silhouette and pose manifolds, establishing intermanifold mappings, and performing tracking in these manifolds using a particle filter. In addition, a novel vectorized silhouette descriptor is introduced to achieve a low-dimensional, noise-resilient silhouette representation. The proposed articulated motion tracker is view-independent, self-initializing, and capable of maintaining multiple kinematic trajectories. By using the learned mapping from the silhouette manifold to the pose manifold, particle sampling is informed by the current image observation, resulting in improved sample efficiency. Good tracking results have been obtained using both synthetic and real videos.
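
The abstract's key mechanism is a particle filter whose particles live on a learned low-dimensional pose manifold, with a learned silhouette-to-pose mapping used to draw observation-informed proposals. The Python sketch below is a minimal illustration of that sampling scheme, not the authors' implementation: the manifold dimensions, the linear map W standing in for the learned intermanifold mapping, the Gaussian likelihood, and the random-walk dynamics are all placeholder assumptions.

```python
import numpy as np

# Illustrative dimensions only; the paper's learned manifolds may differ.
SIL_DIM = 8        # silhouette-manifold embedding dimension (assumed)
POSE_DIM = 3       # pose-manifold embedding dimension (assumed)
N_PARTICLES = 200

rng = np.random.default_rng(0)

# Stand-in for the learned silhouette-to-pose intermanifold mapping:
# a fixed random linear map, purely so the sketch runs end to end.
W = rng.normal(size=(POSE_DIM, SIL_DIM))

def silhouette_to_pose(z_sil):
    """Map a silhouette-manifold point to the pose manifold (placeholder)."""
    return W @ z_sil

def likelihood(particles, z_sil):
    """Toy likelihood: Gaussian agreement with the observation-predicted pose.
    In the paper the likelihood would come from the image observation itself."""
    pred = silhouette_to_pose(z_sil)
    d2 = np.sum((particles - pred) ** 2, axis=1)
    return np.exp(-0.5 * d2 / 0.1)

def step(particles, z_sil, mix=0.5, noise=0.05):
    """One particle-filter step with observation-informed sampling.

    A fraction `mix` of particles is proposed around the pose predicted
    from the current silhouette observation; the remainder diffuse from
    the previous posterior via a simple random walk (assumed dynamics).
    """
    n_obs = int(mix * len(particles))
    pred = silhouette_to_pose(z_sil)
    from_obs = pred + noise * rng.normal(size=(n_obs, POSE_DIM))
    from_dyn = particles[n_obs:] + noise * rng.normal(
        size=(len(particles) - n_obs, POSE_DIM))
    proposal = np.vstack([from_obs, from_dyn])

    w = likelihood(proposal, z_sil)
    w /= w.sum()
    idx = rng.choice(len(proposal), size=len(proposal), p=w)  # resample
    return proposal[idx], proposal[idx].mean(axis=0)          # posterior, estimate

# Toy run: track a synthetic silhouette-embedding trajectory.
particles = rng.normal(size=(N_PARTICLES, POSE_DIM))
for t in range(10):
    z_sil = np.sin(0.3 * t + np.arange(SIL_DIM))  # fake silhouette embedding
    particles, pose_est = step(particles, z_sil)
    print(t, np.round(pose_est, 3))
```

Mixing observation-driven proposals with dynamics-driven ones is what the abstract refers to as improved sample efficiency: particles are concentrated near poses consistent with the current silhouette rather than scattered by the motion model alone.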
