Consensus Skeleton for Non-Rigid Space-Time Registration

Computer Graphics Forum (Special Issue of Eurographics 2010)

Qian Zheng1    Andrei Sharf1    Andrea Tagliasacchi2   Baoquan Chen1   Hao Zhang2   Alla Sheffer3   Daniel Cohen-Or4  
1SIAT, Chinese Academy of Sciences, China
2Simon Fraser University, Canada
3University of British Columbia, Canada
4Tel Aviv University, Israel


Figure 1: Consensus skeleton extraction from a topologically complex scene of two dancing mannequins (left). Top: single-view scans of four dancing poses with initially extracted skeletons. Bottom: consensus skeletons deformed to individual poses. Note that the deformed consensus skeleton is generally smoother and better connected than independently extracted skeletons.


We introduce the notion of consensus skeletons for non-rigid space-time registration of a deforming shape. Instead of basing the registration on point features, which are local and sensitive to noise, we adopt the curve skeleton of the shape as a global and descriptive feature for the task. Our method uses no template and only assumes that the skeletal structure of the captured shape remains largely consistent over time. Such an assumption is generally weaker than those relying on large overlap of point features between successive frames, allowing for sparser acquisition across time. Building our registration framework on top of the low-dimensional skeleton-time structure avoids heavy processing of dense point or volumetric data, while skeleton consensusization provides robust handling of incompatibilities between per-frame skeletons. To register point clouds from all frames, we deform them by their skeletons, mirroring the skeleton registration process, to jump-start a non-rigid ICP. We present results for non-rigid space-time registration under sparse and noisy spatio-temporal sampling, including cases where the data was captured from only a single view.
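The consensusization idea described above can be illustrated with a minimal sketch. This is not the paper's algorithm: it assumes skeletons are already in joint-wise correspondence and uses a simple centroid translation as a stand-in for the paper's full non-rigid skeleton alignment, then averages per-joint positions to form the consensus. All function names are hypothetical.

```python
import numpy as np

def align_to_reference(skel, ref):
    """Rigidly translate a skeleton (N x 3 joint positions) so its
    centroid matches the reference's. A placeholder for the paper's
    actual non-rigid skeleton registration step."""
    return skel + (ref.mean(axis=0) - skel.mean(axis=0))

def consensus_skeleton(skeletons):
    """Average per-joint positions over all frames after aligning
    each per-frame skeleton to the first frame."""
    ref = skeletons[0]
    aligned = [align_to_reference(s, ref) for s in skeletons]
    return np.mean(aligned, axis=0)
```

In the actual pipeline, the consensus skeleton would then be deformed back onto each frame's pose, and the resulting skeleton correspondences would drive the point cloud deformation that initializes non-rigid ICP.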







Figure 2: Another example of our c-skeleton pipeline. Left to right: From an initial set of deforming point clouds, we extract skeletons per frame. We compute the consensus skeleton (middle) and deform it back to the frames’ poses. Using the skeleton correspondence, we can deform the point clouds into a common consensus pose and register them together (right).


Figure 3: Overview of our consensusization workflow. Left to right: We compute the c-skeleton from a set of noisy skeletons (left-middle); we deform the c-skeleton onto the original frame poses and compute the skeleton-driven point cloud registration. On the right, we show the registered superimposed point clouds (top) and the final registration after ICP refinement (bottom).



We thank Zhan Song, Ke Xie and Fei-Long Yan for capturing the mannequin data, and Gao-Jin Wen, Liang-Liang Nan, Zhang-Lin Cheng and Yang-Yan Li for fruitful discussions. This work was supported in part by the following grants: National Natural Science Foundation of China (60903116, 60902104), National High-tech R&D Program of China (2009AA01Z302), CAS Visiting Professorship for Senior International Scientists, CAS Fellowship for Young International Scientists, Shenzhen Basic Research Foundation (JC200903170443A), Shenzhen Science & Technology Foundation (SY200806300211A) and NSERC (No. 611370).



  title = {Consensus Skeleton for Non-Rigid Space-Time Registration},
  author = {Qian Zheng and Andrei Sharf and Andrea Tagliasacchi and Baoquan Chen and Hao Zhang and Alla Sheffer and Daniel Cohen-Or},
  journal = {Computer Graphics Forum (Special Issue of Eurographics)},
  volume = {29},
  number = {2},
  pages = {635--644},
  year = {2010}