Skeleton-Aware Networks for Deep Motion Retargeting

SIGGRAPH 2020

Kfir Aberman**1   Peizhuo Li**2   Dani Lischinski3   Olga Sorkine-Hornung4   Daniel Cohen-Or5   Baoquan Chen2

1Beijing Film Academy   2Peking University   3Hebrew University   4ETH Zurich   5Tel-Aviv University

**Denotes Equal Contribution
Paper
Code
Data

Overview Video

Abstract & Method

We introduce a novel deep learning framework for data-driven motion retargeting between skeletons that may have different structures, yet correspond to homeomorphic graphs. Importantly, our approach learns how to retarget without requiring any explicit pairing between the motions in the training set. We leverage the fact that different homeomorphic skeletons can be reduced to a common primal skeleton by a sequence of edge-merging operations, which we refer to as skeletal pooling.
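To make the pooling operation concrete, here is a minimal Python sketch of edge merging, assuming a skeleton given as a list of (parent, child) bones with one feature vector per bone; the function name and data layout are illustrative, not the authors' implementation. Bones that meet at a degree-2 joint are merged and their features averaged, so repeated application reduces any homeomorphic skeleton to the same primal skeleton:

from collections import defaultdict

def skeletal_pool(bones, feats):
    """One pooling step: merge two bones that meet at a degree-2 joint.

    bones: list of (parent, child) joint pairs forming a tree.
    feats: one feature list per bone; merged bones get the average.
    """
    degree = defaultdict(int)
    for p, c in bones:
        degree[p] += 1
        degree[c] += 1
    for i, (p, c) in enumerate(bones):
        if degree[c] == 2:  # joint c lies mid-chain between bones i and j
            j = next(k for k, b in enumerate(bones) if k != i and b[0] == c)
            merged_bone = (p, bones[j][1])
            merged_feat = [(a + b) / 2 for a, b in zip(feats[i], feats[j])]
            keep = [k for k in range(len(bones)) if k not in (i, j)]
            return ([bones[k] for k in keep] + [merged_bone],
                    [feats[k] for k in keep] + [merged_feat])
    return bones, feats  # no degree-2 joints left: primal skeleton reached

# Example: the chain 0-1-2-3 (e.g. a twice-subdivided spine) pools to one bone.
bones, feats = [(0, 1), (1, 2), (2, 3)], [[1.0], [2.0], [3.0]]
while True:
    pooled = skeletal_pool(bones, feats)
    if pooled[0] == bones:
        break
    bones, feats = pooled
print(bones)  # [(0, 3)]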

Our main technical contribution is the introduction of novel differentiable convolution, pooling, and unpooling operators, which are skeleton-aware, meaning that they explicitly account for the skeleton's hierarchical structure and joint adjacency.
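To illustrate what skeleton-aware means in practice, the PyTorch sketch below restricts a standard temporal convolution so that each joint's output depends only on joints within its skeletal neighborhood, by zeroing the corresponding blocks of a Conv1d weight. This is a sketch in the spirit of the operator, under our own assumptions about the data layout (batch, joints x channels, frames), not the paper's exact parameterization:

import torch
import torch.nn as nn

class SkeletalConv(nn.Module):
    """Temporal convolution masked by skeletal adjacency (illustrative)."""

    def __init__(self, neighbors, in_ch, out_ch, kernel_size=3):
        # neighbors[j]: joints adjacent to joint j (including j itself)
        super().__init__()
        num_joints = len(neighbors)
        self.conv = nn.Conv1d(num_joints * in_ch, num_joints * out_ch,
                              kernel_size, padding=kernel_size // 2)
        mask = torch.zeros_like(self.conv.weight)
        for j, nbrs in enumerate(neighbors):
            for n in nbrs:  # joint j may only read from its neighbors
                mask[j * out_ch:(j + 1) * out_ch,
                     n * in_ch:(n + 1) * in_ch] = 1.0
        self.register_buffer("mask", mask)

    def forward(self, x):  # x: (batch, joints * in_ch, frames)
        return nn.functional.conv1d(x, self.conv.weight * self.mask,
                                    self.conv.bias,
                                    padding=self.conv.padding[0])

# Example: a 5-joint chain, 1-ring neighborhoods, 4 -> 8 channels per joint.
neighbors = [[0, 1], [0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4]]
layer = SkeletalConv(neighbors, in_ch=4, out_ch=8)
out = layer(torch.randn(2, 5 * 4, 60))  # -> (2, 5 * 8, 60)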

Our operators form the building blocks of a new deep motion processing framework that embeds motions into a common latent space, shared by a collection of homeomorphic skeletons. Retargeting can thus be achieved simply by encoding into, and decoding from, this latent space.
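Concretely, given a trained encoder for skeleton A and a trained decoder for skeleton B (hypothetical stand-ins here, e.g. stacks of skeleton-aware convolutions with pooling and unpooling), cross-structural retargeting is a single forward pass; a minimal sketch:

def retarget(motion_a, encoder_a, decoder_b):
    """Retarget a clip from skeleton A to skeleton B via the shared space.

    encoder_a embeds motions of skeleton A into the latent space of the
    common primal skeleton; decoder_b unpools and decodes that latent
    into a motion for skeleton B.
    """
    latent = encoder_a(motion_a)  # encode into the shared latent space
    return decoder_b(latent)      # decode as skeleton B's motion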

Cross-Structural Motion Retargeting

Our method can retarget motion to asymmetric characters, or to characters with missing or extra bones:

Intra-Structural Retargeting - Comparison

Our framework can also be used to retarget motion between skeletons that share the same structure but differ in their proportions. Here, our method is compared to a naive adaptation of CycleGAN [Zhu et al. 2017] to the motion domain, and to the NKN of Villegas et al. [2018]. The outputs are overlaid with the ground truth (green skeleton):

Cross-Structural Retargeting - Comparison

Our method is compared to a naive adaptation of CycleGAN [Zhu et al. 2017] to the motion domain, and to a cross-structural variant of the NKN of Villegas et al. [2018]. The outputs are overlaid with the ground truth (green skeleton):

BibTeX

@article{aberman2020skeleton,
  author = {Aberman, Kfir and Li, Peizhuo and Lischinski, Dani and Sorkine-Hornung, Olga and Cohen-Or, Daniel and Chen, Baoquan},
  title = {Skeleton-Aware Networks for Deep Motion Retargeting},
  journal = {ACM Transactions on Graphics (TOG)},
  volume = {39},
  number = {4},
  pages = {62},
  year = {2020},
  publisher = {ACM}
}