Computer Graphics Forum 2025

Xiaotang Zhang¹, Ziyi Chang¹, Qianhui Men², Hubert Shum¹†

¹Durham University · ²University of Bristol · †Corresponding Author
We propose a real-time method for reactive motion synthesis based on the known trajectory of the input character, predicting instant reactions using only historical, user-controlled motions. Our method handles the uncertainty of future movements by introducing an intention predictor, which forecasts key-joint intentions from the historical interaction to make pose prediction more deterministic. The intention is then encoded into the latent space of its reactive motion and matched against a codebook that represents mappings between input and output. The model samples from a categorical distribution for pose generation and strengthens its robustness through adversarial training. Unlike previous offline approaches, our system recursively generates intentions and reactive motions using feedback from earlier steps, enabling real-time, long-term, realistic interactive synthesis. Both quantitative and qualitative experiments show that our approach outperforms other matching-based motion synthesis approaches, delivering superior stability and generalizability. Users can also actively influence the outcome by controlling the moving directions, creating a personalized interaction path that deviates from predefined trajectories.
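The recursive loop described above (predict an intention from history, match it against a codebook, decode a pose, and feed the pose back as new history) can be sketched as follows. All function names, the averaging "predictor", and the nearest-neighbour "matching" are toy placeholders for illustration only, not the released networks:

```python
def predict_intention(history):
    # Placeholder intention predictor: summarize the last two frames
    # of the historical, user-controlled motion into key-joint targets.
    return [sum(frame) / len(frame) for frame in history[-2:]]

def match_codebook(intention, codebook):
    # Nearest-neighbour lookup standing in for the learned
    # categorical codebook-matching step.
    key = sum(intention)
    return min(codebook, key=lambda code: abs(code - key))

def generate_reaction(history, codebook, steps=3):
    # Recursively generate reactive poses, feeding each output
    # back into the history so long-term synthesis stays coherent.
    poses = []
    for _ in range(steps):
        intention = predict_intention(history)
        code = match_codebook(intention, codebook)
        pose = [code] * len(history[-1])  # toy "decode" of the matched code
        poses.append(pose)
        history.append(pose)              # feedback from earlier steps
    return poses
```

The feedback append is the part that distinguishes this online loop from offline approaches: each step conditions on poses the model itself produced, so generation can continue indefinitely in real time.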
- Install Unity3D Hub and import the Unity3D project.
- Open the scene `Unity3D/Assets/Demo/Motion Synthesis/ReactionSynthesis`.
- Load the pre-trained models from `PyTorch/Checkpoints` in the `__main__` function of `PyTorch/Models/CodebookMatching/Inference.py`.
- Run `Inference.py` in Python and wait a few seconds for the checkpoints to load.
- Click the play button in Unity3D and wait a few seconds for the socket connection.
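The last two steps connect Unity3D to the Python process over a local socket: each frame, Unity sends the user-controlled motion and the Python side returns the predicted reactive pose. The sketch below shows one plausible shape of such a frame server. The port, the float32 message layout, and `predict_reaction` are all hypothetical stand-ins; the actual protocol is defined in `Inference.py`:

```python
import socket
import struct
import threading

HOST, PORT = "127.0.0.1", 6000  # hypothetical port; the real one is set in Inference.py
POSE_DIM = 4                    # toy pose size; real skeletons carry far more values

def predict_reaction(pose):
    # Placeholder for the intention predictor + codebook-matching network.
    return [v * 0.5 for v in pose]

def serve_one_frame(server_sock):
    # Accept one connection, read one pose, reply with one reaction.
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(POSE_DIM * 4)  # 4 bytes per little-endian float32
        pose = struct.unpack(f"<{POSE_DIM}f", data)
        conn.sendall(struct.pack(f"<{POSE_DIM}f", *predict_reaction(pose)))

def request_reaction(pose):
    # What the Unity side does each frame: send a pose, receive a reaction.
    with socket.create_connection((HOST, PORT)) as s:
        s.sendall(struct.pack(f"<{POSE_DIM}f", *pose))
        data = s.recv(POSE_DIM * 4)
    return list(struct.unpack(f"<{POSE_DIM}f", data))

if __name__ == "__main__":
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen(1)
    t = threading.Thread(target=serve_one_frame, args=(server,), daemon=True)
    t.start()
    print(request_reaction([1.0, 2.0, 3.0, 4.0]))  # -> [0.5, 1.0, 1.5, 2.0]
    t.join()
    server.close()
```

The few-second waits in the steps above correspond to checkpoint loading on the Python side and the TCP handshake once Unity starts playing.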
The Unity3D visualization tool is based on AI4Animation and the network's backbone is from Codebook Matching. We gratefully acknowledge Sebastian Starke for his outstanding open-source contributions.
@inproceedings{zhang2025real,
title={Real-time and Controllable Reactive Motion Synthesis via Intention Guidance},
author={Zhang, Xiaotang and Chang, Ziyi and Men, Qianhui and Shum, Hubert and others},
booktitle={Computer Graphics Forum},
year={2025},
organization={Wiley}
}
