MotionMixer: MLP-based 3D Human Body Pose Forecasting

In this work, we present MotionMixer, an efficient 3D human body pose forecasting model based solely on multi-layer perceptrons (MLPs). MotionMixer learns the spatial-temporal 3D body pose dependencies by sequentially mixing both modalities. Given a stacked sequence of 3D body poses, a spatial MLP extracts fine-grained spatial dependencies of the body joints. The interaction of the body joints over time is then modelled by a temporal MLP. The spatial-temporal mixed features are finally aggregated and decoded to obtain the future motion. To calibrate the influence of each time step in the pose sequence, we make use of squeeze-and-excitation (SE) blocks. We evaluate our approach on the Human3.6M, AMASS, and 3DPW datasets using the standard evaluation protocols. For all evaluations, we demonstrate state-of-the-art performance while using fewer parameters. Our code is available at: https://github.com/MotionMLP/MotionMixer
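The abstract's pipeline (spatial mixing over joints, temporal mixing over time steps, SE-based recalibration of time steps) can be sketched in a few lines of NumPy. This is an illustrative simplification under assumed shapes, a flattened (T × J) pose sequence, with toy layer sizes and initialization, not the authors' exact architecture; see the linked repository for the real implementation.

```python
import numpy as np

def mlp(x, w1, b1, w2, b2):
    # Two-layer MLP with ReLU, applied along the last axis of x.
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

def se_gate(x, w1, b1, w2, b2):
    # Squeeze-and-excitation over the time axis: pool over joint
    # features, compute one sigmoid gate per time step, and rescale.
    s = x.mean(axis=-1)                        # squeeze: (T,)
    g = np.maximum(s @ w1 + b1, 0.0)           # excitation MLP
    g = 1.0 / (1.0 + np.exp(-(g @ w2 + b2)))   # gates in (0, 1), shape (T,)
    return x * g[:, None]

def init(d_in, d_hid, d_out, rng):
    # Toy initialization for a two-layer MLP (illustrative only).
    return (rng.standard_normal((d_in, d_hid)) * 0.02, np.zeros(d_hid),
            rng.standard_normal((d_hid, d_out)) * 0.02, np.zeros(d_out))

rng = np.random.default_rng(0)
T, J, H = 10, 48, 64          # time steps, flattened joint dims, hidden width
x = rng.standard_normal((T, J))   # stacked sequence of 3D body poses

# Spatial mixing: MLP across the joint dimension of each time step.
x = x + mlp(x, *init(J, H, J, rng))       # residual connection

# Temporal mixing: MLP across the time dimension of each joint feature.
x = x + mlp(x.T, *init(T, H, T, rng)).T

# SE block calibrating the influence of each time step.
y = se_gate(x, *init(T, T // 2, T, rng))
print(y.shape)   # (10, 48): mixed features, ready to aggregate and decode
```

In the full model these blocks are stacked, and the mixed features are aggregated and decoded into the future pose sequence.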

Benchmark results (Task: Human Pose Forecasting · Dataset: Human3.6M · Model: MotionMixer)

Metric                          | Value | Global Rank
------------------------------- | ----- | -----------
MAR, walking, 400 ms            | 0.58  | #4
MAR, walking, 1,000 ms          | 0.73  | #4
Average MPJPE (mm) @ 400 ms     | 59.3  | #6
Average MPJPE (mm) @ 1,000 ms   | 111.0 | #6
