Learning Graph Convolutional Network for Skeleton-based Human Action Recognition by Neural Searching

11 Nov 2019  ·  Wei Peng, Xiaopeng Hong, Haoyu Chen, Guoying Zhao

Human action recognition from skeleton data, fueled by Graph Convolutional Networks (GCNs), has attracted much attention due to their powerful capability of modeling non-Euclidean structured data. However, many existing GCN methods use a pre-defined graph that is fixed throughout the entire network, which can lose implicit joint correlations. In addition, the mainstream spectral GCN relies on a first-order approximation, so higher-order connections are not well captured. Considerable effort is therefore required to explore a better GCN architecture. To address these problems, we turn to Neural Architecture Search (NAS) and propose the first automatically designed GCN for skeleton-based action recognition. Specifically, we enrich the search space with multiple dynamic graph modules after fully exploring the spatial-temporal correlations between nodes. We also introduce multiple-hop modules to overcome the limited representational capacity caused by the first-order approximation. Moreover, a sampling- and memory-efficient evolution strategy is proposed to search for an optimal architecture for this task. The resulting architecture demonstrates the effectiveness of higher-order approximation and of dynamic graph modeling with temporal interactions, which have barely been discussed before. To evaluate the searched model, we conduct extensive experiments on two very large-scale datasets, and the results show that our model achieves state-of-the-art performance.
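The multi-hop idea mentioned in the abstract can be made concrete with a small sketch: instead of propagating joint features only along the first-order skeleton adjacency, a layer can aggregate over powers A^1 ... A^K of the normalized adjacency matrix, so joints up to K edges apart interact in a single layer. The PyTorch module below is an illustrative sketch under that assumption only; the class name, tensor shapes, and per-hop 1x1 convolutions are hypothetical and are not the authors' GCN-NAS implementation.

```python
import torch
import torch.nn as nn


class MultiHopGraphConv(nn.Module):
    """Illustrative k-hop spatial graph convolution over skeleton joints."""

    def __init__(self, in_channels, out_channels, adjacency, max_hop=3):
        super().__init__()
        # Symmetrically normalize the adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}.
        A = adjacency + torch.eye(adjacency.size(0))
        deg_inv_sqrt = A.sum(dim=1).pow(-0.5)
        A_norm = deg_inv_sqrt.unsqueeze(1) * A * deg_inv_sqrt.unsqueeze(0)
        # Pre-compute A^1 ... A^K and store them as non-trainable buffers, shape (K, V, V).
        hops = [torch.matrix_power(A_norm, k) for k in range(1, max_hop + 1)]
        self.register_buffer("hops", torch.stack(hops))
        # One 1x1 convolution (a per-joint linear map) per hop order.
        self.convs = nn.ModuleList(
            nn.Conv2d(in_channels, out_channels, kernel_size=1)
            for _ in range(max_hop)
        )

    def forward(self, x):
        # x: (batch, channels, frames, joints)
        out = 0
        for k, conv in enumerate(self.convs):
            # Mix joint features along the k-th hop adjacency, then transform channels.
            out = out + conv(torch.einsum("nctv,vw->nctw", x, self.hops[k]))
        return out


# Example usage (shapes chosen for illustration, e.g. 25 NTU RGB+D joints):
# A = torch.zeros(25, 25)  # fill with the skeleton edges
# layer = MultiHopGraphConv(3, 64, A, max_hop=3)
# y = layer(torch.randn(8, 3, 300, 25))  # -> (8, 64, 300, 25)
```

In the paper's setting, such multi-hop modules are one kind of candidate block in the NAS search space, alongside dynamic (learned, data-dependent) graph modules, rather than a fixed hand-designed layer.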

Task                               Dataset                    Model    Metric Name    Metric Value  Global Rank
Skeleton Based Action Recognition  Kinetics-Skeleton dataset  GCN-NAS  Accuracy       37.1          #15
Skeleton Based Action Recognition  NTU RGB+D                  GCN-NAS  Accuracy (CV)  95.7          #43
Skeleton Based Action Recognition  NTU RGB+D                  GCN-NAS  Accuracy (CS)  89.4          #48
