Sensor Data Augmentation by Resampling for Contrastive Learning in Human Activity Recognition

5 Sep 2021  ·  Jinqiang Wang, Tao Zhu, Jingyuan Gan, Liming Chen, Huansheng Ning, Yaping Wan

While deep learning has contributed to the advancement of sensor-based Human Activity Recognition (HAR), it is usually a costly and challenging supervised task that requires a large amount of labeled data. To alleviate this issue, contrastive learning has been applied to sensor-based HAR. Data augmentation is an essential part of contrastive learning and has a significant impact on the performance of downstream tasks. However, currently popular augmentation methods do not achieve competitive performance in contrastive learning for sensor-based HAR. Motivated by this issue, we propose a new sensor data augmentation method based on resampling, which simulates more realistic activity data by varying the sampling frequency to maximize the coverage of the sampling space. In addition, we extend MoCo, a popular contrastive learning framework, to MoCoHAR for HAR. The resampling augmentation method is evaluated on two contrastive learning frameworks, SimCLRHAR and MoCoHAR, using the UCI-HAR, MotionSensor, and USC-HAD datasets. With mean F1-score as the evaluation metric, the experimental results show that, given a small amount of labeled data, the resampling augmentation method outperforms all state-of-the-art methods on both SimCLRHAR and MoCoHAR. The results also show that not all data augmentation methods have positive effects in a contrastive learning framework.
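To make the idea concrete, below is a minimal sketch of a resampling-style augmentation for a fixed-length sensor window: the window is re-interpolated as if it had been sampled at a randomly chosen frequency and then returned at the original length, producing an alternative view for contrastive training. The function name `resample_augment`, the ratio range, and the crop/stretch handling are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def resample_augment(window, min_ratio=0.7, max_ratio=1.3, rng=None):
    """Return an augmented view of `window` (shape: timesteps x channels).

    Illustrative sketch: simulates a change of sampling frequency by
    linear interpolation; parameters are assumptions, not from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    t, c = window.shape
    ratio = rng.uniform(min_ratio, max_ratio)      # simulated sampling-frequency change
    new_t = max(2, int(round(t * ratio)))

    # Interpolate each channel onto the new time grid, as if the signal
    # had been sampled at the new frequency.
    old_grid = np.linspace(0.0, 1.0, t)
    new_grid = np.linspace(0.0, 1.0, new_t)
    resampled = np.stack(
        [np.interp(new_grid, old_grid, window[:, ch]) for ch in range(c)], axis=1
    )

    if new_t >= t:
        # Higher simulated frequency: keep a random contiguous crop of the
        # original length.
        start = rng.integers(0, new_t - t + 1)
        out = resampled[start:start + t]
    else:
        # Lower simulated frequency: stretch back to the original length
        # (the intermediate downsampling still alters the signal).
        out = np.stack(
            [np.interp(old_grid, new_grid, resampled[:, ch]) for ch in range(c)], axis=1
        )
    return out.astype(window.dtype)


# Example: create two augmented views of a 128-sample, 6-channel IMU window,
# e.g. as a positive pair for SimCLR- or MoCo-style training.
window = np.random.randn(128, 6).astype(np.float32)
view_1 = resample_augment(window)
view_2 = resample_augment(window)
```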
