Achieving Single-Sensor Complex Activity Recognition from Multi-Sensor Training Data

26 Feb 2020  ·  Lago Paula, Matsuki Moe, Inoue Sozo ·

In this study, we propose a method for single-sensor-based activity recognition that is trained with data from multiple sensors. There is no doubt that the performance of complex activity recognition systems improves when enough sensors of sufficient quality are available; however, such rich sensor setups may not be feasible in real-life situations for reasons such as user comfort, privacy, battery preservation, and cost. In many cases only one device, such as a smartphone, is available, and achieving high accuracy with a single sensor is challenging, especially for complex activities. Our method combines representation learning with feature mapping to leverage the multi-sensor information available during training while relying on a single sensor at test time or in real usage. Our results show that the proposed approach improves the F1-score of complex activity recognition by up to 17% compared to training on the same single-sensor data, in a new-user scenario.
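The core idea of training with multiple sensors while testing with one can be illustrated with a minimal sketch. The snippet below is a hypothetical simplification, not the paper's actual pipeline: it learns a least-squares feature mapping from single-sensor features into a richer multi-sensor feature space (standing in for the learned representation), so that at test time features from the lone sensor can be projected into that space before classification. All variable names, dimensions, and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 6 features from one sensor (e.g. a phone),
# 18 features from the full multi-sensor setup available at training time.
n_train, d_single, d_multi = 200, 6, 18

# Synthetic stand-ins for extracted features (real data would come from
# windows of accelerometer/gyroscope signals, etc.).
X_single = rng.normal(size=(n_train, d_single))
W_true = rng.normal(size=(d_single, d_multi))
X_multi = X_single @ W_true + 0.1 * rng.normal(size=(n_train, d_multi))

# Feature mapping: least-squares regression from the single-sensor
# features to the multi-sensor representation.
W, *_ = np.linalg.lstsq(X_single, X_multi, rcond=None)

# At test time only the single sensor is available; project its features
# into the multi-sensor space, then feed them to any classifier trained there.
x_test = rng.normal(size=(1, d_single))
x_mapped = x_test @ W
print(x_mapped.shape)
```

A linear map is the simplest choice; the same structure applies if the mapping is a neural network trained jointly with the shared representation.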


Categories


Human-Computer Interaction
