Sanchez Guinea, Alejandro; Sarabchian, Mehran; Mühlhäuser, Max (2022)
Improving Wearable-Based Activity Recognition Using Image Representations.
In: Sensors, 2022, 22 (5)
doi: 10.26083/tuprints-00021490
Article, Secondary publication, Publisher's Version
Text: sensors-22-01840-v2.pdf (983 kB), Copyright: CC BY 4.0 International (Creative Commons, Attribution)
| Item Type: | Article |
|---|---|
| Type of entry: | Secondary publication |
| Title: | Improving Wearable-Based Activity Recognition Using Image Representations |
| Language: | English |
| Date: | 2022 |
| Place of Publication: | Darmstadt |
| Year of primary publication: | 2022 |
| Publisher: | MDPI |
| Journal or Publication Title: | Sensors |
| Volume of the journal: | 22 |
| Issue Number: | 5 |
| Collation: | 21 pages |
| DOI: | 10.26083/tuprints-00021490 |
| Origin: | Secondary publication via sponsored Golden Open Access |
| Abstract: | Activity recognition based on inertial sensors is an essential task in mobile and ubiquitous computing. To date, the best performing approaches to this task are based on deep learning models. Although their performance has been steadily improving, a number of issues remain. Specifically, in this paper we focus on the dependence of today's state-of-the-art approaches on complex ad hoc deep learning models: convolutional neural networks (CNNs), recurrent neural networks (RNNs), or a combination of both, which require specialized knowledge and considerable effort to construct and tune optimally. To address this issue, we propose an approach that automatically transforms inertial sensor time-series data into images whose pixels represent patterns found over time, allowing even a simple CNN to outperform complex ad hoc deep learning models that combine RNNs and CNNs for activity recognition. We conducted an extensive evaluation on seven benchmark datasets that are among the most relevant in activity recognition. Our results demonstrate that our approach outperforms the state of the art in all cases, based on image representations generated through a process that is easy to implement, modify, and extend, without the need to develop complex deep learning models. (A schematic code sketch of the time-series-to-image idea follows the record below.) |
| Status: | Publisher's Version |
| URN: | urn:nbn:de:tuda-tuprints-214908 |
| Additional Information: | This article belongs to the Special Issue Sensors-Based Human Action and Emotion Recognition (see related work) |
| Keywords: | human activity recognition; image representation; CNNs; IMU; inertial sensors; wearable sensors |
| Classification DDC: | 000 Generalities, computers, information > 004 Computer science |
| Divisions: | 20 Department of Computer Science > Telecooperation |
| Date Deposited: | 07 Jun 2022 12:17 |
| Last Modified: | 22 Aug 2022 09:39 |
| URI: | https://tuprints.ulb.tu-darmstadt.de/id/eprint/21490 |
| PPN: | 495422541 |
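The abstract describes encoding fixed-length windows of inertial sensor time series as images so that a deliberately simple CNN can perform the classification. The paper's actual transformation is not reproduced here; the following is a minimal sketch of the general idea, assuming a hypothetical per-channel min-max grayscale encoding with channels stacked as image rows. All names (`window_to_image`, `SimpleCNN`) and the window/channel sizes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import torch
import torch.nn as nn

def window_to_image(window: np.ndarray) -> np.ndarray:
    """Encode one IMU window (T samples x C channels) as a grayscale image.

    Hypothetical encoding for illustration only: per-channel min-max
    normalization to [0, 1], with one image row per sensor channel.
    The paper's actual time-series-to-image transformation may differ.
    """
    lo = window.min(axis=0, keepdims=True)
    hi = window.max(axis=0, keepdims=True)
    norm = (window - lo) / np.maximum(hi - lo, 1e-8)  # T x C in [0, 1]
    return np.ascontiguousarray(norm.T)               # C x T "image"

class SimpleCNN(nn.Module):
    """A plain CNN, in the spirit of the abstract's claim that image
    inputs let a simple model compete with ad hoc CNN+RNN stacks."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Usage: a 128-sample window of 6 IMU channels (3-axis accel + 3-axis gyro).
window = np.random.randn(128, 6)
img = torch.from_numpy(window_to_image(window)).float()[None, None]  # 1x1xCxT
logits = SimpleCNN(n_classes=6)(img)
```

Stacking channels as image rows is only one plausible encoding; the point it illustrates, per the abstract, is that once temporal patterns are made spatial, no recurrent layers or elaborate architecture tuning are needed.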