Abstract
Traditional machine learning approaches for recognizing modes of transportation rely heavily on hand-crafted feature extraction methods, which require domain knowledge. We therefore propose a hybrid deep learning model, the Deep Convolutional Bidirectional-LSTM (DCBL), which combines convolutional and bidirectional LSTM layers and is trained directly on raw sensor data to predict transportation modes. We compare our model to the traditional machine learning approach of training Support Vector Machines and Multilayer Perceptron models on extracted features. In our experiments, DCBL outperforms the feature-based methods in terms of accuracy while simplifying the data processing pipeline. The models are trained on the Sussex-Huawei Locomotion-Transportation (SHL) dataset. The submission of our team, Vahan, to the SHL recognition challenge uses an ensemble of DCBL models trained on raw data with different combinations of sensors and window sizes, and achieved an F1-score of 0.96 on our test data.
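As a hedged illustration (not the authors' code), the "raw sensor data" pipeline the abstract describes typically begins by segmenting a continuous multichannel stream into fixed-size windows that a convolutional front end can consume directly, with no hand-crafted features. The window length, stride, and sampling rate below are hypothetical, chosen only for the example:

```python
import numpy as np

def segment_windows(stream: np.ndarray, window: int, stride: int) -> np.ndarray:
    """Split a (time, channels) raw sensor stream into fixed-size windows.

    Returns an array of shape (num_windows, window, channels), suitable as
    direct input to a convolutional layer -- no feature extraction needed.
    """
    n = (stream.shape[0] - window) // stride + 1
    return np.stack([stream[i * stride : i * stride + window] for i in range(n)])

# Example: 10 s of synthetic 3-axis accelerometer data at a hypothetical 100 Hz.
stream = np.random.randn(1000, 3)
windows = segment_windows(stream, window=500, stride=250)
print(windows.shape)  # (3, 500, 3)
```

Varying the `window` argument is one way an ensemble over "different window sizes", as mentioned in the abstract, could be realized: each member model sees the same stream segmented at a different temporal scale.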
Original language | English (US) |
---|---|
Title of host publication | Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers |
Publisher | Association for Computing Machinery |
Pages | 1606-1615 |
Number of pages | 10 |
ISBN (Print) | 9781450359665 |
DOIs | |
State | Published - Oct 8 2018 |
Externally published | Yes |
Bibliographical note
KAUST Repository Item: Exported on 2022-06-24. Acknowledgements: This research was supported in part by the NIH Center of Excellence for Mobile Sensor Data-to-Knowledge (MD2K) under award 1-U54EB020404-01, the U.S. Army Research Laboratory and the UK Ministry of Defence under Agreement Number W911NF-16-3-0001, the National Science Foundation under awards #CNS-1636916 and 1640813, and the King Abdullah University of Science and Technology (KAUST) through its Sensor Innovation research program. Any findings in this material are those of the author(s) and do not reflect the views of any of the above funding agencies. The U.S. and U.K. Governments are authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.
This publication acknowledges KAUST support, but has no KAUST affiliated authors.