Publications

DeepVehicleSense: An Energy-efficient Transportation Mode Recognition Leveraging Staged Deep Learning over Sound Samples

Cited 0 times in Web of Science; cited 1 time in Scopus
Authors

Lee, Sungyong; Lee, Jinsung; Lee, Kyunghan

Issue Date
2023-06
Publisher
Institute of Electrical and Electronics Engineers
Citation
IEEE Transactions on Mobile Computing, Vol. 22, No. 6, pp. 3270-3286
Abstract
In this paper, we present DeepVehicleSense, a new transportation mode recognition system for smartphones that is widely applicable to mobile context-aware services. DeepVehicleSense aims to achieve three performance objectives at once: high accuracy, low latency, and low power consumption, by exploiting sound characteristics captured by the built-in microphone while riding candidate transportation modes. To attain high energy efficiency, DeepVehicleSense adopts hierarchical accelerometer-based triggers that minimize activation of the smartphone's microphone. Further, to achieve high accuracy and low latency, DeepVehicleSense uses non-linear filters that best extract transportation sound samples. For recognition of five different transportation modes, we design a deep-learning-based sound classifier using a novel deep neural network architecture with multiple branches. Our staged inference technique significantly reduces runtime and energy consumption while maintaining high accuracy for the majority of samples. Using 263 hours of data collected with seven different Android phone models, we demonstrate that DeepVehicleSense achieves a recognition accuracy of 97.44% with sound samples of only 2 seconds, at an average power consumption of 35.08 mW for all-day monitoring.
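To make the staged-inference idea concrete, here is a minimal sketch in PyTorch of an early-exit, multi-branch classifier gated by an accelerometer trigger. All specifics below, including the variance-based trigger, the layer sizes, the feature dimension, and the 0.9 confidence threshold, are illustrative assumptions, not the architecture or parameters from the paper.

```python
import torch
import torch.nn as nn

ACCEL_VAR_THRESHOLD = 0.05  # hypothetical trigger threshold, not from the paper


def should_sample_microphone(accel_window: torch.Tensor) -> bool:
    """First-stage trigger sketch: wake the microphone only when accelerometer
    variance suggests possible vehicle motion. The paper uses hierarchical
    triggers; this single variance check is a simplified stand-in."""
    return accel_window.var().item() > ACCEL_VAR_THRESHOLD


class StagedSoundClassifier(nn.Module):
    """Multi-branch classifier with an early exit, sketching the staged
    inference idea. Layer sizes and the exit threshold are made up."""

    def __init__(self, n_features: int = 64, n_classes: int = 5,
                 exit_threshold: float = 0.9):
        super().__init__()
        self.exit_threshold = exit_threshold
        self.stage1 = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU())
        self.branch1 = nn.Linear(128, n_classes)  # cheap early-exit branch
        self.stage2 = nn.Sequential(nn.Linear(128, 128), nn.ReLU())
        self.branch2 = nn.Linear(128, n_classes)  # full-depth branch

    @torch.no_grad()
    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Single-sample inference: features has shape (n_features,).
        h = self.stage1(features)
        probs = torch.softmax(self.branch1(h), dim=-1)
        # Confident early exit: the deeper stage never executes, which is
        # where the runtime and energy savings for "easy" samples come from.
        if probs.max().item() >= self.exit_threshold:
            return probs
        return torch.softmax(self.branch2(self.stage2(h)), dim=-1)


# Hypothetical usage: gate the microphone, then classify a sound sample.
model = StagedSoundClassifier()
accel = torch.randn(100)               # stand-in accelerometer window
if should_sample_microphone(accel):
    sound_features = torch.randn(64)   # stand-in for filtered sound features
    mode_probs = model(sound_features)
    print(mode_probs.argmax().item())  # index of the predicted mode
```

The energy saving in this sketch comes from two gates: the microphone path runs only when the cheap accelerometer check fires, and the deeper network stage runs only when the early branch is uncertain.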
ISSN
1536-1233
URI
https://hdl.handle.net/10371/184086
DOI
https://doi.org/10.1109/TMC.2022.3141392
Files in This Item:
There are no files associated with this item.