Publications

Detailed Information

Virtual Keyboards With Real-Time and Robust Deep Learning-Based Gesture Recognition

DC Field: Value
dc.contributor.author: Lee, Tae-Ho
dc.contributor.author: Kim, Sunwoong
dc.contributor.author: Kim, Taehyun
dc.contributor.author: Kim, Jin-Sung
dc.contributor.author: Lee, Hyuk-Jae
dc.date.accessioned: 2022-09-30T05:49:17Z
dc.date.available: 2022-09-30T05:49:17Z
dc.date.created: 2022-08-17
dc.date.issued: 2022-08
dc.identifier.citation: IEEE Transactions on Human-Machine Systems, Vol.52 No.4, pp.725-735
dc.identifier.issn: 2168-2291
dc.identifier.uri: https://hdl.handle.net/10371/184809
dc.description.abstract: In head-mounted display devices for augmented reality and virtual reality, external signals are often entered using a virtual keyboard (VKB). Among the various user interfaces for VKBs, hand gestures are widely used because they are fast and intuitive. This work proposes a gesture-recognition (GR)-based VKB algorithm that is accurate in any environment and operates in real time. Specifically, the proposed ambidextrous VKB layouts reduce the total finger travel distance compared with one-hand VKB layouts. Additionally, a fast typing action is proposed that exploits the case in which the previous and current keys are adjacent. To be robust in any environment, we utilize a deep learning (DL)-based GR method in the proposed VKB algorithm. To train the DL networks, seven classes are defined and an automated dataset generation method is proposed to reduce the necessary time and effort. The proposed one-hand VKB layout with the fast typing action shows a 1.5x faster typing speed than the popular ABC keyboard layout. Furthermore, the proposed ambidextrous VKB layout brings an additional 52% improvement compared with the proposed one-hand VKB layout. The proposed DL-based GR method, implemented on the well-known YOLOv3 network, shows a mean average precision of 95% for images that include background colors similar to skin color. For both one-hand and ambidextrous VKBs, the proposed DL-based GR method achieves around 41 frames per second on a software platform, which allows real-time processing.
dc.language: English
dc.publisher: IEEE Systems, Man, and Cybernetics Society
dc.title: Virtual Keyboards With Real-Time and Robust Deep Learning-Based Gesture Recognition
dc.type: Article
dc.identifier.doi: 10.1109/THMS.2022.3165165
dc.citation.journaltitle: IEEE Transactions on Human-Machine Systems
dc.identifier.wosid: 000788982400001
dc.identifier.scopusid: 2-s2.0-85129438652
dc.citation.endpage: 735
dc.citation.number: 4
dc.citation.startpage: 725
dc.citation.volume: 52
dc.description.isOpenAccess: N
dc.contributor.affiliatedAuthor: Kim, Taehyun
dc.contributor.affiliatedAuthor: Lee, Hyuk-Jae
dc.type.docType: Article
dc.description.journalClass: 1
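The abstract above describes mapping detected hand-gesture classes to keystrokes on a VKB layout, with a fast typing action when the previous and current keys are adjacent. Purely for orientation, the following is a minimal, hypothetical sketch of that kind of gesture-to-key mapping; the gesture names, layout grid, adjacency rule, and function names are assumptions made for illustration and are not the authors' implementation or the paper's actual seven classes.

```python
# Illustrative sketch only: NOT the paper's code. Assumes a generic
# object detector (e.g., a YOLOv3-style model) that returns, per frame,
# a gesture class and the key under the fingertip box center.
from dataclasses import dataclass
from typing import Optional

# Hypothetical gesture labels standing in for the paper's seven classes.
GESTURE_CLASSES = ["hover", "press", "release", "swipe_left",
                   "swipe_right", "fist", "open_hand"]

# A toy one-hand VKB layout: key label -> (row, column) on a grid.
LAYOUT = {ch: (i // 9, i % 9) for i, ch in enumerate("abcdefghijklmnopqrstuvwxyz")}


@dataclass
class Detection:
    gesture: str           # one of GESTURE_CLASSES
    key_under_finger: str  # key whose cell contains the fingertip


def adjacent(a: str, b: str) -> bool:
    """True if two keys are neighbors on the grid, i.e., the situation in
    which a 'fast typing action' could accept a shortened gesture."""
    (r1, c1), (r2, c2) = LAYOUT[a], LAYOUT[b]
    return max(abs(r1 - r2), abs(c1 - c2)) == 1


def type_key(det: Detection, prev_key: Optional[str]) -> Optional[str]:
    """Map one detection to an emitted character, or None if no keystroke."""
    if det.gesture == "press":
        return det.key_under_finger
    # Hypothetical fast path: a hover over a key adjacent to the previously
    # typed key is accepted as a keystroke without a full press gesture.
    if det.gesture == "hover" and prev_key and adjacent(prev_key, det.key_under_finger):
        return det.key_under_finger
    return None
```

In such a loop, `prev_key` would be carried across frames so the adjacency shortcut can fire; everything else (classes, layout, thresholds) would come from the trained detector and the chosen VKB layout described in the article.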
Appears in Collections:
Files in This Item:
There are no files associated with this item.

Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.
