Publications

Detailed Information

Line as a Visual Sentence: Context-Aware Line Descriptor for Visual Localization

DC Field                          Value
dc.contributor.author             Yoon, Sungho
dc.contributor.author             Kim, Ayoung
dc.date.accessioned               2023-12-11T00:42:28Z
dc.date.available                 2023-12-11T00:42:28Z
dc.date.created                   2021-11-08
dc.date.issued                    2021-10
dc.identifier.citation            IEEE Robotics and Automation Letters, Vol.6 No.4, pp.8726-8733
dc.identifier.issn                2377-3766
dc.identifier.uri                 https://hdl.handle.net/10371/197699
dc.description.abstract           Along with feature points for image matching, line features provide additional constraints for solving visual geometric problems in robotics and computer vision (CV). Although recent convolutional neural network (CNN)-based line descriptors are promising under viewpoint changes or in dynamic environments, we claim that the CNN architecture has innate disadvantages in abstracting lines of variable length into a fixed-dimensional descriptor. In this letter, we introduce Line-Transformers, which effectively handle lines of variable length. Inspired by natural language processing (NLP) tasks, where neural networks understand and abstract sentences well, we view a line segment as a sentence that contains points (words). By dynamically attending to well-describable points on a line, our descriptor performs well regardless of line length. We also propose line signature networks that share a line's geometric attributes with its neighborhood. Acting as group descriptors, these networks enhance line descriptors by capturing the relative geometry among lines. Finally, we apply the proposed line descriptor and matching in a Point and Line Localization (PL-Loc) pipeline and show that visual localization with feature points can be improved using our line features. We validate the proposed method on homography estimation and visual localization.
dc.language                       English
dc.publisher                      Institute of Electrical and Electronics Engineers Inc.
dc.title                          Line as a Visual Sentence: Context-Aware Line Descriptor for Visual Localization
dc.type                           Article
dc.identifier.doi                 10.1109/LRA.2021.3111760
dc.citation.journaltitle          IEEE Robotics and Automation Letters
dc.identifier.wosid               000706821900001
dc.identifier.scopusid            2-s2.0-85114747529
dc.citation.endpage               8733
dc.citation.number                4
dc.citation.startpage             8726
dc.citation.volume                6
dc.description.isOpenAccess       Y
dc.contributor.affiliatedAuthor   Kim, Ayoung
dc.type.docType                   Article
dc.description.journalClass       1
dc.subject.keywordAuthor          Transformers
dc.subject.keywordAuthor          Visualization
dc.subject.keywordAuthor          Image segmentation
dc.subject.keywordAuthor          Simultaneous localization and mapping
dc.subject.keywordAuthor          Location awareness
dc.subject.keywordAuthor          Task analysis
dc.subject.keywordAuthor          Convolutional neural networks
dc.subject.keywordAuthor          Localization
dc.subject.keywordAuthor          SLAM
dc.subject.keywordAuthor          deep learning for visual perception
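
The abstract above describes abstracting a variable-length line segment, viewed as a sentence of points (words), into a fixed-dimensional descriptor with attention. The snippet below is a minimal, hypothetical PyTorch sketch of that idea, not the authors' released implementation: the class name LineTokenDescriptor, the layer sizes, and the [CLS]-style line token are illustrative assumptions only.

    import torch
    import torch.nn as nn

    class LineTokenDescriptor(nn.Module):
        """Illustrative sketch: turn a variable number of point descriptors
        sampled along a line segment into one fixed-dimensional line
        descriptor, NLP-style (points ~ words, line ~ sentence)."""

        def __init__(self, d_model=128, nhead=4, num_layers=3):
            super().__init__()
            # Learnable "line token", analogous to a [CLS] token in NLP.
            self.line_token = nn.Parameter(torch.zeros(1, 1, d_model))
            encoder_layer = nn.TransformerEncoderLayer(
                d_model=d_model, nhead=nhead, batch_first=True)
            self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)

        def forward(self, point_desc, pad_mask):
            # point_desc: (B, N, d_model) point descriptors sampled along each line
            # pad_mask:   (B, N) True where a position is padding (shorter lines)
            B = point_desc.size(0)
            cls = self.line_token.expand(B, -1, -1)              # (B, 1, d_model)
            tokens = torch.cat([cls, point_desc], dim=1)         # (B, 1+N, d_model)
            # The line token itself is never masked.
            mask = torch.cat(
                [torch.zeros(B, 1, dtype=torch.bool, device=pad_mask.device),
                 pad_mask], dim=1)
            out = self.encoder(tokens, src_key_padding_mask=mask)
            return out[:, 0]                                     # fixed-size line descriptor

    # Usage: two lines with different numbers of sampled points, padded to N=12.
    model = LineTokenDescriptor()
    feats = torch.randn(2, 12, 128)
    mask = torch.zeros(2, 12, dtype=torch.bool)
    mask[0, 8:] = True            # the first line has only 8 valid points
    desc = model(feats, mask)     # shape (2, 128), independent of line length

Because attention weights are computed per token, the padding mask lets lines of any length share the same network while the output dimension stays fixed, which is the property the abstract contrasts with fixed-receptive-field CNN descriptors.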
Files in This Item:
There are no files associated with this item.
