Publications

Detailed Information

Fast Point Transformer

Cited 44 times in Web of Science · Cited 57 times in Scopus
Authors

Park, Chunghyun; Jeong, Yoonwoo; Cho, Minsu; Park, Jaesik

Issue Date
2022
Publisher
IEEE
Citation
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol.2022-June, pp.16928-16937
Abstract
The recent success of neural networks enables a better interpretation of 3D point clouds, but processing a large-scale 3D scene remains a challenging problem. Most current approaches divide a large-scale scene into small regions and combine the local predictions together. However, this scheme inevitably involves additional stages for pre- and post-processing and may also degrade the final output due to predictions made from a local perspective. This paper introduces Fast Point Transformer, built on a new lightweight self-attention layer. Our approach encodes continuous 3D coordinates, and the voxel hashing-based architecture boosts computational efficiency. The proposed method is demonstrated on 3D semantic segmentation and 3D detection. The accuracy of our approach is competitive with the best voxel-based method, and our network runs 129 times faster at inference than the state-of-the-art Point Transformer, with a reasonable accuracy trade-off, in 3D semantic segmentation on the S3DIS dataset.
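The two ingredients the abstract names are voxel hashing for efficiency and a positional encoding that keeps the continuous 3D coordinates that plain voxelization discards. The sketch below illustrates that combination in a minimal form: points are hashed to voxel keys (using the classic Teschner-style XOR spatial hash, a common choice, not necessarily the paper's), and each point keeps its continuous offset from its voxel's centroid. Function and variable names here are illustrative, not from the paper's code.

```python
import numpy as np

def voxel_hash(points, voxel_size=0.05):
    """Quantize continuous 3D points to voxels while retaining each point's
    continuous offset from its voxel centroid.

    A minimal illustrative sketch, not the paper's implementation.
    points: (N, 3) float array of continuous coordinates.
    Returns (unique_keys, inverse, offsets).
    """
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # Classic XOR spatial hash over integer voxel coordinates
    # (large-prime mixing; collisions are possible but rare in a demo).
    keys = (voxel_idx[:, 0] * 73856093) ^ \
           (voxel_idx[:, 1] * 19349669) ^ \
           (voxel_idx[:, 2] * 83492791)
    # Group points by voxel key and compute per-voxel centroids.
    uniq, inverse = np.unique(keys, return_inverse=True)
    counts = np.bincount(inverse).astype(float)
    centroids = np.zeros((len(uniq), 3))
    for d in range(3):
        centroids[:, d] = np.bincount(inverse, weights=points[:, d]) / counts
    # Continuous offsets relative to voxel centroids: the within-voxel
    # geometry that a coordinate-aware positional encoding can exploit.
    offsets = points - centroids[inverse]
    return uniq, inverse, offsets
```

With `voxel_size=0.05`, two points 1 cm apart fall in the same voxel and receive the same key but different offsets, which is exactly the information a purely voxel-based network would lose.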
ISSN
1063-6919
URI
https://hdl.handle.net/10371/201294
DOI
https://doi.org/10.1109/CVPR52688.2022.01644
Related Researcher

  • College of Engineering
  • Dept. of Computer Science and Engineering

Research Area: Computer Graphics, Computer Vision, Machine Learning, Robotics

Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.
