Publications

Probabilistic TSDF Fusion Using Bayesian Deep Learning for Dense 3D Reconstruction with a Single RGB Camera

Cited 2 times in Web of Science; cited 2 times in Scopus
Authors

Kim, Hanjun; Lee, Beomhee

Issue Date
2020-05
Publisher
IEEE
Citation
2020 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), pp.8623-8629
Abstract
In this paper, we address a 3D reconstruction problem using depth prediction from a single RGB image. With recent advances in deep learning, depth prediction achieves high performance; however, because of the discrepancy between training and test environments, 3D reconstruction can be vulnerable to the uncertainty of the predicted depth. To account for this uncertainty and achieve robust 3D reconstruction, we adopt a Bayesian deep learning framework. Conventional Bayesian deep learning requires a large amount of time and GPU memory to perform Monte Carlo sampling. To address this problem, we propose a lightweight Bayesian neural network, consisting of a U-net structure with summation-based skip connections, that runs in real time. The estimated uncertainty is used in probabilistic TSDF fusion for dense 3D reconstruction by maximizing the posterior of the TSDF value per voxel. As a result, a global TSDF robust to erroneous depth values is obtained, and dense 3D reconstruction from this global TSDF becomes more accurate. To evaluate the depth prediction and 3D reconstruction performance of our method, we used two official datasets and demonstrated that the proposed method outperforms conventional methods.
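
The fusion step described above can be illustrated with a small, hypothetical sketch (not the paper's exact formulation): assume each voxel keeps a Gaussian belief over its TSDF value, and that the Monte Carlo depth samples from the Bayesian network supply an observation variance; maximizing the per-voxel posterior under these Gaussian assumptions reduces to a product-of-Gaussians update. All function names, shapes, and numbers below are illustrative assumptions.

import numpy as np

def mc_dropout_depth_stats(depth_samples):
    """Predictive mean and variance of depth from T stochastic forward
    passes (Monte Carlo sampling); depth_samples has shape (T, H, W).
    Illustrative helper, not the paper's network."""
    return depth_samples.mean(axis=0), depth_samples.var(axis=0)

def fuse_voxel(mu_prior, var_prior, tsdf_obs, var_obs):
    """Per-voxel Gaussian fusion sketch: the returned mean maximizes the
    posterior of the TSDF value under a Gaussian prior (current belief)
    and a Gaussian likelihood whose variance comes from the predicted
    depth uncertainty."""
    var_post = (var_prior * var_obs) / (var_prior + var_obs)
    mu_post = var_post * (mu_prior / var_prior + tsdf_obs / var_obs)
    return mu_post, var_post

# Toy usage: a voxel currently believed to be near the surface (mu = 0.02)
# receives a high-variance observation, which therefore barely moves it.
mu, var = fuse_voxel(mu_prior=0.02, var_prior=0.01, tsdf_obs=0.10, var_obs=0.5)
print(mu, var)

Under this sketch, observations with large predicted depth variance contribute little to the fused TSDF, which is one way erroneous depth values can be down-weighted.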
ISSN
1050-4729
URI
https://hdl.handle.net/10371/186523
DOI
https://doi.org/10.1109/ICRA40945.2020.9196663