RGB-to-TSDF: Direct TSDF Prediction from a Single RGB Image for Dense 3D Reconstruction

Cited 4 times in Web of Science; cited 6 times in Scopus
Authors

Kim, Hanjun; Moon, Jiyoun; Lee, Beomhee

Issue Date
2019-11
Publisher
IEEE
Citation
2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), pp.6714-6720
Abstract
In this paper, we present a novel method to predict 3D TSDF voxels from a single image for dense 3D reconstruction. 3D reconstruction from RGB images has two inherent problems: scale ambiguity and sparse reconstruction. With the advent of deep learning, depth prediction from a single RGB image has addressed these problems. However, as the predicted depth is typically noisy, de-noising methods such as TSDF fusion must be adopted for accurate scene reconstruction. To integrate the two-step pipeline of depth prediction and TSDF generation, we design an RGB-to-TSDF network that directly predicts 3D TSDF voxels from a single RGB image. The TSDF generated by our network is more efficient, in terms of both time and accuracy, than a TSDF converted from depth prediction. We also use the predicted TSDF for more accurate and robust camera pose estimation to complete the scene reconstruction. The global TSDF is updated from the TSDF prediction and pose estimation, and a dense isosurface can then be extracted. In the experiments, we evaluate our TSDF prediction and camera pose estimation results against conventional methods.
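The fusion step the abstract refers to — integrating per-frame truncated signed distance observations into a global voxel grid by weighted averaging — can be sketched as follows. This is a minimal illustrative implementation of standard TSDF fusion (in the Curless–Levoy style), not the paper's network or code; the function name, argument names, and parameter choices are assumptions for the sketch.

```python
import numpy as np

def tsdf_fusion_update(tsdf, weights, depth, K, cam_pose, voxel_origin,
                       voxel_size, trunc=0.1):
    """One TSDF fusion step: integrate a depth map into a global voxel
    grid by per-voxel weighted averaging. Illustrative sketch only."""
    nx, ny, nz = tsdf.shape
    # World coordinates of every voxel center.
    ii, jj, kk = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                             indexing="ij")
    pts_w = voxel_origin + voxel_size * np.stack([ii, jj, kk], axis=-1)
    # Transform to the camera frame (cam_pose is camera-to-world, 4x4).
    R, t = cam_pose[:3, :3], cam_pose[:3, 3]
    pts_c = (pts_w.reshape(-1, 3) - t) @ R  # equivalent to R.T @ (p - t)
    # Project voxel centers into the depth image with intrinsics K.
    z = pts_c[:, 2]
    u = np.round(K[0, 0] * pts_c[:, 0] / z + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * pts_c[:, 1] / z + K[1, 2]).astype(int)
    h, w = depth.shape
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    # Signed distance along the viewing ray, truncated to [-1, 1].
    sdf = d - z
    valid &= (d > 0) & (sdf > -trunc)
    tsdf_new = np.clip(sdf / trunc, -1.0, 1.0)
    # Weighted running average per voxel.
    flat_t = tsdf.reshape(-1).copy()
    flat_w = weights.reshape(-1).copy()
    flat_t[valid] = ((flat_w[valid] * flat_t[valid] + tsdf_new[valid])
                     / (flat_w[valid] + 1.0))
    flat_w[valid] += 1.0
    return flat_t.reshape(tsdf.shape), flat_w.reshape(tsdf.shape)
```

After repeated updates from successive frames, the dense isosurface mentioned in the abstract would be extracted from the zero level set of the fused grid, e.g. with marching cubes.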
ISSN
2153-0858
URI
https://hdl.handle.net/10371/186935
DOI
https://doi.org/10.1109/IROS40897.2019.8968566
Files in This Item:
There are no files associated with this item.
Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.