
Detailed Information

Refining Geometry from Depth Sensors using IR Shading Images

DC Field: Value

dc.contributor.author: Choe, Gyeongmin
dc.contributor.author: Park, Jaesik
dc.contributor.author: Tai, Yu-Wing
dc.contributor.author: Kweon, In So
dc.date.accessioned: 2024-05-09T04:13:52Z
dc.date.available: 2024-05-09T04:13:52Z
dc.date.created: 2024-05-09
dc.date.issued: 2017-03
dc.identifier.citation: International Journal of Computer Vision, Vol.122 No.1, pp.1-16
dc.identifier.issn: 0920-5691
dc.identifier.uri: https://hdl.handle.net/10371/201318
dc.description.abstract: We propose a method to refine geometry of 3D meshes from a consumer level depth camera, e.g. Kinect, by exploiting shading cues captured from an infrared (IR) camera. A major benefit to using an IR camera instead of an RGB camera is that the IR images captured are narrow band images that filter out most undesired ambient light, which makes our system robust against natural indoor illumination. Moreover, for many natural objects with colorful textures in the visible spectrum, the subjects appear to have a uniform albedo in the IR spectrum. Based on our analyses on the IR projector light of the Kinect, we define a near light source IR shading model that describes the captured intensity as a function of surface normals, albedo, lighting direction, and distance between light source and surface points. To resolve the ambiguity in our model between the normals and distances, we utilize an initial 3D mesh from the Kinect fusion and multi-view information to reliably estimate surface details that were not captured and reconstructed by the Kinect fusion. Our approach directly operates on the mesh model for geometry refinement. We ran experiments on our algorithm for geometries captured by both the Kinect I and Kinect II, as the depth acquisition in Kinect I is based on a structured-light technique and that of the Kinect II is based on a time-of-flight technology. The effectiveness of our approach is demonstrated through several challenging real-world examples. We have also performed a user study to evaluate the quality of the mesh models before and after our refinements.
dc.language: English
dc.publisher: Kluwer Academic Publishers
dc.title: Refining Geometry from Depth Sensors using IR Shading Images
dc.type: Article
dc.identifier.doi: 10.1007/s11263-016-0937-y
dc.citation.journaltitle: International Journal of Computer Vision
dc.identifier.wosid: 000394421800001
dc.identifier.scopusid: 2-s2.0-84988391768
dc.citation.endpage: 16
dc.citation.number: 1
dc.citation.startpage: 1
dc.citation.volume: 122
dc.description.isOpenAccess: Y
dc.contributor.affiliatedAuthor: Park, Jaesik
dc.type.docType: Article
dc.description.journalClass: 1
dc.subject.keywordAuthor: RGB-D sensor
dc.subject.keywordAuthor: Kinect
dc.subject.keywordAuthor: Infrared
dc.subject.keywordAuthor: IR
dc.subject.keywordAuthor: Geometry refinement
dc.subject.keywordAuthor: Shading image
dc.subject.keywordAuthor: Shape from shading
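
The near-light IR shading model summarized in the abstract ties the captured IR intensity to the surface normals, albedo, lighting direction, and the distance between the light source and each surface point. The sketch below is a minimal illustration of such a model, assuming Lambertian reflectance and an inverse-square falloff; it is not the paper's exact formulation, and all names (ir_shading_intensity, light_pos, etc.) are hypothetical.

    import numpy as np

    def ir_shading_intensity(points, normals, albedo, light_pos):
        # Hypothetical near-light shading sketch: the intensity at each surface
        # point scales with a Lambertian term max(0, n . l) and an assumed
        # inverse-square falloff with the distance to the IR projector.
        to_light = light_pos - points            # (N, 3) vectors from surface points to the light
        dist = np.linalg.norm(to_light, axis=1)  # (N,) distances to the light source
        light_dir = to_light / dist[:, None]     # normalized per-point lighting directions
        lambert = np.clip((normals * light_dir).sum(axis=1), 0.0, None)
        return albedo * lambert / dist**2        # relative IR intensity per point

    # Example: a point one unit below the light, facing straight up.
    points = np.array([[0.0, 0.0, 0.0]])
    normals = np.array([[0.0, 0.0, 1.0]])
    light_pos = np.array([0.0, 0.0, 1.0])
    print(ir_shading_intensity(points, normals, 0.8, light_pos))  # ~[0.8]

As the abstract notes, a model of this form leaves an ambiguity between normals and distances; the paper resolves it with the initial Kinect fusion mesh and multi-view information, which a toy evaluator like this does not address.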

Related Researcher: Park, Jaesik

  • College of Engineering
  • Dept. of Computer Science and Engineering
Research Area: Computer Graphics, Computer Vision, Machine Learning, Robotics
