
3D Reconstruction of Multiple Objects from Dynamic Scenes and Learning Based Depth Super Resolution : 동적 장면으로부터의 다중 물체 3차원 복원 기법 및 학습 기반의 깊이 초해상도 기법

DC Field: Value
dc.contributor.advisor: 이경무 (Kyoung Mu Lee)
dc.contributor.author: 신영민 (Young Min Shin)
dc.date.accessioned: 2017-07-13T07:03:12Z
dc.date.available: 2017-07-13T07:03:12Z
dc.date.issued: 2014-02
dc.identifier.other: 000000018233
dc.identifier.uri: https://hdl.handle.net/10371/118987
dc.description: Doctoral dissertation -- Seoul National University Graduate School: Department of Electrical and Computer Engineering, February 2014. Advisor: Kyoung Mu Lee (이경무).
dc.description.abstract: In this dissertation, a framework for reconstructing the 3-dimensional shapes of multiple objects and a method for enhancing the resolution of 3-dimensional models, especially of human faces, are proposed. Conventional 3D reconstruction from multiple views applies to static scenes, in which the configuration of objects is fixed while the images are taken. In the proposed framework, the main goal is to reconstruct 3D models of multiple objects in a more general setting, where the configuration of the objects varies among views. This problem is solved by object-centered decomposition of the dynamic scenes using an unsupervised co-recognition approach. Unlike conventional motion segmentation algorithms, which require a small-motion assumption between consecutive views, the co-recognition method provides reliable and accurate correspondences of the same object among unordered, wide-baseline views. To segment each object region, the 3D sparse points obtained from structure-from-motion are utilized. These points are relatively reliable, since both their geometric relations and photometric consistency are considered simultaneously when they are generated. The sparse points serve as automatic seed points for a seeded-segmentation algorithm, which makes the interactive segmentation work in a non-interactive way. Experiments on various challenging real image sequences demonstrate the effectiveness of the proposed approach, especially in the presence of abrupt, independent motions of objects.
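The seeded-segmentation idea above can be illustrated with a minimal sketch. The dissertation's actual pipeline derives its seeds by projecting sparse structure-from-motion points into each view; here the seeds are hard-coded, the region-growing criterion is a simple intensity threshold, and all names and parameters are illustrative, not taken from the thesis:

```python
from collections import deque

import numpy as np


def seeded_segmentation(image, seeds, tol=0.15):
    """Grow object regions from seed pixels by breadth-first flood fill.

    image : 2D float array (intensities in [0, 1])
    seeds : dict mapping label -> list of (row, col) seed pixels, standing
            in for projected 3D structure-from-motion points
    tol   : maximum intensity difference to absorb a neighbouring pixel
    """
    labels = np.zeros(image.shape, dtype=int)  # 0 = unassigned
    queue = deque()
    for lab, pts in seeds.items():
        for (r, c) in pts:
            labels[r, c] = lab
            queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < image.shape[0] and 0 <= cc < image.shape[1]
                    and labels[rr, cc] == 0
                    and abs(image[rr, cc] - image[r, c]) <= tol):
                labels[rr, cc] = labels[r, c]  # absorb into the seed's region
                queue.append((rr, cc))
    return labels


# Toy scene: a dark object block on a bright background, with one seed
# inside the object (label 1) and one on the background (label 2).
img = np.full((6, 6), 0.9)
img[1:4, 1:4] = 0.1
seg = seeded_segmentation(img, {1: [(2, 2)], 2: [(5, 5)]})
```

Because every region starts from a seed rather than a user click, the interactive-style segmentation runs without user input, mirroring the "non-interactive" use described in the abstract.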
Obtaining a high-density 3D model is also an important issue. Since both the multi-view images used to reconstruct a 3D model and 3D imaging hardware such as time-of-flight cameras or laser scanners have natural upper limits of resolution, a super-resolution method is required to increase the resolution of 3D data. This dissertation presents an algorithm to super-resolve a single human face model represented as a 3D point cloud. Point cloud data is an object-centered 3D representation, in contrast to camera-centered depth images. While much research has been done on the super-resolution of intensity images, and some prior work exists on depth image data, this is the first attempt to super-resolve a single set of 3D point cloud data without additional intensity or depth image observations of the object. The problem is solved by querying a previously learned database that pairs low-resolution 3D data with its corresponding high-resolution data. A Markov Random Field (MRF) model is constructed on the 3D points, and an energy function is formulated as a multi-class labeling problem on the MRF. Experimental results show that the proposed method solves the super-resolution problem with high accuracy.
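The multi-class labeling formulation above can be sketched on a toy problem. In this hedged stand-in, each node picks one high-resolution candidate patch from the learned database; a data cost scores how well the candidate explains the low-resolution observation, and a smoothness cost penalizes incompatible neighbouring candidates. The thesis would minimize such an energy with a proper MRF inference method, whereas this sketch simply enumerates all labelings on a tiny chain, and every name and number is illustrative:

```python
import itertools

import numpy as np


def mrf_labeling(data_cost, smooth_cost):
    """Exhaustive multi-class MRF labeling on a chain of nodes.

    data_cost   : (n_nodes, n_labels) array; data_cost[i, k] measures how
                  poorly candidate patch k explains the low-resolution
                  observation at node i
    smooth_cost : (n_labels, n_labels) array; penalty for adjacent nodes
                  choosing incompatible high-resolution candidates
    Returns the label vector minimizing the total energy, and that energy.
    """
    n_nodes, n_labels = data_cost.shape
    best, best_energy = None, np.inf
    for labels in itertools.product(range(n_labels), repeat=n_nodes):
        # Unary (data) terms plus pairwise (smoothness) terms along the chain.
        energy = sum(data_cost[i, k] for i, k in enumerate(labels))
        energy += sum(smooth_cost[labels[i], labels[i + 1]]
                      for i in range(n_nodes - 1))
        if energy < best_energy:
            best, best_energy = list(labels), energy
    return best, best_energy


# Three nodes, two candidate patches each: the middle node weakly prefers
# candidate 1, but the smoothness cost keeps the labeling consistent.
data = np.array([[0., 1.], [1., 0.], [0., 1.]])
smooth = np.array([[0., 2.], [2., 0.]])
labels, energy = mrf_labeling(data, smooth)
```

Exhaustive search is exponential in the number of nodes, so it is only viable for this demonstration; the point is the energy decomposition into unary and pairwise terms, which is what makes the super-resolution task a labeling problem on the MRF.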
dc.description.tableofcontents:
Abstract
Contents
List of Figures
List of Tables
1 Introduction
1.1 3D Computer Vision
1.2 Dissertation Goal and Contribution
1.3 Organization of Dissertation
2 Background
2.1 Motion Segmentation
2.2 Image Super Resolution
3 Multi-Object Reconstruction from Dynamic Scenes
3.1 Introduction
3.2 Related Work
3.3 Overview
3.4 Recognition
3.4.1 Co-Recognition
3.4.2 Integration of the Sub-Results
3.5 Camera Calibration
3.6 Object Boundary Refinement
3.7 3D Reconstruction
3.8 Experiments and Results
3.8.1 Qualitative Results
3.8.2 Quantitative Results
3.8.3 Analysis
3.9 Summary
4 Super Resolution for 3D Face Reconstruction
4.1 Introduction
4.2 Related Work
4.3 Overview
4.4 Proposed Model
4.4.1 Local Patch
4.4.2 Likelihood
4.4.3 Prior
4.5 Implementation
4.5.1 Training Data
4.5.2 Building Markov Network
4.5.3 Reconstructing Super-Resolved 3D Model
4.6 Experiments and Results
4.6.1 Quantitative Results
4.6.2 Qualitative Results
4.7 Summary
5 Conclusion
5.1 Summary of Dissertation
5.2 Future Works
Bibliography
Abstract (in Korean)
dc.format: application/pdf
dc.format.extent: 22841524 bytes
dc.format.medium: application/pdf
dc.language.iso: en
dc.publisher: 서울대학교 대학원 (Seoul National University Graduate School)
dc.subject: Computer Vision
dc.subject: 3D Reconstruction
dc.subject: Dynamic Scenes
dc.subject: Co-recognition
dc.subject: Multiple Objects
dc.subject: Super-resolution
dc.subject: Point Cloud
dc.subject.ddc: 621
dc.title: 3D Reconstruction of Multiple Objects from Dynamic Scenes and Learning Based Depth Super Resolution
dc.title.alternative: 동적 장면으로부터의 다중 물체 3차원 복원 기법 및 학습 기반의 깊이 초해상도 기법
dc.type: Thesis
dc.contributor.AlternativeAuthor: Young Min Shin
dc.description.degree: Doctor
dc.citation.pages: 126
dc.contributor.affiliation: 공과대학 전기·컴퓨터공학부 (College of Engineering, Department of Electrical and Computer Engineering)
dc.date.awarded: 2014-02