
Context-Driven Hybrid Image Inpainting

DC Fields

dc.contributor.advisor: 김태환
dc.contributor.author:
dc.date.accessioned: 2017-07-14T03:01:06Z
dc.date.available: 2017-07-14T03:01:06Z
dc.date.issued: 2015-08
dc.identifier.other: 000000032285
dc.identifier.uri: https://hdl.handle.net/10371/123174
dc.description: Thesis (Master's) -- Graduate School, Seoul National University: Department of Electrical and Computer Engineering, 2015. 8. 김태환.
dc.description.abstract: Image inpainting, the filling-in of missing regions in an image, is one of the most important topics in computer vision and image processing. Existing non-hybrid inpainting techniques fall broadly into two types: texture-based inpainting and structure-based inpainting. A critical drawback of these techniques is that, for images containing a mixture of texture and structure features, their results are poor in terms of visual quality or processing time. Conventional hybrid inpainting algorithms, which target images with both texture and structure features, in turn fail to address two questions effectively: (1) what is the most effective application order of the constituent methods, and (2) how can a minimal sub-image containing the best candidate inpainting sources be extracted? In this work, we propose a new hybrid inpainting algorithm that addresses both tasks fully and effectively. Specifically, our algorithm resolves two key ingredients: (1) (right time) determining the best application order for inpainting textural and structural missing regions, and (2) (right place) extracting the sub-image containing the best candidate source patches used to fill in a target region. Experiments with diverse test images show that our algorithm greatly improves inpainting quality over previous non-hybrid methods while requiring much shorter processing time than conventional hybrid methods. (A minimal illustrative sketch of this flow is given after the metadata fields below.)
dc.description.tableofcontents:
Abstract i
Contents ii
List of Tables iv
List of Figures v
1 Introduction 1
2 Exemplar-based Inpainting: Review and Enhancement 7
2.1 Preliminary: A State-of-the-Art Exemplar-based Inpainting 7
2.2 Context-Driven Determination of Window Sizes 10
3 The Proposed Context-Driven Hybrid Inpainting 12
3.1 Overall Flow 12
3.2 Step 1: Pre-processing 14
3.3 Step 2: Exemplar-based Inpainting 15
3.4 Step 3: Diffusion-based Inpainting 18
4 Experimental Results
5 Conclusion
Abstract (In Korean) 29
Acknowledgement 30
dc.format: application/pdf
dc.format.extent: 6671468 bytes
dc.format.medium: application/pdf
dc.language.iso: en
dc.publisher: Graduate School, Seoul National University
dc.subject: image processing
dc.subject: hybrid image inpainting
dc.subject: texture and structure distribution
dc.subject.ddc: 621
dc.title: context-driven hybrid image inpainting
dc.type: Thesis
dc.description.degree: Master
dc.citation.pages: v, 30
dc.contributor.affiliation: College of Engineering, Department of Electrical and Computer Engineering
dc.date.awarded: 2015-08
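
The abstract above outlines a two-part hybrid flow: a pre-processing step that splits the missing region into textural and structural parts, an exemplar-based step whose source search is restricted to a sub-image around the target ("right place"), and a diffusion-based step, with the application order chosen per image ("right time"). The sketch below is a minimal illustration of such a flow in Python with NumPy and OpenCV; it is not the thesis implementation. The gradient-based context measure, the fixed texture-then-structure order, the patch and search-window sizes, and the helper names (split_mask_by_context, exemplar_fill, hybrid_inpaint) are all assumptions chosen for brevity, and cv2.inpaint stands in for the diffusion-based step.

```python
# Minimal sketch of a context-driven hybrid inpainting flow (NOT the thesis code).
# Assumptions: a gradient-based context measure decides texture vs. structure,
# texture regions are filled first, and the exemplar search is limited to a
# square sub-image around each target patch.

import numpy as np
import cv2


def split_mask_by_context(gray, mask, grad_thresh=40.0):
    """Pre-processing: label missing pixels as textural or structural using the
    average gradient magnitude of nearby known pixels (a simplified stand-in
    for the thesis's context analysis)."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    grad = np.sqrt(gx * gx + gy * gy)
    known = (mask == 0).astype(np.float32)
    # Propagate gradient evidence from known pixels into the hole with a box blur.
    ctx = cv2.blur(grad * known, (15, 15)) / (cv2.blur(known, (15, 15)) + 1e-6)
    structural = (mask > 0) & (ctx >= grad_thresh)
    textural = (mask > 0) & ~structural
    return textural.astype(np.uint8) * 255, structural.astype(np.uint8) * 255


def exemplar_fill(img, mask, patch=9, search=40):
    """Greedy exemplar-based fill: each pixel on the fill front is copied from
    the centre of the best-matching fully-known patch, searched only inside a
    sub-image of half-width `search` around the target ("right place" idea).
    Brute force and slow; meant only to illustrate the flow."""
    half = patch // 2
    out = img.astype(np.float32)
    hole = mask > 0
    H, W = hole.shape
    while hole.any():
        # Fill front: missing pixels adjacent to at least one known pixel.
        front = hole & (cv2.dilate((~hole).astype(np.uint8),
                                   np.ones((3, 3), np.uint8)) > 0)
        filled_any = False
        for y, x in zip(*np.nonzero(front)):
            y0, y1 = max(y - half, 0), min(y + half + 1, H)
            x0, x1 = max(x - half, 0), min(x + half + 1, W)
            tgt = out[y0:y1, x0:x1]
            valid = ~hole[y0:y1, x0:x1]          # compare on known pixels only
            best_val, best_cost = None, np.inf
            for sy in range(max(y - search, half), min(y + search, H - half)):
                for sx in range(max(x - search, half), min(x + search, W - half)):
                    if hole[sy - half:sy + half + 1, sx - half:sx + half + 1].any():
                        continue                 # source patch must be fully known
                    src = out[sy - (y - y0):sy + (y1 - y), sx - (x - x0):sx + (x1 - x)]
                    cost = np.sum(((src - tgt) ** 2)[valid])
                    if cost < best_cost:
                        best_cost, best_val = cost, out[sy, sx]
            if best_val is not None:
                out[y, x] = best_val
                hole[y, x] = False
                filled_any = True
        if not filled_any:
            break  # no usable source patch (e.g. image smaller than `patch`)
    return np.clip(out, 0, 255).astype(np.uint8)


def hybrid_inpaint(img_bgr, mask):
    """Hybrid flow: textural holes first via the exemplar step, then the
    structural remainder via a diffusion-style solver. cv2.inpaint (the
    Navier-Stokes variant) stands in for the diffusion-based step, and the
    fixed texture-then-structure order is an assumption made here."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    tex_mask, str_mask = split_mask_by_context(gray, mask)
    filled = exemplar_fill(img_bgr, tex_mask)
    return cv2.inpaint(filled, str_mask, 3, cv2.INPAINT_NS)


if __name__ == "__main__":
    # Hypothetical file names; supply any image plus a mask that is 255 on the
    # missing pixels and 0 elsewhere.
    img = cv2.imread("damaged.png")
    dmg = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
    if img is not None and dmg is not None:
        cv2.imwrite("inpainted.png", hybrid_inpaint(img, dmg))
```

The split into textural and structural masks mirrors the "right time" question only loosely: here the order is hardcoded, whereas the abstract states that the application order itself is determined by the algorithm.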