Synthesizing and Editing Human Motion from Sparse User Inputs : 적은 수의 사용자 입력으로부터 인간 동작의 합성 및 편집

dc.contributor.advisor: 이제희
dc.contributor.author: 김종민
dc.date.accessioned: 2017-07-13T07:05:06Z
dc.date.available: 2017-07-13T07:05:06Z
dc.date.issued: 2014-08
dc.identifier.other: 000000021210
dc.identifier.uri: https://hdl.handle.net/10371/119017
dc.description: Thesis (Ph.D.) -- Seoul National University Graduate School, Department of Electrical and Computer Engineering, August 2014. Advisor: 이제희.
dc.description.abstract:
An ideal 3D character animation system would let an animator easily synthesize and edit human motion through an efficient user interface. However, despite advances in animation systems, building effective systems for synthesizing and editing realistic human motion remains a difficult problem. In the case of a single character, the human body is a highly complex structure with as many as hundreds of degrees of freedom, so an animator must manually adjust many joints of the body from user inputs. In a crowd scene, many individuals must respond to user inputs when an animator wants a given crowd to fit a new environment. The main goal of this thesis is to improve the interaction between a user and an animation system.

Because 3D character animation systems are usually driven by low-dimensional inputs, a user has no way to directly generate high-dimensional character animation. To address this problem, we propose a data-driven mapping model built from motion data obtained with a full-body motion capture system, crowd simulation, and a data-driven motion synthesis algorithm. With this mapping model in hand, we can transform low-dimensional user inputs into character animation, because the motion data help infer the missing parts of the system inputs. Since motion capture data preserve the details and realism of human movement, they make it far easier to generate realistic character animation than would otherwise be possible.
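The mapping idea can be illustrated with an online local model: look up the nearest motion-capture examples in the low-dimensional input space, then fit a local regression that predicts the full-body pose. The sketch below uses plain k-nearest-neighbor least-squares regression as a stand-in for the thesis's kernel-CCA formulation; the function name and array shapes are illustrative assumptions, not the actual implementation.

```python
import numpy as np

def knn_local_regression(X_train, Y_train, x_query, k=10):
    """Map a low-dimensional input to a high-dimensional pose.

    X_train: (N, d) sparse sensor features captured alongside
    Y_train: (N, D) full-body poses, with D >> d.
    A local affine model is fit on the k nearest training examples
    and evaluated at the query (an "online local model" sketch).
    """
    # Find the k nearest neighbors of the query in sensor space.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(dists)[:k]
    # Augment with a constant column so the fit is affine, not linear.
    Xk = np.hstack([X_train[idx], np.ones((k, 1))])
    Yk = Y_train[idx]
    # Least-squares fit of a local affine map: sensor features -> pose.
    W, *_ = np.linalg.lstsq(Xk, Yk, rcond=None)
    return np.append(x_query, 1.0) @ W
```

Refitting the local model at every frame keeps the mapping responsive to the region of motion space the performer is currently in, which is what makes a low-dimensional input sufficient.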

To demonstrate the generality and strengths of our approach, we developed two animation systems: one synthesizes a single character's animation in real time, and the other interactively edits crowd animation through low-dimensional user inputs. The first system controls a virtual avatar using a small set of three-dimensional (3D) motion sensors. The second manipulates large-scale crowd animation consisting of hundreds of characters with a small number of user constraints. Examples show that our systems are much less laborious and time-consuming than previous animation systems, and thus far better suited to generating the character animation a user desires.
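The crowd-editing system's cage-based interface (Chapter 4) expresses each character's position in coordinates relative to an enclosing cage, so dragging a few cage vertices re-poses the entire crowd at once. A minimal 2D sketch using mean value coordinates — one common choice of generalized barycentric coordinates; the thesis's actual cage representation may differ, and both function names are hypothetical:

```python
import numpy as np

def mean_value_coords(cage, x):
    """Barycentric weights of point x w.r.t. polygon `cage`.

    cage: (n, 2) vertices in counter-clockwise order; x strictly inside.
    Returns weights w with w.sum() == 1 and w @ cage == x.
    """
    s = cage - x                          # spokes from x to each vertex
    r = np.linalg.norm(s, axis=1)         # spoke lengths
    nxt = np.roll(s, -1, axis=0)          # following spoke for each edge
    # Signed angle between consecutive spokes.
    cross = s[:, 0] * nxt[:, 1] - s[:, 1] * nxt[:, 0]
    dot = (s * nxt).sum(axis=1)
    ang = np.arctan2(cross, dot)
    t = np.tan(ang / 2.0)                 # tangents of half-angles
    w = (np.roll(t, 1) + t) / r           # mean value weight per vertex
    return w / w.sum()

def deform(cage_src, cage_dst, points):
    """Carry points bound to cage_src along with an edited cage_dst."""
    return np.array([mean_value_coords(cage_src, p) @ cage_dst
                     for p in points])
```

Because the weights are computed once against the source cage, every character trajectory follows the edited cage without any per-character user input — the low-dimensional handle (a handful of cage vertices) drives the high-dimensional crowd.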
dc.description.tableofcontents:
Contents
Abstract
Table of Contents
List of Figures
1 Introduction
1.1 Motivation
1.2 Approach
1.3 Thesis Overview
2 Background
2.1 Performance Animation
2.1.1 Performance-based Interfaces for Character Animation
2.1.2 Statistical Models for Motion Synthesis
2.1.3 Retrieval of Motion Capture Data
2.2 Crowd Animation
2.2.1 Crowd Simulation
2.2.2 Motion Editing
2.2.3 Geometry Deformation
3 Realtime Performance Animation Using Sparse 3D Motion Sensors
3.1 Overview
3.2 System Overview
3.3 Sensor Data and Calibration
3.4 Motion Synthesis
3.4.1 Online Local Model
3.4.2 Kernel CCA-based Regression
3.4.3 Motion Post-processing
3.5 Experimental Results
3.6 Discussion
4 Interactive Manipulation of Large-Scale Crowd Animation
4.1 Overview
4.2 Crowd Model
4.3 Cage-based Interface
4.3.1 Cage Construction
4.3.2 Cage Representation
4.4 Editing Crowd Animation
4.4.1 Spatial Manipulation
4.4.2 Temporal Manipulation
4.5 Collision Avoidance
4.6 Experimental Results
4.7 Discussion
5 Conclusion
Bibliography
dc.format: application/pdf
dc.format.extent: 17977012 bytes
dc.format.medium: application/pdf
dc.language.iso: en
dc.publisher: 서울대학교 대학원
dc.subject: 컴퓨터 그래픽스
dc.subject: 인간 동작
dc.subject: 데이터 기반 애니메이션
dc.subject: 성능 애니메이션
dc.subject: 인터랙티브 편집
dc.subject: 기계학습
dc.subject: 수치 최적화
dc.subject: Computer Graphics
dc.subject: Human Motion
dc.subject: Data-driven Animation
dc.subject: Performance Animation
dc.subject: Interactive Editing
dc.subject: Machine Learning
dc.subject: Numerical Optimization
dc.subject.ddc: 621
dc.title: Synthesizing and Editing Human Motion from Sparse User Inputs
dc.title.alternative: 적은 수의 사용자 입력으로부터 인간 동작의 합성 및 편집
dc.type: Thesis
dc.description.degree: Doctor
dc.citation.pages: xiiii,70
dc.contributor.affiliation: 공과대학 전기·컴퓨터공학부
dc.date.awarded: 2014-08