S-Space · College of Engineering · Dept. of Electrical and Computer Engineering · Theses (Ph.D. / Sc.D.)
Semantic Analysis for Human Motion Synthesis : 사람 동작 생성을 위한 의미 분석
- Affiliation: College of Engineering, Dept. of Electrical and Computer Engineering
- Issue Date: 2017-02
- Publisher: Seoul National University Graduate School
- Keywords: Computer Graphics; Character Animation; Data-driven Motion Synthesis; Motion Classification; Machine Learning
- Description: Thesis (Ph.D.) -- Seoul National University Graduate School: Dept. of Electrical and Computer Engineering, Feb. 2017. Advisor: Jehee Lee.
- One of the main goals of computer-generated character animation is to reduce the cost of creating animated scenes.
Using captured human motion makes it easier to animate characters, so motion capture has become a standard technique.
However, obtaining the desired motion is difficult: capture requires a large space, high-performance cameras, actors, and a significant amount of post-processing work.
Data-driven character animation includes a set of techniques that make effective use of captured motion data.
In this thesis, I introduce methods that analyze the semantics of motion data to enhance the utilization of the data.
To accomplish this, techniques from various other fields are integrated so that we can understand the semantics of a unit motion clip, the implicit structure of a motion sequence,
and a natural-language description of movements.
Based on that understanding, we can build new animation systems.
The first animation system in this thesis allows the user to generate an animation of basketball play from a tactics board.
To handle the complex basketball rules that players must follow, we use context-free grammars to represent motion.
Our motion grammar enables the user to define implicit and explicit rules of human behavior and generates valid movements of basketball players.
Interactions between players, or between players and the environment, are represented with semantic rules, which results in plausible animation.
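The idea of a context-free motion grammar can be illustrated with a toy sketch. The grammar below, with its hypothetical clip labels (DRIBBLE, PASS, SHOOT, LAYUP) and productions, is my own illustration, not the thesis's actual grammar; it only shows how derivation from a CFG guarantees that every generated clip sequence respects the encoded rules of play.

```python
import random

# Toy motion grammar: nonterminals map to productions; any symbol not in
# the table is a terminal, i.e. a motion clip label. (Illustrative only.)
GRAMMAR = {
    "PLAY": [["BALL_HANDLING", "FINISH"]],
    "BALL_HANDLING": [["DRIBBLE"], ["DRIBBLE", "PASS", "BALL_HANDLING"]],
    "FINISH": [["SHOOT"], ["LAYUP"]],
}

def expand(symbol, rng, depth=0):
    """Recursively expand a nonterminal into a sequence of motion clip labels."""
    if symbol not in GRAMMAR:          # terminal: an actual motion clip
        return [symbol]
    options = GRAMMAR[symbol]
    if depth > 8:                      # cap recursion for this sketch
        options = options[:1]          # fall back to the non-recursive production
    clips = []
    for s in rng.choice(options):
        clips.extend(expand(s, rng, depth + 1))
    return clips

rng = random.Random(7)
sequence = expand("PLAY", rng)
print(sequence)
```

By construction, every derived sequence begins with ball handling and ends with a shot or layup, so a rule such as "a player may not shoot before handling the ball" never needs to be checked after the fact.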
When we compose motion sequences, we rely on a motion corpus that stores prepared motion clips and the transitions between them.
Constructing a good motion corpus is essential for creating natural and rich animations, but doing so requires expert effort.
We introduce a semi-supervised learning technique for automatic generation of a motion corpus.
Stacked autoencoders are used to find latent features in large amounts of motion capture data, and those features are used to effectively discover worthwhile motion clips.
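Greedy layer-wise training of stacked autoencoders can be sketched as follows. This is a minimal NumPy illustration under assumed conditions (synthetic frames of 30 joint angles, plain tanh layers trained with gradient descent); the thesis's actual network, data, and feature use are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.standard_normal((256, 30))   # 256 frames x 30 joint angles (synthetic)

def train_layer(X, hidden, steps=200, lr=0.01, seed=0):
    """Train one tanh autoencoder layer; return encoder weights and codes."""
    r = np.random.default_rng(seed)
    W1 = r.standard_normal((X.shape[1], hidden)) * 0.1   # encoder
    W2 = r.standard_normal((hidden, X.shape[1])) * 0.1   # decoder
    for _ in range(steps):
        H = np.tanh(X @ W1)                   # encode
        X_hat = H @ W2                        # decode
        G = 2.0 * (X_hat - X) / X.size        # d(MSE loss)/d(X_hat)
        gW2 = H.T @ G                         # decoder gradient
        gW1 = X.T @ ((G @ W2.T) * (1.0 - H ** 2))  # chain rule through tanh
        W2 -= lr * gW2
        W1 -= lr * gW1
    return W1, np.tanh(X @ W1)

# Greedy layer-wise stacking: the second layer trains on the first's codes.
W_a, codes_a = train_layer(frames, hidden=16, seed=1)
W_b, codes_b = train_layer(codes_a, hidden=8, seed=2)
print(codes_b.shape)                          # latent features per frame: (256, 8)
```

Each frame is thereby compressed to an 8-dimensional latent feature, and it is in such a learned feature space, rather than raw joint angles, that similar clips cluster and worthwhile clips can be discovered.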
The other animation system uses natural language processing to understand the meaning of the animated scene that the user wants to make.
Specifically, the script of an animated scene is used to synthesize the movements of characters.
Like a sketch interface, a script is a very sparse input source.
Understanding motion allows the system to interpret such abstract user input and generate scenes that meet the user's needs.
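The script-to-scene pipeline can be caricatured with a toy parser. The verb lexicon, clip names, and pattern matching below are my own assumptions for illustration; the thesis relies on real natural language processing, not this kind of keyword lookup.

```python
import re

# Hypothetical verb-to-clip lexicon; a real system would use NLP, not a table.
VERB_TO_CLIP = {"walks": "WALK", "runs": "RUN", "sits": "SIT", "waves": "WAVE"}

def parse_script(script):
    """Map each 'Name verbs ...' line to a (character, motion clip) command."""
    commands = []
    for line in script.strip().splitlines():
        m = re.match(r"(\w+)\s+(\w+)", line.strip())
        if not m:
            continue
        name, verb = m.groups()
        clip = VERB_TO_CLIP.get(verb.lower())
        if clip:                              # skip verbs outside the lexicon
            commands.append((name, clip))
    return commands

script = """
Alice walks to the door.
Bob waves at Alice.
"""
print(parse_script(script))                   # [('Alice', 'WALK'), ('Bob', 'WAVE')]
```

Even this toy version shows why a script is such a sparse input: two short sentences yield only two motion commands, and everything else about timing, paths, and style must be inferred from an understanding of motion itself.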