Synthesizing and Editing Human Motion from Sparse User Inputs

Authors

Jongmin Kim

Advisor
Jehee Lee
Major
College of Engineering, Department of Electrical and Computer Engineering
Issue Date
2014-08
Publisher
Graduate School, Seoul National University
Keywords
Computer Graphics, Human Motion, Data-driven Animation, Performance Animation, Interactive Editing, Machine Learning, Numerical Optimization
Description
Thesis (Ph.D.) -- Graduate School, Seoul National University: Department of Electrical and Computer Engineering, August 2014. Advisor: Jehee Lee.
Abstract
An ideal 3D character animation system would make it easy to synthesize and edit human motion while providing an efficient user interface for the animator. Despite advances in animation systems, however, building effective tools for synthesizing and editing realistic human motion remains a difficult problem. For a single character, the human body is a highly complex structure with hundreds of degrees of freedom, so an animator must manually adjust many joints to realize a user's intent. In a crowd scene, many individuals must respond to user inputs whenever an animator wants an existing crowd to fit a new environment. The main goal of this thesis is to improve the interaction between the user and the animation system.

Because 3D character animation systems are usually driven by low-dimensional inputs, a user has no direct way to generate high-dimensional character animation. To address this problem, we propose a data-driven mapping model built from motion data obtained from a full-body motion capture system, a crowd simulation, and a data-driven motion synthesis algorithm. With this mapping model in hand, we can transform low-dimensional user inputs into character animation because the motion data help infer the missing parts of the system inputs. Since motion capture data contain rich detail and convey the realism of human movement, they make it far easier to generate realistic character animation than would be possible without them.
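
To make the mapping idea concrete, the sketch below (Python with NumPy; an illustration only, not the thesis's actual model) infers a high-dimensional full-body pose from a low-dimensional sensor reading by distance-weighted interpolation over the k nearest examples in a motion-capture database. The pairing scheme, the nearest-neighbour regressor, and every name in it are assumptions.

import numpy as np

def build_database(sensor_samples, pose_samples):
    """Pair each low-dimensional sensor reading with its full-body pose.

    sensor_samples: (N, d_low)  readings recorded alongside the mocap data
    pose_samples:   (N, d_high) full-body joint configurations from capture
    """
    return np.asarray(sensor_samples, float), np.asarray(pose_samples, float)

def infer_pose(query, database, k=4):
    """Estimate the missing high-dimensional pose for a sparse input.

    Returns a distance-weighted average of the k nearest database poses;
    the capture data supply the joint detail the sparse input cannot.
    """
    sensors, poses = database
    dists = np.linalg.norm(sensors - np.asarray(query, float), axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-8)  # closer examples count more
    weights /= weights.sum()
    return weights @ poses[nearest]

A nearest-neighbour average is only the simplest stand-in: any regression model trained on the same paired data would play the same role, filling in the dimensions that the sparse input leaves unspecified.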

To demonstrate the generality and strengths of our approach, we developed two animation systems: one lets the user synthesize a single-character animation in real time, and the other lets the user edit crowd animation interactively via low-dimensional inputs. The first system controls a virtual avatar using a small set of three-dimensional (3D) motion sensors. The second manipulates a large-scale crowd animation consisting of hundreds of characters with a small number of user constraints. Examples show that our systems are far less laborious and time-consuming than previous animation systems, and thus far better suited to producing the character animation a user desires.
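
As a rough illustration of how a few constraints can drive hundreds of characters, the sketch below (again Python with NumPy, and again an assumption for illustration rather than the thesis's formulation) poses crowd editing as a linear least-squares problem: preserve each character's offset to its formation neighbours while softly pinning a few characters to user-specified targets.

import numpy as np

def edit_crowd(positions, neighbors, pinned, pin_weight=10.0):
    """positions: (n, 2) original 2D character positions
    neighbors:   iterable of (i, j) index pairs treated as formation edges
    pinned:      dict mapping a character index to its (x, y) target
    """
    positions = np.asarray(positions, dtype=float)
    n = len(positions)
    rows, rhs = [], []
    # Preserve the relative offset along every formation edge.
    for i, j in neighbors:
        r = np.zeros(n)
        r[i], r[j] = 1.0, -1.0
        rows.append(r)
        rhs.append(positions[i] - positions[j])
    # Softly enforce the user's sparse position constraints.
    for i, target in pinned.items():
        r = np.zeros(n)
        r[i] = pin_weight
        rows.append(r)
        rhs.append(pin_weight * np.asarray(target, dtype=float))
    solution, *_ = np.linalg.lstsq(np.vstack(rows), np.vstack(rhs), rcond=None)
    return solution

Pinning one character of a line formation to a new position, for example, drags its neighbours along while keeping their spacing, which is the kind of interactive behaviour an editor of large crowds needs.
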
Language
English
URI
https://hdl.handle.net/10371/119017