
Unifying Imperative and Symbolic Deep Learning Execution

Authors

Eunji Jeong (정은지)

Advisor
Byung-Gon Chun (전병곤)
Issue Date
2021-02
Publisher
Graduate School, Seoul National University (서울대학교 대학원)
Keywords
imperative; symbolic; deep learning framework
Description
Thesis (Ph.D.) -- Graduate School, Seoul National University : College of Engineering, Department of Computer Science and Engineering, February 2021. Advisor: Byung-Gon Chun.
Abstract
The rapid evolution of deep neural networks demands that deep learning (DL) frameworks not only execute large computations quickly, but also provide straightforward programming models for quickly implementing and experimenting with complex network structures. However, existing frameworks fail to excel at both simultaneously, leading to divergent efforts to optimize performance and to improve usability.
This thesis presents systems that unify the two existing paradigms of current DL frameworks, symbolic and imperative, to achieve performance and programmability at the same time. First, we present Janus, a system that combines the advantages of both sides by transparently converting an imperative DL program written in Python, the de facto scripting language for DL, into an efficiently executable symbolic dataflow graph. Janus can convert various dynamic features of Python, including dynamic control flow, dynamic types, and impure functions, into elements of a symbolic dataflow graph.
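
For illustration, below is a minimal sketch of the kind of imperative Python program such a conversion targets. The to_graph decorator is a hypothetical stand-in for the transparent imperative-to-symbolic conversion described above, not Janus's actual API; the point is that the Python-level branch on a runtime value is exactly the kind of dynamic control flow that must become a graph element.

    # Hypothetical sketch: an imperative training step with dynamic
    # control flow (the Python `if` on a runtime value) that a
    # Janus-like system would trace and convert into a symbolic
    # dataflow graph. `to_graph` is illustrative, not Janus's API.
    import numpy as np

    def to_graph(fn):
        # Hypothetical decorator: speculatively build a symbolic graph
        # from the function and fall back to imperative execution when
        # an observed assumption (type, branch, ...) is violated.
        def wrapper(*args, **kwargs):
            return fn(*args, **kwargs)  # a real system would dispatch to the graph
        return wrapper

    @to_graph
    def train_step(w, x, y, lr=0.1):
        pred = x @ w                    # DL operation: matrix multiply
        err = pred - y
        if np.abs(err).mean() > 1.0:    # dynamic control flow on a runtime value
            lr = lr * 0.5
        grad = x.T @ err / len(x)
        return w - lr * grad            # gradient-descent update

    w = np.zeros((3, 1))
    x, y = np.random.randn(8, 3), np.random.randn(8, 1)
    w = train_step(w, x, y)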
Next, we propose Terra, an imperative-symbolic co-execution framework for imperative DL programs. As the usability of DL frameworks becomes more important, the imperative programming model has become an essential part of recent DL frameworks. However, optimizing individual operations of an imperative program offers limited opportunities compared to optimizing them as a group in a symbolic graph format, and existing approaches that convert imperative DL programs into optimized symbolic graphs cannot provide a general solution due to their limited program coverage. Terra decouples the actual computation of DL operations from the imperative program and converts those operations into an optimized graph. The optimized graph and the skeleton imperative program then execute simultaneously, complementing each other, so that we achieve the high performance of optimized graph execution while supporting the whole semantics of the original imperative program.
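
A minimal sketch of the co-execution idea follows, with hypothetical names (GraphRecorder and LazyTensor are illustrative, not Terra's API): the Python skeleton runs every statement eagerly, preserving full language semantics, while each DL operation is recorded into a graph whose result is a lazy handle that materializes only when the Python side actually needs the value.

    # Hypothetical sketch of imperative-symbolic co-execution: DL ops
    # are recorded into a graph (to be optimized and run as a group),
    # while the surrounding Python program keeps its full semantics.
    import numpy as np

    class GraphRecorder:
        def __init__(self):
            self.ops = []               # the decoupled symbolic graph

        def record(self, fn, *args):
            handle = LazyTensor(self, len(self.ops))
            self.ops.append((fn, args, handle))
            return handle

        def run(self):
            # Stand-in for optimized group execution of the whole graph.
            for fn, args, handle in self.ops:
                vals = [a.value if isinstance(a, LazyTensor) else a for a in args]
                handle.value = fn(*vals)

    class LazyTensor:
        def __init__(self, rec, idx):
            self.rec, self.idx, self.value = rec, idx, None

        def materialize(self):
            # The skeleton program forces execution only when it needs
            # data, e.g., to take a Python branch or print a metric.
            if self.value is None:
                self.rec.run()
            return self.value

    rec = GraphRecorder()
    a = rec.record(np.add, np.ones(4), np.ones(4))   # recorded, not yet run
    b = rec.record(np.multiply, a, 3.0)              # consumes a lazy handle
    print(b.materialize())                           # Python side forces execution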
Among the various DL models, we additionally delve into recursive neural networks (TreeNNs), which are important yet highly challenging to represent as DL graphs. We introduce new DL abstractions, SubGraph and InvokeOp, which naturally capture any tree- or graph-like structure of the input data as DL graph elements. We then present the underlying system that supports automatic differentiation of these abstractions and efficiently executes TreeNNs by running InvokeOps in parallel.
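
To see why TreeNNs resist a static graph encoding, consider the minimal recursive example below: the shape of the computation follows each input tree, so no single fixed graph can describe it, whereas a SubGraph/InvokeOp-style abstraction embeds the recursion itself as a reusable graph element whose invocations on independent subtrees can run in parallel. This is an imperative illustration under our own toy assumptions, not the thesis's implementation.

    # Minimal imperative TreeNN whose computation structure is dictated
    # by the input tree, illustrating why a fixed symbolic graph cannot
    # express it directly. Sizes and weights are toy assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 8))     # composition weights

    def embed(node):
        if isinstance(node, str):                 # leaf: word embedding
            return rng.standard_normal(4)
        left, right = node                        # internal node: recurse
        child = np.concatenate([embed(left), embed(right)])
        return np.tanh(W @ child)                 # compose the children

    # This tree and a differently shaped tree each induce a different
    # unrolled computation.
    vec = embed(("the", (("quick", "fox"), "ran")))
    print(vec.shape)   # (4,)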
We implemented a system based on the proposed Janus architecture that additionally exploits the recursive DL abstractions. Our evaluation shows that Janus achieves fast DL training by exploiting the optimization techniques of symbolic graph-based DL frameworks, while maintaining the simple and flexible programmability of imperative DL frameworks.
Language
eng
URI
https://hdl.handle.net/10371/175434

https://dcollection.snu.ac.kr/common/orgView/000000165877