Publications
Per-core file allocation method for eliminating the overhead of maintaining consistency between nodes in a distributed file system
- Authors
- Advisor
- 염헌영
- Major
- Department of Computer Science and Engineering, College of Engineering
- Issue Date
- 2017-08
- Publisher
- Seoul National University Graduate School
- Keywords
- GlusterFS ; Erasure code ; Cluster Lock
- Description
- Thesis (Master's) -- Department of Computer Science and Engineering, College of Engineering, Seoul National University Graduate School, 2017. 8. 염헌영.
- Abstract
- A Distributed File System (DFS) is a file system that allows access to multiple
storage servers through a computer network. Modern DFSs offer a variety
of functions such as load balancing, location transparency, high availability, and
fault tolerance. Among these, fault tolerance is one of the most important, as it
protects data from server and disk failures. GlusterFS, a typical DFS, supports
replication, which copies data to a separate server for fault tolerance, and
erasure coding, which encodes the data and stores the parity on other servers.
In both schemes, the information for a single piece of data is distributed and
stored across multiple server nodes, so maintaining data consistency between
the server nodes is essential. If consistency is not maintained, each server node
stores different contents for the same data, which destroys the fault tolerance.
GlusterFS therefore acquires a lock on all servers when performing each operation.
This is necessary because file operations can arrive at the server nodes in an
intermixed order, yet every file operation must be applied atomically across all
server nodes. Moreover, in the current implementation of GlusterFS, even operations
on the same file can run in parallel on multiple io-threads and event-threads,
so concurrency control is required. This can cause up to two additional round
trips, in addition to overheads such as lock management. We therefore propose
a method that maintains data consistency between server nodes without additional
concurrency control: by always performing operations on the same file on the
same core, the order of operations on each file is preserved throughout the whole
system. With this approach, we achieved a mean improvement of 63% (up to 83%)
in randread performance and a mean improvement of 60% (up to 69%) in
randwrite performance.
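The dispatch idea in the abstract can be illustrated with a minimal sketch. This is not the thesis implementation: it assumes a pool of per-worker FIFO queues and hashes each file's identifier to a fixed worker, so that all operations on one file are executed by a single thread in submission order, without any per-file lock. All names here (`NUM_WORKERS`, `dispatch`, the file paths) are hypothetical.

```python
import queue
import threading

NUM_WORKERS = 4  # hypothetical stand-in for the number of cores/io-threads

# One FIFO queue per worker: everything pushed to the same queue is
# executed by a single thread, in order, with no extra locking.
work_queues = [queue.Queue() for _ in range(NUM_WORKERS)]
results = []
results_lock = threading.Lock()  # only to record results for this demo

def worker(q):
    while True:
        item = q.get()
        if item is None:  # shutdown sentinel
            break
        path, op = item
        with results_lock:
            results.append((path, op))

def dispatch(path, op):
    # Hash the file identifier to pick a fixed worker: every operation
    # on the same file lands on the same thread, preserving its order.
    idx = hash(path) % NUM_WORKERS
    work_queues[idx].put((path, op))

threads = [threading.Thread(target=worker, args=(q,)) for q in work_queues]
for t in threads:
    t.start()

# Operations on "a.txt" are interleaved with those on "b.txt", but each
# file's operations keep their submission order on their own worker.
for i in range(3):
    dispatch("a.txt", f"write-{i}")
    dispatch("b.txt", f"write-{i}")

for q in work_queues:
    q.put(None)
for t in threads:
    t.join()

a_ops = [op for path, op in results if path == "a.txt"]
print(a_ops)  # per-file order preserved: ['write-0', 'write-1', 'write-2']
```

Because ordering comes from queue placement rather than locking, no round trips for lock acquisition are needed; the trade-off is that a hot file cannot be spread across workers.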
- Language
- English