OS I/O path optimizations for flash solid-state drives
플래시 SSD를 위한 운영체제 I/O경로 최적화
- College of Engineering, Dept. of Electrical and Computer Engineering
- Issue Date
- Seoul National University Graduate School
- Thesis (Ph.D.) -- Seoul National University Graduate School: Dept. of Electrical and Computer Engineering, Feb. 2017. Advisor: 염헌영.
- Flash memory technology, in the form of flash solid-state drives (flash SSDs), is steadily replacing earlier storage technology by offering affordable microsecond-level random access with high bandwidth. Now that the cost per bit is comparable to that of HDDs, the superior performance of SSDs is driving HDDs out of our storage systems.
However, because of their high latency variability, flash SSDs are not yet a strong candidate to replace or complement the DRAM-based in-memory systems that modern data centers commonly use for latency-sensitive applications. This variance makes it challenging to meet both the IOPS and latency requirements of such applications.
Because the latency of flash SSDs is small, the software overhead of an I/O request is no longer negligible; latency variance further inflates this overhead by magnifying the impact of context switches, which harms both the IOPS and the latency capability of the I/O path. Moreover, the latency variance of flash SSDs is exposed to applications in an uncontrolled fashion, which harms the service-level throughput of the data center. The impact of this variance has to be tolerated or controlled within the I/O path.
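As a rough illustration of why context switches hurt a microsecond-scale I/O path, the toy cost model below compares an interrupt-driven completion, which pays for a sleep and a wake-up, against a busy-polling completion. All constants are hypothetical round numbers for illustration, not measurements from the dissertation:

```python
# Toy cost model of an I/O completion path. All constants are
# hypothetical round numbers chosen for illustration only.
CTX_SWITCH_US = 5.0   # assumed one-way context-switch cost

def interrupt_completion(device_us):
    # Sleep on submit, wake on interrupt: two context switches.
    return device_us + 2 * CTX_SWITCH_US

def polling_completion(device_us):
    # The CPU spins on the completion queue: no switches, but the
    # core is occupied for the duration of the request.
    return device_us

def overhead_ratio(device_us):
    # How much the switches inflate the observed latency.
    return interrupt_completion(device_us) / device_us

# At 100 us (HDD-like), the switches add 10%; at 10 us (flash-like),
# they double the latency -- the software overhead is not negligible.
```

Under this model, shrinking the device latency from HDD to flash scales the relative cost of every context switch up by an order of magnitude, which is why the completion path itself becomes the bottleneck.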
To this end, this dissertation presents a set of host-side OS I/O path optimizations that address the latency variance of flash SSDs, with the goal of using flash SSDs under latency-sensitive applications. New I/O path designs based on two distinct approaches are proposed to cope with the variance: 1) exploiting additional resources through the parallelism of multiple CPU cores or SSDs, and 2) exploiting enhanced interactions between the host and the SSD. Whereas prior research sacrifices either IOPS or latency to address variance, our I/O path designs achieve both by trading additional resources.
To reduce the software overhead caused by the variance, we implemented an optimized AHCI-based flash SSD device driver that enhances the IOPS capability of the I/O path by reducing the impact of context switches in the I/O completion path. This device driver achieved a 100% IOPS improvement over the original Linux I/O path.
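The abstract does not spell out the driver's completion mechanism; a common technique for avoiding completion-path context switches, sketched below, is hybrid waiting: sleep through the predictable part of the device latency, then briefly poll for the variable tail instead of taking an interrupt. The function name, parameters, and defaults are illustrative assumptions, not the thesis's actual interface:

```python
import time

def hybrid_wait(is_done, expected_us, margin_us=2.0, timeout_us=10_000.0):
    """Sleep through the predictable part of the latency, then busy-poll.

    `is_done` is a callable that checks the completion queue. All names
    and numbers here are illustrative, not taken from the dissertation.
    """
    sleep_us = max(expected_us - margin_us, 0.0)
    if sleep_us > 0:
        time.sleep(sleep_us / 1e6)        # at most one voluntary switch
    deadline = time.monotonic() + timeout_us / 1e6
    while not is_done():                  # poll the short variable tail
        if time.monotonic() > deadline:
            return False                  # give up; caller can fall back
    return True
```

The latency variance of flash SSDs is exactly what makes the polling window necessary: the sleep covers only the predictable minimum, and the spin absorbs the jitter without paying a wake-up switch per request.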
In addition, an SSD extension was implemented on an SSD prototype platform that further reduces the latency of individual I/O requests by overlapping scheduling delays with the actual I/O time. The extension delivered an average latency reduction of 7 us per I/O request without diminishing system parallelism.
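The benefit of overlapping the scheduling delay with the device's service time can be captured in a one-line timeline model. This is a sketch with hypothetical figures, not the prototype's actual mechanism:

```python
def serialized_latency(sched_delay_us, io_time_us):
    # Conventional path: the request waits out the scheduling decision,
    # then occupies the device.
    return sched_delay_us + io_time_us

def overlapped_latency(sched_delay_us, io_time_us):
    # If the device can start working while the host-side scheduling
    # decision is still pending, only the longer interval is visible.
    return max(sched_delay_us, io_time_us)

# The saving per request is min(sched_delay, io_time): for example, a
# 7 us scheduling delay hidden behind an 80 us flash read disappears
# entirely from the observed latency.
```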
To address the leakage of latency variance, we developed a key-value storage engine as a flash SSD backend for Memcached. The negative impact of latency spikes caused by write-oriented operations was isolated from foreground read operations by exploiting redundant data copies placed on multiple SSDs. While this read-write separation technique had a moderate impact, cutting the tail latency of the key-value store to millisecond levels, a dramatic reduction of tail latency was demonstrated by exploiting SSDs capable of controlling internal I/O operations such as garbage collection. These extensions achieved latency under 1 ms at the 99.9999th percentile at the storage-engine level.
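The read-write separation idea can be sketched as a small router that steers reads away from the replica currently absorbing writes or performing garbage collection. The class, method names, and two-replica setup are illustrative assumptions, not the storage engine's actual interface:

```python
class ReplicaRouter:
    """Steer reads away from replicas busy with writes or GC (sketch)."""

    def __init__(self, num_replicas):
        self.num_replicas = num_replicas
        self.busy = set()            # replicas inside a write/GC window

    def enter_write_window(self, replica):
        self.busy.add(replica)

    def leave_write_window(self, replica):
        self.busy.discard(replica)

    def route_read(self):
        # Prefer any replica outside a write/GC window, so foreground
        # reads never queue behind a write-induced latency spike.
        for r in range(self.num_replicas):
            if r not in self.busy:
                return r
        return 0  # all replicas busy: degrade gracefully

router = ReplicaRouter(2)
router.enter_write_window(0)   # replica 0 absorbs writes / runs GC
quiet = router.route_read()    # reads go to replica 1 meanwhile
```

Staggering the write/GC windows so that at least one replica is always quiescent is what keeps read tail latency bounded; with SSDs that expose control over garbage collection, those windows can be scheduled explicitly rather than inferred.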