Write Avoidance Schemes for Non-Volatile Memory based Last-Level Cache
- College of Engineering, Dept. of Electrical and Computer Engineering
- Graduate School, Seoul National University
- Cache memories ; Emerging technologies ; Heterogeneous (hybrid) memory systems ; Low-power design ; Cache coherence ; Cache partitioning
- Thesis (Ph.D.) -- Graduate School, Seoul National University: Dept. of Computer Science and Engineering, Feb. 2016. Advisor: Heonshik Shin.
- Non-volatile memory (NVM) is considered a promising technology for last-level caches (LLCs) because of its low leakage power and high storage density. However, NVM has several drawbacks, including high dynamic energy when modifying NVM cells, long write latency, and limited write endurance. To overcome these problems, this thesis focuses on two approaches: a cache coherence protocol and an NVM capacity management policy for hybrid cache architecture (HCA).
First, we review existing cache coherence protocols under the condition of NVM-based LLCs. Our analysis reveals that the LLCs perform unnecessary write operations because legacy protocols pay little attention to reducing the number of write accesses to the LLC. Therefore, a write avoidance cache coherence protocol (WACC) is proposed to reduce the number of write operations to the LLC.
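The key observation above is that many LLC writes carry data the LLC does not actually need, for example write-backs of clean blocks that the LLC already holds. The toy model below illustrates that idea only; it is not the thesis's WACC protocol, and all names and the eviction trace are invented for illustration.

```python
# Toy model: count NVM LLC writes for a baseline policy that writes back
# every block evicted from a private cache, versus a write-avoiding
# variant that skips clean (unmodified) blocks, whose data the LLC
# already has. Purely illustrative; not the actual WACC protocol.

def llc_writes(evictions, avoid_clean_writebacks):
    """Count NVM LLC writes for a sequence of (block, dirty) evictions."""
    writes = 0
    for block, dirty in evictions:
        if dirty or not avoid_clean_writebacks:
            writes += 1  # dirty data must reach the LLC either way
    return writes

# A sample eviction trace: mostly clean blocks, a few dirty ones.
trace = [("A", False), ("B", True), ("C", False), ("D", False), ("E", True)]

baseline = llc_writes(trace, avoid_clean_writebacks=False)   # 5 NVM writes
write_avoiding = llc_writes(trace, avoid_clean_writebacks=True)  # 2 NVM writes
```

In this toy trace the avoidance policy performs only the two writes that carry new (dirty) data, which is the kind of reduction a write-avoiding coherence protocol targets.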
In addition, this thesis proposes novel HCA schemes to utilize SRAM efficiently. Previous studies on HCA have concentrated on detecting write-intensive blocks and placing them in the SRAM ways. In contrast, the proposed dynamic way adjusting algorithm (DWA) and linefill-aware cache partitioning (LCP) calculate the optimal numbers of NVM ways and SRAM ways to minimize the NVM write count, and assign the corresponding numbers of ways to each core.
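The partitioning step can be pictured as a small optimization: given, for each core, an estimate of how many NVM writes it would generate with a given number of SRAM ways, choose a per-core allocation within the SRAM budget that minimizes total NVM writes. The sketch below uses exhaustive search and made-up per-core write estimates; it is an assumption-laden illustration, not the DWA/LCP algorithms themselves.

```python
from itertools import product

# Hypothetical per-core predictors: estimated NVM writes as a function of
# how many SRAM ways the core receives (more SRAM ways -> fewer NVM
# writes). The numbers are invented sample data, not thesis measurements.
est_nvm_writes = [
    {0: 100, 1: 60, 2: 40, 3: 35},  # core 0
    {0: 80,  1: 70, 2: 30, 3: 10},  # core 1
]

def best_partition(est, sram_budget):
    """Exhaustively pick per-core SRAM-way counts, within the total
    SRAM-way budget, that minimize the summed estimated NVM writes."""
    best = None
    for alloc in product(*(sorted(core) for core in est)):
        if sum(alloc) <= sram_budget:
            total = sum(core[a] for core, a in zip(est, alloc))
            if best is None or total < best[1]:
                best = (alloc, total)
    return best

alloc, writes = best_partition(est_nvm_writes, sram_budget=3)
# With this sample data, giving core 0 one SRAM way and core 1 two
# SRAM ways yields the lowest total of 90 estimated NVM writes.
```

A real scheme would replace the exhaustive search with an online heuristic and derive the per-core estimates from runtime counters, but the objective, minimizing NVM writes subject to the SRAM capacity, is the same.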
The simulation results show that WACC achieves a 13.2% reduction in dynamic energy consumption. For the HCA schemes, DWA and LCP reduce dynamic energy consumption by 26.9% and 37.2%, respectively.