Detailed Information

Bit Transpose Unit Design for Approximate Memory Architecture : 근사적 메모리 구조를 위한 Bit Transpose Unit 설계

DC Field Value Language
dc.contributor.advisorHyuk-Jae Lee-
dc.contributor.author응웬헝후이-
dc.date.accessioned2018-12-03T01:35:58Z-
dc.date.available2018-12-03T01:35:58Z-
dc.date.issued2018-08-
dc.identifier.other000000151966-
dc.identifier.urihttps://hdl.handle.net/10371/143671-
dc.descriptionThesis (Master's) -- Graduate School of Seoul National University : College of Engineering, Department of Electrical and Computer Engineering, 2018. 8. Hyuk-Jae Lee.-
dc.description.abstractDeep learning is one of the most successful methods in the field of artificial intelligence. Both the advance of machine learning algorithms and the development of GPUs have contributed to the success of a wide range of deep learning applications. As deep learning applications grow larger, accesses to DRAM also increase. Modern DRAM faces the problem of keeping the contribution of refresh power to the total DRAM power consumption under control. [8] proposed a method that saves more than 70% of the refresh power at the cost of a slight loss of accuracy in a convolutional neural network. However, that paper does not actually measure the refresh power and only shows the predicted reduction in power. In this work, the Bit Transpose Unit, an important component for using approximate data, is actually measured. The changes to the DRAM architecture are also presented and measured in real simulation. In addition, a solution is proposed to resolve problems of the Bit Transpose Unit. The results of an actual implementation of the scheme in [8] are obtained from the McSimA+ and DRAMSim2 simulation results.-
dc.description.abstractAbstract

Deep learning is one of the most successful methods in the artificial intelligence field. Both the maturity of machine learning algorithms and the development of GPUs have contributed to the triumph of many deep learning applications. As deep learning applications grow larger, the demand for DRAM accesses increases as well. Recent DRAM generations face the problem of controlling the contribution of refresh power to the total power consumption of the DRAM. A novel idea [8] proposed a method that sacrifices a little accuracy in a convolutional neural network but saves at least 70% of the refresh power consumption. However, in that paper the authors only show a prediction of the power savings without a real estimation. This thesis provides a real estimation of the Bit Transpose Unit, an essential component for working with approximate data. The modifications to the DRAM architecture are also presented and estimated in a real simulation. A solution is also proposed to overcome some problems of the Bit Transpose Unit. The real estimations obtained from both the McSimA+ and DRAMSim2 simulations give evidence for the practical operation of the idea in [8].
-
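This record does not include the thesis body, so as a rough illustration of the bit-transpose idea the abstract refers to, the following is a minimal software sketch that regroups an 8-byte block into eight bit planes, so that high-order and low-order bits could be placed in DRAM regions with different refresh rates. The 8x8 block size, the function name bit_transpose_8x8, and the plane-to-region mapping mentioned in the comments are assumptions made for illustration, not the hardware unit designed in the thesis.

#include <stdint.h>
#include <stdio.h>

/* Regroup an 8-byte block into eight bit planes:
 * planes[k] collects bit k of every input byte, so plane 7 holds the
 * most significant bits and plane 0 the least significant bits.
 * In an approximate-memory setting, high-order planes could then be
 * kept in a normally refreshed DRAM region while low-order planes are
 * placed in a region with a relaxed refresh rate.
 */
static void bit_transpose_8x8(const uint8_t in[8], uint8_t planes[8])
{
    for (int bit = 0; bit < 8; bit++) {
        uint8_t plane = 0;
        for (int word = 0; word < 8; word++) {
            /* Take bit 'bit' of input word 'word' and store it at
             * position 'word' of the corresponding bit plane. */
            plane |= (uint8_t)(((in[word] >> bit) & 1u) << word);
        }
        planes[bit] = plane;
    }
}

int main(void)
{
    /* Eight example 8-bit values, e.g. quantized CNN weights. */
    const uint8_t weights[8] = { 0x80, 0x7F, 0x01, 0xFF, 0x00, 0xAA, 0x55, 0x10 };
    uint8_t planes[8];

    bit_transpose_8x8(weights, planes);

    for (int bit = 7; bit >= 0; bit--)
        printf("bit plane %d: 0x%02X\n", bit, planes[bit]);
    return 0;
}

Because an 8x8 bit transpose is its own inverse, applying the same routine to the planes reconstructs the original bytes, so the same sketch can also stand in for the read path in this illustration.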
dc.description.tableofcontentsTable of Contents

Abstract 3

Chapter 1: Introduction 9

Chapter 2: Background 11

2.1. Convolutional Neural Network 11

2.2. AlexNet, VGG, and GoogLeNet 12

2.3. Different Refresh Rate Solution 15

2.4. McSimA+ and DRAMSim2 Simulations 16

2.4.1. McSimA+ Simulation 16

2.4.2. DRAMSim2 Simulation 16

Chapter 3: Bit Transpose Unit 18

3.1. Overall Computer Architecture with Bit Transpose Unit 18

3.2. Bit Transpose Unit 19

3.3. DRAM Architecture with Approximate and Precise Data 23

3.4. Hit/Miss Criterion of Bit Transpose Unit 26

3.5. Reading/Writing Operation with Bit Transpose Unit 28

3.6. Reading/Writing Operation with Two Bit Transpose Units 32

3.7. Two Bit Transpose Units with Truncated Data 35

Chapter 4: Experiments and Results 38

4.1. Modification in McSimA+ 39

4.2. Modification in the DRAMSim2 Simulation 40

4.3. Simulation with Precise Data 42

4.4. Simulation with Bit Transpose Unit 43

4.5. Simulation with Two Bit Transpose Units 44

4.6. Simulation with Two Bit Transpose Units plus Truncated Data 45

4.7. Open-page Policy versus Close-page Policy 47

4.8. Estimate Bit Transpose Power and Area 48

Chapter 5: Conclusion and Future Work 51

Reference 52

Abstract in Korean 53
-
dc.formatapplication/pdf-
dc.format.mediumapplication/pdf-
dc.language.isoen-
dc.publisherGraduate School of Seoul National University-
dc.subject.ddc621.3-
dc.titleBit Transpose Unit Design for Approximate Memory Architecture-
dc.title.alternative근사적 메모리 구조를 위한 Bit Transpose Unit 설계-
dc.typeThesis-
dc.contributor.AlternativeAuthorNGUYEN HUY HUNG-
dc.description.degreeMaster-
dc.contributor.affiliationCollege of Engineering, Department of Electrical and Computer Engineering-
dc.date.awarded2018-08-