Low-complexity block turbo code decoding for soft-decision error correction : 연판정 오류정정을 위한 낮은 복잡도의 블록 터보부호 복호화 연구

dc.contributor.advisor: 성원용
dc.contributor.author: Junhee Cho
dc.date.accessioned: 2017-07-13T07:16:58Z
dc.date.available: 2017-07-13T07:16:58Z
dc.date.issued: 2016-08
dc.identifier.other: 000000137030
dc.identifier.uri: https://hdl.handle.net/10371/119216
dc.description: Dissertation (Ph.D.), Graduate School of Seoul National University, Department of Electrical and Computer Engineering, August 2016. Advisor: 성원용.
dc.description.abstract:
As the throughput needed for communication systems and storage devices increases, high-performance forward error correction (FEC), especially soft-decision (SD) based techniques, becomes essential. In particular, block turbo codes (BTCs) and low-density parity-check (LDPC) codes are considered candidate FEC codes for next-generation systems, such as beyond-100Gbps optical networks and under-20nm NAND flash memory devices, which require capacity-approaching performance and a very low error floor. BTCs have definite strengths in diversity and encoding complexity because they generally employ a two-dimensional structure, which enables sub-frame level decoding of the row or column code-words. This sub-frame level decoding is a strong advantage for parallel processing. The BTC decoding throughput can be improved by applying a low-complexity algorithm to the sub-frame level decoding or by running multiple sub-frame decoding modules simultaneously. In this dissertation, we develop high-throughput BTC decoding software that pursues these advantages.
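To make the parallelism concrete, the following is a minimal sketch, not code from the dissertation, of one decoding iteration over the two-dimensional product-code array: every row and every column is an independent sub-frame, so all elementary decodings within a half-iteration can run concurrently. The identifiers (btc_iteration, decode_subframe) and the simplified extrinsic bookkeeping are illustrative assumptions only; a Chase-Pyndiah soft-in/soft-out decoder would replace the stub.

```cuda
// Hypothetical sketch of the two-dimensional BTC structure that makes decoding
// parallel-friendly: each row and each column is an independent sub-frame.
#include <vector>

using Frame = std::vector<float>;

// Stub for the elementary soft-in/soft-out decoder of one row or column code-word.
static Frame decode_subframe(const Frame& soft_input) {
    return Frame(soft_input.size(), 0.0f);   // a real decoder returns extrinsic values
}

// One iteration = a row half-iteration followed by a column half-iteration.
void btc_iteration(std::vector<Frame>& llr, float alpha) {
    const std::size_t n = llr.size();

    for (std::size_t r = 0; r < n; ++r) {    // row sub-frames: independent of each other
        Frame ext = decode_subframe(llr[r]);
        for (std::size_t c = 0; c < n; ++c) llr[r][c] += alpha * ext[c];
    }

    for (std::size_t c = 0; c < n; ++c) {    // column sub-frames: also independent
        Frame col(n);
        for (std::size_t r = 0; r < n; ++r) col[r] = llr[r][c];
        Frame ext = decode_subframe(col);
        for (std::size_t r = 0; r < n; ++r) llr[r][c] += alpha * ext[r];
    }
}
```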

The first part of this dissertation is devoted to finding efficient test patterns in the Chase-Pyndiah algorithm. Although the complexity of this algorithm increases linearly with the number of test patterns, the algorithm naively considers all possible patterns over the least reliable positions (LRPs); as a result, considering one more position nearly doubles the complexity. To solve this issue, we first introduce a new position selection criterion that excludes selected positions whose reliability is relatively large. By excluding sufficiently reliable positions, this technique greatly reduces the complexity. Secondly, we propose a pattern selection scheme that considers the error coverage. We define an error coverage factor that represents the influence of each pattern on the error-correcting performance and compute it by analyzing error events. Based on the computed factor, we select the patterns with a greedy algorithm. By using these methods, we can flexibly balance complexity against performance.
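As an illustration of the first idea, here is a hedged sketch assuming a Chase-II style decoder: candidate positions are taken in order of increasing |LLR| but dropped once their reliability exceeds a threshold, and test patterns are then enumerated over the surviving positions. The threshold rule and the identifiers (select_lrps, all_test_patterns) are assumptions for illustration; the dissertation's actual selection constraints and its error-coverage-based greedy pattern selection are more elaborate.

```cuda
// Hypothetical sketch of constrained test-pattern generation for a Chase-II
// style decoder; not the dissertation's selection criterion.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Pick up to p least reliable positions, skipping any whose |LLR| exceeds a
// reliability threshold (sufficiently reliable positions are unlikely to be in error).
std::vector<int> select_lrps(const std::vector<float>& llr, int p, float threshold) {
    std::vector<int> idx(llr.size());
    for (std::size_t i = 0; i < idx.size(); ++i) idx[i] = static_cast<int>(i);
    std::sort(idx.begin(), idx.end(),
              [&](int a, int b) { return std::fabs(llr[a]) < std::fabs(llr[b]); });

    std::vector<int> lrps;
    for (int i : idx) {
        if (static_cast<int>(lrps.size()) == p) break;
        if (std::fabs(llr[i]) > threshold) break;  // all remaining positions are even more reliable
        lrps.push_back(i);
    }
    return lrps;
}

// Enumerate all 2^|lrps| test patterns as bit masks over the selected positions.
std::vector<uint32_t> all_test_patterns(std::size_t num_lrps) {
    std::vector<uint32_t> patterns(1u << num_lrps);
    for (uint32_t m = 0; m < patterns.size(); ++m) patterns[m] = m;
    return patterns;
}
```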

The second part of this dissertation is devoted to developing the low-complexity soft-output processing methods needed for BTC decoding. In the Chase-Pyndiah algorithm, the soft output is updated in two different ways, depending on whether competing code-words exist at the position being updated. If competing code-words exist, the Euclidean distance between the soft-input signal and the code-words generated from the test patterns is used. However, the cost of this distance computation is very high and increases linearly with the sub-frame length. We identify computationally redundant positions and optimize the computing process by ignoring them. If no competing code-word exists, a reliability factor that must be pre-determined by an extensive search is required. To avoid this search, we propose adaptive determination methods, which provide even better error-correcting performance. In addition, we investigate Pyndiah's soft-output computation and identify drawbacks that appear in its approximation step. To remove these drawbacks, for the positions that are expected to be seriously damaged by the approximation, we replace the update with the much simpler reliability-factor-based one, even when competing code-words exist.
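For reference, the sketch below writes out the Pyndiah-style update in its commonly cited form, which this part takes as its starting point: when a competing code-word differing at position j exists, the soft output comes from the difference of squared Euclidean distances; otherwise the reliability factor beta is used. The per-position competitor list and the function names are illustrative assumptions, and none of the dissertation's redundancy-removal or adaptive-factor techniques are reproduced here.

```cuda
// Sketch of the classical Pyndiah extrinsic update (standard formulation,
// not the dissertation's optimized version).
#include <cstddef>
#include <vector>

// Squared Euclidean distance between the soft input and a bipolar code-word.
static float sq_distance(const std::vector<float>& r, const std::vector<int>& c) {
    float s = 0.0f;
    for (std::size_t j = 0; j < r.size(); ++j) s += (r[j] - c[j]) * (r[j] - c[j]);
    return s;
}

// r: soft input, d: decided ML code-word in {-1,+1},
// competitor[j]: best competing code-word differing from d at j (empty if none found),
// beta: reliability factor used when no competitor exists.
std::vector<float> extrinsic_update(const std::vector<float>& r,
                                    const std::vector<int>& d,
                                    const std::vector<std::vector<int>>& competitor,
                                    float beta) {
    const float dist_d = sq_distance(r, d);
    std::vector<float> w(r.size());
    for (std::size_t j = 0; j < r.size(); ++j) {
        if (!competitor[j].empty()) {
            // Soft output (|R-C|^2 - |R-D|^2)/4 * d_j; subtracting the input leaves the extrinsic part.
            const float dist_c = sq_distance(r, competitor[j]);
            w[j] = 0.25f * (dist_c - dist_d) * static_cast<float>(d[j]) - r[j];
        } else {
            w[j] = beta * static_cast<float>(d[j]);   // reliability-factor based update
        }
    }
    return w;
}
```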

This dissertation also develops a graphics processing unit (GPU) based BTC decoding program. In order to hide the latency of arithmetic and memory access operations, this software applies a kernel structure that processes multiple BTC code-words and allocates multiple sub-frames to each thread block. Global memory access optimization and data compression, which reduces the shared memory footprint, are also employed. For efficient mapping of the Chase-Pyndiah algorithm onto GPUs, we propose parallel processing schemes employing efficient reduction algorithms and provide step-by-step parallel algorithms for the algebraic decoding.
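As a small, self-contained illustration of the kind of reduction such shuffle-based schemes rely on, and not the dissertation's kernel, the CUDA snippet below lets one warp find the least reliable position in a 32-element chunk of a sub-frame with __shfl_down_sync. The kernel structure, memory-access optimization, and multi-code-word mapping described above are omitted; a full LRP search would repeat this reduction while masking out already-found positions.

```cuda
// Warp-level min-reduction with the CUDA shuffle intrinsic: find the smallest
// |LLR| and its position among 32 values.  Launch with a single warp.
#include <cstdio>
#include <cmath>

__global__ void warp_min_abs(const float* llr, int* out_pos, float* out_val) {
    int lane = threadIdx.x & 31;              // lane id within the warp
    float v = fabsf(llr[threadIdx.x]);        // this thread's reliability
    int   p = threadIdx.x;                    // and its position

    // Tree reduction: the distance between compared lanes halves each step.
    for (int offset = 16; offset > 0; offset >>= 1) {
        float other_v = __shfl_down_sync(0xffffffff, v, offset);
        int   other_p = __shfl_down_sync(0xffffffff, p, offset);
        if (other_v < v) { v = other_v; p = other_p; }
    }
    if (lane == 0) { *out_val = v; *out_pos = p; }   // lane 0 holds the minimum
}

int main() {
    const int N = 32;
    float h_llr[N];
    for (int i = 0; i < N; ++i) h_llr[i] = (i == 7) ? 0.05f : 1.0f + i;  // make position 7 least reliable

    float *d_llr, *d_val; int *d_pos;
    cudaMalloc(&d_llr, N * sizeof(float));
    cudaMalloc(&d_val, sizeof(float));
    cudaMalloc(&d_pos, sizeof(int));
    cudaMemcpy(d_llr, h_llr, N * sizeof(float), cudaMemcpyHostToDevice);

    warp_min_abs<<<1, N>>>(d_llr, d_pos, d_val);

    float v; int p;
    cudaMemcpy(&v, d_val, sizeof(float), cudaMemcpyDeviceToHost);
    cudaMemcpy(&p, d_pos, sizeof(int), cudaMemcpyDeviceToHost);
    printf("least reliable position %d (|LLR| = %.2f)\n", p, v);

    cudaFree(d_llr); cudaFree(d_val); cudaFree(d_pos);
    return 0;
}
```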

The last part of this dissertation is devoted to summarizing the developed decoding method and comparing it with the decoding of LDPC convolutional codes (LDPC-CCs), which are currently reported as the most powerful candidate for 100Gbps optical networks. We first investigate the complexity reduction and the error-rate performance improvement of the developed method. Then, we analyze the complexity of LDPC-CC decoding and compare it with the developed BTC decoding for 20% overhead codes.

This dissertation is intended to develop high-throughput SD decoding software by introducing complexity reduction techniques for the Chase-Pyndiah algorithm and efficient parallel processing methods, and to emphasize the competitiveness of BTCs. The proposed decoding methods and parallel processing algorithms, verified on GPU-based systems, are also applicable to hardware-based ones. By implementing hardware-based decoders that employ the methods developed in this dissertation, significant improvements in throughput and energy efficiency can be obtained. Moreover, thanks to the wide rate coverage of BTCs, the developed techniques can be applied to many high-throughput error correction applications, such as next-generation optical networks and storage device systems.
dc.description.tableofcontents:
Chapter 1 Introduction 1
1.1 Turbo Codes 1
1.2 Applications of Turbo Codes 4
1.3 Outline of the Dissertation 5

Chapter 2 Encoding and Iterative Decoding of Block Turbo Codes 7
2.1 Introduction 7
2.2 Encoding Procedure of Shortened-Extended BTCs 9
2.3 Scheduling Methods for Iterative Decoding 9
2.3.1 Serial Scheduling 10
2.3.2 Parallel Scheduling 10
2.3.3 Replica Scheduling 11
2.4 Elementary Decoding with Chase-Pyndiah Algorithm 13
2.4.1 Chase-Pyndiah Algorithm for Extended BTCs 13
2.4.2 Reliability Computation of the ML Code-Word 17
2.4.3 Algebraic Decoding for SEC and DEC BCH Codes 20
2.5 Issues of Chase-Pyndiah Algorithm 23

Chapter 3 Complexity Reduction Techniques for Code-Word Set Generation of the Chase-Pyndiah Algorithm 24
3.1 Introduction 24
3.2 Adaptive Selection of LRPs 25
3.2.1 Selection Constraints of LRPs 25
3.2.2 Simulation Results 26
3.3 Test Pattern Selection 29
3.3.1 The Error Coverage Factor of Test Patterns 30
3.3.2 Greedy Selection of Test Patterns 33
3.3.3 Simulation Results 34
3.4 Concluding Remarks 34

Chapter 4 Complexity Reduction Techniques for Soft-Output Update of the Chase-Pyndiah Algorithm 37
4.1 Introduction 37
4.2 Distance Computation 38
4.2.1 Position-Index List Based Method 39
4.2.2 Double Index Set-Based Method 42
4.2.3 Complexity Analysis 46
4.2.4 Simulation Results 47
4.3 Reliability Factor Determination 49
4.3.1 Refinement of Distance-Based Reliability Factor 51
4.3.2 Adaptive Determination of the Reliability Factor 51
4.3.3 Simulation Results 53
4.4 Accuracy Improvement in Extrinsic Information Update 54
4.4.1 Drawbacks of the Sub-Optimal Update 55
4.4.2 Low-Complexity Extrinsic Information Update 58
4.4.3 Simulation Results 59
4.5 Concluding Remarks 61

Chapter 5 High-Throughput BTC Decoding on GPUs 64
5.1 Introduction 64
5.2 BTC Decoder Architecture for GPU Implementations 66
5.3 Memory Optimization 68
5.3.1 Global Memory Access Reduction 68
5.3.2 Improvement of Global Memory Access Coalescing 68
5.3.3 Efficient Shared Memory Control with Data Compression 70
5.3.4 Index Parity Check Scheme 73
5.4 Parallel Algorithms with the CUDA Shuffle Function 77
5.5 Implementation of Algebraic Decoder 78
5.5.1 Galois Field Operations with Look-Up Tables 78
5.5.2 Error-Locator Polynomial Setting with the LUTs 81
5.5.3 Parallel Chien Search with the LUTs 84
5.6 Simulation Results 85
5.7 Concluding Remarks 89

Chapter 6 Competitiveness of BTCs as FEC codes for the Next-Generation Optical Networks 91
6.1 Introduction 91
6.2 The Complexity Reduction of the Modified Chase-Pyndiah Algorithm 92
6.2.1 Summary of the Complexity Reduction 92
6.2.2 The Error-Correcting Performance 94
6.3 Comparison of BTCs and LDPC-CCs 97
6.3.1 Complexity Analysis of the LDPC-CC Decoding 97
6.3.2 Comparison of the 20% Overhead BTC and LDPC-CC 100
6.4 Concluding Remarks 101

Chapter 7 Conclusion 102

Bibliography 105

Abstract in Korean (국문 초록) 113
dc.format: application/pdf
dc.format.extent: 10440166 bytes
dc.format.medium: application/pdf
dc.language.iso: en
dc.publisher: 서울대학교 대학원
dc.subject: Turbo codes
dc.subject: soft-decision error correction
dc.subject: Chase-Pyndiah algorithm
dc.subject: block turbo decoding
dc.subject: iterative decoding
dc.subject.ddc: 621
dc.title: Low-complexity block turbo code decoding for soft-decision error correction
dc.title.alternative: 연판정 오류정정을 위한 낮은 복잡도의 블록 터보부호 복호화 연구
dc.type: Thesis
dc.contributor.AlternativeAuthor: 조준희
dc.description.degree: Doctor
dc.citation.pages: 115
dc.contributor.affiliation: 공과대학 전기·컴퓨터공학부
dc.date.awarded: 2016-08