|uhm_phd_9519442_r.pdf||Restricted for viewing only||4.53 MB||Adobe PDF|
|uhm_phd_9519442_uh.pdf||For UH users only||4.47 MB||Adobe PDF|
|Title:||Decoding of linear block codes based on ordered statistics|
|Authors:||Fossorier, Marc P.C.|
|Keywords:||Error-correcting codes (Information theory)|
|Abstract:||This dissertation presents a novel approach to soft-decision decoding of binary linear block codes. The basic idea of this approach is to achieve a desired error performance progressively, in a number of stages. The error performance of each decoding stage is tightly bounded, and decoding terminates at the stage where either near-optimum error performance or a desired level of error performance is achieved. As a result, more flexibility in the trade-off between performance and decoding complexity is provided. The proposed decoding is based on reordering the received symbols according to their reliability measure. In the dissertation, we evaluate the statistics of the noise after ordering. Based on these statistics, we derive two monotonic properties that dictate the reprocessing strategy. Each codeword is decoded in two steps: (1) hard-decision decoding based on reliability information, and (2) reprocessing of the hard-decision decoded codeword in successive stages until the desired performance is achieved. The reprocessing is based on the monotonic properties of the ordering and is carried out using a cost function. A new resource test, closely related to the reprocessing strategy, is introduced to reduce the number of computations at each reprocessing stage. For short codes of length N ≤ 32, or medium-length codes with 32 < N ≤ 64 and rate R ≥ 0.6, near-optimum bit error performance is achieved in two stages of reprocessing with a computational complexity of at most O(K²) constructed codewords, where K is the dimension of the code. For longer codes, three or more reprocessing stages are required to achieve near-optimum decoding; however, most of the coding gain is obtained within the first two reprocessing stages for error performances of practical interest. The proposed decoding algorithm applies to any binary linear code, requires no data storage, and is well suited to parallel processing.
Furthermore, the maximum number of computations required at each reprocessing stage is fixed, which prevents buffer overflow at low SNR. Further reductions in the number of computations, based on probabilistic or decomposable properties of the code considered, are also investigated. The generalization of the algorithm to other channels is discussed. Finally, the discrete-time channel associated with the AWGN model after ordering is greatly simplified. This simplification allows an important reduction in the number of computations required to evaluate the theoretical error performance of any algorithm based on a partial or total ordering.|
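The two-step structure described in the abstract (reliability ordering and hard decision, then staged reprocessing of flipped bit patterns) can be sketched as follows. This is a minimal order-1 illustration assuming BPSK over AWGN and a plain Euclidean-distance cost; it omits the dissertation's monotonic-property pruning and resource test, so it is an illustration of the general ordered-statistics idea, not the proposed algorithm itself.

```python
# Minimal ordered-statistics decoding (OSD) sketch for a binary linear
# block code over BPSK/AWGN. Illustrative only: exhaustive reprocessing
# up to a given order, with no pruning or resource test.
from itertools import combinations
import numpy as np

def osd_decode(G, r, order=1):
    """Decode real-valued received vector r (BPSK map: 0 -> +1, 1 -> -1)
    with order-`order` ordered-statistics reprocessing over generator G."""
    K, N = G.shape
    # Sort positions by reliability |r_i|, most reliable first.
    perm = np.argsort(-np.abs(r))
    G1 = G[:, perm].astype(int).copy()
    cols = np.arange(N)  # track column swaps made during elimination
    # Gaussian elimination over GF(2): put the K most reliable
    # *independent* positions into systematic (identity) form.
    for row in range(K):
        for j in range(row, N):  # leftmost column with a usable pivot
            pivots = np.nonzero(G1[row:, j])[0]
            if pivots.size:
                break
        G1[:, [row, j]] = G1[:, [j, row]]
        cols[[row, j]] = cols[[j, row]]
        p = pivots[0] + row
        G1[[row, p]] = G1[[p, row]]
        for i in range(K):
            if i != row and G1[i, row]:
                G1[i] ^= G1[row]
    full_perm = perm[cols]          # ordered index -> original position
    r_ord = r[full_perm]
    # Step 1: hard decision on the K most reliable independent bits.
    a = (r_ord[:K] < 0).astype(int)
    # Step 2: reprocess every flip pattern of weight <= `order`,
    # keeping the candidate codeword with the smallest cost.
    best_c, best_cost = None, np.inf
    for w in range(order + 1):
        for flips in combinations(range(K), w):
            a2 = a.copy()
            a2[list(flips)] ^= 1
            c = a2 @ G1 % 2
            cost = np.sum((r_ord - (1 - 2 * c)) ** 2)  # Euclidean distance
            if cost < best_cost:
                best_cost, best_c = cost, c
    out = np.empty(N, dtype=int)    # undo the ordering permutation
    out[full_perm] = best_c
    return out
```

Note that order-`i` reprocessing constructs at most sum_{w<=i} C(K, w) candidate codewords, which for two stages is on the order of K² encodings, matching the complexity figure quoted in the abstract.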
|Description:||Thesis (Ph. D.)--University of Hawaii at Manoa, 1994.|
Includes bibliographical references (leaves 177-182).
xvii, 182 leaves, bound : ill. ; 29 cm
|Rights:||All UHM dissertations and theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission from the copyright owner.|
|Appears in Collections:||Ph.D. - Electrical Engineering|