Conference paper Open Access
Larue, Guillaume; Dufrene, Louis-Adrien; Lampin, Quentin; Chollet, Paul; Ghauch, Hadi; Rekaya, Ghaya
Abstract - Neural belief propagation (NBP) decoders were recently introduced by Nachmani et al. as a way to improve the decoding performance of the iterative belief propagation algorithm for short- to medium-length linear block codes. The main idea behind these decoders is to represent belief propagation as a neural network, enabling adaptive weighting of the decoding process. In the present paper, an efficient recurrent neural network architecture, based on gating and weight-sharing mechanisms, is proposed to perform blind neural belief propagation decoding without prior knowledge of the coding scheme used by the encoder. The proposed architecture learns to decode BCH (15,11) and BCH (15,7) codes at least as well as a standard belief propagation algorithm, and even outperforms it in the case of the BCH (15,11) code thanks to the NBP approach. Particular emphasis is given to the interpretability and complexity of the proposed model to ensure scalability to larger codes.
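The core idea the abstract refers to, weighting the messages of belief propagation, can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's architecture: it runs weighted min-sum belief propagation on a Hamming (7,4) parity-check matrix (chosen for brevity; the paper uses BCH (15,7) and BCH (15,11)), with one multiplicative weight per iteration on the check-to-variable messages. In an NBP decoder these weights are trainable parameters learned by gradient descent; here they are simply supplied by the caller, and unit weights recover plain min-sum decoding.

```python
import numpy as np

# Parity-check matrix of the Hamming (7,4) code -- a small stand-in for
# the BCH codes used in the paper.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def weighted_min_sum_decode(llr, H, weights, n_iter=5):
    """Min-sum belief propagation with a multiplicative weight per
    iteration on the check-to-variable messages (the NBP idea, with the
    weights fixed here instead of learned).  `llr` holds the channel
    log-likelihood ratios, positive values favouring bit 0."""
    m, n = H.shape
    c2v = np.zeros((m, n))  # check-to-variable messages, one per edge
    for it in range(n_iter):
        # Variable-to-check: channel LLR plus all incoming check
        # messages except the one from the target check node.
        v2c = (llr + c2v.sum(axis=0)) - c2v
        v2c = np.where(H == 1, v2c, 0.0)
        # Check-to-variable: min-sum update, scaled by this
        # iteration's weight.
        for i in range(m):
            idx = np.flatnonzero(H[i])
            signs = np.sign(v2c[i, idx])
            mags = np.abs(v2c[i, idx])
            for k, j in enumerate(idx):
                others = np.delete(np.arange(len(idx)), k)
                c2v[i, j] = (weights[it]
                             * np.prod(signs[others])
                             * np.min(mags[others]))
    posterior = llr + c2v.sum(axis=0)
    return (posterior < 0).astype(int)  # hard decision: LLR < 0 -> bit 1
```

With unit weights the decoder corrects a single unreliable bit: for the all-zero codeword with one negative-LLR position, e.g. `llr = np.array([2.5, 2.0, 3.0, 2.2, -0.5, 2.8, 2.4])`, `weighted_min_sum_decode(llr, H, np.ones(5))` returns the all-zero word. Training the per-iteration (or, in the paper's shared-weight recurrent formulation, tied) weights is what lets NBP outperform the unweighted algorithm.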