
Sebastian Cammerer, M. Sc.

Research Assistant
Institut für Nachrichtenübertragung


+49 711 685-67941

Pfaffenwaldring 47
70569 Stuttgart
Room: 2.344



Channel Coding: Spatially Coupled LDPC Codes and Polar Codes

Nowadays, block LDPC codes are widely used for forward error correction (FEC) in several important standards such as DVB-S2, WiMAX, and IEEE 802.11n. LDPC codes are usually decoded with the low-complexity belief propagation (BP) algorithm. This decoder is powerful, simple to construct for any arbitrary LDPC code, and enables a soft-in/soft-out (SISO) decoder with a highly parallel architecture. However, practical codes, constrained by hardware complexity, the number of decoding iterations, and finite-length effects, suffer from a gap between the BP performance and the maximum a posteriori (MAP) performance, which results in a gap to the channel capacity.
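The BP decoder mentioned above can be sketched in a few lines. The following is an illustrative sum-product decoder, shown here on the tiny (7,4) Hamming code for brevity; this code and the specific numbers are chosen by me for the example and are not part of the standards listed above:

```python
import math

# Parity-check matrix of the (7,4) Hamming code (used only as a small
# illustrative example; real LDPC matrices are far larger and sparser).
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def bp_decode(llr, H, max_iter=20):
    """Sum-product (belief propagation) decoding from channel LLRs.

    Convention: positive LLR means bit 0 is more likely.
    """
    m, n = len(H), len(H[0])
    # Edge messages: variable-to-check (v2c) and check-to-variable (c2v).
    v2c = [[llr[j] for j in range(n)] for _ in range(m)]
    c2v = [[0.0] * n for _ in range(m)]
    hard = [0] * n
    for _ in range(max_iter):
        # Check-node update (tanh rule).
        for i in range(m):
            for j in range(n):
                if H[i][j]:
                    prod = 1.0
                    for k in range(n):
                        if H[i][k] and k != j:
                            prod *= math.tanh(v2c[i][k] / 2.0)
                    prod = max(min(prod, 0.999999), -0.999999)
                    c2v[i][j] = 2.0 * math.atanh(prod)
        # Variable-node update and a-posteriori LLRs.
        total = list(llr)
        for i in range(m):
            for j in range(n):
                if H[i][j]:
                    total[j] += c2v[i][j]
        for i in range(m):
            for j in range(n):
                if H[i][j]:
                    v2c[i][j] = total[j] - c2v[i][j]
        hard = [0 if t >= 0 else 1 for t in total]
        # Early stop once all parity checks are satisfied.
        if all(sum(H[i][j] * hard[j] for j in range(n)) % 2 == 0
               for i in range(m)):
            break
    return hard

# All-zero codeword over AWGN; the last bit is received unreliably
# with the wrong sign. BP pulls it back via the parity checks.
llr = [4.0, 4.0, 4.0, 4.0, 4.0, 4.0, -0.5]
print(bp_decode(llr, H))  # decodes back to the all-zero codeword
```

Each message depends only on its local neighborhood in the Tanner graph, which is exactly why the decoder maps so naturally onto a highly parallel (e.g., GPU) architecture.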
Spatially coupled low-density parity-check (SC-LDPC) codes can achieve the channel capacity under low-complexity belief propagation (BP) decoding, i.e., their BP performance approaches the MAP performance. These codes typically show a very low error floor, which makes them promising candidates for modern communication standards. For practical finite coupling lengths, however, there is a non-negligible rate loss due to termination effects.
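The termination rate loss can be made concrete with a simple protograph count, which is a simplification I am adding for illustration (it assumes all check equations are linearly independent, which slightly underestimates the true rate):

```python
def terminated_rate(dv, dc, L, w):
    """Approximate design rate of a terminated spatially coupled
    (dv, dc)-regular ensemble: L coupled variable positions, but
    L + w - 1 check positions due to the termination, where w is
    the coupling width. Simplified protograph count, assuming all
    checks are linearly independent."""
    return 1.0 - (dv / dc) * (L + w - 1) / L

# The uncoupled (3,6)-regular design rate is 1 - 3/6 = 0.5.
# The terminated coupled rate approaches it only as L grows:
for L in (10, 50, 200):
    print(L, terminated_rate(dv=3, dc=6, L=L, w=3))
```

The loss scales as O(1/L): for L = 10 the rate drops to 0.4, while for L = 200 it is already 0.495, illustrating why finite coupling lengths matter in practice.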

In order to simulate such low error probabilities, we are currently developing a simulation cluster based on graphics cards (GPUs) and Nvidia's CUDA programming language. This setup allows simulating an arbitrary LDPC or polar code down to bit error rates (BER) of 10^-9 to 10^-10 within a feasible period of time.
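The need for GPU acceleration follows directly from how Monte Carlo BER estimation scales; a back-of-the-envelope sketch (the "collect ~100 errors" rule is a common heuristic I am assuming here, not a detail from this page):

```python
def bits_needed(target_ber, n_errors=100):
    """Monte Carlo rule of thumb: to estimate a BER of `target_ber`
    with reasonable confidence, one wants on the order of `n_errors`
    observed bit errors, i.e. roughly n_errors / target_ber
    simulated bits."""
    return round(n_errors / target_ber)

for ber in (1e-6, 1e-9, 1e-10):
    print(f"BER {ber:.0e}: ~{bits_needed(ber):.1e} simulated bits")
```

At a BER of 10^-10 this means on the order of 10^12 simulated and decoded bits per operating point, which is infeasible on a CPU but tractable on a GPU cluster running many decoder instances in parallel.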

One goal of my research is to investigate and understand the effects of spatial coupling, and to use this knowledge to design capacity-approaching codes with good finite-length properties and low decoding complexity.


Machine Learning for Communications
We revisit the idea of using deep neural networks for one-shot decoding of random and structured codes, such as polar codes. Although it is possible to achieve maximum a posteriori (MAP) bit error rate (BER) performance for both code families and for short codeword lengths, we observe that (i) structured codes are easier to learn and (ii) the neural network is able to generalize to codewords that it has never seen during training for structured, but not for random, codes. These results provide some evidence that neural networks can learn a form of decoding algorithm, rather than only a simple classifier.

Deep learning-based channel decoding is doomed by the curse of dimensionality: for a short code of length N = 100 and rate r = 0.5, 2^50 different codewords exist, which are far too many to fully train any neural network (NN) in practice. The only way a NN can be trained for practical blocklengths is if it learns some form of decoding algorithm that can infer the full codebook from training on a small fraction of codewords. However, to be able to learn a decoding algorithm, the code itself must have some structure based on a simple encoding rule, as in the case of convolutional or algebraic codes. The goal of our work is to shed some light on the question whether structured codes are easier to "learn" than random codes, and whether a NN can decode codewords that it has never seen during training.
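The dimensionality argument above is easy to verify numerically; the training-set size below is my own illustrative assumption, not a figure from this work:

```python
def codebook_size(n, rate):
    """Number of codewords of a rate-r, length-n binary code: 2^(r*n)."""
    k = int(n * rate)  # number of information bits
    return 2 ** k

# The short code discussed above: N = 100, r = 0.5 -> 2^50 codewords.
full = codebook_size(100, 0.5)
print(full)  # 1125899906842624

# Even a generous training set (here: 2^16 codewords, an assumed value)
# covers only a vanishing fraction of the codebook:
train = 2 ** 16
print(train / full)  # ~5.8e-11 -> the NN must generalize, not memorize
```

This is why observing generalization to unseen codewords (for structured codes) is the interesting result: a pure lookup-table classifier could never cover the codebook.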


Always looking for students interested in

- Channel Coding (LDPC codes, Polar codes)

- Machine learning/Deep Learning for Communications

- GPU Programming (C/C++ and CUDA)


Winter term 2015/2016: Exercises Modern Error Correction

Summer term 2016: Exercises Nachrichtentechnik 2

Winter term 2016/2017: Exercises Modern Error Correction
