
Deep Learning Applications for Communications (DLACOM)

Applications of machine learning in communications (Summer Semester)

Due to the current COVID-19 outbreak, no in-class sessions will be held until further notice. However, video recordings of the lectures as well as annotated lecture notes will be provided on ILIAS.


Artificial intelligence and machine learning are currently considered the most important general-purpose technology of our era, comparable to electricity and the combustion engine; their applications now extend into almost every industry and research domain. Deep learning in particular has led to many recent breakthroughs in various domains, such as computer vision, natural language processing, and speech recognition. The goal of this course is to introduce key concepts of deep learning, exemplified with applications in communications, such as an entirely neural network-based communications system that does not use any traditional algorithm. Deep learning for communications is still a novel field that offers many attractive interdisciplinary research questions at the interface between machine learning, communications engineering, and information theory. The course is hence a great opportunity to learn about cutting-edge research in communications and deep learning. A strong focus is put on practical implementation with state-of-the-art deep learning libraries, and each lecture will be accompanied by a Jupyter notebook with code examples.
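The "entirely neural network-based communications system" mentioned above refers to the autoencoder view of a transmitter, channel, and receiver. The following toy sketch illustrates only that structure in plain NumPy; the message set size, block length, SNR, and the use of untrained random weights are illustrative assumptions, not course material (the lectures use a proper deep learning library and trained models).

```python
import numpy as np

rng = np.random.default_rng(0)

M = 16        # number of messages (4 bits); illustrative choice
n = 7         # real channel uses per message; illustrative choice
snr_db = 7.0  # assumed signal-to-noise ratio

# Transmitter: one-hot message -> dense layer -> power-normalized symbols.
# A row lookup in W_tx equals a dense layer applied to a one-hot input.
W_tx = rng.normal(scale=0.1, size=(M, n))

def transmit(msg):
    x = W_tx[msg]
    return x / np.linalg.norm(x) * np.sqrt(n)  # enforce average power constraint

# AWGN channel between the two trainable halves
def channel(x):
    sigma = np.sqrt(1.0 / (2 * 10 ** (snr_db / 10)))
    return x + sigma * rng.normal(size=x.shape)

# Receiver: dense layer -> softmax distribution over the M messages
W_rx = rng.normal(scale=0.1, size=(n, M))

def receive(y):
    logits = y @ W_rx
    p = np.exp(logits - logits.max())
    return p / p.sum()

msg = 3
probs = receive(channel(transmit(msg)))
print(probs.shape)  # distribution over all 16 candidate messages
```

Training would adjust `W_tx` and `W_rx` jointly by backpropagating a cross-entropy loss through the (differentiable) channel model, which is exactly what makes the end-to-end system learnable without any hand-designed modulation or coding.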

Contents and Educational Objectives

1. Introduction (JH)
- Hype around artificial intelligence and deep learning
- Some historical remarks
- Role of machine learning for future communications systems
- Course environment: Git, Docker, Jupyter
2. A Primer on Deep Learning (JH)
- Neural networks
- Universal approximation theorem, approximation and estimation bounds
- Backpropagation
- Stochastic gradient descent
- Gradient descent optimization algorithms
- Capacity, overfitting, and underfitting
- Regularization
3. Introduction to Python and Tensorflow (SD&SC)
4. Deep Learning-based User Localization (SD)
5. Convolutional Neural Networks: Modulation Classification (JH)
6. Deep Unfolding: Neural Belief Propagation (SC)
7. Recurrent Neural Networks: Decoding of Convolutional Codes (StB)
8. Residual Nets: Deep MIMO Detection (JH)
9. Autoencoders: Learning to Communicate (SC)
10. Using Neural Networks in Software-Defined Radio (SD)
11. Generative Adversarial Networks: Channel Modeling (SD)
12. Open Ends, Summary and Outlook (JH)
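The backpropagation and stochastic gradient descent topics of lecture 2 can be sketched in a few lines of plain NumPy. The network size, toy target function y = x², and all hyperparameters below are illustrative assumptions, not taken from the course material:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data: learn y = x^2 on [-1, 1]
X = rng.uniform(-1, 1, size=(256, 1))
Y = X ** 2

# Parameters of a 1-16-1 network with tanh hidden activation
W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05  # learning rate (illustrative)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

losses = []
for epoch in range(200):
    # Stochastic gradient descent: shuffle, then sweep mini-batches of 32
    idx = rng.permutation(len(X))
    for k in range(0, len(X), 32):
        xb, yb = X[idx[k:k + 32]], Y[idx[k:k + 32]]
        h, y_hat = forward(xb)
        err = y_hat - yb                       # dL/dy_hat for 0.5 * MSE
        # Backpropagation: chain rule through output and hidden layers
        gW2 = h.T @ err / len(xb); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)       # tanh'(z) = 1 - tanh(z)^2
        gW1 = xb.T @ dh / len(xb); gb1 = dh.mean(0)
        # Gradient step on all parameters
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    losses.append(float(np.mean((forward(X)[1] - Y) ** 2)))

print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The lectures derive these update rules properly and then hand the bookkeeping over to the automatic differentiation of a framework such as TensorFlow; the manual gradients here only make the chain rule explicit.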

Course Information

3 ECTS Credits (beginning with SS2020)

Lecturers Dr.-Ing. Jakob Hoydis, Sebastian Cammerer, Sebastian Dörner, and Prof. Dr.-Ing. Stephan ten Brink
Time Slot Thursday, 14:00-15:30
Lecture Hall 2.348 (ETI2)
Weekly Credit Hours 2
Lecturers Sebastian Cammerer, Sebastian Dörner, and Tim Uhlemann
Time Slot TBD
Lecture Hall 2.348 (ETI2)
Weekly Credit Hours TBD, will be interleaved with the lecture

