Block-Seminar on Deep Learning for Bio-Medical Data Analysis

apl. Prof. Olaf Ronneberger (DeepMind)

In this seminar you will learn about relevant bio-medical research fields and the most recent methods (mainly based on deep learning) that have already been applied to bio-medical data or that have large potential in this field. Generative models and unsupervised methods in particular have great potential to learn concepts from large non-annotated databases (see the recent DeepMind blog post "Unsupervised learning: the curious pupil"). Each paper is assigned to one participant, who investigates the paper and its background in detail and gives a presentation. The presentation is followed by a discussion with all participants about the merits and limitations of the respective paper. You will learn to read and understand contemporary research papers, to give a good oral presentation, to ask questions, and to openly discuss a research problem.

Due to the Corona crisis, the seminar will be held entirely online: the student presentations will be given via a teleconferencing tool using screen sharing, as will the discussions of the papers.

(2 SWS)
25. March, 9:00 - 18:00
Online via teleconference (unless university reopens)
Contact person: David Hoffmann

Beginning: Watch the following lectures before November 5:
About the seminar
Giving a good presentation
Proper scientific behavior
If you want to participate, register for the course in HisInOne, attend the Zoom meeting on November 5 at 11:00, and send an email with your name and your paper priorities (B1-B10, favorite paper first) to David Hoffmann before November 9.

Mid-Semester Meeting: 22. January, 10:15-12:00
The video conference link will be sent by e-mail
Introduction to Neural Networks by apl. Prof. Olaf Ronneberger (DeepMind)

ECTS Credits: 4

Recommended semester:

6 (Bachelor), any (Master)
Requirements: Background in computer vision

Remarks: This course is offered to both Bachelor and Master students. The language of this course is English. All presentations must be given in English.

There is a related Seminar on Current Works in Computer Vision offered by Prof. Thomas Brox

Topics for both seminars will be assigned via preference voting (detailed information will follow). Please register for the seminar online before the first meeting. If you could not register, still come to the introductory online meeting to see whether any papers are free. If there are more interested students than places, places will be assigned based on a mixture of the motivation shown in the first meeting and the priority suggestions of the voting system. The date of registration is NOT important. In particular, we want to avoid people grabbing a topic and then dropping out during the semester. Please take at least a cursory look at all available papers so you can make an informed decision before you commit. The listed papers are not yet sorted by presentation time.

Please get in contact with your advisor as soon as possible, and at least 4 weeks before your presentation.

Submit your presentation outline to your advisor at least 2 weeks before your presentation and meet with your advisor.

Submit your presentation slides to your advisor at least 1 week before your presentation and meet again.

All participants must read all papers and answer a few questions. The questions will be available here. The answers must be sent to the corresponding advisor by t.b.a. We highly recommend reading and understanding all papers before you start preparing your presentation.

Slides of the introductory lecture
Powerpoint template for your presentation (optional)


ID Paper Student   Advisor Slides  
B1 Bootstrap your own latent: A new approach to self-supervised Learning, in combination with the brief paper BYOL works even without batch statistics Lorraine Coelho Sudhanshu Mittal
B2 Unsupervised Learning of Visual Features by Contrasting Cluster Assignments Kai Haase Max Argus
B3 Fast training of contrastive learning with intermediate contrastive loss Christian Handschuh Yassine Marrakchi
B4 An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale Simon Knuijver Maria Bravo
B5 Generative Pretraining from Pixels Arjun Krishnakumar Tonmoy Saikia
B6 Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images Asmaa Khalid Artemij Amiranashvili
B7 Implicit Neural Representations with Periodic Activation Functions Mehran Ahkami Jan Bechtold
B8 Hybrid Models for Open Set Recognition Simon Schrodi Silvio Galesso
B9 Big Self-Supervised Models are Strong Semi-Supervised Learners Kinan Alzouabi Sudhanshu Mittal
B10 CrossTransformers: spatially-aware few-shot transfer Jan Ole von Hartz Tonmoy Saikia