Block-Seminar on Deep Learning

apl. Prof. Olaf Ronneberger (Google DeepMind)

In this seminar you will learn about recent developments in deep learning, with a focus on images and videos and their combination with other modalities such as language. The surprising emergent capabilities of large language models (like GPT-4) open up new design spaces. Many classic computer vision tasks can be translated into the language domain and (partially) solved there. Understanding the current capabilities, shortcomings, and approaches in the language domain will be essential for future computer vision research. The selected papers this year therefore focus on the key concepts used in today's large language models as well as on approaches that combine computer vision with language.

For each paper there will be one student who performs a detailed investigation of the research paper and its background and gives a presentation (time limit: 35-40 minutes). The presentation is followed by a discussion with all participants about the merits and limitations of the respective paper. You will learn to read and understand contemporary research papers, to give a good oral presentation, to ask questions, and to openly discuss a research problem. The maximum number of students that can participate in the seminar is 10.

The introduction meeting (together with Thomas Brox's seminar) will be in person, while the mid-semester meeting will be online. The block seminar itself will be in person to give you the chance to practise your real-world presentation skills and to have more lively discussions.

Contact person: David Hoffmann

(2 SWS)
(date tba, two days, in person, between mid of July and end of September)

Beginning: If you want to participate, attend the mandatory introduction meeting (held jointly with the Seminar on Current Works in Computer Vision) on April 17th, 14:00, register in HisInOne, and submit your paper preferences before April 22nd.

Mid-Semester Lecture: (date tba, 2 hours, via video conference) Introduction to Generative models by apl. Prof. Olaf Ronneberger (Google DeepMind)

Recommended semester:

6 (Bachelor), any (Master)
Requirements: Background in computer vision

Remarks: This course is offered to both Bachelor and Master students. The language of this course is English. All presentations must be given in English.

Topics will be assigned for both seminars via preference voting. If there are more interested students than places, first priority will be given to students who attended the introduction meeting. Afterwards, we follow the assignments of the HisInOne system. We want to avoid people grabbing a topic and then dropping out during the semester, so please take at least a coarse look at all available papers to make an informed decision before you commit. If you do not attend the meeting (or do not send a paper preference) but choose this seminar together with only other overbooked seminars in HisInOne, you may end up without a seminar place this semester.

Students who only need the attendance credit (failed SL from a previous semester) need not send a preference for a paper; just reply with "SL only".


[Image: GPT4(V)ision example]


From Thomas Brox's seminar:


ID  | Paper                                                                                               | Comment / project page | Student          | Advisor
B1  | Intriguing properties of generative classifiers                                                     |                        | Saiger, Sven     | Simon Schrodi
B2  | Mirasol3B: A Multimodal Autoregressive model for time-aligned and contextual modalities             |                        | Diener, Arthur   | Arian Mousakhan
B3  | SODA: Bottleneck Diffusion Models for Representation Learning                                       |                        | Kundu, Rounack   | Leonhard Sommer
B4  | Visual Program Distillation: Distilling Tools and Programmatic Reasoning into Vision-Language Models |                        | Gaillard, Yaelle | David Hoffmann
B5  | Bad Students Make Great Teachers: Active Learning Accelerates Large-Scale Visual Understanding      |                        | Mossio, Giacomo  | Simon Ging
B6  | Memory Consolidation Enables Long-Context Video Understanding                                       |                        | Mackert, Michel  | Jelena Bratulic
B7  | Gemma: Open Models Based on Gemini Research and Technology                                          |                        | Mraz, Martin     | Max Argus
B8  | When Do We Not Need Larger Vision Models?                                                           |                        | Barbera, Enrico  | Arian Mousakhan
B9  | Mixture-of-Depths: Dynamically allocating compute in transformer-based language models              |                        | Baumann, Hannes  | Simon Ging
B10 | Scaling Laws for Data Filtering - Data Curation cannot be Compute Agnostic                          |                        | Jansen, Hendrik  | Silvio Galesso