Block-Seminar on Deep Learning
apl. Prof. Olaf Ronneberger (Google DeepMind)

In this seminar you will learn about recent developments in deep learning, with a focus on images and videos and their combination with other modalities such as language. The surprising emergent capabilities of large language models (like GPT-4) open up new design spaces. Many classic computer vision tasks can be translated into the language domain and (partially) solved there. Understanding the current capabilities, shortcomings, and approaches in the language domain will be essential for future computer vision research. The selected papers this year therefore focus on the key concepts used in today's large language models, as well as approaches for combining computer vision with language.
For each paper there will be one participant who investigates the research paper and its background in detail and gives a presentation (time limit: 35-40 minutes). The presentation is followed by a discussion with all participants about the merits and limitations of the respective paper. You will learn to read and understand contemporary research papers, to give a good oral presentation, to ask questions, and to openly discuss a research problem. The maximum number of students that can participate in the seminar is 10.
The introduction meeting (together with Thomas Brox's seminar) will be in person, while the mid-semester meeting will be online. The block seminar itself will be in person to give you the chance to practise your real-world presentation skills and to have more lively discussions.
Contact person: David Hoffmann
GPT4(V)ision example
Material
from Thomas Brox's seminar:
- Giving a good presentation
- Proper scientific behavior
- PowerPoint template for your presentation (optional)
Schedule
Thursday, 29th August 2024
| Time | ID | Paper | Student | Advisor |
| --- | --- | --- | --- | --- |
| 09:30 | B7 | Gemma: Open Models Based on Gemini Research and Technology | Mraz, Martin | Max Argus |
| 10:30 | B9 | Mixture-of-Depths: Dynamically allocating compute in transformer-based language models | Baumann, Hannes | Simon Ging |
| 11:30 | B8 | When Do We Not Need Larger Vision Models? | Barbera, Enrico | Arian Mousakhan |
| 12:30 | | Lunch break | | |
| 13:30 | B6 | Memory Consolidation Enables Long-Context Video Understanding | Mackert, Michel | Jelena Bratulic |
| 14:30 | B2 | Mirasol3B: A Multimodal Autoregressive model for time-aligned and contextual modalities | Diener, Arthur | Arian Mousakhan |
Friday, 30th August 2024
| Time | ID | Paper | Student | Advisor |
| --- | --- | --- | --- | --- |
| 09:30 | B4 | Visual Program Distillation: Distilling Tools and Programmatic Reasoning into Vision-Language Models | Gaillard, Yaelle | David Hoffmann |
| 10:30 | B5 | Bad Students Make Great Teachers: Active Learning Accelerates Large-Scale Visual Understanding | Mossio, Giacomo | Simon Ging |
| 11:30 | B10 | Scaling Laws for Data Filtering - Data Curation cannot be Compute Agnostic | Jansen, Hendrik | Silvio Galesso |
| 12:30 | | Lunch break | | |
| 13:30 | B1 | Intriguing properties of generative classifiers | Saiger, Sven | Simon Schrodi |
| 14:30 | B3 | SODA: Bottleneck Diffusion Models for Representation Learning | Kundu, Rounack | Leonhard Sommer |