Object segmentation in videos by analysing point trajectories
Grant BR 3815/5-1
Project members
Prof. Dr. Thomas Brox
Peter Ochs
Abstract
Supervised learning currently dominates the learning of object representations. It requires manually annotated (ideally segmented) training images, and performance improves with larger datasets. Since acquiring such annotated data is tedious, there are considerable efforts to solve the recognition problem with as little data as possible. This constraint seems artificial, given that a child has already seen about two billion images by the age of two. In this project, the natural ordering of images in videos is exploited to segment objects fully automatically, replacing the largest part of the annotation effort with an unsupervised approach. In particular, objects are segmented by their motion. In contrast to earlier work, videos are analysed over longer periods of several seconds to account for the temporally varying quality of the motion signal.
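To illustrate the core idea, the sketch below groups point trajectories by the similarity of their motion using spectral clustering. It is a minimal, hypothetical example rather than the project's actual method: the trajectory format, the affinity definition, the sigma and num_objects parameters, and the toy data are all assumptions, and the resulting clusters are only a sparse labelling of tracked points that would still have to be turned into dense per-frame regions.

```python
# Minimal sketch (not the project's implementation): cluster point trajectories
# by motion similarity. Trajectories are assumed to be an array of shape
# (num_tracks, num_frames, 2) holding (x, y) positions per frame.
import numpy as np
from sklearn.cluster import SpectralClustering


def motion_affinity(tracks, sigma=1.0):
    """Pairwise affinity from the largest frame-to-frame motion difference
    between two trajectories over their shared time window (illustrative)."""
    velocities = np.diff(tracks, axis=1)          # (N, T-1, 2) displacements
    n = tracks.shape[0]
    affinity = np.zeros((n, n))
    for i in range(n):
        # Distance of track i to every track: maximum per-frame velocity difference.
        diff = np.linalg.norm(velocities - velocities[i], axis=2).max(axis=1)
        affinity[i] = np.exp(-(diff ** 2) / (2 * sigma ** 2))
    return affinity


def segment_trajectories(tracks, num_objects=2):
    """Assign each trajectory to a motion cluster; each cluster is a sparse
    object hypothesis, not yet a dense segmentation."""
    affinity = motion_affinity(tracks)
    labels = SpectralClustering(
        n_clusters=num_objects, affinity="precomputed", random_state=0
    ).fit_predict(affinity)
    return labels


if __name__ == "__main__":
    # Toy data: two groups of points translating in opposite horizontal directions.
    rng = np.random.default_rng(0)
    t = np.arange(30, dtype=float)
    drift = np.stack([t, 0 * t], axis=1)          # (30, 2) per-frame offsets
    left = rng.uniform(0, 50, (20, 2))[:, None, :] + drift
    right = rng.uniform(50, 100, (20, 2))[:, None, :] - drift
    tracks = np.concatenate([left, right], axis=0)  # (40, 30, 2)
    print(segment_trajectories(tracks, num_objects=2))
```

On the toy data the first 20 tracks share one label and the last 20 the other, since trajectories within a group move identically while the two groups move apart; real videos would require robust tracking and an affinity that tolerates the temporally varying quality of the motion signal.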
Publications
See also our research page on Image and Video segmentation.
IEEE International Conference on Computer Vision (ICCV), 2015
IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(6): 1187-1200, June 2014
IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2012
Object segmentation in video: a hierarchical variational approach for turning point trajectories into dense regions
IEEE International Conference on Computer Vision (ICCV), 2011
Paper | Code/data