Next-Flow: Hybrid Multi-Tasking with Next-Frame Prediction to Boost Optical-Flow Estimation in the Wild

Technical Report, arXiv:1612.03777, 2016
Download the publication: 1612.03777v1.pdf [2.4 MB]

Abstract: CNN-based optical flow estimators have attracted attention recently, mainly due to their impressive speed. As successful as they have been on synthetic datasets, they still lag far behind classical methods in real-world scenarios, mainly due to the lack of flow ground truth. In this work, we seek to boost CNN-based flow estimation in real scenes with the help of the freely available self-supervised task of next-frame prediction. To this end, we train the network in a hybrid way, providing it with a mixture of synthetic and real videos. With the help of a novel time-variant multi-tasking architecture, the network decides which of the tasks to learn at each point in time, depending on the availability of ground truth. Our proposed method improves optical flow estimation in real scenes dramatically. We also experiment with predicting the "next flow" instead of estimating the current flow, which is intuitively more related to the task of next-frame prediction and improves the results even further. We report and evaluate results both qualitatively and quantitatively. For the latter, we train a single-stream action classifier on the estimated flow fields of UCF101 and HMDB51 and demonstrate large accuracy improvements over the baseline: 10.2% and 9.6%, respectively. As a side product of this work, we also report significant improvements over state-of-the-art results on the task of next-frame prediction.
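The hybrid training idea in the abstract can be illustrated with a minimal sketch: each sample contributes either a supervised flow loss (synthetic clips, which carry flow ground truth) or a self-supervised next-frame reconstruction loss (real clips, which do not). The function names, the dictionary layout, and the choice of L1 reconstruction loss below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def epe(flow_pred, flow_gt):
    """Average endpoint error between predicted and ground-truth flow
    fields of shape (H, W, 2)."""
    return float(np.mean(np.sqrt(np.sum((flow_pred - flow_gt) ** 2, axis=-1))))

def frame_loss(frame_pred, frame_next):
    """Self-supervised next-frame prediction loss (L1 reconstruction
    is an assumption here; the paper may use a different penalty)."""
    return float(np.mean(np.abs(frame_pred - frame_next)))

def hybrid_loss(outputs, sample):
    """Pick the task per sample, depending on ground-truth availability:
    synthetic clips supply flow ground truth (supervised branch),
    real clips only supply the actual next frame (self-supervised branch)."""
    if sample.get("flow_gt") is not None:  # synthetic data with ground truth
        return epe(outputs["flow"], sample["flow_gt"])
    return frame_loss(outputs["next_frame"], sample["next_frame"])
```

In a training loop, this switch is what lets a single network consume a mixed stream of synthetic and real videos, learning whichever task the current sample supports.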


BibTex references

@TechReport{Sed16,
  author       = "Nima Sedaghat",
  title        = "Next-Flow: Hybrid Multi-Tasking with Next-Frame Prediction to Boost Optical-Flow Estimation in the Wild",
  institution  = "arXiv:1612.03777",
  month        = " ",
  year         = "2016",
  url          = "http://lmb.informatik.uni-freiburg.de//Publications/2016/Sed16"
}
