DeMoN: Depth and Motion Network for Learning Monocular Stereo

Technical Report, arXiv:1612.02401, 2016
Abstract: In this paper we formulate structure from motion as a learning problem. We train a convolutional network end-to-end to compute depth and camera motion from successive, unconstrained image pairs. The architecture is composed of multiple stacked encoder-decoder networks, the core part being an iterative network that is able to improve its own predictions. The network estimates not only depth and motion, but additionally surface normals, optical flow between the images and confidence of the matching. A crucial component of the approach is a training loss based on spatial relative differences. Compared to traditional two-frame structure from motion methods, results are more accurate and more robust. In contrast to the popular depth-from-single-image networks, DeMoN learns the concept of matching and, thus, better generalizes to structures not seen during training.
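The abstract highlights a training loss based on spatial relative differences. As a minimal sketch of that idea (not the authors' implementation; function names, the normalization, and the single-scale setup are illustrative assumptions), one can penalize differences between locally normalized finite-difference gradients of the predicted and ground-truth depth maps, which makes the loss insensitive to a global scaling of the prediction:

```python
import numpy as np

def rel_grad(d, eps=1e-6):
    """Normalized finite differences of a 2-D depth map.

    Dividing each difference by the local magnitude turns absolute
    depth changes into *relative* changes, so a globally rescaled
    map yields (almost) the same gradients.
    """
    gx = (d[:, 1:] - d[:, :-1]) / (np.abs(d[:, 1:]) + np.abs(d[:, :-1]) + eps)
    gy = (d[1:, :] - d[:-1, :]) / (np.abs(d[1:, :]) + np.abs(d[:-1, :]) + eps)
    return gx, gy

def relative_gradient_loss(pred, gt):
    """Mean absolute difference of the relative gradients of two depth maps."""
    pgx, pgy = rel_grad(pred)
    ggx, ggy = rel_grad(gt)
    return np.mean(np.abs(pgx - ggx)) + np.mean(np.abs(pgy - ggy))
```

Because only relative differences are compared, multiplying the prediction by a constant leaves the loss essentially unchanged; the loss instead focuses on the local structure of the depth map.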

Other associated files: depthmotionnet.pdf [2.7 MB], depthmotionnet_supplement.pdf [3.9 MB]

BibTeX reference

@techreport{UZUMIDB16,
  author       = "B. Ummenhofer and H. Zhou and J. Uhrig and N. Mayer and E. Ilg and A. Dosovitskiy and T. Brox",
  title        = "DeMoN: Depth and Motion Network for Learning Monocular Stereo",
  institution  = "arXiv:1612.02401",
  year         = "2016",
  note         = "http://lmb.informatik.uni-freiburg.de/people/ummenhof/depthmotionnet/",
  url          = "http://lmb.informatik.uni-freiburg.de/Publications/2016/UZUMIDB16"
}
