DeepTAM: Deep Tracking and Mapping with Convolutional Neural Networks

International Journal of Computer Vision, 128: 756-769, 2020
Abstract: We present a system for dense keyframe-based camera tracking and depth map estimation that is entirely learned. For tracking, we estimate small pose increments between the current camera image and a synthetic viewpoint. This formulation significantly simplifies the learning problem and alleviates the dataset bias for camera motions. Further, we show that generating a large number of pose hypotheses leads to more accurate predictions. For mapping, we accumulate information in a cost volume centered at the current depth estimate. The mapping network then combines the cost volume and the keyframe image to update the depth prediction, thereby effectively making use of depth measurements and image-based priors. Our approach yields state-of-the-art results with few images and is robust with respect to noisy camera poses. We demonstrate that the performance of our 6 DOF tracking competes with RGB-D tracking algorithms. We compare favorably against strong classic and deep-learning-powered dense depth algorithms.
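To make the mapping idea concrete, here is a minimal NumPy sketch of a plain cost-volume depth update, not the paper's network: depth hypotheses are sampled in a band centered at the current depth estimate, per-pixel photoconsistency costs are accumulated across hypotheses, and a depth map is read off by winner-take-all. All function names, the sampling band, and the toy data are illustrative assumptions; DeepTAM itself feeds the cost volume and keyframe image into a learned network instead of taking the argmin.

```python
import numpy as np

rng = np.random.default_rng(0)

def depth_hypotheses(d_center, n=8, band=0.5):
    # Illustrative sampling scheme: n depths in a band around the
    # current estimate, mirroring a cost volume "centered at the
    # current depth estimate".
    return np.linspace(d_center * (1 - band), d_center * (1 + band), n)

def build_cost_volume(keyframe, warped):
    # warped: (n, H, W) stack of the current image warped into the
    # keyframe at each depth hypothesis (warping itself is omitted here).
    # Per-pixel photoconsistency cost: absolute intensity difference.
    return np.abs(warped - keyframe[None, ...])

def winner_take_all(cost_volume, hypotheses):
    # Per pixel, pick the hypothesis with the lowest cost.
    best = cost_volume.argmin(axis=0)   # (H, W) index map
    return hypotheses[best]             # (H, W) depth map

# Toy demo: pretend the warp at hypothesis index 3 reproduces the
# keyframe exactly, so winner-take-all should recover that depth.
keyframe = rng.random((4, 4))
hyps = depth_hypotheses(d_center=2.0, n=8)
warped = rng.random((8, 4, 4))
warped[3] = keyframe
cv = build_cost_volume(keyframe, warped)
depth = winner_take_all(cv, hyps)
```

In the toy example every pixel of `depth` equals `hyps[3]`, the hypothesis whose warp matched the keyframe; the paper's contribution is replacing this hand-crafted argmin with a network that also exploits image-based priors.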

BibTex reference

@article{ZUB20,
  author       = "H. Zhou and B. Ummenhofer and T. Brox",
  title        = "DeepTAM: Deep Tracking and Mapping with Convolutional Neural Networks",
  journal      = "International Journal of Computer Vision",
  volume       = "128",
  pages        = "756--769",
  year         = "2020",
  url          = "http://lmb.informatik.uni-freiburg.de/Publications/2020/ZUB20"
}
