DeMoN: Depth and Motion Network
for Learning Monocular Stereo

Abstract

In this paper we formulate structure from motion as a learning problem. We train a convolutional network end-to-end to compute depth and camera motion from successive, unconstrained image pairs. The architecture is composed of multiple stacked encoder-decoder networks, the core part being an iterative network that is able to improve its own predictions. The network estimates not only depth and motion but also surface normals, optical flow between the images, and a confidence of the matching. A crucial component of the approach is a training loss based on spatial relative differences. Compared to traditional two-frame structure from motion methods, the results are more accurate and more robust. In contrast to the popular depth-from-single-image networks, DeMoN learns the concept of matching and thus generalizes better to structures not seen during training.
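The loss based on spatial relative differences mentioned above can be sketched as a scale-invariant gradient penalty: finite differences of the depth map are normalized by the magnitudes of the two samples, so the loss responds to relative depth discontinuities rather than absolute values. The sketch below is illustrative, not the paper's exact formulation; the function name, the set of pixel spacings, and the epsilon are assumptions.

```python
import numpy as np

def scale_invariant_gradient_loss(pred, gt, spacings=(1, 2, 4), eps=1e-6):
    """Illustrative sketch of a loss on spatial relative differences.

    Compares normalized finite differences of predicted and ground-truth
    depth maps at several pixel spacings. Normalizing each difference by
    the magnitudes of its two samples makes the penalty (approximately)
    invariant to a global rescaling of the depth map.
    """
    def normalized_diffs(d, h):
        # Normalized differences along columns (gx) and rows (gy), spacing h.
        gx = (d[:, h:] - d[:, :-h]) / (np.abs(d[:, h:]) + np.abs(d[:, :-h]) + eps)
        gy = (d[h:, :] - d[:-h, :]) / (np.abs(d[h:, :]) + np.abs(d[:-h, :]) + eps)
        return gx, gy

    loss = 0.0
    for h in spacings:
        px, py = normalized_diffs(pred, h)
        tx, ty = normalized_diffs(gt, h)
        loss += np.abs(px - tx).mean() + np.abs(py - ty).mean()
    return loss
```

Because the differences are normalized, a prediction that is a constant multiple of the ground truth incurs (almost) no penalty, which matches the motivation of penalizing relative rather than absolute depth errors.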


Example: input image pair and reconstructed scene.



References

B. Ummenhofer et al., “DeMoN: Depth and Motion Network for Learning Monocular Stereo”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.