Conditional Visual Servoing for Multi-Step Tasks
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022
Abstract: Visual servoing has been used effectively to move a robot to a specific target location or to track a recorded demonstration. It requires no manual programming, but it is typically limited to settings where one demonstration maps to one environment state. We propose a modular approach that extends visual servoing to scenarios with multiple demonstration sequences. We call this conditional servoing, as the next demonstration is chosen conditioned on the robot's observation. This is an appealing strategy for multi-step problems, since individual demonstrations can be combined flexibly into a control policy. We propose different selection functions and compare them on a shape-sorting task in simulation. As the reprojection error yields the best overall results, we implement this selection function on a real robot and show the efficacy of the proposed conditional servoing. Videos of our experiments are available on our project page.
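The selection step described in the abstract can be illustrated with a minimal sketch: given the start frames of several demonstrations, pick the one whose feature points best match the current observation under a reprojection-error criterion. The function names (`reprojection_error`, `select_demonstration`) and the use of plain 2D point arrays are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def reprojection_error(observed, reference):
    # Mean Euclidean distance between matched 2D feature points
    # (a simplified stand-in for the paper's reprojection error).
    return float(np.mean(np.linalg.norm(observed - reference, axis=1)))

def select_demonstration(observation, demo_start_frames):
    # Choose the demonstration whose start frame best matches the
    # current observation, i.e. the one with minimal reprojection error.
    errors = [reprojection_error(observation, ref) for ref in demo_start_frames]
    return int(np.argmin(errors))

# Hypothetical usage: two candidate demonstrations, one close to the
# current observation and one far from it.
observation = np.array([[0.0, 0.0], [1.0, 1.0]])
demos = [
    np.array([[5.0, 5.0], [6.0, 6.0]]),   # far from observation
    np.array([[0.1, 0.0], [1.0, 1.1]]),   # close to observation
]
best = select_demonstration(observation, demos)
```

After a demonstration is selected, the servoing controller would track it until completion, at which point the selection function is evaluated again on the new observation.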
Paper
Images and movies
BibTeX reference
@InProceedings{AB22,
  author    = "S. Izquierdo and M. Argus and T. Brox",
  title     = "Conditional Visual Servoing for Multi-Step Tasks",
  booktitle = "IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)",
  year      = "2022",
  url       = "http://lmb.informatik.uni-freiburg.de/Publications/2022/AB22"
}