Understanding and Robustifying Differentiable Architecture Search
International Conference on Learning Representations (ICLR), 2020
Abstract: Differentiable Architecture Search (DARTS) has attracted a lot of attention due to its simplicity and small search costs achieved by a continuous relaxation and an approximation of the resulting bi-level optimization problem. However, DARTS does not work robustly for new problems: we identify a wide range of search spaces for which DARTS yields degenerate architectures with very poor test performance. We study this failure mode and show that, while DARTS successfully minimizes validation loss, the found solutions generalize poorly when they coincide with high validation loss curvature in the architecture space. We show that by adding one of various types of regularization we can robustify DARTS to find solutions with less curvature and better generalization properties. Based on these observations, we propose several simple variations of DARTS that perform substantially more robustly in practice. Our observations are robust across five search spaces on three image classification tasks and also hold for the very different domains of disparity estimation (a dense regression task) and language modelling.
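For readers unfamiliar with the formulation the abstract refers to, here is a minimal sketch, in standard DARTS notation (not taken verbatim from the paper), of the bi-level optimization problem and of the curvature measure mentioned above; the symbols alpha (architecture parameters), w (network weights) and lambda_max are the usual ones from the DARTS literature.

% Bi-level optimization problem solved (approximately) by DARTS:
% architecture parameters \alpha are optimized on validation data,
% network weights w on training data.
\begin{align}
  \min_{\alpha}\; & \mathcal{L}_{\mathrm{valid}}\bigl(w^{*}(\alpha), \alpha\bigr) \\
  \text{s.t.}\;   & w^{*}(\alpha) \in \operatorname*{arg\,min}_{w}\; \mathcal{L}_{\mathrm{train}}(w, \alpha)
\end{align}

% The "curvature in the architecture space" discussed above can be summarized by
% the dominant eigenvalue of the Hessian of the validation loss w.r.t. \alpha:
\begin{equation}
  \lambda_{\max}^{\alpha} \;=\; \lambda_{\max}\!\Bigl(\nabla^{2}_{\alpha}\,
  \mathcal{L}_{\mathrm{valid}}\bigl(w^{*}(\alpha), \alpha\bigr)\Bigr)
\end{equation}
% Large values of this quantity indicate sharp minima in architecture space and
% correlate with poor test performance of the resulting architectures.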
BibTeX reference
@InProceedings{SMB20,
  author    = "A. Zela and T. Elsken and T. Saikia and Y. Marrakchi and T. Brox and F. Hutter",
  title     = "Understanding and robustifying differentiable architecture search",
  booktitle = "International Conference on Learning Representations (ICLR)",
  month     = " ",
  year      = "2020",
  url       = "http://lmb.informatik.uni-freiburg.de/Publications/2020/SMB20"
}