Towards improving robustness of compressed CNNs

J. Hoffmann, S. Agnihotri, Tonmoy Saikia, Thomas Brox
ICML Workshop on Uncertainty and Robustness in Deep Learning (UDL), 2021
Abstract: High-capacity CNN models trained on large datasets with strong data augmentation are known to improve robustness to distribution shifts. However, in resource-constrained scenarios, such as embedded devices, it is not always feasible to deploy such large CNNs. Model compression techniques, such as distillation and pruning, help reduce model size; however, their robustness trade-offs are not well understood. In this work, we evaluate several distillation and pruning techniques to better understand their influence on out-of-distribution performance. We find that knowledge distillation and pruning, combined with data augmentation, help transfer much of the robustness to smaller models.
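
For readers unfamiliar with the techniques named in the abstract, the sketch below illustrates the standard soft-target distillation loss (Hinton et al., 2015) and L1 magnitude pruning via torch.nn.utils.prune. This is a minimal PyTorch illustration of the general techniques, not the paper's training setup; the temperature T, weighting alpha, and pruning amount are illustrative defaults, not values taken from the paper.

import torch
import torch.nn.functional as F
from torch.nn.utils import prune

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft targets: KL divergence between temperature-softened teacher and
    # student distributions; the T*T factor keeps gradient magnitudes
    # comparable across temperatures. T and alpha are illustrative, not
    # values from the paper.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

def magnitude_prune(model, amount=0.5):
    # L1 (magnitude) unstructured pruning on all conv/linear weights, one
    # representative technique among the several the paper evaluates.
    for module in model.modules():
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=amount)
    return model

In this formulation, the student is trained on augmented inputs against both the teacher's softened predictions and the ground-truth labels, which is how data augmentation and distillation combine in the setting the abstract describes.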
Paper


BibTeX reference

@InProceedings{SB21,
  author       = "J. Hoffmann and S. Agnihotri and T. Saikia and T. Brox",
  title        = "Towards improving robustness of compressed CNNs",
  booktitle    = "ICML Workshop on Uncertainty and Robustness in Deep Learning (UDL)",
  year         = "2021",
  url          = "http://lmb.informatik.uni-freiburg.de/Publications/2021/SB21"
}
