On the (Non-)Robustness of Two-Layer Neural Networks in Different Learning Regimes

arXiv

Abstract

Neural networks are known to be highly sensitive to adversarial examples. This sensitivity may arise from different factors, such as random initialization or spurious correlations in the learning problem. To better understand these factors, we provide a precise study of robustness and generalization in different scenarios, from initialization to the end of training in different regimes, as well as intermediate scenarios where initialization still plays a role due to “lazy” training. We consider overparameterized networks in high dimensions with quadratic targets and infinite samples. Our analysis allows us to identify new trade-offs between generalization and robustness, whereby robustness can only get worse when generalization improves, and vice versa. We also show how linearized lazy training regimes can worsen robustness due to improperly scaled random initialization. Our theoretical results are illustrated with numerical experiments.
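To make the setting concrete, below is a minimal, hypothetical sketch (not the paper's exact protocol) of a wide two-layer ReLU network trained on a quadratic target in high dimension, once in a standard regime and once in a "lazy" regime obtained by rescaling the output by a large factor `alpha` and subtracting the function at initialization. A large finite sample stands in for the infinite-sample (population) setting, and robustness is probed with a single FGSM-style gradient step; the target, the scaling, the dimensions, and the attack budget `eps` are all illustrative assumptions.

```python
import torch

torch.manual_seed(0)
d, width, n_train, n_test = 50, 1000, 2048, 1024

# Assumed quadratic target y = (w* . x)^2 / d, a stand-in for the paper's quadratic targets.
w_star = torch.randn(d)
def target(x):
    return (x @ w_star) ** 2 / d

def make_data(n):
    x = torch.randn(n, d)  # large finite sample as a proxy for the population loss
    return x, target(x)

class TwoLayer(torch.nn.Module):
    def __init__(self, d, width):
        super().__init__()
        self.hidden = torch.nn.Linear(d, width)
        self.out = torch.nn.Linear(width, 1)
    def forward(self, x):
        return self.out(torch.relu(self.hidden(x))).squeeze(-1)

def train(alpha, steps=1000, lr=1e-2):
    """alpha=1: standard training; large alpha: lazy regime, where the model
    stays close to its linearization around the random initialization."""
    net = TwoLayer(d, width)
    net0 = TwoLayer(d, width)
    net0.load_state_dict(net.state_dict())  # frozen copy of the initialization
    for p in net0.parameters():
        p.requires_grad_(False)
    model = lambda x: alpha * (net(x) - net0(x))  # starts at zero by construction
    opt = torch.optim.SGD(net.parameters(), lr=lr / alpha ** 2)
    x_tr, y_tr = make_data(n_train)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((model(x_tr) - y_tr) ** 2)
        loss.backward()
        opt.step()
    return model

def evaluate(model, eps=0.5):
    """Clean test loss and loss after one FGSM-style perturbation of the inputs."""
    x, y = make_data(n_test)
    x.requires_grad_(True)
    torch.mean((model(x) - y) ** 2).backward()
    x_adv = (x + eps * x.grad.sign()).detach()  # move inputs to increase the loss
    with torch.no_grad():
        clean = torch.mean((model(x) - y) ** 2).item()
        robust = torch.mean((model(x_adv) - y) ** 2).item()
    return clean, robust

for alpha in (1.0, 100.0):
    clean, robust = evaluate(train(alpha))
    print(f"alpha={alpha:>6}: clean loss {clean:.3f}, adversarial loss {robust:.3f}")
```

The gap between the clean and adversarial losses gives a crude robustness proxy for each regime; the specific attack and metric are simplifications of the robustness notions analyzed in the paper.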
