Mixup is a generic and straightforward data augmentation principle. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples.
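The mixing rule described above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the repository's implementation: the function name `mixup_batch` and the `alpha` parameter of the Beta distribution are chosen here for exposition.

```python
import numpy as np

def mixup_batch(x, y, alpha=1.0, rng=None):
    """Mix a batch of inputs and one-hot labels with a shuffled copy of itself.

    Illustrative sketch of the mixup rule:
        x_mix = lam * x_i + (1 - lam) * x_j
        y_mix = lam * y_i + (1 - lam) * y_j
    with lam drawn from Beta(alpha, alpha).
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)       # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))     # random pairing of examples within the batch
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix, lam
```

Because the labels are mixed with the same coefficient as the inputs, each mixed target remains a valid probability vector when `y` is one-hot.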

This repository contains the implementation used for the results in our paper *mixup: Beyond Empirical Risk Minimization*.