Minimal Modifications of Deep Neural Networks using Verification

International Conference on Logic for Programming, Artificial Intelligence and Reasoning (LPAR)


Deep neural networks (DNNs) are revolutionizing the way complex systems are designed, developed, and maintained. As part of the life cycle of DNN-based systems, there is often a need to modify a DNN in subtle ways that affect certain aspects of its behavior while leaving others unchanged (e.g., when a bug is discovered and must be fixed without altering other functionality). Unfortunately, retraining a DNN is often difficult and expensive, and may produce a new DNN that is quite different from the original. We leverage recent advances in DNN verification and propose a technique for modifying a DNN according to certain requirements in a way that is provably minimal, requires no retraining, and is thus less likely to affect other aspects of the DNN’s behavior. Using a proof-of-concept implementation, we demonstrate the usefulness and potential of our approach in addressing two real-world needs: (i) measuring the resilience of DNN watermarking schemes; and (ii) bug repair in already-trained DNNs.
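
One way to make the minimal-modification problem concrete is as a constrained optimization query over the network's weights. The following formulation is an illustrative sketch in our own notation (the symbols θ, φ, and the choice of norm are assumptions for exposition, not the paper's formal definitions): given a network N with current weights θ and a required property φ (e.g., a corrected classification for a buggy input), find new weights that satisfy φ while staying as close as possible to θ:

    \theta' \in \operatorname{argmin}_{\hat{\theta}} \; \lVert \hat{\theta} - \theta \rVert
    \quad \text{subject to} \quad N_{\hat{\theta}} \models \varphi

Under this reading, a DNN verifier can decide, for a fixed modification budget δ, whether some θ' with ‖θ' − θ‖ ≤ δ satisfies φ; searching over δ (e.g., by binary search) then yields a modification that is minimal up to the search precision. This is a sketch of how verification can support minimality guarantees, rather than the paper's exact algorithm.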