Towards Verification-Aware Knowledge Distillation for Neural-Network Controlled Systems: Invited Paper

Files
ICCAD19_v2.pdf (13.78 MB)
Accepted manuscript
Date
2019-11
Authors
Fan, Jiameng
Huang, Chao
Li, Wenchao
Chen, Xin
Zhu, Qi
Version
Accepted manuscript
Citation
Jiameng Fan, Chao Huang, Wenchao Li, Xin Chen, Qi Zhu. 2019. "Towards Verification-Aware Knowledge Distillation for Neural-Network Controlled Systems: Invited Paper." 2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2019-11-04 - 2019-11-07. https://doi.org/10.1109/iccad45719.2019.8942059
Abstract
Neural networks are widely used in many applications ranging from classification to control. While these networks are composed of simple arithmetic operations, they are challenging to formally verify for properties such as reachability due to the presence of nonlinear activation functions. In this paper, we make the observation that the Lipschitz continuity of a neural network can not only play a major role in the construction of reachable sets for neural-network controlled systems, but can also be systematically controlled during training of the network. We build on this observation to develop a novel verification-aware knowledge distillation framework that transfers the knowledge of a trained network to a new and easier-to-verify network. Experimental results show that our method can substantially improve reachability analysis of neural-network controlled systems for several state-of-the-art tools.
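The core idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the specific Lipschitz bound (product of layer spectral norms for a ReLU feedforward network, a standard upper bound since ReLU is 1-Lipschitz), and the penalty weight `lam` are all illustrative assumptions:

```python
import numpy as np

def lipschitz_upper_bound(weights):
    """Product of spectral norms of the layer weight matrices.

    ReLU is 1-Lipschitz, so for x -> W_n relu(... relu(W_1 x)) this
    product upper-bounds the network's Lipschitz constant.
    """
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, ord=2)  # largest singular value
    return bound

def distillation_loss(student_out, teacher_out, student_weights, lam=0.1):
    """Distillation objective sketch: match the teacher's outputs while
    penalizing the student's Lipschitz bound, so the distilled network
    is friendlier to reachability analysis."""
    mse = np.mean((np.asarray(student_out) - np.asarray(teacher_out)) ** 2)
    return mse + lam * lipschitz_upper_bound(student_weights)
```

A training loop would minimize `distillation_loss` over the student's weights; the hyperparameter `lam` trades off fidelity to the teacher against the tightness of the verification-relevant Lipschitz bound.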