Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately,
neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is
possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural
networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an
arbitrary neural network and increase its robustness, reducing the success rate of current attacks in finding
adversarial examples. In this project, we demonstrate that defensive distillation does not significantly increase
the robustness of neural networks by introducing three new attack algorithms that succeed on both
distilled and undistilled neural networks with a 100% success rate.
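To make the notion of an adversarial example concrete, the sketch below shows one simple targeted gradient-based attack. It is an illustration of the general idea only, not one of the three attack algorithms introduced in this project; the names `model`, `x`, and `target` are assumed placeholders for a PyTorch classifier, an input batch scaled to [0, 1], and a batch of target labels t.

```python
# A minimal sketch of a simple targeted gradient-based attack, for
# illustration only. `model`, `x`, and `target` are assumed placeholders:
# a PyTorch classifier, an input batch scaled to [0, 1], and target labels.
# This is NOT one of the three attack algorithms introduced in this project.
import torch
import torch.nn.functional as F

def targeted_attack(model, x, target, eps=0.03, steps=40, lr=0.005):
    """Search for x' close to x (within an L-infinity ball of radius eps)
    that the model classifies as `target`."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        # Minimizing cross-entropy against the *target* class pushes the
        # prediction toward t rather than merely away from the true label.
        loss = F.cross_entropy(logits, target)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - lr * grad.sign()                        # step toward class t
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # stay close to x
            x_adv = torch.clamp(x_adv, 0.0, 1.0)                    # stay a valid input
    return x_adv.detach()
```

Stronger attacks, including those studied here, instead pose the search as an explicit optimization that minimizes a distance metric between x and x' subject to the classifier assigning label t.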