Adversarial examples --- images optimized to mislead deep networks while remaining virtually indistinguishable from normal images to the human eye --- have raised questions about the robustness and security of deep neural networks. In this talk we discuss adversarial attacks formally and informally, and summarize the ongoing debate: are adversarial images a big catastrophic failure? an amusing curiosity? something in between? We explain how different types of attacks are generated for image classifiers and discuss why making models resistant to those attacks has proved so difficult.
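To make the attack idea concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one classic way such perturbations are generated: the input is nudged by a small step `eps` in the direction of the sign of the loss gradient. To keep the sketch self-contained, it uses a hypothetical toy linear classifier with logistic loss (so the input gradient has a closed form) rather than a real deep network; the weights and input below are illustrative only.

```python
import numpy as np

def fgsm_attack(x, w, b, y, eps):
    """Perturb input x by eps in the direction that increases the loss.

    Toy setup: a linear classifier with logistic loss, so the gradient
    of the loss with respect to the input x is available in closed form.
    """
    z = w @ x + b                      # logit of the classifier
    p = 1.0 / (1.0 + np.exp(-z))      # predicted probability of class 1
    grad_x = (p - y) * w              # d(logistic loss)/dx, closed form
    return x + eps * np.sign(grad_x)  # FGSM step: move along the gradient's sign

# Hypothetical weights and a "clean" input with true label y = 1
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.3, 0.1, -0.2])
y = 1.0

x_adv = fgsm_attack(x, w, b, y, eps=0.1)
# Each component of x moves by at most eps, yet the logit for the
# true class drops, pushing the classifier toward a wrong answer.
```

The same recipe applies to deep networks, with the closed-form gradient replaced by backpropagation through the model; the key property is that the perturbation is bounded (at most `eps` per pixel) while the loss increase can be large.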
Eduardo Valle has been a professor at the School of Electrical and Computing Engineering (FEEC) of the State University of Campinas (UNICAMP) since 2010. His research interests include Multimedia Information Retrieval, Content-Based Information Retrieval, Large-Scale Machine Learning, and Computer Vision. He is particularly interested in applications of Machine Learning to Education and Health.