We heard that within the scope of cyber security, AI can be viewed in two different ways: AI in the hands of cyber attackers, and AI in the hands of cyber defenders. I will present a third view: hackers fooling AI systems, a domain known as adversarial AI. Due to the increase in the use of machine learning algorithms in cyber security solutions, hackers realize that they too should adapt to AI. Today hackers are using machine learning algorithms to find loopholes in other machine learning-based systems. By doing so, they have started an AI arms race. Fooling an AI system is not very hard. After all, we are not talking about real human intelligence, but merely computational intelligence. In particular, machine learning algorithms rely on past data and assume that future data share its characteristics. Hackers abuse this assumption by manipulating the training data. For example, spam detection systems are trained on past emails that were manually labeled as either spam or not. Machine learning can generalize from these examples and automatically classify future cases. Obviously, the more training data it has, the more accurate it becomes. Spam detection systems are trained to look for incriminating content by analyzing the email text. To avoid detection, spammers can obfuscate their content, for example by deliberately misspelling certain suspicious words. To the machine, the revised email looks like a non-spam email. Actually, adversarial AI can be applied to other domains as well, for example machine vision systems. If the following check is analyzed by a handwriting recognition system, the total amount will probably be extracted correctly. However, by adding some adversarial noise, for example to the digit 9, we can fool the system and make it think it is the digit 8. Let's look into a concrete cyber security example. Many AI-based cyber solutions analyze sequences of system calls in order to characterize an application.
The sequence consists of requests issued by an application to the operating system. A recurrent neural network can be used to analyze these sequences and decide whether the inspected application is malicious or not. This kind of cyber security solution can be compromised using a two-step attack. First, we train a surrogate model by querying the existing model. In particular, we synthesize fake cases, send them to the current model, and then use their classifications in order to train our own surrogate model. Once the surrogate model is obtained, we can take existing malicious code and gradually modify it. After each modification, we check whether the modified code can still be detected by the surrogate model. The process continues until the modified code is misclassified as benign. This kind of attack is considered a black-box attack because it requires no prior knowledge about the security model, other than the ability to query it as a black box, meaning creating an input and getting the classification as an output. Most black-box attacks rely on a concept known as transferability. This means that examples that were crafted against one model will probably be effective against other models as well. This transferability property holds even if the other models are trained using a different architecture or even different algorithms. But a multiple-classifier system can be used as a countermeasure to adversarial attacks. Instead of training a single model, we train multiple models and aggregate their predictions. It is much harder to bypass such a system, since more than one model must be evaded in order to make the entire ensemble ineffective. So far I have focused on the most common type of attack, known as an evasion attack, in which an attacker tries to evade a detection system. In a poisoning attack, the hacker tries to abuse the fact that many machine learning algorithms are retrained on data that have been collected during their operation.
In this scenario, the hacker injects carefully crafted examples in order to contaminate the training set. A hacker can even flood the system with extreme cases. As a result, the learning process will be overwhelmed and the entire system will be compromised. The AI community has focused until now on making AI models accurate. However, in the meantime, they have left these models vulnerable to adversarial attacks, representing a concrete threat to AI safety. So my message today is that practitioners should make AI models resilient to adversarial attacks before embedding this technology in the real, physical world. Thank you.
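The two-step black-box attack on a syscall-sequence detector and the ensemble countermeasure described in the talk, as an added worked example that is not part of the talk or of any real product. Everything in it is constructed: the three detectors and their weights, the two syscall traces, and the nearest-centroid surrogate (chosen so the whole thing runs in plain Python, standing in for the recurrent network the speaker mentions). Step 1 synthesizes probe traces, labels them by querying the black box, and fits the surrogate; step 2 pads the malicious trace with whichever syscall the surrogate considers most benign, checks each step locally, and queries the real detector only once the surrogate is fooled, to confirm the evasion transfers. The same padded trace is then run through a majority-vote ensemble whose other members look at syscall presence and at adjacent syscall pairs rather than frequencies.

```python
"""Black-box evasion of a syscall-sequence detector through a surrogate model,
and a multi-view ensemble the same adversarial trace fails to evade.
Detectors, weights and traces are constructed toys for this example."""
from collections import Counter

SYSCALLS = ["read", "write", "open", "close", "nanosleep",
            "mmap", "mprotect", "execve", "socket", "connect", "ptrace"]
# Constructed traces: a file-copy loop and an injection-style call chain.
BENIGN = ["open", "read", "write", "read", "write", "read", "close"]
MALWARE = ["open", "read", "mmap", "mprotect", "execve",
           "socket", "connect", "ptrace", "write", "close"]

def freq(seq):
    """Normalised syscall-frequency vector over the fixed vocabulary."""
    c = Counter(seq)
    return {s: c[s] / len(seq) for s in SYSCALLS}

class FrequencyDetector:
    """Defender's model #1, the black box under attack: linear over syscall
    frequencies. Weights are hidden; the attacker only sees query() labels."""
    _W = {"read": -0.5, "write": -0.5, "open": 0.0, "close": 0.0, "nanosleep": -0.5,
          "mmap": 0.5, "mprotect": 3.0, "execve": 2.5, "socket": 1.0,
          "connect": 2.0, "ptrace": 4.0}
    _THRESHOLD = 0.25

    def __init__(self):
        self.queries = 0  # attacker cost: black-box calls made

    def query(self, seq):
        self.queries += 1
        f = freq(seq)
        return sum(self._W[s] * f[s] for s in SYSCALLS) > self._THRESHOLD

class PresenceDetector:
    """Defender's model #2: which syscalls occur at all; padding cannot change it."""
    _W = {"ptrace": 2.0, "mprotect": 1.5, "execve": 1.0, "connect": 1.0, "socket": 0.5}

    def query(self, seq):
        present = set(seq)
        return sum(w for s, w in self._W.items() if s in present) > 2.0

class BigramDetector:
    """Defender's model #3: counts suspicious adjacent syscall pairs."""
    _SUSPICIOUS = {("mmap", "mprotect"), ("mprotect", "execve"),
                   ("socket", "connect"), ("connect", "ptrace")}

    def query(self, seq):
        return sum(p in self._SUSPICIOUS for p in zip(seq, seq[1:])) >= 2

class Ensemble:
    """Majority vote over detectors with different feature views."""
    def __init__(self, *detectors):
        self.detectors = detectors

    def query(self, seq):
        return 2 * sum(d.query(seq) for d in self.detectors) > len(self.detectors)

class SurrogateDetector:
    """Attacker's substitute: nearest-centroid classifier on frequency features,
    fitted only to labels obtained by querying the black box."""
    def fit(self, seqs, labels):
        groups = {True: [], False: []}
        for seq, y in zip(seqs, labels):
            groups[y].append(freq(seq))
        self.mu = {y: {s: sum(f[s] for f in fs) / len(fs) for s in SYSCALLS}
                   for y, fs in groups.items()}
        return self

    def score(self, seq):
        """Positive when closer to the malicious centroid than the benign one."""
        f = freq(seq)
        d = {y: sum((f[s] - mu[s]) ** 2 for s in SYSCALLS) for y, mu in self.mu.items()}
        return d[False] - d[True]

    def is_malicious(self, seq):
        return self.score(seq) > 0

def build_surrogate(target):
    """Step 1: synthesise probe traces, label them via the black box, fit."""
    base = ["open"] + ["read", "write"] * 4 + ["close"]  # benign-looking skeleton
    probes = [base] + [base + [s] * k for s in SYSCALLS for k in (1, 3, 6, 12)]
    return SurrogateDetector().fit(probes, [target.query(p) for p in probes])

def evade(target, surrogate, sample, max_steps=200):
    """Step 2: append, one call at a time, the syscall the surrogate rates most
    benign (a trailing no-op loop keeps the malware's behaviour). Each step is
    checked against the surrogate; the real black box is queried only once the
    surrogate is fooled, to confirm that the evasion transfers."""
    adv = list(sample)
    for _ in range(max_steps):
        adv = adv + [min(SYSCALLS, key=lambda s: surrogate.score(adv + [s]))]
        if not surrogate.is_malicious(adv) and not target.query(adv):
            return adv
    raise RuntimeError("no evasion within step budget")

def run_attack():
    target = FrequencyDetector()
    adv = evade(target, build_surrogate(target), MALWARE)
    ensemble = Ensemble(FrequencyDetector(), PresenceDetector(), BigramDetector())
    return {"adversarial": adv, "padding": len(adv) - len(MALWARE),
            "queries": target.queries,  # probes + confirmation queries
            "single_model_detects": FrequencyDetector().query(adv),
            "ensemble_detects": ensemble.query(adv)}
```

Running run_attack() on these constructed traces, the surrogate never sees the hidden weights, yet its ranking of syscalls is close enough to the target's that appending a short run of read or write calls (thirteen here) tips the frequency detector to benign, with a few dozen black-box queries in total. The transfer works because both models key on syscall frequencies, which padding dilutes, while the padded trace still contains every original call in the original order. That is also why the ensemble still flags it: the presence and bigram views are untouched by insertion, so two of three votes remain malicious. The caveat a defender should keep in mind is that this raises the attacker's cost rather than removing the problem; an attacker who knows the ensemble's views can search for edits that satisfy all of them at once, so the members should disagree in what they measure, not just in architecture.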