Hello everyone, welcome to the CAAD Village. Today the first presentation will be from the Tencent Blade Team. Yesterday they won the CAAD CTF, so let's welcome them.

Good morning everyone. Today I want to talk to you about transferable adversarial perturbations. Before the talk I want to introduce our team members: Bruce Hou, Dr. Wen Zhou, Mengyun Tang, and Yongjun Chen. We also took part in last year's non-targeted adversarial attack and defense NIPS competition. Today is only a brief introduction to our methods, so if you are interested you can find more details in our ECCV paper.

We all know that deep neural networks are easy to fool, but the black-box attack is still a hard job, so our method has two basic ideas. The first one is maximizing the distance in the intermediate feature maps, which can improve the attack's transferability. The second one is introducing a smooth regularization on our adversarial perturbations, which makes the attack more effective when the network is well defended. There may be some defense methods, such as denoisers and adversarially trained neural networks, and in that situation the regularization is very important.

First I will show our results. This is the last feature map of the neural network. We can see that the distance of our adversarial examples is far away from the original images. We also compare with two baseline methods, FGSM and BIM, and we can see that in the feature space they are very close to each other. The large distance makes our attack transfer to different models, because different models have different architectures and parameters, so the distance in the feature map is very important. How do we do it?
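The core idea above can be sketched in a few lines. This is a minimal NumPy illustration, not the team's actual code: it treats the arrays as a network's intermediate feature maps and shows that a feature-distance attack separates the adversarial example from the clean one far more than small FGSM/BIM-style noise does.

```python
import numpy as np

def feature_distance(f_orig, f_adv):
    """L2 distance between intermediate feature maps of the original
    and adversarial inputs; larger distance = better transferability,
    per the talk's first idea."""
    return float(np.linalg.norm(f_adv - f_orig))

# Toy arrays standing in for one layer's feature maps (H x W x C).
rng = np.random.default_rng(0)
f_clean = rng.standard_normal((8, 8, 16))
f_bim_like = f_clean + 0.01 * rng.standard_normal((8, 8, 16))  # stays close in feature space
f_ours = f_clean + 1.0 * rng.standard_normal((8, 8, 16))       # pushed far in feature space

assert feature_distance(f_clean, f_ours) > feature_distance(f_clean, f_bim_like)
```

In a real attack the features would come from a forward pass of a surrogate network, and the input noise (not the features directly) would be optimized to enlarge this distance.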
The first term maximizes the loss of our adversarial examples with respect to the original label, and the second term, which is what our method adds, maximizes the distance of the feature maps. The t function is our power normalization, which decreases the contribution of large values and makes the feature distance more stable. The last term is a smooth regularization, which penalizes discontinuous noise and improves the robustness.

We also compare how the different terms influence the recognition accuracy. Sorry, I'm a little bit nervous. We see that adding our smooth regularization helps a lot when the neural network is adversarially trained: the recognition accuracy is decreased a lot, the white one. Maximizing the feature-map distance decreases accuracy on all of those networks, no matter whether it is a white-box attack or a black-box attack. So finally we choose a trade-off, the last one, yes, this one, so that the attack has good performance on all the networks.

That is the optimization step. We run it several times; in our experiments k is equal to 5. We compare our method with some baseline methods and also state-of-the-art methods, and we found that if the neural network has too many layers, only maximizing the loss function is not enough, so maximizing the feature-map distance gets better results. For example, ResNet is a very deep network, so adding the feature-map distance gives us good performance. The robustness is influenced by the smooth regularization, and we see that holds for all the networks, no matter white-box or black-box.
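The loss terms just described can be sketched as follows. This is an illustrative NumPy version under stated assumptions, not the paper's exact formulation: the `alpha` exponent of the power normalization and the `lam` weight are hypothetical values, and the smoothness term is written as a total-variation-style penalty on the noise, which matches the talk's description of "punishing discontinuous noise".

```python
import numpy as np

def power_normalize(f, alpha=0.5):
    # Element-wise power transform: damps large activations so the
    # feature distance is more stable (alpha=0.5 is an illustrative
    # choice, not necessarily the paper's setting).
    return np.sign(f) * np.abs(f) ** alpha

def feature_loss(f_orig, f_adv, alpha=0.5):
    # Distance between power-normalized feature maps; the attack
    # tries to MAXIMIZE this term.
    return float(np.linalg.norm(power_normalize(f_adv, alpha)
                                - power_normalize(f_orig, alpha)))

def smoothness_penalty(noise):
    # Total-variation-style regularizer: sums absolute differences
    # between neighboring pixels, so discontinuous noise is punished
    # and smooth noise is cheap.
    dh = np.abs(np.diff(noise, axis=0)).sum()
    dw = np.abs(np.diff(noise, axis=1)).sum()
    return float(dh + dw)

def attack_objective(f_orig, f_adv, noise, lam=0.1):
    # Combined objective to maximize: large feature distance,
    # minus a penalty for non-smooth noise.
    return feature_loss(f_orig, f_adv) - lam * smoothness_penalty(noise)
```

In the full attack this objective would be maximized over the input noise with a small number of gradient steps (k = 5 in the talk's experiments), similar to how BIM iterates.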
We got the best results compared with all the other methods. Motivated by our attack method, we designed our defense method. In the attack we maximize the feature-map distance, but for the defense we want to minimize this loss. This is the original input, the clean image, and this is an image generated by adversarial methods. We put them together into the neural network, minimize the distance of the feature maps for all layers, and then add the classification loss, the softmax loss; together that is our final loss function.

The intuition is that even though the attacker's model architecture and parameters are different from ours, if we minimize the feature-map distance, we pull the adversarial example's features closer to the original one, so we can get the right label in the end.

So the idea of our method is easy to understand: for the attack we want to maximize the distance of the feature maps, and for the defense we minimize that distance. The smooth regularization is very important: as more people pay attention to the security of AI methods and of different neural networks, there may be denoising filters and other well-designed defenses, and smooth regularization can make your attack more effective against them.

That is my talk. Here are some references for all the methods I compared against, and thank you.
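The defense loss described above can be sketched the same way. This is a hedged NumPy illustration, not the team's implementation: the per-layer distance weight `lam` is a hypothetical parameter, and the feature lists stand in for the outputs of a forward pass on the clean and adversarial images.

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    # Standard classification (softmax) loss, computed stably.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return float(-log_probs[label])

def defense_loss(feats_clean, feats_adv, logits, label, lam=1.0):
    """Training loss sketched in the talk: pull the adversarial
    example's feature maps toward the clean ones at every layer,
    then add the usual softmax loss on the prediction."""
    dist = sum(float(np.linalg.norm(fa - fc))
               for fc, fa in zip(feats_clean, feats_adv))
    return lam * dist + softmax_cross_entropy(logits, label)
```

Training against this loss encourages the network to map adversarial inputs to the same internal representation as their clean counterparts, which is exactly the inverse of the attack objective.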