Hello everyone, I'm Minhao. I'm here to present our paper on a query-efficient hard-label adversarial attack, the Sign-OPT attack. This work was done jointly with Simranjit Singh, Patrick Chen, Pin-Yu Chen, Sijia Liu, and Cho-Jui Hsieh.

This paper is about generating adversarial examples, so let's first take a look at what adversarial examples are. They are examples that look identical to the original ones but cause a model, such as a deep neural network, to make a mistake. For example, an image of a bagel can be misclassified as a piano after adding a very small perturbation.

Let's formally define the problem first. Given a K-way classifier F and an original example x0, we want to find an example that is very close to x0 but for which the model outputs a different label. There are different settings for generating adversarial examples, depending on how much access the attacker has to the target model. In the white-box setting, the attacker has full access to the model and can backpropagate to obtain the gradient with respect to the input, then use gradient descent to solve the problem. However, a more practical setting is the hard-label black-box setting, where the model is unknown and the attacker can only query it for the predicted label.

Cheng et al. proposed the OPT attack in a previous paper. It is a boundary reformulation method that turns the hard-label problem into an optimization problem. Instead of directly searching for the perturbation, it constructs a mapping g(θ) between a search direction θ and the corresponding distance from the original example to the decision boundary along θ. By minimizing g(θ), we can find the shortest distance from the original point to the decision boundary, and thus the optimal adversarial example. However, when estimating the gradient of g(θ), Cheng et al.'s method requires a lot of queries in order to obtain accurate estimates of g(θ) and g(θ + βu), which are computed by binary search and fine-grained search.

Can we do better and make it more query-efficient? Inspired by the fast gradient sign method (FGSM) and the PGD attack, we find that the sign of the gradient is already powerful enough to generate such examples. However, it would be pointless to first estimate the gradient as in the OPT attack and then take its sign. Hence, we propose a single-query oracle that estimates the gradient sign with only one query. The intuition is shown in the figure. Given the last iteration's direction θ and distance g(θ), we draw an arc centered at the original example x0 with radius g(θ), so the new direction is θ + εu. If the endpoint lies outside the decision boundary, the new direction is better than θ, since it gives a smaller function value, and the sign of the directional derivative along u is negative. Similarly, if it lies inside the boundary, the sign is positive. Therefore, with only one query we can determine the gradient sign along a direction. Hence, we can replace the gradient estimation with a new estimator based on the signs of directional derivatives, leading to the Sign-OPT attack.

We have conducted experiments on different datasets and find that our method uses significantly fewer queries than other methods. The graphs show the median distortion as the attack progresses. The distortion achieved by Sign-OPT is much lower than that of other attacks for a given number of queries. We are about five to ten times faster than existing state-of-the-art hard-label attacks such as the OPT attack and the Boundary attack.
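To make the reformulation above concrete, it can be written out as follows; this is a reconstruction from the definitions given in the talk, using x0 for the original example and y0 for its true label:

```latex
% Hard-label attack as an optimization problem (untargeted case).
% F is the K-way classifier, x_0 the original example with label y_0.
\begin{align}
  g(\theta) &= \min_{\lambda > 0} \lambda
      \quad \text{s.t.} \quad
      F\!\left(x_0 + \lambda \tfrac{\theta}{\|\theta\|}\right) \neq y_0, \\
  \theta^* &= \arg\min_{\theta}\, g(\theta).
\end{align}
% g(theta) is the distance from x_0 to the decision boundary along
% direction theta; minimizing over theta yields the closest adversarial
% example  x^* = x_0 + g(theta^*) * theta^* / ||theta^*||.
```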
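The talk mentions that g(θ) itself is evaluated with coarse and binary search. Below is a minimal Python sketch of that evaluation, assuming a hypothetical hard-label oracle `predict(x)` that returns only the model's top-1 label; the step sizes and tolerances are illustrative, not the paper's tuned values:

```python
import numpy as np

def g(theta, x0, y0, predict, init_lam=0.05, tol=1e-3, lam_max=100.0):
    """Distance from x0 to the decision boundary along direction theta.

    Every probe of `predict` below costs exactly one model query, which
    is why this evaluation is expensive when done repeatedly.
    """
    direction = theta / np.linalg.norm(theta)

    # Coarse search: double lambda until the predicted label flips.
    hi = init_lam
    while predict(x0 + hi * direction) == y0:
        hi *= 2.0
        if hi > lam_max:           # no boundary found along this direction
            return np.inf
    lo = 0.0

    # Binary search: shrink the bracket around the decision boundary.
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if predict(x0 + mid * direction) == y0:
            lo = mid               # still classified as the original label
        else:
            hi = mid               # already adversarial
    return hi
```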
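And here is a sketch of one Sign-OPT iteration built on the single-query oracle described above, using the same hypothetical `predict`; the number of random directions and the learning rate are illustrative constants:

```python
import numpy as np

def sign_opt_step(theta, g_theta, x0, y0, predict,
                  num_dirs=200, eps=1e-3, lr=0.1):
    """One Sign-OPT update of the search direction theta.

    For each random direction u we spend exactly one query: we probe the
    point at distance g(theta) along the perturbed direction theta + eps*u.
    If that point is already adversarial, g has decreased along u, so the
    sign of the directional derivative is -1; otherwise it is +1.
    """
    grad_est = np.zeros_like(theta)
    for _ in range(num_dirs):
        u = np.random.randn(*theta.shape)
        new_dir = theta + eps * u
        new_dir /= np.linalg.norm(new_dir)
        probe = x0 + g_theta * new_dir          # single query per u
        sign = -1.0 if predict(probe) != y0 else 1.0
        grad_est += sign * u
    grad_est /= num_dirs

    return theta - lr * grad_est                # gradient-descent update
```

After each such update, g(θ) would be re-evaluated (e.g., with a binary search like the sketch above) before the next iteration.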
To be specific, on the ImageNet dataset, Sign-OPT achieves a median distortion of 2.9 in about 30,000 queries, while other attacks require more than 160,000 queries to reach a distortion of 4.0. We can also generate high-quality adversarial examples using only around 3,000 queries, as shown in the figure: with just around 3,000 queries, we can generate an adversarial image that is very close to the original one, yet the model thinks it is a cat instead. So in conclusion, we propose Sign-OPT, a state-of-the-art query-efficient hard-label black-box attack for generating adversarial examples.