So, the next talk is about the iterated random oracle, a universal approach for finding loss in security reduction, by Fuchun Guo, Willy Susilo, Yi Mu, Rongmao Chen, Jianchang Lai, and Guomin Yang; sorry for my pronunciation of the names. And Fuchun will present.

Thanks for the introduction. Good afternoon. Before the presentation, sorry, I would like to say hi to my son, who is watching in front of the camera from YouTube. Hello, John, I'm here. Thanks for waiting. This work is joint with Willy Susilo, Yi Mu, Rongmao Chen, Jianchang Lai, and Guomin Yang. I was at the University of Wollongong when this paper was submitted, but the first author has since returned to China.

About this work, I would like to give an overview first. In the random oracle model, we can prove a cryptosystem, such as a public-key encryption scheme, secure in the IND security model based on a computational hard problem, such as the CDH problem. The solution to the computational hard problem comes from one of the queries to the random oracle made by the adversary. When the decisional variant of this problem is also hard, the simulator does not know which query contains the correct solution, so it has to randomly pick one query as the solution. Finding loss refers to picking an incorrect solution from the queries made by the adversary. We introduce the iterated random oracle to address the finding loss, towards tight reduction.

There are two types of security reduction. In a security reduction, we use the adversary's attack to solve a hard problem. The first type is unforgeability security based on a computational hard problem; for example, in a digital signature scheme, we use the forged signature to solve a computational hard problem. The second type is indistinguishability (IND) security based on a decisional hard problem; for example, in public-key encryption, we use the adversary's guess on the challenge ciphertext to solve a decisional hard problem. We know that a computational hard problem is always at least as hard as its decisional variant, so a natural question is: can we have an IND security reduction based on a computational hard problem?

In this security model and reduction, the adversary's output is just 0 or 1, but the simulator's output is a solution to a computational hard problem. It seems impossible to carry out such a security reduction, because the adversary's guess, 0 or 1, cannot provide sufficient information for the simulator to find the correct solution within an exponential-size solution space. However, using the random oracle, we can have this kind of IND security reduction; it is possible. Suppose a hash function H is treated as a random oracle. In this kind of random oracle proof, when the adversary makes a query on a string x to the random oracle, we have the following two properties. The first is that H(x) is uniformly random and independent of x; we use this property for the security analysis. The second is that H(x) is controlled by the simulator; this is the most tricky part in the security reduction.

Consider the following example, where the ciphertext is composed of three components: g^x, g^y, and H(g^{xy}) XORed with the message. Suppose an adversary can distinguish whether the encrypted message is m_0 or m_1 in the random oracle model. We can construct a simulator to solve the CDH problem: given g^a and g^b, it aims to compute g^{ab}. The simulation is pretty easy. We just set the challenge ciphertext equal to (g^a, g^b, R), where R is a random string.
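To make this example concrete, here is a minimal Python sketch of the hashed-ElGamal-style scheme just described and of how the simulator embeds the CDH instance (g^a, g^b) into the challenge ciphertext. The toy group parameters and function names are my own illustration, not code from the paper.

```python
# Minimal sketch (toy parameters, illustrative names only) of the example scheme
# CT = (g^x, g^y, H(g^{xy}) XOR m) and of the simulator's challenge ciphertext.
import hashlib
import os
import secrets

p = 0xFFFFFFFFFFFFFFC5          # toy prime modulus (NOT secure parameters)
g = 5                            # toy generator

def H(element: int, out_len: int) -> bytes:
    """Random-oracle-style hash of a group element to out_len bytes."""
    return hashlib.sha256(str(element).encode()).digest()[:out_len]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(m: bytes) -> tuple:
    """Encrypt a short message (up to 32 bytes in this sketch)."""
    x = secrets.randbelow(p - 1) + 1
    y = secrets.randbelow(p - 1) + 1
    key = H(pow(g, x * y, p), len(m))          # H(g^{xy}) acts as a one-time pad
    return pow(g, x, p), pow(g, y, p), xor(key, m)

def simulate_challenge(g_a: int, g_b: int, msg_len: int) -> tuple:
    """The simulator, given a CDH instance (g^a, g^b), sets the challenge
    ciphertext to (g^a, g^b, R) for a fresh random string R.  Without a random
    oracle query on g^{ab}, R is distributed exactly like a real ciphertext."""
    return g_a, g_b, os.urandom(msg_len)
```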
Then we have the following observations. First, if there is no query on g^{ab} to the random oracle, there is no break of the challenge ciphertext, because according to the property of the random oracle, this is a one-time pad. So, according to the assumption that the adversary can break the scheme and distinguish the encrypted message, g^{ab} will appear in one of the hash queries made by the adversary, and therefore one of the hash queries is the solution to the CDH problem.

Now, the question is: suppose the adversary made the following queries to the random oracle, q_1, q_2, ..., q_q. Which query is the solution, equal to g^{ab}? We know that when the DDH problem is easy, the simulator can run a decision test on each query until it finds the correct solution, so the success probability of finding the correct solution from the query set is one. But if the DDH problem is hard, the simulator has to randomly pick one query as the solution, so the success probability of finding the correct solution is just 1/q. Unfortunately, the number of hash queries, though polynomial, can be as large as 2^60. Therefore, this kind of reduction must be a loose reduction.

So, how do we find the correct solution from the adversary's query set? We call this the finding problem, and the reduction has a finding loss if the probability of finding the correct solution is less than one. In this work, we focus on the case where the decisional variant of the computational hard problem is also hard.

OK, the IND security reduction, sorry, the IND security reduction can be summarized in this kind of framework. The simulator is first given an instance, which it will use to simulate a scheme for the adversary. Then the adversary makes a set of queries to the random oracle, and one of the hash queries is the challenge query for breaking the scheme. Finally, the simulator uses the query set to find the solution to the instance. This is the framework used in this kind of IND security reduction.

Now, let C(I, P) denote the solution to an instance I under a computational hard problem P; so in the notation C(I, P) we have a solution, an instance I, and the problem P. Before disclosing the simulation to the adversary, the adversary is given a scheme and makes a set of queries where one of the queries is the challenge query for breaking the scheme. But once we disclose this simulation to the adversary, it is equivalent to saying that the adversary is given an instance, and the adversary makes a set of queries that include a challenge query equal to the solution to the instance. It means we will use that query to solve the hard problem.

So the traditional approach, and this kind of finding loss, can be described as follows. The simulator is given an instance I under problem P, and it uses this instance I to simulate a scheme for the adversary. Then the adversary makes a set of queries where one of the queries is the challenge query, equal to C(I, P), which is the solution to the hard problem under instance I. In this case, the simulator can only solve the hard problem with probability 1/q, because the simulator does not know which query contains the correct solution to the problem P under instance I.
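Putting the loss of this traditional approach next to the bound claimed later for the iterated random oracle (notation mine; q is the number of random-oracle queries, n the iteration parameter introduced later):

```latex
% Finding probability: once the adversary's challenge query is somewhere among
% its q random-oracle queries, the traditional approach guesses it blindly,
\[
  \Pr[\text{find } C(I,P) \text{ among } q \text{ queries}] \;=\; \frac{1}{q},
\]
% which is a loose reduction when q can be as large as 2^{60}.  The iterated
% random oracle presented later raises this finding probability to
\[
  \Pr[\text{find } C(I,P)] \;\ge\; \frac{1}{n\, q^{1/n}},
\]
% where n is the number of iterations (levels) of the iterated random oracle.
```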
At Eurocrypt 2008, Cash, Kiltz, and Shoup proposed a new computational hard problem called the twin Diffie-Hellman problem. This new hard problem is as hard as the CDH problem, so it is hard as long as the CDH problem is hard. The advantage is that a scheme based on the twin Diffie-Hellman problem has no finding loss in its security reduction. The core of the CKS approach is the trapdoor test. Given an instance I_1, suppose there is a particularly constructed instance I_2 and a trapdoor test algorithm such that the trapdoor test, on two inputs Q_1 and Q_2, returns true if and only if Q_1 is the solution to instance I_1 and Q_2 is the solution to instance I_2, except with negligible probability.

Suppose there is such a particularly constructed instance and trapdoor test. Then the CKS approach to the finding loss can be described as follows. The simulator is given an instance I_1; instead of using instance I_1 alone to simulate the scheme, the simulator uses the two instances I_1 and I_2 to simulate a scheme for the adversary. The adversary makes several queries, and one of the queries is the challenge query; this challenge query has two components, composed of the solutions to both instances I_1 and I_2. In this case, we can prove that the simulator can solve the hard problem with success probability one; I mean, probability one of finding the solution from the query set, given a trapdoor test on the solutions to the given instance I_1 and the created instance I_2. In this approach the simulator sets I_1 = I, and I_2 is the instance it creates for the trapdoor test. There is no finding loss, because the simulator can run the trapdoor test on the query set, and only the challenge query can pass the trapdoor test. Therefore, there is no finding loss in this kind of reduction.

To summarize the CKS approach: it is very smart and very easy to understand. This approach, however, requires a trapdoor test; this is the condition. The proposed trapdoor test can be adapted to many computational hard problems, but the approach depends on having such a trapdoor test. It can be adapted to many hard problems, yet this is also its limitation, and it is our motivation: we want to propose a framework that can be applied independently of the hard problem.
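Since the trapdoor test is the core of the CKS approach, here is a small Python sketch of that test as I understand it from the Cash-Kiltz-Shoup paper, over a toy group; the parameters and function names are illustrative and not from this talk.

```python
# Sketch of the CKS trapdoor test (toy parameters, illustrative only).
# Setup: given X1 = g^x1, pick random r, s and set X2 = g^s * X1^(-r).
# Test:  for (Y, Z1, Z2), accept iff Z1^r * Z2 == Y^s.
# If Z1 = Y^x1 and Z2 = Y^x2 the test always accepts; in a prime-order group a
# pair not of this form passes only with negligible probability over (r, s).
import secrets

p = 0xFFFFFFFFFFFFFFC5   # toy prime modulus (NOT secure parameters)
g = 5                     # toy generator

def trapdoor_setup(X1: int):
    """Create the twin element X2 plus the trapdoor (r, s) for X1."""
    r = secrets.randbelow(p - 1)
    s = secrets.randbelow(p - 1)
    X2 = (pow(g, s, p) * pow(pow(X1, r, p), -1, p)) % p   # X2 = g^s / X1^r
    return X2, (r, s)

def trapdoor_test(trapdoor, Y: int, Z1: int, Z2: int) -> bool:
    """Accept iff (Z1, Z2) looks like (Y^x1, Y^x2)."""
    r, s = trapdoor
    return (pow(Z1, r, p) * Z2) % p == pow(Y, s, p)

# Tiny self-check: an honestly computed twin DH tuple passes the test.
x1 = secrets.randbelow(p - 1) + 1
X1 = pow(g, x1, p)
X2, td = trapdoor_setup(X1)
y = secrets.randbelow(p - 1) + 1
Y = pow(g, y, p)
assert trapdoor_test(td, Y, pow(X1, y, p), pow(X2, y, p))
```

The reason the simulator escapes the finding loss is visible here: it can run trapdoor_test on every query in the set, and only the challenge query passes.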
Our approach to the finding loss uses what we call the iterated random oracle. So, what is the iterated random oracle? Suppose the adversary needs to make a challenge query in order to use its output to break a scheme. In the traditional random oracle model, this challenge query is associated with one special input to the random oracle, just one special input. In the iterated random oracle model, the challenge query is instead associated with n special inputs to the random oracle, and they must be queried in an iterated way: each input to the random oracle must be composed of a new value plus the oracle's response to the previous input. This is the framework of the iterated random oracle.

We define the iterated query in this approach. Each iterated query has three components (x, y, i): y is the response to a previous query, or an empty string; x is a value arbitrarily chosen by the adversary; and i is the iteration time. So each query carries three components together with its iteration time.

Then we define the challenge query. We say that the challenge query is associated with n special inputs, where the i-th special input is the solution to a distinct instance under the same problem P, concatenated with its index i. The challenge query itself is defined as the last of these inputs to the random oracle, Q*_n, and we can use this kind of formula to describe or define Q*_n.

The main contribution of this work is the following. Suppose the simulator is given an instance I under the problem P. The simulator will use n instances under the same problem P to simulate a scheme for the adversary, and the adversary will make a set of iterated queries, one of which is the challenge query. Then we can prove the simulator can solve the hard problem with success probability at least 1/(n * q^(1/n)). Here is the comparison: even when q is as large as 2^60, the finding loss stays small, although it is not as good as the probability of 1 in the CKS approach. These are the main differences among the three approaches: ours can be applied to all hard problems, but it has a small finding loss. Finding efficiency refers to the time cost of picking out the challenge query from the query set.

The next part is to explain how we can achieve such a high probability. To prove it, we represent the queries using a tree. We use an edge to denote an input and its end node to denote the output. Because Q_2 uses the response of Q_1, and Q_3 uses the response of Q_2, we can represent these three queries this way: this is the edge for Q_1 and this is the response of Q_1, then Q_2 and the response of Q_2, then Q_3 and the response of Q_3. All iterated queries made by the adversary can be represented in this way, in a tree with n levels, and we have these properties: first, all queries with the same iteration time are edges in the same level i, and all queries with the same response are edges from the same node. For example, these three queries must be from the same node because they have the same response and the same iteration time.

Taking CDH as an example, suppose P is the CDH problem and instance I_i is (g^{a_i}, g^b), where g and g^b are common to all instances, and the solution is g^{a_i b}. We use a solid edge in level i to denote a query whose value is equal to g^{a_i b}, and a dashed edge in level i to denote a query whose value is not equal to g^{a_i b}. This is what a tree made by the adversary looks like. The tree is entirely decided by the adversary, but whatever the tree looks like, we have the following properties: all solid edges in the same level must be from different nodes, because edges from the same node must have different values, so at most one of them can equal the solution; and if the challenge query is in the query set, there is exactly one red, solid path from the root to a leaf, and this path corresponds to the challenge query.

Now, how do we prove it? Given the instance I, the simulator works as follows. The simulator chooses a random integer d in [1, n] and sets I_d = I. Then it generates all the other instances itself, so it knows their solutions. The solution to the given instance is embedded at position d, so this solution can appear only in edges at level d. We use the known solutions at levels d+1 to n, the levels below d, to check the queries. Then we are going to pick one query from the query set as our guess for the challenge query. Here are the definitions: a query at level d is a valid query if its value is equal to g^{a_d b}, the solution to the given instance; and a query at level d is a candidate query if there is a red, solid path from its end node down to a leaf.
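As a concrete illustration of the chaining just described, here is a minimal Python sketch of how the challenge query is formed level by level in the iterated random oracle; the exact encoding of the triple (x, y, i) is my own assumption, not the paper's formatting.

```python
# Minimal sketch of querying the iterated random oracle (encoding details are
# my own; the paper's exact formatting may differ).
import hashlib

def H(data: bytes) -> bytes:
    """Stand-in for the random oracle."""
    return hashlib.sha256(data).digest()

def iterated_query(x: bytes, prev_response: bytes, i: int) -> bytes:
    """One iterated query (x, y, i): new input x, previous response y, level i."""
    return H(x + prev_response + i.to_bytes(4, "big"))

def challenge_query_chain(solutions: list[bytes]) -> bytes:
    """The challenge query chains the n special inputs level by level:
    the i-th input is the solution to instance I_i (tagged with its index i)
    together with the response obtained at level i-1."""
    y = b""                                  # empty string before level 1
    for i, s in enumerate(solutions, start=1):
        y = iterated_query(s, y, i)
    return y                                 # response after the last input Q*_n
```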
Here is an example. Suppose d = 2. The simulator does not know which query is the valid query, but according to the definition, these two queries can be excluded, because there is no red, solid path from them down to a leaf, and these three queries are candidate queries. Compared with the traditional approach: in the traditional approach we have to choose a query from the whole query set, but in our approach we only need to choose one query from the candidate queries at level d.

In this work, we show that if the finding probability were smaller than this bound at every level i, the adversary would have to make more than q queries; so if it makes at most q queries, there must exist some level i* at which this lower bound on the probability of finding the correct solution holds. Here is an example: suppose d = 2 and q = 8, and we need to show that for some i* the finding probability is at least the claimed lower bound. Suppose this is the query set made by the adversary. Here it is easy to see that when d = 2, the probability for this level is 3/5, so this probability is larger than the bound. This is the second example; the only difference is one query. This is the query set made by the adversary, and here again the probability is larger than the bound.

To apply this approach, we just give a framework. To apply the theory, the scheme must be such that it can be simulated using the generated instances and fits this framework. In this work we show two applications. The first one is a generic construction for key encapsulation mechanisms: we can convert a one-way secure KEM into an IND-secure KEM with a small finding loss in the random oracle model, without expanding the ciphertext size. The price of this small finding loss is a longer private key, about n times longer, for example 10 times.

Conclusion: we introduced the finding loss in security reduction, which is a very special kind of reduction, and we proposed the iterated random oracle to reduce the finding loss. These are the main differences compared with previous approaches, and the two applications of this approach: one is encryption and one is key exchange. Thanks.

So you said your method works for all problems, but you need kind of this parallelizable nature, right? You need to be able to build a scheme...

Yes, I mean, for this kind of scheme, the scheme must satisfy, I mean, if you disclose the simulation, it has to satisfy this kind of framework, these two conditions; otherwise this approach does not work. But let's say this one is independent of the assumption.

A question about the slide we saw just before the conclusion. The conclusion, OK. Just for this one, we have a key for each, right?

Because actually in our construction we use n instances, right? Actually, each instance, you can see that each instance is used to simulate an independent scheme, and we try to combine them all together. That's why one instance is used to simulate one scheme, and there is a key for each, right. So, finally...