My talk is about robust secret sharing with almost optimal share size and security against rushing adversaries. This is joint work. Here is the outline of my talk. First, I will introduce some background on robust secret sharing. Then I will review our previous approach. After that, I will introduce our new approach and give some remarks on it. Finally, there are some open problems.

First, we are interested in the (t, n) secret sharing scheme. What is a (t, n) secret sharing scheme? We have a secret S, and we want to share it among n parties. There are two properties we are interested in. The first is privacy, which we call t-privacy: any t parties learn nothing about S. The second is reconstruction: any t + 1 parties can completely recover S. The famous Shamir secret sharing scheme realizes this (t, n) secret sharing.

Beyond plain (t, n) secret sharing, we want the scheme to be robust. Look at this figure. We have a node for the dealer D, a node for the reconstructor R, and n parties P1 to Pn. The dealer D first takes an input, the secret S, and generates n shares for the n parties; each party receives one share. The adversary can corrupt parties; say Pn is corrupted. What happens in the reconstruction phase? The reconstructor R asks each party to send its share, and Pn, because it is corrupted, sends a modified share. We want the reconstructor R to still be able to recover the secret S in the presence of some corrupted shares: as long as not too many shares are corrupted, it can still recover S. This is called a robust secret sharing scheme. The formal definition is the following: we take a secret S, generate n shares, and the adversary corrupts some of them, say t shares.
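The classical Shamir construction mentioned above can be sketched in a few lines. This is a toy illustration over a small prime field, not the scheme from the paper: a random degree-t polynomial with the secret as constant term gives t-privacy, and any t + 1 shares recover the secret by Lagrange interpolation.

```python
import random

P = 2**31 - 1  # a prime; the field size in a real scheme depends on the secret space

def share(secret, t, n):
    """Split `secret` into n shares with t-privacy: any t shares reveal nothing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    # Share i is the evaluation of the degree-t polynomial at point i.
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t + 1 shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Note that plain Shamir gives reconstruction only from honest shares; robustness against modified shares is exactly what the rest of the talk adds on top.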
The reconstructor should recover S from these n shares in the presence of t corrupted shares. In this setting, the number of corrupted shares can be at most half the number of parties, so the interesting case is the extremal one, n = 2t + 1, where a robust secret sharing scheme still exists if we allow a small error probability. Then we are interested in the overhead, the share size beyond the size of the secret, and we want this overhead to be as small as possible.

In this work, we consider the rushing adversary model. What is the power of a rushing adversary? What can it do? First, it can select parties to corrupt; once it corrupts a party, it sees the shares held by that party. Every adversary can do that, but the rushing adversary can do one more thing: it can see the transmissions between the honest parties and the reconstructor, and only after seeing those transmissions does it modify its shares, based on what it saw. This gives the rushing adversary more power.

There are some known results. The first is a robust secret sharing scheme against the non-rushing adversary with optimal overhead Õ(κ), where κ is the security parameter. There is another factor, log n, which is absorbed in the Õ notation; the tilde means there is a log n factor that we ignore because it is so small. A previous approach gives a robust secret sharing scheme achieving security against the rushing adversary, but with non-optimal overhead κ · n^ε, where ε can be any small constant; n^ε is not as small as log n. Then two independent works achieved the optimal overhead against the rushing adversary: a CRYPTO 2020 work and our work. Both achieve the optimal overhead, but they take completely different approaches.
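To make the rushing capability concrete, here is a minimal sketch of a single reconstruction round; this is my own illustration, not a construction from the paper. The point is only that the corrupted parties' messages may be a function of the honest messages sent in the same round.

```python
def run_round(honest_msgs, adversary_strategy):
    """One communication round with a rushing adversary: the corrupted
    parties choose their messages only AFTER seeing the honest ones,
    whereas a non-rushing adversary must commit independently."""
    corrupted_msgs = adversary_strategy(honest_msgs)  # the adversary rushes
    all_msgs = dict(honest_msgs)
    all_msgs.update(corrupted_msgs)
    return all_msgs
```

This is why multiple rounds of transmission show up later in the talk: they limit how much the adversary learns before it must commit.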
In their paper, the round complexity is smaller: they have two rounds, but they require communication between the parties. Our work has five rounds, but we do not need communication among the parties, only between the parties and the reconstructor. That is one difference between the two works.

Okay, let's first review our previous approach, because we will reuse some algorithms from it, so we need to introduce it in advance. In our previous sharing scheme, given a secret S, the dealer does the following. First, it shares S with a list-decodable version of the Shamir secret sharing scheme instead of the classical scheme; we use a folded Reed–Solomon code to achieve list decodability. Then we want to authenticate each share: for each share s_i, we use a MAC to authenticate it. However, we do not want each party to authenticate the shares of all other parties, because that would incur a very large overhead. Instead, each party authenticates only a small random subset of the parties; this keeps the share size small. But then we need a random verification graph to represent this authentication relation, which I will introduce on the next slide.

So what is the verification graph? The verification graph is a directed graph indicating the authentication relations. For example, in this graph we see that P1 authenticates P2 and P3, because P1 has directed edges to P2 and P3, and P2 authenticates P3 because there is a directed edge from P2 to P3. We classify the corrupted parties into two types with respect to this verification graph: a corrupted party is passive if it does not modify its share s_i, and active if it does modify s_i. We make this classification because it gives us the following properties.
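A toy rendering of this random authentication structure is below. HMAC stands in for the information-theoretic MACs of the actual scheme, and the out-degree d, the seed, and the key handling are simplified assumptions of mine, chosen only to show the shape of the verification graph.

```python
import hashlib
import hmac
import random

def build_verification_graph(n, d, seed=0):
    """Each party authenticates a random subset of d other parties (directed edges)."""
    rng = random.Random(seed)
    return {i: rng.sample([j for j in range(n) if j != i], d) for i in range(n)}

def tag(key, share):
    # HMAC stands in for the information-theoretic MAC of the real scheme.
    return hmac.new(key, share, hashlib.sha256).digest()

def distribute(shares, graph):
    """Dealer: for each edge i -> j, party i gets a key, and party j's share is tagged."""
    keys, tags = {}, {}
    for i, nbrs in graph.items():
        for j in nbrs:
            k = bytes(random.getrandbits(8) for _ in range(16))
            keys[(i, j)] = k
            tags[(i, j)] = tag(k, shares[j])
    return keys, tags

def verify(i, j, keys, tags, claimed_share):
    """Does party i accept party j's claimed share?"""
    return hmac.compare_digest(tags[(i, j)], tag(keys[(i, j)], claimed_share))
```

Because each party authenticates only d = O(polylog) others rather than all n - 1, the per-party key and tag material stays small, which is the point made in the talk.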
First, because a passive party does not modify its share s_i, it behaves like an honest party: we cannot tell whether a party is honest or passive. In particular, a passive party always passes verification from the honest parties. However, although passive parties do not modify their shares, they can still pass information, telling the adversary which shares they hold, so they are not completely honest. For active parties, we can tell that a party is active, because it always fails verification from the honest parties. Between corrupted parties, verification can either pass or fail; it depends on the strategy taken by the adversary, so we cannot tell.

So what is the reconstruction scheme in our previous approach? We have three rounds of transmission, because we consider the rushing adversary: it is so powerful that it can see the transmissions between the honest parties and the reconstructor. We want to limit the power of this rushing adversary, so we introduce three rounds of transmission. After these three rounds, the algorithm decides whether the number of passive parties is small or big. If it is small, we use a graph algorithm to find the right codeword. This graph algorithm is based on a random expander graph: if the number of passive parties is small, we start from an honest party, and by the expansion property of the random graph we quickly identify lots of honest parties. That is the idea behind the graph algorithm. If the number of passive parties is big, then most of the shares are not corrupted. In that case we use the list decoding algorithm, which works precisely because many shares are not corrupted. The list decoding algorithm outputs a list of candidates including the right codeword, and then we use a candidate elimination algorithm to find the right codeword in this list.
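The expansion idea behind the graph algorithm can be sketched as a simple search: starting from one honest party, follow only the verification edges whose checks pass. This is a heavily simplified illustration of mine; the real algorithm must also cope with lying passive parties, and its guarantee rests on the expander property, which the sketch does not model. The `accepts` callback is a hypothetical stand-in for the MAC verification outcome.

```python
def identify_honest(start, graph, accepts):
    """Grow a trusted set from one honest party by following verification
    edges that pass. Active parties always fail checks from honest parties,
    so (in this idealized setting) the set collects non-active parties only."""
    trusted = {start}
    frontier = [start]
    while frontier:
        i = frontier.pop()
        for j in graph[i]:
            if j not in trusted and accepts(i, j):
                trusted.add(j)
                frontier.append(j)
    return trusted
```

With a random expander graph, a single honest seed reaches essentially all honest parties in few steps, which is why the small-p case can be handled this way.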
That is the reconstruction scheme in our previous approach. Now let's introduce our new approach, starting with a comparison to the previous one. They share some common points. First, we again divide into the cases where p, the number of passive parties, is small and where it is big. Why do we divide the cases? Because we reuse the graph algorithm from our previous approach to handle the case where p is small. We also reuse the candidate elimination algorithm, for a different purpose. The candidate elimination algorithm works if two conditions hold: first, the right codeword is in the list of candidates; second, the number of passive parties is not too small. We also use multiple rounds of transmission to restrain the power of the rushing adversary.

There are also some deviations. The first deviation is quite interesting: usually we want the verification graph information to be secret, so each party stores only the neighbors it needs to verify. But there is one verification graph whose neighbor information we make public, so that no one can alter it. The second deviation is that we again divide the cases, but with a different threshold: in this work we use n / log n as the divide between small and big, while in our previous approach we used ε·n. Also, we replace the list decoding algorithm with a new algorithm to handle the case where p is big. Why don't we use the list decoding algorithm? Because of the new threshold: with p only around n / log n, the list decoding algorithm would output a list of exponential size, which we want to avoid. That is why we cannot use the list decoding algorithm any more and replace it with a new algorithm.
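The role of candidate elimination can be illustrated as a filter: keep only the candidates that agree with enough of the received shares. The list-of-symbols representation and the threshold here are placeholders of mine; in the paper the algorithm and its threshold are tied to the actual code and to the two conditions just mentioned (the right codeword is in the list, and p is not too small).

```python
def eliminate_candidates(candidates, received_shares, threshold):
    """Keep only candidate codewords that agree with at least `threshold`
    received shares; with enough honest agreements, only the correct
    codeword survives."""
    survivors = []
    for cand in candidates:
        agreement = sum(1 for c, s in zip(cand, received_shares) if c == s)
        if agreement >= threshold:
            survivors.append(cand)
    return survivors
```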
And we enumerate p, starting from n / log n. Why do we do this enumeration? Because our new algorithm must take as input the correct number p of passive parties; otherwise it will not output the right codeword. That is why we enumerate over p. But how can we make sure we find the right codeword once we have enumerated all values of p, given that some values will produce wrong candidates? Thanks to the candidate elimination algorithm.

So let's take an overview of our new algorithm. We do this enumeration, and for each p: in step one, we find a subset V consisting of many honest parties and some passive parties; in particular, no active party is in V. This is the set V we look for. We say step one succeeds if the following holds: first, the honest parties outnumber the active parties by at least n / (4 log n), which holds because of the enumeration; second, an active party cannot be authenticated by an honest party, which holds because of our MAC — the MAC makes sure this always holds.

Then there is step two. We already have a subset V consisting of many honest parties and some passive parties. For a party not in V, we want to know whether it is honest or not. How do we know? We let the parties inside V vote: since the honest parties are in the majority in V, if a party outside V is authenticated by many parties in V, we trust it and put it into V. If we collect enough parties, so that the number of parties in V is big enough, we can recover the secret. Step two succeeds if the following holds. First, the honest parties outnumber the passive parties in V by n / log n; this does not always hold — if it does not, we turn to step three, and if it holds, step two already finds the right codeword. The second point is that the subset of
authenticated neighbors is not corrupted; this is guaranteed because that verification graph is made public in this step. The third point is that the guessed p is correct. If we assume it is correct, then we get the right codeword; and we may simply assume it is correct, because we have step four.

Let's go to step three. Step three uses the graph algorithm, taking the complement of V as the input set. If step two fails, step three will succeed: step three succeeds when the number of passive parties in the complement of V is relatively small, and if the number of passive parties inside V is big, then in the complement it is small. That is why at least one of step two and step three will succeed. Then we go to step four. We assumed that the input p, the number of passive parties, is correct; if it is not correct, the candidate elimination will disqualify the resulting candidate — it can only output the right candidate codeword. Step four succeeds if p is not too small, and this is ensured by the enumeration.

Let's go to the conclusion. We present a robust secret sharing scheme against the rushing adversary that achieves the optimal share size; this is basically what we have done in this paper. There are still some open problems that we are interested in and may explore in the future. The first open problem: can we find a simpler and more practical solution to this problem? Our current solution achieves optimal share size but is rather complicated, and the analysis and approach of the CRYPTO 2020 paper are also relatively complicated. The problem itself is very natural, so maybe we can find more natural and simpler solutions. The second question: is it possible to design a linear-time robust secret sharing scheme? We know there is a linear-time secret sharing scheme, so
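Putting the four steps together, the control flow of the new reconstruction can be sketched as below. The helper callbacks (`step1` … `step4`) and the geometric enumeration of p are my own simplifications to make the skeleton runnable; the paper specifies how p is actually enumerated and how each step is implemented.

```python
import math

def reconstruct_robust(n, shares, step1, step2, step3, step4):
    """Skeleton of the new reconstruction: enumerate the guessed number of
    passive parties p (starting from n / log n), run steps 1-3 to collect
    candidate codewords, then let step 4 (candidate elimination) pick the
    unique correct one."""
    candidates = []
    p = max(1, n // max(1, int(math.log2(n))))
    while p <= n:
        V = step1(p, shares)          # step 1: honest + passive parties, no active ones
        cand = step2(V, shares)       # step 2: grow V by majority vote inside V, or fail
        if cand is None:
            cand = step3(V, shares)   # step 3: graph algorithm on the complement of V
        if cand is not None:
            candidates.append(cand)
        p *= 2                        # enumeration of p (geometric here, for illustration)
    return step4(candidates)
```

The structural point is the one made in the talk: wrong guesses of p may add wrong candidates, but step 4 disqualifies them, so only the right codeword survives.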
the question is whether we can achieve robustness in linear time as well. That is all for my talk. Thank you for your attention.