Hello everyone, my name is Hila Dahari, and today I will talk about our work "Towards Accountability in CRS Generation". This is joint work with Prabhanjan Ananth, Gilad Asharov, and Vipul Goyal. Let's start. First, I want to define non-interactive zero knowledge. Assume we have an NP language and two parties, a prover and a verifier. The prover wants to convince the verifier that some statement is true, without revealing any additional information about the statement. Since we are talking about non-interactive proof systems, the prover can send only a single message to the verifier. And we know that to achieve non-interactive zero knowledge, we need some trusted setup. Specifically, in this talk we consider the common reference string model. What is this model? In the common reference string model, the parties share a trusted public string sampled from a known distribution. You can think of this string as a public key of an encryption scheme, or a commitment to some bits. The motivation for defining this model is to achieve cryptographic primitives that we cannot achieve without this assumption, for example non-interactive zero knowledge for NP, maliciously secure two-round MPC, and more. The properties we require from such a proof system are as follows. The first property is completeness, which means that if the statement is true, then the verifier accepts with high probability. The second property is soundness, which means that if the statement is false, then the verifier rejects with high probability. The last property, the special one, is zero knowledge, which means that if the statement is true, then the verifier cannot learn any additional information about the statement. The way we formalize it, we require that there exists a simulator such that the simulator, upon receiving only the statement, can output the whole transcript, that is, the CRS and the proof.
So basically the simulator can output everything that the verifier sees during the proof. All of this works great in the theoretical world. However, in the real world, we have two main questions. The first question is: who generates the CRS? We don't have any trusted public setup. The second question is: what happens if this CRS is maliciously generated? Again, we don't have any trusted setup. These questions have been studied for a long time, and one main answer is to consider weaker notions of security. For example, well-known works consider witness indistinguishability instead, where the proof can be made non-interactive without any setup; but if we want to achieve zero knowledge, that is, NIZK, we need a trusted setup. So in the real world, who generates the CRS? One answer is MPC, multi-party computation: many parties generate the CRS together, and if some fraction of the parties is honest, then we can trust the output. But we know that in some cases this is not enough, and the parties even destroy the computers afterwards, because the computer itself can leak a trapdoor or some private information, and then we cannot trust the output anymore. The second answer is a trusted party. But think about it: do we really trust anyone in real life? Do such trusted parties really exist? So what can we do? If we have one party who generates the CRS, and this party is malicious, then maybe there is some trapdoor in the CRS. If the malicious party recovers private information but keeps the private information to itself, then we can do nothing, right? We cannot even know that this happened, so it seems impossible to protect against. But if the malicious party uses the private information, then we want to do something: we want to prove that it acted maliciously. There are many ways to use the private information.
And in this talk, we focus on holding accountable a party who tries to sell the private information. We introduce the notion of accountability in CRS generation, and we study accountability for NIZK and for two-party computation, specifically for OT. Informally, our results: for NIZK, we get a NIZK for all of NP satisfying accountability. For two-party computation, we have an impossibility result: there is a functionality for which we cannot achieve accountability. And we also have positive results: assuming standard assumptions, we get two-party computation for a large class of functionalities that satisfies accountability. Now I want to focus on our scenario. In our setting, we consider a party called the authority who generates the CRS. If this authority is an honest party, everything works great: the public can use the CRS, generate NIZKs, and everything works. But suppose this authority is a malicious party. As we said, we focus on an authority who can sell the information. So let's see how it goes. The malicious authority generates a CRS with a trapdoor. Then the public uses the CRS to generate NIZKs and sends them to the verifiers. Now the malicious authority can use the trapdoor and extract private information from a proof. And the malicious authority not only extracts the private information, it also sells the private information for profit. So we have a third party that can query the malicious authority: the malicious authority basically sets up a backdoor service, and this third party sends a proof to the malicious authority and gets back the private information from the proof. So let's summarize the scenario. When the authority is malicious, the authority can maliciously generate the CRS with a trapdoor, recover private information, and use the backdoor service to sell the private information for profit. And what is our goal?
We want to use this backdoor service to generate a proof that the CRS was maliciously generated, and that this authority is the one who did it. And how do we do it? Specifically, we want to construct an extractor that can query the backdoor service and, using it, generate a proof that the authority maliciously generated the CRS. Now, if the malicious authority recognizes the extractor, or recognizes the extractor's queries, then the malicious authority simply won't answer those queries. So we need the extractor's queries to somehow look like real queries, like queries from a third party who just wants to buy private information. And how do we do it? We design a CRS generation protocol that satisfies an accountability property. Before I explain exactly what the accountability property is, let me explain the syntax of the protocol. We have four algorithms: GenCRS, Prove, Verify, and Judge. GenCRS, Prove, and Verify form just a NIZK proof system, as you know. What is the Judge? The input of the Judge is the CRS and an evidence, and the output of the Judge is whether the CRS is honest or corrupted. And what is accountability? If the authority is malicious and sells the information, then we can use the backdoor service to generate a publicly verifiable proof: a proof that anyone can just verify and check. For example, we can take this proof and convince a judge in court. To complement the accountability property, we need to require that we cannot just blame an honest authority, right? Therefore, we define defamation-freeness: if the authority is honest, one cannot generate a proof against the authority that is accepted by the Judge. And how do we formalize it? We say that the probability that the Judge accepts a proof against an honestly generated CRS is negligible.
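To make the syntax concrete, here is a minimal Python sketch of the four-algorithm interface. This is only an illustration, not the paper's construction: Prove and Verify (a full NIZK) are omitted, and a simple hash-based commitment stands in for the commitment placed in the CRS. The point is the shape of Judge: it takes only the public CRS and an evidence string, so anyone can run it.

```python
import hashlib
import secrets

def gen_crs():
    """Toy GenCRS: the CRS carries a commitment whose opening will later
    serve as the publicly verifiable evidence of corruption.
    (An honest authority erases `opening` after setup.)"""
    opening = secrets.token_bytes(16)
    com = hashlib.sha256(opening).hexdigest()
    crs = {"com": com}          # the published CRS
    return crs, opening

def judge(crs, evidence):
    """Toy Judge: anyone holding a valid opening of the commitment in the
    CRS can blame the authority; otherwise the CRS is declared honest."""
    if hashlib.sha256(evidence).hexdigest() == crs["com"]:
        return "corrupted"
    return "honest"
```

In this toy model, defamation-freeness corresponds to the hiding/one-wayness of the commitment: without help from the backdoor service, nobody can produce an opening that the Judge accepts.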
We say that the four algorithms GenCRS, Prove, Verify, and Judge form a NIZK with malicious-authority security if GenCRS, Prove, and Verify form a NIZK proof system, and GenCRS, Prove, Verify, and Judge satisfy accountability and defamation-freeness. Now I will explain how we define accountability; we define it via two experiments. The first experiment models the real world. We have an authority that generates a CRS and sends it to the experiment; this CRS can be corrupted or not. Now the experiment samples some instance and generates a NIZK proof using the CRS from the authority. The experiment sends the proof to the authority, and the authority sends back some witness. The output of this experiment is one if and only if this witness is a valid witness. Why does this model the real world? The first message in this experiment is exactly the real world: the malicious authority generates a CRS, the prover has some statement and witness, uses the CRS from the authority to generate a NIZK, and sends this NIZK to the verifier. And what is the second part of the experiment? The second part is exactly the backdoor service: some party sends a proof to the malicious authority's backdoor service and gets back a witness. The second experiment is the extraction experiment. In this experiment we have two parties, the authority and the extractor. Again, the authority generates the CRS, which can be corrupted or not. The extractor samples some instance and generates a NIZK proof using the CRS from the authority. And again, the extractor sends the proof to the authority, and the authority sends back a witness; this models the extractor using the backdoor service. But this is not enough: now we also require that the extractor generates some evidence.
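The two experiments can be sketched as games parameterized by the adversarial authority. This is only a structural sketch of the interaction pattern described above (all cryptographic pieces are callbacks I introduce for illustration, not the paper's formal definitions):

```python
def real_experiment(authority, prover, sample_instance, is_valid_witness):
    """The 'real world' experiment: an honest prover uses the authority's
    CRS, and the authority's backdoor service tries to return the witness."""
    crs = authority.gen_crs()                  # possibly with a trapdoor
    x, w = sample_instance()
    proof = prover(crs, x, w)                  # honest NIZK prover
    w_guess = authority.backdoor(x, proof)     # the "selling" service
    return int(is_valid_witness(x, w_guess))   # 1 = authority recovered w

def extraction_experiment(authority, extractor, judge):
    """The extraction experiment: the extractor may query the backdoor
    service and must produce evidence that convinces the Judge."""
    crs = authority.gen_crs()
    evidence = extractor(crs, authority.backdoor)
    return int(judge(crs, evidence) == "corrupted")
```

Accountability then says: for every authority that wins `real_experiment` with noticeable probability, there exists an extractor that wins `extraction_experiment`.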
And now the output of the experiment is one if the Judge, given this evidence, is convinced that the CRS is corrupted. So what is the accountability property? For every authority that can succeed in the real experiment, there exists an extractor that can succeed in the extraction experiment. And what is our result? Assuming standard assumptions, we construct a NIZK for NP, and I will now show you the high level of the construction; the construction satisfies accountability and defamation-freeness. The main idea of our construction is to add more information to the CRS, and the proof for the Judge will be the ability to open it. Specifically, here we add a commitment to the CRS, and the evidence will be the ability to open the commitment. From the security of the commitment, we know that we cannot just open the commitment by ourselves, so we need to think. Our extractor can query the backdoor service: it can send the backdoor service a proof and get back witnesses. So somehow, from the obtained witnesses, the extractor should gain the ability to open the commitment in the CRS. How can we do it? First of all, we use a commitment scheme with a specific property: a rerandomizable bit commitment. At a high level, what does it mean? You can take a commitment to some message, for example to zero, whose opening is some random string L. Now we can sample R and rerandomize the commitment. The output of the rerandomization is a commitment to the same message, zero, but now the opening is L xor R. We don't know L, but we know R. So if we can open the new commitment, we will get L xor R, and then we can go back and extract L, which is the opening of the original commitment. This is the high level of the idea. So let's see how it works. Note that this is a toy example, and this is not an NP-complete language.
And this language is basically commitments to zero. The prover's statement is a commitment to zero, and the prover's witness is its opening, some string. In addition, in the CRS we have a commitment to zero whose opening is L. Remember, we need to get L: L is our evidence for the Judge. So how can the extractor use the backdoor service to get L? Let's see what the extractor does. The extractor samples R, takes the commitment from the CRS, and rerandomizes it. This rerandomized commitment will be the new statement of the extractor: ĉ is now the statement. The extractor generates a NIZK for ĉ and sends it to the backdoor service, to the malicious authority. The malicious authority sends back to the extractor the opening, the witness, which is L xor R. Now the extractor can extract L, and this is the evidence for the Judge. The extractor sends L to the Judge, together with the commitment from the CRS, and the Judge just needs to check that this is a valid opening of the commitment, and if so, output "corrupted CRS". So accountability follows from perfect rerandomization. Why? Because the malicious authority, the backdoor service, cannot distinguish that the string L xor R actually contains the opening of the commitment from the CRS, since it is identically distributed to a fresh random string. So the backdoor service will answer this query. In addition, ĉ is a valid statement, right? It is a commitment to zero, which is exactly the language, so the NIZK for it will be a valid proof, and everything will look fine. So the malicious authority will answer the extractor, and we have accountability from perfect rerandomization. And we also have defamation-freeness. What does defamation-freeness say? It says that we cannot blame an honest party, an honest authority. So why do we have defamation-freeness?
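The extractor's flow can be made runnable with a toy stand-in for the rerandomizable bit commitment. The talk's scheme composes openings with XOR; in this sketch I use a Pedersen-style commitment in a tiny group, where addition of openings mod q plays the role of XOR. The parameters, the brute-force "trapdoor extraction", and all names are illustrative assumptions of mine, not the paper's construction:

```python
import secrets

# Toy Pedersen-style commitment in the order-q subgroup of Z_p^*.
p = 1019            # safe prime: p = 2q + 1 (demo-sized, insecure)
q = 509
g = pow(2, 2, p)    # generator of the order-q subgroup
h = pow(5, 2, p)    # second generator (discrete log w.r.t. g assumed unknown)

def commit(bit, r):
    """Commit to `bit` with opening r in Z_q."""
    return (pow(g, bit, p) * pow(h, r, p)) % p

def rerandomize(c, s):
    """Same committed bit, fresh-looking commitment; new opening = old + s (mod q)."""
    return (c * pow(h, s, p)) % p

def backdoor_service(c_hat):
    """Models the authority using its CRS trapdoor to pull the witness (the
    opening) out of the submitted proof; in this tiny group we brute-force it."""
    for w in range(q):
        if pow(h, w, p) == c_hat:   # commit(0, w) == c_hat
            return w

# Authority: place a commitment to 0 in the CRS, keep the opening L secret.
L = secrets.randbelow(q)
crs_com = commit(0, L)

# Extractor: rerandomize the CRS commitment and "buy" its opening.
R = secrets.randbelow(q)
c_hat = rerandomize(crs_com, R)     # distributed exactly like a fresh commitment
w = backdoor_service(c_hat)         # service returns L + R (mod q)
L_recovered = (w - R) % q           # strip off the extractor's own randomness

def judge(com, evidence):
    """Publicly verifiable check: a valid opening of the CRS commitment."""
    return "corrupted" if commit(0, evidence) == com else "honest"
```

Here `judge(crs_com, L_recovered)` returns `"corrupted"`: the recovered value opens the commitment from the CRS, which is exactly the evidence the talk describes. Perfect rerandomization shows up as `c_hat` being identically distributed to a fresh commitment to 0, so the service cannot refuse the query.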
Defamation-freeness follows from the security of the commitment, because without any help we cannot open the commitment. So this was the high-level idea of the construction. Of course, I cannot explain all the details here, so I will give you a taste of some challenges; you can go to the paper and find all the details there. First, in the paper we extend this idea to an NP-complete problem. A major challenge that we haven't talked about is how the extractor can generate the NIZK: the extractor doesn't know the witness. Let me remind you. The extractor has a new statement. What is the statement? A commitment to zero, and the witness is L xor R. But the extractor knows only the value of R; it doesn't know L, so it doesn't know the value of L xor R. So the extractor doesn't know the witness. How, then, can the extractor generate the proof? This is a major challenge, and we show how to handle it in the paper. The last challenge I will mention is this: the high level of our approach is to force the authority to add more information to the CRS. However, we have a problem here, right? Because if the authority is a malicious party, we need to somehow check that the additional information is valid. Maybe the "commitment to zero" is actually a commitment to one, or a commitment to garbage, or just garbage and not a commitment to anything. And we cannot use a NIZK to check it, since a NIZK would itself require a CRS; it's a circular problem. So again, all the details are in the paper, and you can go and read it. Now I want to give you a taste of more results. As I said before, we extend accountability and study accountability also for two-party computation. We know that we cannot achieve maliciously secure two-round two-party computation in the plain model, without any CRS or trusted setup, and we do know that in the CRS model we can achieve maliciously secure two-round two-party computation.
But again, we have the problem that if a corrupted authority generates the CRS, then the authority can recover private information. So can we achieve accountability in CRS generation for two-party computation? First, we extend the definition and introduce accountability for two-party computation. And in two-party computation, we have something interesting. In non-interactive zero knowledge, there is a single message between the prover and the verifier; but in a two-round protocol, we have two messages. So maybe the authority also controls one of the parties, and not only the generation of the CRS: the authority can be active during the protocol. We ask whether we can guarantee accountability even in this case, and we call this case strong accountability. We have two results for OT. The first result: assuming iO, we can construct OT in the CRS model that satisfies strong accountability and defamation-freeness. The second: assuming standard assumptions, we can construct two-round maliciously secure OT in the CRS model that satisfies weak accountability, which is the case where the authority controls only the generation of the CRS. And for two-party computation, we have an impossibility result that says there exists a two-party functionality for which no protocol can satisfy accountability, not even weak accountability. We also have a positive result: for a large class of functions, assuming standard assumptions, we can construct a protocol that satisfies accountability and defamation-freeness. Thank you for listening, and have a good day.