Okay, thank you for the introduction. Hi everyone. So I'm going to talk a bit about efficient designated-verifier non-interactive zero-knowledge proofs of knowledge, which is way too long for the name of a primitive, but whatever. This is joint work with Pyrros Chaidos.

So first of all, what is this way-too-long thing? A zero-knowledge proof is an interactive protocol between two parties, a prover and a verifier. There is some puzzle, and the prover claims that there is a solution to the puzzle. At the end of the interaction, the verifier should be convinced if and only if there is a solution to the puzzle, and it should not learn anything about the solution, apart from the fact that it exists. A non-interactive zero-knowledge proof (NIZK) is a zero-knowledge proof which consists of a single flow from the prover to the verifier. And a proof of knowledge additionally guarantees that the prover not only shows that a solution exists, but actually knows one; formally, this is stated by saying that there should be an extractor which can extract the solution from the prover.

Now, the only NIZKs that we know of which are practical for languages of interest, and provably secure, are based on pairing-based assumptions. And that's something we don't really like in crypto, because if someday pairings break down, then we lose all those nice things that we built from those assumptions — we would be out of business.

Now, some time ago, Damgård, Fazio, and Nicolosi introduced a framework, a compiler, to build designated-verifier non-interactive zero-knowledge proofs (DVNIZKs). And the hope is that DVNIZKs, unlike NIZKs, can be built without assuming pairings. Indeed, this paper, the DFN compiler, builds DVNIZKs starting from any Sigma-protocol together with an additively homomorphic encryption scheme. A Sigma-protocol is a three-move zero-knowledge proof with some nice properties, and additively homomorphic encryption is something we have from many assumptions. And so the hope is that even though we do not have NIZKs which are provably secure and practical without pairings, we can still get most of their applications by using DVNIZKs instead of NIZKs; if pairings break down, we can still get many good applications without needing pairings.

So that will be the starting point of our work, this compiler for building DVNIZKs. Let me quickly recap how it works; it's quite nice and quite simple. You start from an interactive protocol which has three moves, a so-called Sigma-protocol. The first message is a commitment, which the prover can compute locally. The second message is a challenge, a random value e picked by the verifier, independently of the first flow. And the last message is a response, some function involving the secret values of the prover. Specifically, we look at Sigma-protocols where the last flow is computed as a linear function of the challenge, with coefficients known to the prover. The observation which is at the heart of the transformation by Damgård–Fazio–Nicolosi is the following. Suppose you have an additively homomorphic encryption scheme. What is it? You can encrypt with a public key and decrypt with a secret key, and given a ciphertext, you can homomorphically evaluate any linear function f of your choice, getting an encryption of f applied to the plaintext. If you have that, then you can make this interactive scheme non-interactive.
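To make the "linear last flow" structure concrete, here is a minimal sketch of the kind of Sigma-protocol we have in mind — Schnorr's proof of knowledge of a discrete logarithm, whose response z = e·w + r is exactly a linear function of the challenge. This is my own illustration, not taken from the talk, and the parameters are toy-sized and completely insecure:

```python
# Schnorr's Sigma-protocol: proof of knowledge of w such that X = g^w.
# Toy, insecure parameter sizes -- for illustration only.
import secrets

p, q, g = 2039, 1019, 4       # p = 2q + 1; g generates the order-q subgroup of Z_p*
w = secrets.randbelow(q)      # the witness (a discrete logarithm)
X = pow(g, w, p)              # the statement

# Flow 1 (prover): a commitment, computed locally.
r = secrets.randbelow(q)
a = pow(g, r, p)

# Flow 2 (verifier): a random challenge, independent of the first flow.
e = secrets.randbelow(q)

# Flow 3 (prover): the response is a *linear* function of the challenge.
z = (e * w + r) % q

# Verification: g^z == X^e * a (mod p).
assert pow(g, z, p) == (pow(X, e, p) * a) % p
print("transcript accepted")
```

The linearity of the third flow is exactly what the compiler exploits next.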
So the idea is simply to have the challenge encrypted before the start of the protocol, and to send it in advance, put in some common reference string. With this encryption of the challenge over here, the prover cannot cheat, because it's encrypted: it cannot see the challenge in advance, it only has the ciphertext. But it can still compute its first flow as before, and it can also still compute its last flow homomorphically inside the ciphertext, since that flow is a linear function of the challenge. So now you get something which consists of a single flow. You have this encrypted challenge here, and to verify the proof, that's quite simple: the verifier just needs to know the decryption key. He decrypts the ciphertext, thereby recovering the last flow of the protocol, and applies the usual verification algorithm of the interactive protocol we started from. That's simple, that's elegant. But it has a few caveats; let me quickly go over them.

The first one is that if you look at the soundness of this compiler, it could only be proven under a relatively non-standard assumption, a complexity-leveraging-style assumption. Because even if it looks simple, when you really look at it, it's not so easy to understand why soundness holds.

A second caveat, which is maybe a more important issue, is that the proofs that you obtain through this compiler are not proofs of knowledge: they prove that there is a witness for the statement, but they do not prove that the prover actually knows this witness. And remember, our hope is that DVNIZKs could be used to replace NIZKs in many applications, but many applications of NIZKs crucially rely on them being proofs of knowledge. If you are familiar with, say, anonymous credentials: you would like to authenticate to a server, proving that you belong to some list of authorized users, or any more complex relation, but you would like to do it without disclosing anything about your identity, apart from the fact that you are an authorized user. This kind of application crucially relies on the proof-of-knowledge property, which you would not have here, and that rules out a large scope of applications.

And finally, the last issue is about the soundness property. What does soundness mean? The usual soundness notion for NIZKs is as follows: a prover should not be able to send a proof which is accepted by the verifier if this is a proof for an incorrect statement. For NIZKs, this definition is fine. The issue is when you look at this in the designated-verifier setting, because verification now involves some secret information. The soundness guarantee says nothing about the following scenario: the prover could interact many times with the verifier, sending proofs and receiving feedback — one bit of information, whether the proof was accepted or rejected. From this feedback, it could learn crucial information, say the verification key, and using this information it could maybe be able to forge subsequent proofs. If you think about it, it's essentially the same issue as with CPA security versus CCA security for an encryption scheme: you can have CPA security, so you cannot see what's inside a ciphertext, but if you are given access to a decryption oracle, then the security could break down. Here it's the same thing: you can have soundness, but as soon as you are given access to a verification oracle, the security may break down.
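To ground the discussion, here is how one run of the compiled proof might look in code: a toy sketch combining the Schnorr protocol above with textbook Paillier encryption, which is additively homomorphic. This is my own illustrative reconstruction of the DFN idea with insecure toy sizes, not the exact scheme from the paper:

```python
# DFN-style compilation sketch: the challenge e sits encrypted in the CRS,
# and the prover computes the linear response z = e*w + r inside the ciphertext.
# Textbook Paillier, toy insecure parameters; needs Python 3.8+ for pow(x, -1, m).
import secrets

P, Q = 1019, 1021                       # toy Paillier primes
n, n2 = P * Q, (P * Q) ** 2
phi = (P - 1) * (Q - 1)                 # the decryption key

def enc(m):
    rho = secrets.randbelow(n - 2) + 2  # random coin (assumed coprime to n)
    return (pow(1 + n, m, n2) * pow(rho, n, n2)) % n2

def dec(c):
    u = pow(c, phi, n2)                 # kills the coin: u = 1 + (m*phi mod n)*n
    return ((u - 1) // n) * pow(phi, -1, n) % n

# Schnorr parameters, witness, and statement, as before.
p, q, g = 2039, 1019, 4
w = secrets.randbelow(q)
X = pow(g, w, p)

# CRS: an encryption of the hidden challenge.
e = secrets.randbelow(q)
crs = enc(e)

# Prover: first flow as usual; last flow evaluated homomorphically,
# z = e*w + r over the integers (no wraparound here, since e*w + r < n).
r = secrets.randbelow(q)
a = pow(g, r, p)
c_z = (pow(crs, w, n2) * enc(r)) % n2

# Designated verifier: decrypt z, then run the usual Schnorr check.
z = dec(c_z) % q
assert pow(g, z, p) == (pow(X, e, p) * a) % p
print("DFN-style proof accepted")
```

Note that the designated verifier holds the Paillier decryption key here — which is exactly what makes the verification-oracle issue above so delicate.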
Now, you could argue that this is not an issue as long as the verifier never gives feedback in the application, so the oracle gives nothing more. But in any application where you'd like to really use the NIZK, it's way better to make sure that soundness is maintained even if the prover is allowed to receive feedback on whether the proof was accepted or not. And there was a claim in the DFN paper that the soundness of the compiler would be provably unbounded in this sense, and the claim was supported by a security analysis in an idealized model. In fact, we found this claim to be flawed: there was a mistake in the security analysis, and even more than that, there is an explicit attack. Against any proof produced by the DFN compiler, there is an explicit attack that breaks its unbounded soundness property, extracting the verification key after interacting polynomially many times with the verifier.

So, out of the three problems, the first one was solved, in a sense, in a previous work by Chaidos and Groth, who managed to get a variant of the compiler where the soundness is based essentially on a standard assumption; but the two last issues remained. What we get in this work is a new way of building DVNIZKs, starting essentially from the same idea, but where the soundness is now statistical — no assumption at all — and zero-knowledge holds under a standard assumption. Our DVNIZKs are proofs of knowledge: there is an extractor that can extract the witness from the proof entirely. And the soundness is unbounded: it is maintained even after polynomially many interactions with an oracle for the verification algorithm.

How do we do this? Let's start from the DVNIZK that you would get with the DFN compiler. Remember: there is this first flow, there is this encryption of the challenge, and there is this last flow, which is encrypted. Abstracting things out, what we want is that the verifier receives this value e·x0 + x1, the response of the initial interactive proof. Our observation is that there is another way of making this information available to the verifier, which is to avoid putting this e in the plaintext of the CRS encryption, and instead to put it in the random coin: the plaintext of the ciphertext in the CRS is now zero, whereas its random coin is the challenge e. We need some more properties for this to work; in particular, we will need that the encryption scheme is also homomorphic over the random coins.

So why does this work? If the prover wants to transmit this e·x0 + x1 to the verifier, it encrypts x0 with some random coin r, and encrypts x1 with a correlated random coin −e·r. It can do that because it has this encryption of zero with coin e from the CRS: using the homomorphism over the coins, it never needs to learn e itself. And now the verifier, who knows this e, can cancel out the coins: it multiplies the encryption of x0 homomorphically by e and combines it homomorphically with the other one, and the resulting coin e·r cancels out with the −e·r, so the verifier holds an encryption of e·x0 + x1 with no random coin anymore. So suppose we use a scheme with the property that if you have an encryption of a plaintext where the random coin is zero, then you can publicly extract the plaintext. All of this is satisfied, for instance, by the Paillier encryption scheme and its variants.
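Here is a toy sketch of that coin-cancellation trick, using a Paillier-style scheme in which the random coin lives in the exponent of a fixed base h, so that the scheme is additively homomorphic over both the plaintext slot and the coin slot. Again, this is my own illustration with insecure toy parameters:

```python
# The challenge e hides in the *random coin* of an encryption of 0 in the CRS;
# a coin-zero ciphertext reveals its plaintext publicly, with no decryption key.
# Toy, insecure parameters; needs Python 3.8+ for negative exponents in pow().
import secrets

P, Q = 1019, 1021
n, n2 = P * Q, (P * Q) ** 2
h = pow(secrets.randbelow(n2 - 2) + 2, 2 * n, n2)  # coin base (assumed a unit); order coprime to n

def enc(m, coin):
    # Enc(m; coin) = (1+n)^m * h^coin mod n^2: additively homomorphic in both slots.
    return (pow(1 + n, m, n2) * pow(h, coin, n2)) % n2

# CRS: plaintext 0, random coin e, with e sampled from a range much larger than n.
e = secrets.randbelow(n * n)
crs = enc(0, e)                    # the designated verifier keeps e

# Prover (never learns e): encrypt x0 with coin r, and x1 with the correlated
# coin -e*r, computed homomorphically from the CRS element.
x0, x1 = 123, 456                  # stand-ins for the Sigma-protocol's secret values
r = secrets.randbelow(n)
c0 = enc(x0, r)
c1 = (pow(1 + n, x1, n2) * pow(crs, -r, n2)) % n2   # = Enc(x1; -e*r)

# Verifier, knowing e: the coins e*r and -e*r cancel, leaving Enc(e*x0 + x1; 0),
# whose plaintext can be read off publicly.
d = (pow(c0, e, n2) * c1) % n2
assert (d - 1) // n == (e * x0 + x1) % n
print("verifier recovered e*x0 + x1 without any decryption key")
```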
This allows the verifier to recover this e·x0 + x1, and it does not require the secret key of the scheme anymore; the only thing the verifier needs, besides e, is the public homomorphic evaluation. So we remove the secret key from the verification key — we keep only this challenge e — and the secret key can now serve as an extraction trapdoor. Why is this useful? Because with the secret key, the extractor can obtain not only this e·x0 + x1, but each of x0 and x1 individually, so it can extract all the information — the witness.

So why does this work? We have an extractor that is given the secret key of the scheme, and it can decrypt c0 and c1. Why is that not a problem? It's not a problem because even a prover that knows the secret key of the scheme cannot cheat; in fact, even a computationally unbounded prover cannot cheat. Let M be the order of the plaintext space and R the order of the random coin space, so plaintexts live over Z_M and coins over Z_R. Suppose that the scheme satisfies gcd(M, R) = 1, so they are coprime; this is not a core requirement — it can be removed — but it simplifies the analysis. The key property is the following: given an encryption of 0 where the random coin is e, even if you are computationally unbounded, you only get to learn e mod R. Coming back to the construction: if a prover manages to cheat, that means it has been able to cheat on this value e·x0 + x1, which now sits in the plaintext part of the ciphertext; and from a cheating prover, we can show that the extractor can compute the value e mod M. But remember, M and R are coprime. So if we initially pick this e to be a very large integer, then giving this ciphertext to the adversary in advance leaks only e mod R, which statistically leaks no information whatsoever about e mod M. This means that it is statistically infeasible for the prover to cheat: cheating is the equivalent of just guessing a value about which it has no knowledge at all — and this holds regardless of whether the prover knows the secret decryption key.

This also implies that the soundness is unbounded. If the prover interacts polynomially many times with an oracle that contains this verification key, which is this e, we can just replace this oracle by one that only uses the secret key — which, as we already saw, does not help the prover in cheating, and which does not leak any information about the value of e mod M. So, interestingly, even if you interact polynomially many times with a verification oracle, you don't get any advantage at all; you don't get to learn the verification key. And finally, as our extractor can fully decrypt all the information encrypted by the prover, we can easily show that we have an extractor recovering the entire witness of the statement.
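To illustrate numerically why learning e mod R gives nothing about e mod M, here is a tiny demonstration of the coprimality argument, with toy moduli of my choosing:

```python
# With gcd(M, R) = 1 and e uniform over a range of size M*R, the residue
# e mod M stays uniform even conditioned on the leaked value e mod R (CRT).
from collections import Counter
from math import gcd

M, R = 7, 10                      # toy plaintext/coin moduli
assert gcd(M, R) == 1

leaked = 3                        # suppose the adversary learned e mod R = 3
candidates = [e for e in range(M * R) if e % R == leaked]
print(Counter(e % M for e in candidates))
# every residue mod M appears exactly once: a cheating prover can do no
# better than guess e mod M, i.e. succeed with probability 1/M
```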
So what do we get out of that? Building upon this idea, we design a general framework where we can handle a large range of statements over abelian groups, and for those statements we get unbounded designated-verifier proofs of knowledge. The proofs that we obtain are practical. And there are variants of the framework. The first variant allows you to prove essentially any relation that you can think of between, say, Pedersen commitments, or ciphertexts based on a decisional assumption — anything of this kind — and it uses essentially an additively homomorphic encryption scheme as the underlying tool to make it work. There is a dual variant of the framework, where we now use an abelian group as the basis of the underlying commitment scheme, and we use it to prove statements about the additively homomorphic encryption scheme itself; this means that the same framework, used in a dual way, allows you to prove essentially any relation that you can think of about, say, Paillier ciphertexts — proofs that we might want to do in practice. And there is a third variant of the framework, where we essentially give up everything that we were able to get so far: we don't have unbounded soundness anymore, we don't have proofs of knowledge anymore, and in fact the soundness is not statistical anymore. But we show that these ideas still lead to proofs with meaningful security guarantees, and what we gain in exchange is very strong efficiency: with this third variant, we can prove algebraic relations between Paillier ciphertexts, for example, and the non-interactive proof that we obtain is very short — even about twice shorter than what you would get in the random oracle model by using Fiat–Shamir to create those proofs. And that's it; thank you for your attention. If you have any questions?

Question: Can you state this more formally? You mentioned proof systems that allow proving statements; I was wondering what the difference is, and what class of statements you can handle.

So, we essentially looked at a framework for the kind of algebraic languages that can be encoded in our proof systems. In a sense you can prove any language, because you can always commit to a witness and prove an arbitrary relation about the committed values; but we mainly looked at the specific setting where the proof size is essentially proportional to the size of the description of this algebraic language, so that we get the kind of efficiency that we would expect for the language, together with all those additional properties.

Question: You said that the verifier doesn't use the secret key to verify — so what does he use the secret key for?

There is a bit of confusion here, because it is a bit confusing to differentiate the secret key from the verification key. In the initial framework, the secret key of the encryption scheme is the verification key of the proof. That's not the case anymore in our framework, where the secret key of the encryption scheme is just the extraction trapdoor: it does not appear anywhere in the real world; it is only used in the security analysis to explain why the scheme is secure, and no one knows it. The verification key is now this e, and the crucial point is that it is picked over the integers: knowing e over the integers is what allows the verifier to recover this value.

Question: How come the proof is designated-verifier, then?

Because if the prover knew e over the integers, it could cheat — although cheating really only requires someone who can guess the value e mod M.

Question: But designated-verifier — doesn't that mean the verification key plays the role of a public key for someone?

Yes. Here, the public key of the encryption scheme, together with this ciphertext in the CRS, is the public key of the proof system, and the verification key is just this e. And if the prover knew it over the integers, he would be able to cheat — in fact, that's how we prove
soundness: we show that a prover who manages to forge a proof could be used to compute e over the integers, while the only information ever made available about e is e mod R. OK, let's thank the speaker again.