Hello, I'm going to tell you about witness indistinguishability for any single-round argument, with applications to access control, and this is joint work with Yael Kalai. Let me start by talking about delegating NP computation. This is the setting where we have a verifier, which is usually computationally weak, and it wants to learn whether some NP statement x is valid, that is, whether x belongs to some NP language L. It wants to use the computational power of a powerful prover, or server, which holds a witness for this NP statement. The obvious way to do this is for the prover to simply send the witness to the verifier. What we want is to do it with fewer resources, in terms of communication and in terms of the verifier's computational complexity. So what we want is a proof system, which of course needs to have completeness and soundness, but should also have succinct communication: the communication should be much shorter than the length of the witness w. Also, the complexity of the verifier should be much lower than the complexity of actually running the NP verification on the witness. Given these requirements, we cannot hope to just send a single message from the prover to the verifier, but we still want to minimize the interaction, the number of rounds of communication. What we can hope for is a two-message protocol, where the verifier first sends a query Q and the prover responds with an answer A. Furthermore, we can hope that the query Q can be generated independently of the particular NP statement x that we want to verify. Another requirement is that the complexity of the prover should also be moderate: it should be proportional to the complexity of actually verifying the NP witness against the statement. So this, in general, is the task of delegating NP computation, and we can think of a few variants of this task. For example, one question is whether the verifier needs secret information in order to verify the validity of the answer A.
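The two-message pattern just described can be sketched as a small interface. This is my own illustrative toy, not code from the paper: the names (gen_query, prove, verify), the subset-sum "language", and the fact that the toy answer is the witness itself are all assumptions for concreteness; a real scheme compresses the answer far below the witness length.

```python
# Toy sketch of the two-message delegation interface (illustrative only).
# The query Q is generated independently of the statement x; in the
# secretly verifiable variant, a secret key sk is kept from the prover.
import secrets

def gen_query(security_param=128):
    """Verifier's first message: a public query Q and (possibly) a
    secret verification key that never leaves the verifier."""
    sk = secrets.token_bytes(16)
    q = b"public-query"
    return q, sk

def prove(q, x, w):
    """Prover's answer A. In a succinct scheme |A| << |w|; this toy
    just returns the witness itself, purely for illustration."""
    return w

def verify(sk, x, a):
    """Toy NP relation: x = (numbers, target); accept if the indices
    in a select a subset of numbers summing to target."""
    numbers, target = x
    return sum(numbers[i] for i in a) == target

q, sk = gen_query()
x = ([3, 5, 7], 12)      # statement: some subset sums to 12
w = [1, 2]               # witness: 5 + 7 = 12
a = prove(q, x, w)
print(verify(sk, x, a))  # True
```

Note that nothing about x was needed to produce Q, matching the statement-independence property from the talk.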
We can think about public verifiability, where everybody who has Q and A can run the verification procedure and check whether the verifier would accept. Or we can think about secret verifiability, where the verifier, while generating the query Q, also generates a secret key which it keeps secret from the prover, and this secret key is used in order to verify the answer A. Indeed, in many protocols in the standard model you actually need this secret verifiability. Another parameter we want to consider is selective versus adaptive soundness: the question is whether, for soundness purposes, the NP statement x can be chosen by the prover after seeing the query Q, or whether we can assume that the NP statement is chosen independently of the query Q. The last parameter that we're going to talk about is privacy for the prover. The prover might want to maintain the privacy of the witness it is using, and in this communication pattern this translates to the notion of witness indistinguishability: the verifier should not be able to tell whether the prover used witness w1 or witness w2 for proving the validity of the NP statement. In terms of known constructions, in the random oracle model or under knowledge assumptions we can really get the best possible parameters, and even get a single-message argument system with optimal communication complexity, computation complexity, and so on. In the standard model things are more complicated, and we only have limited results for subclasses of NP statements; in particular, these protocols require secret verifiability. What we do in this work is try to enhance the properties of these NP delegation schemes, and in particular we're going to show how to add witness indistinguishability generically to these types of arguments. So let's talk about our results in a little more detail.
What we show is that it's possible to generically transform every NP delegation scheme of this form into one that also has witness indistinguishability, so we can add privacy for the prover on top of an existing delegation scheme. The cost of this transformation is an additive increase in the communication complexity and the verifier complexity compared to the original delegation scheme that we started from. This additive factor is polynomial in the computational complexity of the original verifier: our new communication complexity and our new verifier complexity are going to be additively larger, with a factor that is polynomial in the original verifier's complexity. However, if we start from a very good delegation scheme, where the verifier complexity is much smaller than the length of the witness, then this property is maintained even after the transformation. In terms of assumptions, we require super-polynomially secure two-message maliciously secure oblivious transfer, and in addition we require that the original delegation scheme we start from is super-polynomially sound. We're going to see where this comes in when we talk about the details of our transformation. The other result in this work is an application: we present a new primitive that we call an access control scheme, and I'm going to say more about what this primitive is later on. If you know the notion of anonymous credentials, then access control schemes are similar to anonymous credentials, and they also offer succinctness for the credentials, which is a sought-after property in this context. However, they do not have anonymity against the issuer of the credentials. In order to construct these access control schemes, we have our witness indistinguishability transformation.
We want to apply it on top of one of the NP delegation schemes in the standard model that I mentioned on the previous slide. For this purpose we also need a super-polynomially secure single-server private information retrieval scheme, because this allows us to instantiate those NP delegation schemes for limited classes of NP statements, and this gives us the access control scheme that we need. Concretely, if you take all of the required building blocks and go down to the concrete assumptions that are needed, then our results can be instantiated based on the super-polynomial hardness of assumptions such as Learning with Errors, Decisional Diffie-Hellman, Decisional Composite Residuosity, or Quadratic Residuosity. So these are the results, and I'm going to start by talking about our generic witness indistinguishability transformation. We want to start from a delegation scheme and add witness indistinguishability on top of it. In the original delegation scheme it is possible that some information about the witness leaks to the verifier through the response A that the prover computes, and we want to prevent that. The basic idea is that rather than sending A in the clear, we're going to send a commitment to A, together with a proof that the committed value is proper, that is, a value that satisfies the verification procedure. Of course, this proof needs to be a witness indistinguishable proof. What do we win by this? Now the witness indistinguishable proof only needs to apply to a statement that has a short witness, because it's a statement about the value inside the commitment and not about the original NP statement. Therefore, even if we start with a witness indistinguishable proof that is not succinct, the eventual communication complexity is going to be succinct. So let's do this in a little more detail.
Instead of sending A, we're going to send a commitment to A using some randomness ρ, together with an additional proof. What is this proof? It is a witness indistinguishable proof that C is indeed a valid commitment to a value A, and that this value A satisfies the verifier's predicate with respect to the given query Q. This is of course a very simple and straightforward idea, and it can indeed be made to work; all we have to do is take care of the subtleties that arise when we actually try to apply this blueprint. In terms of soundness, in order to prove soundness what we need to show is that if we have a prover that succeeds in the new, transformed protocol, then we also have a prover that succeeds in the original protocol. The way to do this is to take the commitment and pull out the value A, and this is the value that will allow a prover to succeed in the original protocol. So what we need is to be able to extract the value A out of the commitment. However, we don't have additional rounds in which to perform extraction, so we're going to use complexity leveraging. In particular, we're going to use a commitment scheme that is fairly weak: it's super-polynomially hard to break, but it can be broken with some super-polynomial computational effort. We're going to use brute force to break the commitment, extract the value A, and use it for the purposes of the reduction. This is the reason why we need super-polynomial soundness for the original delegation protocol and for the witness indistinguishable proof: these need to remain sound even in a setting where we can brute-force open the commitment. So this is where the super-polynomial assumptions come from. That was soundness; there are other complications that we need to resolve.
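The brute-force extraction step can be illustrated with a toy. This is my own sketch, not the paper's commitment scheme: the hash-based commitment, the 16-bit randomness, and the candidate-answer list are all toy assumptions. In the actual reduction the brute-force cost is super-polynomial, which is exactly why the underlying delegation scheme and WI proof must stay sound against super-polynomial adversaries.

```python
# Toy complexity leveraging: a binding commitment whose randomness is
# short enough that the reduction can open it by exhaustive search.
import hashlib
from itertools import product

R_BITS = 16  # toy randomness length: breakable with ~2^16 hash calls

def commit(a, rho):
    """Hash-based commitment to answer a with randomness rho."""
    return hashlib.sha256(rho + a).digest()

def brute_force_open(c, candidate_answers):
    """The reduction's extractor: try every randomness string and
    every candidate answer until the commitment opens."""
    for a in candidate_answers:
        for bits in product([0, 1], repeat=R_BITS):
            rho = int(''.join(map(str, bits)), 2).to_bytes(2, 'big')
            if commit(a, rho) == c:
                return a
    return None

a = b"A"                            # the prover's committed answer
rho = (12345).to_bytes(2, 'big')
c = commit(a, rho)
extracted = brute_force_open(c, [b"A", b"B"])
print(extracted)                    # b'A'
```

The extractor recovers A without any extra rounds of interaction, which is the whole point of the leveraging argument.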
First of all, as I said, in many cases, and in particular in the cases we want to use for our application, the protocol is only secretly verifiable. This means that the verifier has some secret key that it uses in order to check whether the response is valid. If the prover wants to come up with a proof that the verifier would indeed accept, it cannot do so, because it doesn't have the secret key, and it must not have the secret key, otherwise soundness is broken. So we need to handle this difficulty. The additional difficulty is that if we just use a standard witness indistinguishable proof here, we're going to run into trouble, because the statement we are proving really has only one witness: the value A. Since our commitment needs to be statistically or perfectly binding for the purposes of brute-force extraction, there is only a single witness that the prover can actually use in this witness indistinguishable proof, and therefore witness indistinguishability is meaningless. So we'll need to figure out what to do in order to get a meaningful notion of witness indistinguishability even in this setting. Let's start with handling secret verifiability. Now the verifier also has a secret key that it uses in order to verify the response A. The idea is to use fully homomorphic encryption. I want to allow the prover to prove a statement that relates to the secret key, but I don't want to give it the secret key in the clear. So what I'm going to do is send a homomorphic encryption of this secret verification key to the prover: I send the query, the public key for the homomorphic encryption scheme, and some ciphertext c0, which is an encryption of the verifier's key.
Now the prover can evaluate the verifier's predicate under the encryption, and obtain an encryption of what the verifier would have output given the value A that the prover generates. Let's call this ciphertext ct. It is supposed to be an encryption of one if the response A is indeed a valid response for the delegation protocol, because then the verifier is supposed to accept. What the prover sends back is this ciphertext, which is supposed to be an encryption of one, together with a proof that the ciphertext was generated properly: that it was obtained by homomorphic evaluation on the FHE encryption of the secret verification key, with a response A that is committed to in the commitment C. This is now the statement that is proven under WI. The verifier can check, first of all, the validity of the WI proof; in addition, it can check that the ciphertext ct indeed decrypts to one, which guarantees that ct is a homomorphic evaluation on some committed response, and that this response would have led the verifier to accept. This is our strategy for dealing with the secret verifiability issue. But notice that we need a homomorphic encryption scheme with a circuit privacy property, because we wouldn't want information about w or about A to leak from the ciphertext ct. Circuit privacy means that the output ciphertext does not leak information about the computation that was performed. Furthermore, we need malicious circuit privacy, because the parameters of the homomorphic encryption are generated by the verifier, which we want to protect against. Indeed, we can get maliciously circuit-private fully homomorphic encryption under fairly mild variants of the Learning with Errors problem.
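The message flow just described can be sketched with a mock in place of real FHE. To be clear about the assumption: MockFHE.eval below cheats by decrypting internally, which a real (maliciously circuit-private) FHE scheme accomplishes without the secret key; the point of this toy is only the flow of ciphertexts between verifier and prover, not the cryptography.

```python
# Secret verifiability via (MOCK) homomorphic encryption.
import secrets

class MockFHE:
    """Stand-in for FHE. eval() cheats: real FHE evaluates f
    homomorphically, without ever seeing the plaintext."""
    def __init__(self):
        self.key = secrets.token_bytes(16)
    def enc(self, m):
        return bytes(x ^ y for x, y in zip(m, self.key))  # toy one-time pad
    def dec(self, ct):
        return self.enc(ct)        # XOR is its own inverse
    def eval(self, f, ct):
        # MOCK ONLY: decrypt, apply f, re-encrypt.
        return self.enc(f(self.dec(ct)))

# Verifier side: send the query plus an encryption c0 of the
# secret verification key vk.
fhe = MockFHE()
vk = secrets.token_bytes(16)
c0 = fhe.enc(vk)

# Prover side: evaluate the verifier's predicate V(vk, Q, x, A)
# under the encryption (toy predicate: always accept).
def predicate(vk_inside):
    return b"\x01" + bytes(15)     # encrypted bit "1" = accept

ct = fhe.eval(predicate, c0)       # sent back alongside the WI proof

# Verifier side: accept iff ct decrypts to 1 (and the WI proof checks).
print(fhe.dec(ct)[0] == 1)         # True
```

In the real construction, circuit privacy of the evaluation is what prevents ct from leaking the prover's response A or the witness w.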
So this is one way to resolve the issue, but we can observe that we actually need less than fully homomorphic encryption for this task. In fact, the compactness property of fully homomorphic encryption may not be necessary in this setting. Compactness means, in our case, that the length of the ciphertext ct is independent of the size of the circuit on which we performed the homomorphic evaluation. But in our case the homomorphic evaluation is of the original verifier, so the complexity of the original verifier is bounded, and other parameters of the scheme may anyway be at least as large as this complexity. So even without compactness for our homomorphic encryption scheme, we get the same worst-case guarantee on the length of the prover's response. We can therefore rely on something similar to fully homomorphic encryption without compactness, but with malicious circuit privacy, and this can be constructed from garbled circuits and maliciously secure oblivious transfer. This we can do under additional assumptions: in addition to Learning with Errors, we can also do it under DDH, Decisional Composite Residuosity, or Quadratic Residuosity. This finishes the part about the secret verifier. Now let's talk about the witness indistinguishable proof system that we need. First, notice that we want the witness indistinguishable proof to have adaptive soundness; remember that we talked about adaptive soundness. I should emphasize that the original delegation scheme that we start from does not need to be adaptively sound, and furthermore, the delegation scheme that we end up with, the one that has witness indistinguishability, is also not going to be adaptively sound. Nevertheless, we need to require that the witness indistinguishable proof system that we use inside our construction does have adaptive soundness.
And the reason is that the statement proven using this WI proof system is only determined after the prover has seen the first message of the verifier. So in fact the statement is chosen adaptively, after the parameters for the witness indistinguishable proof have been selected, and therefore we need adaptive soundness. That's one requirement. The other requirement concerns the variant of witness indistinguishability that is required, and this is the obstacle I mentioned earlier: plain witness indistinguishability is not going to be good enough. What we need is the notion of strong witness indistinguishability, which talks about indistinguishability between distributions of instances and witnesses. In strong witness indistinguishability, we consider two distributions over instance-witness pairs, and we say that if the distributions of the instances are computationally indistinguishable, then it should also hold that proofs generated with respect to these instances are themselves indistinguishable. So you don't just hide which witness you used for a specific instance; you also hide which instance you're using, when the instances come from two computationally indistinguishable distributions. And indeed, in our setting, if the prover starts from two different witnesses w1 and w2 for the original NP statement, this translates to two different, computationally indistinguishable distributions over instances. Therefore this notion of strong witness indistinguishability provides exactly the type of witness indistinguishability we need in order to argue that the final delegation scheme has the standard notion of witness indistinguishability with respect to witnesses for the original NP statement.
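The strong WI guarantee just described can be stated compactly. The notation below is mine, restating the talk's informal definition, and may differ cosmetically from the paper's formalization:

```latex
% Strong witness indistinguishability, informally restated.
% D_1, D_2 are efficiently samplable distributions over pairs (x, w)
% in the NP relation; P(x, w) denotes the prover's proof.
\[
  \{\, x : (x,w) \leftarrow D_1 \,\} \approx_c \{\, x : (x,w) \leftarrow D_2 \,\}
  \;\Longrightarrow\;
  \{\, (x,\pi) : (x,w) \leftarrow D_1,\ \pi \leftarrow P(x,w) \,\}
  \approx_c
  \{\, (x,\pi) : (x,w) \leftarrow D_2,\ \pi \leftarrow P(x,w) \,\}
\]
```

Plain WI is the special case where both distributions fix the same instance x and differ only in the witness.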
In order to actually instantiate this component, we use the works of Jain, Kalai, Khurana and Rothblum, and of Kalai, Khurana and Sahai, which construct strong witness indistinguishable proofs for NP under super-polynomially secure maliciously secure oblivious transfer, which exists under Learning with Errors, DDH, Decisional Composite Residuosity, or QR, so all the assumptions we could already rely on. This concludes our witness indistinguishability transformation, so let me briefly talk about our access control scheme. Let's see what access control is. We consider a space of attributes, which could be very large; we're just going to number the attributes from one to M. What we want is to allow an authority to assign credentials to a user, corresponding to a subset of the attributes; each user owns a subset of these attributes. The authority has a master secret key, and of course there's a master public key that everyone has access to. Using the master secret key, the authority can give credentials to a user which prove that the user indeed owns these attributes. Once the user has these credentials, any entity can efficiently challenge the user and check that the user has attributes satisfying some monotone relation F. The monotonicity here is because if you own a set of attributes, then of course you also own every smaller subset of the same attributes, so we only need to deal with monotone relations. This procedure can be done efficiently and succinctly, as we will see in a minute. So we want to allow any entity in the world to issue a query, where the query contains some monotone relation, to a user who has credentials that satisfy this relation.
The user is then able to show that it indeed owns credentials satisfying the relation, but without revealing any additional information. As I said, we want soundness and collusion resistance, which are the standard notions for this sort of attribute-based scheme. We want succinctness: the query and the proof should have communication and computational complexity much smaller than the size of the relation. The relation could have many, many clauses, for example: either you have attribute one, or you have one of attributes two to five, or you have attribute six, or you have attributes 18 and 19, and so on. This could be a very long monotone relation, but we want the query and the proof to be independent of the complexity of computing it. We want anonymity, in the sense that the verifier, the entity that issues the query and gets the response, cannot tell which attributes the user actually has; it only learns that the user has some set of attributes satisfying the monotone relation. In order to construct this, we use as a building block the notion of batch NP delegation, and let me explain why. Let's start with the notion of a batch statement. A batch statement is a collection of small NP statements, together with an aggregator function that computes a predicate on the validity of these NP statements. Let's look at a drawing. We have a bunch of small NP statements x1, x2, up to xm, and each of these inputs either belongs to a language L or does not. On top of these we have predicates, each of which you can think of as a bit: one if xi is in the language and zero if it is not.
And there's a function f that is applied to this collection of bits, and this constitutes the total batch statement: a bunch of small NP statements, and on top of those you compute some relation. The works that I mentioned allow us to build succinct delegation schemes for batch NP statements, where the verifier complexity and the communication complexity only scale with the length of a single small NP witness, and not with the number of statements. In order to give a witness for the entire batch statement, you would need to give a bunch of witnesses, one for each of the small NP instances; however, it's possible to get delegation schemes where the communication complexity only scales with the length of a single witness for these small statements. This is what we're going to use in our construction of the access control scheme. Our master secret key and master public key are just going to correspond to a standard signature scheme: the verification key serves as the master public key, and the signing key as the master secret key. In order to assign the credentials, you simply sign the attributes that belong to the user, together with a tag that corresponds to the identity of the specific user. Now, in order to succinctly prove that you own credentials satisfying the relation F, you're just going to use a batch delegation protocol. You use the batch delegation schemes we just described, and indeed there are batch delegation schemes where the aggregator function is a monotone formula. So the challenger can issue queries, and the user can provide succinct proofs of owning credentials that satisfy this batch NP relation.
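The credential-issuing and batch-aggregation structure can be sketched as follows. This is a hedged toy of my own: HMAC stands in for a real signature scheme (so this toy is only verifiable with the master secret key, unlike the real construction), the relation F is an invented example, and F is evaluated in the clear, whereas the real scheme proves it succinctly via batch delegation with a monotone-formula aggregator.

```python
# Toy access-control credentials: the authority "signs" (tag, attribute)
# pairs; the monotone relation F is applied to validity bits.
import hmac, hashlib, secrets

msk = secrets.token_bytes(32)          # authority's master secret key

def issue(user_tag, attribute):
    """Credential = MAC over (user identity tag, attribute index)."""
    msg = user_tag + attribute.to_bytes(4, 'big')
    return hmac.new(msk, msg, hashlib.sha256).digest()

def valid(user_tag, attribute, cred):
    return hmac.compare_digest(cred, issue(user_tag, attribute))

# User "alice" is issued attributes 1 and 6.
tag = b"alice"
creds = {i: issue(tag, i) for i in (1, 6)}

# Example monotone relation over validity bits b[i]:
#   F(b) = b1  or  (b2 and b3)  or  b6
def F(b):
    return (b.get(1, False)
            or (b.get(2, False) and b.get(3, False))
            or b.get(6, False))

bits = {i: valid(tag, i, c) for i, c in creds.items()}
print(F(bits))                         # True
```

In the real scheme, each bit b[i] becomes a small NP statement ("I hold a valid signature on (tag, i)"), and batch delegation lets the user prove F over all of them with communication scaling only with one signature, not with the size of F.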
The succinctness property means that the communication complexity is only going to scale with, essentially, the length of a signature, which is just a polynomial in the security parameter. Now, in order to get anonymity, we apply our witness indistinguishability transformation on top of what I described so far. This gives us witness indistinguishability for this delegation scheme, which means that the verifier cannot tell which witness was used to prove the validity of the statement; in other words, it cannot tell which attributes the user actually owns, only that the user owns some attributes satisfying the relation. This concludes the construction, and this also concludes my talk. Thank you very much.