Hi, my name is Satya. Today I'm going to discuss our paper on perfect correctness in lockable obfuscation. This is joint work with Rishab Goyal, Venkata Koppula, and Brent Waters.

As you all know, indistinguishability obfuscation (iO) has proven to be one of the most powerful primitives in all of cryptography. It has tons of applications, and you can do pretty much anything in cryptography given iO. But unfortunately, we don't know how to construct iO from standard assumptions. So perhaps we can look for slightly weaker primitives that have many applications and can still be constructed from standard assumptions. To that end, Goyal, Koppula, and Waters and, independently, Wichs and Zirdelis proposed this beautiful notion called lockable obfuscation.

Here there are two algorithms, an obfuscator and an evaluator. The obfuscator obfuscates a program P, and the evaluator evaluates the obfuscated program on any input. But unlike regular obfuscation, here the obfuscator also takes a message and a random string called the lock value alpha as input. The correctness criterion says that when you evaluate the obfuscated program on any input x, if P(x) equals alpha, then the obfuscated program reveals the message, and if P(x) is not alpha, then the obfuscated program is supposed to output ⊥. For the security part, given a program and a message, if you obfuscate the program using a random lock value, then the obfuscated program should be indistinguishable from a simulated program. That essentially means the obfuscated program should hide both the message and the program itself.

The two papers that proposed this notion also gave a construction based on the learning with errors assumption, so we know how to construct lockable obfuscation from standard assumptions. At the same time, they showed plenty of applications of lockable obfuscation, so that's pretty nice as well. But as it turns out, the construction in those two papers doesn't satisfy the correctness criterion perfectly. If P(x) equals alpha, then the obfuscated program always reveals the message; on that side, the mechanism is perfectly correct. But if P(x) is not alpha, then with some negligible probability the obfuscated program doesn't output ⊥. Here the probability is taken over the random coins used during obfuscation.

Well, this scenario occurs in many other crypto papers. We know many schemes in cryptography that are only statistically correct, and we are satisfied with them. So you can ask: what's the big deal if lockable obfuscation is only statistically correct? Let's see some applications where perfect correctness is actually required. Suppose the government wants intrusion detection software: given the profile of a person, the software outputs one if the person is an intruder. The government outsources the job to some private company. In an ideal scenario, the company generates a program P honestly, obfuscates it using lockable obfuscation, and gives the obfuscated program P′ to the government. But what if the company is malicious? Then it can choose some bad randomness during obfuscation. To be even more specific, the company can first choose some innocent person x, so the original program doesn't raise a flag on this innocent person.
But the company chooses bad randomness r such that the obfuscation is incorrect: the obfuscated program doesn't output ⊥, and instead raises a flag on this innocent person. That's a serious issue, because the government would then frame an innocent person. In the future, even if the government becomes suspicious of the company and wants to audit the process, asking the company to show that it did its job correctly, the company can just send the original program P and the randomness r it used during obfuscation. The government would then accept the audit, because there is no way for the government to check whether the randomness r is bad or not. This problem is solved if you use a perfectly correct lockable obfuscation mechanism.

More generally, let me give some cryptographic applications where perfect correctness of lockable obfuscation may be necessary. Generally speaking, if the lockable obfuscation is used by a trusted party, say the obfuscated program is generated by the setup algorithm and included in the public parameters, then even a statistically correct lockable obfuscation might be okay. But if the lockable obfuscation is used by an untrusted party, say the obfuscated program is part of a ciphertext and anyone can generate a ciphertext, then perfect correctness might actually be required. Likewise, if the obfuscated program is generated by a prover in a zero-knowledge proof, since the prover could be malicious, perfect correctness might be required even in that case.

To give more concrete examples: Bitansky and Shmueli constructed constant-round post-quantum secure zero-knowledge arguments. In their case, if the lockable obfuscation is not perfectly correct, then the commitment scheme they use is not perfectly binding, and that actually violates the security properties. Their proof even assumes perfect correctness of lockable obfuscation for some hybrid arguments. Our work helps make their construction go through. Similarly, Ananth and La Placa recently constructed quantum extraction protocols. They use lockable obfuscation to construct a circularly insecure quantum FHE scheme. Here an obfuscated program is given as part of the public parameters, and it is designed so that if an adversary gets hold of a circular encryption ciphertext, that is, an encryption of secret key sk2 under public key pk1 and an encryption of secret key sk1 under public key pk2, then the lockable obfuscation lets it break the circular security property. Even here, one-sided perfect correctness is required. Similarly, Bitansky and others constructed zero-knowledge arguments where the obfuscated program is part of a trapdoor. Since a trapdoor is usually generated by a trusted party, one-sided perfect correctness is actually sufficient there, and the previous constructions already satisfy this one-sided property. But this still reinforces the point that perfect correctness might be required for some applications.

Now, coming to the results of the paper: we constructed perfectly correct lockable obfuscation by modifying the previous lattice-based construction. In order to do this, we also had to construct injective PRGs from lattices, and we give two constructions of injective PRGs as a side result: the first from the learning with rounding assumption, and the second from the learning parity with noise assumption.
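To pin down the target before diving in, here is the perfect correctness requirement restated in symbols (a restatement of the definition above; the names Obf and Eval are my shorthand, not from the talk):

```latex
P(x) = \alpha \;\Longrightarrow\; \Pr\big[\mathrm{Eval}(\widetilde{P}, x) = m\big] = 1,
\qquad
P(x) \neq \alpha \;\Longrightarrow\; \Pr\big[\mathrm{Eval}(\widetilde{P}, x) = \bot\big] = 1,
\quad \text{where } \widetilde{P} \leftarrow \mathrm{Obf}(1^\lambda, P, m, \alpha).
```

The probabilities are over the random coins of the obfuscator. The prior construction achieves the first implication with probability one, but the second only with probability 1 − negl(λ).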
Before discussing our construction, let's see how the previous construction works. They do it in three steps. They first construct lockable obfuscation just for NC1 circuits and one-bit messages. Then they bootstrap this construction to poly-size circuits, still with one-bit messages. And then they extend it further to poly-size circuits and multi-bit messages. As it turns out, steps two and three preserve the perfect correctness property: if step one gives a perfectly correct lockable obfuscation, then the final lockable obfuscation is going to be perfectly correct. In the previous work, the lockable obfuscation output by step one is actually not perfectly correct. So we'll look into the previous construction, see why it's not perfectly correct, and see what changes we can make to make it perfectly correct. From now on, we'll concentrate only on NC1 circuits and one-bit messages.

The previous papers actually make a small change for the security proof to go through. Instead of obfuscating the program P, they obfuscate the program P′, which is obtained by first computing the function P and then computing a length-expanding pseudorandom generator on P's output. We of course need to make sure that the PRG is also in NC1, so that P′ is in NC1. The resulting scheme then satisfies a slightly different correctness criterion than the one stated earlier. Suppose beta equals PRG(alpha); then the correctness criterion this obfuscation mechanism satisfies is: if P′(x) equals beta, then the obfuscated program outputs the message, and if P′(x) is not equal to beta, then the obfuscated program outputs ⊥.

Generally in crypto, whenever we are dealing with NC1 circuits, a standard trick is to first convert them into branching programs via Barrington's theorem. Here is an example of a branching program. It's just a graph of nodes arranged in layers. In each layer there are three nodes, so the width is three, and there are five layers, so the length is five. Each node has two outgoing edges, a red edge and a green edge; the red edge corresponds to bit zero and the green edge corresponds to bit one. In the first layer there is a special node, marked in gray, called the start node. In the final layer there are two special nodes: the green one corresponds to the accepting state and the red one to the rejecting state. To evaluate the branching program, you start at the start node and process one bit of input at a time. If the bit is zero, follow the red edge; if the bit is one, follow the green edge. You keep doing this until you end up in either the green state or the red state. If you end up in the green state, output one; if you end up in the red state, output zero. The state transitions corresponding to bit zero are represented by the red matrices, and the state transitions corresponding to bit one, the green edges, are represented by the green matrices.
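To make branching-program evaluation concrete, here is a toy Python evaluator (a sketch only: the width-3 transitions are random permutation matrices, and I assume layer j reads input bit j and that the accepting node is node 0; real branching programs from Barrington's theorem fix these details):

```python
import numpy as np

rng = np.random.default_rng(0)
WIDTH, LENGTH = 3, 5   # width 3, five layers, as in the slide's example

def perm_matrix():
    # a random permutation matrix: one toy state transition
    return np.eye(WIDTH, dtype=int)[rng.permutation(WIDTH)]

# pi[j][b]: transition matrix of layer j on input bit b
# (pi[j][0] plays the role of a red matrix, pi[j][1] of a green one)
pi = [[perm_matrix(), perm_matrix()] for _ in range(LENGTH)]

def eval_bp(x):
    state = np.eye(WIDTH, dtype=int)[0]   # indicator vector of the start node
    for j, bit in enumerate(x):           # follow red (0) or green (1) edges
        state = state @ pi[j][bit]
    return int(state[0] == 1)             # 1 iff we land on the accepting node

print(eval_bp([0, 1, 1, 0, 1]))           # outputs 0 or 1
```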
Remember, our program P′ outputs multiple bits. Since a branching program can only output a single bit, we first need to split P′ into L components, where the ith program computes the ith output bit of P′. We now represent each of these individual programs as a branching program, and our goal is to obfuscate each of these L branching programs. To obfuscate each program, we first find an alternate representation for each state of the branching program, that is, an alternate representation for each node, and then we also find an alternate representation for the state transition functions pi. Let's see how to do that.

To obfuscate the ith branching program, we first associate each node with some matrix M. For all layers other than the last, these matrices are just chosen uniformly at random (along with some lattice samples, but let's ignore that here). In the last layer, the matrices encode the lock value and the message. Here's how we encode the last layer. Consider the final-layer matrices of all the branching programs. If the ith bit of beta is one, circle the green matrix in the ith branching program; if the ith bit of beta is zero, circle the red matrix in the ith branching program. We sample all the non-circled matrices uniformly at random. We sample the circled matrices uniformly at random too, but subject to one condition: the sum of the circled matrices encodes the message. The sum of the circled matrices is the zero matrix if the message is zero, and it is √q times the identity matrix if the message is one. Here q is our modulus. So that is how the message is encoded in the final-layer matrices.

Next we find alternate representations for the state transition functions. Here C_{1,0} is an alternate representation of pi_{1,0}, and C_{1,1} is an alternate representation of pi_{1,1}. These representations are chosen in such a way that the LWE problem can be embedded into these matrices. The final obfuscated program contains the M matrix corresponding to the start node, M_{1,1}, and it also contains all the state transition matrices C_{1,0}, C_{1,1}, and so on. We include this for all L branching programs, and that's the final obfuscated program.

Now let's see how to evaluate this obfuscated program. You initially store the matrix M_{1,1} corresponding to the start node. If the first bit of the input is zero, multiply with C_{1,0}; if the first bit is one, multiply with C_{1,1}. You keep multiplying these C matrices depending on the bits of the input, and by the way we designed the C matrices, you end up with an expression of the kind shown below. There, M_out corresponds to the output matrix: if the output of the branching program is one, M_out is the green matrix, and if the output is zero, M_out is the red matrix. And S and E are matrices with small entries, sampled for example from a Gaussian distribution. So if M_out has small entries, the overall product also has only small entries, because S and E are small; and if M_out has large entries, the overall product has large entries too.
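Restating that evaluation identity in symbols (a restatement of the slide's expression; ℓ denotes the number of layers, i.e., input bits read):

```latex
M_{1,1} \cdot C_{1,x_1} \cdot C_{2,x_2} \cdots C_{\ell,x_\ell} \;=\; S \cdot M_{\mathrm{out}} + E,
```

where M_out is the final-layer matrix the branching program lands on for input x, and S and E have small entries.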
Until now we have evaluated just one branching program, but our obfuscated program contains many branching programs, so let's see how to evaluate the overall obfuscated program. To obtain the overall output of the entire obfuscated program, we just sum up these per-program output matrices. When you factor out the S and error terms, you get S times the summation of the output M matrices, plus error. Remember, we encoded the message in the output layer via a condition on the circled matrices. That means that if the program's output is beta, the summation of the output matrices is either the zero matrix (when the message is zero) or √q times the identity (when the message is one). If the summation of the M_out matrices is zero, the overall sum has only small entries; that's how you can tell the message is zero. If the summation is √q times the identity, the overall sum contains only medium-size entries; that's how you can tell the message is one. And suppose the program's output is not equal to beta; then at least one of the matrices in the summation is a non-circled matrix, a matrix that doesn't correspond to beta. That matrix is sampled entirely uniformly at random, without any conditions, so the overall sum is some uniformly random matrix, and it contains large entries with high probability. That's how you can distinguish whether the message is zero, whether the message is one, or whether P′(x) is not equal to beta. So to evaluate the obfuscated program, you just sum up the output matrices of the branching programs and compare the size of the entries.
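Putting the three cases together in symbols (again just a restatement of the decision rule described above):

```latex
\sum_{i=1}^{L} \big( S \cdot M_{\mathrm{out}}^{(i)} + E_i \big)
  \;=\; S \cdot \Big( \sum_{i=1}^{L} M_{\mathrm{out}}^{(i)} \Big) + E
  \;=\;
  \begin{cases}
    E & \text{small entries} \Rightarrow \text{message } 0,\\
    \sqrt{q} \cdot S + E & \text{medium entries} \Rightarrow \text{message } 1,\\
    \text{near-uniform} & \text{large entries w.h.p.} \Rightarrow \bot.
  \end{cases}
```

Note that the third case only holds with overwhelming probability, which is exactly where the correctness error will come from.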
And that's all about the prior construction that we already knew. Now let's discuss why the above construction is not perfectly correct. There are two issues.

The first issue is that there is a mismatch between the correctness criterion that is actually required and the correctness criterion satisfied by the above construction. We need the criterion on the original program: if P(x) equals alpha, we want the obfuscated program to output the message, and if P(x) is not alpha, we want it to output ⊥. But the above construction obfuscates the program P′, which is the PRG composed with P, and with beta = PRG(alpha) it satisfies this criterion instead: if P′(x) equals beta, the obfuscated program outputs the message, and if P′(x) is not equal to beta, it outputs ⊥. This mismatch actually creates a problem: there are cases where the second criterion is satisfied but the first is not. This can happen when the PRG we are using is not injective. Suppose P(x) equals some gamma with PRG(gamma) = beta; then the above construction reveals the message, but ideally we are required to output ⊥, because gamma is not equal to alpha.

The second issue concerns the case where P′(x) is not equal to beta. In that case, when you evaluate the branching programs, the sum of the output matrices is uniformly random, and we check that P′(x) is not equal to beta by checking whether this sum has large entries. A uniformly random matrix has large entries with overwhelming probability, but there is some negligible probability that the sum has small entries. In that case our evaluation algorithm outputs zero or one instead of outputting ⊥. So these are the two reasons why the above obfuscation mechanism is not perfectly correct. Now let's solve both issues, one by one.

The first issue, the mismatch between the required correctness property and what the prior construction gives, can be solved if we use injective PRGs in the construction: if the PRG is injective, then beta has only a single preimage under the PRG function, and the problem goes away. But the prior constructions of PRGs from lattices were not injective. We solve this issue by giving two constructions of injective PRGs, the first from the learning with rounding assumption and the second from the learning parity with noise assumption. Let me describe the construction from learning with rounding. Here the public parameter is a matrix A sampled uniformly at random over Z_q, and to compute the PRG on input s we just multiply s with the matrix A and scale down the modulus from q to p. The learning with rounding assumption states that if s is sampled uniformly at random, then the PRG output is pseudorandom. But unfortunately this construction is not injective: due to the scaling-down operation, the PRG function could map two different inputs to the same output. To make the PRG injective, instead of sampling A from the uniform distribution, we sample A from a kind of error-correcting code, which means that the values s1·A and s2·A, for any s1 and s2, are far from each other, so that even when you scale down the modulus they don't collide. Here's how we sample the matrix A. We first sample a uniformly random matrix B over Z_q, and a uniformly random matrix R with entries plus one and minus one. Then we set D to be a matrix with large entries: D = ρ·I, where ρ is a large value and I is the identity matrix. Finally we set A to be the concatenation [B | BR + D]. By the leftover hash lemma, you can actually prove that the distribution of A is statistically close to uniform.
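Here is a toy numpy sketch of this injective PRG (a sketch only: the dimensions n, m and the choices of q, p, ρ are made up for illustration and far too small to be secure):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 16                          # toy dimensions
q, p = 2**16, 2**8                    # round from mod q down to mod p
rho = q // 4                          # large entries for D (toy choice)

B = rng.integers(0, q, size=(n, m))   # uniform over Z_q
R = rng.choice([-1, 1], size=(m, n))  # random +1/-1 matrix
D = rho * np.eye(n, dtype=np.int64)   # large diagonal matrix
A = np.hstack([B, (B @ R + D) % q])   # A = [B | BR + D]

def prg(s):
    # LWR-style map: compute s*A over Z_q, then scale the result down to Z_p
    return ((s @ A) % q) * p // q

s = rng.integers(0, q, size=n)        # the seed
print(prg(s))                         # length-expanding output over Z_p
```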
Now let's see whether this matrix A actually forms an error-correcting code, so that we can prove the construction injective. What we want to prove is that for any two inputs s1 and s2, the PRG outputs don't collide; in other words, s1·A and s2·A are far from each other, so that even after scaling down the modulus they don't collide. We divide the proof into two cases. In the first case, we assume the first components s1·B and s2·B are far from each other; that immediately means s1·A and s2·A are far apart, so that case is easy. In the second case, we assume s1·B and s2·B are close to each other; then s1·BR and s2·BR are also close, because R is a matrix with plus-one/minus-one entries. But we know D is a matrix with large entries, so s1·D and s2·D are far from each other. Combining these two statements, we can show that s1·A and s2·A are far from each other, so even when you scale down the modulus they don't collide, and the PRG function is injective.

Now let's resolve the second issue with the prior construction. The issue is that when P′(x) is not equal to beta, the summation of the output matrices is close to uniform: it has large entries with overwhelming probability, but sometimes its entries could be small, and in that case the evaluator cannot correctly tell that P′(x) is not equal to beta. So we want to design the final-layer M matrices in such a way that the overall summation always has large entries whenever P′(x) is not equal to beta. Here's how we do it. Recall the final-layer matrices of each branching program, and circle the matrices corresponding to the bits of beta: if the ith bit of beta is one, circle the green matrix in the ith branching program; if it is zero, circle the red matrix. When the program P′ outputs beta, all the output matrices are the circled ones; when P′ outputs anything else, at least one of the M_out matrices is a non-circled matrix. We use this observation to our advantage. Simply speaking, we tweak the distribution of the non-circled matrices a little: we add a matrix D with large entries to each non-circled matrix, and that's the new distribution from which we sample the non-circled matrices. Now when we sum up the output matrices obtained by evaluating the branching programs, the sum is S times the summation of the final-layer output matrices under this new distribution, plus error. Separating that into two parts, it equals S times (the summation of the M_out matrices from the original distribution, plus c·D), plus error. Here D is our matrix with large entries, and c is the number of positions where the program's output differs from beta. So if the program's output differs from beta, the overall summation has large entries because of this c·D term. Since the summation has large entries whenever P′(x) is not equal to beta, the evaluation algorithm of the obfuscated program always outputs the right answer.

So let me finally conclude the talk. In this talk we described the importance of perfect correctness in the case of lockable obfuscation. We identified two sources of correctness errors and solved both issues: we constructed injective PRGs from the learning with rounding assumption and the learning parity with noise assumption, and we gave an alternate encoding for the final-layer matrices of the branching programs. That solves both correctness issues, and our scheme is perfectly correct. Thank you for attending my talk, and here is a pointer to the full version of the paper.