Thanks for the introduction. Hello, everyone. My name is Rishab Goyal. Today I'll be presenting work on separating semantic and circular security for symmetric-key bit encryption schemes from the learning with errors assumption. This is joint work with Venkata Koppula and Brent Waters. First, let's talk about public-key encryption. Recall that for public-key encryption, semantic security says that if the adversary doesn't know the secret key, it basically cannot distinguish between encryptions of two different messages. But it doesn't say what happens if the adversary has some partial information about the secret key. Consider the case in which the adversary sees an encryption of some portion of the secret key. A very popular, common example of that is n-circular security, in which the adversary is given an encryption of the secret key itself. It's not given the secret key, but an encryption of the secret key, and it has to decide whether it has an encryption of the secret key or some garbage information. It is formalized as follows. There are n public keys and n secret keys. For n-circular security, the adversary is given the n public keys along with either an encryption key cycle — that is, an encryption of the second secret key under the first public key, of the third secret key under the second public key, and so on — or encryptions of the all-zero string. The adversary should not be able to distinguish between these two distributions. Now, you might ask why n-circular security is important, or whether it has any good applications. The most well-known application of n-circular security is Gentry's bootstrapping. Gentry showed that if a leveled homomorphic encryption scheme is circular secure, then it can be bootstrapped into a fully homomorphic encryption scheme without any level bound.
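To make the two distributions in the n-circular game concrete, here is a toy Python sketch. The `enc` cipher below is a hypothetical placeholder (a padded XOR, not a secure scheme); it only serves to show how the key-cycle distribution versus the all-zeros distribution are assembled.

```python
import secrets

N_BITS = 16  # toy key length

def keygen():
    return secrets.randbits(N_BITS)

def enc(key, msg):
    # Hypothetical placeholder cipher (NOT secure): fresh pad XOR key XOR msg.
    pad = secrets.randbits(N_BITS)
    return (pad, pad ^ key ^ msg)

def key_cycle(keys):
    # Encryption key cycle: secret key (i+1 mod n) encrypted under key i.
    n = len(keys)
    return [enc(keys[i], keys[(i + 1) % n]) for i in range(n)]

def zeros_cycle(keys):
    # The alternative distribution: encryptions of the all-zero string.
    return [enc(k, 0) for k in keys]

keys = [keygen() for _ in range(3)]
cycle = key_cycle(keys)        # the adversary must distinguish this...
zeros = zeros_cycle(keys)      # ...from this
```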
So a very natural question that you might ask is: does IND-CPA security imply n-circular security? Or is n-circular security something strictly stronger, so that we need new techniques to even achieve it? What do we know about this? We already have a couple of separations for n-circular security. The most common one is for n equal to 1, when you have a self-cycle. In that case we have a folklore counterexample, which goes as follows. During encryption, you check whether the message you're trying to encrypt is the secret key corresponding to the public key you're encrypting under. If that's the case, then you output some special string; otherwise you just compute the ciphertext honestly. This way, you can clearly see that if you get an encryption of the secret key itself, then it's the special string, and you should easily be able to distinguish it from any other ciphertext. Now you might ask: okay, this is a separation result for n equal to 1, but what if we increase the length of the cycle? What if n is 2 or more? We also have a lot of counterexamples, or separations, for those. So this means that maybe we can't get Gentry's bootstrapping for free; n-circular security, or circular security in general, is strictly stronger than IND-CPA security. But you might still ask: maybe this notion of circular security is not the exact thing that we really want — is there something else with which we can bypass these results? We know that all these separation results use one important fact: the message contains the entire secret key when you encrypt it. So a natural idea is: what if we consider only bit encryption? We encrypt all the bits separately, just bit by bit.
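The folklore counterexample for n = 1 can be sketched directly. Here `base_enc` is a hypothetical stand-in for any honest CPA-secure cipher; the wrapper `bad_enc` is the counterexample, which visibly fails 1-circular security because encrypting the key itself produces a recognizable marker.

```python
import secrets

SPECIAL = "SELF-CYCLE"   # the special string output on a self-encryption

def base_enc(sk, msg):
    # Hypothetical stand-in for an honest CPA-secure cipher.
    pad = secrets.randbits(128)
    return (pad, pad ^ sk ^ msg)

def bad_enc(sk, msg):
    if msg == sk:                # is the message the matching secret key?
        return SPECIAL           # then output the special string
    return base_enc(sk, msg)     # otherwise encrypt honestly

sk = secrets.randbits(128)
assert bad_enc(sk, sk) == SPECIAL    # an encryption of sk is recognizable
```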
It seems that all the separations, including the folklore result, don't carry over to the case of bit encryption. So we don't know whether circular security and IND-CPA semantic security are separated in the case of bit encryption, and that is the question we're trying to answer in this talk. What do we know about prior results in this area? We already have some counterexamples, or separations, from strong assumptions like idealized multilinear maps and indistinguishability obfuscation. But in this work, we would like to answer this question from a more standard assumption, so that we can place more trust in it. Our result is as follows: we construct an IND-CPA secure symmetric-key bit encryption scheme that is not 1-circular secure, and we prove this theorem under the learning with errors assumption. In the rest of the talk, I'll try to construct an IND-CPA secure symmetric-key encryption scheme that is not 1-circular secure. So let's dive into the assumptions that we make. The assumption is learning with errors, and learning with errors simply says that if you are given a random matrix A and some noisy linear combinations of the rows of A, then you should not be able to distinguish that from the random matrix A together with some uniform matrix U. We will be using the short-secrets version of the learning with errors assumption, which says that the secret matrix S — which basically tells you how to linearly combine the rows of A — should also be a low-norm random matrix. Apart from the learning with errors assumption, another important tool that we'll be using in our work is lattice trapdoors. Lattice trapdoors consist of basically two algorithms, TrapGen and SamplePre. The TrapGen algorithm takes as input the dimensions of the matrix you're trying to sample, as well as a modulus q. It outputs a matrix A along with some trapdoor information.
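A minimal sketch of generating LWE samples, with toy (far-too-small, insecure) parameters of my own choosing. In the short-secrets variant used in the talk, the secret is drawn from the same narrow distribution as the noise.

```python
import random

random.seed(0)
n, m, q = 8, 16, 3329    # toy dimensions and modulus, purely illustrative

def small():
    # Narrow distribution used for the noise -- and, in the short-secrets
    # variant of LWE, for the secret as well.
    return random.randint(-2, 2)

A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]  # uniform n x m
s = [small() for _ in range(n)]                                  # short secret
e = [small() for _ in range(m)]                                  # short noise

# b = s*A + e mod q: noisy linear combinations of the rows of A.  The LWE
# assumption says (A, b) is indistinguishable from (A, uniform).
b = [(sum(s[i] * A[i][j] for i in range(n)) + e[j]) % q for j in range(m)]
```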
Now, using this trapdoor information, you can run the SamplePre algorithm on any matrix U, and you will get a corresponding preimage S. The property that the matrix S satisfies is that if you multiply matrix A with matrix S, then you get the matrix U that you ran SamplePre with. We use the shorthand notation A-inverse of U. It is not exactly the matrix inversion operation; it's just a preimage of U with respect to the matrix A, which you can compute using the lattice trapdoor information. The important property is that the matrix S is a short matrix. This is going to be very useful in building the separation result. So now let's move on to our main result. Instead of directly constructing a symmetric-key encryption scheme, what we'll be doing is building something called a cycle tester. A cycle tester is basically an encryption scheme, except it doesn't have a decryption algorithm. Instead of a decryption algorithm, a cycle tester has a test algorithm. The test algorithm takes as input a bunch of ciphertexts and outputs either zero or one; it basically tells you whether the ciphertexts it took as input encrypt a secret key cycle, or whether they encrypt all zeros or some other bad encryption. The correctness requirement is, as I said, that the test algorithm should be able to distinguish an encryption key cycle: if you get encryptions of the i-th bit of the secret key, for all bits of the secret key, versus encryptions of the all-zero string, you should be able to distinguish between these two. And the security requirement is basically identical to IND-CPA security: you should not be able to distinguish between an encryption of, say, zero and one.
Now, in the rest of the talk, we'll be trying to build a cycle tester from the learning with errors assumption using lattice trapdoors. I'll first give you a brief preview of the scheme that I will describe, and then we'll move on to our actual construction. First, the secret key in our cycle tester will have the following form: it will contain a lambda-bit string s as well as a bunch of matrices and trapdoors. Next, the encryption algorithm. Recall that an encryption algorithm ordinarily only takes as input a secret key and the bit you're trying to encrypt. But in our scheme, we assume that the encryption algorithm also takes a position from 1 through lambda as input. You can imagine that this position will help us in constructing the test algorithm. Now, you might object that in a cycle tester there is no position given as input to the encryption algorithm. The way to resolve this is that if you want to encrypt some bit b, you can redundantly encrypt it for all positions, and that will be your big final ciphertext. But for simplicity, we will assume that the encryption algorithm takes the position as input. Next, the test algorithm. Since the secret key is k bits long, the test algorithm will take k ciphertexts, and what it does is simply work on the first lambda ciphertexts: it checks whether the first lambda ciphertexts encrypt the string s or not, and basically ignores the remaining ciphertexts. An important property the test will assume is that the i-th ciphertext ct_i encrypts s_i, the i-th bit of the string s, for position i. This is going to be very important when we prove correctness of our testing algorithm.
Just a minor heads-up before moving on to our actual construction: for the scheme I'm going to describe next, we will have problems setting the LWE parameters. But later, I will briefly describe how we get around that particular problem. Before finally moving on to the construction, I'd like to give you a very brief, high-level idea: there will be an underlying strand structure associated with our cycle tester, and that will be very useful for the rest of the talk. The strand structure looks something like this. Suppose lambda is equal to three, so the string s is three bits long. Then we have three parallel strands, and the i-th strand bifurcates into two sub-strands at the i-th level: the first strand bifurcates into two sub-strands at the first level, the second strand bifurcates into two sub-strands at the second level, and so on. You can generalize this structure to any lambda. This is going to be a very important structure in our cycle tester, and it will appear regularly. With that, let's describe the cycle tester algorithms. First, the setup algorithm. Recall that the secret key consists of a lambda-bit string s and a bunch of matrices and trapdoors. The string s is simply sampled uniformly at random; it's a random lambda-bit string. Then, for each node in the strand structure, we sample a matrix M. How we sample these matrices is as follows: all but the top-level matrices are sampled using the TrapGen algorithm, and the top-level matrices are sampled with a special structure — if you pick, in each strand i, the matrix corresponding to the secret bit s_i, and you sum those matrices together, then they sum to zero.
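As a toy sketch of this sum-to-zero constraint (dimensions and modulus are illustrative, and trapdoors are omitted entirely): sample every top-level matrix uniformly, then overwrite the last selected one so that the matrices picked out by the secret bits s_i cancel mod q.

```python
import random

random.seed(1)
lam, dim, q = 3, 4, 97                      # toy parameters
s = [random.randrange(2) for _ in range(lam)]   # secret bit string

def rand_mat():
    return [[random.randrange(q) for _ in range(dim)] for _ in range(dim)]

# M[i][b] is the top-level matrix of strand i for bit b.
M = [[rand_mat(), rand_mat()] for _ in range(lam)]

# Overwrite the last selected matrix so the selected ones sum to zero mod q.
for r in range(dim):
    for c in range(dim):
        partial = sum(M[i][s[i]][r][c] for i in range(lam - 1))
        M[lam - 1][s[lam - 1]][r][c] = (-partial) % q

total = [[sum(M[i][s[i]][r][c] for i in range(lam)) % q for c in range(dim)]
         for r in range(dim)]
assert all(v == 0 for row in total for v in row)
```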
That's the special property we require from the top-level matrices. Throughout the talk, I will switch back and forth and call these matrices level one, level two, up to level lambda, and I will call the base-level matrices the base level, level zero, or starting matrices. So that's what the setup algorithm does; it chooses the secret key like this. Now let's see how the encryption algorithm works. Recall that the encryption algorithm takes as input a bit b that we're trying to encrypt, as well as a position i. Here is our strand structure, and let's consider what happens if you're trying to encrypt some bit b for the first position, i equal to 1. Then we focus on the base level and level one, that is, level zero and level one, where the matrices are arranged like this. The ciphertext is going to consist of lambda short matrices, C_1 through C_lambda. The high-level idea is that we want these C_i matrices, when multiplied with the base-level matrices, to take us to the appropriate level-one matrix. So suppose we were trying to encrypt bit one; then what we hope is that the i-th matrix takes the i-th base-level matrix up to a level-one matrix. We do this as follows: to compute the first matrix C_1, you compute the preimage of the level-one matrix corresponding to the bit you're trying to encrypt. If that bit is one, you choose the corresponding matrix in the first strand and compute its preimage with respect to the base-level matrix. And there is another important part: we also choose one single short matrix S, and the preimage is computed with respect to the target matrix multiplied by this short matrix S.
That is, each component contains this short matrix S, so it's basically an LWE sample with respect to all these matrices. Okay, so this is how encryption works if you're encrypting for position one. Let's see, to hopefully internalize these ideas, what happens if we're encrypting for position two. For position two, we instead focus on the level-one and level-two matrices. We again want to output, as part of the ciphertext, lambda short matrices C_1 through C_lambda. Suppose we want to encrypt bit b; then we want these C_i matrices to encode the following relation: they should take the matrices on level one to the matrices on level two via these arrows, the blue arrows indicated here. You can compute lambda minus one of these short matrices, C_2 through C_lambda, exactly as before, as when we were encrypting for position one. But there is an important difference when computing the first matrix C_1: we want a common matrix C_1 that takes both of the level-one matrices in the first strand to their corresponding level-two matrices. It should be a single common matrix. Formalizing it a little, the problem we face here is: we want to sample two matrices M_0 and M_1, with trapdoors, such that for any pair of target matrices U_0 and U_1, we can compute a common short matrix S such that M_beta times S equals U_beta for both values of beta. You might think lattice trapdoors should help us here, right? But the problem is that if we simply sample M_0 and M_1 with their trapdoor information independently, it doesn't work — we don't know that they will end up with the same common matrix S. Fortunately, a simple idea works here.
What we do is, instead of sampling the matrices M_0 and M_1 using TrapGen independently, we sample a slightly larger matrix of double dimension, and we parse this big matrix M as M_0 and M_1 stacked on top of each other. Now, if you want to compute preimages of U_0 and U_1 with respect to M, you basically run the SamplePre algorithm using the shared trapdoor information, and that gives you a common matrix S. This is how we solve the problem of computing the first short matrix in this particular ciphertext, and the idea generalizes to all levels, for all the different positions. So if you're trying to encrypt any bit b at any of the remaining positions, the ideas translate similarly. At a high level, what encryption is doing is as follows: we want the i-th short matrix of the ciphertext to perform a choice — to choose which of the sub-strands to go to, depending on the bit you're encrypting. So if you're encrypting some bit b at position i, the i-th short matrix chooses which sub-strand you go to. This is going to be very important for constructing the test algorithm. Now, the test algorithm. Recall we are assuming that it takes as input lambda ciphertexts which hopefully encrypt the string s position by position, that is, ct_i encrypts s_i for position i. We just pull all the lambda short matrices out of them. We also assume that the test algorithm already knows the base matrices; you can include these base, or level-zero, matrices as part of the ciphertext or as public parameters. The test algorithm proceeds as follows. Instead of drawing the strands vertically, let's now lay them out horizontally, so each strand is a horizontal line.
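Here is a sketch of the stacking trick with the shortness requirement dropped: over Z_q you can already see that a single solution column can satisfy both constraints at once. The real construction needs SamplePre with the shared trapdoor to additionally make S short; the plain Gauss-Jordan elimination below (my own toy formulation, prime q, tiny dimensions) only illustrates the shared-solution part.

```python
import random

random.seed(2)
n, m, q = 3, 8, 101      # m >= 2n so the stacked 2n x m system is solvable

def solve_mod(M, u, q):
    """Solve M x = u (mod prime q) by Gauss-Jordan elimination."""
    rows, cols = len(M), len(M[0])
    A = [M[r][:] + [u[r] % q] for r in range(rows)]      # augmented matrix
    piv_cols, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i][c] % q), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        inv = pow(A[r][c], -1, q)                        # Python 3.8+
        A[r] = [(v * inv) % q for v in A[r]]
        for i in range(rows):
            if i != r and A[i][c]:
                f = A[i][c]
                A[i] = [(A[i][j] - f * A[r][j]) % q for j in range(cols + 1)]
        piv_cols.append(c)
        r += 1
    x = [0] * cols                    # free variables set to zero
    for rr, c in enumerate(piv_cols):
        x[c] = A[rr][cols]
    return x

M0 = [[random.randrange(q) for _ in range(m)] for _ in range(n)]
M1 = [[random.randrange(q) for _ in range(m)] for _ in range(n)]
U0 = [random.randrange(q) for _ in range(n)]    # one column of target U0
U1 = [random.randrange(q) for _ in range(n)]    # one column of target U1

# Stack M0 on top of M1 and solve once: the single solution column then
# works for M0 and M1 simultaneously.
S_col = solve_mod(M0 + M1, U0 + U1, q)

def mul(M, x):
    return [sum(M[r][j] * x[j] for j in range(m)) % q for r in range(n)]

assert mul(M0, S_col) == U0 and mul(M1, S_col) == U1
```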
What the test does is multiply the base matrices with their corresponding first-ciphertext matrices, then multiply that product with the second ciphertext, and so on. What we observe is that if we perform the matrix multiplication this way, then the i-th strand gives an output of the following form: the output is a product of a bunch of the secret matrices S_j with the top-level matrix corresponding to the bit s_i. Why is this the case? Because during encryption, while computing each ciphertext, we choose a short matrix S, separately for each of the different ciphertexts. So when we compute this multiplication, all these S matrices multiply together, and since each S matrix is common across its vertical cross-section, every strand picks up a common shared S-product term. Then, after performing these multiplications, the test simply sums the strand outputs together. Since each strand output is close to the product of the S_j's times the matrix M corresponding to s_i, summing them up gives the product of the S_j's times the sum of the M_{s_i}'s. Recall that in setup we chose the top-level matrices so that the ones corresponding to the bits of the string s sum to zero. So if we look inside the product-of-S_j's term, the inner sum is zero, and since the whole expression is close to this S-product times that sum, the entire thing is close to zero. This is basically how the testing algorithm works. And it works because the i-th ciphertext performs the choosing operation of which sub-strand you go to: if ct_i encrypts the bit s_i, then at the i-th level you switch onto the s_i-th sub-strand.
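A drastically simplified scalar analogue (matrices, preimages, and noise all stripped away; parameters are illustrative) of why the test outputs roughly zero on a key cycle: every strand picks up the same product of per-position secrets, so after summing, the sum-to-zero structure of the selected top values kills everything.

```python
import random

random.seed(3)
lam, q = 4, 997
s = [random.randrange(2) for _ in range(lam)]        # secret bit string
S = [random.randrange(1, q) for _ in range(lam)]     # shared per-position secrets

# Two "top values" per strand; overwrite the last selected one so the
# values selected by the secret bits sum to zero mod q.
t = [[random.randrange(q), random.randrange(q)] for _ in range(lam)]
partial = sum(t[i][s[i]] for i in range(lam - 1))
t[lam - 1][s[lam - 1]] = (-partial) % q

# Each strand's telescoped output is (S_1 * ... * S_lam) * t[i][s[i]].
s_prod = 1
for x in S:
    s_prod = (s_prod * x) % q

total = sum(s_prod * t[i][s[i]] for i in range(lam)) % q
assert total == 0    # on a key cycle the tester sees (roughly) zero
```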
Next, due to lack of time, I'll just quickly describe the high-level proof structure, the idea behind proving IND-CPA security of this scheme. Recall that the secret key consists of matrices of this form: a bunch of matrices from level zero up to level lambda. In the first game, we choose the top-level matrices with the special sum-to-zero structure. In the next game, we instead choose all the top-level matrices uniformly at random, without the special structure, and we hope to make this switch using the leftover hash lemma. The important fact at this point is that since the adversary has no information about the secret string s in the IND-CPA game, we can make this switch using the leftover hash lemma. The remaining idea in the proof is that we have lambda more games. Recall that the encryption algorithm also takes as input the position we're encrypting for. The idea in the rest of the proof is that we switch the positions over to random matrices one by one. Initially, we have the ciphertexts for all lambda positions computed honestly. In the next hybrid, instead of computing the ciphertext at position lambda honestly, we sample uniformly random short matrices. In the game after that, we do the same for the top two positions, and in the end all the matrices are uniformly random short matrices. I will skip the rest of the proof argument, but that's the high-level structure: we show that all the C_i matrices, all the short matrices that are part of each ciphertext, are uniformly random matrices, and we do this via a top-down approach. Now recall that in the preview I mentioned we have a problem setting parameters: we can't show how to set secure LWE parameters for this particular scheme.
The problem is that in order to make sure the test algorithm works, we have to make sure the error accumulation is somewhat bounded, and for applying the leftover hash lemma in the proof, we have to make sure that the number of strands in our system is pretty big. Concretely, for applying the LHL, the number of strands should be greater than log q, where q is the LWE modulus. And for correctness, to control the error accumulation, we need the number of levels in our system to be less than log q, so that the error doesn't grow too much across these multiplications. As per our current design, the number of strands is equal to the number of levels. That's the sticking point: we basically can't set the parameters, because there are two competing constraints here. So the next idea is to decouple them: we make the number of strands equal to a PRG output length instead. What does this mean exactly? Where does the PRG come in? Let's see how a PRG enters the construction. First, let's review the scheme we currently have. If we superpose the ciphertexts for both bit zero and bit one, for all positions from one to lambda, then we get a structure like this. Looking a little more closely, it looks like a branching program that computes the identity function. So it appears that what we are actually doing in the scheme is obliviously evaluating the identity function: via our ciphertexts, we are giving out a way to encode the identity function so that somebody can evaluate it. And to relieve this tension, what we do is evaluate and encode a PRG using this technique.
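To make the branching-program view concrete, here's a toy evaluator for a layered branching program (my own minimal formulation, not the paper's exact model): each step reads one input bit and applies one of two state-transition maps.

```python
def eval_bp(bp, x):
    # bp: list of (input-variable index, (transition-for-0, transition-for-1));
    # each transition maps the current state to the next state.
    state = 0
    for var, trans in bp:
        state = trans[x[var]][state]
    return state

ID, SWAP = [0, 1], [1, 0]
# Tiny hypothetical example: XOR of two input bits as a width-2 program.
xor_bp = [(0, (ID, SWAP)), (1, (ID, SWAP))]
results = [eval_bp(xor_bp, [a, b]) for a in (0, 1) for b in (0, 1)]
assert results == [0, 1, 1, 0]
```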
Instead of encoding a branching program for the identity function, we encode a branching program for a pseudorandom generator instead. The high-level structure is that we just expand things out: if we have a PRG that can be computed by an NC1 circuit, that is, a log-depth circuit, then we can write it as a polynomial-length branching program, and then we can use the same ideas and extend them. So, to conclude: in this work we gave a separation between circular security and semantic security, and this is the first from standard assumptions. Through this work, we also made a couple of technical contributions. The central technical contribution is a novel technique showing how to hide and encode branching programs using lattice trapdoors. We gave a technique called oblivious sequence transformation for performing that joint trapdoor sampling, and we also showed how to encode log-depth PRGs using these lattice trapdoors. There's one more important nugget: we require the branching programs to be of a fixed form. That is needed for consistent cascading, but due to time constraints, I'm not going to explain why it's important. Now, one issue is that this is only a symmetric-key primitive. You might ask: can these techniques be extended to the public-key setting, or can they even be useful elsewhere? The answer to both questions is yes. In a very recent work, we propose a new primitive called lockable obfuscation. A lockable obfuscation scheme consists of two algorithms, Obfuscate and Eval. The Obfuscate algorithm takes as input a program P, a message, and a lock string alpha, and it outputs an obfuscated program P-tilde. Eval takes an obfuscated program P-tilde and some input x, and it either outputs the message or a reject symbol.
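A functionality-only sketch of the lockable obfuscation interface. All names and the hash-based mechanics here are purely illustrative: this captures only the correctness contract (the message is released exactly when P(x) equals the lock alpha), not the security of the actual LWE construction — in particular, it stores P in the clear.

```python
import hashlib, secrets

REJECT = None

def _h(label, val):
    return hashlib.sha256(repr((label, val)).encode()).hexdigest()

def obfuscate(P, msg, alpha):
    # Store a hash tag of the lock, plus the message masked by a key
    # derived from the lock; P itself is left in the clear (toy only!).
    key = int(_h("key", alpha), 16) & ((1 << 128) - 1)
    return {"P": P, "tag": _h("tag", alpha), "masked": msg ^ key}

def evaluate(P_tilde, x):
    out = P_tilde["P"](x)
    if _h("tag", out) != P_tilde["tag"]:     # P(x) != alpha -> reject
        return REJECT
    key = int(_h("key", out), 16) & ((1 << 128) - 1)
    return P_tilde["masked"] ^ key           # P(x) == alpha -> recover msg

alpha = secrets.randbits(64)
msg = 0xCAFE
P_tilde = obfuscate(lambda x: x + 1, msg, alpha)   # illustrative P(x) = x + 1
assert evaluate(P_tilde, alpha - 1) == msg
```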
And the correctness requirement says that if the program P on some input x evaluates to alpha, then evaluating the obfuscated program should give you the message; otherwise, it outputs the reject symbol. The security requirement basically says that the obfuscated program P-tilde should not reveal any information about the program or the message, as long as the adversary has no information about the lock string alpha and alpha is chosen uniformly at random. We show a sequence of applications of this lockable obfuscation scheme. We show that we can build lockable obfuscation for all poly-size circuits and prove its security under the learning with errors assumption. The applications are listed here; in particular, we also obtain the circular security separations described in this work in the public-key setting. The paper is on ePrint, and there is also concurrent work. Thank you.

[Session chair] We have time for one quick question — in the back.

[Audience] Have you considered the notion of circular security for bit-by-bit symmetric encryption in which the adversary is given only one copy of the bit-by-bit encryption of the secret key under itself, and the task of the adversary is then to break the CPA security of the scheme? I'm wondering if your separation extends to this notion.

[Speaker] We might be able to use the lockable obfuscation for it, but the techniques developed in this work only show how to separate given a long key cycle, not to tell for one particular ciphertext whether it encrypts zero or one. But we should see.

[Audience] Right, so does your cycle-finder algorithm, when it's used on a key cycle, recover the secret key from those encryptions, or is it only able to tell us that, hey, we were given an encryption of the secret key and not an encryption of zero?

[Speaker] It is only able to tell us the secret key and — yeah.

[Audience] Oh, I see, right. Yeah, so I have one more question.
[Speaker] It is not able to — sorry, I meant to say it is not able to tell us the secret key. It's just able to distinguish that it's a secret-key cycle.

[Audience] Right, so I have one more question, but I think we don't have time, so I will ask you later. Thanks.