So today I'm going to talk about constraint-hiding constrained PRFs for NC1 from LWE. This is joint work with Ran Canetti, and essentially it's a different name for the same object the last talk was about. So let me start with some history. This is the famous figure of Napoleon, and his gesture should remind you of puncturing. That's my introduction to the notion of puncturable, or constrained, PRFs. What does a puncturable PRF do? You start with a normal PRF key, which I'll call the original key. Then you point at some input x* that you don't like, and you derive a punctured key, which I denote K[x*]. Given this key, you can evaluate the function everywhere consistently with the original key, except at the punctured point. So the functionality is preserved everywhere else, and you also want the original output on the punctured input to be hidden: given the punctured key, that output should look random. This notion was introduced by several concurrent works. In general you can extend it to a constrained PRF, where instead of wiping out a single input you choose a circuit, or predicate, that specifies a whole set of points whose outputs are randomized, i.e., not recoverable from the constrained key. Puncturable and constrained PRFs have many applications; they are probably best known for being very good friends of iO. So that's the history. Let me give a bit of background on how to puncture the textbook PRF, which is in fact puncturable. Look at the GGM tree-based PRF: the value at each input is defined by a path of PRG evaluations. To puncture the GGM PRF at a point, you output the values at the siblings along the path to the punctured point.
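The GGM puncturing just described can be sketched in a few lines of Python. This is a toy instantiation: SHA-256 stands in for the length-doubling PRG, and all names here are mine, not from the talk.

```python
import hashlib

def prg(seed: bytes) -> tuple[bytes, bytes]:
    # Length-doubling PRG, modeled here by hashing into two 16-byte halves.
    h = hashlib.sha256(b"prg" + seed).digest()
    return h[:16], h[16:]

def ggm_eval(key: bytes, x: str) -> bytes:
    # GGM: the value at x is defined by a path of PRG evaluations.
    v = key
    for bit in x:
        v = prg(v)[int(bit)]
    return v

def puncture(key: bytes, xstar: str) -> dict[str, bytes]:
    # Punctured key: the PRG values at the siblings along the path to xstar.
    out, v = {}, key
    for i, bit in enumerate(xstar):
        left, right = prg(v)
        sibling = xstar[:i] + ("1" if bit == "0" else "0")
        out[sibling] = right if bit == "0" else left
        v = left if bit == "0" else right
    return out

def punctured_eval(pkey: dict[str, bytes], x: str) -> bytes:
    # Any x != xstar extends exactly one sibling prefix; walk down from it.
    for prefix, v in pkey.items():
        if x.startswith(prefix):
            for bit in x[len(prefix):]:
                v = prg(v)[int(bit)]
            return v
    raise ValueError("cannot evaluate at the punctured point")
```

Note that the dictionary keys of the punctured key are exactly the sibling prefixes of x*, which is the point made next: the punctured key itself reveals where the puncture is.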
So if you output everything in black, which are the original values, and just withhold the value at the red point, the input you are puncturing, then that value is randomized while everything else can still be evaluated and is preserved. One thing to remember: the output at x* is wiped out, but, just as Napoleon's hand gesture still tells you where he is pointing, the punctured key still reveals which point was punctured. This is the limitation of previous puncturable PRFs: the constrained key leaks the constraint itself. Back to the irrelevant history for a moment. Napoleon was not only good at many things, he was also good at dealing with delicate situations. Where do you put your hand when you are under pressure? The natural thought is: what if I put my hand here, so that I hide something? This naturally leads to the notion of constraint-hiding constrained PRFs. Before we formalize what constraint hiding means, let's think at a very high level. For cryptographers, the more you can hide, the more you can do, right? You can think of putting your secrets in your pocket — the key, or whatever you don't want to reveal. The power of hiding more things drives a lot of the imagination in both the theory and the practice of cryptography, so I think this is a great notion to think about. In the work of Boneh, Lewi, and Wu, they ask, and partially answer, three philosophical questions about constraint-hiding constrained PRFs. First, what is a constraint-hiding constrained PRF? They give the indistinguishability-based definitions that you may have heard in the last talk. Second, where to find these PRFs.
They first show that iO applied to a puncturable PRF already hides the constraint, and that you can prove. Also, for restricted types of constraints, they give constructions from various assumptions on multilinear maps — still strong assumptions, but more concrete in some sense. Third, how to use them: there are many concrete applications, which you can see on the screen, and potentially you can come up with your own. In this work, we further address these three problems. First, what are constraint-hiding constrained PRFs? We give an alternative, simulation-based definition. More importantly, we find a more concrete way to construct these objects for NC1 constraints, where the predicate can be specified by a log-depth circuit, and we build it from the learning with errors (LWE) assumption. We also show more applications of constraint-hiding constrained PRFs. Specifically, we show that a constraint-hiding constrained PRF implies one-key private-key functional encryption, which is also called reusable garbled circuits. Whether that is simple or natural depends on your taste, but I think it's an easily understandable construction, and I'll say a bit about it later. We also show that a two-key CHCPRF implies obfuscation; I'll mention that as well. These are the main results. There is also concurrent work by Boneh, Kim, and Montgomery, which you may have heard about in the previous talk: they show a one-key puncturable constrained PRF — so a single-point constraint — from LWE. Aside from the similar-looking results, both works are rooted in lattice-based PRFs, which in turn go back to an earlier Eurocrypt paper, if I remember correctly. But we use completely different methods to hide the constraint — even for me, understanding the previous talk was not easy.
Okay, so the plan for the rest of the talk: first I'll review the definition, the implications, and related things, and then I'll tell you how to construct it. First, defining CHCPRFs. The syntax of a constraint-hiding constrained PRF is the usual one for a constrained PRF. You have a master key generation algorithm that gives you a master secret key. The constrain algorithm takes the master secret key and the constraint circuit and outputs a constrained key. Evaluation takes whatever key you have in hand and evaluates on whatever input you want. Now let me give our simulation-based constraint-hiding definition. Recall the general framework of simulation-based definitions: in the real world you get the actual constrained key, while there is a simulator who, without even seeing the constraint, produces a key that is indistinguishable from it. That is how such a definition captures constraint hiding. Let me do it more concretely. In the real world, the adversary plays against a challenger holding the real keys. It can make a constraint query C and get the constrained key, and it can make input queries, for which in this game it always gets the output under the original key. Now let's see what happens in the simulated world. The adversary still makes a constraint query, but here we ask the simulator to produce a simulated key without knowing the constraint — it only knows the size of the constraint, which you cannot avoid in some sense. Then the adversary makes input queries, and for each queried input we allow the simulator to learn one bit: whether the input satisfies the constraint or not.
The simulator is then asked to simulate what you would see in the real world. Given this setup, we require correctness: if a point falls into the preserved region, the real output must be consistent with evaluation under the original key. Pseudorandomness and constraint hiding are both captured by indistinguishability in the simulation sense: the simulated key is indistinguishable from the real key, and for inputs in the randomized region we require the simulator to output a truly random value, which in effect forces the real output to look random given a constrained key. For a single constrained key — that is, if the system only needs to be secure for one constraint query, but still polynomially many input queries — you can show that this simulation-based definition is equivalent to the indistinguishability-based definition. I also want to say a little about the simulation-based definition for many constrained keys, because even with just two keys you can do a consistency check and essentially recover the full functionality. You can relax the definition a bit by allowing the simulator to make queries — even more than we previously allowed. It turns out even that runs into problems, which I'll mention later. Even if you got lost in the formal definitions of the previous slides, this idea of hiding something — hiding a circuit inside the constrained key — should already remind you of obfuscation. More precisely, we can show that a two-key CHCPRF already implies a meaningful notion of obfuscation. Precisely: the two-key relaxed version of the simulation-based definition implies the strong VBB definition, which is impossible for most circuit classes.
And even a two-key indistinguishability-based CHCPRF — the definition you may have heard in the last talk — already implies iO. Achieving this is quite straightforward: you publish two constrained keys, one for the functional circuit and one for another circuit that is either all-zeros or all-ones, and to evaluate you just compare the two. Depending on which notion of constraint hiding you work with, you get the corresponding notion of obfuscation. This idea of two keys — a functional branch and a dummy branch — is also implicit in the first iO candidate. Okay, I must be honest with you: in this talk we only achieve one key, so we don't achieve iO in any meaningful sense. For the rest of the talk let's focus on the one-key definition. Even with the one-key definition you can achieve functional encryption, for one key, in some sense. The idea is simple: you put decrypt-and-eval in your pocket — in the constraint. How do we build the functional encryption scheme? The ciphertext is a normal symmetric-key ciphertext, plus a tag, which is the PRF evaluation of that ciphertext. The functional decryption key is the constrained key for the decrypt-and-eval functionality. To decrypt, you just evaluate the constrained key on the ciphertext and compare with the tag, so you recover whether the evaluation is consistent at the point you are interested in — that is, one bit, f(m). All right. I've described several applications of this notion; now the really interesting question is how to construct it. Our main construction is a CHCPRF for NC1 from LWE, and the technique is a combination: we start from the original lattice-based PRFs and ask what you need to add to a raw PRF. First, you need a meaningful way to embed functionality inside; for that we use Barrington's theorem.
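Stepping back for a moment, the one-key FE construction just described can be sketched end to end. This is a toy: the mock CHCPRF below illustrates the interface (constrained evaluation is preserved where the predicate holds and re-randomized elsewhere) but does not actually hide the constraint, the "symmetric encryption" is a fixed pad, and all names are mine, not from the talk.

```python
import hashlib, hmac, secrets

class ToyCHCPRF:
    """Mock CHCPRF: the constrained key preserves the PRF on inputs where
    C(x) = 1 and re-randomizes it elsewhere. A real CHCPRF also hides C;
    this mock does not."""
    def __init__(self):
        self.k = secrets.token_bytes(32)

    def eval(self, x: bytes) -> bytes:
        return hmac.new(self.k, x, hashlib.sha256).digest()

    def constrain(self, C):
        k, fresh = self.k, secrets.token_bytes(32)
        def constrained_eval(x: bytes) -> bytes:
            if C(x):  # preserved region: consistent with the original key
                return hmac.new(k, x, hashlib.sha256).digest()
            return hmac.new(fresh, x, hashlib.sha256).digest()  # randomized
        return constrained_eval

def fe_setup():
    return {"sk": secrets.token_bytes(16), "prf": ToyCHCPRF()}

def fe_enc(msk, m: bytes):
    # Ciphertext = symmetric encryption of m, plus a tag = PRF(ct).
    pad = hashlib.sha256(b"enc" + msk["sk"]).digest()[: len(m)]
    ct = bytes(a ^ b for a, b in zip(m, pad))
    return ct, msk["prf"].eval(ct)

def fe_keygen(msk, f):
    # Functional key = constrained key for the decrypt-and-eval predicate.
    sk = msk["sk"]
    def decrypt_and_eval(ct: bytes) -> bool:
        pad = hashlib.sha256(b"enc" + sk).digest()[: len(ct)]
        return f(bytes(a ^ b for a, b in zip(ct, pad)))
    return msk["prf"].constrain(decrypt_and_eval)

def fe_dec(fkey, ctx) -> bool:
    # The tag matches iff evaluation was preserved, i.e. iff f(m) = 1.
    ct, tag = ctx
    return fkey(ct) == tag
```

A real instantiation would plug in the lattice-based CHCPRF from the talk and a proper symmetric scheme; the point here is only that decryption is exactly "evaluate the constrained key and compare with the tag."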
Second, you need a meaningful way to publish something and still retain security. For that we use the GGH15 multilinear-map mechanism, which provides a meaningful public constrained mode. This also demonstrates a positive side of GGH15-based applications: previously we knew very few things built from multilinear maps that we could reason about with concrete security. All right, let me tell you a little about the original PRF constructed by Banerjee, Peikert, and Rosen. If you only want a lattice-based PRF, without anything else, you just pick a bunch of small matrices from the appropriate LWE-style distribution and do a subset product: the input bits select which matrices you take, you multiply them all together with a matrix A, which plays the role of the usual LWE mask, and then you round the whole thing. The rounding gives you an LWE-style security argument: you can inject the noise for free, on the fly. Okay, so what do you need in addition, to build a constraint-hiding constrained PRF? As I said, you need a reasonable public mode for it, and you need to embed structure into it. I'm repeating this because you really only need these two ingredients; you put them together and you get the notion. So let me first be more precise about the GGH15 encodings, which provide the meaningful public mode. GGH15 requires sampling each matrix A together with a trapdoor, and it requires trapdoor sampling — but anyway we are in the private-key setting, so you can do all the sampling privately. To encode a subset product, GGH15 works hop by hop: for each hop you have a pair of secrets. So how do you sample an encoding?
You first produce an LWE sample related to the secret, and then you use the trapdoor of the left matrix to sample a small encoding D such that A times D equals that LWE sample — so A_{i-1}·D_i = S_i·A_i + E_i — and we think of D_i as the encoding of S_i. For many hops, you imagine ℓ pairs of secrets along a line, and you just repeat this one-hop procedure ℓ times. And just to keep in mind what's public when you try to build a meaningful constrained PRF from this: you see all the A's and all the D's. That's the easy way to remember it. To recall the functionality of GGH15, let me do one quick slide just to convince you. You start from the evaluation A_0·D_1·D_2⋯D_ℓ. By correctness, each time you open one hop, an S moves to the front and a small error term gets pushed to the end. Watch closely: you collect a bunch of small terms, and added together they are still small, so A_0·D_1⋯D_ℓ is actually quite close to S_1·S_2⋯S_ℓ·A_ℓ plus something small, and after rounding, the small terms disappear. So the encodings compute a private functionality with a consistent output. The second tool we use is Barrington's theorem, to embed the circuit into the key. What does Barrington's theorem say? It tells you that any log-depth circuit can be converted into a matrix product: permutation matrices multiply together so that, say, the identity recognizes output one and a fixed non-identity permutation recognizes output zero, or the other way around. So you can put a matrix structure into the key. How do we actually do that? Each secret that we put into a GGH15 encoding now carries structure: in the normal mode we encode the identity matrix tensored with S, and in a constrained mode it may be some permutation matrix tensored with S.
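To make the "identity recognizes one, non-identity recognizes zero" mechanism concrete, here is a toy matrix branching program. This is not Barrington's actual width-5 construction — just a hand-built width-2 program for the predicate x_0 = x_1, with all names mine.

```python
I = ((1, 0), (0, 1))      # identity permutation matrix
SWAP = ((0, 1), (1, 0))   # the one non-identity permutation of width 2

# One pair of matrices per input bit: reading bit b at position i selects
# BP[i][b]. Here both positions use (I, SWAP), so the product equals
# SWAP^(x0 + x1), i.e. the identity exactly when x0 == x1.
BP = [(I, SWAP), (I, SWAP)]

def mat_mul(A, B):
    # 2x2 matrix product over the integers.
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def bp_eval(bp, x: str) -> int:
    # Multiply the selected matrices; accept (output 1) iff product == I.
    prod = I
    for i, bit in enumerate(x):
        prod = mat_mul(prod, bp[i][int(bit)])
    return 1 if prod == I else 0
```

Barrington's theorem guarantees that any NC1 circuit has such a program, of polynomial length, over width-5 permutation matrices; in the construction, each position's permutation matrix is exactly what gets tensored with the secret S_i inside a GGH15 encoding.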
By the mechanism of Barrington's theorem, an encoding sometimes becomes the encoding of I tensor S and sometimes the encoding of some other permutation tensor S. So this is the final NC1 constraint-hiding constrained PRF from GGH15 and Barrington's theorem. This is the key you will see: each piece of the key is a square matrix that encodes a potential permutation-matrix structure, and this is the check of the functionality. I don't have time to go through the proof, although the proof is actually the hardest part — you need to handle these GGH15-like objects. So let me just give a brief comparison with GGM. In GGM you still see a tree; here, if you don't look carefully, all you see is a bunch of subset products of matrices. The hard part of the proof is to show that these matrices, which potentially encode permutation structure, are hidden in the constrained key. What do we simulate? Eventually we simulate all the D matrices by sampling them obliviously from a small discrete Gaussian distribution, and the goal is to prove that the real constrained key is indistinguishable from this simulated constrained key, which has no structure inside the D matrices. The proof is messy and I'm not going to go through it today. It consists of two ingredients. One is an LWE variant that handles permutation matrices. The other part we already know from the GPV sampling lemma: you need to sample the D's from a specific distribution so that certain statistical steps go through. At a high level, in the real scheme you use the trapdoor to sample things, and you need to argue that the trapdoor doesn't cause you any trouble.
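Before wrapping up, the "watch closely" correctness sketch from the GGH15 slide can be written out as a telescoping sum, using the one-hop relation A_{i-1}·D_i = S_i·A_i + E_i from before (my notation):

```latex
\begin{aligned}
A_0 D_1 D_2 \cdots D_\ell
  &= (S_1 A_1 + E_1)\, D_2 \cdots D_\ell \\
  &= S_1 (A_1 D_2)\, D_3 \cdots D_\ell + E_1 D_2 \cdots D_\ell \\
  &= S_1 S_2 A_2\, D_3 \cdots D_\ell + S_1 E_2 D_3 \cdots D_\ell
     + E_1 D_2 \cdots D_\ell \\
  &\;\;\vdots \\
  &= S_1 S_2 \cdots S_\ell\, A_\ell
     + \underbrace{\sum_{i=1}^{\ell} S_1 \cdots S_{i-1}\, E_i\, D_{i+1} \cdots D_\ell}_{\text{small, since each } S_i,\, E_i,\, D_i \text{ is small}}
     \pmod q .
\end{aligned}
```

Hence $\lfloor A_0 D_1 \cdots D_\ell \rceil_p = \lfloor S_1 \cdots S_\ell A_\ell \rceil_p$ with overwhelming probability over the choice of parameters, which is exactly the "consistent output" claimed for the encoded private functionality.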
So these are the high-level ideas of the construction. I hope you enjoyed the talk, and thanks for your time.

Okay, we have time for a few questions.

What kind of parameters do you rely on for your LWE problem?

Sorry?

What are the approximation factors for the LWE problem that you rely on?

That's a great point. The approximation factor for us inherently grows exponentially with the level ℓ, so you need to rely on a subexponential approximation factor, and then you set all the parameters according to the best attacks we know, in some sense.

Maybe a related question: when you say subexponential approximation factor, is it like ℓ — no, ℓ to the one-half — in the exponent?

It can be various things — actually I'd have to check. I think it doesn't have to be a fixed constant; you can set the lattice parameters to be bigger, yeah.