Hello, my name is Sina. I will be talking about constraining and watermarking PRFs from milder LWE assumptions. This is joint work with Chris Peikert. We start by reviewing the definition of a constrained PRF. Boneh and Waters, Kiayias et al., and Boyle et al. introduced the notion of constrained PRFs in three independent works. A constrained PRF is a PRF that can delegate, that is, generate, a constrained key with respect to some predicate C. For instance, in this picture we have a party who holds the (master) secret key of the PRF, and we have a user. At some point, the party holding the master secret key decides to give the user a constrained key with respect to some predicate C, and sends this constrained key to the user. The constrained key has two properties. The first, which we call correctness, says that with the constrained key the user can evaluate the PRF on every input point authorized by the predicate C. The second property is constrained pseudorandomness: on points that are not authorized by C, the value of the PRF looks random to the user, even when the user has access to the constrained key.

In 2017, Boneh, Lewi, and Wu strengthened the notion of constrained PRFs and introduced private constrained PRFs. A private constrained PRF is a constrained PRF whose constrained key hides the constraint, that is, the predicate C. So here the constrained key satisfies three properties: the first two are correctness and constrained pseudorandomness, as before, and the third says that the constrained key hides the predicate C.

Constrained and private constrained PRFs have found various applications. One notable application is software watermarking. In software watermarking, we have a program, a piece of software, and we want to put a mark on it.
We need this mark to be unremovable, meaning that someone who holds a marked version of our program should not be able to remove the mark. We also want the watermarked version of the program to be functionally equivalent to the original, unmarked version. Boneh, Lewi, and Wu, and Kim and Wu, showed how to use private constrained PRFs to build PRFs that can be watermarked. Here we briefly give a high-level overview of their construction. In their construction, we have a PRF which is a private constrained PRF, and we want to put a mark on it. The watermarked version of the PRF is a private constrained key corresponding to the point-function predicate at a random point x*; this predicate authorizes every point except x*. To see why the mark is unremovable, notice that since this is a private constrained PRF, the predicate is hidden, and in particular the random point x* is hidden. So an adversary cannot find x*, and cannot tell at which point the watermarked version differs from the original. To see why this is functionality preserving, notice that the functionality of the watermarked PRF is the same as that of the original PRF except at one random point. Finally, to check whether a PRF is watermarked, one can evaluate it at the random point x* and compare the result with the original PRF; if the two values are not equal, we are dealing with the watermarked version of the PRF.

Constrained PRFs also have other applications. As shown by Boneh, Lewi, and Wu, they can be used to build deniable symmetric-key encryption, where the scenario is that we want to generate fake keys which open ciphertexts to random-looking messages.
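As a minimal runnable sketch of the watermarking idea described above: the names `mark` and `verify` are illustrative, and the private constrained (punctured) PRF is faked here with HMAC plus an explicit exception point, so this shows only the shape of the construction, not a real scheme (in a real scheme, x* is hidden inside the constrained key rather than handled by an `if`).

```python
# Watermarking-from-punctured-PRF sketch.  HMAC stands in for the PRF;
# the "constrained key" is simulated by a closure that misbehaves at x*.
import hmac, hashlib, os

def prf(msk: bytes, x: bytes) -> bytes:
    return hmac.new(msk, x, hashlib.sha256).digest()

def mark(msk: bytes):
    """Return a watermarked program: the PRF 'punctured' at a random x*."""
    x_star = os.urandom(16)
    def marked(x: bytes) -> bytes:
        if x == x_star:
            return os.urandom(32)   # stands in for "looks random" at x*
        return prf(msk, x)
    return marked, x_star           # x* is kept by the mark-detector

def verify(msk: bytes, prog, x_star: bytes) -> bool:
    """Marked iff the program disagrees with the real PRF at x*."""
    return prog(x_star) != prf(msk, x_star)
```

Note how unremovability and functionality preservation show up even in the toy: the program agrees with the PRF everywhere except the single hidden point, and detection only needs one evaluation at x*.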
Another application is updatable cryptography, where we have some cryptographic scheme and we want to update, say, the keys or the ciphertexts quickly; as shown by Ananth, Cohen, and Jain, constrained PRFs can be used to build updatable garbled circuits.

Constrained PRFs have been built based on various assumptions. The first three works that introduced the notion used one-way functions to build constrained PRFs for the limited class of prefix-fixing predicates. Boneh and Waters used multilinear maps to build constrained PRFs for polynomial-size circuits. Attrapadung et al. used DDH-type assumptions in pairing-free groups to support all NC1 circuits, that is, circuits of logarithmic depth. Most relevant to this work, Brakerski and Vaikuntanathan used the learning with errors assumption to build constrained PRFs for polynomial-size circuits. Their construction, the BV15 construction, inspired many subsequent works, which used its techniques to build constrained PRFs or private constrained PRFs from lattice assumptions.

Let's briefly review the learning with errors, or LWE, assumption. LWE is a computational assumption introduced by Regev in 2005. It is parameterized by a dimension n, a modulus q, and an error distribution chi, which is usually a Gaussian of width roughly the square root of n. With these parameters, the LWE assumption says the following: for a uniformly random n-dimensional secret vector s over Z_q, the following two distributions are computationally indistinguishable. In the first distribution, we pick a uniformly random m-by-n matrix A over Z_q and output A along with A·s + e, where e is an error vector each of whose coordinates comes from the error distribution chi. LWE says that this distribution is computationally indistinguishable from the following one.
Again, we pick a uniformly random m-by-n matrix A over Z_q, but now the second component is a uniformly random m-dimensional vector over Z_q. So this is the LWE assumption. It has been shown that an algorithm breaking LWE yields a quantum algorithm approximating certain short-vector problems on integer lattices to within roughly a q·√n approximation factor. In this reduction, it is clear that as q grows, the approximation factor becomes looser and less desirable. Furthermore, as q grows, known attacks perform better, which in particular means we need a larger dimension n to achieve a given security level. A larger dimension n, and a larger q, lead to larger keys, larger public parameters, and so on. In short, we prefer a smaller modulus q.

As I mentioned on the previous slide, after Brakerski and Vaikuntanathan's work there have been several other papers building private constrained PRFs for different predicate classes. Boneh et al. built private constrained PRFs for point functions, Canetti and Chen built them for NC1 circuits, and Brakerski et al., and Peikert and myself, built them for polynomial-size circuits. If you look at these papers, then at least when the PRFs are to support polynomially long inputs, all of these constructions need a modulus which is sub-exponential in the dimension.

In this work, we build constrained PRFs where the LWE modulus scales much more slowly than sub-exponentially in the dimension. In particular, we build private constrained PRFs for polynomial-size circuits where the LWE modulus is exponential in the depth of the circuit. We also build constrained PRFs for log-depth (NC1) circuits with a nearly polynomial LWE modulus. Finally, we build private constrained PRFs for the class of inner-product predicates based on any key-homomorphic PRF.
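The two LWE distributions just defined can be sketched in a few lines of Python. The parameters n, m, q and the error width below are toy values chosen for illustration only, far too small for any real security.

```python
# The two distributions of the LWE assumption, with toy parameters.
import random

rng = random.Random(0)
n, m, q = 8, 16, 97                 # dimension, number of samples, modulus

s = [rng.randrange(q) for _ in range(n)]                   # secret in Z_q^n
A = [[rng.randrange(q) for _ in range(n)] for _ in range(m)]  # uniform matrix
e = [round(rng.gauss(0, 2.0)) for _ in range(m)]           # small errors

# "LWE" distribution: (A, A*s + e mod q)
b_lwe = [(sum(a_i * s_i for a_i, s_i in zip(row, s)) + err) % q
         for row, err in zip(A, e)]

# "Uniform" distribution: (A, u) for a uniform vector u
b_unif = [rng.randrange(q) for _ in range(m)]
```

The assumption is that no efficient algorithm can tell `(A, b_lwe)` apart from `(A, b_unif)`, even though `b_lwe` is completely determined by `A`, `s`, and the small errors `e`.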
Instantiating our private constrained PRF for inner-product predicates with the LWE-based key-homomorphic PRFs of Banerjee and Peikert, we get an LWE-based private constrained PRF for inner-product predicates with a nearly polynomial modulus. I should mention that we obtain our results in a slightly relaxed model for constrained PRFs, but this relaxed model is sufficient for what the applications need; later in the talk, we will focus on this relaxed model. We also have an additional result: we build private constrained PRFs for the predicate class of t-CNFs based on any one-way function. This was also independently observed by Davidson et al. We can directly plug our private constrained PRFs into the watermarkable-PRF construction of Kim and Wu, or that of Quach et al., and get the first watermarkable PRF with a quasi-polynomial modulus. Our results, together with the work of Ananth et al., give us an updatable-cryptography primitive, updatable garbled circuits, from LWE with a nearly polynomial modulus. Also, plugging our results into the work of Boneh et al. gives deniable symmetric-key encryption from LWE with a quasi-polynomial modulus.

One big caveat of our results is that the concrete correctness level of our constructions scales with the LWE modulus. For instance, with a nearly polynomial LWE modulus, an adversary running in nearly polynomial time can break the correctness of our scheme. This means we only get correctness in the asymptotic, negligible sense against polynomial-time adversaries; improving this is room for future work.

Here we examine previous LWE-based constrained PRF constructions and the reason they needed a sub-exponential modulus. In previous constructions, the output of the PRF is a value in Z_p, where p divides the modulus q.
To compute the original value of the PRF at some input point x, we first evaluate an intermediate function f′ to get a value mod q, and then round it to get a value mod p. If we evaluate at x using a constrained key, we instead get the intermediate value f′ plus some noise, and rounding that gives the value of the PRF. The problem happens when the intermediate value f′(x) is dangerously close to a rounding threshold: in this case, adding the noise changes the rounded value, and correctness does not hold. Previous constructions got around this issue by making the ratio between q and p larger than the input domain of the PRF. By doing this, and then using a union-bound argument, they could argue that the chance that there exists an x such that f′(x) is on the borderline is negligible. But the caveat is that doing this leads to q being sub-exponentially big.

To avoid the sub-exponential modulus q, our first observation is that the current definition of constrained PRFs has a restrictive notion of correctness. The existing correctness notion requires the value obtained using a constrained key to equal the original value of the PRF at all authorized input points. We relax this condition and define a new notion called feasible correctness. Feasible correctness relaxes the previous definition in the following way: it only requires the original value of the PRF and the value obtained using the constrained key to match at points that do not depend on the bit representation of the constrained key. For instance, a point chosen uniformly at random does not depend on the bit representation of the constrained key, and this immediately tells us that feasible correctness is sufficient for watermarking. It is indeed sufficient for most other applications.
More formally, we model feasible correctness as a game in which an adversary has oracle access to the original PRF: it can send this oracle an input point x and get back the original value of the PRF at x. Finally, the adversary outputs an input point x*. We say the adversary wins if there exists a predicate C such that C authorizes x*, and the original value of the PRF at x* does not match the value obtained at x* using the constrained key corresponding to C. We say a constrained PRF is feasibly correct if any polynomial-time adversary has at most a negligible chance of winning this feasible-correctness game. To see why feasible correctness is enough for most applications, notice that in almost all applications, we rarely evaluate a PRF on a point that depends on the bit representation, the description, of its key. This is why feasible correctness is sufficient for almost all applications.

We achieve feasible correctness by shifting with an independent PRF. In more detail, we use an auxiliary independent PRF, auxPRF, with key k, and we concatenate k to the master secret key and to the constrained key. Now, to evaluate the PRF at an input point x, we compute f′ (or f′ plus noise) as before, but prior to rounding we add the value of auxPRF at x, and then we round. The main observation is that if the input point x does not depend on the auxPRF key k, then the value of auxPRF at x looks random, and consequently, the probability that f′ plus the value of auxPRF lands in the borderline area is negligible.

Finally, we describe our construction of a private constrained PRF for hyperplane predicates. In this setting, our input domain is L-dimensional vectors of short integers.
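Before the construction details, here is a toy sketch of both the borderline-rounding failure and the shifting fix just described. All parameters are illustrative, and the auxiliary PRF is faked with HMAC.

```python
# Rounding Z_q -> Z_p is stable under small noise, except near a
# rounding threshold; shifting by an independent PRF value before
# rounding makes borderline points unlikely for key-independent inputs.
import hmac, hashlib

q, p, noise = 1024, 2, 3
k_aux = b"auxiliary-prf-key"        # hypothetical auxPRF key

def round_qp(v: int) -> int:
    """Round a value mod q to a value mod p."""
    return ((v * p + q // 2) // q) % p

def aux_prf(x: bytes) -> int:
    return int.from_bytes(hmac.new(k_aux, x, hashlib.sha256).digest(),
                          "big") % q

# Borderline failure: v = 255 sits just below the threshold q/(2p) = 256,
# so adding a small error flips the rounded value.
assert round_qp(255) != round_qp(255 + noise)

def shifted_round(v: int, x: bytes) -> int:
    """The fix: add auxPRF(x) before rounding."""
    return round_qp((v + aux_prf(x)) % q)
```

For an input x chosen independently of `k_aux`, the shifted value is (pseudo)random mod q, so it lands within `noise` of a threshold only with probability about 2·p·noise/q.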
Our predicates are hyperplanes H_α, where the coefficients of the hyperplane are α_0, α_1 through α_L. We will be using a key-homomorphic PRF KH, where key homomorphism means that evaluating the PRF at a point x with key k_0, and adding an evaluation at x with key k_1, is approximately equal to an evaluation at x with key k_0 + k_1. In our construction, the master secret key consists of L + 1 secret keys for the underlying key-homomorphic PRF. To evaluate the PRF at an input point x, we evaluate the underlying PRF at x a total of L + 1 times: the first time using key k_0, the second time using key x_1·k_1, and so on, the last time using key x_L·k_L. We then sum up all these evaluations, and this is the value of the PRF at x. To generate a constrained key for a hyperplane H_α with coefficients α_0, α_1 through α_L, we first generate a random key d for the underlying key-homomorphic PRF, and then output k_0 − d·α_0 through k_L − d·α_L as the constrained key. Evaluation using the constrained key is almost identical to evaluation using the master secret key; the only difference is that we use the components of the constrained key. To see why this provides correctness and constrained pseudorandomness, we use the key-homomorphic property of the underlying PRF. In particular, notice that if we group together the k_i's, we get the original value of the PRF at x, and if we group together the α_i's, we get a multiple of the value of the key-homomorphic PRF with key d at x, where this multiple is zero if x is on the hyperplane and nonzero if x is not.
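The construction above can be exercised with a toy, exactly key-homomorphic function F_k(x) = ⟨H(x), k⟩ mod q. Such an F is linear in the key, so it is of course not a secure PRF; the sketch only makes the key-cancellation algebra behind correctness visible. All names and parameters here are illustrative.

```python
# Hyperplane-constrained PRF from an exactly key-homomorphic toy F.
import hashlib, random

q, n, L = 2**31 - 1, 8, 3
rng = random.Random(1)

def H(x: bytes) -> list:
    return list(hashlib.sha256(x).digest()[:n])

def F(k: list, x: bytes) -> int:
    """Toy key-homomorphic function: <H(x), k> mod q (linear in k)."""
    return sum(h * ki for h, ki in zip(H(x), k)) % q

def rand_key() -> list:
    return [rng.randrange(q) for _ in range(n)]

keys = [rand_key() for _ in range(L + 1)]       # master key k_0 .. k_L

def master_eval(x: bytes, xvec: list) -> int:
    # F_{k_0}(x) + F_{x_1 k_1}(x) + ... + F_{x_L k_L}(x)
    xs = [1] + xvec
    return sum(F([xi * ki % q for ki in k], x)
               for xi, k in zip(xs, keys)) % q

def constrain(alpha: list) -> list:
    d = rand_key()                              # fresh random key d
    return [[(ki - ai * di) % q for ki, di in zip(k, d)]
            for k, ai in zip(keys, alpha)]      # k_i - d*alpha_i

def constrained_eval(ck: list, x: bytes, xvec: list) -> int:
    xs = [1] + xvec
    return sum(F([xi * ki % q for ki in k], x)
               for xi, k in zip(xs, ck)) % q
```

Grouping terms, the constrained evaluation equals the master evaluation minus (α_0 + Σ α_i·x_i) times F_d(x), so the two agree exactly when x lies on the hyperplane.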
In the case where x is not on the hyperplane, since the value of the key-homomorphic PRF with key d at x is pseudorandom, this term hides the original value of the PRF, so the constrained evaluation also looks random. Thanks for your attention. I would be happy to take questions in the live session.