So I'll focus in this talk on lossy trapdoor functions, and just give a brief nod to correlation-secure trapdoor functions. As we saw in the last talk (and I'll go over exactly what they are again), lossy trapdoor functions were introduced by Peikert and Waters. You're given a function f, and it can behave in one of two ways. Either it's injective, which means it's a one-to-one function and there's a trapdoor that allows it to be inverted, or it's lossy, which means it loses information; in particular, the image of the function is smaller than the domain. More specifically, we can quantify the lossiness: if the domain is {0,1}^n and the image has size at most 2^(n-l), then we say that f has l bits of lossiness. The security property is that the descriptions of injective functions and lossy functions are computationally indistinguishable, and it's actually this security property that the constructions rely on.

So that's what they are. What can we do with them? Lots of different things. Peikert and Waters suggested some applications, including collision-resistant hash functions, oblivious transfer, and their killer app, chosen-ciphertext (CCA) secure public-key encryption. Since the original paper there have been other applications, including deterministic public-key encryption by Boldyreva et al. and security against selective-opening attacks by Bellare et al., and there have been a bunch of other papers; you can find a fuller list in the proceedings. All of these constructions use lossy trapdoor functions purely as a modular primitive, with nothing depending on the particulars of any one construction.

Given all these uses, we'd like to have a big library of lossy trapdoor functions based on different computational assumptions, so we can choose the one best suited to the application, or switch if some assumption gets broken. So what constructions do we have? Peikert and Waters gave two: one based on decisional Diffie-Hellman (DDH), and a second based on learning with errors, which is a lattice assumption. In our paper we add new constructions. One is based on the quadratic residuosity assumption; in the next talk we'll see a similar construction based on an apparently weaker assumption, two-versus-three primes, and for details comparing these assumptions I'll again refer you to the full version of the paper. This quadratic residuosity construction also generalizes to higher powers, that is, e-th power residuosity. We also give a composite residuosity construction, under what is also known as the Paillier assumption; this particular construction was discovered independently by Boldyreva et al. And we give a construction based on the d-linear assumption, which simplifies and generalizes the Peikert-Waters DDH construction.

We also give a correlation-secure trapdoor function, so here's a brief overview of what that is. Correlation security, introduced by Rosen and Segev, is a generalization of one-wayness to correlated inputs. Basically, a collection of functions is correlation secure if it remains one-way when evaluated on k-tuples of inputs drawn from some correlated distribution. Rosen and Segev show that this security property is enough to construct chosen-ciphertext-secure encryption, and that it's implied by lossy trapdoor functions.
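To make that slightly more concrete, here is one way the notion can be written down (my paraphrase; the precise definition and quantifiers are in the Rosen-Segev paper). Sample f_1, ..., f_k independently from the collection and draw (x_1, ..., x_k) from the correlated input distribution C_k (for example, the repetition distribution, where a uniform x is repeated k times); then for every efficient adversary A,

```latex
\Pr\Big[\mathcal{A}\big(f_1,\dots,f_k,\ f_1(x_1),\dots,f_k(x_k)\big) = x_1\Big] \le \mathrm{negl}(\lambda).
```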
And in particular, we'll see in the next talk that it's implied by lossy trapdoor functions with any amount of lossiness, even a bit or less. Our contribution in this area is a new construction based on the hardness of syndrome decoding, which is a coding-theory assumption; it's related to the dual of the McEliece cryptosystem.

But now I'll focus on the lossy trapdoor functions, and in particular give outlines of two of our constructions, the first being the quadratic residuosity construction. This construction is based on some old observations. Suppose we have an RSA modulus N = pq, where p and q are 3 mod 4. It's well known that the squaring function, x goes to x^2 mod N, is a four-to-one map, at least on Z_N^*, so the image is a quarter of the size of the domain: that's two bits of lossiness. It's also well known that we can take a unique square root, so we can invert the squaring function, if we know two more bits of information. Those bits are the Jacobi symbol, which is just -1 or +1 on Z_N^* (and 0 if the number is not in Z_N^*), and the sign, where we represent x as an integer between -N/2 and N/2 and take the sign in the obvious way. More specifically, if we call the four square roots of a square mod N ±x_0 and ±x_1, then the Jacobi symbol tells us whether we're working with x_0 or x_1, because the Jacobi symbol of the pair ±x_1 is different from that of ±x_0, and then, given the Jacobi symbol, the sign tells us which of the two square roots it is. Here we're specifically using the fact that p and q are 3 mod 4.

All right, so we have this lossy function, squaring, and we know how to invert it given these two extra bits of information. To create an injective function, we need to encode these extra two bits somehow, for example as in Rabin-Williams encryption, but in particular we need to do it in a computationally indistinguishable way. That's the problem, and the solution is to put these extra two bits in the exponents of quadratic non-residues.

More specifically, I'll define two functions. The function h gives information about the sign: it's defined to be 1 if the sign is positive and 0 if it's negative. The function j captures information about the Jacobi symbol; it's basically an additive version of the Jacobi symbol: 1 if the Jacobi symbol is -1 and 0 otherwise. Now we define our function by choosing elements r and s in Z_N^*: r has Jacobi symbol -1, so it's automatically a quadratic non-residue, and s has Jacobi symbol +1 and is chosen to be a quadratic non-residue. The injective function, described by the triple (r, s, N), takes x, squares it, and multiplies in r^j(x) and s^h(x), all mod N:

f(x) = x^2 * r^j(x) * s^h(x) mod N.

Why should you believe me that this is injective and can be inverted? I'll show you how to invert the function, which I guess proves that it's injective.
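Before going through the inversion, here's a minimal Python sketch of the evaluation; this is a toy instantiation with made-up parameters (the modulus and the choices of r and s below are mine, purely for illustration; the paper of course works with a full-size modulus):

```python
# Toy sketch of the QR-based injective function f(x) = x^2 * r^j(x) * s^h(x) mod N.

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0; returns -1, 0, or 1."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:          # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                # quadratic reciprocity flip
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def signed_rep(x, N):
    """Represent x mod N as an integer in (-N/2, N/2)."""
    x %= N
    return x - N if x > N // 2 else x

def h(x, N):
    return 1 if signed_rep(x, N) > 0 else 0   # the "sign" bit

def j(x, N):
    return 1 if jacobi(x, N) == -1 else 0     # additive version of the Jacobi symbol

def f(x, N, r, s):
    return (pow(x, 2, N) * pow(r, j(x, N), N) * pow(s, h(x, N), N)) % N

# Example: N = 7 * 11 = 77 (both primes are 3 mod 4); r = 2 has Jacobi symbol -1;
# s = 24 is a non-residue mod both 7 and 11, hence a quadratic non-residue with
# Jacobi symbol +1, which is the injective mode.
print(f(10, 77, 2, 24))
```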
So, to invert: first, to recover the Jacobi symbol, we use the fact that the Jacobi symbol is multiplicative. In particular, the Jacobi symbol of f(x) equals the Jacobi symbol of x, because of the other two factors: x^2 has Jacobi symbol +1 because it's a square, and s has Jacobi symbol +1, so we're left only with the r term, and that has Jacobi symbol +1 or -1 depending on what j is. Once we learn j, we can kill off that factor and recover x^2 * s^h(x). Now x^2 is clearly a square, so whether this product is a quadratic residue tells us the value of h(x).

So in particular we invert as follows. First we compute the Jacobi symbol of f(x), which gives us the Jacobi symbol of x, and hence j. We multiply by r^{-j} to cancel out that term. Then we determine whether the result is a quadratic residue, which gives us h, and multiply by s^{-h} to kill off that term. Finally we're left with just x^2: we compute the four square roots and output the one that matches the values of h and j we computed. (Testing quadratic residuosity and taking square roots here use the trapdoor, namely the factorization of N.)

So now we have an injective function based on squaring. The same function is lossy if, instead of choosing s to be a quadratic non-residue, we choose s to be a quadratic residue. In particular, you can show that the function is then two-to-one on all of Z_N except zero; it specifically loses the information about the sign of x. And now it's pretty immediate that the lossy functions are indistinguishable from the injective functions: the description is (r, s, N), the only difference is whether s is a quadratic residue or not, and that's exactly the quadratic residuosity assumption.

Some extensions. As I described it here, the domain of the function depends on the function index, in particular on N, and for some applications we need functions whose domain doesn't depend on the index. We can extend the domain in a pretty straightforward way; you don't achieve a full bit of lossiness, but you get some fraction of a bit. This extension, well, all of these extensions, are in the full version. We can also use raising to the e-th power instead of squaring, under an analogous e-th power residuosity assumption, and this way we can achieve much more lossiness. The exponent e has to be small enough that Coppersmith-type attacks don't apply, so roughly smaller than N^{1/4}. Computing the analog of the Jacobi symbol, which is needed for inversion, takes time polynomial in e in general, not in log e, and that seems to be a problem. But we can get around it by choosing e to be a number with lots of small prime factors, doing the computation modulo each small factor, and then using the Chinese remainder theorem. The details involve some Eisenstein reciprocity in number fields, which is a generalization of quadratic reciprocity. Again, these details are all in the full version on ePrint.

So now I'd like to describe the lossy trapdoor function built from the d-linear assumption. Here's the basic observation. Take our input x, a bit string in {0,1}^n, and view it as a length-n vector with entries in {0,1}. Given an n-by-n matrix M over a finite field F_p, we define our function to be matrix-vector multiplication, f(x) = Mx, so the output is a vector in F_p^n. The observation is that this function can be injective or lossy. It's injective if M has full rank n, and then if we know M^{-1} we can invert. It's lossy if the rank is small: if M has rank d, the image has size at most p^d, and if p^d is less than 2^n, so the image is smaller than the domain, then we have lossiness.
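To see the rank observation concretely, here's a toy Python demo with made-up tiny parameters; it just brute-forces the image over all 0-1 inputs:

```python
# Rank-based lossiness for f(x) = M x over F_p, with x in {0,1}^n.
import itertools

p, n = 5, 4

def apply(M, x):
    return tuple(sum(M[i][k] * x[k] for k in range(n)) % p for i in range(n))

def image_size(M):
    return len({apply(M, x) for x in itertools.product((0, 1), repeat=n)})

identity = [[1 if i == k else 0 for k in range(n)] for i in range(n)]
rank_one = [[(i + 1) * (k + 1) % p for k in range(n)] for i in range(n)]  # outer product: rank 1

print(image_size(identity))  # 16 = 2^4 distinct outputs: injective on {0,1}^4
print(image_size(rank_one))  # 5 <= p^1 = 5 outputs: lossy
```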
So this is a good start, except that we can easily distinguish these two cases: given a matrix over F_p, it's pretty easy to compute its rank. How do we get around this? What we do is encode the matrix M in the exponent of a group where discrete log is hard. We let G be a group of order p, let g be a generator, and let the matrix have entries m_ij in F_p. Instead of describing the function by the matrix M, we describe it by the matrix of group elements g^M, that is, g raised to each entry of M. The trapdoor remains M^{-1}. Now, given g^M and x, we can evaluate g^{Mx}; this is more an exercise in notation than anything else. Similarly, given M^{-1}, we can apply it in the exponent to recover g^x. We then need to take discrete logs to recover the entries of x, but since x is a zero-one vector these discrete logs are easy: it's just determining whether each entry is the identity or not. (I'll show a small sketch of this mechanics in a moment.)

That was the description of our function. For security we rely on a theorem of Boneh et al., who did the case d = 1, and Naor and Segev in general, which says that if the d-linear assumption holds in the group G, then g^M for a full-rank matrix M is indistinguishable from g^M for a rank-d matrix M.

So what exactly is the d-linear assumption? It's a generalization of decisional Diffie-Hellman, designed so that it holds, or could hold, even in groups with a d-linear map. For d = 1, the 1-linear assumption is exactly DDH. The 2-linear assumption could hold in groups with a bilinear map, and it's also known as the decision linear assumption. When we instantiate the function with a matrix of rank d, the amount of lossiness is n - d log p, so if we choose the parameters correctly we can achieve varying amounts of lossiness.

Some observations about this construction. We simplify and generalize the Peikert-Waters construction, which is based on ElGamal. In particular, they encrypt the matrix M so that it can be decrypted. Decrypting the matrix M in our setting would require taking discrete logs, but we observe that we don't actually need to do this; we only need the functionality of M, and this saves some space. Also, if you generalize their construction directly, you have to use generalized ElGamal encryption, which has d group elements for each ciphertext, so it would be d times as large. As I said before, we can choose the parameters to achieve different amounts of lossiness. The construction also admits an all-but-one generalization; well, the construction works for all d, but there's a gap in the proof for d bigger than one, so we only get a proof under DDH, and I invite you to try to plug the hole for d greater than one. This all-but-one generalization is what's used in the Peikert-Waters CCA-secure encryption construction.
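Before concluding, here's a toy Python sketch of the evaluate-and-invert mechanics in the exponent (tiny made-up parameters: I use the order-5 subgroup of Z_11^* so everything can be checked by hand; a real instantiation needs a large group where discrete log is hard):

```python
from math import prod

p, q, g, n = 5, 11, 3, 2        # g = 3 generates the order-5 subgroup of Z_11^*

M    = [[1, 2], [3, 4]]         # secret matrix over F_p, full rank: injective mode
Minv = [[3, 1], [4, 2]]         # the trapdoor: M^{-1} mod p
G = [[pow(g, M[i][k], q) for k in range(n)] for i in range(n)]   # published as g^M

def evaluate(G, x):
    # given only g^M and a 0-1 vector x, compute g^(Mx) entry by entry
    return [prod(pow(G[i][k], x[k], q) for k in range(n)) % q for i in range(n)]

def invert(Minv, y):
    # apply M^{-1} in the exponent to recover g^(x_i) ...
    gx = [prod(pow(y[k], Minv[i][k], q) for k in range(n)) % q for i in range(n)]
    # ... and the discrete log is trivial because each x_i is 0 or 1
    return [0 if v == 1 else 1 for v in gx]

x = [1, 0]
assert invert(Minv, evaluate(G, x)) == x
```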
So, in conclusion, I've described some constructions of lossy trapdoor functions based on the quadratic residuosity and d-linear assumptions. In the paper there's also a construction based on the composite residuosity, or Paillier, assumption, as well as a construction of correlation-secure trapdoor functions based on syndrome decoding. These constructions expand our library of lossy trapdoor functions, and therefore expand the methods we have to build new cryptosystems in a modular way from these primitives. Thank you.