Thanks for the introduction. So I'm going to talk about this new thing that we call trapdoor hash functions, and as I'll show really soon, it has lots of interesting applications. So let's start.

In order to give you some motivation, I'm actually going to begin with an application of trapdoor hash. All of our applications fall in the following context: we have a public function f that takes two inputs, and we have two parties, a receiver Bob with an input x and a sender Alice with an input y. The goal, essentially, is to let these two parties communicate, and at the end we want the receiver to learn the output of the function on the two inputs. Our focus for this talk is going to be on protocols that consist of only two messages: first the receiver sends a message, and then the sender responds. Our goal is to minimize the communication complexity of such protocols, or in other words, to minimize the length of these two messages.

Good. So in an ideal world where Alice and Bob love and trust each other, we can construct very simple protocols that have optimal communication. But we're in crypto, so we're interested in a scenario where the parties do not trust each other and each of them wants to keep their input private, and we're going to consider the semi-honest notion of security. The main question we're going to ask is whether protocols that guarantee security can be as efficient as protocols from the ideal world. In other words, what's the cost of security, in terms of communication, in such a setting?
Perfect. So in order to give a meaningful answer to this question, we have to distinguish between two different cases. The first case is when the sender's input is much larger than the receiver's input, and a very common and useful example is oblivious transfer, where the sender has two long messages, say of length n, and the receiver Bob wants to learn one of them. We want to achieve this while keeping Bob's choice private against Alice, and without giving Bob any information about the other string, the one he did not choose to get. If we do not need security, then we can let Bob send his choice bit x, and Alice can respond with the chosen string. The communication here is dominated by the length of the second message, which is n in this case.

So our goal for secure protocols is going to be to optimize the length of the second message, and this means we want to optimize the download rate of a protocol, which we define as the ratio between the output length of the function, which is n for such functions, and the length of the second message of the protocol we're analyzing. Cool. So again, without security we can get an optimal download rate of exactly one.
So now we ask whether we can match that with secure protocols, and the bad news is that we cannot, which is not surprising. More specifically, we show that if lambda is our security parameter, then the length of the second message has to be larger than n by at least 2·lambda. So the best we can hope for, in some sense, is a download rate that approaches one as n grows larger and larger, and this is exactly what we get with trapdoor hash.

So let's see what we could do before trapdoor hash. We could get rate-half oblivious transfer from generic assumptions; these are protocols where the sender has to send at least 2n bits. The only way to get a higher rate was to use higher-rate homomorphic encryption, and the only such encryption scheme that was known is the Damgård–Jurik scheme, whose security is based on the DCR assumption. So from all standard assumptions other than DCR, the best we could do was rate-half oblivious transfer. The only exception to this statement is two very recent papers, one by Gentry and Halevi and the other by Brakerski et al., where they show how to construct optimal-rate fully homomorphic encryption schemes under the LWE assumption.

Good. So using trapdoor hash we get the first optimal-rate oblivious transfer protocols under the DDH, QR, and LWE assumptions, and we also get a new construction from DCR with nicer properties compared to Damgård–Jurik. More specifically, we get statistical sender privacy in such protocols, and receiver privacy which is computational under these assumptions, and the sender in such protocols sends n plus poly(lambda) bits in the second message. Good. So we can further get protocols for more general functions, such as batch OT, batch OLE, and matrix-vector product.

Perfect. So besides rate-one OT being interesting on its own, it also has lots of powerful applications. The first application is in private information retrieval: using our rate-one
oblivious transfer protocols, we get the first protocols for single-server PIR that have both polylogarithmic communication and optimal download rate. We get the first such constructions from DDH, QR, and LWE, and in particular, we finally settle the open question of constructing PIR with polylog communication from DDH or QR. The second application can be seen as a generalization of the above: here we get homomorphic encryption for branching programs, where the length of the ciphertext grows only with the length of the branching program but is independent of its width. Both of these applications are based on a transformation from the work of Ishai and Paskin in '07. In the third application I'm going to mention, we get the first optimal-rate constructions of lossy trapdoor functions, again from DDH, QR, and LWE.

Good. So the second scenario I'm going to consider for our applications is when the receiver's input is actually the larger of the two. The generic example is when Bob, the receiver, has a huge database of size n, and the sender Alice has a small RAM machine M with running time much smaller than n. The goal here is to let Bob learn the output of M when we run it on his database x. Notice that because the running time of M is much smaller than n, when we run it on the database x it looks at very few locations of the database. Again, we want to achieve this functionality while keeping both M and x private, and there are lots of real-life applications, for those of you who care. Without security, we can again let Alice just send the description of the machine M, and if we assume that M is small, then the communication here is independent of n, and actually much smaller than n. So our goal for secure protocols is going to be to achieve communication that's somehow smaller than n, and this is already a non-trivial task when we restrict ourselves to protocols with two
messages only. Another thing I want to mention is that if we want security here, then we need to work in a model where we assume there's a common reference string that the two parties can access.

Okay, so before trapdoor hash, the only solution to this problem that could get you sublinear communication was laconic function evaluation, where you get communication proportional only to the running time of M, and this is in some sense optimal up to poly(lambda) factors. Laconic function evaluation gives you security under the LWE assumption, and if lattice-based security was not good enough for you, then another thing you could do is use laconic oblivious transfer, which gives more or less the same communication complexity with security based on the DDH assumption. However, the problem with laconic OT is that it does not guarantee the full notion of security; in particular, the access pattern of the machine M is revealed to the receiver Bob.

So using trapdoor hash we get the first fully secure solution with sublinear communication and security based on number-theoretic assumptions. In particular, using something we call private laconic OT, we get a protocol with communication complexity proportional to t times the square root of n under the DDH assumption, and if we are willing to use bilinear groups with pairings, then we can bring that down to the cube root of n. And I'm already going to give you an open question: can we close this efficiency gap between lattice-based solutions and solutions based on assumptions like DDH? Okay, so now that you know what the applications of trapdoor hash are,
let's talk about trapdoor hash itself. In order to define it, I'm actually going to begin with the trapdoor functions we all know, and I'm going to take you back to the two-party context. Trapdoor functions allow some party, Bob, to generate a pair of a key and a trapdoor. Bob can then publish the key, and in particular send it to his friend Alice. Now Alice, given the key, can take her input x, evaluate the trapdoor function on it, and get some image y, and the only one who can take y and invert it back to x is Bob, who has the trapdoor. So using the key you can evaluate the function, and using the trapdoor you can invert it.

Trapdoor functions allow Bob to recover the entire pre-image x of y, and information-theoretically, if he wants to do that, he has to receive information which is at least as large as x. So y has to be at least as large as x here. However, in our applications we need something a bit different: it's enough for us that Bob learns only a small part of x. We don't need him to learn the entire pre-image, but on the other hand we want to minimize communication, and that's what trapdoor hash is useful for. So trapdoor hash functions allow Alice to compute a very small image of her input x, which we call the hash value of x and denote by h. And now Bob wants to learn only a small part of x.
So let's assume he wants to learn the i-th bit of x. Information-theoretically, he's going to need something more than the hash value. So the trapdoor part of trapdoor hash functions allows Bob to generate, again, a key and a trapdoor, and he's going to publish the key and send it to Alice. Now, given the key, Alice can compute another really small image of her input x, which we call the hint, and only Bob, who has the trapdoor, can use this hint together with the hash value to recover x_i. So again, Bob recovers only x_i, but he needs much less information, consisting of the hint and the hash only.

The syntax of trapdoor hash consists of four algorithms: the hashing function, which takes the input x and randomness r; the key generation, which lets Bob generate a key and a trapdoor; the hinting function; and the decoding, which, given the trapdoor, inverts the hash value and the hint and recovers x_i. We're going to require security for both parties: we want Alice's input x to remain private, so the hash should not reveal any information about it, and on the other hand we want Bob's input i to remain private as well.
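Just to pin down this four-algorithm syntax, here is a deliberately silly strawman in Python: it satisfies the correctness requirement (Bob recovers exactly x_i from the hash, the hint, and the trapdoor), but it compresses nothing and hides nothing, which is exactly what a real trapdoor hash must fix. All names here are mine for illustration, not the paper's formal syntax.

```python
def thash(pp, x, r=None):
    return tuple(x)      # a real scheme outputs a short digest, independent of n

def keygen(pp, i):
    return i, i          # (key, trapdoor); a real scheme's key hides i

def hint(pp, key, x, r=None):
    return None          # a real scheme outputs a short hint computed under the key

def decode(td, h, e):
    return h[td]         # Bob recovers x_i from hash + hint + trapdoor

x = [0, 1, 1, 0, 1]
h = thash(None, x)
key, td = keygen(None, 3)
e = hint(None, key, x)
assert decode(td, h, e) == x[3]   # correctness: Bob learns exactly x_i
```

The real constructions below replace each of these four stubs with something that is simultaneously short (for the rate) and hiding (for the two privacy requirements).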
So the key that he sends to Alice should not reveal any information about it. Our main efficiency goal here is, first, that the hash should be small; in particular, its size should be independent of n. We also want the hints to be small; in other words, we want the rate of the trapdoor hash to be high, where we define the rate as the inverse of the length of the hint. To make some sense of this rate definition, notice that Bob can generate multiple keys in order to recover multiple bits of the database, and you can see that the rate as we defined it is asymptotically equal to the ratio between the amount of information Bob recovers and the length of the hints he needs in order to recover this information. So the main technical contribution of our paper is constructions of trapdoor hash with optimal rate under the DDH, QR, LWE, and DCR assumptions.

Next, I'm going to show you the trapdoor hash construction from DDH, and I want to say that we use techniques that were used before to construct schemes for IBE, laconic OT, and trapdoor functions with very nice properties. We're going to work over a multiplicative abelian group G of prime order p, with a public generator which we denote by lowercase g. Let's recall the DDH assumption, which says that if we take two random integers a and b from Z_p, then g^{ab} looks like a uniform group element, even given g^a and g^b. So let's proceed to the construction. Again, Alice has an input x, and we're going to have public parameters for our construction: these will be 2n uniform group elements, which we arrange in a 2-by-n matrix.
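To make this setup concrete, here is a toy illustration, with tiny parameters that are of course completely insecure, of the group, a DDH tuple, and the 2-by-n matrix of public parameters. The concrete numbers and function names are mine, chosen only so the sketch runs.

```python
import random

# Toy group: the quadratic-residue subgroup of Z_227^*, which has prime
# order Q = 113 and generator G = 4. Real parameters would be ~2048 bits.
P = 227   # safe prime: P = 2*Q + 1
Q = 113   # prime order of the subgroup we work in
G = 4     # generator of that subgroup

rng = random.Random(0)
a, b = rng.randrange(1, Q), rng.randrange(1, Q)
ga, gb = pow(G, a, P), pow(G, b, P)
gab = pow(G, (a * b) % Q, P)   # DDH: this should look uniform given ga and gb

def sample_pp(n, rng):
    """Public parameters: 2n uniform group elements as a 2 x n matrix."""
    return [[pow(G, rng.randrange(1, Q), P) for _ in range(n)] for _ in range(2)]

pp = sample_pp(8, rng)
```

Of course, at this size the discrete log is trivial to compute, so nothing here is hidden; the point is only the shape of the objects the construction manipulates.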
So again, these are the public parameters, and now let's define the hash function. The way Alice computes the hash is as follows: she goes over every column of the public matrix, and she takes the top group element if the corresponding input bit is 0, and the bottom group element if the corresponding bit is 1. She collects these n group elements, multiplies them all together, and gets a hash value, which is essentially a single group element. So again, the hash value is defined as this product, and now she sends the hash to Bob. Like we said, we were expecting this hash value to be private; however, here we do not use any randomness, so this hash function of course cannot be private. But you'll have to believe me that with a bit more effort we can get statistical privacy for Alice.

Okay, so now Bob wants to learn the i-th bit of the database, and for that he generates a pair of a key and a trapdoor. The trapdoor is just a uniform integer in Z_p, which we denote by t, and he generates the key as follows: he takes the matrix from the public parameters and raises every group element in it to the power of t, the trapdoor, and then he goes to the bottom element of the i-th column and multiplies it by g. So the key looks like that, and we denote the group elements in it by the g-tildes. Under the DDH assumption, we can show that the g-tildes look like a uniform matrix even given the public parameters, and therefore they hide the value of i from Alice.

So now I want to show you how Alice can compute a hint that will eventually let Bob recover x_i, and the way she's going to do it is very similar to the way
she computed the hash before, except that now she uses the g-tildes rather than the g's. So again, she goes over the matrix, takes the corresponding group element from every column, multiplies everything, and gets a group element, which is the hint e. So again, we define it as this product.

We can already analyze the rate of this construction. Again, we define the rate as the inverse of the length of the hint, and we know that if we want security from DDH, then the length of the hint has to be proportional to the security parameter. So roughly speaking, the rate of this construction is going to be one over lambda.

Okay, so now Alice sends the hint to Bob, and all that remains is to show you how Bob, given the hash h, the hint e, and the trapdoor, can recover the value of x_i. I claim that all Bob has to do is compare e against h^t and against h^t times g, and Bob can learn the value of x_i from this comparison: if e is equal to h^t, Bob can conclude that x_i is zero, and otherwise he learns that x_i is actually one. I'm going to convince you now why this is true. Let's assume for now that we define the g-tildes simply as the t-th powers of the public group elements. If the g-tildes are defined this way, then it's easy to see that the hint is going to be equal to h^t in all cases.
Right, we just multiply the same group elements raised to the power of t. But in our construction we do something a bit different: we actually multiply g-tilde_{i,1} by an extra factor of g. Now notice that the hash value is still computed as before, but the hint e now depends on whether we use the group element g-tilde_{i,0} or g-tilde_{i,1} in its computation, and this of course depends on the i-th bit of the input x. So notice that if x_i is zero, then we don't pick up this g factor, and if x_i is one, we do pick it up, and therefore the element e is always going to be equal to h^t multiplied by g^{x_i}. I hope I convinced you. So this is our trapdoor hash construction from DDH.

But I promised you an optimal-rate construction, and this is clearly not one, because the rate here is one over lambda, like I said. So what I'm going to show you next, or at least explain at a high level, is how to optimize the rate of this construction and get a rate-one trapdoor hash. Good. So our goal, essentially, is to take this hint, which is a group element, and compress it into a single bit; if we do that, we get a rate-one trapdoor hash. But to keep the construction correct, we have to be able to distinguish between the compression of h^t and the compression of h^t times g, because otherwise Bob will not be able to make that comparison and learn the value of x_i. So we want some encoding into a single bit that distinguishes between these two values, and a very natural candidate is to take the parity of the discrete log of the hint: notice that the discrete logs of h^t times g and of h^t always differ by one, and therefore their parities will always be different. However, if we could actually compute discrete logs, we could also break the security of the scheme.
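Before moving on to the rate optimization, the basic construction just described can be sketched end to end in a few lines. This is a toy illustration under assumed tiny parameters (a subgroup of order 113 in Z_227^*, so completely insecure); the function names are mine, and the hashing randomness needed for Alice's statistical privacy is omitted, as in the talk.

```python
import random

# Tiny, insecure toy parameters: a prime-order subgroup of Z_227^*.
P, Q, G = 227, 113, 4   # safe prime, subgroup order, generator

def sample_pp(n, rng):
    """Public parameters: a 2 x n matrix of uniform group elements."""
    return [[pow(G, rng.randrange(1, Q), P) for _ in range(n)] for _ in range(2)]

def thash(pp, x):
    """Alice's hash: per column, take the row selected by the bit; multiply."""
    h = 1
    for j, bit in enumerate(x):
        h = (h * pp[bit][j]) % P
    return h

def keygen(pp, i, rng):
    """Bob's key: raise everything to a secret t, plant one extra g in the
    bottom row of column i. The trapdoor is t."""
    t = rng.randrange(1, Q)
    key = [[pow(elt, t, P) for elt in row] for row in pp]
    key[1][i] = (key[1][i] * G) % P
    return key, t

def hint(key, x):
    """Alice's hint: the same bit-selected product, over the key matrix."""
    e = 1
    for j, bit in enumerate(x):
        e = (e * key[bit][j]) % P
    return e

def decode(t, h, e):
    """Bob's test: e always equals h^t * g^{x_i}."""
    if e == pow(h, t, P):
        return 0
    if e == (pow(h, t, P) * G) % P:
        return 1
    raise ValueError("decoding failed")

rng = random.Random(1)
x = [1, 0, 1, 1, 0, 0, 1]
pp = sample_pp(len(x), rng)
h = thash(pp, x)
for i, bit in enumerate(x):
    key, t = keygen(pp, i, rng)
    assert decode(t, h, hint(key, x)) == bit
```

The final loop checks correctness at every index: the hint always comes out as h^t · g^{x_i}, which is exactly the comparison Bob performs. Note that the hint is still a whole group element here, which is why the rate is only one over lambda.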
So this parity-of-discrete-log encoding is not efficiently computable, obviously. The alternative is to use a very useful tool called distributed discrete log, which was first introduced by BGI (Boyle, Gilboa, and Ishai) in a totally different context. Distributed discrete log is actually an efficient algorithm, but it still satisfies the property we need: if two group elements have discrete logs that differ by one, then their encodings are different. So the encoding we're going to use is simply the parity of the distributed discrete log of our group element; distributed discrete log satisfies this property, and that gives us the inequality we need for correctness.

Okay, so with the time I have left, I'm just going to give you some open questions. To conclude, we introduced this new primitive. It's very simple and easy to realize from many standard assumptions, but on the other hand it's super powerful: we get lots of new stuff we couldn't get before. We showed applications in two scenarios, and one can ask lots of interesting questions; I think the most important one is whether these techniques and this primitive can be used to get other stuff. Thanks for listening.

[Session chair] So we have time for a brief question, and in the meantime, will the next speaker please come to the podium. There are no questions from the audience, so let me ask one. I'm fine with lattice assumptions and LWE, but the rate can be a critical issue in practice from an efficiency point of view. Are these techniques for improving the rate potentially interesting also in terms of the practical performance of the scheme?

[Speaker] So we get communication
that's really tight. So we get rate one, and the extra bits we pay on top are actually few in many of our constructions. I'm not sure I can say that the computational complexity can match practical requirements, but speaking of communication, the constructions are already good.

[Session chair] But do you think there is potential for these techniques to also lead to something that people may actually want to use?

[Speaker] It's a good question; one has to think about it.

[Session chair] Okay, so let's thank the speaker again.