I'm going to tell you about Laconic Oblivious Transfer and its applications. In this talk, I have only one thing to tell you, which is the most interesting and most important part of this work: a new notion called Laconic Oblivious Transfer, or laconic OT for short, that we introduce in this work. If you only want to spend five minutes on this talk, please stay with me for the first five minutes. So what is it? Let's start from the definition of oblivious transfer, which is a special secure two-party computation protocol between a sender and a receiver. The sender has two messages, m0 and m1, as input, and the receiver has a single bit b as input. At the end of the protocol, the receiver gets one of the two messages depending on his choice bit, and the sender gets nothing. The security guarantee is that the bit b is hidden from the sender, and the other message m_{1-b} is hidden from the receiver. That is the definition of oblivious transfer. Now consider a particular kind of OT protocol, two-message OT, where there are only two messages in the protocol. In the first message, the receiver somehow commits to his bit b to the sender and keeps some randomness r as private information. In the second message, the sender somehow encodes her two messages and sends the result to the receiver. The receiver can then use his randomness to recover m_b, as desired. The security guarantee is the same as before: the first message hides the bit b from the sender, and the second message hides m_{1-b} from the receiver. So that is two-message OT. And here comes the most important slide of this talk, the definition of laconic OT. It is almost the same as before, except that now the receiver, instead of having just one bit as input, has a huge database D of size M. Again, in the first message, he somehow commits to this huge database to the sender and keeps some randomness r as private information.
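To make the two-message OT message flow concrete, here is a toy sketch of its interface in Python. This only illustrates the data flow described above; the "commitment" and "ciphertext" are placeholders with no cryptographic security, and all function names are illustrative, not from the talk.

```python
import secrets

# Toy sketch of the two-message OT interface (message flow only --
# NOT a secure protocol; commitment/ciphertext are placeholders).

def ot_receive_round1(b):
    """Receiver commits to choice bit b; keeps randomness r private."""
    r = secrets.token_bytes(16)        # private randomness kept by the receiver
    commitment = ("commit", b, r)      # stands in for a hiding commitment to b
    return commitment, r

def ot_send_round2(commitment, m0, m1):
    """Sender encodes (m0, m1) against the receiver's first message."""
    # In a real protocol only m_b becomes recoverable; here we just
    # package both messages to show what flows over the channel.
    return ("ciphertext", m0, m1)

def ot_recover(ciphertext, b, r):
    """Receiver uses his bit b and randomness r to recover m_b."""
    _, m0, m1 = ciphertext
    return m1 if b else m0
```

In a real instantiation, the first message would computationally hide b and the second would leave m_{1-b} unrecoverable; the sketch only fixes who sends what, and when.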
Then the sender, again, has two messages m0 and m1, but in addition she can pick a location L in the database. She somehow encodes her two messages according to this location L and sends the result. The receiver can then use the database along with the randomness to recover one of the two messages, depending on the bit at that location, denoted m_{D[L]}. Here D[L] is the L-th bit of the database: if this bit is 0, the receiver gets m0; if it is 1, he gets m1. The security definition is the same as before. The first message hides the database from the sender, and the second message can only be decrypted to m0 if the bit D[L] is 0, and can only be decrypted to m1 if it is 1. Now, if you use the existing techniques for two-message OT, you can do something like this, but the first message would be huge: it would grow with the size of the database. Laconic OT has the additional requirement of efficiency, namely that these two messages and the receiver's computation are succinct. By succinct, I mean that the communication complexity and the computational complexity are independent of the size of the database; they depend only on the security parameter. That is why we call it laconic. Moreover, it can be done multiple times: the sender can later come up with a new pair of messages and a new location and do it again, and so on, any arbitrary polynomial number of times. So this is the definition of laconic OT. Our main result is that we can construct such a primitive from the decisional Diffie-Hellman assumption. Here I want to stress that we do not even know how to construct laconic OT from fully homomorphic encryption (FHE). Usually you would think of FHE as a very natural and powerful tool for reducing communication complexity, but it turns out not to help in this case; we do not know how to construct laconic OT even with FHE.
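The laconic OT interface described above can be summarized as three algorithms: a hash of the database, a send, and a receive. The sketch below captures only the functionality and the succinctness of the digest; the "ciphertext" is a placeholder with no security, and the names are illustrative.

```python
import hashlib

# Interface sketch of laconic OT (functionality only, no security).
# The digest is short; the cost of lot_send/lot_receive is meant to be
# independent of |D|.

def lot_hash(D):
    """Receiver: compress database D (a list of bits) into a short digest."""
    digest = hashlib.sha256(bytes(D)).hexdigest()
    return digest, D                 # D itself plays the role of private state

def lot_send(digest, L, m0, m1):
    """Sender: encode (m0, m1) under the digest and a location L of her choice."""
    return ("lot_ct", L, m0, m1)     # placeholder ciphertext

def lot_receive(state, ct):
    """Receiver: recover m_{D[L]} using the database."""
    _, L, m0, m1 = ct
    D = state
    return m1 if D[L] else m0
```

A real construction must ensure that the ciphertext reveals only m_{D[L]} even though the sender holds just the short digest; achieving exactly that from DDH is the subject of the rest of the talk.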
The only natural way to construct it is via very strong assumptions like obfuscation or witness encryption. So this is our result; five minutes are up, and you're good to go. But then you might ask: why? Why should I care about this primitive? Why should I try to understand or remember this definition? Why is it interesting at all? One reason is that it turns out to be very useful and very powerful. Here are a bunch of applications that we found using laconic OT in different scenarios of secure computation, and we believe there are more. We believe it is a very, very powerful tool for reducing communication complexity and computational complexity in different scenarios. I want to mention that the last point is not a direct application of laconic OT, but our ideas and techniques have been very useful for constructing identity-based encryption from the Diffie-Hellman assumption, which you have already seen yesterday. I won't go over all the applications; I will just talk about the first one. But before that, I want to emphasize some of the keywords here, to see what these applications have in common: in what kind of scenarios could laconic OT be helpful? As you can see: large inputs, large inputs, RAM, garbled RAM, RAM, and so on. It is also somewhat obvious from the definition of laconic OT that it could be helpful when there is a large database, when you are doing large-scale secure computation: it can reduce the communication complexity or the computational complexity in different scenarios. I will elaborate more on this later. In the rest of the talk, I will first go through the first application, to give you a better sense of how to use laconic OT, and then I will tell you how to construct it from the Diffie-Hellman assumption. So let's start with the first application: non-interactive secure computation on large inputs in the circuit model. What does that mean?
Let's start with the definition of secure two-party computation. We have two parties who want to jointly compute some function f on their private inputs x and y. Both parties want to learn the output f(x, y) without revealing any more information about their private inputs. That is the definition of secure two-party computation. We are particularly interested in the setting where one of the parties, let's say me, has a huge database, and the function we are computing is a Boolean circuit: secure computation on large inputs in the circuit model. The notion of non-interactive secure computation, or NISC for short, was introduced by Ishai et al. in 2011. In the setting of NISC on large inputs, there is one party, say me, who has a huge database; say I have put all the data of my entire life in this database. I somehow compute a succinct commitment to my huge database and publish it to the world; I put it on my homepage. Then whoever wants to do the secure computation above with me can do it in a non-interactive way. In particular, say there is a party Alice who has input x and wants to do this secure computation with me. She can just send me a single message, and then I can use my database and the randomness r to recover the output f(x, D). And I can do this with anyone in the world: there is another Alice with a different x, and she can also do it with me non-interactively, and so on. The key aspect of this application is that we can make the first message that I publish to the world succinct; otherwise I wouldn't be able to put it on my homepage, right? So succinctness is the key point in this application. If you look at this picture, it looks very similar to the definition of laconic OT; if you don't see it, you can look at it this way, and then this is exactly what laconic OT is doing.
And we should be able to use laconic OT here. So we will do it from laconic OT, plus a primitive called Yao's garbled circuits. I want to stress that we don't even know how to do this via the strong primitive of fully homomorphic encryption (FHE), even though it is a very straightforward application of laconic OT. Let's see how it works. First, let me briefly describe Yao's garbled circuits. We have a garbler, an evaluator, and a circuit C. The garbler garbles this circuit into a garbled circuit, and along with every wire there are two random bit strings, which we call labels: one label corresponding to the wire carrying 0, and one label corresponding to the wire carrying 1. We call them the zero label and the one label. These labels are hidden from the evaluator, but if the evaluator wants to evaluate the garbled circuit, he needs one label per input wire. He can then evaluate the garbled gates one by one and eventually figure out the output of the evaluation. The security guarantee is that the evaluator learns nothing more than the output. So that is Yao's garbled circuits. Now, back to the problem we want to solve. First, I want to commit to my huge database and publish the commitment to the world, and then whoever wants to do the secure computation can just send me a single message saying, here is a circuit we want to jointly compute. Since this looks so similar to the definition of laconic OT, let's just use laconic OT: the first message will simply be the first message of laconic OT. Then, in the second message, Alice generates a garbled circuit for the circuit we want to jointly compute, and I become the evaluator of this garbled circuit. I need one label per input wire. For the input wires from Alice, it is very simple: she can just send the corresponding labels directly to me, right? But for the input wires from me, how am I going to get these labels?
Ideally, we would do one oblivious transfer per wire. For example, for the first wire, the two inputs from Alice are the two labels for that wire, and my input bit is the first bit of the database: if the first bit is 0, I get the zero label; if it is 1, I get the one label. But we cannot do oblivious transfer here, because it needs more interaction. If you think about it, though, this is exactly what laconic OT can do. In particular, Alice just needs to generate one laconic OT ciphertext per wire. For example, the first ciphertext can only be decrypted to the zero label if the first bit of the database is 0, and can only be decrypted to the one label if that bit is 1. This is exactly what laconic OT gives us. Using laconic OT, I can use the database and the randomness to recover one label per input wire, and then I can evaluate the garbled circuit, and we're done, right? I want to say a few more words about the intuition behind this application. If you think about it, it is like doing an oblivious transfer for the first bit of the database, then the second bit, then the third bit, and so on: a large number of oblivious transfers in parallel, but with the first message made succinct to reduce the communication from the receiver to the sender. But laconic OT is actually more powerful than that, because you do not have to go through the first bit, then the second bit, then the third bit, and so on: the locations at which you do the oblivious transfers can be chosen arbitrarily. That, at a very high level, is the idea behind our applications in other scenarios. I won't go into more details here, but you can see our paper. So I have told you about the first application, and in the rest of the talk I will tell you how to construct laconic OT from DDH. We will do it in two steps. In the first step, I will construct a laconic OT with compression factor two.
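The "one laconic OT ciphertext per input wire" idea can be sketched as follows. This is functionality only: the "ciphertexts" are placeholders that simply carry both labels, and all names are illustrative; a real instantiation would ensure that only the label for D[i] is recoverable.

```python
import secrets

# Sketch: transferring one garbled-circuit label per database bit,
# using one laconic OT ciphertext per input wire (no security --
# the placeholder ciphertexts carry both labels in the clear).

def garble_input_labels(n):
    """Alice picks two random labels (for 0 and for 1) per input wire."""
    return [(secrets.token_hex(8), secrets.token_hex(8)) for _ in range(n)]

def sender_wire_ciphertexts(labels):
    """One laconic OT ciphertext per wire i, encoding (label0_i, label1_i)
    against location i of the committed database."""
    return [("lot_ct", i, l0, l1) for i, (l0, l1) in enumerate(labels)]

def receiver_select_labels(D, cts):
    """Receiver decrypts ciphertext i to the label for bit D[i] --
    exactly the guarantee laconic OT provides."""
    return [l1 if D[i] else l0 for (_, i, l0, l1) in cts]
```

With one label per wire in hand, the receiver evaluates the garbled circuit on his own database bits without ever sending them to Alice.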
I will tell you what that means in a second. In the second step, I will do something called bootstrapping, to bootstrap this basic laconic OT into a laconic OT with an arbitrary compression factor, which is exactly what we want. The first step can be done from DDH, and the second step can be done from Yao's garbled circuits again. So remember, this is the definition of laconic OT, and we say it has an arbitrary compression factor because the size of the database can be an arbitrary polynomial in the security parameter. If you are still following me, you may notice that I have secretly removed a small lock here: for the rest of the talk, I will not worry about receiver privacy, but don't worry, it is very easy to add back. If you don't care about receiver privacy, then the first message can be viewed as a succinct hash of the huge database, and for simplicity let's say it is of length lambda. As you can imagine, laconic OT with compression factor two just means the hash function here has compression factor two. The bootstrapping theorem says that as long as you have laconic OT with just a little bit of compression, factor two, then you are done. Let's see how it works. Say you have a hash function that compresses two-lambda bits to lambda bits, and you want to construct a hash function that compresses an arbitrarily long string into a lambda-bit string. What would you do? A Merkle tree. The idea is very simple. This is the database of the receiver, and the receiver will just compress it two lambda bits at a time, pair by pair, level by level, and the root of this hash tree will be the first message of laconic OT. That is just the first message; very simple. Then comes the sender. She has two messages m0 and m1 and a location L; let's say L = 1 for simplicity, since other locations are similar. She needs to somehow encrypt these two messages according to this location.
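The Merkle-tree step of the bootstrapping can be sketched directly. Here SHA-256 stands in for the underlying compression-factor-two hash (an illustrative stand-in; the talk's construction builds this hash from DDH):

```python
import hashlib

# Merkle-tree bootstrapping of the hash: a 2-lambda -> lambda hash
# applied pairwise, level by level, until one lambda-size root remains.

def h2(left, right):
    """The underlying compression-factor-two hash: 2*lambda bits -> lambda bits."""
    return hashlib.sha256(left + right).digest()

def merkle_root(blocks):
    """Hash a list of lambda-size blocks (length a power of two) down to the root,
    which serves as the succinct first message of laconic OT."""
    level = list(blocks)
    while len(level) > 1:
        level = [h2(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

The root is a single hash value regardless of how long the database is, which is exactly the succinctness the first message needs.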
Let's first focus on this part of the tree, and suppose the sender had this hash value. If the sender had this hash value, then we would be done. Why? Because the sender could just use laconic OT with compression factor two here, encrypt the two messages, and let the receiver decrypt. By the way, I put a small 2 here to indicate that this is a ciphertext from laconic OT with compression factor two. And then we would be done. But the problem is that the sender doesn't have this hash value. She cannot ask the receiver to send it back, because that needs more interaction, and she could ask the receiver to send the whole tree back, but that is just too large. So what is she going to do? Our idea is to use garbled circuits to somehow traverse the tree from the root all the way down to the leaf. In particular, the sender generates one garbled circuit per level of the tree, along with a bunch of laconic OT ciphertexts with compression factor two, and this whole thing will be our new ciphertext, sent to the receiver. It is still succinct, because it only grows with the depth of the tree. At a very high level, the receiver will use the tree and the garbled circuits to traverse from the root all the way down to the leaf, and the last garbled circuit will output the message as required. Let's take a closer look at how it works. I will name these strings S0, S1, S2, S3. I haven't told you yet what C1 is doing, but C1 takes S1 as input, and as you can imagine, the receiver is the evaluator of this garbled circuit: he needs one label per input wire corresponding to S1. How is he going to get these labels? The same idea as before, via laconic OT. In particular, this ciphertext can only be decrypted to the zero label here if the first bit is 0, and can only be decrypted to the one label here if the first bit is 1.
So using laconic OT with compression factor two here, the receiver can use S1 to decrypt these ciphertexts and get one label per wire, so he can evaluate C1. I still haven't told you what C1 is doing, but let's look at C2. C2 takes S2 as input, and again the receiver wants to evaluate C2, so he needs one label per input wire. We want to use the same idea as before, with laconic OT ciphertexts: for example, the first one can only be decrypted to the zero label if this bit is 0, and can only be decrypted to the one label if that bit is 1. But the question is: who is going to generate these ciphertexts? In order to generate them, you need this hash value here, but Alice doesn't know that hash value; she only knows the root. Although the sender doesn't know this hash value, C1 knows it: C1 takes it as part of its input. So we will just let C1 compute these ciphertexts and output them, and this is exactly what C1 is doing. In the bigger picture, the receiver first uses S1 to decrypt these ciphertexts and gets one label per input wire of C1; evaluating C1 then gives him the ciphertexts for the next level. Then he can use S2 to decrypt those ciphertexts and get one label per wire for C2, and so on, all the way down to the leaf, where the last garbled circuit outputs m_{D[1]} as required. And we're done with bootstrapping. So all that is left is the first step: laconic OT with compression factor two, and how to construct it from the decisional Diffie-Hellman assumption. Our construction is based on a primitive called somewhere statistically binding hash functions, and there is also an alternative approach to constructing this primitive using ideas and techniques from the work by Applebaum et al. I won't be able to go into more details of this construction, but please do see our paper for it. With that, I will conclude. This is the most important slide of this talk.
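The receiver's side of the traversal just described can be summarized as a short loop. This is a data-flow sketch only, with `decrypt` and `evaluate` left abstract (their real instantiations are laconic OT decryption with compression factor two and garbled-circuit evaluation); the names are illustrative.

```python
# High-level sketch of the receiver's traversal: at each level he holds
# the hash value S_i on the path to location L, decrypts that level's
# laconic OT ciphertexts into one label per input wire, and evaluates
# the garbled circuit, which emits the ciphertexts for the next level.
# The last circuit outputs m_{D[L]} instead of new ciphertexts.

def traverse(path_values, garbled_circuits, first_level_cts, decrypt, evaluate):
    cts = first_level_cts
    for s_i, gc in zip(path_values, garbled_circuits):
        labels = decrypt(s_i, cts)   # laconic OT: one label per input wire of gc
        cts = evaluate(gc, labels)   # garbled evaluation yields next level's cts
    return cts                       # final output of the last garbled circuit
```

Note that the sender only needs the root to produce the first level's ciphertexts; every deeper level's ciphertexts are produced inside the garbled circuits, which is what resolves the "sender doesn't know the inner hash values" problem.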
We introduced this new notion called laconic oblivious transfer. The crucial point is that we can make the communication complexity and the computational complexity here independent of the size of the database; they depend only on the security parameter. Our main result is that we can construct it from the decisional Diffie-Hellman assumption. One open question: so far, everything we have done is secure against semi-honest adversaries. We can make it maliciously secure at the cost of more interaction or stronger assumptions, so the open question is whether you can make it maliciously secure with the same efficiency and under the same or weaker assumptions. Laconic OT also turns out to be a very powerful tool in all these scenarios, and another open question is whether you can find more applications; we have some potential applications listed in our paper, if you are interested. So that's it. Thank you.