So, welcome everybody again. The second talk of this session is Batch OT with Optimal Rate by Zvika Brakerski, Pedro Branco, Nico Döttling, and Sihang Pu, and Sihang is going to give the talk. Your mic is off, your mic is off, right? Sorry for that. Thanks for the introduction again. I'm Sihang, and I will talk about Batch OT with Optimal Rate. It's joint work with Zvika Brakerski, Pedro Branco, and Nico Döttling. So, let me first recall the functionality of oblivious transfer (OT). There are two parties, a sender and a receiver. The sender inputs two bits, m0 and m1, while the receiver inputs a single choice bit b, and the functionality outputs the chosen bit m_b to the receiver. For security, the sender doesn't want the receiver to learn the other, unchosen message m_{1-b}; similarly, the receiver doesn't want the sender to learn which message was chosen by him. There's an amortized variant of OT called batch OT. Basically, it's n independent bit OTs, where the sender inputs n pairs of bits and the receiver inputs n choice bits to retrieve m_{b_1} through m_{b_n}. In this work, we focus on two-round batch OT protocols, which means each party in the protocol sends only a single message: the receiver sends a first message encoding his choice bits, and the sender responds with a second message encoding the chosen bits. The two notions we are interested in in this work are upload rate and download rate. The upload rate is the ratio between the choice bits and the first message; the download rate is the ratio between the chosen bits and the second message. So, it's natural to ask: can we build a batch OT protocol with optimal rate? Here, optimal rate means both the upload rate and the download rate are close to 1. In other words, the total communication complexity tends to 2n, where n is the number of bits you transfer. The straightforward solution is to use rate-1 FHE.
A rate-1 FHE ciphertext for an m-bit message contains or conveys only roughly m bits. But there are a few drawbacks. The first is that it requires a lattice assumption. Second, it's not computationally efficient, due to the bootstrapping mechanism used. So, we ask: can we build a rate-1 batch OT from assumptions that do not imply FHE, or without lattice assumptions? The answer leads to our main result. We are able to build a batch OT protocol with overall rate 1 which is secure against semi-honest adversaries. We need the DDH assumption to argue sender security, and we need the DDH plus LPN assumptions to argue receiver security. As an additional result, we show how to emulate a small subgroup inside Z_p, which gives us the first statistically function-private linearly homomorphic encryption under DDH with rate-1 ciphertexts. Here's a roadmap of this talk. I will first show you how to build a standard OT from the ElGamal encryption scheme. Then I will show how to achieve download rate 1 via ciphertext compression. Next, I will show how to achieve upload rate 1 via an LPN-based encryption technique. At the end, I will briefly describe how to tackle two small issues that appear in the last step. As a warm-up, let's see how we can build a standard batch OT from the ElGamal scheme. The receiver generates a public key, namely a generator g and a group element g^y, where y is the secret key. To generate the ciphertext, the receiver encodes his choice bit b as g^b. Note that this encoding can be decoded efficiently due to the small message space. The receiver sends the public key and the ciphertext to the sender. The sender homomorphically computes an OT function on this ciphertext. This OT function is linear: if x equals 0, the output is m0; if x equals 1, the output is m1. After the evaluation, the ciphertext will encrypt g^{m_b}, of course with refreshed randomness. Then the sender sends the ciphertext back to the receiver, and the receiver can decrypt it with his secret key to learn the chosen bit.
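The warm-up protocol above can be sketched in a few lines of Python. This is only a toy model with a tiny illustrative group — a real instantiation needs a large DDH-hard group — and the function names are mine, not the paper's. The OT function evaluated homomorphically is the linear map f(x) = m0 + x·(m1 − m0), which gives m0 at x = 0 and m1 at x = 1 as described in the talk.

```python
import random

# Toy parameters: a safe prime p = 2q + 1 and a generator g of the
# order-q subgroup. Illustrative only; real schemes use a large group.
p, q, g = 1019, 509, 4

def keygen():
    y = random.randrange(1, q)
    return y, pow(g, y, p)                    # secret key y, public key h = g^y

def enc(h, b):
    # "exponential" ElGamal: encode the choice bit b as g^b
    r = random.randrange(1, q)
    return pow(g, r, p), (pow(h, r, p) * pow(g, b, p)) % p

def eval_ot(h, ct, m0, m1):
    # f(x) = m0 + x*(m1 - m0) is linear in x, so it can be evaluated
    # homomorphically on an encryption of x
    c0, c1 = ct
    d = (m1 - m0) % q
    e0 = pow(c0, d, p)
    e1 = (pow(c1, d, p) * pow(g, m0, p)) % p
    s = random.randrange(1, q)                # refresh the randomness
    return (e0 * pow(g, s, p)) % p, (e1 * pow(h, s, p)) % p

def dec(y, ct):
    c0, c1 = ct
    gm = (c1 * pow(c0, q - y, p)) % p         # recover g^{m_b}
    return 0 if gm == 1 else 1                # decodable since m_b is a bit
```

Whatever pair (m0, m1) the sender evaluates on, the receiver's decryption yields exactly the bit selected by his encrypted choice bit, and the refreshed randomness hides everything else.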
Considering the communication: the download rate is 1/(2·|g|), as there are two group elements in the second message, and the upload rate is 1/(4·|g|), as there are four group elements in the first message (here |g| denotes the bit length of a group element). So, this basic protocol is not satisfying, as it has poor rates. How can we improve on that? We use a ciphertext compression technique to achieve a download rate of 1. That is, to encrypt L bits, the public key is going to be L+1 group elements, and the uncompressed ciphertext also contains L+1 group elements, which all share the same randomness. After compression, the ciphertext is composed of a header and L payload bits. The header contains c0 and a key, and the size of the header depends only on the security parameter. In this way, we can compress the ciphertext into L bits plus some constant, which is an asymptotically rate-1 ciphertext. With this technique, we can amortize the sender's message as follows. The receiver encodes his choice bits, each choice bit encoded as a vector; he has L choice bits, so there are going to be L ciphertexts of vectors. The receiver sends the public key and the L ciphertexts to the sender. The sender homomorphically computes the OT function on each ciphertext; note that for each ciphertext, he runs the same function on each coordinate of the vector. After evaluation, each ciphertext will encrypt the chosen bit at its specific position. After summing them up, the sender can compress the result into a short ciphertext using the ciphertext compression technique. After receiving it, the receiver can decrypt it to learn the chosen bits. In this way, the download rate becomes L/(L + constant), so as long as the number of bits to transfer is large enough, this rate is close to 1. However, on the upload side, the rate is even worse, because we use L² group elements to encode just L bits.
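To give some intuition for the compression step, here is a toy sketch of one way such compression can work: compress each payload group element to the parity of its distance to the next "breakpoint" in the group (the speaker mentions PRF-generated breakpoints in the Q&A). The hash-based breakpoint, the tiny group, and all names below are my illustrative assumptions, not the paper's exact construction.

```python
import hashlib

p, q, g = 1019, 509, 4                 # toy group: order-q subgroup of Z_p^*
T = 16                                 # breakpoint density roughly 1/T

def is_breakpoint(K, z):
    # toy stand-in for a keyed PRF marking ~1/T of group elements
    digest = hashlib.sha256(f"{K}|{z}".encode()).digest()
    return digest[0] % T == 0

def dist(K, z):
    # number of multiplications by g until a breakpoint is reached
    for j in range(q):
        if is_breakpoint(K, z):
            return j
        z = (z * g) % p
    raise RuntimeError("no breakpoint in the subgroup")

def compress_bit(K, c_i):
    # payload element c_i = h_i^r * g^{m_i} shrinks to a single bit
    return dist(K, c_i) % 2

def decompress_bit(K, w_i, bit):
    # the receiver can derive w_i = h_i^r from the header; if m_i = 1 the
    # sender walked from w_i * g, one step closer to the breakpoint, so
    # the parity flips (barring a breakpoint in between)
    return (dist(K, w_i) % 2) ^ bit
```

The header (c0 and the key K) has size independent of L, so L payload elements cost L bits plus a constant, matching the L/(L + constant) download rate from the talk.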
So, how do we improve the upload rate? Our approach is to use an LPN-based encryption technique. What we actually need is a rate-1 encryption scheme with linear decryption, and LPN almost fulfills these requirements. Recall what LPN means: for a uniformly random matrix A, a random secret vector s, and an error vector e with small Hamming weight, LPN says that A·s + e is computationally indistinguishable from a uniformly random vector. A symmetric encryption scheme then simply works as follows: given a secret key s, we can encrypt a binary vector m by computing d = A·s + e + m; to decrypt, just compute d − A·s. For the moment, we will ignore the decryption errors. So, with LPN, the receiver can encrypt his choice bits under this LPN scheme. Additionally, he will encrypt the LPN secret under the ElGamal scheme. The receiver then sends the ElGamal public key, the ElGamal ciphertext, the LPN ciphertext, and the matrix A to the sender. The sender will first homomorphically decrypt the LPN ciphertext under the hood of ElGamal, so he gets a ciphertext encrypting the receiver's choice bits; then he can homomorphically evaluate the OT function and compress the result into a short ciphertext as before. The receiver then just decrypts it to learn the chosen bits. By doing it this way, the upload rate is L/(n·poly(λ) + L), which is also close to 1 as long as the dimension n of the LPN secret is much smaller than the number of samples L. Also, notice that we ignore the matrix A in this calculation, because we can reuse the matrix for multiple batches of choice bits. But there are still two small issues. The first one is that LPN has decryption errors, so the protocol will produce incorrect outputs. Second, ElGamal is actually not a function-private scheme over Z2, but in the step where we homomorphically decrypt the LPN ciphertext, we actually need to operate over Z2. So, let's first see how to deal with the LPN errors. We need to run additional protocols in parallel.
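The LPN-based symmetric scheme just described is short enough to write out. Below is a minimal sketch over Z2, using plain lists as vectors; the dimensions and error rate are placeholders, not the parameters of the paper. Note that the encryptor knows exactly which positions carry an error — which is what the error-handling step exploits.

```python
import random

def lpn_encrypt(A, s, m, err_rate):
    # Enc_s(m) = A*s + e + m over Z_2, with sparse Bernoulli error e
    e = [1 if random.random() < err_rate else 0 for _ in range(len(A))]
    c = [(sum(a * b for a, b in zip(row, s)) + ei + mi) % 2
         for row, ei, mi in zip(A, e, m)]
    return c, e        # the encryptor (receiver) learns the error positions

def lpn_decrypt(A, s, c):
    # Dec_s(c) = c - A*s; over Z_2 subtraction is the same as addition.
    # The result equals m except at positions where e = 1.
    return [(ci + sum(a * b for a, b in zip(row, s))) % 2
            for row, ci in zip(A, c)]
```

Since decryption is the linear map c − A·s, a linearly homomorphic scheme like ElGamal can evaluate it "under the hood" on an encryption of s, which is exactly how the sender recovers an ElGamal encryption of the choice bits.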
Let's first consider the positions with errors, which means the choice bits at these positions will be flipped after LPN decryption. The receiver knows the error positions, so for each error position, he can compute the first message of an additional OT protocol with the choice bit b_i as its input, and he can compute the first message of a PIR protocol with the position i as input. Then, he sends both of these messages for each error position to the sender. After receiving them, the sender, also for each error position, computes the OT response based on these messages for all of his inputs. In this way, the sender gets a database of OT responses for each error position. Then, the sender computes the PIR response based on each database for each error position and sends the responses to the receiver. The receiver can locally recover the chosen bits at these error positions by finishing the PIR protocol followed by the OT protocol. Okay, but what about the bits at the positions without errors? We cannot just directly send them to the receiver, because the sender doesn't know which positions have errors and which do not. So, the sender needs an additional technique called a distributed puncturable PRF. The sender holds the PRF key and masks all of his inputs with the PRF values, and then obliviously generates and transfers the punctured key to the receiver. With this punctured key, the receiver can compute the PRF values, and thus unmask the bits, at all positions except for the error positions. This can be constructed from known techniques. So, the last issue is about function privacy. Sender privacy actually doesn't hold in the above protocol, because ElGamal is not function-private over Z2. The reason is that the group Z_p doesn't have non-trivial subgroups, so if we encode the bit as g^b, it will leak information, since we cannot do modular reduction.
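The puncturable PRF used for the masking step can be instantiated with the classic GGM tree construction, presumably one of the "known techniques" meant here. Below is a minimal sketch of my own, with SHA-256 as a toy length-doubling PRG and puncturing at a single point for simplicity (the protocol punctures at every error position):

```python
import hashlib

def G(seed, bit):
    # toy length-doubling PRG, split into left/right halves by bit
    return hashlib.sha256(seed + bytes([bit])).digest()

def prf(key, x, depth):
    # GGM: walk the binary tree along the bits of x, MSB first
    s = key
    for i in reversed(range(depth)):
        s = G(s, (x >> i) & 1)
    return s

def puncture(key, x, depth):
    # co-path seeds: enough to evaluate at every point except x
    copath, s = [], key
    for i in reversed(range(depth)):
        b = (x >> i) & 1
        copath.append(G(s, 1 - b))
        s = G(s, b)
    return copath

def punctured_eval(copath, x, y, depth):
    # evaluate at y != x: start from the co-path seed where y diverges
    for i in reversed(range(depth)):
        if (y >> i) & 1 != (x >> i) & 1:
            s = copath[depth - 1 - i]
            for j in reversed(range(i)):
                s = G(s, (y >> j) & 1)
            return s
    return None                        # y == x: cannot evaluate
```

The sender would mask the bit at position i with a bit derived from prf(key, i, depth); a receiver holding a key punctured at the error positions can unmask every other position, and learns nothing about the masks at the error positions themselves.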
However, if we instead encode the bit in the higher-order bits, in that setting it will still accumulate errors, which leaks information again. So, our solution is to use randomized rounding: we still encode the bit in the high-order bits, but this time we sample an integer close to 0 or close to p/2, according to a discrete Gaussian distribution. In this way, we can solve the problem, and it actually gives us a statistically function-private scheme. There are a few open questions to be solved in future work. For example, can we upgrade our semi-honest security to malicious security, or can we remove or replace the LPN assumption with others? Thanks for listening. Thank you. Anybody have questions? Hi, thank you for the talk, very nice tricks. I was wondering, what kind of error rates do you use in the LPN assumption? Okay, thanks for the question. The error rate is slightly sublinear in the number of samples; it's an inverse small-polynomial error rate. Okay, thank you. No problem. Are there any other questions? Can I ask you, if you could go back to slide 12. To which slide? 12. 12. Oh, yeah, it's about here. Okay. So, this is LPN and at the same time ElGamal, right? Yes. Where is ElGamal? What are you applying ElGamal to? Sorry, what is ElGamal encrypting? Okay. The ElGamal encryption encrypts the LPN secret. Yes. And the server, the sender, applies ElGamal to re-encrypt? Yeah, the sender homomorphically decrypts the LPN ciphertext under the ElGamal scheme, since ElGamal is linearly homomorphic. Ah. So the sender evaluates the decryption algorithm over the ElGamal scheme, yes, over the ElGamal ciphertext. Ah, and that's what it sends back? Yes. Yeah. Yes. And so, this packing, you use the packing technique, right? Yes. So, it has no limit? Like, I can pack however many bits I want into a single ElGamal object, a group element, or does it stop at some point? You're asking how many bits we can pack into the ElGamal ciphertext, right?
For the ElGamal ciphertext, I think it depends — ElGamal has a super-polynomial modulus, and in the packing we need a PRF to generate breakpoints in the group. As long as the group is large enough, we can pack as much information as we want. And so, do you think this can extend to OT on larger messages, strings of bits? You mean string OT? I mean, what we solve here — we focus on bit OT, meaning each message is a single bit, right? If you want to do string OT, you can do it with easier techniques. I mean, bit OT implies string OT, right? But not the other way around. So, you just repeat it, right? Because you only have honest-but-curious security. Yeah. Okay. Thank you so much. Thanks. If there are no further questions, let's thank the speaker.