So the next talk is on asymptotically quasi-optimal cryptography, and Leo de Castro will give the talk.

Thanks. Hi. So this talk is on asymptotically quasi-optimal cryptography. This is joint work with wonderful co-authors: Carmit, Yuval, Vinod, and Muthu. In this talk, we're going to introduce the notion of asymptotically quasi-optimal (AQO) cryptography, and we're going to give new techniques to construct AQO crypto. In particular, we're going to be focusing on semi-honest two-party computation, and we'll also briefly talk about constructions for malicious-verifier zero knowledge and how to build that from sender-rate-one AQO batch OLE.

This talk is motivated by a fundamental question: what is the overhead of cryptography? Consider a cryptographic problem of size n, such as securely sending a message of length n over a public channel, performing batch OT or batch OLE of length n, or proving the satisfiability of a linear-size circuit with n inputs in zero knowledge. All of these can be insecurely solved with order-n communication and computation. So what about a secure solution with security parameter lambda? The most natural solutions usually have a multiplicative overhead: something like order n times poly(lambda), where you need to do roughly poly(lambda) work for each instance of your problem. The work of Ishai, Kushilevitz, Ostrovsky, and Sahai showed an amortized overhead: you pay some polynomial in lambda, and once your problem size becomes big enough, the overhead of the cryptography is amortized over all the extra problem instances. The problem is that this polynomial is unspecified, so it could be quite large, and it could be quite a while before the amortized efficiency really kicks in.
So you could hope for the best possible overhead, which in most cases is just n plus lambda. You can sometimes do better in communication, but in general, n plus lambda is the best you can hope for. The problem is that asymptotic optimality is actually quite hard to achieve; we don't even have heuristic constructions for public-key encryption. So we're going to settle for asymptotically quasi-optimal overhead, which is O-tilde of n plus lambda. We allow polylog factors, but this gets us within polylog factors of the best possible solution. So this is our stated goal: solve size-n cryptographic problems with O-tilde(n + lambda) complexity.

What was known before this work? Some crypto problems did have AQO solutions. There are lots of solutions for secret-key encryption: basically any suitable PRF gives you a good secret-key encryption solution. For AQO public-key encryption, ring learning with errors (ring LWE) is the main assumption that we have, and it's the assumption the rest of the talk will focus on. If all you care about is AQO communication, elliptic curves will give you that, but because of the exponentiation, you're not going to be able to get AQO computation. Similarly for string OT, elliptic curves and ring LWE are the main assumptions. Additively homomorphic encryption you can also get from ring LWE, but not the function-private version of the primitive; we'll come back to function privacy later. In our work, we construct lots of AQO primitives, but the focus of this talk will be batch OLE, and all of our constructions rely on ring learning with errors.

Let's jump into some brief background. What is batch OLE? It's a two-party protocol between a sender Alice and a receiver Bob. Alice has two vectors A and B, and Bob has a vector X. At the end of the protocol, Alice gets nothing.
And Bob gets the result A times X plus B, where all the arithmetic operations are component-wise. This is a fundamental building block of arithmetic MPC, and there are lots of well-studied special cases of this protocol. There's OLE, where everything is a scalar. There's vector OLE, where A, B, and the output are all vectors, but Bob's input X is just a scalar. You can view batch OLE as an arithmetic analogue of OT; you get back to OT by just setting your plaintext modulus equal to 2.

Very brief background on ring learning with errors. We have our polynomial ring, which is taken modulo a degree-n polynomial, so our polynomials have degree n minus 1 and are represented by vectors of length n. The ring LWE assumption states that the two distributions on the left and right here are indistinguishable. In particular, the second polynomial in the output on the left is very close to a linear function of A, while the one on the right is just uniformly random.

Very briefly, let's define additively homomorphic encryption (AHE). All of the AHE schemes we'll be looking at today are single-instruction-multiple-data (SIMD): all the ciphertexts encrypt vectors of elements, and all the arithmetic operations are component-wise. So SIMD encrypted addition takes two encrypted vectors A and B and outputs an encryption of the component-wise sum of A and B. Plaintext addition is the same thing, except that the operand B is in the clear. And similarly, plaintext multiplication is just component-wise multiplication between A and B.

So how do we get AHE from ring learning with errors? Most of you have probably seen this, but if not: the second polynomial has the structure diagrammed here, where it has this mask A times S that's stripped away during decryption. What's left is this polynomial here that has a large gap in the middle.
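The batch OLE functionality and its special cases can be sketched directly. This is just the ideal functionality, not the protocol; the modulus and the helper names here are illustrative.

```python
P = 97  # toy plaintext modulus; any prime works for this sketch

def batch_ole(a, b, x, p=P):
    """Ideal batch-OLE output: Bob learns a[i]*x[i] + b[i] mod p,
    component-wise; Alice learns nothing."""
    assert len(a) == len(b) == len(x)
    return [(ai * xi + bi) % p for ai, bi, xi in zip(a, b, x)]

def vector_ole(a, b, x_scalar, p=P):
    """Vector OLE: a and b are vectors, but Bob's input is one scalar
    broadcast across every coordinate."""
    return [(ai * x_scalar + bi) % p for ai, bi in zip(a, b)]

# Setting p = 2 recovers an arithmetic analogue of OT: a*x + b with
# x in {0, 1} selects between b (x = 0) and a XOR b, i.e. a + b (x = 1).
```

With p = 2 and Alice's pair written as (b, a + b), Bob's bit x picks out exactly one of Alice's two messages, which is the standard OLE-to-OT view mentioned in the talk.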
The message bits are pushed to the top by the scaling factor delta, the error bits live at the bottom, and there's a big space in the middle to allow the error to grow. When you have a ciphertext that looks like this, ciphertext addition becomes very easy: you take the two ciphertexts and just add them component-wise. Plaintext addition works the same way, really.

Plaintext multiplication is a simple operation, but maybe less simple in its implications. Again, we just do the natural thing of multiplying the plaintext M-prime through our ciphertext. This gives us an encryption of M times M-prime, but the problem now is that we have an error term that depends on M-prime. So if you were to give this ciphertext back to someone who knew the original error term, you're going to leak M-prime to that party. This is a problem if you care about function privacy, and if you want to achieve function privacy, you have to hide this noise term in some way. The classic way of doing this, which I believe goes all the way back to Gentry's original paper, is noise flooding: you add a noise term that's lambda bits larger than the error term you're trying to hide. This works well, but it requires lambda extra bits of space in that gap between the message and the error, and that's going to be a problem if we try to construct AQO batch OLE from this straightforward AHE scheme.

So let's take a look at this protocol. Bob encrypts his input X. Alice then takes her input and evaluates the plaintext-ciphertext multiplication and plaintext-ciphertext addition on Bob's ciphertext, which generates an encryption of the OLE result. Then Alice floods the ciphertext with noise. What Bob gets back looks like this: the message bits at the top hold the OLE result, and at the bottom there needs to be enough space for Alice's flooding noise.
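The folklore AHE-based OLE with classic noise flooding can be modeled with a toy scalar scheme. This is an assumption-laden illustration, not ring LWE: a single integer secret s, an exactly removable mask u*s, and no actual security; it only shows where the message, the gap, and the flooding mask live.

```python
import secrets

P = 97                               # plaintext modulus
E_MAX = 4                            # fresh-error bound
LAMBDA = 20                          # toy flooding parameter
DELTA = P * E_MAX << (LAMBDA + 4)    # scaling factor: message pushed to the top,
                                     # with room below for growth plus flooding
Q = P * DELTA                        # ciphertext modulus (Delta * P = Q)

def enc(m, s):
    """Toy 'ciphertext': (u, u*s + Delta*m + e) mod Q."""
    u = secrets.randbelow(Q)
    e = secrets.randbelow(E_MAX)
    return (u, (u * s + DELTA * m + e) % Q)

def dec(ct, s):
    """Strip the mask u*s, then round away the low-order noise bits."""
    u, v = ct
    scaled = (v - u * s) % Q
    return ((scaled + DELTA // 2) // DELTA) % P

def eval_ole(ct_x, a, b):
    """Alice's step: homomorphically compute a*x + b, then flood the
    a-dependent error term a*e with a mask LAMBDA bits larger than it."""
    u, v = ct_x
    flood = secrets.randbelow(P * E_MAX << LAMBDA)
    return ((a * u) % Q, (a * v + DELTA * b + flood) % Q)
```

The flooding mask costs LAMBDA extra low-order bits in every slot of every ciphertext, which is exactly the multiplicative n-times-lambda overhead the talk is about to attack.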
The original error term has roughly log P bits from the multiplication, and then you need another lambda bits on top of that for the flooding noise. So this gives you a multiplicative n times lambda overhead, which is not AQO. We need to fix this problem if we want to use this approach for an AQO OLE.

So let's try to reduce the amount of noise that we're adding in this scheme. Let's go through a toy example that illustrates an idea we call gentle noise flooding. We start with some number E that we're trying to hide; all we know is that E is in some range, say 0 to 10. Then we have a noise term eta between 0 and 20. That's just one extra bit beyond E: not lambda bits more than E, just one more bit. Our noisy output T is just E plus eta. The challenge is: can you actually guess E given T?

Sometimes this is easy. When T is 0, there's only one pair of E and eta that can produce that output, so that's not great. But if T is 10, then you have something a little more interesting: there's a value of eta for every possible value of E, so you could argue that E is hidden. You can formalize this by saying that if T is in the middle range of possible values, then E is perfectly hidden. If you repeat this game n times, so you have a bunch of secret E's and a bunch of noise terms eta, and you give some party the values T_i, you can prove a toy version of our gentle noise flooding lemma: at least half of the secret E_i's are going to be hidden. So this is good; it means we're hiding something.

Now let's look at what happens if Alice uses a gentle flooding term instead of a regular flooding noise term in this AHE protocol. Bob does the same thing: he encrypts his input X.
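The toy example above can be checked exhaustively: with E in [0, 10] and eta in [0, 20], the observer's value T = E + eta perfectly hides E exactly when T lands in the middle range.

```python
# Toy model of gentle noise flooding: the mask eta is only one bit
# larger than the secret E, not lambda bits larger.
E_MAX, ETA_MAX = 10, 20

def candidates(t):
    """All values of E consistent with an observed T = E + eta."""
    return [e for e in range(E_MAX + 1) if 0 <= t - e <= ETA_MAX]

# T perfectly hides E when every E in [0, E_MAX] remains consistent;
# brute force shows this is precisely the middle range [E_MAX, ETA_MAX].
hidden = [t for t in range(E_MAX + ETA_MAX + 1)
          if len(candidates(t)) == E_MAX + 1]
```

For any fixed E, a uniform eta lands T in that hiding range with probability better than one half (11 of the 21 mask values), which is the source of the "at least half of the E_i's are hidden" bound in the talk's toy lemma.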
Alice does the same thing to generate the encryption of the OLE output, but now, instead of adding a full flooding term, she adds just a gentle flooding term. What Bob gets back is a ciphertext that needs a lot less space between the message and the error, because the noise term that Alice adds is a lot smaller, both concretely and asymptotically. The only downside is that now some of Alice's input A is actually leaked.

So we need one extra idea to fix this, and the final idea is to use an OLE extractor. This is a really cool protocol that takes leaky OLEs, for some bound on the leakage, and turns them into truly random OLEs. We instantiate this extractor using the work of Block, Gupta, Maji, and Nguyen with Reed-Solomon codes, to maintain the quasi-linear computation we need for AQO. Once you have random OLEs, that's basically good enough for any OLE application: you can turn them into arbitrary OLEs, and you can use them in other protocols. So we'll say we're done once we have random OLEs.

Okay, so what's our full AQO batch OLE protocol? We start with the folklore OLE protocol from additively homomorphic encryption, use random A, B, and X, and replace the flooding term with a gentle flooding term. This lets us add only roughly log n extra bits of noise per term, and it gives us a bound on the number of leaked coordinates, call it L, which is going to be order lambda. So only order-lambda coordinates are leaked, with very high probability. Then we take this leakage bound L and plug it into the OLE extractor to get our random OLEs out.

The nice thing about this protocol is that it's very concretely efficient. It's actually competitive with other state-of-the-art batch OLE protocols, which was very surprising, because semi-honest batch OLE has been optimized like crazy.
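The leakage-counting step of the pipeline can be sketched with the same toy ranges as the gentle-flooding example. This is only the bookkeeping between the two real ingredients; the per-coordinate noise budget (~log n bits) and the Block-Gupta-Maji-Nguyen Reed-Solomon extractor itself are not implemented here.

```python
import secrets

E_MAX, ETA_MAX = 10, 20  # toy ranges from the gentle-flooding example

def leaked_coordinates(errors, masks):
    """Given per-coordinate errors and gentle masks, return the indices
    whose noisy value T = e + eta falls outside the perfectly-hiding
    middle range [E_MAX, ETA_MAX]. The count of these indices is the
    leakage bound L that gets handed to the OLE extractor."""
    return [i for i, (e, eta) in enumerate(zip(errors, masks))
            if not (E_MAX <= e + eta <= ETA_MAX)]

def sample_masks(n):
    """Draw one gentle mask per coordinate."""
    return [secrets.randbelow(ETA_MAX + 1) for _ in range(n)]
```

In the real protocol the masks are drawn honestly, each coordinate hides with probability better than one half, and a Chernoff-style argument pins L at order lambda with very high probability.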
To start with this very nice theoretical question and end up with a concretely efficient protocol is a very motivating result; it suggests that this AQO problem can be a bridge from theory to concrete efficiency.

But I said at the beginning that we were going to talk about maliciously secure two-party computation, and this AQO protocol, even though it's fast, is only semi-honest. In fact, if Bob is malicious, there's a pretty simple attack: if Bob sends a malformed ciphertext, Alice's full input can be leaked. So if we want a maliciously secure batch OLE protocol, we're going to need to do something else.

The next part of the talk is going to focus on this malicious-receiver batch OLE protocol. I'll then talk briefly about the malicious-receiver batch OT protocol that we can build from the batch OLE protocol, and very briefly about AQO zero knowledge from the batch OT. For the two-party secure computation, I'll refer you to the paper.

As we said, there's an attack where, if Bob sends a malformed ciphertext, Alice's input can be totally leaked. So if we want to defend against a malicious Bob, we need to guarantee that for any ciphertext Bob sends, at least some of Alice's input is hidden. There are lots of prior works on protocols like this, usually called statistically sender-private OT or OLE, but for the reasons we discussed at the beginning, none of them are AQO.

So we want to think about how much information the resulting ciphertext leaks about Alice's input. The simple upper bound here is just the number of extra bits that Alice sends beyond the output. Alice has some output M that she's trying to communicate to Bob, and then there are extra bits in the ciphertext; in the worst case, all of these extra bits leak information about Alice's input.
So we bound the leakage on Alice's input by the number of extra bits in the ciphertext. The goal, then, is to have fewer bits of leakage than there are bits in Alice's input; if we have that, we can say that at least some of Alice's input must be hidden. So the plaintext modulus log P should be only slightly smaller than the ciphertext modulus log Q: if the ciphertext modulus is only a little bit bigger, that suffices. The problem is that, naively, we need log Q to be much greater than log P for correctness, because we need space for the error to grow during the OLE multiplication.

But there's a standard trick to fix this, called modulus reduction. You start with a large modulus, bigger than twice the plaintext modulus, and finish the OLE computation. Then, once all the computations are done, you reduce the modulus down, and you end up with a much smaller modulus that is smaller than twice the plaintext modulus.

This is fine, but the second challenge is that our ciphertext actually has two polynomials. Even though we have a small ciphertext modulus, the fact that we have two ciphertext polynomials means Alice's input could be totally leaked in the second polynomial. Remember, Alice has two polynomials as her input, A and B, and really, if the A input is leaked, then so is everything else, so what we want to prevent is Alice's A input from leaking; it could be that all of Alice's input leaks in the second ciphertext polynomial. So we naively need two log Q bits for this ciphertext.

But we can actually get a better rate by reusing the first ciphertext polynomial K times. Instead of encrypting one polynomial M, we encrypt K polynomials and reuse the same first ciphertext polynomial across all K encryptions. Now our rate is K times log P, which is the message, over (K + 1) times log Q, which is the number of polynomials times their size.
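The rate and leakage counting above is simple arithmetic, sketched here with illustrative parameter values (the bit sizes are assumptions, chosen so that log Q is only slightly larger than log P, as after modulus reduction).

```python
from fractions import Fraction

def ciphertext_rate(log_p, log_q, k):
    """K packed message polynomials carried by (K + 1) ciphertext
    polynomials: rate = K*log P / ((K + 1)*log Q)."""
    return Fraction(k * log_p, (k + 1) * log_q)

def extra_bits_per_ciphertext(log_p, log_q, k):
    """Bits sent beyond the message itself: the leakage upper bound."""
    return (k + 1) * log_q - k * log_p
```

With log P = 30 and log Q = 31, the unpacked K = 1 ciphertext has 32 extra bits, more than the 30-bit message, so Alice's input could be fully leaked; at K = 16 the extra bits (47) are a small fraction of the 480-bit message, so most of her input is necessarily hidden. Since K only needs to be polylog in n and lambda, this stays AQO.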
So the question now is: how do we actually get this ciphertext as a result? The answer is to use a vector of secret keys and to write the message as a matrix, with the messages on the diagonal. The ciphertext looks like a K-by-K matrix of polynomials for the mask of the message, plus one extra column for the first polynomials, and this whole thing gets masked by error. You might say, wait, this doesn't look AQO anymore, but it will be, because K is only going to be polylog in lambda. If the sender's input is also K polynomials, then we get this ciphertext as the result just by multiplying this matrix by Alice's vector of polynomials.

So this is good: now we have an output ciphertext with very high rate, and the point is that the number of extra bits in this output ciphertext is less than the number of bits in Alice's input. As we said, K is polylog in n and lambda, so everything is still AQO. And this is an AQO batch OLE protocol with security against a malicious Bob.

Then, just very briefly, the way we get batch OT from batch OLE: we start with n OLEs over some prime P, we factor a composite that's slightly larger than P into primes, and we reduce over this composite. This gives us tau OLEs mod each prime, and then we convert each OLE to an OT using the standard reduction. I believe I'm out of time, so I'll leave it there. Thank you.

Questions?

You use the two-component ciphertext, and then you do this trick with the fixed A and many messages to get the compression. Can you just use NTRU? Then you'd only have one ciphertext component to get the same compression.

Possibly; we didn't look at that. I'm not sure how NTRU would work out in terms of the AQO parameters, but it would be interesting if you could use it, yeah.

Cool, thank you. Any other questions? Okay, let's thank the speaker again.