Hello, hello, is this on? Yeah, all right. Sorry about the technical difficulties. Thanks for the introduction, Mike. So, this is joint work with Carmit, Emmanuela and Peter. We're all in the room, so if you have any extra questions after the talk (I'm going to have to skip some things) you can catch us at any point during the conference.

I want to start not by telling you what MPC is; we're all in the MPC session, we know what it's about. We also all agree that it's great and that there are many problems MPC can address, right? We have lots of parties that want to do private surveys and statistics, we have the farmers in Denmark that want to run their sugar beet auction, we have all these blockchain people that want to do secret stuff. And there's a problem with some of these scenarios, which is that most of the MPC protocols we have at the moment don't really scale well to that many parties. The reason is that, although we have achieved a lot, we have always been focusing on the setting with a small number of parties, like two or three; the protocols generalize, but the cost blows up a lot. What we sometimes do for these large-scale scenarios is to say: here are some servers, and we expect these thousands of parties to agree that, say, three of them are going to be fine. That's sometimes not very realistic, because each of those thousand parties has to trust at least one of the three servers. Or you can use the technique of sampling a random subset of the parties, a committee, that runs MPC on behalf of everyone. But still, if you start with thousands of parties, you are very likely to end up with a committee of tens or hundreds of parties.

So this is the starting point of our work: can we do better here? We also pay attention to the number of honest parties, and the reason is that if we are going for many parties, let's exploit that to make our protocols more secure: the more parties we have, the more trustworthy the system is going to be. A good example of this, apart from the ones I gave, is the Tor Metrics project within the Tor project, where you have about 6,000 relays on the Tor network. At the moment we don't really know much about what's going on within Tor. They provide some statistics, but they don't use MPC for this; what they do is obfuscate their data before publishing it, so you get an aggregate result that is affected by all that locally added noise. Also, most of those 6,000 relays don't provide data at all, because they still don't really trust this. If we had MPC that scaled to this many parties, we could compute more complex functions, parties would probably be more willing to provide data, and we could get a better picture of where censorship is happening, how to improve traffic on the network, and so on.

All right, so more concretely, the setting we're studying here is driven by concrete efficiency, because we want people to eventually run these kinds of things for this large number of parties: tens, hundreds, thousands. As this is one of the first steps, we're just going to deal with static, passive adversaries, but we're still in this strong dishonest-majority setting.
And our protocols work in this offline-online paradigm, with Beaver triples or garbled circuits for Boolean circuits. Now, about the complexity of practical MPC protocols, we have this somewhat vague picture (it's not really accurate): with a dishonest majority we have quadratic complexity in the number of parties, times the security parameter; otherwise you can do something like n log n. But one of the issues in the literature is that when we were thinking about the dishonest majority, because we were thinking of these very few parties, we were most of the time thinking of the full-threshold adversary, where all but one of the parties are corrupted. But is it really realistic, when you have 1,000 parties, that 999 of them are conspiring against the single honest guy? Well, I don't think it is. So this is the fact that we exploit in our protocols: can we design protocols where each honest party present in the computation helps us get something more efficient?

The answer is yes, otherwise I wouldn't be here. We have new passive GMW-style protocols with 10 to 25 times less communication than the best result in the literature, published last year by Dessouky et al. In the constant-round setting, we produce garbled circuits in the BMR paradigm, where we reduce the communication of garbling the circuit by up to a factor of seven, and we also get a faster online phase, although that improvement is a bit more circuit-dependent. All the details about the garbled circuits you can ask me about later, here or offline, because I won't have time to discuss them. These are the best improvements we get, but already for as few as 20 parties we start getting protocols that are more efficient, and they are quite efficient when we have 10 to 30% honest parties, which is not a lot to ask when you have that many.

All right, so I want to introduce you to the technique that we're using, with a very simple example. If you understand this, you're going to understand everything else. Imagine you have n servers; we're going to give a number and a colour to each one, and they want to encrypt a message. The way they can do this is that each of them hashes its key, and then they XOR these hashes into the message. We know the result is indistinguishable from random as long as the keys are a security parameter long, even in the case where all but one of the servers are corrupted. But what happens now if we assume that we have h honest parties? The title of the paper might give you the idea already: what happens if we use shorter keys? Instead of these security-parameter-long keys, we give the keys some arbitrary smaller length l, and now we want to argue whether this is still indistinguishable from random. Intuitively, the adversary has to guess the short key of each honest party, right? But formally, is this secure? Well, one of the problems with these l-bit keys is that each hash function has a very small domain of keys you can choose from. Imagine l equal to two: you only have four possible keys, so you can brute-force this very, very easily.
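(To make the toy example concrete, here is a minimal Python sketch of the distributed encryption with short keys. It is an illustration, not code from the paper: SHAKE-256 stands in for the unspecified hash H, and the parameter values are arbitrary.)

```python
import hashlib
import secrets

L = 8           # short key length in bits; the talk later pushes this as low as 1
N_SERVERS = 10  # number of servers, each holding one short key

def hash_key(key: int, out_len: int) -> bytes:
    """Expand a short key into an out_len-byte mask (models H(k_i))."""
    return hashlib.shake_256(key.to_bytes(4, "big")).digest(out_len)

def encrypt(message: bytes, keys: list[int]) -> bytes:
    """Ciphertext = message XOR H(k_1) XOR ... XOR H(k_n)."""
    ct = bytearray(message)
    for k in keys:
        mask = hash_key(k, len(message))
        for i in range(len(ct)):
            ct[i] ^= mask[i]
    return bytes(ct)

keys = [secrets.randbelow(2**L) for _ in range(N_SERVERS)]
msg = b"attack at dawn!!"
ct = encrypt(msg, keys)
assert encrypt(ct, keys) == msg  # the XOR masks cancel when applied twice
```

With l this small, each individual mask H(k_i) can be brute-forced over its 2^l candidate keys, which is exactly the attack the matrix view below captures.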
In particular, we can represent this brute-forcing as a matrix product. The columns of the matrix are the evaluations of the hash function on each of the possible keys (there are 2^l of them, remember), and then we have a vector of length 2^l and weight one that represents which key we pick. This is just another way of writing the evaluation of the hash. So if we substitute this for every hash function, what we end up with is a matrix product: a matrix H filled with random values, which are the evaluations of the hash functions, multiplied on the right by a vector made of h blocks, each of length 2^l and weight one. The sum of the hashes is now this value y, the matrix times the vector, and the question becomes: is y indistinguishable from random?

Now, there's another way you could look at this that is much nicer. Maybe you look at the colours and say, well, this is the flag of New York City. No, it's not that. What this is, is a coding-theory problem: we can look at H as the parity-check matrix of a random binary linear code, and e is an error vector with this particular regularity property, namely it consists of blocks, one per honest party, whose length is determined by the key length, each of weight one. Then y is the syndrome from which we want to recover the error. This is the standard problem in coding theory: you have your syndrome and you want to recover the error. So the question is: given this parity-check matrix and the syndrome, can we recover e?

Someone might think I'm cheating you, because this looks more like a key-recovery attack: those were the keys, and I was talking about indistinguishability. But it turns out that we have a search-to-decision reduction, so actually finding e is as hard as distinguishing y from random. This is very nice, and it's also not the first time this problem has been used in cryptography: for the SHA-3 competition, there was this fast syndrome-based hash function proposed by Augot, Finiasz and Sendrier. It turns out that adding this regularity to the syndrome decoding problem doesn't make things much easier, and the syndrome decoding problem is equivalent to learning parity with noise, which is a problem we widely use in the literature. What we have to deal with now is getting the parameters right: r is the length of the message that we're masking, l is the key length, and h is the number of honest parties. And it turns out that this problem is even statistically hard if you have a small enough message, small enough r, or if you have a large enough number of honest parties. So if you don't like this assumption, which actually many, many people have studied, you can just take a larger h and get something that is statistically secure. I wouldn't be able to name all of the people here, but these are works that have studied this regular syndrome decoding problem concretely, or the techniques used to cryptanalyse it.
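(Again purely as an illustration, here is a small numpy sketch of the regular syndrome decoding instance just described, with toy values for r, l and h chosen arbitrarily: the syndrome y = H·e over F_2, where e has h blocks of length 2^l, each of weight one.)

```python
import numpy as np

rng = np.random.default_rng(seed=0)

R = 16     # syndrome length: number of masked message bits (r)
L = 2      # short key length, so 2**L = 4 columns per honest party
H_NUM = 8  # number of honest parties (h)

cols = H_NUM * 2**L
H = rng.integers(0, 2, size=(R, cols))  # random binary parity-check matrix

# Regular error vector: block i has a single 1, selecting party i's key.
e = np.zeros(cols, dtype=np.int64)
short_keys = rng.integers(0, 2**L, size=H_NUM)
for i, key in enumerate(short_keys):
    e[i * 2**L + key] = 1

y = (H @ e) % 2  # the syndrome: XOR of the h columns selected by the keys
print(y)
```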
All right, if you got that example, the rest of the talk is going to be easy. As I said, we have two results: a secret-sharing-based one, which is what we call TinyGMW here, and a garbled-circuits one, TinyBMR. For the secret-sharing-based protocol, we get a key length as small as one bit, a single bit, and this is what I'm going to focus on for the rest of the talk. For garbled circuits, l can only go as low as five bits, which is quite good already; getting there is actually quite challenging. I'm going to skip the details because there would be too many, but one of the difficulties was dealing with circuits that have very high fan-out: if you have a gate whose output feeds many, many other gates, that is difficult to handle under our assumption. It's a problem related to what happens with the free-XOR technique, where you have a correlation that spans all the gates in the circuit.

So I'm going to give a very quick recap of GMW and then a graph with our complexities more concretely, and that will be the end of the talk and you can ask questions.

GMW is a secret-sharing-based protocol where each party holds an additive share of each wire value; remember, we're dealing with Boolean circuits. All linear operations can be done locally: if we want to add two values x and y, the parties can just add their shares locally. The problem comes when we have to compute the AND of two bits, because we need to compute this product, or rather all these cross products. But we know how to do this: 1-out-of-2 oblivious transfer on bits is equivalent to multiplication. If Alice inputs r and r + y_j, and Bob uses x_i as his choice bit, then what Bob gets is r + x_i · y_j: if x_i is zero, he just gets r; otherwise he gets r + y_j. Either way, Alice and Bob now hold a secret sharing of x_i · y_j, which is one of those cross products.
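(Here is a minimal Python sketch of this OT-to-multiplication trick, with an idealized 1-out-of-2 OT standing in for a real OT protocol; the function names are mine, for illustration only.)

```python
import secrets

def ideal_ot(m0: int, m1: int, choice: int) -> int:
    """Idealized 1-out-of-2 OT: the receiver learns m_choice and nothing else."""
    return m1 if choice else m0

def and_shares(x_i: int, y_j: int) -> tuple[int, int]:
    """Alice holds y_j, Bob holds x_i; they end up with XOR shares of x_i * y_j."""
    r = secrets.randbelow(2)           # Alice's fresh random mask
    bob = ideal_ot(r, r ^ y_j, x_i)    # Bob learns r ^ (x_i * y_j)
    return r, bob                      # Alice keeps r as her share

# Over GF(2), addition is XOR, and the shares always reconstruct the product:
for x in (0, 1):
    for y in (0, 1):
        a, b = and_shares(x, y)
        assert a ^ b == (x & y)
```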
Now, oblivious transfer is very nice, but we know that it requires public-key operations. Nevertheless, we also know this great technique called OT extension. Here we follow the technique of Ishai, Kilian, Nissim and Petrank from 2003, where we have these base OTs, which do use public-key operations: security-parameter many 1-out-of-2 OTs on security-parameter-long strings. I'm not going to give you much detail, but you can think of each of these base OTs as fixing one bit of Bob's secret key. This secret key can then be combined with symmetric crypto to obtain many, many OTs: using just a PRG, a hash function and some messages from Bob to Alice, we get r OTs, where r is as big as we want.

So how do we improve this? We change the keys again: instead of doing security-parameter many base OTs, we just do l of them. This reduces the number of public-key operations but, most importantly in practice, it reduces the communication complexity. There is a problem with this, though, which is that the OT now becomes leaky. At some point in the protocol Bob is, roughly speaking, hashing his key and sending that, plus his choice bits, to Alice. And I said that l can be as small as one bit, so Alice can just try the two possible keys and very easily learn Bob's choice bits. (By Bob's choice bits here I mean Bob's r inputs, one per OT.) So this is completely broken, right? What are we doing here? Well, let's use a broken primitive to build a secure protocol.

In blue I'm representing honest parties, and in red corrupted parties. Remember, we want to compute the AND of two shared values, and one way to represent this product is as the product of the two sums of shares, expanded into all the cross terms, for every pair of parties. We also have our leaky OT, so the honest parties compute an OT with their shares of x as inputs, and each corrupted party learns this kind of leakage from all of them: the short key that was set up, plus the inputs. Now, I'm going fast here. We knew that each of these OTs is very easy to break, so how do we get around that? Well, instead of having the honest parties run the leaky OT with their actual inputs, we re-randomize these inputs using random sharings of zero. So for each OT, party P1 masks its input with some value s_{1,j}, P2 with a value s_{2,j}, and so on, and what holds for these values is that, if we sum them across the first coordinate, they give zero. Correctness is still going to hold, and what happens with the leakage now is that these values are uniformly random, so each leaked message looks uniformly random on its own.

But nevertheless, they are still correlated, right? Individually they are uniformly random, but what happens when you look at the joint distribution? If you're the adversary and you sum all of these, you can say: this part I know is zero, because it's the sum of all the sharings of zero, so I can take that out. So what I'm left with as leakage is the sum of the h short keys of the honest parties, plus the honest shares that I was missing. And if you understood the example at the beginning of the talk, this is exactly the toy example of the distributed encryption: we have the h short keys, so we know that this is secure under our assumption. So that's it; that's the simple technique that we use.
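(And a small sketch of the re-randomization step itself, again illustrative and with made-up names: honest parties mask their bit inputs with a fresh random sharing of zero, so each masked input is individually uniform while the XOR of all of them is preserved.)

```python
import secrets
from functools import reduce
from operator import xor

H_NUM = 8  # number of honest parties

def zero_sharing(n: int) -> list[int]:
    """Random bits s_1, ..., s_n with s_1 ^ ... ^ s_n == 0."""
    s = [secrets.randbelow(2) for _ in range(n - 1)]
    s.append(reduce(xor, s, 0))  # last share forces the XOR to zero
    return s

inputs = [secrets.randbelow(2) for _ in range(H_NUM)]  # honest OT inputs
masks = zero_sharing(H_NUM)
randomized = [x ^ s for x, s in zip(inputs, masks)]

# Each randomized input is individually uniform, but the masks cancel in
# the sum, so the joint leakage still reveals the XOR of the inputs;
# in the protocol that XOR is in turn masked by the h short keys.
assert reduce(xor, randomized, 0) == reduce(xor, inputs, 0)
```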
Now, our results more concretely. On the blue line we have the best current version of GMW in the passive setting; this is a plot for 200 parties, showing the communication complexity of producing a triple, that is, of computing an AND gate. In orange we have the technique of using committees: if we have, for example, 10 honest parties, then instead of running GMW among all 200, we run it among 191, so we know there is at least one honest party in the committee and we can reduce to the known full-threshold protocol. And in red are our results, where we combine these committees with the short-keys technique: we use two committees, one guaranteed to contain h honest parties and the other one honest party, and we combine these. So that's, in a bit more detail, what we do.

So yeah, this is our work, or part of our work, better said. We have introduced this new technique for distributing trust in MPC in the large-scale scenario: the more honest parties we have, the shorter we can make the keys, and the better the communication complexity, and the computation as well. We improve on existing protocols as soon as we have 20 or more parties. For secret sharing, we get up to 25 times less communication compared with the best protocol secure against all-but-one corruptions, or the gap you could see on the previous slide when using committees. For garbled circuits, we get up to seven times less communication for garbling the circuit, which is the most costly part, and our online phase is up to three times faster, depending on the circuit.

As I said, there are these challenges in dealing with the fan-out of the specific circuit, so this is an interesting problem to explore, maybe with compilers that are good for this, or a more concrete analysis. Also, as I said, this was the first step, and we have already taken a second one: we have an actively secure version of this TinyKeys technique, in which we apply TinyKeys to the TinyOT protocol, which has these pairwise information-theoretic MACs; we shorten those as well and we are still secure. That's going to be on ePrint soon. There are also some challenges still open: we can maybe optimize this further, more cryptanalysis would be great because we have quite conservative parameters in our experiments at the moment, and getting more applications as well. So that's it. Thank you very much, and I'll take any questions. Thank you.