Yeah, thanks for the introduction, and good afternoon. My paper is about information-theoretically secure multi-party computation. Before I get into the problem, let me remind you once more about secure computation, just to fix notation. We have n parties, each with a private input, and they want to compute some function. They do that through a protocol in which they communicate with each other, with secure channels between each pair of parties. Security is captured by the notion of an adversary that can corrupt some subsets of parties: it should not learn anything about the inputs of the uncorrupted parties beyond what is implied by the outputs and the corrupted inputs, and furthermore it cannot alter the computation of the function except by changing the corrupted inputs. As you also know, there are many ways of coming up with protocols: there is garbling, there is homomorphic encryption, and then there are secret-sharing-based multi-party computation protocols. The way these usually work is that you represent your function as an arithmetic circuit over a certain finite field, so basically a bunch of sums and products. The parties have some linear secret sharing scheme over that same finite field that they use to share their inputs at the beginning of the protocol, and then the computation goes gate by gate on the shares: for each gate, the parties are able to compute a sharing of its output given sharings of its inputs, and at the end they just reconstruct the output of the circuit. When the gates are linear, this is very easy, because the parties can compute a sharing of the gate's output locally, using the linearity of the secret sharing scheme. When a gate is multiplicative, it becomes a little more complicated, and the details depend on which protocol you are using.
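Since everything here happens gate by gate on shares, a minimal sketch may help. The following Python snippet is an illustration only, not any particular protocol from the talk; it uses simple additive sharing over Z_p rather than the threshold schemes these protocols actually require, just to show why linear gates are free: each party adds its own shares locally, with no communication.

```python
import secrets

def share(secret: int, n: int, p: int) -> list[int]:
    """Additively share `secret` among n parties over Z_p:
    shares are uniformly random subject to summing to the secret mod p."""
    shares = [secrets.randbelow(p) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % p)
    return shares

def reconstruct(shares: list[int], p: int) -> int:
    """Recombine all shares to recover the secret."""
    return sum(shares) % p

# A linear (addition) gate: each party adds its two shares locally,
# with no communication -- the resulting shares encode x + y.
p, n = 2**13 - 1, 5
x_shares = share(10, n, p)
y_shares = share(32, n, p)
z_shares = [(a + b) % p for a, b in zip(x_shares, y_shares)]
assert reconstruct(z_shares, p) == 42
```

Multiplication gates are exactly where this local evaluation breaks down, which is why the choice of protocol matters there.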
Yeah, so the motivation of this work is that many secret-sharing-based multi-party computation protocols actually require large finite fields to work. I'm not saying that all of them do; there are, of course, protocols for small fields. But many of the efficient protocols work over large finite fields, and I have some examples of different manifestations of this. For example, in the information-theoretic case, which is the one we are going to talk about here: if you are using Shamir's scheme, which you do often, for example in the protocol by Ben-Or, Goldwasser, and Wigderson and in many others that follow the same pattern, then you need the size of the finite field to be at least the number of parties. There are other techniques, for example hyper-invertible matrices. This is an object that was introduced by Beerliová-Trubíniová and Hirt in 2008 in a very efficient protocol that I will talk more about later, and it also requires the field to be large, for different reasons: actually, you need it to have at least twice as many elements as there are parties. Now, I want to mention a side result we found, unrelated to the rest of the talk: we can actually replace these hyper-invertible matrices by a notion with slightly weaker requirements that still provides the same functionality, and that notion only requires constant-size fields, albeit with at least 64 elements. If you are interested in this, you can ask me later, but I will not talk more about it; it is just a side result unrelated to what I'm going to talk about. I also wanted to mention that you need the field to be large in other types of protocols too, for example SPDZ, a computationally secure protocol.
If you are using message authentication codes, you are going to need the field to be large, because the probability that an adversary cheats without being caught is inversely proportional to the size of the field. And you saw another manifestation of this in the talk yesterday by Ariel, where depending on whether you have a large or a small finite field, you need to use a different protocol. So then the question is: is there any way we can use these protocols that work over large finite fields when our function is actually more naturally represented as a circuit over a small field? For example, say, comparison of bit strings, or maybe set intersection, something like that. I'm going to talk mainly about the binary field, but our results work for any small field; it's just easier to talk about the binary field. Of course, one can always say, okay, the field of two elements is contained in any field of 2^k elements. So you just take a power of two that is large enough that you can use your arithmetic protocol, and then you basically compute the same circuit, where your sums and products will now be in the extension field. You can do that, but it seems wasteful, because you are essentially using k bits to represent what would be just one. So then the question we ask is: can we get more out of this? And what do I mean by that? Basically, since we are anyway going to use an arithmetic protocol over a large finite field, the question is whether we can use it to evaluate more than one instance of the binary circuit. In other words, we want to embed k parallel evaluations of the binary circuit into one evaluation of the circuit over the large field. And of course, when we do this embedding, we don't want to add so much complexity that it dwarfs the cost of the protocol over the large field.
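To make the wastefulness of the trivial embedding concrete, here is a small Python sketch; the choice of GF(2^8) and its reduction polynomial (the one used in AES) is just illustrative. A bit of F_2 maps to the field element 0 or 1: XOR becomes field addition and AND becomes field multiplication, but every wire now carries k = 8 bits to represent a single bit.

```python
def gf_mul(a: int, b: int, k: int = 8, mod: int = 0x11B) -> int:
    """Carry-less ('Russian peasant') multiplication in GF(2^k),
    reducing by the irreducible polynomial `mod`
    (here x^8 + x^4 + x^3 + x + 1, which defines GF(2^8))."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> k:          # degree reached k: reduce
            a ^= mod
        b >>= 1
    return r

# Trivial embedding of F_2 into GF(2^8): a bit maps to the field
# element 0 or 1. Addition in GF(2^k) is XOR of representations,
# and the field product of 0/1 elements is exactly Boolean AND --
# but each single bit now occupies a full k-bit field element.
for x in (0, 1):
    for y in (0, 1):
        assert gf_mul(x, y) == (x & y)

# Sanity check of the field arithmetic itself: 0x53 and 0xCA are
# multiplicative inverses in the AES field.
assert gf_mul(0x53, 0xCA) == 0x01
```

The question in the talk is precisely whether those k bits of capacity can instead carry k independent Boolean computations.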
We are going to be concerned here with communication complexity, by the way. So this is our question, and what we did concretely was to focus on the case of information-theoretically, perfectly secure multi-party computation with no broadcast. In that setting, the best protocol, at least if you are going for the strongest adversary, is the one by Beerliová-Trubíniová and Hirt from 2008. What they get is exactly this: they are able to tolerate up to (n-1)/3 active corruptions, which is optimal in this setting, and the protocol has a communication complexity of O(n) field elements per gate. But it has the restriction that the field has to have at least 2n elements, where n is the number of parties. What we did is show that we can use that protocol to compute log n evaluations of a Boolean circuit, and again, we do not increase the communication complexity too much while doing this embedding. If you do that and count how much information you are sending, it turns out that the communication complexity is O(n) bits per gate in each instance. So we are able to remove the limitation on the size of the field, but at the cost of having to compute log n evaluations of the circuit. That is our main result. Now, you may be thinking at this point that the way we did it was by using packed secret sharing, but the point is that we cannot do that, because if we did, we would lose this optimal corruption tolerance of a third of the parties; we would not be able to tolerate the optimal adversary anymore. In fact, we can even combine the techniques that I will explain with packed secret sharing, and if we do that, we get this other result, where now we are not able to tolerate an optimal adversary, but we have an amortized communication complexity of O(1) bits per gate in each instance. And now we need to evaluate n log n instances of the circuit in order to get this, because we are using this packing strategy. Okay.
So now let me explain how we did this, and for now we can just forget about which specific protocol we are using over the extension field. This is the situation we want to solve: we have a binary circuit, and we want to evaluate it on k different sets of inputs, which I represent by the different colors here. If you think about it, instead of a circuit that does one evaluation, you could think of the gates of your circuit as operating coordinate-wise on vectors of k elements, so your sums and products would be coordinate-wise. The resource that we have, the thing we know how to do, is computing an arithmetic circuit over a large field. So if the algebraic structure of this large field were the same as the algebraic structure of the set of vectors over F_2 with coordinate-wise operations, we would basically be done; we could just use that protocol directly. The problem is that this is not the case: if you take vectors of length k over F_2 with coordinate-wise sum and product, this does not have the same structure as the field of degree k. While the sums are essentially the same, so they are isomorphic as vector spaces, the products cannot be the same: in one case you have divisors of zero, in the other you don't. So we cannot do that; they are not isomorphic as F_2-algebras, which would be the mathematical way of saying it. The next best thing we can think of is using what we call reverse multiplication-friendly embeddings. What is this? It is a pair of functions. One function, phi, takes vectors of length k over F_2 and outputs an element of a field, and now the field has to be a little bit larger: it has degree m. The other map, psi, goes the other way around. And the condition is that in order to compute the coordinate-wise product of two vectors x and y, we can first apply phi to each of them, multiply the results as field elements, and then apply psi to come back: psi(phi(x) * phi(y)) equals the coordinate-wise product of x and y.
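As an illustration of the definition, here is a toy reverse multiplication-friendly embedding with k = 2 and m = 3, built by polynomial interpolation; this is a minimal sketch with parameters far smaller than anything in the paper. The map phi interpolates the two coordinates as a polynomial of degree less than 2 with values at the points 0 and 1, and psi evaluates back at those points. The product of two such polynomials has degree at most 2, which is still below 3, so multiplying in GF(8) never triggers a reduction, and evaluation therefore commutes with the product.

```python
MOD = 0b1011  # x^3 + x + 1, irreducible over F_2: defines GF(8)

def f8_mul(a: int, b: int) -> int:
    """Multiply two elements of GF(8), represented as 3-bit polynomials."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= MOD
        b >>= 1
    return r

def phi(v: tuple[int, int]) -> int:
    """Embed (a0, a1) in F_2^2 as the degree-<2 polynomial f with
    f(0) = a0 and f(1) = a1, viewed as an element of GF(8)."""
    a0, a1 = v
    return a0 | ((a0 ^ a1) << 1)

def psi(h: int) -> tuple[int, int]:
    """Evaluate the GF(8) element, read as a polynomial, at 0 and 1."""
    c0, c1, c2 = h & 1, (h >> 1) & 1, (h >> 2) & 1
    return (c0, c0 ^ c1 ^ c2)

# RMFE property: psi(phi(x) * phi(y)) is the coordinate-wise product.
bits = [(i, j) for i in (0, 1) for j in (0, 1)]
for x in bits:
    for y in bits:
        assert psi(f8_mul(phi(x), phi(y))) == (x[0] & y[0], x[1] & y[1])
```

Note that phi is injective but far from surjective, which is exactly why psi and phi cannot be mutual inverses.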
Now, since I said before that these two structures are not isomorphic, psi and phi can never be inverses of each other. But phi is injective, and this is sort of important for our purposes. I want to say something about the history of this notion: why do we call it reverse multiplication-friendly embeddings? Well, we already had the notion of multiplication-friendly embeddings. Some of the authors of this paper introduced that notion at CRYPTO 2009, in a paper about multiplicative secret sharing, and it was just the same as what I'm considering here, but with the roles of the field and the ring, the vector space over F_2, swapped. That notion had been studied in mathematics under the name of bilinear multiplication algorithms, while the new notion we are introducing seems to have been studied much less. And that is sort of understandable, because the notion is actually not so natural at first sight: you are expressing a simple object, coordinate-wise products over F_2, in terms of a more complicated one, a product in an extension field. So it looks like it's upside down. We knew about this notion, and we could use it to improve the construction from CRYPTO 2009 a little bit, but we never published that. Now, the first authors who came up with an application for this notion were Block, Maji and Nguyen, at CRYPTO last year. They didn't explicitly define the notion, they just used a construction of it. But they have now uploaded a preprint, more or less at the same time that our paper was accepted, where they explicitly define and study this notion as well. Their application is a little bit different from ours: they construct OTs from one instantiation of oblivious linear evaluation over a large field. So the focus is a little bit different, but it shows that this notion seems to be interesting in different regimes. So what about constructions?
In order to get the results of this paper, we need to come up with constructions of reverse multiplication-friendly embeddings where the degree of the field is linear in the number of copies of F_2 that you want to embed, and we can do that by means of algebraic geometry. Actually, mathematically, this notion and the one without the "reverse" part, the multiplication-friendly embeddings, are not so different in the end when it comes to constructions; you can use the same techniques. So you can use algebraic geometry for this, but I also wanted to mention that for quite large parameters, the best thing you can do is still polynomial-interpolation-based constructions, where you maybe have to concatenate two of these over different fields and so on. For example, you can embed 99 copies of F_2 into a field of degree 325, and the best way to do this is by polynomial interpolation; that already seems quite large for any practical purpose. So now, how do we use this, and why is the notion important? The point is that, again, we want to compute our circuit over the binary field. So what the parties do is embed their vectors of inputs into elements of the large finite field by applying phi; then they compute a certain circuit C' with the protocol for the large finite field; and then they retrieve the packed results by applying the inverse of phi. Now, what is this circuit C'? You might think it is the same as the Boolean circuit we want to compute, but it has some differences, and the main one is this: wherever the Boolean circuit has a multiplication gate, an AND, in the circuit over the large extension field you multiply your inputs but then apply the composition of psi and phi, where these come from the reverse multiplication-friendly embedding.
The idea is that, basically, at every point of the protocol we want to have sharings of phi-encodings of the vectors that would appear in the Boolean circuit. By applying this composition, if you have encodings of a and b by phi, where a and b are vectors, then their product gets mapped to an encoding of the component-wise product of a and b, and that is because of the properties of reverse multiplication-friendly embeddings. So this is how we modify the circuit. But then we have two obstacles, because we have introduced a new gate, this phi-composed-with-psi gate. How do we compute it? If it were linear over the large field, the parties could just compute it locally, and it would not increase the complexity. But the problem is that it is not linear over the large field; it is only linear over the base field F_2. That is a problem, and it already arises even for passive adversaries. The other problem is: how do we guarantee that the parties actually input encodings by phi? Because the image of phi is not the full extension field, and the protocol over that field does not care what it gets: it does not care whether it gets an encoding by phi or some other element, as long as it gets some element of the large field. These two things are what we have to solve. Also, some protocols require random elements of the large field, and we may need these to be encodings by phi as well, but that is more or less the same as the second problem. Now, I'm not going to give the details of how to solve this, but just say that we can reduce both of the problems I mentioned to another problem, which is the following: we have a subspace V consisting of vectors of l coordinates in the extension field, but V is only linear over the base field F_2.
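To see the first obstacle concretely, this sketch reuses a toy k = 2, m = 3 interpolation-based embedding (an illustration, not the paper's parameters) and checks that the re-encoding map phi composed with psi commutes with addition in GF(8) but not with multiplication by field elements: it is F_2-linear, but not linear over the large field, so the parties cannot evaluate it locally on their shares.

```python
MOD = 0b1011  # x^3 + x + 1: GF(8) = F_2[x]/(MOD)

def f8_mul(a: int, b: int) -> int:
    """Multiply two elements of GF(8) (3-bit polynomial representation)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= MOD
        b >>= 1
    return r

# Toy k = 2, m = 3 RMFE: phi interpolates (a0, a1) as a degree-<2
# polynomial with values a0 at 0 and a1 at 1; psi evaluates at 0 and 1.
def phi(v: tuple[int, int]) -> int:
    a0, a1 = v
    return a0 | ((a0 ^ a1) << 1)

def psi(h: int) -> tuple[int, int]:
    c0, c1, c2 = h & 1, (h >> 1) & 1, (h >> 2) & 1
    return (c0, c0 ^ c1 ^ c2)

def T(h: int) -> int:
    """The re-encoding gate phi o psi, applied after each field product."""
    return phi(psi(h))

# F_2-linear: T commutes with addition (XOR) in GF(8) ...
assert all(T(a ^ b) == T(a) ^ T(b) for a in range(8) for b in range(8))

# ... but NOT GF(8)-linear: it does not commute with field
# multiplication, e.g. with c = x (0b010) and a = x^2 (0b100).
assert T(f8_mul(2, 4)) != f8_mul(2, T(4))
```

This is exactly why evaluating the new gate needs extra protocol machinery even against passive adversaries.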
And now you want to create sharings of the coordinates of a random element of V. If you know a little bit about the Beerliová-Trubíniová-Hirt protocol, you will think that we are going to use hyper-invertible matrices, and we actually are, but the problem is that they don't work directly: in order to use them, you would need V to be an F_{2^m}-linear subspace, and it is only linear over the base field. To solve that, we need to introduce another technique, which is to apply these hyper-invertible matrices to a related vector space, the tensor product of the extension field with V. This is a vector space over the large field, so there we can create sharings of random elements. You may think, okay, but that's not what you wanted, so why do you do that? Well, the thing is that elements of this tensor product can actually be seen as vectors of m components in V. So basically you are just creating random elements of your subspace V, but in batches of m elements. That is how we solve this problem, and this then gives us the solution to the problems from before. All of this is done in the preprocessing phase. Okay, so I of course don't have time for more details, so let me just finish by saying that we have introduced a methodology to take a multi-party computation protocol that works over a large field and use it to evaluate several instances of a Boolean circuit, or a circuit over a small field. With it, we got these results showing that we can remove the limitation on the field size in Beerliová-Trubíniová and Hirt. The main technical tool is these reverse multiplication-friendly embeddings, although we also use these other tricks with the tensor product and so on, which we think may be of independent interest. And I guess the natural future work is to extend these techniques to other models. The point is that it probably can be done.
The point is how much complexity it costs, I mean, getting the right complexity, because we need to add all these extra steps, and they should not add more complexity than necessary. So, yeah, that's the end of the talk, if you have questions.
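Returning to the hyper-invertible matrices mentioned earlier, here is a minimal sketch of the standard Lagrange-interpolation construction over a toy prime field F_13; the field and evaluation points are illustrative assumptions (the protocols discussed above use extension fields of characteristic two, but the construction is the same). The matrix that maps the evaluations of a degree-< n polynomial at one set of distinct points to its evaluations at a disjoint set of distinct points has every square submatrix invertible, and the snippet verifies this exhaustively for n = 3.

```python
from itertools import combinations

P = 13  # toy prime field F_13

def inv(a: int) -> int:
    """Modular inverse in F_P (P prime), via Fermat's little theorem."""
    return pow(a, P - 2, P)

def lagrange_matrix(alphas: list[int], betas: list[int]) -> list[list[int]]:
    """M maps evaluations of a degree-<n polynomial at `alphas`
    to its evaluations at `betas`: M[i][j] is the j-th Lagrange
    basis polynomial (over `alphas`) evaluated at betas[i]."""
    n = len(alphas)
    M = [[1] * n for _ in range(n)]
    for i, b in enumerate(betas):
        for j in range(n):
            for k in range(n):
                if k != j:
                    M[i][j] = M[i][j] * ((b - alphas[k]) % P) % P
                    M[i][j] = M[i][j] * inv((alphas[j] - alphas[k]) % P) % P
    return M

def det_mod_p(A: list[list[int]]) -> int:
    """Determinant over F_P by Gaussian elimination with pivoting."""
    A = [row[:] for row in A]
    n, d = len(A), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if A[r][c]), None)
        if piv is None:
            return 0
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            d = -d % P
        d = d * A[c][c] % P
        ic = inv(A[c][c])
        for r in range(c + 1, n):
            f = A[r][c] * ic % P
            for k in range(c, n):
                A[r][k] = (A[r][k] - f * A[c][k]) % P
    return d % P

n = 3
M = lagrange_matrix([1, 2, 3], [4, 5, 6])  # all 2n points distinct

# Hyper-invertibility: EVERY square submatrix of M is invertible.
for s in range(1, n + 1):
    for rows in combinations(range(n), s):
        for cols in combinations(range(n), s):
            sub = [[M[r][c] for c in cols] for r in rows]
            assert det_mod_p(sub) != 0
```

This every-submatrix property is what lets the protocol extract many guaranteed-random, guaranteed-correct sharings from a batch of party-contributed ones.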