Okay, thanks a lot for the introduction. This is joint work with Ivan, Jesper and Michael. The talk is about secure multi-party computation, introduced in the 80s by Yao. Secure multi-party computation allows a set of parties to jointly compute a function on their inputs in such a way that the honest parties receive the correct output, and the output is the only information leaked. Okay? And we want this to hold in the presence of an adversary who can arbitrarily corrupt parties. In our case the adversary is unbounded — we are in the information-theoretic setting — and semi-honest and static. We also follow the real/ideal paradigm. We saw secure multi-party computation in the previous talks, so I'm not going to give more details about this. So the motivating question is: what are the lower bounds on the communication complexity of information-theoretic protocols? And once we establish such lower bounds, the question is whether we can show that they are tight. Let me first go over the state of the art for information-theoretic protocols. There are many information-theoretic protocols, both for the honest majority setting and for the dishonest majority setting with preprocessing. Without loss of generality, I'm going to split them into two big categories. The first category includes what we call the traditional secret-sharing-based gate-by-gate protocols — the GMW, BGW and CCD protocols. These protocols work on secret-shared inputs and evaluate the circuit gate by gate. Initially the inputs are secret-shared, and then we maintain the invariant that every time we compute a gate, the output of that gate is a random secret sharing of the output value. I'm going to be very specific about this in the next slide.
All the current protocols that we know in this model have to communicate for each multiplication gate, so the communication complexity is proportional to n times the circuit size C, where n is the number of parties. Because of that, it follows that the round complexity grows with the multiplicative depth of the circuit. Now, if we give up being efficient in the circuit size of the evaluated function, we do have constant-round protocols. For example, you can think of Yao's protocol in the information-theoretic setting, where instead of implementing the symmetric encryption with cryptographic tools, we use one-time pads. But it's very easy to see that as we go level by level, we have to double the size of the one-time pads, which means the complexity is exponential in the depth. So these protocols are only efficient in the branching-program size of the evaluated function; they are only good for NC1 or log-space computations. This is the state of the art. Information-theoretic protocols are great, especially the ones based on secret sharing, because they are computationally more efficient than the tools we use in the computational setting — more efficient than fully homomorphic encryption, for example — but they require a lot of interaction, right? So the big question is whether we can have something like fully homomorphic encryption in the information-theoretic setting, okay? This question is very hard, and if you think about it, proving it would imply several breakthroughs in different areas. So then maybe we can see whether we can significantly improve the state of the art. Also, we don't have any lower bounds for poly-size information-theoretic randomized encodings — we don't know if they exist. Moreover, we don't have any lower bounds for the gate-by-gate protocols. So let's tackle this problem and see if we can say something for gate-by-gate protocols. There are all these open questions.
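To make the doubling concrete, here is a minimal toy sketch of one-time-pad garbling of a single AND gate — my own simplified illustration, not the construction from any specific paper. Each input-wire label must carry one fresh pad per table row it can open, so input labels are twice the length of output labels, and label size doubles at each level of the circuit. (The row index is passed in the clear here; a real garbling scheme would hide it, e.g. with point-and-permute.)

```python
import secrets

L = 16  # length of an output-wire label, in bytes

def xor(x, y):
    return bytes(a ^ b for a, b in zip(x, y))

def garble_and(out_labels):
    """One-time-pad garbling of a single AND gate.

    out_labels = (k0, k1): the two L-byte labels of the output wire.
    Each input-wire label carries one fresh pad per row it can open,
    i.e. TWO pads of length L -- so input labels are twice as long as
    output labels, and label size doubles at every circuit level.
    """
    pu = [[secrets.token_bytes(L) for _ in range(2)] for _ in range(2)]
    pv = [[secrets.token_bytes(L) for _ in range(2)] for _ in range(2)]
    # row (a, b) encrypts the output label for a AND b under two one-time pads
    table = {(a, b): xor(out_labels[a & b], xor(pu[a][b], pv[b][a]))
             for a in (0, 1) for b in (0, 1)}
    return table, pu, pv  # label for value a on wire u is the pad pair pu[a]

def evaluate(table, lab_u, lab_v, a, b):
    """Decrypt row (a, b) using the pads inside the two input labels."""
    return xor(table[(a, b)], xor(lab_u[b], lab_v[a]))
```

The point of the sketch is only the size bookkeeping: an input label consists of two pads of length L, so input labels are 2L bytes while output labels are L bytes, giving the 2^depth blow-up mentioned above.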
Let's dig into the problem and see if we can at least partially answer the question at the bottom. The question is whether we can significantly improve on this and show that maybe we don't have to communicate per multiplication gate — or maybe we do. The answer is yes, we do have to communicate per multiplication gate; that's what we show. What does that mean? It's a partial answer, and since these are the only protocols we know that are efficient in the circuit size, it shows that we would have to invent a completely new approach to achieve the goal at the bottom — or even prove that it's impossible. Okay, so our results. We prove that you have to communicate per multiplication gate, and it trivially follows that the round complexity is proportional to the multiplicative depth of the circuit. We even show this holds given preprocessing. We also show that we cannot have bounds that grow with the field size — I'll be very specific about this later in the talk. So let me be very specific about the model and then I'll show you the impossibility results. We are in the synchronous setting with point-to-point secure channels; we have n-party protocols, t out of n static corruptions, semi-honest security, statistical security. And the weaker the assumptions, the stronger the impossibility results. So let's see how these gate-by-gate protocols that we seem to use all the time work. We have multiplication and addition gates. At the beginning we secret-share the inputs, and at the end we have an output secret sharing. As an invariant, every time we compute a gate we run a multiplication gate protocol — and because we work on secret shares, additions are for free.
By for free, I mean that we don't need to communicate for additions, but we do have to communicate for multiplications. Let me emphasize what this multiplication gate protocol does. Since we work on secret shares, we secret-share the inputs a and b — the bracket notation denotes secret sharing, based on a secret sharing scheme S with n players and privacy threshold t. The multiplication gate protocol receives inputs that are secret-shared and outputs a secret sharing of a times b. For correctness we of course want it to compute correctly, and the output secret sharing should reveal only a times b, nothing else. Okay, so this is how these gate-by-gate protocols work. We are interested in the communication in the red area: I don't care how much communication happens in the input secret sharing phase or in the output reconstruction phase, because that doesn't tell me how communication-intensive the evaluation of the circuit is. There are many beautiful works lower-bounding the communication, starting with the work of Kushilevitz, and there is the recent work of Data, Prabhakaran and Prabhakaran. But those works show lower bounds for additions — and in our model additions are for free — and they consider inputs in the clear, whereas here we start with inputs that are secret-shared. So we cannot really use techniques from those papers, because we work with secret-shared data, not inputs in the clear, and we really want to see what the communication complexity is in the red area. Okay, so this is our lower bound in the honest majority setting: we prove that the message complexity per multiplication gate is at least 2t plus one — basically, we have to communicate per multiplication gate. And we cannot have a bound where the communication complexity grows with the size of the field. We can also consider very nice circuits.
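As a concrete illustration of why additions are free but multiplications are not, here is a toy Python sketch of Shamir secret sharing over a prime field (my own minimal version, not the protocols from the talk): parties add their shares locally to get a sharing of the sum, while multiplying shares locally doubles the polynomial degree and yields a non-random sharing — which is why a multiplication gate needs a dedicated interactive protocol.

```python
import random

P = 2**61 - 1  # a Mersenne prime; shares live in F_P

def share(secret, t, n):
    """Shamir-share `secret` among n parties with privacy threshold t."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)]

def reconstruct(points):
    """Lagrange-interpolate the secret (the value at 0) from (x, share) pairs."""
    secret = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

a_sh = share(5, t=2, n=5)
b_sh = share(7, t=2, n=5)

# addition is free: each party adds its two shares locally
sum_sh = [(x + y) % P for x, y in zip(a_sh, b_sh)]
assert reconstruct(list(enumerate(sum_sh, 1))[:3]) == 12  # t+1 = 3 shares suffice

# multiplying shares locally doubles the degree to 2t: now all 2t+1 = 5
# shares are needed, and the sharing is no longer random -- this is why
# multiplication gates require interaction.
prod_sh = [(x * y) % P for x, y in zip(a_sh, b_sh)]
assert reconstruct(list(enumerate(prod_sh, 1))) == 35
```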
Okay, so suppose we have k multiplications. If the circuit is nice and the number of players grows with k, then we can do all these k multiplications at once, based on packed secret sharing. But that works only if the number of parties grows with the number of multiplications — what if we have a constant number of parties? We show that with a constant number of parties, the communication complexity indeed has to grow linearly with k. We show this for a restricted class of secret sharing schemes, but it is general enough to include Shamir secret sharing. We also prove something about the computational complexity: everybody has to work hard, you cannot save any computational work. For example, we cannot have lazy parties that do sublinear computation in k. So for the k-wise amortization this proves that current methods based on packed secret sharing are optimal up to constant factors, okay? The previous results were for the honest majority setting. Now we move to the dishonest majority setting, where we assume preprocessing — that's the only way we can do dishonest majority in the information-theoretic setting. There we prove the following: if the output sharing is additive, we prove that you have to communicate per multiplication gate, and for arbitrary secret sharing schemes we prove that an inner-product gate has to communicate, okay? I'm going to start with a warm-up for the honest majority case, where we prove that we have to communicate per multiplication gate. I'm going to simplify the proof: instead of showing the bound for 2t messages, I'm going to show it for t. So this is a warm-up, and it's very easy to see. So let me.
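The packing idea can be sketched as follows — a toy Python version with illustrative parameters, not a secure instantiation (a real deployment needs the number of parties large enough relative to t and k). The k secrets are hidden in a single polynomial at distinct evaluation points, so one local operation on shares acts on all k secrets at once.

```python
import random

P = 2**61 - 1  # prime field for the shares

def interpolate(points, x):
    """Evaluate at x the unique polynomial through `points` (Lagrange)."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def pack_share(values, t, n):
    """Packed Shamir: hide k secrets in ONE degree-(t+k-1) polynomial,
    placed at evaluation points -1, ..., -k; shares are evaluations at 1..n."""
    k = len(values)
    pts = [((-(j + 1)) % P, s) for j, s in enumerate(values)]
    # t extra random points (at unused x-coordinates) randomize the polynomial
    pts += [(n + 1 + i, random.randrange(P)) for i in range(t)]
    return [interpolate(pts, x) for x in range(1, n + 1)]

def pack_reconstruct(shares_with_x, k):
    """Recover the k packed secrets from t+k shares given as (x, value) pairs."""
    return [interpolate(shares_with_x, (-(j + 1)) % P) for j in range(k)]

# one local addition of shares adds all k secrets at once
t, n = 1, 5
a = pack_share([1, 2, 3], t, n)
b = pack_share([4, 5, 6], t, n)
s = [(x + y) % P for x, y in zip(a, b)]
assert pack_reconstruct(list(enumerate(s, 1))[:t + 3], 3) == [5, 7, 9]
```

The same batching applies to multiplications, which is where the amortized communication savings in the talk come from; as stated above, this only helps when n grows with k.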
Okay, so suppose that we have a multiplication gate protocol which is too good: it doesn't communicate a lot — say its message complexity is at most t. We want to prove that this is not possible. For the proof, suppose we do have such a protocol, and take the t parties that communicate among each other, plus one more party. What I'm going to do is build a two-party protocol that computes the product: I'll have two parties, Alice and Bob, and construct a protocol that computes a times b using the multiplication gate protocol. How? Alice is going to emulate all these t parties that communicate among each other, and Bob is going to emulate the last party. Everything is going to happen in Alice's head — we heard a lot about this "in the head" idea earlier. So at the beginning, Alice and Bob secret-share their inputs, and then together they emulate the multiplication gate protocol. At the end of the multiplication gate protocol they hold the output shares, and they just exchange them: Bob sends share c_{t+1} to Alice, and Alice sends shares c_1 up to c_t to Bob. Correctness trivially follows, because Alice and Bob each end up with exactly t plus one shares, so they can reconstruct. And privacy follows from the privacy of the multiplication gate protocol that we run in the middle. Remember, there is no communication between the t emulated parties and the last party — all the communication happens among the t parties, and that happens entirely in Alice's head. So it's actually very easy to show that each party can simulate the shares of the other party. Suppose that one of the two parties is corrupted, either Alice or Bob.
We can simulate the shares received from the other party just from the shares of the corrupted party and the output a times b — and a times b comes from the output of the emulation of the multiplication gate protocol. By assumption, the multiplication gate protocol reveals only a times b, so we can simulate, okay? So that's the bound. But what did we build here? What is the contradiction? We need a contradiction. I showed you a protocol that computes the product a times b between two parties, using the multiplication gate protocol. But if you think about it, we know that it's impossible to do two-party computation in the information-theoretic setting from scratch. It's just impossible — it's very well known. Yet I showed you that we just did it. That is the contradiction: we cannot do two-party computation from scratch, and I showed you that we did it here. Now, something I want to point out: we did this for the honest majority setting, and we reached the contradiction based on the fact that two-party computation from scratch is impossible. Can we reach a contradiction in the same way for the preprocessing model, where we use preprocessed data? The answer is no, because we do know how to do two-party computation when we have preprocessed data in the dishonest majority setting. So this argument completely breaks down if we have a preprocessing phase, and we must find another way to reach a contradiction. So here is our result for the dishonest majority setting; I'm going to give you the main idea and the main difficulty of the proof. We show that per multiplication gate, the message complexity is at least n minus one in the preprocessing model. So here is the difficulty.
Before, we were emulating this multiplication gate protocol, right? But now the multiplication gate protocol uses preprocessed data, and when we want to reconstruct, we also have to use some preprocessed data. So the main difficulty is that, given preprocessed data, we need to reach some other contradiction. There is a result of Winkler and Wullschleger from 2010 that gives lower bounds on the amount of preprocessed data you need to compute any function: you give me any function, and it tells you how much preprocessed data it needs. So we try to get the contradiction from there. At a high level, what we show is that while doing this reduction, we are able to reuse some preprocessed data when we have additive secret sharing, okay? We prove that we can reuse preprocessed data, and we construct a protocol that uses less preprocessed data than the Winkler–Wullschleger bound allows — and that's how we reach our contradiction. If we consider arbitrary secret sharing schemes, there are even more complications, okay? That's the main difficulty. But what I would really like to show now is something I mentioned in the honest majority setting. We saw that we have to send 2t plus one messages in the honest majority case — but do these messages have to grow with the field size? For current protocols this is true: their communication complexity does depend on the field size. But, surprisingly, we show that the answer is no: there is a secret sharing scheme for which this is not the case, and I'm going to give you the counterexample now. It goes as follows. Suppose that we want to secret-share an input a which belongs to some field F. I'm going to represent it by a pair (z_a, l_a) in the following way.
If a is zero, I set z_a to zero; otherwise I set it to one. Now for the l_a value: if a is zero, I choose a random exponent; otherwise I take the discrete logarithm of a with respect to some fixed generator of the multiplicative group of F. And now it's very easy to see that to multiply, you multiply z_a times z_b and you add l_a and l_b, reducing modulo the order of the multiplicative group. So now let's see — here was our multiplication gate protocol. I showed you how we represent our elements now: we secret-share z_a and z_b using, for example, Shamir, and we additively secret-share l_a and l_b using some additive secret sharing. We give those shares to the multiplication gate protocol, because it works on shares, and we get the result out. Okay, so let's see what we did. Notice that when we multiply z_a and z_b, these values are bits, right? So I can use any multiplication gate protocol that operates on bits, and the communication complexity of that protocol, because it operates on bits, is independent of the field size. And we said that we don't need communication for adding the shares of l_a and l_b. So this clearly shows that the communication complexity can be independent of the size of the field — you see, I only had to do a multiplication on bits. Okay, so that's the counterexample: we cannot get a bound that grows with the size of the field; the answer is no. So let me conclude with open problems. We saw that there were many open problems in the information-theoretic setting, and we still have many. We tackled the protocols that we actually use and know.
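The representation just described can be sketched in a few lines of Python — a toy version over F_11 with a brute-force discrete log, which is of course only feasible for tiny fields. Multiplication of field elements becomes a bit-AND on the z-components plus an addition of the l-components modulo the order of the multiplicative group.

```python
import random

P = 11  # toy prime field F_11
G = 2   # 2 generates the multiplicative group of F_11 (order 10)

def dlog(a):
    """Brute-force discrete log of a != 0 with respect to generator G."""
    x, e = 1, 0
    while x != a:
        x, e = (x * G) % P, e + 1
    return e

def encode(a):
    """Represent a in F_P as (z, l): z flags a != 0; l is log_G(a),
    or a random exponent when a == 0 (so l leaks nothing then)."""
    if a == 0:
        return (0, random.randrange(P - 1))
    return (1, dlog(a))

def mul(ra, rb):
    """Multiply two encoded elements: AND the z-bits, add the logs mod P-1."""
    za, la = ra
    zb, lb = rb
    return (za * zb, (la + lb) % (P - 1))

def decode(r):
    z, l = r
    return 0 if z == 0 else pow(G, l, P)

# sanity check: the encoding is multiplicative for every pair
for a in range(P):
    for b in range(P):
        assert decode(mul(encode(a), encode(b))) == (a * b) % P
```

This is why, in the talk's counterexample, the z-components can be handled by any bit-multiplication protocol (cost independent of |F|) while the l-components are combined by free local additions of additive shares.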
The secret-sharing-based gate-by-gate protocols are the only ones that are efficient in the circuit size of the evaluated function; the ones based on randomized encodings are only efficient in the branching-program size. And we showed a lower bound saying that we cannot significantly improve protocols that fall in this category: we have to communicate per multiplication gate. So an obvious open question is whether our lower bounds extend to any protocol, not just the secret-sharing gate-by-gate protocols, which are the only ones we actually know. That's why I said before that we would have to construct completely new approaches to information-theoretic protocols if we want to achieve this — if it's even possible; maybe it's not. And as I said, it would imply many breakthroughs in many areas. Okay, also something I want to say: here we don't assume that the secret sharing schemes are the same — they can be different. We only make an assumption on the thresholds of the secret sharing: for the honest majority case the threshold is at least 2t, and for the dishonest majority case it is just n. I also don't make any assumption on the multiplication gate protocols — maybe my circuit uses different multiplication protocols at different gates. Now, another obvious question: what we assume is that the output secret sharing reveals nothing other than a times b. This is what all the current protocols do, and this is what we prove our bounds for. So an obvious question is whether, for the intermediate gates of the circuit, we can be more relaxed and maybe allow the distribution of shares to carry some traces of the inputs. Why? Because at the intermediate gates maybe I don't have to communicate — maybe I don't have to hide information. I only need this property for the final gate of the circuit.
At the final gate I want to reveal only a times b, but at the intermediate gates maybe we can do something. And that concludes my talk.