Yeah, thank you for the introduction. So this is joint work with Benny Applebaum and Zvika Brakerski, and I will talk about why degree two is complete for the round complexity of malicious MPC. So in multi-party computation, we have multiple parties, and they have private communication channels between them, and each of them has some private input. Their goal is to mutually compute some functionality over all of their inputs, in such a way that each of them only learns an output, without learning anything else about the inputs. Now, there are many different security notions that were defined for MPC. For example, there is the semi-honest setting, where the parties must follow the protocol, and all they can do is observe all of the information that they get and try to learn something from it. And there is the malicious setting, where they can actively deviate from what they're supposed to do and send arbitrary messages. You can also consider perfect security versus computational security, and there are other notions, such as the corruption threshold and whether we allow abort or not, and there are many different combinations of those settings. Now, you can also measure protocols according to their efficiency. So you can consider the computational complexity and the communication complexity of protocols, and you can also consider the number of rounds. In many classic results, the number of rounds is correlated with the degree of the function to be computed. So our goal in this work is to reduce the number of rounds in a generic way that is independent of the security notion. The high-level idea is to show a non-interactive reduction from any function to a function of degree two. In that way, once we have a protocol that computes this degree-2 function, we have a protocol that computes any function. OK, so what is a non-interactive reduction? Let's say we have a function f that the parties want to compute.
But now we say that instead of communicating with each other, there is some oracle that computes a functionality h, and the only thing that the parties can do is query this oracle once. So the reduction has a very specific structure. In the first step, each party can do some local preprocessing over its input. Then they all perform one oracle query, sending their processed inputs, and they all get a response. Eventually, each of them can do post-processing and derive the final output. Now notice that when we consider the malicious setting, the only chance the parties have to deviate from the protocol is in the preprocessing phase. That's particularly important for us later. And why is this primitive so strong? Because if we have a non-interactive reduction to a function h, then given any way to compute this functionality h, we get a way to compute f in the same number of rounds. It can be either a protocol that computes h, or a trusted party, or even some physical device. OK, so let's go over some previous results. Many classic protocols are actually implicitly non-interactive reductions. For example, Yao's protocol is a non-interactive reduction in the two-party semi-honest setting. And in the multi-party setting, we have reductions to degree-3 functions, which usually use something that is called randomized encoding. So a natural question is whether there is a reduction to degree two in the multi-party setting. There were a few breakthrough results in the last year that showed protocols with only two rounds in the multi-party setting, but they don't show a general reduction; they face each security notion separately and try to solve the problem in independent steps. And recently, in TCC, there were two works that showed a non-interactive reduction, but only in the semi-honest setting.
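The three-step structure described above (local preprocessing, one oracle query, local post-processing) can be sketched schematically as follows. This is my own illustration, not code from the paper; the function names and the toy XOR instantiation are hypothetical.

```python
# Schematic sketch of a non-interactive reduction.
def run_reduction(inputs, preprocess, h, postprocess):
    # Step 1: local preprocessing -- the only place a malicious party can deviate.
    queries = [preprocess(i, x) for i, x in enumerate(inputs)]
    # Step 2: a single joint oracle query computing the functionality h.
    responses = h(queries)
    # Step 3: local post-processing to derive each party's final output.
    return [postprocess(i, r) for i, r in enumerate(responses)]

# Trivial instantiation: the target f is the XOR of all input bits, and we
# "reduce" it to h = f itself, with identity pre- and post-processing.
def h_xor(queries):
    out = 0
    for q in queries:
        out ^= q
    return [out] * len(queries)  # every party receives the same response

outputs = run_reduction([1, 0, 1], lambda i, x: x, h_xor, lambda i, r: r)
print(outputs)  # [0, 0, 0], since 1 ^ 0 ^ 1 = 0
```

The point of the formalism is the interchangeability noted above: any implementation of `h` (a protocol, a trusted party, a physical device) can be dropped in for the oracle call without changing the round structure.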
So the natural follow-up question is whether there is such a reduction in the multi-party setting to degree two, but with malicious security. This is the question that we solve, and our answer is yes. OK, so we do so by proving a main theorem that we see as our main result. We call it the master theorem, and it says the following: if we have a protocol that computes f, we can use it in order to define a non-interactive reduction from f to a function of degree two. What is nice about this master theorem is that it's not sensitive to the security setting: it preserves the security guarantee of the protocol that we started from. The price that we have to pay is that the complexity grows exponentially with the depth of the protocol that we started from. Now, you can plug many protocols into the master theorem, and as a corollary you get reductions. So we get a completeness theorem that says that every function is maliciously reducible to a degree-2 function, and we can get it either in the information-theoretic setting or in the computational setting. And if we work in the computational setting, then we can avoid the exponential growth. OK, so finally, after having this reduction, we can use various protocols to implement a degree-2 function, and then we end up with explicit protocols, some of which improve known results. In the last slide, I will give concrete examples. OK, so now I'm going to give the proof layout of the master theorem, which is the main technical part. As I said, most works that show a reduction to degree three use something that is called randomized encoding: they show that if you have a function f, you can compute an encoding of it, and the encoding will be of degree three. Our idea is that instead of computing an encoding of the function, we first look at a protocol that computes this function.
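For reference, the standard notion underlying these degree-3 constructions can be stated as follows; the notation here is mine, not from the talk's slides. A randomized encoding of f is a randomized function $\hat{f}(x; r)$ satisfying two conditions:

```latex
% Correctness: the output of f is recoverable from the encoding.
\exists\,\mathrm{Dec}\ \text{s.t.}\ \forall x, r:\quad
  \mathrm{Dec}\big(\hat{f}(x; r)\big) = f(x)

% Privacy: the encoding reveals nothing about x beyond f(x).
\exists\,\mathrm{Sim}\ \text{s.t.}\ \forall x:\quad
  \mathrm{Sim}\big(f(x)\big) \approx \hat{f}(x; r) \ \text{for uniformly random } r
```

Classic constructions give such an $\hat{f}$ whose degree, as a polynomial in x and r, is three; the speaker's approach applies an encoding of this flavor to the protocol's computation rather than to f directly.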
Then we compute something that is not exactly a randomized encoding, but is inspired by it. Because we know that our input is not an arbitrary function but a description of a protocol, we can exploit this to our benefit and define the encoding in such a way that it is of degree two instead of degree three, and also so that the reduction has exactly the same security guarantees as the protocol that we started from. And I want to mention that the idea to start with a protocol and reduce its number of rounds is inspired by the works from last year by Benhamouda and Lin and by Garg and Srinivasan. OK, so now the question is how we define this encoding in such a way that we get degree two instead of degree three. The first step is to take the protocol that we start from and describe it as one large circuit. Every local computation that each of the parties does can be described by some circuit, and then we glue them together; in a minute I'll show how. We get one large circuit that describes the entire computation that should be done by all parties in the protocol. We do so by defining two types of gates: local gates, which correspond to local computations, and transmission gates, which correspond to communication between the parties. By defining this structure, we can then associate every wire with one of the parties. For local computation gates, we associate all of the wires with the party that should perform this computation. For transmission gates, the input wire belongs to the party that sends the message, and the output wires belong to the receivers of the message. Now, the idea that we had in the work that showed security in the semi-honest setting was that because every gate has the property that all of its input wires are associated with a single party, we can let that specific party perform local preprocessing, and in that way we save one degree.
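The circuit structure just described can be sketched as a small data model. This is a hypothetical illustration with types and field names of my own choosing, not the paper's formalism: two gate kinds, every wire tagged with an owning party, and a check for the key invariant that all input wires of any gate share a single owner, which is what lets that party do the local preprocessing that saves one degree.

```python
from dataclasses import dataclass

@dataclass
class Wire:
    owner: int  # the party associated with this wire

@dataclass
class LocalGate:
    party: int       # party performing this local computation
    inputs: list     # input wires, all owned by `party`
    outputs: list    # output wires, also owned by `party`

@dataclass
class TransmissionGate:
    sender: int
    input_wire: Wire   # owned by the sender
    output_wires: list # one wire per receiver, owned by the receivers

def inputs_have_single_owner(gate):
    """The invariant exploited for the degree saving: every gate's input
    wires belong to one party, who can preprocess the gate locally."""
    if isinstance(gate, LocalGate):
        return all(w.owner == gate.party for w in gate.inputs)
    return gate.input_wire.owner == gate.sender

# Party 0 computes locally, then transmits the result to parties 1 and 2.
g1 = LocalGate(party=0, inputs=[Wire(0), Wire(0)], outputs=[Wire(0)])
g2 = TransmissionGate(sender=0, input_wire=Wire(0),
                      output_wires=[Wire(1), Wire(2)])
print(all(inputs_have_single_owner(g) for g in (g1, g2)))  # True
```

Note that the invariant holds by construction for both gate types: local gates only touch one party's wires, and a transmission gate's sole input wire belongs to the sender.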
So we end up with degree two instead of degree three. Now, the problem that we had to face in this work, in order to generalize it to malicious security, is how to handle parties that cheat in the preprocessing. A priori, it's not even clear that when they don't do the preprocessing honestly, they even end up with something that looks like some sort of randomized encoding. What we did in this work is to slightly change the way the encoding is defined, in such a way that no matter which malicious strategy they choose for the preprocessing, it ends up looking like a change to the local computation that was supposed to be done in that specific gate. In that way, even if a party sends an arbitrary value instead of performing the preprocessing, we can translate it to a malicious strategy against the protocol that we started from. So we preserve the security guarantees of the protocol that we started from. To summarize the results, we have the completeness theorem, which we derive by plugging in protocols. First of all, we get perfect security for threshold n/3 for functions in NC1, and we can get statistical security if we want security in the honest-majority setting. If we want to support any function in P, then we have a computational solution, assuming black-box one-way functions, for any honest majority. And now we can use many different protocols to implement a degree-2 function; in the paper we give a few examples, and we derive the following results. As I said, in the work from TCC we have a semi-honest protocol for any honest majority in two rounds, and now in this work we have a protocol with malicious security with selective abort, and the improvement here is that this now holds for any honest majority. There is a similar result in the work by Ananth et al. that you will hear about in the next talk. Our next new contribution is in the guaranteed-output-delivery model, where we also improve the threshold to n/4.
And we get the number of rounds down to three, which is optimal. For NC1, we have perfect security, and if we want polynomial-time functions, we can do so assuming one-way functions. So yeah, that's it.