interested and focused after lunch. Thank you. And due to that, the program committee has assigned the three best talks to this session. So you've shown excellent taste to come here. The first talk is Round-Optimal and Communication-Efficient Multi-Party Computation by Michele Ciampi, Rafail Ostrovsky, Hendrik Waldner and Vasilis Zikas, and Hendrik will give the talk. Yes, thank you for the introduction. So let's directly start with some motivation for multi-party computation and this work. In multi-party computation, we have multiple parties who want to jointly compute some function. Here we have Alice and Bob, and they both have their inputs x1 and x2. In the end, they want to learn f applied to x1, x2 without revealing any more information about their private inputs. And they're going to do this by interacting in several rounds of communication: they exchange a few messages with each other, and then they should be able to compute this function. There are two bottlenecks in this setting. The first bottleneck is the number of messages that these parties need to exchange. The second bottleneck is the size of the messages that are exchanged. There's been a lot of work with respect to the first question, and it has recently been shown that four rounds are necessary and sufficient. And we're here in the setting where the adversaries are malicious, there is no setup, and we can have a dishonest majority. So this first question is basically solved, and we're going to focus in this talk on the second point while preserving the first one. What we would like to have is a protocol that has the minimal number of rounds with small messages. There has also been some prior work in this area, in particular in the semi-honest setting, which is with a bit weaker adversaries.
So there have been two works a few years ago, one by Benhamouda and Lin, and one by Garg and Srinivasan, where they show how to realize two-round multi-party computation based on OT. There, the communication complexity depends on the size of the function that is being computed. Then there have also been some works for maliciously secure round-optimal MPC. There's been one work on promise zero knowledge, where they show how to realize four-round MPC based on DDH, quadratic residuosity, or Nth residuosity, also with a communication complexity that depends on the size of the function being computed. And there has been another work by Ciampi et al., where they have the same communication complexity, but based on OT. In the semi-honest setting, there have also been some works with round-optimal protocols and improved communication complexity. There's work by Ananth et al. and by Quach et al., where the communication complexity depends on the depth of the function being computed, and they realize this based on the LWE assumption. Then there has been another work by Ananth et al., where they rely on Ring LWE, the Decisional Small Polynomial Ratio (DSPR) assumption, and OT, and they manage to get a dependency only on the input and output lengths of the function. What we're going to focus on in this talk is basically this missing bottom-right box. We have two results. The first one also achieves a communication complexity depending on the depth of the function, based on LWE. The second result has a dependence only on the input and output lengths, based on the Ring LWE assumption, DSPR, and OT. Right. And the starting point for our first result is this compiler of Ananth et al. that relies on a notion called functional encryption combiners. Before introducing functional encryption combiners, it's probably good to introduce functional encryption.
So in functional encryption, we again have our two parties. Here, what Alice can do is sample a master secret key that she can then use, together with a function f, to generate a functional key, as well as a ciphertext, which here is an encryption of x. Then she can send the ciphertext together with this functional key to Bob. Now Bob can use the functional key and the decryption procedure to learn the function associated with the key, applied to the underlying message. And if Alice would, for example, like to send multiple ciphertexts, Bob can always reuse this functional key to do this computation. Security in this setting ensures that nothing more about the underlying plaintext is leaked than what is leaked by the function evaluation. The notion of functional encryption combiners basically adapts this to the multi-party setting. Here, both of the parties can sample their master secret keys, and they can also create functional key shares: they use their individual master secret keys, together with a global two-input function, to generate these functional key shares. These functional key shares, as well as the master secret keys, can be combined; this is what the decomposability refers to here. Combining the key shares yields the full functional key, and the master secret key, which consists of the combination of both of their master secret keys, can then again be used, together with some randomness, to encrypt the two inputs. The full functional key can then be used for the decryption of these two input messages. Right. And because we want to have good communication complexity in our resulting protocols, we need some requirement on the size of the keys here. So we require that the key shares only have a dependency on the depth of the function in this setting.
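The interface Alice and Bob use here can be sketched as follows. This is an insecure placeholder with hypothetical function names that only illustrates the data flow of functional encryption from the talk (the ciphertext does not actually hide x); it is not a real scheme.

```python
import secrets

# Insecure toy illustrating only the functional-encryption interface from
# the talk; a real scheme would cryptographically hide x in the ciphertext.
def setup():
    return secrets.token_bytes(16)      # master secret key (msk)

def keygen(msk, f):
    return (msk, f)                     # functional key sk_f, bound to f

def encrypt(msk, x):
    return (msk, x)                     # toy ciphertext; no hiding attempted

def decrypt(sk_f, ct):
    msk_k, f = sk_f
    msk_c, x = ct
    assert msk_k == msk_c, "key and ciphertext use different master keys"
    return f(x)                         # Bob learns f(x) and, in a real scheme, nothing more
```

For example, with f(x) = x + 1 and a ciphertext for x = 41, decryption returns 42, and the same functional key keeps working for any further ciphertexts Alice sends.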
Because the idea in the construction is to somehow move the computation outside of the MPC protocol to achieve this communication complexity. So how does this look in more detail? I'm going to recap now this protocol of Ananth et al. They start from a round-optimal semi-honest secure protocol, which has two rounds. In the first step, both of the parties sample their master secret keys. Then they use these master secret keys to generate their functional key shares, which are exchanged outside of this two-round MPC protocol, so that both of the parties can reconstruct the full functional key. What additionally happens, to generate the ciphertext encrypting their inputs in this two-round protocol, is that they both use their master secret keys as an input, together with their plaintexts and some randomness. The function that the protocol computes is simply an encryption using this combined master secret key together with the combined randomness. As an output, both of the parties obtain this ciphertext, and they can use the functional key that they have previously exchanged to do the decryption. Right. And why is this protocol succinct now? Well, the only messages inside the MPC protocol are independent of the function, and the only thing that relies on the function are these functional key shares, and they are quite small. So this is exactly what we want. Now, the idea to go to the maliciously secure setting would be to replace this two-round protocol with a maliciously secure four-round protocol. So this is basically a one-to-one translation of how this transformation would look. But because the adversary is more powerful now, several issues remain, so this is not secure yet. The first thing that is still a problem is that these functional key shares can be generated maliciously. So they might be some arbitrary values and not an output of this key generation procedure.
The randomness used for the encryption that is generated inside the MPC might be bad, because it's just two concatenated strings. And similar to the functional key shares, the master secret keys also do not need to be generated honestly. So now we're going to have a look at these three issues and how to solve them to obtain a secure protocol. We're going to start with the first problem, the maliciously generated functional key shares. Here, the security definition of the combiner basically guarantees that the only thing that can happen with maliciously generated key shares is that an honest party might receive a bad output; it is not possible for an adversary to learn more about the underlying inputs of the parties. And this exactly corresponds to a notion in MPC which is called privacy with knowledge of outputs. There have been compilers that turn MPC protocols that are secure with privacy with knowledge of outputs into fully secure protocols, while preserving the round complexity and also the communication complexity. So this basically solves the first issue that we had. The second issue is the concatenation of the randomness. What we can simply do here is let both parties input longer random strings into the MPC protocol, and then XOR those two together. As long as one of these strings has been random, everything is fine. So this solves the second issue. The third issue requires a bit more work. One idea here is to force the parties to honestly generate a key by doing some joint randomness computation. So what would a possible approach for this be? We could have this protocol where, in the first round, both of the parties input some randomness and their private inputs. Then the function that this protocol computes simply computes this master secret key using the combination of these values, as well as the ciphertext for the combiner.
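The XOR fix for the second issue can be sketched in a few lines (a minimal sketch, with illustrative names): each party contributes a string, and the encryption randomness is their bitwise XOR, which is uniformly random as long as at least one contribution is.

```python
import secrets

def combine_randomness(r1: bytes, r2: bytes) -> bytes:
    """XOR the two parties' contributions; if either r1 or r2 is uniformly
    random and independent of the other, the result is uniformly random."""
    assert len(r1) == len(r2)
    return bytes(a ^ b for a, b in zip(r1, r2))

honest = secrets.token_bytes(16)   # honest party's contribution
adversarial = bytes(16)            # even an all-zero malicious string is fine
randomness = combine_randomness(honest, adversarial)
```

Note that XOR has this property while concatenation does not: a malicious half of a concatenated string stays malicious, but XORing it with a uniform string yields a uniform string.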
After this MPC protocol has been executed, both of the parties obtain the corresponding master secret keys, so they can use those to generate these functional key shares. Then they can exchange these and finally compute the functional decryption output as in the previous construction. But the problem is that this solution does not quite work, because we want it to be round optimal, and here we would need an additional fifth round: first the parties need to execute this MPC protocol to obtain their keys, and only then can they exchange these functional key shares. So the idea to circumvent this issue is to do this coin flipping outside of the protocol. Right. So what would the idea here be? In the beginning, both of the parties commit to two random values and send the commitments to the other party. Then, in the second round, each party sends one of these random values in the clear, without revealing any opening. Each party can then take the remaining value inside its own commitment and combine it with the value received from the other party to obtain some joint randomness. This allows the parties to obtain their master secret keys at the end of the second round. Then they can, as before, generate the functional key shares and exchange them. And now something additional needs to happen: both of the parties need to somehow prove that they have done all this previous computation correctly. So what the parties can do is generate a transcript that consists of the secret information for the messages that they have exchanged outside of the protocol. Here it would, for example, consist of the opening for the commitment and the remaining random value inside of it.
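The commit-then-reveal coin flipping can be sketched as follows. This is a simplified sketch assuming a hash-based commitment; the talk only requires a generic commitment scheme, and the function names are illustrative.

```python
import hashlib
import secrets

def commit(value: bytes):
    """Hash-based commitment: com = H(opening || value)."""
    opening = secrets.token_bytes(16)
    return hashlib.sha256(opening + value).hexdigest(), opening

def coin_flip_round1():
    """Round 1: sample two random values, commit to both, send the commitments."""
    r = [secrets.token_bytes(16), secrets.token_bytes(16)]
    commitments = [commit(v) for v in r]
    return r, commitments

def coin_flip_round2(my_r, received_value):
    """Round 2: we sent my_r[0] in the clear (without its opening) and received
    the other party's revealed value; combine it with our still-committed my_r[1]."""
    return bytes(a ^ b for a, b in zip(my_r[1], received_value))

# Each party runs round 1, exchanges commitments, then reveals one value.
r_a, _ = coin_flip_round1()
r_b, _ = coin_flip_round1()
joint_a = coin_flip_round2(r_a, r_b[0])  # Alice's joint randomness
joint_b = coin_flip_round2(r_b, r_a[0])  # Bob's joint randomness (a different value)
```

Each party ends up with its own joint randomness (its committed value XOR the other party's revealed value), from which it derives its master secret key; honest use of the committed value is later enforced by the check inside the MPC protocol.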
They're also going to generate a transcript which consists of all the public values that they have received from the other party, so this is basically the commitment as well as one of the random values. Then they're going to use these three things as an input to the underlying MPC protocol. What the MPC protocol is going to do is basically compute this check: it checks if the public values provided by one of the parties actually match the private transcript provided by the other party, to make sure that honest behavior has happened. Then, in the last step, again, the ciphertext of the combiner is created and output to the parties, and the parties can again use the combined functional key for the decryption. Right, good. But we still have one problem here. In the previous setting, we always said that the input for the MPC protocol is used in the first round, but now we're requiring the input to be used in the third round. Additionally, we require the input to the inner MPC protocol to use some information that has been generated outside of the MPC protocol. There is a notion called delayed-input MPC, which allows for later inputs to the protocol, but not for this type of security. So we need a stronger notion, which we term k-delayed input function MPC. To highlight the differences between the properties of these two definitions: in the k-delayed input MPC setting, we have the situation where the input is needed only in round k, as we also saw in the protocol before, but it is not possible to adaptively decide on this input; it needs to be decided before the execution of the protocol starts. In our new notion of k-delayed input function MPC, the first thing is the same, so the input is only needed in round k of the protocol, but additionally, it is allowed that this value is partially decided during the protocol execution.
So that, in this case, would be the commitment as well as one of the random values that are output afterwards. For those values, we don't care about privacy; we only care about the fact that they are properly used inside the MPC protocol. In the paper, we show how to realize this notion of k-delayed input function MPC by using a general two-party k-delayed input MPC protocol together with an information-theoretic MAC to obtain an n-party k-delayed input function MPC protocol. Right. So this is again the final construction, and with this new notion of k-delayed input function MPC, we are able to argue the security of this construction. This basically solves all three issues that we have discussed here. What we still need to argue is that the communication complexity of this construction really depends only on the depth of the function that is being computed. And we can see this because all of these messages are basically independent of the function that we are computing: the only thing that we are doing is exchanging some commitments and some of the values that these commitments have been generated to, then these values are checked and the ciphertext is generated, which only depends on the input length. The only thing here that really depends on the size of the function are again these functional key shares. But due to the succinctness property of the combiner, we are guaranteed that these also only depend on the depth of this function. Still, we have some dependency on this depth, and we would like to do better than this. So the question is whether we can do something better, and the answer is yes, using another primitive, which is the notion of multi-key fully homomorphic encryption.
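An information-theoretic MAC of the kind used in such transformations can be sketched as a standard one-time MAC over a prime field; this is the generic textbook construction, not necessarily the exact instantiation in the paper.

```python
import secrets

P = (1 << 127) - 1   # a Mersenne prime; messages are field elements below P

def mac_keygen():
    """One-time key (a, b), uniform over the field."""
    return secrets.randbelow(P), secrets.randbelow(P)

def mac(key, m):
    """Tag t = a*m + b mod P; information-theoretically unforgeable
    as long as the key authenticates a single message."""
    a, b = key
    return (a * m + b) % P

def verify(key, m, tag):
    return mac(key, m) == tag
```

Unforgeability here needs no computational assumption: seeing one pair (m, t) leaves the key coefficient a uniformly distributed, so a forgery on any m' != m succeeds only with probability 1/P.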
So here, similar to the combiner, both of the parties can generate their own public key, secret key pair, and then they can use their public key to generate an encryption of their input. The public keys of both parties can be joined together, and homomorphic evaluation can happen that takes as input ciphertexts provided by both of the parties. Alice would provide her encryption of x1, Bob the encryption of x2, and then together with this global public key a function f can be evaluated, resulting in a single ciphertext that encrypts this complete function evaluation. Afterwards, if decryption is required, then we need both of the secret keys generated by Alice and Bob to obtain this function evaluation. The idea here is similar to the combiner case: we would like to use this primitive to move the computation outside of the MPC protocol. We also need some type of succinctness here, which is called compactness, and basically means that the resulting ciphertext after the evaluation procedure has been executed is independent of the function f. So now we can have a look at how this actually looks when we execute this in the protocol. It's quite similar as before: both of the parties generate their key pairs, then they encrypt their secret inputs. In the first round, outside of the protocol, they're going to exchange the public keys together with the encryptions. Then they can both locally compute this function evaluation using the homomorphic encryption scheme. And in the third round, they use this again as an input to the underlying MPC protocol together with this transcript. The public transcript of Bob here, for example, would consist of the public key of Alice and the ciphertext corresponding to x1.
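The multi-key FHE data flow just described can be sketched with this insecure placeholder (ciphertexts hold plaintexts in the clear and all names are illustrative): per-party key generation, evaluation that joins ciphertexts under all involved keys, and decryption that demands every party's secret key.

```python
from dataclasses import dataclass

# Insecure placeholder showing only the multi-key FHE interface from the talk:
# per-party KeyGen, Enc, joint Eval, and decryption that requires the secret
# keys of all involved parties.  No homomorphic cryptography is performed.
@dataclass(frozen=True)
class Ciphertext:
    value: int
    pks: frozenset          # public keys this ciphertext is "under"

def keygen(party_id: int):
    return f"pk{party_id}", f"sk{party_id}"   # stand-in key pair

def encrypt(pk: str, x: int) -> Ciphertext:
    return Ciphertext(x, frozenset([pk]))

def evaluate(f, cts) -> Ciphertext:
    """Joint evaluation: the result is under the union of all public keys,
    and (compactness) its size does not grow with the circuit for f."""
    joint = frozenset().union(*(c.pks for c in cts))
    return Ciphertext(f(*(c.value for c in cts)), joint)

def decrypt(sks, ct: Ciphertext) -> int:
    needed = {sk.replace("sk", "pk") for sk in sks}
    assert needed == ct.pks, "decryption needs the secret keys of all parties"
    return ct.value
```

For example, evaluating f(x1, x2) = x1 * x2 on Alice's encryption of 6 and Bob's encryption of 7 yields a single ciphertext under both public keys, which decrypts to 42 only when both secret keys are presented.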
The corresponding secret information would be the randomness Alice used to generate these keys, as well as the randomness and the input needed for the ciphertext. Here, additionally, we assume that the underlying multi-key FHE scheme is perfectly correct, and therefore we don't need to do a similar type of coin flipping as in the previous construction. What the MPC protocol then does inside is again checking these two different transcripts and computing the decryption, and then both of the parties receive the output. This protocol now has a communication complexity that is completely independent of f, because all the computation is happening outside of the protocol. The only thing that is in relation to f is the evaluated ciphertext, and due to the compactness property, this is independent of f. So, to summarize, we have shown how to realize round-optimal and communication-efficient MPC. We have presented two protocols: one protocol with a communication complexity that depends on the depth of the function, using functional encryption combiners, and another protocol using multi-key FHE, where we only have a dependency on the input and output lengths of the function. And along the way, we have introduced this notion of k-delayed input function MPC, which helped us to construct these round-optimal protocols with enhanced properties. Thank you for listening. Questions? Not all at once. Okay, I have one little question. Maybe you said something, but I missed it: what assumptions are you basing this result on? Right, so the first one is LWE for the combiners. And for the second construction, it's Ring LWE, the decisional small polynomial ratio assumption, and that's it. And OT, of course, which we just need for the multi-key FHE. Yeah, so it's not... your compiler doesn't...
No, no, no, the underlying MPC protocol can be instantiated using OT, and then we need whatever we need for the multi-key FHE scheme for the second result. Okay, so if there are no questions, let's thank the speaker again.