Hi, my name is Jaron Skovsted Gundersen. I'm from Aalborg University in Denmark, and I would like to present this joint paper with Ignacio Cascudo, called "A Secret-Sharing Based MPC Protocol for Boolean Circuits with Good Amortized Complexity". First of all, I would like to give a small introduction to what multi-party computation is. In multi-party computation we consider n parties. Each of the parties holds an input to some function, and the goal is that they should compute the output of the function. We assume that each pair of parties has a secure channel between them, so that they can communicate without revealing anything. Furthermore, we assume that there is an adversary who can corrupt a number of the parties, and the idea in multi-party computation is that the adversary should obtain no information about the other parties' inputs beyond what is implied by the output of the function and the inputs of the corrupted parties. So the only way the adversary can alter the computation of the function is by changing the corrupted parties' inputs. One popular way to do MPC is to base it on secret sharing. In secret-sharing-based MPC, we often assume that the function can be represented by some arithmetic circuit over a finite field F_q. If we just wanted to compute this function in the clear, we could evaluate this circuit on the inputs. But instead of evaluating the circuit on the inputs directly, we start by having each party secret-share its input. So the first party shares her input x_1 between all the parties, so that each party holds a share of her input. Similarly, party 2 shares her input between all the parties, and so on. Then we go through the circuit gate by gate, performing the gate computations on the secret-shared values instead of the values themselves. At the end, each party holds a share of the output, and if they reveal these shares, they obtain the output of the function.
The secret sharing schemes we use are often linear, meaning that linear gates can be handled using the linearity of the secret sharing scheme, so they are fairly easy. The multiplication gates are usually more difficult, and there we typically need a dedicated subprotocol. And when we say that this arithmetic circuit should be over some finite field, we should be more precise, because there are actually some limitations on which finite fields we can use: many secret-sharing-based MPC protocols need a large finite field. We can consider at least two settings; there are others, but I will just go through these two. In the honest majority setting, where we can achieve information-theoretic security, we have the BGW-style protocols which use Shamir's secret sharing scheme, but Shamir's scheme requires that the field size is larger than the number of parties. Then there is the dishonest majority setting, where we can strive for computational security, and which is also the setting we consider in our paper. There we have protocols such as the SPDZ protocol, which achieves active security via linear homomorphic message authentication codes, but for those to be secure we require that the field size is larger than 2^λ, where λ is the security parameter. Our protocol is actually a modification of the SPDZ protocol, where we circumvent this requirement of a large field. Now I will briefly go through how the SPDZ protocol works. In SPDZ we have this large finite field, and we would like to obtain active security against a dishonest majority, meaning that all parties except one might be corrupted. The key idea in SPDZ is these message authentication codes.
We have one global key α, where each party holds a share of the global key, meaning that the i-th party holds α_i. Then we have the data we are computing on, so x could be an input of some party, and then we have a MAC, a message authentication code. The idea is that this MAC should be the global key times the data, m = α·x. Again we use additive secret sharing, for the key, the data and the MAC: summing up the α_i gives α, summing up the x_i gives x, and summing up the m_i gives m. So each party holds a share of the data, of the MAC, and of the global key. Now suppose the adversary tries to open some wrong x' = x + e instead of x, so he has introduced some error e, and he could also introduce some error δ in the MAC. Then, in order to cheat, he essentially has to guess α, because he needs α·e = δ, and the probability of this is 1 over the field size, 1/q. So the probability that the adversary is not caught when opening an incorrect value is 1/q, which is small for large q. If q is small, this probability is of course too large, and that breaks the security of the protocol. So our goal is to come up with a version of SPDZ for arithmetic circuits over F_q, but now with a small q, and from now on we just assume that q is 2. One naive idea could be to take F_2 and embed it in a larger finite field F_{2^m}, and then just run SPDZ over this larger field for large enough m, so that we have a good enough security parameter. However, this is very wasteful.
We need m bits to represent one bit, and furthermore we also need some zero-knowledge validation that the inputs are actually in F_2, and that the adversary is not choosing inputs in the extension field. So a next idea could be: what if we bundle the data in batches of k bits, so we consider x in F_2^k, and then use MACs on these vectors instead? However, using coordinate-wise MACs, where we take a key vector α and multiply it coordinate-wise with x, does not help, because the adversary is still able to add an error in a single coordinate and then succeed with probability one half. So the idea of bundling, at least in the way we have described it, does not seem to work directly. However, Damgård and Zakarias came up with a solution to this in 2013, with a protocol known as MiniMAC. This MiniMAC protocol is very similar to what we do, so I'll just briefly go through their construction and then point out where our protocol differs and how we improve on it. The idea in MiniMAC is to encode the data into a codeword of a linear code, so that the MAC is an element of F_2^n, where n is larger than k. What they achieve by this is that the probability of fooling the MAC check is now 1 over 2^d, where d is the minimum distance of the code, because if the adversary wants to cheat, he has to open a codeword, so he has to change at least d coordinates in order to get another codeword. MiniMAC then computes on the data in batches of k bits, and this can be seen either as computing an arithmetic circuit over the ring F_2^k with component-wise addition and multiplication, or as computing k evaluations of the same Boolean circuit simultaneously. They also show in the paper that this can be adapted to a single evaluation of what they call a well-formed Boolean circuit, and I will return to this a bit later on.
The last approach, where we compute k evaluations of the same binary circuit, is illustrated here: each party just provides k different inputs, and we get the outputs of the same circuit on those different inputs. I have also included a few more details about the MiniMAC protocol, since we would like to compare against it later on. Multiplication is carried out using Beaver's technique, but when we do this we end up in the so-called Schur square of the code, and therefore we also need a high minimum distance of the Schur square, because as we saw before, the minimum distance should be larger than the security parameter. The communication overhead depends on the ratio between the length of the code and its dimension, and for binary codes with security parameter about 128, some of the best constructions give a ratio of about 10. Then Damgård, Lauritsen and Toft gave a modification of this MiniMAC protocol in 2014, where they showed that one can use Reed-Solomon codes over a small constant-degree extension of F_2, but doing so requires much more preprocessing, and I will briefly return to that later on. That was a small description of the MiniMAC protocol, and as I said, our protocol is very similar to it, so I will point out where we differ. We present an alternative approach to this setting of computing k instances of the same Boolean circuit simultaneously, but instead of using error-correcting codes, we use the notion of reverse multiplication-friendly embeddings. This was previously used by Cascudo, Cramer, Xing and Yuan in 2018, but in the setting of information-theoretically perfectly secure MPC: more precisely, they adapted a protocol by Beerliová-Trubíniová and Hirt, which will actually receive the TCC Test of Time Award after this session, to small fields, obtaining the same amortized communication.
I will now describe these reverse multiplication-friendly embeddings, how they work and what they do. They allow us to embed the ring F_2^k into the field F_{2^m}, but in a way that preserves enough of the algebraic structure. A reverse multiplication-friendly embedding consists of a pair of functions: a map φ from the vector space F_2^k to the field F_{2^m}, and a map ψ which goes the other way. The idea is that if we map vectors x and y using φ, multiply the results in the field, and then apply ψ, we obtain the coordinate-wise product of x and y: ψ(φ(x)·φ(y)) = x * y. The point of this is that we can make m not much larger than k, so the idea is to use this reverse multiplication-friendly embedding instead of the code. If m is not much larger than k, then the overhead becomes smaller than with binary codes, where the length of the code has to be large compared to the dimension. That is, more or less, the idea of our paper. When I say that m is not much larger than k, you could ask: how much larger? Asymptotically we can show that m is a constant times k, but if we go to non-asymptotic, more concrete parameters, we have the two results presented here. The first shows that we can get m around 3.3 times k, and the other result down here shows that we can get m about 4 times k; the reason why I include the second is that it gives somewhat nicer extension fields, for example F_{2^128}. So this shows what kind of m we can get compared to k. Our protocol, as we said, is much like MiniMAC, but in some sense it is also a combination of SPDZ and MiniMAC; even though MiniMAC is itself a modification of SPDZ, ours lies somewhere in between, because the idea is that we keep the global key in the field F_{2^m} while we are still computing on elements of F_2^k, which they also do in MiniMAC. So we have a kind of combined sharing: the data shares are in F_2^k while the MAC shares are in F_{2^m}, and the relation between them is that summing up the data shares gives x, while summing up the MAC shares gives α·φ(x); the latter lives in the field while the former lives in the vector space. Otherwise the protocol uses the same ideas. We do computations on these authenticated shared values, and we use angle brackets ⟨x⟩ to denote a value shared together with its MAC. Sums can again be computed locally, due to the F_2-linearity of the φ function. What about products? We again rely on Beaver's technique, so we need a preprocessed triple ⟨a⟩, ⟨b⟩, ⟨c⟩ with c = a * b, but we also need a re-encoding pair, because we need a way to convert back from the extension field to the vector space. Here we use a re-encoding pair (⟨ψ(R)⟩, [R]): ψ(R) is a vector in F_2^k, so for it we use the angle-bracket notation, while we use square brackets for R because it is an element of the extension field; the square brackets mean SPDZ-style authenticated sharing, with the same α as in the angle-bracket notation. We then use Beaver's technique, hiding x and y with a and b, and one can check (we show it in the paper, at least) that the computation at the end gives the coordinate-wise product x * y; the R is used to hide all the information in the intermediate opening. And when we say "open", we only mean partially open, which is also how it is done in SPDZ and MiniMAC: the MACs are not opened during the protocol, only the values, and at the end a MAC check is carried out to show that there have not been any inconsistencies. In the paper we compare, of course, to MiniMAC, but we also compare to a protocol known as Committed MPC, by Frederiksen, Pinkas and Yanai from 2018. Committed MPC uses UC-secure homomorphic commitments, but these commitments are also
implemented from linear codes. So the comparison between the protocols essentially boils down to what we could call the encoding expansion factor: in our case it is the ratio between m and k, while in MiniMAC and Committed MPC it is the ratio between the length of the code and k, and the reverse multiplication-friendly embedding has, as mentioned before, a smaller expansion factor in this sense. Earlier I also mentioned the version of MiniMAC by Damgård, Lauritsen and Toft which can use Reed-Solomon codes. They actually have slightly better communication complexity in the multiplication phase, since they can use Reed-Solomon codes, but as we also point out in the paper, they require much more preprocessing, and we have designed a dedicated preprocessing protocol for this paper. So I will just briefly explain what we need to preprocess for the online phase. First of all, we preprocess input pairs, which just means that a party chooses some randomness and this randomness is authenticated. The reason why we do this is that we then don't have to authenticate during the online phase: it is easy for the party to simply broadcast the difference between its input and the random value it chose, and then all parties are able to adjust the authenticated random value into an authenticated sharing of the input instead. Furthermore, we need the triples and re-encoding pairs to carry out the multiplication gates. Maybe I should also mention that they need something similar in the MiniMAC protocol, because they have to convert from the Schur square back to the original code, so that corresponds to this. That is more or less what we need for the preprocessing, and we use techniques from MASCOT, a paper by Keller, Orsini and Scholl from 2016. MASCOT is based on bit-OTs, which fits very well with the F_2-linearity of φ and ψ. I won't go into the details of this preprocessing, but we describe in the paper how to preprocess all of this. I mentioned before that MiniMAC also describes
how to convert the idea of doing k evaluations of the same Boolean circuit into doing just a single evaluation of the Boolean circuit. So far our paper has only talked about doing k instances of the same Boolean circuit, but we can also, as they do in MiniMAC, convert this, if the circuit is what they call well-formed. Well-formed means that it satisfies some requirements: for example, that we can order the gates in layers where the gates in each layer are of the same type, that the number of gates in most layers is large, or at least a multiple of k, and that the number of wires from layer i to layer j is large. If the circuit satisfies requirements like these, then we can group the gates in a layer into batches of k, possibly adding a small number of dummy gates, and construct maps between the layers that reorganize the outputs of one layer into the inputs of the next. It is probably easier to see this in a picture. Here we have organized the circuit into layers, and this layer consists of addition gates: we have batches of x-values and y-values, so we need to add those, so we add the x vector to the y vector, and then we reorganize using some reorganizing functions. We can do this because of the F_2-linearity again, so it is actually fairly easy to preprocess what we need here and to carry out the reorganizing, once we know how to reorganize. I won't go into details, but what we need to preprocess is some random values together with the reorganizing functions applied to those random values, because then, as we did for the inputs, we can just open the differences and then adjust, by applying these F_2-linear functions to the opened differences and adding the result to the preprocessed authenticated values. Again, the preprocessing is easy because of the F_2-linearity. To sum up, what we did in our paper was to present a secret-sharing-based MPC protocol for the computation of
Boolean circuits. Our approach applies these reverse multiplication-friendly embeddings to the SPDZ protocol, more or less: we take the idea of using reverse multiplication-friendly embeddings from Cascudo, Cramer, Xing and Yuan in 2018 and apply it in the dishonest majority setting instead. The structure of our protocol is very similar to the MiniMAC protocol, but the MACs are in an extension field instead of a vector space, and this allows for shorter encodings. In the paper we also present how to produce the preprocessed data that we need for the online phase. I think that's it, and thank you very much for listening.