Hello, thank you for watching this talk on improved single-round secure multiplication using regenerating codes. I am Daniel Escudero, and this is joint work with Mark Abspoel, Ivan Damgård, Ronald Cramer, and Chaoping Xing. Let me begin by describing the setting we are studying. Our context is secure multiplication with secret-sharing-based MPC. Imagine you have n parties P1, P2, up to Pn, and they hold two secret-shared values. We denote the secret sharings by brackets, [x] and [y], and the shares are x1 up to xn and y1 up to yn respectively. I am assuming background on secret sharing, but "secret shared" basically means that any subset of at most t parties cannot learn anything about the secret, while any subset of t+1 parties can completely determine it. Given these two shared values, the goal of secure multiplication is to obtain shares of the product x·y, which I will denote z1 up to zn. The idea is that the parties engage in some kind of protocol, some kind of interaction, to obtain z1, z2, up to zn so that these form a sharing of the product of the two secrets. In this work we are in the honest-majority setting with the maximal adversary: we assume that n, the number of parties, equals 2t+1, where t is the number of corruptions, which is also the threshold of the scheme. In the paper we assume active security with abort, but for the purpose of these slides I will be talking mostly about passive security. So this is the main problem we are studying, and there are several different protocols to achieve it.
One of the oldest ones is BGW. In this protocol we again have two sharings: the inputs are x1 up to xn, the shares of x, and y1 up to yn, the shares of y. This uses Shamir secret sharing, which is based on polynomials of degree at most t. The first observation is that with this secret sharing scheme, if the parties locally multiply their shares, so P1 computes x1·y1, P2 similarly computes x2·y2, up to Pn, who internally computes xn·yn, then one can easily show that these products form a sharing of the product, but now of degree 2t. That is the first observation in BGW. Because we do not want sharings of degree 2t but of degree t, we execute a second step in which each party Pi takes its own product xi·yi computed in the previous step and secret-shares it towards all the other parties. Now all parties hold degree-t shares of each of these individual products. Because the xi·yi form a sharing of degree 2t, we know that there exist certain coefficients, which I denote here by lambda, such that taking this linear combination of the products yields x·y; this just follows from the fact that the xi·yi are shares of the product. Since such coefficients exist, and we now have sharings of these individual values, we can take the same linear combination down here, but now over the sharings, and obtain a degree-t sharing of the product. This is what is done in BGW. It is a very old, traditional protocol, and the downside is that its communication complexity is n². Every single party, in this resharing step, needs to distribute shares of its product, and there are n parties to distribute the shares to, so this step will take a
communication of roughly n² in total. The good thing is that it takes one round, though, because the only interaction comes in this resharing step, and it is just one party sending a message to another party. So this is one of the first protocols. But we also have a more modern, more efficient approach, which is the Damgård-Nielsen protocol (DN07) from CRYPTO 2007. This protocol starts with the same observation: if the parties have shares of x and shares of y, they can locally obtain shares of the product with a higher degree, namely 2t. But instead of proceeding as before, the parties assume some preprocessed data, which consists of a pair of sharings of the same random value r, one of degree t and one of degree 2t. Given this preprocessing, the parties first locally compute a difference: they take x·y shared with degree 2t and subtract r, also with degree 2t, obtaining a value that we call e, again of degree 2t. Then each party Pi sends its share of e to P1, and P1 reconstructs e. It is fine for P1 to learn this value, because even though P1 is not supposed to learn what the product is, the product is masked with r.
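Before continuing with DN07, the BGW resharing approach described above can be sketched in a few lines. This is a minimal passive-security sketch with illustrative parameters of my own choosing (n = 5, t = 2, p = 101); the helper names are not from the paper.

```python
import random

P = 101          # a small prime field, for illustration only
N, T = 5, 2      # n = 2t + 1 parties, threshold t

def share(secret, deg):
    """Shamir-share `secret` with a random polynomial of degree `deg`;
    party i (1-indexed) receives the evaluation at point i."""
    coeffs = [secret] + [random.randrange(P) for _ in range(deg)]
    return [sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P
            for i in range(1, N + 1)]

def lagrange_at_zero(points):
    """Public Lagrange coefficients for interpolating at 0."""
    coeffs = []
    for i in points:
        num, den = 1, 1
        for j in points:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        coeffs.append(num * pow(den, P - 2, P) % P)
    return coeffs

def reconstruct(shares):
    lam = lagrange_at_zero(list(range(1, N + 1)))
    return sum(l * s for l, s in zip(lam, shares)) % P

x, y = 17, 23
xs, ys = share(x, T), share(y, T)

# Step 1: local products -- a degree-2t sharing of x*y.
prods = [(xi * yi) % P for xi, yi in zip(xs, ys)]

# Step 2: each party reshares its product with degree t
# (this is the step that costs n^2 communication).
reshared = [share(p, T) for p in prods]   # reshared[i][j]: Pi's subshare for Pj

# Step 3: every Pj applies the public lambda coefficients to the
# subshares it received, yielding a degree-t sharing of x*y.
lam = lagrange_at_zero(list(range(1, N + 1)))
zs = [sum(lam[i] * reshared[i][j] for i in range(N)) % P for j in range(N)]

assert reconstruct(zs) == (x * y) % P
```

Note that only Step 2 involves communication, which is why the protocol takes a single round despite its quadratic cost.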
So this leaks nothing sensitive to P1. That is the first interaction step. The second interactive step is that after P1 reconstructs e, P1 sends this value to all parties. Once all parties have learned e, they can locally add e to the other part of the preprocessing to obtain shares of x·y. So what is actually happening here is that the parties are reconstructing the value e, but to do so they first send their shares to one single party, and this single party sends the result back to everyone. This way the communication complexity is not n², because not everyone is talking to everyone: everyone talks to one single party, who replies back, so the communication complexity grows proportionally with n. But the number of rounds is now 2, because there is one round to send all the shares to P1, and another round for P1 to send the reconstructed value back. So it is a much more efficient protocol, but it has two rounds. The panorama, then, looks something like this in terms of communication complexity. You can have very small communication complexity, in fact linear in n, as we just saw with the DN07 protocol, but then it requires more than one round, to be concrete. The other approach is that maybe you want one round, but then you have to use the BGW protocol, which has a communication complexity of n². This regime here is essentially unexplored.
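The two-round DN07 flow just described can be sketched as follows, again a passive-security toy with parameters of my own choosing (n = 5, t = 2, p = 101), not the paper's actual instantiation.

```python
import random

P = 101
N, T = 5, 2                      # n = 2t + 1

def share(secret, deg):
    coeffs = [secret] + [random.randrange(P) for _ in range(deg)]
    return [sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P
            for i in range(1, N + 1)]

def reconstruct(shares):
    # Interpolate at 0 from all n points (enough for degree <= n-1 = 2t).
    total = 0
    for i in range(1, N + 1):
        num, den = 1, 1
        for j in range(1, N + 1):
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        total += shares[i - 1] * num * pow(den, P - 2, P)
    return total % P

# Preprocessing: the same random r shared with degree t AND degree 2t.
r = random.randrange(P)
r_t, r_2t = share(r, T), share(r, 2 * T)

x, y = 17, 23
xs, ys = share(x, T), share(y, T)

# Local step: degree-2t shares of x*y, masked by r.
e_shares = [(xi * yi - ri) % P for xi, yi, ri in zip(xs, ys, r_2t)]

# Round 1: everyone sends its share of e to P1, who reconstructs e = x*y - r.
e = reconstruct(e_shares)

# Round 2: P1 sends e back to everyone; adding the public constant e to
# the degree-t sharing of r yields a degree-t sharing of x*y.
zs = [(ri + e) % P for ri in r_t]

assert reconstruct(zs) == (x * y) % P
```

The value e is safe to reveal because r is uniformly random, which is exactly the masking argument made above.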
It is what happens when we want to stick to one single round, but at the same time achieve a communication complexity better than that of BGW, so better than n². Can there exist one-round secure multiplication protocols with sub-quadratic communication complexity? We may even allow some preprocessing, just like DN07 uses preprocessing. This is the question we are looking at in this work: can we find protocols like this? The first observation towards answering it is that such protocols cannot exist if, in the process of computing the multiplication, you need to open or reconstruct certain shared values. This is easy to see: when everybody is supposed to learn a shared value in one single round, there is no option other than letting each party hear from at least t+1 parties, because if a single party could learn a secret by hearing from fewer parties than that, then an adversary corrupting t parties would have known the secret to begin with. So if openings are part of your protocol, and you have to stick to one round, then you have to go with n². But in general we do not know whether you really need openings. If you look at BGW, it is a protocol that does not use openings; it uses what is called resharing, and it still takes n². So maybe every protocol will take n², even if it does not use openings. This is a very interesting question, and I would like to motivate it further. It is interesting on its own, because most protocols seem to exhibit this compromise, but it is also a very useful question.
Minimizing the number of rounds is very well motivated, especially in scenarios with high latency. In those scenarios you will spend a lot of time if you have to communicate back and forth, so you want to minimize the number of rounds; it is okay if you communicate a bit more, as long as the round count goes down. That is one motivation. Besides that, there are some protocols in the literature, most notably Fluid MPC, that aim at tolerating dynamic participants, so parties can come and go. In those protocols you want to do things as fast as possible in the sense of not having many communication rounds, so that every multiplication, or every layer in the circuit, hopefully takes one single round. If you achieve this, it means you can transfer the state from one set of parties to the next set of parties, which is the setting they consider in their work. In other words, if you have one-round multiplication protocols, you can essentially plug them into works like this one that require one-round multiplication. So what are the results we achieve in this work? We make use of a very interesting mathematical tool called regenerating codes in order to design a one-round secure multiplication protocol. Unfortunately, we do not know how to do that for a single multiplication: we need to design such a multiplication protocol for many multiplications at once. I would be delighted to show a result that gives a protocol for one single multiplication that takes one round and less than n² communication, but we do not know how to do that; we do not even know if it is possible. For many multiplications in parallel, however, we can get sub-quadratic communication complexity, amortized per product. So of course, computing all these single products altogether,
they may take more than n², but when you amortize, dividing by the number of products being computed, you get less than quadratic. This is the result we get in this work. To be more concrete, we present an MPC protocol with the following characteristics. First, it is actively secure, and this is one of the big challenges, because passive security is much easier than active, so moving to active requires some care. It considers the maximal adversary possible in the honest-majority setting, which is essentially n = 2t+1; if you go below that, you can start considering packed secret sharing techniques that can somehow achieve this kind of result already, so the interesting case is the maximal-adversary case. Our protocol evaluates a set of gates, or more generally, we generalize to circuits: we evaluate a d-layer circuit, a circuit with d multiplicative layers. We evaluate not one instance of it, but actually log n copies of the same circuit, and we do it in essentially d rounds. D rounds, plus a little more that comes from the preprocessing, basically, and a check that must be done at the end, but essentially it is one single round per multiplication layer, and each multiplication gate of each instance takes sub-quadratic communication. What I would like to stress is that this is the first application of regenerating codes in the context of MPC, which is something the community has been looking at for a while: how to get this type of regenerating codes to help applications in MPC. It was not possible to design one of those until recently. So let me give you an idea of why regenerating codes are useful.
Let us begin with this diagram. Imagine we have a secret s, and this secret is secret-shared into shares s1, s2, and so on, and then the dealer, whoever is dealing these values, distributes them to P1, P2, up to Pn. This is the sharing phase; imagine this is one of the first stages. Then later on, eventually, you want to reconstruct this data. Imagine, I don't know, you are storing a file among multiple nodes, and later on you want to retrieve this file. So what you do later on is that all these parties send their shares to whoever needs them, and all the shares together determine the secret s. But actually only t+1 of them suffice, because it is a threshold secret sharing scheme: at most t shares tell you nothing about the secret, but t+1 completely determine it. So many of these parties do not need to send a message; only t+1 need to. Now, what is the goal with regenerating codes, what is the deal with them? It is the same setting as before, except that this time, instead of P1 sending the whole share s1 for reconstruction, it first applies a function to s1 before sending it. The idea is that maybe now we need all n shares, instead of taking a subset of them, but each μi, as denoted here, will be a compression function. Down here you can see that each μi takes elements of a big field and returns elements of a small field. So these μi's that I am showing up here are all compression functions that take something bigger and turn it into something smaller.
Overall, this is better than before, because sure, everyone is talking now, but the amount of data they are sending is way smaller. So this is regenerating codes in the context of secret sharing, and they are very useful because we can use them, for example, to reduce the amount of communication in distributed storage applications and similar settings where secret sharing is used in a somewhat static manner. But when you want to have some computation over secret-shared values, just like the task we have at hand, which is secure multiplication, it is not clear how to use regenerating codes for anything. I would like to highlight why this is the case, what the challenge is, because they look very useful. So let us begin by brainstorming a bit and mentioning how they could be useful in principle. They can be useful for MPC, of course, by helping you reconstruct values. For example, in the context of the DN07 protocol, we saw that one of the main steps that needs to be carried out, which I will mention here again, is that each party Pi sends its share ei to P1. So maybe one of the thoughts could be: why not use regenerating codes here, so that this share goes in a reduced, compressed form to P1, and P1 needs to receive less data? This sounds promising, but one of the problems with this approach is that using regenerating codes requires us to go to a larger field: an extension field with extension degree roughly log n, where n is the number of parties. And in concrete contexts we want to compute over a finite field, sure, but one of constant size.
We do not want a structure that grows with the number of parties, because typically your application is fixed and then you want more parties on top; it is not like you decide your application based on the number of parties. So this is one of the main drawbacks of regenerating codes: they simply do not work over a constant-size field. They really require you to go to an extension field that grows with the number of parties. But they could still be useful, because a solution to this is the following: maybe you want your computation to be over Fp, which is a constant-size field, and you can embed this structure into F_{p^m}, which is the one over which you can use regenerating codes, and then you use the regenerating codes to avoid the overhead in m. So basically you do MPC over the big field, but your application lives over the small field, and you use the regenerating codes so that every time you are supposed to reconstruct something here, instead of sending a big element, you send a smaller element of the smaller field. It is true that regenerating codes can help in that direction. But then the issue is that we already know how to solve this without going to regenerating codes, and I want to show very briefly, really simply, how you can do it. So this issue can already be avoided without the need for regenerating codes. What was the issue again?
I want to stress it: if you do MPC over F_{p^m}, how do you avoid the overhead in terms of m when you want to compute over a fixed-size field? This can already be avoided, and to give you an idea: we can use a different type of reconstruction functions, compressing functions that still allow a party to retrieve the secret, which is assumed to lie in the subfield, without the need for regenerating codes. These compression functions can be given, for example, as follows. The compression function ρ, or ρi, can be given by the projection onto the smaller field of the appropriate Lagrange coefficient multiplied by the input. Basically, what this is doing is that every party locally multiplies the Lagrange coefficient corresponding to its own share, but then, instead of sending this whole big-field element, it sends just, let us say, the first coordinate, the one corresponding to the small field, the domain you are computing over. You can check that if you sum all these values, this is the same as projecting the sum of the λi·xi, and this sum, because of Shamir secret sharing, is just the secret. And this is the main trick. The trick is that your secret, sure, it belongs to the big field, but more than that, it belongs to the small field, because we chose it to be like that: the secrets, even though the computation could in principle happen over the big field, actually live in the small field. So it means that projecting s onto the first coordinate gives you the same s, and this is why the trick works. This element here could in principle be very large, but what is sent is actually very small.
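This projection trick can be sketched concretely. The following toy, with parameters of my own choosing, works over GF(25) = F_5[w]/(w² − 2) with three parties and a degree-1 sharing of a secret that lives in F_5; each party transmits a single F_5 element rather than a full GF(25) element.

```python
import random

P = 5                      # small field F_5; big field is F_25 = F_5[w]/(w^2 - 2)
N = 3                      # parties, evaluation points 1, 2, 3

def add(a, b): return ((a[0] + b[0]) % P, (a[1] + b[1]) % P)

def mul(a, b):             # (a0 + a1*w)(b0 + b1*w), using w^2 = 2
    return ((a[0] * b[0] + 2 * a[1] * b[1]) % P,
            (a[0] * b[1] + a[1] * b[0]) % P)

def lagrange_at_zero(points):
    coeffs = []
    for i in points:
        num, den = 1, 1
        for j in points:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        coeffs.append(num * pow(den, P - 2, P) % P)
    return coeffs

s = 3                                              # the secret lives in F_5
f1 = (random.randrange(P), random.randrange(P))    # random coefficient in F_25
# Share of party i: f(i) = (s, 0) + i * f1, a full BIG-field element.
shares = [add((s, 0), mul((i, 0), f1)) for i in range(1, N + 1)]

lam = lagrange_at_zero([1, 2, 3])
# Each party multiplies by its Lagrange coefficient but transmits ONLY the
# first coordinate (the projection onto F_5): one small-field element.
msgs = [mul((lam[i], 0), shares[i])[0] for i in range(N)]

# Projection is linear, so summing the projections equals projecting the
# sum, which is f(0) = (s, 0); we recover s from n small-field elements.
assert sum(msgs) % P == s
```

The point of the sketch is exactly the limitation discussed next: this only recovers the projection, which happens to equal the secret because the secret sits in the subfield.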
It is just one coordinate. So we could already have solved this problem, as I mentioned, without the help of regenerating codes, which is a shame, because we wanted to apply them. But the main observation is that we can actually use them, because unlike the previous compressing functions that I mentioned, regenerating codes enable the reconstruction of the full element s, instead of reconstructing just a projection. Here you can see that we reconstructed the projection, and the projection happens to coincide with s because s belongs to the small field. But what if we take s to belong to the big field? Now, suddenly, these regenerating codes enable the reconstruction not of a projection of s, which is an element of the small field and may lose a lot of information, but of the whole s, which is a much bigger element; it is bigger by a factor of roughly m. This is the observation that makes regenerating codes useful: they enable us to reconstruct big elements with little communication, not only small elements. So we basically use regenerating codes to optimize secure multiplication over this extension ring. Now we get fast multiplication over the big ring, but again, this is a problem, because the big ring is not very interesting: we are interested in computation over a constant-size ring. Well, it turns out that because F_{q^m}, as a vector space, is essentially the same as m copies of F_q, with the help of a very interesting tool called reverse multiplication-friendly embeddings we can actually turn this multiplication protocol over the extension field into a protocol that computes essentially m copies of a multiplication over the small field. This is very, very interesting, because at the end of the day it shows that, sure, you obtain multiplication over the big field with the help of regenerating codes, and that is not very useful on its own, unless, by
using these RMFEs, we can turn it into several multiplications over the small field. And this is indeed what we do. So this is the main result in terms of MPC: we obtain a lot of multiplications, each of them having less than n² communication and taking exactly one round, with the help of regenerating codes. I want to emphasize that if you did not care about reconstructing the whole value, you could follow the earlier approach from back here, which is very interesting and does not have the overhead in terms of m, but its communication complexity would still be n² in terms of the size of the thing you are reconstructing, so it is not useful for our problem. So what are the challenges we face in this work? The main one is that we want to keep the number of rounds bounded by one, so exactly equal to one, and there are several issues with this. One is that the use of RMFEs, as traditionally done, introduces an additional round to do something called re-encoding, and this is very bad for us because we really do not want this overhead; we want to stick to one single round. To address this, we introduce a novel encoding method that removes the need for the extra round, and this is of independent interest because it also applies to previous works that have used RMFEs. The second challenge is that, in terms of active security, you need to do some broadcasts in every single round. Well, if you want to avoid doing broadcasts in every single round, you can always just check at the end of the protocol that all the broadcast values were correct. But sometimes, and we show that this is the case if things are done naively, this introduces some attack vectors. To avoid this, we make a novel use of function-dependent preprocessing, and we show that this alleviates the issue. This is a novel use of this type of preprocessing.
It has been used in the literature before for optimizing the communication count, essentially halving the number of elements sent, but the idea here is that we can use it not only for efficiency purposes but also for security purposes. Finally, we also have some notable contributions to the theory of regenerating codes. In particular, we provide a new characterization of when a code is regenerating, basically in terms of certain properties of its dual. That is the first interesting result. But then, also very interestingly, we generalize all this theory, which already exists for the case of finite fields, to so-called Galois rings, which are a generalization of the integers modulo p^k. In particular, as I mentioned, because Galois rings generalize these rings, this already includes the case of computation modulo, for example, 2^64 or 2^128, which is more compatible with modern hardware and therefore very interesting in practice. So these are our contributions in terms of the theory of regenerating codes. In what follows, I want to give you an idea of how we get our results. First I am going to go over the definition of regenerating codes over Galois rings. Throughout the rest of the talk we consider two Galois rings: S is going to be a Galois ring over the integers modulo p^k, an extension ring of Z/p^k Z of degree ℓ, and R is going to be a Galois ring with the same base ring, but with extension degree m·ℓ, which means that R can be seen as an extension of S of degree equal to the extra factor here, which is m. A Galois ring, if you are unfamiliar with it, is just polynomials over this base ring, the integers modulo p^k, taken modulo some polynomial of degree ℓ that is irreducible.
It is literally the same construction: if k = 1, this boils down to F_{p^ℓ}, the Galois field with p^ℓ elements, so it is a generalization of that to k greater than one. In general it is not a field, but it is still a local ring with very nice properties. Okay, and we start by taking C to be an R-submodule of R^{n+1}; this is literally another way of describing a secret sharing scheme. Elements of C are just vectors with n+1 entries: the first one acts as the secret and the other ones act as the shares. This is just an alternative way of describing a secret sharing scheme, and it is very well known that there is a duality, a relation, between codes and linear secret sharing schemes. So let us define the regenerating property I have been mentioning all this time. C, which can be seen as a secret sharing scheme or as a code, has linear repair over S if there exist S-linear maps φi that map from the big ring to the small ring, together with some scalars zi in the big ring, such that whenever you have a vector in the code, you can reconstruct its first coordinate by taking a linear combination, with the scalars zi, of the compressed remaining coordinates. So the vector is (x0, x1, ..., xn), and we can essentially reconstruct x0 by taking a linear combination of the other coordinates, but not of the full coordinates, rather of compressed versions of those coordinates. Note that the zi belong to R, but the compressed values φi(xi) all belong to S, so they are smaller than elements of R. This is a regenerating code with linear repair in the first coordinate; it is also called a repairable code.
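In symbols, the linear repair property just described can be written as follows (the notation is mine, reconstructed from the description above):

```latex
x_0 \;=\; \sum_{i=1}^{n} z_i \cdot \varphi_i(x_i)
\qquad \text{for all } (x_0, x_1, \ldots, x_n) \in C,
```

where each $\varphi_i : R \to S$ is $S$-linear and each $z_i \in R$; the compressed values $\varphi_i(x_i)$ live in the small ring $S$, while the scalars $z_i$ and the recovered coordinate $x_0$ live in the big ring $R$.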
I use the two terms interchangeably during this talk. One of our results shows that there exists a repairable, or regenerating, code of R over S, based essentially on Shamir secret sharing; it literally is Shamir secret sharing, assuming that a certain inequality holds in terms of ℓ and m. Basically, what this is saying is that ℓ is going to be constant, because it corresponds to the base ring over which we want the computation, and after we apply this inequality it turns out that m is going to be something like log n. So think of m as having size roughly log n; remember, m is the extension degree of R over S, so R is essentially m copies of S as modules. Again, because it is a code, we can naturally use it as a secret sharing scheme, and the regenerating, or repairing, property will be used to simplify, and make more efficient, the reconstruction of a given secret. This is exactly what I just mentioned: the repairing property enables very efficient one-round reconstruction. If you have a shared secret, each party Pi sends the compressed version of its own share to all the parties; remember, this lives in the small ring, not the big ring. Then the parties compute the linear combination from before, and because of the properties of our repairing code, this linear combination gives you the secret back. I insist: the zi and the secret belong to R, but the compressed shares belong to S. So this is one of the main results we obtain: you can get this type of regenerating property from Shamir secret sharing over Galois rings. It was already known that you can get it over fields, but now we can also get it over Galois rings. How do we use them?
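As an aside, the already-known field case mentioned above can be made concrete with a small self-contained example in the style of the Guruswami-Wootters Reed-Solomon repair scheme (this specific construction is my own illustration, not the paper's code): a codeword of a degree-<2 Reed-Solomon code over F_9 is repaired at one position while every helper transmits a single F_3 element, half of a full F_9 share.

```python
import random

# F_9 = F_3[t]/(t^2 + 1); elements are pairs (a, b) meaning a + b*t.
P = 3
ELEMS = [(a, b) for a in range(P) for b in range(P)]

def add(x, y): return ((x[0] + y[0]) % P, (x[1] + y[1]) % P)
def sub(x, y): return ((x[0] - y[0]) % P, (x[1] - y[1]) % P)
def mul(x, y):  # t^2 = -1 = 2
    return ((x[0] * y[0] + 2 * x[1] * y[1]) % P,
            (x[0] * y[1] + x[1] * y[0]) % P)
def inv(x):
    return next(y for y in ELEMS if mul(x, y) == (1, 0))
def trace(x):                       # Tr(x) = x + x^3, always lands in F_3
    x3 = mul(mul(x, x), x)
    return add(x, x3)[0]

# Codeword: evaluations of a random degree-<2 polynomial f at all of F_9.
f0, f1 = random.choice(ELEMS), random.choice(ELEMS)
def f(x): return add(f0, mul(f1, x))

star = (0, 1)                       # the position we want to repair
helpers = [a for a in ELEMS if a != star]
basis = [(1, 0), (0, 1)]            # F_3-basis {1, t} of F_9

# Each helper sends ONE F_3 element: Tr(f(a) / (a - star)).
msgs = {a: trace(mul(f(a), inv(sub(a, star)))) for a in helpers}

# The repairing party combines them with public F_3 constants
# c = Tr(lambda * (a - star)) to obtain Tr(lambda * f(star)) for each lambda.
targets = []
for lam in basis:
    acc = 0
    for a in helpers:
        c = trace(mul(lam, sub(a, star)))
        acc = (acc - c * msgs[a]) % P
    targets.append(acc)

# Finally, f(star) is the unique F_9 element with those two trace values.
cands = [x for x in ELEMS
         if all(trace(mul(lam, x)) == targets[j]
                for j, lam in enumerate(basis))]
assert cands == [f(star)]
```

Everyone talks, but each message is a single subfield symbol, which is exactly the compression behaviour the μi functions provide in the talk's setting.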
I want to guide you through our protocol, through how you can use regenerating codes to obtain multiplication in one single round. Let us begin by reviewing Beaver-based, or triple-based, multiplication. This is the standard protocol we typically use when we want to multiply two values. Given two shared values, we want to compute the product, and it happens in three stages. The first one is just preprocessing, so I call it phase zero, where the parties obtain a triple a, b, c, where c is supposed to be the product of a and b, and a and b are random. In the first actual interaction step, the parties reconstruct the difference between x, the actual input they want to compute over, and a, the value from the preprocessing; this reconstructed value leaks nothing, because a is random. This reconstruction happens in one round, because we use the reconstruction where everybody just announces their share, actually their compressed share, to everybody. Similarly, the parties open e, which is the difference between y and b. Then, in the final step, the parties locally compute the linear combination here on the right, which, as you can check very easily, leads to shares of x·y. So this is a very standard protocol, and here we are going to use regenerating codes. The communication costs are the following: if you take into account how many elements are being sent, the parties send O(n²) elements of S, which are the compressed shares. Normally it would be n² elements of R, but because they compress before sending, it is n² elements of S. And when you count how many elements of R they are sending around, remember, R is the same as m copies of S.
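The triple-based multiplication just described can be sketched as follows. This is a toy over a prime field with parameters of my own choosing (n = 5, t = 2, p = 101), and it omits the share compression, which is the paper's actual contribution; it only illustrates the three phases.

```python
import random

P = 101
N, T = 5, 2

def share(secret, deg):
    coeffs = [secret] + [random.randrange(P) for _ in range(deg)]
    return [sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P
            for i in range(1, N + 1)]

def reconstruct(shares):
    total = 0
    for i in range(1, N + 1):
        num, den = 1, 1
        for j in range(1, N + 1):
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        total += shares[i - 1] * num * pow(den, P - 2, P)
    return total % P

# Phase 0 (preprocessing): a random multiplication triple c = a*b.
a, b = random.randrange(P), random.randrange(P)
As, Bs, Cs = share(a, T), share(b, T), share((a * b) % P, T)

x, y = 17, 23
Xs, Ys = share(x, T), share(y, T)

# Phase 1 (the single round): open d = x - a and e = y - b.
# Both values look uniformly random because a and b are random.
d = reconstruct([(xi - ai) % P for xi, ai in zip(Xs, As)])
e = reconstruct([(yi - bi) % P for yi, bi in zip(Ys, Bs)])

# Phase 2 (local): x*y = (a+d)(b+e) = c + e*a + d*b + d*e.
Zs = [(ci + e * ai + d * bi + d * e) % P
      for ci, ai, bi in zip(Cs, As, Bs)]

assert reconstruct(Zs) == (x * y) % P
```

In the paper's protocol, the two openings in phase 1 would be done with the compressed, regenerating-code reconstruction, which is where the per-element savings come from.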
So this is the same as dividing by m: basically, in terms of how many elements of R are being sent, we get something like n² divided by log n, which is o(n²). We break the barrier of quadratic communication complexity by a log n factor, which is not very big, but it really breaks the barrier, and that is a very interesting result. In terms of the count of elements over R this is fine, and a very interesting result already, but the main drawback, as I have already mentioned several times, is that we are not interested in computation over R; we are interested in computation over S. So let us continue. This is exactly what I mentioned: we want MPC over S, not over R. The intuition we want to build is that R will be equivalent to many copies of S. As I mentioned before, S can be embedded into R, the same as, in the case of fields I was referring to, F_p can be embedded into F_{p^m}. But all the efficiency benefits are lost if we just use this embedding, because at the end of the day we would be communicating n² elements of S, and if S is your domain of multiplication, then you would essentially be sending n² elements of your domain; we want to send less than that. So what we do is identify that this is not actually a new problem, and to illustrate why, let us take the following example. Shamir secret sharing, as we know it, does not work over F_2, for example, because F_2 is very small: it does not have enough elements to allow for interpolation. So what people typically do is take a field extension of degree, coincidentally, also about log n, and use this extension instead. And again,
Again, we embed F_2 into F_{2^m}. Now, at first glance we notice that this is very wasteful once again, because we have this overhead of m. But there is this very interesting work, by Cascudo et al. at CRYPTO 2018 — yes, it should say 18 here, my apologies — that shows the following. Intuitively, F_{2^m} is the same as m copies of F_2. So if we are computing over F_{2^m}, every element here is like a vector with m entries, and before we were using only one entry and the others were completely wasted. Because we have so many entries in these long vectors, we can hope to actually make use of all the individual entries to run one computation in each of them. So that is the intuition. The problem with this intuition is that even though these two algebraic structures are isomorphic, they are isomorphic only as vector spaces. What we would like to do over this space here, which consists of vectors, is to multiply pointwise, right? We want to execute a computation over each coordinate.
So addition is fine — that part is easy, because vector-space addition is pointwise — but vector spaces do not even have a notion of multiplication, so this isomorphism does not help towards getting what we want. The solution from the work we are citing here is to use the so-called reverse multiplication-friendly embeddings (RMFEs), which are very interesting tools. Essentially an RMFE is a pair of linear maps φ and ψ, where φ maps F_2^d into F_{2^m} and ψ maps F_{2^m} back to F_2^d, and you can think of m as being very close to d — for illustration purposes just think of them as the same, although they are not and cannot be the same, but let's think that for a moment. These two maps satisfy the following: maybe you cannot get the coordinate-wise product of two vectors directly, but you can first map the two vectors with φ, which gives you two field-extension elements of F_{2^m}, multiply those, and then map the result back with ψ, and it turns out, because of the properties of these two functions, that this gives you the coordinate-wise product. How are they used in CCXY18? Oh, before I get into that, I want to stress that the ratio m divided by d can actually be shown to approach a constant. Also, in our case we are not dealing with fields but with Galois rings; however, recent work at CRYPTO this year, CRYPTO 2021, actually showed that we can also get RMFEs over Galois rings, so that is not really a problem. So, in CCXY18, the protocol operates as follows.
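To make the RMFE definition concrete, here is a toy (2,3)-RMFE over F_2, built by polynomial evaluation: a pair (x1, x2) is viewed as the values of a degree-at-most-1 polynomial at 0 and 1, φ evaluates that polynomial at a generator α of F_8, and ψ reads the (degree-at-most-2) product polynomial off an F_8 element and re-evaluates it at 0 and 1. This is my own illustrative instance, not one of the asymptotically good constructions the talk relies on.

```python
def gf8_mul(a, b):
    """Multiply in F_8 = F_2[alpha]/(alpha^3 + alpha + 1); elements are 3-bit ints."""
    r = 0
    for i in range(3):          # carry-less (XOR) schoolbook multiplication
        if (b >> i) & 1:
            r ^= a << i
    for i in (4, 3):            # reduce modulo alpha^3 = alpha + 1
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

def phi(x1, x2):
    """phi: F_2^2 -> F_8. The line f(T) = x1 + (x1+x2)T has f(0)=x1, f(1)=x2;
    return f(alpha) = x1 + (x1+x2)*alpha as a 3-bit integer."""
    return x1 | ((x1 ^ x2) << 1)

def psi(u):
    """psi: F_8 -> F_2^2. Read u = c0 + c1*alpha + c2*alpha^2 as the polynomial
    h(T) = c0 + c1*T + c2*T^2 and return (h(0), h(1))."""
    c0, c1, c2 = u & 1, (u >> 1) & 1, (u >> 2) & 1
    return (c0, c0 ^ c1 ^ c2)

# RMFE property: psi(phi(x) * phi(y)) is the coordinate-wise product of x and y.
for x1 in (0, 1):
    for x2 in (0, 1):
        for y1 in (0, 1):
            for y2 in (0, 1):
                assert psi(gf8_mul(phi(x1, x2), phi(y1, y2))) == (x1 & y1, x2 & y2)
```

The property holds because the product of two degree-at-most-1 polynomials has degree at most 2, so it is uniquely determined by its image in F_8 (the minimal polynomial of α has degree 3), and evaluating it at 0 and 1 recovers exactly the coordinate-wise products.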
To secret-share a value — and I am going to stick to the field extensions, because that is how the original work is described, and I insist, I'm sorry, it is CCXY18 — to secret-share one of these vectors, which actually lives in F_2^d, they secret-share the image under φ of this vector, which is an element of F_{2^m}. This is secret sharing over F_{2^m} as we know it, which can be made to work with, let's say, Shamir secret sharing. If you want to add two values, it is easy, because φ is additively homomorphic. If you want to multiply, it gets a little bit more tricky, because in multiplication you essentially want to obtain shares of the coordinate-wise product of x and y, and having shares of it means having shares, over F_{2^m}, of the image under φ of exactly that product. So that is why we want to get shares of φ of the coordinate-wise product. How do we get that? They get it in two steps. In the first step, the parties execute whatever protocol they come up with over F_{2^m} to compute the product of these two secrets — there are many protocols we can use, for example the triple-based protocol I just mentioned. Now they have this. However, this is not enough, because φ(x) times φ(y) may not be equal — and in fact, in general, is not equal — to the thing we want, which is the image of the coordinate-wise product. To address this, they use MPC — another small protocol, but it requires interaction — to apply the map τ, which is this composition: it takes elements of F_{2^m} to elements of F_{2^m}. So they take this value and apply τ to it. Again, this is an interactive protocol.
They have some interaction, and then they get this, and I claim that this is exactly equal to what we want. This is easy to see if you use the definition of τ — τ is just this composition — and then use the property of reverse multiplication-friendly embeddings telling you that whenever you apply ψ to a product of two φ's, you get the arguments multiplied pointwise. So this is CCXY in a nutshell. Now, can we use this protocol for our goals here? The answer is no: this protocol will not work in our setting directly, not out of the box, because applying this extra function τ adds an extra round. I insist, I want to emphasize: we need to stick to one-round multiplication. So even though the multiplication up here can be done in one round, this thing down here requires an extra round, and we do not want that. This is why in this work we introduce an additional contribution, which is that we encode values differently. This is very important. So now, if you want to secret-share a vector over S^d — and here I am coming back to our Galois rings — remember that S is a Galois ring GR(p^k, ℓ), where ℓ is like a constant.
You can think of S as the analogue of the field F_{p^ℓ}, to be concrete, and here you have R, which is GR(p^k, ℓm), the analogue of the degree-m field extension; so R is an extension of S of degree m. Now, if you want to secret-share a vector here: in the previous approach you would just apply φ to that vector, which gives you an element of R, and then do secret sharing over R of that element. Instead, we are going to find some x such that x, under the function ψ, gives the vector that you are interested in sharing, and this small x is what we are going to secret-share. So this is the new type of sharing, the new type of encoding: to encode a vector, instead of mapping it through φ, we encode it with a pre-image under ψ. And by the way, you can show by the properties of RMFEs that ψ is going to be surjective. You can also show that addition is still local, because of the linearity of ψ, so I will skip over this quickly. But multiplication in one round can also be done, quite fast and with very good communication. It works like this: we assume that we now have a more complex multiplication triple of this form as preprocessing, and with this at hand, if you have x and y which you want to multiply — the corresponding vectors encoded by them are just their images under ψ — the goal is to obtain shares of the product, which means shares of a value z such that when you map it under ψ you get the coordinate-wise product of the two original vectors. To do this, we only need a single round.
The parties open x minus a and also y minus b, just like with normal Beaver-based multiplication, and here we use regenerating codes, so this opening is very efficient: it only involves n^2 divided by m, that is, n^2 divided by log n, elements. Then the parties compute locally this expression here, which is essentially the Beaver-based multiplication expression, but now with τ's — this is all data from the preprocessing — and they get this; you can check that it is equal to this. Now, I claim that this is the z we can take. In other words, I claim that if we take z to be this value, then when we map z through ψ we get what we want, which is the component-wise product. This is true because this is the same as ψ of these two things, and remember, because of the properties of RMFEs, ψ of a product of two φ's is the same as the component-wise product of the arguments of the φ's. Well, the arguments in this case are ψ(x), which is the vector x by definition, and ψ(y), which is the vector y by definition. So this gives you the component-wise product. This novel method of encoding really removes the issue of having to apply τ after the multiplication has been done; basically, you can apply τ at the same time as you do the local computation, if you will. Now, I also want to note that our new encoding improves CCXY on its own: this encoding is not only useful for our work, but also for CCXY — which, I'm sorry, I insist is CCXY18. The extra re-encoding round is removed, as I mentioned, so our encoding can also be applied to CCXY to remove the re-encoding over there. And there is also this annoying thing at the beginning, a subspace check for the input phase that verifies that the input is in the right subspace.
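The identity behind this step — any ring element X is a valid encoding of the vector ψ(X), and ψ(τ(X)·τ(Y)) equals the coordinate-wise product ψ(X)*ψ(Y), so τ can be folded into the local combination with no extra round — can be checked with a toy (2,3)-RMFE over F_2. This is my own illustrative instance built from polynomial evaluation, not the Galois-ring RMFEs used in the paper.

```python
def gf8_mul(a, b):
    """Multiply in F_8 = F_2[alpha]/(alpha^3 + alpha + 1); elements are 3-bit ints."""
    r = 0
    for i in range(3):
        if (b >> i) & 1:
            r ^= a << i
    for i in (4, 3):  # reduce modulo alpha^3 = alpha + 1
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

def phi(x):
    """phi: F_2^2 -> F_8 (evaluate the line through (0, x1), (1, x2) at alpha)."""
    x1, x2 = x
    return x1 | ((x1 ^ x2) << 1)

def psi(u):
    """psi: F_8 -> F_2^2 (evaluate the coefficient polynomial of u at 0 and 1)."""
    c0, c1, c2 = u & 1, (u >> 1) & 1, (u >> 2) & 1
    return (c0, c0 ^ c1 ^ c2)

def tau(u):
    """tau = phi o psi, the re-encoding map."""
    return phi(psi(u))

# Every X in F_8 encodes the vector psi(X), and the tau-ed product of two
# encodings is again a correct encoding of the coordinate-wise product.
for X in range(8):
    for Y in range(8):
        vx, vy = psi(X), psi(Y)
        assert psi(gf8_mul(tau(X), tau(Y))) == (vx[0] & vy[0], vx[1] & vy[1])
```

In the protocol, τ is only ever applied to publicly opened values and to preprocessed data, which is why — unlike in CCXY18, where τ is applied to a shared secret after the product — it costs no interaction.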
Well, here you do not need that, because every secret sharing is a possible sharing of a vector; in the previous case only the elements in the image of φ could be correct encodings, whereas here everything can be an encoding, which removes this problem. There are several additional challenges, specifically when we move to active security. In our protocol you need to deal with the correctness of the openings, because the regenerating codes do not give you any redundancy, and you also need to deal with the consistency of the broadcast, and so on. We use function-dependent preprocessing to address this in a novel way, and we leave the details for you to check in the paper, but it is very interesting that we can solve it that way. Then, finally, I just want to revisit, in one single slide, the results on regenerating codes. These are not so important to understand in detail; I just want to mention that we have a new characterization of what it means to have repairability in terms of the dual code, which can be interesting on its own, and we also have an existence theorem on repairing codes for Shamir secret sharing. To be concrete, the function that you take is a generalized trace function over Galois rings, where the alphas are evaluation points that you need to take very carefully, as powers of a certain root of unity, and the weights, the w's, come from the dual code of Reed–Solomon codes. This is similar to the known result over fields, but it has several crucial differences: for example, the trace function has to be defined differently, in a more generalized way, and the evaluation points need to be taken in a very careful manner, because otherwise this will not work. So there are several differences that are very interesting. With this, I would like to thank you for your attention — you stuck with me all this time, and I appreciate that — and with that I will stop. Thank you so much.