Good morning. In another MPC session, the first talk is on universally composable subversion-resilient cryptography, and Bernardo will give the talk.

Hi, everyone. I'm Bernardo, and I'll talk about universally composable subversion-resilient cryptography. This is joint work with Suvradip Chakraborty, Jesper Buus Nielsen, and Daniele Venturi. First, a little background on subversion. This is the setting where there is a very powerful adversary that is able to tamper with the implementation of the honest parties in a protocol: it can replace a party, change the specification, and so on. Work on this actually started back in the '80s and '90s, but it really picked up steam after Edward Snowden's revelations about intelligence agencies and other players mounting this type of attack, so the area took off again almost ten years ago. The current state of affairs is that we have many different models, but all of them share the problem that they only imply standalone security. If you want to build a larger protocol composed of smaller pieces, the current models don't allow you to show security under composition. So what we do here is extend the universal composability framework of Canetti to accommodate subversion. In particular, we look into reverse firewalls, proposed at Eurocrypt 2015: a piece of hardware or code that sits between a party and the external world and just sanitizes the communication to and from that party. Then, to showcase our model, we build sanitizable commitments and coin tossing, and we build maliciously secure MPC by sanitizing the GMW compiler. Before we start talking about the model, a very quick UC recap.
Just to make sure we're on the same page: in UC, you define what is called an ideal functionality for a task, and this is secure by definition. For example, a commitment functionality is just a trusted third party that takes the committed message and then sends the opening to the other party when the commitment is opened. The goal is to build a protocol that implements this functionality. In UC terminology, this means you need to build a simulator S that communicates with the environment, such that the view of the environment in the two worlds is indistinguishable: the environment cannot tell whether it is running with the real protocol, the real adversary, and the parties, or talking to the simulator in an ideal world with this functionality that is secure by definition. Now, in our model, to accommodate reverse firewalls, we split every party in the protocol into two parties: a core and a firewall. The core is responsible for computing every message of the protocol, all the cryptography happens in the core, and the firewall just sits there sanitizing every message that the core sends or receives. In UC, these two parties playing the core and the firewall are independent parties. In particular, they can be independently corrupted, and they cannot communicate with each other directly, only through the ideal functionalities.
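As a rough illustration of the core/firewall split (a toy sketch of my own, not the paper's construction): the core computes a protocol message, and the firewall re-randomizes it before it leaves, so any subliminal bias a subverted core embeds in its randomness is destroyed while the message keeps its algebraic form.

```python
# Toy sketch (not from the paper): a party split into a Core that computes
# protocol messages and a Firewall that sanitizes them by re-randomization.
# The "message" is a group element g^x mod p, and the firewall multiplies in
# fresh randomness g^r, mauling any bias the core may have planted in x.
import secrets

P = 2**127 - 1          # toy prime modulus (schoolbook group, for illustration)
G = 3                   # toy generator

class Core:
    """Computes the actual protocol message; may be subverted (specious)."""
    def next_message(self) -> int:
        self.x = secrets.randbelow(P - 1)
        return pow(G, self.x, P)

class Firewall:
    """Sits between the core and the network; only re-randomizes."""
    def sanitize(self, msg: int) -> int:
        self.r = secrets.randbelow(P - 1)
        return (msg * pow(G, self.r, P)) % P   # mix in fresh randomness

core, fw = Core(), Firewall()
outgoing = fw.sanitize(core.next_message())
# The sanitized message still has the right form: it equals g^(x + r).
assert outgoing == pow(G, core.x + fw.r, P)
```

The key design point, mirrored in the talk, is that the firewall holds no secrets of the core and only applies public re-randomization.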
Following previous work, the type of corruption that we allow is what we call specious corruption: something that looks good on the surface but is still doing something bad, like leaking information via some subliminal channel, or having a secret trigger message shared with the adversary that allows it to extract some secret. It's not hard to see that for some functionalities we cannot provide security against arbitrary corruption, because the firewall simply cannot sanitize in some cases, so we need to restrict to these specious corruptions. Another notion in our model is that of a sanitizable ideal functionality. This is an ideal functionality that has a dedicated sanitation interface, which communicates only with the firewalls: the cores use the usual input/output interface on top, and only the firewalls use the sanitation interface. A protocol implementing something like this looks as follows: each party consists of a core and a firewall, shown here in the G-hybrid model, that is, the protocol uses some other functionality G. Note that the core implements the input/output interface of F and the firewall implements the sanitation interface, because later on some other firewall will connect to it. Another thing that we can obviously do is sanitize a normal UC functionality that we already have. For example, think of this functionality F as a coin toss, and we want a sanitized coin toss. The protocol that implements it would look something like this: the protocol has a firewall even though the functionality does not, because to do the sanitizing the protocol needs both the firewall and the core.
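The two disjoint interfaces of a sanitizable functionality can be sketched as follows (the class and method names are mine, chosen for illustration): a message arriving on the input/output interface is buffered until the sender's firewall has had its chance to process it on the sanitation interface.

```python
# Illustrative sketch (names are hypothetical, not the paper's): a
# sanitizable ideal functionality exposes an input/output interface for
# cores and a separate sanitation interface for firewalls. A core's message
# is buffered until the sender's firewall has sanitized it.
from dataclasses import dataclass, field

@dataclass
class SanitizableFunctionality:
    pending: dict = field(default_factory=dict)    # sender -> buffered message
    delivered: list = field(default_factory=list)  # messages after sanitation

    # --- input/output interface: used only by cores ---
    def io_send(self, sender: str, msg: int) -> None:
        self.pending[sender] = msg                 # held until sanitation

    # --- sanitation interface: used only by firewalls ---
    def san_blind(self, sender: str, blinding) -> None:
        msg = self.pending.pop(sender)
        self.delivered.append(blinding(msg))       # firewall-processed message

F = SanitizableFunctionality()
F.io_send("C1", 7)
F.san_blind("C1", lambda m: m ^ 0b1010)            # firewall XORs in a mask
assert F.delivered == [7 ^ 0b1010]
```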
The dotted line here traces roughly the usual flow of communication inside the sanitizable functionality: when core C1 sends a message to the functionality, it usually gets rerouted to its firewall, which can do some processing, and later it gets forwarded to the other party, going first through the other party's firewall and then out. To show that a protocol realizes this functionality, we need some wrapper around the functionality, because a normal functionality doesn't have the sanitation interface. We need something to take care of all this boilerplate, so that the environment cannot trivially distinguish the two cases just from the number of parties and so on. An important notion for the regular UC functionalities that you're going to build a firewall for is transparency: if you have an honest core and you put a firewall in between its communication, you don't want this firewall to change the communication behavior of the core. So it needs to be transparent to the honest core whether the firewall is even there. Now that we have these two separate parties, we get additional types of corruption. We now have all these different corruption patterns that we would have to build a simulator for, hence the crying face on the slide, because there are too many corruption patterns now. The idea is to map each of these corruption patterns onto some behavior of the combined party in the functionality. We don't need to take care of all of them, just a few, because the others are special cases. For example, take the malicious row here at the bottom; and if we look at a specious core with an honest firewall, this gives something like a specious corruption, let's say.
An honest core with a malicious firewall gives what we call an isolated party in the functionality, because the firewall could simply be cutting the party's communication with the outside, for example. And an honest core with a semi-honest firewall: the firewall is only semi-honest here to disallow trivial solutions where all the secrets are transferred to the firewall and it just computes the protocol itself. Okay, let's get rid of all the other ones and focus on these for now. Specious core with honest firewall is the most interesting one: we want to show that a specious corruption together with an honest firewall gives us an honest party in the functionality; that's the whole point of the firewall. For this we have a notion we call strong sanitation. What is very nice is that this allows us, through an indistinguishability argument, to show that an environment cannot distinguish a specious core together with an honest firewall from an honest core with an honest firewall. So you don't need to analyze this case in the simulator, and that's the good news: you just prove a separate lemma about your final protocol. Now we're back to three cases. The isolated one can, for some functionalities, be thought of as giving a security-with-abort type of guarantee; but in many cases you can also map it to malicious, if you don't care about that flavor of security. For example, in coin toss we just map it to malicious. And then we're back to just two cases that we need to consider in the simulator, which is good news: we only need to look at honest core with semi-honest firewall, and malicious core with malicious firewall.
And you prove strong sanitation on the side. So how do we do, for example, commitment in this model? First we need to define a sanitizable commitment functionality. You take the normal commitment functionality, where a party sends a message to commit and later the functionality opens it to the receiver, and we add the sanitation interface. Every time a party tries to commit to something, the functionality signals the firewall, and the firewall gets the chance to blind this message, to sanitize it; then, when it opens, the receiver gets the sanitized input. We build a UC commitment inspired by the Canetti and Sarkar one from Asiacrypt '20, I think, based on DDH. I'm not going into the details of the protocol here, they are in the paper, but the main idea is that the construction allows the firewall to re-randomize everything and then, on the opening, readjust according to what it did. The kind of theorem we prove we call SR-UC, for subversion-resilient UC. The only difference is that the environment and the adversary are quantified over specious environments and adversaries, the ones that can do this type of specious corruption in the ideal model. Now that we have this sanitizable commitment, we want to build a coin toss, and we do it in this F-hat, sanitizable-commitment, hybrid model. The cool part is that this is now pretty simple, because you already have the sanitizable commitment: you basically do the coin toss as you normally would, each core samples some value s_i and commits to it through the functionality.
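The re-randomization idea behind the commitment can be shown with a toy Pedersen-style commitment (my own sketch with made-up parameters; the paper's DDH-based construction differs): the firewall multiplies the core's commitment by a fresh commitment, which blinds both the message and the randomness, and the adjustments recombine cleanly at opening time.

```python
# Toy Pedersen-style sketch of firewall re-randomization (illustrative
# parameters; not the paper's construction). com(m, r) = g^m * h^r. The
# firewall multiplies in com(s, t), turning a commitment to m into a
# fresh-looking commitment to the blinded value m + s.
import secrets

P = 2**127 - 1                   # toy prime modulus
G, H = 3, 5                      # toy bases (assumed independent for the sketch)

def commit(m: int, r: int) -> int:
    return (pow(G, m, P) * pow(H, r, P)) % P

# Core commits; its randomness r may be subverted / biased:
m, r = 42, secrets.randbelow(P - 1)
c = commit(m, r)

# Firewall blinds both the message and the randomness:
s, t = secrets.randbelow(P - 1), secrets.randbelow(P - 1)
c_sanitized = (c * commit(s, t)) % P

# On opening, the firewall's adjustments recombine with the core's opening:
assert c_sanitized == commit(m + s, r + t)
```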
Now the firewall that you're implementing samples some randomness and sends it to the functionality to blind this input. The output is an s* obtained by recombining all the openings of the other parties with the contributions of the core and the firewall, the s_i and r_i. Then you can show that this realizes what we call wSR-UC, wrapped subversion-resilient UC, because the target is a normal functionality with this wrapper around it, and it's again quantified over specious environments and adversaries. Here is an illustration of how this looks in action: the orange one is the coin-toss protocol, which uses the green one, the commitment protocol, to implement it. What I've hidden here is a sanitizable message-transmission functionality on top of which all our protocols are built, but we don't need to worry about that now. This is just to see how the firewalls and the cores connect. The orange one realizes a normal UC functionality, so it doesn't need to implement a sanitation interface, but the green one does, because when you build the firewall for the coin toss, that firewall will communicate with the firewall of the commitment protocol. So now, how do we build malicious MPC with all this, just combining the pieces? We sanitize GMW, the Goldreich-Micali-Wigderson compiler from '87, which is how to turn semi-honest MPC into maliciously secure MPC. The main idea is that all the parties run an augmented coin toss to get a random string: each party uses its string as its random tape, and the others hold commitments to it.
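The coin-toss step can be sketched like this (my toy encoding, using XOR as the combining operation): each core contributes a share s_i, each firewall blinds it with fresh randomness r_i, and the output recombines all sanitized openings, so it is uniform as long as at least one contribution is.

```python
# Sketch of the sanitized coin toss (toy modeling, XOR combination is my
# choice for illustration): cores contribute shares s_i, firewalls blind
# them with r_i, and the output s* recombines the sanitized openings.
import secrets
from functools import reduce
from operator import xor

BITS = 128

def coin_toss(n: int) -> int:
    core_shares     = [secrets.randbits(BITS) for _ in range(n)]  # s_i (maybe biased)
    firewall_blinds = [secrets.randbits(BITS) for _ in range(n)]  # r_i (honest)
    sanitized = [s ^ r for s, r in zip(core_shares, firewall_blinds)]
    return reduce(xor, sanitized)   # s*: combination of all sanitized openings

out = coin_toss(3)
assert 0 <= out < 2**BITS
```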
Then the parties commit to their inputs, and for every message they produce in the protocol they prove in zero knowledge that the message was computed correctly with respect to the committed input, the transcript, and the random tape of the party. One difficulty we have is that we cannot prove things about UC commitments, because there is no notion of a bit string inside the functionality that we can prove a statement about. So we cannot just use the commitment plus the coin toss and build it directly; we need a larger commit-and-prove functionality that does the commit and the prove inside one functionality. We build that too, but now we need a sanitizable commit-and-prove. The difference from the standard commit-and-prove is that the firewall again has the option to sanitize the inputs being sent and to sanitize the statements being proven. We can build this commit-and-prove in the SR-UC model, and the construction is basically the commitment construction we have plus a re-randomizable NIZK. Then we can show that we have the sanitizable commit-and-prove. Now let's put it all together and build this GMW using the commit-and-prove and the sanitized coin-toss functionality. Everything becomes pretty simple: you have the random-tape generation, almost exactly as before. All the parties commit to a value with this commit-and-prove functionality F_CP; they get some random string, from which each core i builds its random tape, this r-hat_i, and the other parties hold the commitments to it. And now they all commit to their inputs to the semi-honest protocol with the functionality.
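One compiled round of the sanitized GMW can be sketched at a very high level (my toy modeling: the zero-knowledge proof is abstracted as the ideal commit-and-prove functionality simply re-deriving the message from the committed input, random tape, and transcript, and comparing):

```python
# High-level sketch of one party's rounds in the compiled protocol (toy
# modeling, not the paper's construction). The "proof" is abstracted as the
# ideal commit-and-prove re-deriving the message from the committed values.
import hashlib

def next_message(inp: bytes, tape: bytes, transcript: bytes) -> bytes:
    # Stand-in for the semi-honest protocol's next-message function.
    return hashlib.sha256(inp + tape + transcript).digest()

def fcp_prove(committed_inp, committed_tape, transcript, msg) -> bool:
    # Ideal commit-and-prove: checks the relation "msg was computed
    # correctly w.r.t. the committed input, random tape, and transcript".
    return msg == next_message(committed_inp, committed_tape, transcript)

inp, tape = b"x_1", b"r_hat_1"     # committed input + coin-tossed random tape
transcript = b""
for _ in range(3):                 # three protocol rounds
    msg = next_message(inp, tape, transcript)
    assert fcp_prove(inp, tape, transcript, msg)   # proof accepted
    transcript += msg              # message appended to the transcript
```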
But now the difference is that the firewall of this commit-and-prove will not blind, will not change, the message, because you don't want to change the inputs on which you're running the protocol; the firewall just lets the message go through. Then comes the protocol execution: you just run the semi-honest protocol with the committed inputs, the transcript, and the random tape of each party. Each party runs with its own, and for every new message it produces, it asks the functionality to make the proof that this message was produced correctly with respect to the committed messages, the transcript, and the random tape. And of course, the statement that parameterizes this commit-and-prove functionality is the statement that the message is produced correctly, and so on. The firewall can sanitize the statement, which here just means it checks whether this s_i*, the output of the F_toss functionality, and the transcript are correct so far; if everything is good, it just appends the message to the transcript and everything runs on. So then we get a sanitized GMW. Concluding: we proposed this new model to handle subversion and composability. What can still be done here is: what other functionalities can we build, and what are the firewalls for them, say OT, zero knowledge, etc.? Can we get more efficient MPC, or handle adaptive corruptions? What are the possibilities? That's all I have, thanks.

Questions? Okay, let's thank the speaker. Thank you for a good talk. I was wondering how you decided to keep track of corruptions in the model you made. I know the kind of suggested way of doing this in UC is having this corruption aggregation that somehow floats around.
Do you use that, or did you use a different way? We have this specious environment that can just give the corruption instructions, malicious or semi-honest or specious, for each individual party, depending on the pattern in the table. And you store somewhere in the party that this corruption has happened? Yes, this is kept track of by the environment. Okay, thank you. Sure. Any other questions? Then let's thank the speaker again. Thank you. And we have an announcement. Okay, if it's quick enough.