All right, thank you. And I really want to thank Sam, because his question is exactly the perfect setup for this talk. This is work that will be presented at CCS; it's on ePrint, and it tackles exactly some of the problems and issues that were just brought up. All the points he raised: he clearly has a lot of experience digging deeper into these protocols and implementing them. My claim, and hopefully what you'll get out of this talk, is that you would be forced to deal with those things if you used an interactive theorem prover, or computer-aided verification, on your protocols.

So, what this talk is not about: there are no new protocols and no new hardness assumptions, and I'm not talking about UC yet; we're doing game-based verification and modeling. I'm also not talking about having the computer find the protocol or the proof for you. This is about taking a specific protocol with a specific proof and getting high confidence in it. And what I hope you also take away is that if you've gone through the trouble of verifying a protocol in a formal model, in a theorem prover (you do have to do a bit of work), you can get an executable out of it that is correct by construction.

Just an introductory slide on MPC: a protocol that allows a bunch of mutually distrusting parties, typically software, to compute a function on private inputs. Historically there is the semi-honest (passive, honest-but-curious) model, which cares mostly about privacy, and the active or malicious model, where you can have cheating and arbitrary deviation from the protocol. I'm not going to go through this.
This crowd doesn't need it. But the reason I bring up secret sharing, the share and reconstruct algorithms, is that I want to actually formalize it in EasyCrypt and show that to you. The message I want you to walk away with is that this is possible: I consider MPC an advanced cryptographic protocol and primitive, and it's not that straightforward to implement, especially once you get to the malicious, active setting and to adaptive adversaries.

One more thing I'm going to talk about: over a long period of time, if things stay secret-shared, a mobile adversary can gradually collect all the shares. This has been handled in the literature in the proactive setting, where you periodically move to fresh polynomials for the sharing. So even if somebody is compromising parties and collecting shares over time, parties get reset, restarted from scratch, and recover their shares from the existing polynomial, so that shares from different periods cannot be stitched together. This is very basic, but I want you to be aware of it because I'll show you formalizations that involve it.

All right. The reason I say MPC becomes much more complex is that once you start worrying about dynamic groups and things like proactive security, you very quickly find that your MPC protocol has seven, eight, or nine sub-protocols. That's why composability is very important; but that's at the level of protocol design. What about the software? This is non-trivial. When you refresh shares, the standard trick is: everybody holds an evaluation of a polynomial, so a simple way to refresh is to generate a polynomial that evaluates to zero where the secret sits (that's the green one on the slide), share it, add it in, and everybody deletes their old shares.
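As an aside, the share, reconstruct, and refresh steps just described can be sketched concretely. This is an illustrative Python sketch of Shamir sharing with proactive refresh, not the paper's EasyCrypt or extracted OCaml code; the modulus and function names are my own:

```python
import random

P = 2**61 - 1  # an illustrative prime field modulus, not from the talk

def rand_poly(degree, free_term):
    """Coefficient list [a0, a1, ..., a_degree], with a0 = free_term."""
    return [free_term] + [random.randrange(P) for _ in range(degree)]

def eval_poly(coeffs, x):
    acc = 0
    for c in reversed(coeffs):  # Horner's rule
        acc = (acc * x + c) % P
    return acc

def share(secret, t, n):
    """Shamir: degree-t polynomial with the secret as free term; party i gets f(i)."""
    f = rand_poly(t, secret)
    return {i: eval_poly(f, i) for i in range(1, n + 1)}

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t+1 shares {i: f(i)}."""
    acc = 0
    for i, y in shares.items():
        num, den = 1, 1
        for j in shares:
            if j != i:
                num = (num * -j) % P
                den = (den * (i - j)) % P
        acc = (acc + y * num * pow(den, P - 2, P)) % P
    return acc

def refresh(shares, t):
    """Proactive refresh: add shares of a random polynomial with a zero
    free term (the 'green' polynomial), then delete the old shares.
    The secret is unchanged, but shares from different epochs don't mix."""
    z = rand_poly(t, 0)  # in the real protocol this is generated jointly
    return {i: (y + eval_poly(z, i)) % P for i, y in shares.items()}
```

With t = 2 and n = 5, any three refreshed shares still reconstruct the secret, while mixing shares from before and after a refresh does not.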
So what you get is a new sharing of the same secret. Recover-share is for when a party leaves the protocol and comes back up: it has no state, so you need to give it its share back from the existing shares.

This is very basic MPC; it has been known since 1988, and BGW is one of the most fundamental protocols. Here's an interesting fact: there was no fully specified proof until 2011. I really appreciate the work that Asharov and Lindell went through to specify everything in detail and give a full proof, but that's a long time. And when you look at that paper, it's about an 87-page paper, 84 pages of which are about the passive and static cases. For static active adversaries, and especially once you get into adaptive malicious adversaries and composability, there are a lot of other papers and theorems you need to refer to. This is highly non-trivial. Actually, I don't know if anybody has even implemented something for adaptive active adversaries, but whatever exists is non-trivial.

So why worry about verifying MPC? One, it provides an existing body of proofs that you can tackle with computer-aided verification of complex protocols. People are now also talking about zero-knowledge and homomorphic encryption, and I think we're ready to verify these in interactive theorem provers; they're becoming increasingly relevant in blockchain settings. And not only that: if you were at the blockchain talk yesterday, Dalia Malkhi mentioned Zyzzyva, a consensus protocol published in 2007. Ten years later, people found that its liveness and safety guarantees were violated in very simple scenarios. Ten years to find that, in something that doesn't even worry about confidentiality and privacy of inputs; MPC is way more complicated. I'm not saying existing MPC protocols have issues in them; I'm saying this is about a higher
level of confidence, one that's just not there yet. And if we go down this road to build infrastructure that should last for decades, I think this is the best way to get that level of confidence.

All right, there have already been some workshops and some funding programs, so there is interest in the community in merging cryptography with the PL community and getting formal verification. So far, though (I've been to one of those workshops; it was very useful), the two sides speak different languages, and it's unclear how soon they will really merge. Hopefully this work provides evidence that it's going to happen soon, and that people can build on it.

All right, my opinion, and I'll just mention it here because this is the end of the fluffy stuff and it gets more technical from now on, is that when you talk about formal verification, don't talk only about the specification of the protocol: you might as well get executable software that is correct by construction. What I learned in the last year, going through that exercise, is that you're almost 80% of the way there once you've used one of the theorem provers to formalize a protocol.

So, EasyCrypt in a nutshell, if you haven't heard of it: it's an interactive theorem prover with a probabilistic language that lets you describe many of the primitives and constructions we use in papers (you can sample from random distributions), and it can be hooked up to SMT solvers, that is, satisfiability-modulo-theories solvers, to automate parts of the proofs. Its architecture follows Bellare and Rogaway's proposal to verify cryptographic constructions as code-based games. Typically you describe a construction or a protocol as a module: some global variables, procedures, and definitions. And then there are multiple levels at which you can verify something like this. So for instance,
take textbook ElGamal. If you write it in EasyCrypt the way you would in your paper, you have three procedures, the algorithms for key generation, encryption, and decryption, and there are already operators to sample uniformly from, say, a group of a specific order. So this is not far from how you would write your protocol in LaTeX. What I'm advocating is: write this first, and then extract the LaTeX from it, which is actually possible. I think that's the right way to do it.

So here is what we did in this paper, which we presented at CCS. We defined abstract definitions for many of the primitives you would have in MPC. Not OT, though I think that should be done too. We did secret sharing, with a lot of variations: your standard polynomial-based Shamir scheme, additive sharing, gradual sharing. We did an abstract definition of MPC, and then we proved some composition theorems at that abstract level. Then, for each abstract primitive, you need an instantiation of it; in Shamir's case, you select a polynomial of a specific degree over a field. You then prove equivalence between the abstract definition and the actual instantiation of the primitive. And then you prove security: the security game you can extract from the specification, and then you specify the proof in EasyCrypt. I'll show you examples of this; it's not going to stay at this high level.
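The textbook ElGamal example mentioned above, three procedures written essentially as in a paper, looks like this as an illustrative Python sketch. The toy group sizes and names are mine (real EasyCrypt code would use its built-in group and distribution operators, and real parameters would be far larger):

```python
import random

# Toy order-q subgroup of Z_p*, with p = 2q + 1 a safe prime (NOT secure sizes)
q = 1019
p = 2 * q + 1   # 2039
g = 4           # a square mod p, so it generates the order-q subgroup

def keygen():
    x = random.randrange(1, q)       # secret key
    return x, pow(g, x, p)           # (sk, pk = g^x)

def encrypt(pk, m):
    """m must lie in the subgroup (a square mod p)."""
    r = random.randrange(1, q)
    return pow(g, r, p), (m * pow(pk, r, p)) % p   # (g^r, m * pk^r)

def decrypt(sk, c):
    c1, c2 = c
    # c1 has order dividing q, so c1^(q - sk) = c1^(-sk); this divides out pk^r
    return (c2 * pow(c1, q - sk, p)) % p
```

Decryption inverts the blinding factor entirely with modular exponentiation, which mirrors how the three procedures stay close to their on-paper description.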
I just want you to walk away with an overall mental map of what happens. So: we have game-based security definitions for each of the primitives, and we code the actual proofs and verify them in EasyCrypt. But that's only half the story, or maybe two-thirds of it. The interesting thing we observed is that the EasyCrypt language is very close to a language called WhyML, from which you can automatically extract OCaml, which is executable, and in a verified way. This is exactly the question that was raised before I walked on stage. I think this is possible, and I think this is how things should be done, because expecting software developers to learn cryptography is just not going to happen, and the cryptographers are not going to write the code. It's much easier if the protocol designer uses a tool like this and the whole pipeline is automated, again in a verifiable way.

All right, this is not happening in a vacuum. In the last several years there have been a lot of interesting efforts slowly making progress towards this. But until this paper, nobody had verified a generic MPC protocol for n parties in the active setting, active adversaries, with the whole proof in EasyCrypt, and nobody had extracted executable software from such a proof specification or formalization. That's what we have in our paper. Previous work has looked at, for example, the SV17 private count-retrieval protocol, a three-party protocol that counts the number of records matching a certain query in a database, but only in the passive model. In CCS '17 there was work on Yao's two-party computation with garbled circuits, but again only passive, although they did extract executable software. And finally, in '18, HK+ did it for a protocol by Ueli Maurer. It's a very simple protocol whose beauty is that it works not only for threshold adversary structures but for general adversary structures. But there's a catch here with that
protocol: they didn't have the whole proof actually formalized in EasyCrypt. If you read the paper carefully, you find that at some point they prove that the protocol satisfies certain properties, and then there's a manual step where they argue that if a protocol satisfies these properties, then you have a simulator. But that step was never actually coded in EasyCrypt. I think it's doable; they just didn't do it. They also didn't extract executable software from their EasyCrypt formalization.

OK, so I've been talking about these abstract definitions. This, for instance, would be an example of an abstract definition of secret sharing, the same way you would define it in a paper. When you start writing, say, "I'm going to use secret sharing," you give a definition, and what matters is what's in the red box: there are two algorithms, share and reconstruct. Share takes as input some randomness and a secret and outputs shares; reconstruct takes the shares and outputs the secret. All this code is in the paper, and we also have it on GitHub; I'm happy to share the link, and I see people taking pictures. It's all open source.

At this point you're not really specifying how you're going to share or reconstruct. The good thing about working at this abstract level is that you do it once, and later on, every time you instantiate it using a different mathematical structure (additive sharing, polynomials, it doesn't matter), it will still satisfy these two definitions.
We did the same thing (all this code is there) for verifiable secret sharing. Similar to how you would implement it in C++ or any other object-oriented language, where you inherit from a class and overload some of its members, we do the same thing here; the share and reconstruct operations are just a bit more involved because the sharing is verifiable.

Now, this is no longer the abstract definition: this is the concrete implementation of secret sharing following Shamir. What this says is: you take some randomness and a secret of type secret_t, which is defined above, and you evaluate a polynomial using that secret as the free term, together with the randomness. If you were at the previous talk, which I believe was about the IETF standardization process for BLS signatures, at some point the speaker said you might make the sampling of the keys deterministic: pass the randomness in explicitly, and don't specify how that randomness should be chosen. We actually encountered exactly this issue when formalizing. We realized it's easier to pass the randomness in as, essentially, the polynomial. Typically, when you write secret sharing on paper, you say: I select random coefficients, put the secret in the free term, and that determines the polynomial. Here, instead, the randomness you pass me is a random polynomial with a zero free term; I then add the secret so that it sits in the free term, and share that. What I've found is that when you try to formalize things, there are multiple ways to set up the formalization, and each one can make your life easier or harder later on. There's no right answer.
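The design choice described above (passing the randomness in explicitly, as a polynomial with a zero free term, so that sharing itself becomes a deterministic function) can be sketched as follows, together with the additive instantiation of the same abstract share/reconstruct interface. This is illustrative Python; the names and the modulus are mine, not the paper's:

```python
P = 2039  # illustrative small prime field

def eval_poly(coeffs, x):
    """Evaluate a polynomial given as [a0, a1, ...] using Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def shamir_share(r, s, n):
    """Deterministic Shamir share: r is the caller's explicit randomness,
    a polynomial with a zero free term. Adding s puts the secret in the
    free term; party i's share is the evaluation at i."""
    assert r[0] == 0, "randomness must have a zero free term"
    f = [s % P] + r[1:]
    return [eval_poly(f, i) for i in range(1, n + 1)]

def additive_share(r, s, n):
    """Additive instantiation of the same interface: r supplies n-1 random
    summands; the last summand is fixed so everything sums to s."""
    assert len(r) == n - 1
    return list(r) + [(s - sum(r)) % P]

def additive_reconstruct(shares):
    return sum(shares) % P
```

Because both sharing procedures are now pure functions of the explicit randomness, sampling is isolated in one place, which tends to make equational reasoning in a prover easier.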
It's really up to your experience and your taste.

We have the same thing, for instance, for additive secret sharing, a concrete implementation that satisfies the same abstract definition, and likewise for commitment schemes: Pedersen and Feldman. Then, in the paper, at the abstract level, we prove that if you combine a primitive satisfying the honest-but-curious definition (basically, that the shares are independent of the secret) with a commitment scheme, you get verifiable secret sharing. Again, this is proven at the abstract level, and in this framework, any time you instantiate any of these primitives with a concrete mathematical structure, that protocol inherits the composition proofs done at the abstract level. That's very useful, because these compositions you prove once. Mind you, this is not UC composition; this is composition in series.

We did the same thing for MPC. This is an abstract definition of what an MPC protocol could be. Again, it's very high-level: you have inputs, players, identities of players, phases, an output phase. It doesn't say anything concrete; think of it as the ideal functionality. But then you have to instantiate it using concrete protocols. Initially, when we started, we were very ambitious: we wanted to do a proactive protocol for dynamic groups with honest majority. That didn't happen; we settled for honest majority and proactive, but no dynamic groups yet. So we have some
We settled for just honest majority and proactive, but no dynamic groups yet so we have some some Composition theorems on in the proactive setting that are formalized at the abstract level All right This for instance would be the recover protocol that I tried briefly to describe earlier in using just curves for MPC when you have proactive MPC when you have basically a Party that has been reset and it's trying to join the MPC again And it needs to get a the shares How do you give it the shares what I showed you before that every party would generate a random polynomial with zero at that Index of that share this this basically is the protocol how you'd implemented in easy crib All right, so so far that I'm talking only about abstract definitions and concrete Instantiation then there's the proof. How does a proof in easy crib look like well? I mean the proof script will basically be a bunch of definitions at the beginning you're importing the modules the abstract definitions You want to satisfy in the concrete instantiations and then you're defining the type of the adversary We're defining here an oracle because in the security game you have to allow the adversary to query the share functionality with any secrets that it wants and the game that you're the security Game that you have is basically similar to how you would define security of an encryption scheme is Come up with two messages and then encrypt them and you should distinguish We're doing the same thing here for the semi honest model with secret sharing It just doesn't fit here. 
So this is what it looks like: you generate two secrets, or two lists of secrets; the adversary chooses them, you flip a bit, and you send it the shares of one of the two. The adversary can then query the sharing oracle on any secret it wants to get its shares, and at the end it has to guess the bit. In EasyCrypt, because you can compare two probabilistic distributions, you check whether the two bits are identically distributed or not. And depending on whether you want statistical, perfect, or computational security, you can do all of this in EasyCrypt. We've done all of this; here is the list of primitives and proofs we have. It's about 30 files, with a lot of definitions, for the semi-honest and the malicious cases.

All right. As I said, that's only half the story: you've written the abstract definitions and concrete instantiations in EasyCrypt, and you've written and verified the proofs. The interesting thing is that we found the EasyCrypt language to be very similar to WhyML, and I don't think that's an accident: WhyML is a language built for deductive verification, and I think the EasyCrypt developers were inspired or influenced by it. It turns out you can quite easily translate from EasyCrypt to WhyML. And if you haven't heard of OCaml: it's a language that's been around for a while, heavily used in the verification and PL communities, and really designed for safety. So what we did is write a small tool that translates an EasyCrypt specification of a protocol into a WhyML specification of that protocol. Once you're in WhyML, you have all the power of the Why3 framework: there are many tools that can verify WhyML code and extract code from WhyML to other languages. For instance, there's already work that extracts
OCaml: a verified tool that translates WhyML to OCaml, and OCaml is executable. There are also some tools online that claim to translate WhyML to C; we tried them, but they didn't work, and they don't claim to be verified anyway. PVS, the Prototype Verification System, is SRI's interactive theorem prover; it's been around for 20 years, and its language is very close to WhyML, so you could easily translate from WhyML to PVS. We haven't done that yet, but my colleagues have a PVS-to-C translator. So I think it's very plausible that in the next year or so somebody writes a tool that translates WhyML to C, or maybe we'll do it via PVS, and then you can get a C implementation.

Now, the interesting thing when we did this. Oh, this slide just shows how closely similar the languages are. Here you see part of the code for additive gradual sharing. Sorry, gradual sharing is where you take a secret, share it additively, and then take each of the summands, the additive shares, and share it using Shamir sharing. The reason we use it is that it appears in some protocols for mixed adversaries: as in Dalia's talk yesterday, you can have trade-offs between how many parties are corrupted passively and how many actively; it's sort of a dial. If you look at the protocols that do this, they use the notion of gradual sharing, with polynomials of increasing degree for the different additive shares. So on the left is the EasyCrypt code for a gradual sharing scheme; in the middle,
that's the WhyML code; and on the right, the OCaml code. They're very similar: you can see a bunch of variables, and here, for instance, you define a function in WhyML with "let", which is very similar to EasyCrypt; in OCaml it's a little different. Basically, I want to show you that the languages are not far apart, and that's why we can have verified tools translating from each to the other.

Then the question is: that extracted code, which is executable, how bad is its performance? I think it's a little unfair to expect a comparison against an optimized tool. So what we do is take Charm, a Python-based framework that has its own secret sharing class, and compare against what's there, for different field sizes and different numbers of parties. And it's not that bad. Listen, this is something you get within a minute once you've written things in EasyCrypt; people spend months and years optimizing implementations. So I think it's premature to expect the same level of performance, but nothing precludes us from implementing in EasyCrypt a lot of the optimizations people do and getting similar performance. In some of the cases, actually, one thing I want to point out: look at sharing and reconstruction in Charm versus our extracted code, reconstruction in Charm for 15 parties. Sorry, that's the second row, and the "extracted" entry should read 15 parties as well.
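Stepping back to the gradual sharing scheme shown in the code comparison: additively split the secret, then Shamir-share each summand with polynomials of increasing degree. An illustrative Python sketch follows; the degree schedule here is one plausible choice for illustration, not necessarily the paper's exact parameters:

```python
import random

P = 2**61 - 1  # illustrative prime modulus

def eval_poly(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def shamir_share(s, t, n):
    f = [s] + [random.randrange(P) for _ in range(t)]
    return {i: eval_poly(f, i) for i in range(1, n + 1)}

def shamir_reconstruct(shares):
    acc = 0
    for i, y in shares.items():
        num, den = 1, 1
        for j in shares:
            if j != i:
                num = (num * -j) % P
                den = (den * (i - j)) % P
        acc = (acc + y * num * pow(den, P - 2, P)) % P
    return acc

def gradual_share(secret, n):
    """Additively split the secret into n-1 summands, then Shamir-share
    summand k with degree k+1 (increasing degrees across levels)."""
    d = n - 1
    summands = [random.randrange(P) for _ in range(d - 1)]
    summands.append((secret - sum(summands)) % P)
    return [shamir_share(summands[k], k + 1, n) for k in range(d)]

def gradual_reconstruct(levels):
    """Reconstruct each Shamir level, then add the summands back up."""
    return sum(shamir_reconstruct(sh) for sh in levels) % P
```

Because every additive summand is needed, an adversary must break the highest-degree level too, which is what gives mixed-adversary protocols their tunable trade-off.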
It takes Charm two milliseconds to reconstruct; it took us six. To share, Charm takes 23 milliseconds and we take one. The reason we're faster at sharing but not at reconstructing is that we actually had to implement, in EasyCrypt, a library for evaluating polynomials and doing interpolation, and that's what gets synthesized into OCaml and used for the interpolation. If you took that part out and replaced it with, say, NTL, a C++ library that has a lot of the algorithms you'd need, it would be much faster. But the point is that NTL is not verified, whereas the OCaml code here is formally verified, because the tool that extracts it is formally verified. So there's a lot of research to be done here on optimization. I think there's hope: in the next five years, even a 20 to 30 percent overhead might be worth it for the increased confidence in the correctness of the software.

Anyway, what's missing, what we haven't done. We're working on some of this, but if you're interested in this area, we'd be glad to see people working on it: adaptive active adversaries; UC; verifying the underlying broadcast protocol, for instance, which nobody has done before; the underlying implementations of fields, groups, and all these things, for which, as far as I know, nobody has used a verified implementation here. Recently there have been projects like Project Everest and EverCrypt; using those as the underlying primitives for extracted MPC code would be an interesting avenue, and nobody has done it. And comparing against more optimized protocols and implementations: if you start optimizing what we've implemented in EasyCrypt, that would be a fair comparison, but so far it was really all the basics that just weren't there. It took us a lot of time to implement the very basic libraries and building blocks. And then there's translating to C.
I personally think you're not going to get much more out of C in raw speed, but the memory footprint of OCaml is bad, because OCaml doesn't release memory, so it's not going to scale past some point. So there is a need to also work in C. Anyway, the future work is basically addressing these things.

And there are other interactive theorem provers besides EasyCrypt, which is really focused on cryptography: things like Coq, Isabelle, and PVS. If you now also want to reason about the environment: this MPC software is going to be sitting in a virtual machine, or a container, or something like that. People have been talking for a while about verified microkernels, if you've ever heard of seL4, and ultimately you want to verify everything end to end. seL4, for instance, is verified in Isabelle, so if you want to verify the composition of those two things, you'd need to redo the work we did in EasyCrypt in Isabelle. It's doable in principle; it's just going to take a lot of effort. Which is good: there's a lot of research to be done.

Anyway, whenever I talk to people about this kind of work and this approach, there are questions or objections I often get, and I want to put them out there because, honestly, I don't think they're valid objections. "What if your security definition, or abstract definition, or implementation is wrong?" You have the same issue with papers; there's nothing different. I just took it and implemented it with a somewhat more rigorous formalism. "What if
the computer-checked proof has a problem?" It's exactly the same as having a problem in a paper proof: if you find a bug and it's nothing fundamental, you can fix it, and within a couple of minutes, worst case hours, you re-verify the whole pipeline. "What if there's a bug in the verification toolchain? EasyCrypt is a big code base that you're trusting." Fair enough. But EasyCrypt has been around for ten years and a lot of people are using it; eventually, I think, bugs will be discovered. And if you really want to go there, you can do the same thing as with programming languages: for the first compilers, people bootstrapped. I can imagine a verified interactive theorem prover like EasyCrypt where you start with a very small kernel in which you have enough confidence, established manually, and bootstrap everything from that. That's how people did it with compilers.

"And what if the tool goes away?" I actually had this discussion this morning, so I added this slide today. If I'm a student and I spend six or nine months learning EasyCrypt, and it goes away after two years? First of all, I am not involved in EasyCrypt at all; I have zero stake in it. But I think they did a great job, and it has matured a lot over the last ten years. If you had told anybody ten years ago that you would verify something like BGW for active adversaries with a computer, they probably would have dismissed it. We've done it. There's no reason not to take this approach moving forward, because as things get more complicated and more automated, we need that level of rigor. It takes time.
Yes, but it's worth it. Also, in my experience it's the same as with programming languages: the first one you learn takes longer, but once you've learned it, the second and the third come much more quickly. If you learn one of the theorem provers and use it as an extra verification step in your work, and for some reason you later need to switch to another one, picking it up won't take that much time. And if you're a student starting out, expecting a 30- or 40-year career as a cryptographer, I think it's worth it: you might not use it on every paper, but every once in a while you'll need it.

Anyway, I'll leave you, hopefully, with these conclusions. One: computer-aided verification, and automated verified synthesis of software, for complex cryptographic protocols is achievable today, and I think the two go hand in hand. Verification by itself is important, but if you've done 85% of the work, why not do that extra part? The performance overhead is reasonably low, given that the code is automatically synthesized, and you get a much higher degree of confidence in its correctness. And there's a lot more research to be done. With that, I'm happy to take any questions; I hope this exposed you to something new today.

[Audience] A couple of quick things. I might have misunderstood, but I think on slide 49 or 50 you said something about there being no formally verified field-arithmetic implementations?

[Speaker] I didn't say that. I said we didn't use one.

[Audience] I see, because certainly there are a bunch.

[Speaker] Sure. No, I said in our work we didn't. I know they exist.

[Audience] No, no, okay; I was just trying to clarify. Okay.
[Audience] Yeah, the actual question I have is this. I've been following this work and related work because it's extremely interesting. It seems like, on the one hand, getting correct-by-construction software is excellent. On the other hand, one of the reasons we implement something in C, for example, is not just that we want it to be fast, but that in some sense the compiler gets out of the way a little more than the OCaml compiler does, which kind of sounds like a joke, because of course you always fight the C compiler when you try to write code in it. But specifically: if I'm trying to write constant-time code, I have basically no prayer, as far as I can tell, of doing that in OCaml. I could probably get it done in C, at least if I know which compiler I'm targeting, and I can examine the IR or the assembly afterwards. So there seems to be a gap between, on the one hand, correct software in OCaml and, on the other, what I maybe really want: software in C that's constant-time. How do you see us closing that gap?

[Speaker] Fair enough. You're worried about leakage resilience and constant time?

[Audience] For example, yes.

[Speaker] I don't think EasyCrypt in its current form can do that; I think you would need to move to something like Coq or Isabelle, and I'm not very familiar with that area of research. Some of my colleagues have actually been working on constant-time implementations. I don't have an answer; I haven't looked at it, and it would probably require a lot of work. As I said, I can imagine that if you move to PVS, which is very close to WhyML, you can extract C from it; and PVS is based on higher-order logic and is very generic, so you could start reasoning about constant time there. It's just that nobody has done it yet.
It's just nobody has done it yet Okay, I'll tell you what we did honestly we did OCaml because at some point it was just two of us It's a lot of work and this is excellent. It's great. It's just like of course as you said there's there's tons more work to do And I'm worried about one thing. I'm sure everyone else is worried about, you know, ten other things But yeah, so this is a good answer. I really appreciate it. Thank you. Thank you It's more common on the question does now so I mean you mentioned ever crypt I mean one of their motivations is to get Software down all the way down. There's a team in Taiwan very fine the assembly level So there is other works in high assurance cryptographic engineering It just has to interface some question to you You mentioned a whole bunch of Shamir things and I see like sharing and re-sharing and well reconstruction But is your protocol taking into account that you don't actually have to reconstruct it? I mean, do you have for instance a Diffie-Hallman where you never? Reconstruct the share, but you just apply it using the shares I'm sorry. So I mean like when you when you do secret sharing and you want to run Diffie-Hallman Okay, you don't have to recover the secret. You can just apply the dish Yeah, we don't the recovering is I mean you it's up to you when you want to recover. I mean we're composing these I Mean you can just share and just up Are you asking? I mean, but I mean do you have the functionality to do Diffie-Hallman? Without ever reconstructing we don't we haven't implemented anything in the film I think other people implemented the because I mean that's that's what would really interest me because I mean once you Have recovered the share. It's gone. 
I mean if if you're a malicious Member gets this in the combined share or you assume that the combination takes place somewhere else I mean in your model are you assuming where we're doing NPC So we're take everyone would be would share their inputs and you're computing a circuit as additions or multiplications You can leave the the result secret shared without reconstructing So you can leave it without reconstructing. I mean so when you say reconstruct you reconstructing the result of the computation Yes, you're not saying you reconstructing the shami a secret. No, no, no, no, no, no, no, yes Reconstructing the result of the computation. Okay, yeah, and then I'm happy