So I'm going to talk about standardization from the point of view of someone who actually goes and implements and verifies these protocols. I'm a cryptographer, and I decided to join the dark side: I went into private industry, and I work at Unbound Tech. What does Unbound Tech do? Our goal is to prevent keys from being stolen and to prevent keys from being misused. If you're an adversary, you don't need to steal the key to do what you want: if you want to sign malware, you just need to be able to use the key once; if you want to steal millions of bitcoins, you only need to sign one transaction. So for us, it's important to protect the keys.

How do we protect the keys? We have a very short video explaining it: "The key is split into multiple random shares placed on different machines. These shares never get united, even when they're in use." I have to thank the marketing department for this, but it is essentially what we do.

So what is this talk going to be about? As cryptographers, we want to imagine the worst case. Drawing on my grandfather's experience, I'm going to explain what can go wrong with standardization, and we're going to talk about lessons we can learn from civil engineering. Then I'm going to ask what good standardization can do, giving OT and OT extension as the example (I'll focus on OT extension) to argue that standardization really is important, and why. And then I'm going to talk about how we can make standards nicer in terms of descriptiveness: how we can make it very easy to implement a standard and to verify that the standard is met.

Civil engineering is about designing buildings that are robust, that won't collapse because, say, a strong wind came along. Cryptographers design protocols that are robust. So it's a very similar idea, and it's very easy to learn from other people's mistakes, because it's free: they've made the mistakes, and we don't have to suffer. If we can avoid the mistakes they've made and take lessons from what happened to them, we can use their experience for our gain.

So let me present Dr. Knoll. He was a civil engineer on the CN Tower and the Louvre Pyramid, two structures I think are really cool. He's been a partner at his firm for longer than I've existed. He's an insurance investigator: when there's an accident, he sometimes goes and explains what happened and writes detailed reports. So he's an expert, and he has also served on standardization committees. And he's my grandfather, so I get to hear his stories and his views on what happens in standardization and why it has been a negative influence on his field, and I'm going to describe those things to you.

So, his view on standardization. I'll tell you how it went: I sent him an email saying I had gone to the NIST standardization workshop, which was really nice, I really enjoyed it, and that I had been part of a panel. And he sent me back this response: "Don't go too fast, don't go too far, because in my field standardization has reached the point where it hinders new design developments and innovation, and we definitely do not want that to happen." And in his opinion there are a couple of reasons why it went wrong.
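Before getting to his objections, a quick aside on the video's one-liner. "Split into multiple random shares" usually means additive secret sharing, and a minimal XOR-based sketch of that idea follows. This is purely illustrative and not Unbound Tech's actual product: real deployments use threshold protocols that compute with the shares without ever reassembling the key.

```python
import secrets

def split_key(key: bytes, n_shares: int) -> list[bytes]:
    """Split a key into n XOR-shares; any n-1 of them reveal nothing."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n_shares - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))  # last = key XOR s1 XOR ...
    shares.append(last)
    return shares

key = secrets.token_bytes(32)
shares = split_key(key, 3)
recombined = bytes(a ^ b ^ c for a, b, c in zip(*shares))
assert recombined == key  # only all shares together recover the key
```

The point of the product pitch is that this recombination step never actually happens: signing or decryption is done by an MPC protocol running directly on the shares. Now, back to his objections.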
Well, one of the things is that standards became too complicated and cumbersome. The standards became too big: what used to be a couple of books became an entire library of standards. So you want to keep your standards in check and know what you really want. And one of the problems is that if you're always focused on trying to meet the standards, you might not have a good view of the overall problem; you might focus too much on fulfilling the standard.

So I'm going to discuss problems and solutions. One problem is that standards can be very complicated. As the previous speaker said, we should have full-fledged implementations. Why? Because if you have to implement your full standard, at some point you're going to realize that something is way too complicated, and that allows you to reduce the complexity of the standard. And of course test vectors are very helpful, because then you know whether you've done it correctly.

Then there's the problem that standards can hinder new designs. If I standardize a primitive and then somebody comes up with a new algorithm that improves on it, but it won't get certified because it uses a new algorithm, even though you provide a proof and a formal description, that's a negative effect of standardization. So it's better if we build standards in a modular way, so that when there are pieces you can really improve, you're not stuck using the old primitives or the old constructions.

And there's another problem in civil engineering: when standards bodies meet to create standards, there are often professors there who want to include their own work in the standards but have never built a building in their lives. If I've never built a building in my life and I want to add something to the standard, it can be counterproductive for the people who actually build buildings, because I don't have their experience. So I would recommend that people who are going to write standards, if they have the chance, do internships at private companies to see how these protocols are actually implemented in a product. An excellent time to do this would be, for example, during a government shutdown, because then they're free to do something else.

So the next question is: how can standardization help? I'm going to talk about OT and OT extension. Oblivious transfer is one of the most important primitives in secure computation; there are so many protocols that build upon it. And OT extension is a way of getting each individual oblivious transfer very cheaply. Oblivious transfer is a primitive where the sender inputs two messages and the receiver gets exactly one of the two: the receiver gets no information about the other message, and the sender does not learn which message the receiver chose. (I'll give a small sketch of these semantics in a moment.)

So how would you implement OT extension, as somebody who wants to implement it today? Well, the first step is that you go to Google Scholar and type in "OT extension". You look at all the relevant papers, follow a few links, and then see which one is the best. And then you check: okay, this one looks like the best.
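Before going further with that search: to pin down the OT semantics defined above, here is a minimal sketch. It's my own illustration rather than anything from the papers discussed in this talk, with 1-out-of-2 OT written as an idealized functionality and a single trusted function standing in for a real protocol.

```python
def ot_functionality(m0: bytes, m1: bytes, choice_bit: int) -> bytes:
    """Idealized 1-out-of-2 oblivious transfer.

    The sender inputs two messages; the receiver inputs a choice bit and
    learns exactly one message. In a secure protocol realizing this, the
    sender learns nothing about choice_bit and the receiver learns
    nothing about the other message.
    """
    assert choice_bit in (0, 1)
    return m1 if choice_bit == 1 else m0

# The receiver asks for message 1 and never sees message 0.
assert ot_functionality(b"msg zero", b"msg one", 1) == b"msg one"
```

Any secure OT protocol has to realize exactly this input/output behavior; OT extension is then about producing many such transfers cheaply from a small number of expensive base OTs. Back to our implementer's literature search.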
Well, okay, but is it from a reputable conference? So let's see. The second result is the candidate; let's see if it's from a good conference. It's from CRYPTO 2015, I'm pretty sure. You'd think that's a good conference? So you feel comfortable: okay, I've done all these steps, I've found an OT extension paper, it's from a really good conference, everything's rosy, and I can just implement it.

Well, when things can go wrong, they sometimes do. In this case, two papers recently came out that both show that most OT extension protocols are insecure. It's a bit more technical than that: there are ways to instantiate these protocols that are secure, and ways to instantiate them that are not. But it's still a major issue. And one lesson you can learn is that papers at these conferences can be wrong sometimes, and standardization can help ensure that our protocols are secure, rather than relying only on the conference peer-review system, which operates under a lot of pressure.

Then there are extensions of that work, like correlated OT. It's a variant of OT where, instead of the receiver getting one of two messages chosen by the sender, a fixed value is selected first: a correlation, this delta value, we call it. Then each time the parties do an oblivious transfer, a message x is given to the sender, and the receiver receives either that message x or x combined with the correlation delta. (A sketch of this variant follows below.) This is also a very useful primitive, for garbling for example. But if you look at the best such protocol currently out there, there is no formal description of the protocol and no formal proof. Considering the previous episode, where published protocols turned out to have issues, a protocol with neither a proof nor a formal description is very dangerous territory. Standardizing it, and making sure these primitives are secure and are good building blocks, would go a long way toward making sure that the protocols we build from them are secure.

So how can standardization help? It gives us a good place to look: we just read the standard, and we're happy; we don't have to verify the rest of the literature. And I want to emphasize that there are many protocols in the literature that are just wrong. You read the paper and you're happy: it's going to be efficient. And you'd have to read many more papers to find out that it's wrong. For example, for the papers showing that the original OT extension protocols have issues, you have to go through seven pages of results on Google Scholar before you find them, and then you have to read each paper very carefully. That's very time-consuming for an implementer, who might not have much time. So one thing that would be good is to standardize OT and OT extension. That would be a great favor to the community, I think.

So now the question is how to improve standards. Papers are good, but papers are written for academia, and that doesn't necessarily make them easy to implement. MPC in general is challenging to implement, even for experts. So how do we make papers easier to implement and understand? Okay, let's start with something we know has been done that's really good: protocol decomposition is wonderful. I can say "just use oblivious transfer here"; I don't care which oblivious transfer you are using.
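To make the correlated-OT variant described above concrete, here is a hedged sketch, again my own illustration written as an idealized functionality; the class name and the byte-length parameter are made up for the example.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

class CorrelatedOTFunctionality:
    """Idealized correlated OT: a correlation delta is fixed once; on each
    transfer the sender gets a fresh random x, and the receiver gets
    either x or x XOR delta, depending on its choice bit."""

    def __init__(self, length: int = 16):
        self.length = length
        self.delta = secrets.token_bytes(length)  # fixed once, known to the sender

    def transfer(self, choice_bit: int) -> tuple[bytes, bytes]:
        """Return (sender_output, receiver_output) for one transfer."""
        x = secrets.token_bytes(self.length)
        receiver_out = xor_bytes(x, self.delta) if choice_bit == 1 else x
        return x, receiver_out

cot = CorrelatedOTFunctionality()
x, y = cot.transfer(1)
assert xor_bytes(x, y) == cot.delta  # with choice bit 1, the outputs differ by delta
```

An interface like this is also what decomposition buys you: a higher-level protocol, a garbling scheme say, can be written against `transfer` without caring which correlated-OT construction sits behind it.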
This makes it so much easier, because I can choose the implementation I want.

Okay, but to see why papers aren't necessarily written in the best way for implementation, we have to take a historical perspective. Excluding the participants of TPMPC 2019, who know the answer: does anyone know what this sequence is? I'll give you a clue: it's related to CRYPTO. Come on, you seem to know what it is. Something about submissions. Okay, somebody already knows: it's the submission page limit at CRYPTO. This sequence is the submission page limit at CRYPTO over the years. Now, if you ask me: twelve pages? How can you write an entire paper in twelve pages? That's insane. So we wrote papers in a super-condensed manner, because we had twelve pages, because we were writing as if for a book, which at the time it was. And we also used a very funny line that's very painful when you're trying to figure something out: "This will be shown in the full paper." Considering this history, I think we can do better. Right now we still write papers in the style of writing for a book, which is what we did before, but nowadays most people read papers in PDF format.

So I have a few rules that should make papers easier to read, easier to implement, and easier to verify an implementation against.

The first rule is simple. It's a convention that works really nicely for me; you might not agree. Use a distinct symbol for assignment, and keep "=" for equality checks. That makes it immediately clear that "equals" means equality.

This one is better. Typically we write "player P1 computes this, player P2 computes this". That saves lines in the protocol description, which made sense when everything had to be compressed into twelve pages; but now we have more pages, so we can make it clearer. How do we divide protocols? By player ID or by rounds, with one heading per player or round. That is much clearer, I think, with much less noise.

So what's the next thing you can do? Remove useless keywords. "Locally computes": if you know who performs the operation, it's completely pointless. Then you end up with something nicely compact.

Two more rules. First, "sample" can often be replaced by a symbol. Second, one action per line. Protocol descriptions in the literature often put multiple things on one line, but when you're implementing and verifying, what happens is that you implement the first part of the line, then you have to go back and reread the entire line to get to the second thing. Sometimes it's even worse: you have to jump to the end of the line, because the description says something like "compute f(x, y), where y is ...", and you end up jumping around, which makes it harder to verify and harder to implement. So: one thing per line. You often reread the same line when it does several things, the order doesn't match the code, and you jump back and forth; it just makes everything more difficult to verify and implement.

Then there's a rule for something I often see written out in text: "check that this condition is true, and if not, abort". Let's just use a keyword for that; in this case you'd just say "verify". Give it defined semantics once, and then you don't have to worry about it.
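Here is a toy example of what these rules buy you; it's my own illustration, not the protocol from the slides. A hash-based commitment opening is written so that each protocol step maps to exactly one line of code, with the "abort" keyword becoming an exception.

```python
import hashlib
import secrets

class Abort(Exception):
    """Mirrors an explicit 'abort' keyword in the protocol description."""

# P1 (committer):
r = secrets.token_bytes(16)               # r <-$ {0,1}^128   ("sample" as a symbol)
m = b"agreed-upon message"
c = hashlib.sha256(r + m).digest()        # c <- H(r || m)    (assignment, not "=")
# ... P1 sends c, and later opens the commitment by sending (r, m) ...

# P2 (verifier), one action per line:
c_check = hashlib.sha256(r + m).digest()  # c_check <- H(r || m)
if c_check != c:                          # abort if c_check != c
    raise Abort("commitment opening failed")
```

Each commented step corresponds to one line of the cleaned-up description, so implementing and verifying it is a single top-to-bottom pass.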
So this is the original protocol, and this is, I think, the cleaner and nicer version that you can actually produce. I think it's much easier to understand this one than to read the original, and when you're implementing, you just carry out each of these operations one by one, very easily.

And there's another thing. If you look at ideal functionalities in the UC setting, they're often very heavy in terms of writing. You often see "upon receiving this message, execute this, compute this value, and then send this value to this party". I would recommend coming up with keywords that carry the given semantics. If I take a functionality like this one, which looks quite heavy, and apply these rules, I end up with something that I think looks much nicer. You can verify each of the conditions, and it's pretty much exactly how we would code it; actually, exactly how we would code it. It's much nicer to look at and to understand, versus the original, where I'd go: okay, I'm going to add this line; okay, let me reread the whole part again; okay, now I need to add this line; now I need to add that; going back and forth. This is much nicer.

I'm a bit ahead of time, but that's fine; we can have lots of questions, and you'll be happy to finish early. So, there are a few things to take away from this presentation. First, standards can be harmful. It's important to know that, yes, you can do harm by standardizing things, and by standardizing them in the wrong way. That's something to keep in mind. But standardization can also give you a safety net, a clear and nice way to make sure that your protocol is secure. Because for MPC to become generally used, we need it to be robust; we need to know that the implementations are really solid. Try using some of these rules to simplify your protocol descriptions. If you want, you can send one to me, and I'd be happy to simplify the description for you; you can contact me at samuel.ranellucci at Unbound Tech. And have fun, come see the dark side: come to private industry and see what it's like. Thank you very much for the chance to present.

[Speaker] What a surprise!

[Audience] Hi, thanks for the presentation. I like the rationale in the presentation, but I think there's one element missing, which is: what is the reason for standardization? And I think there are many answers. One might be that, in a particular place, you're trying to standardize something to be the selected algorithm that is going to be used for everything from now on. But I think you've mixed into that the goal of standardizing for the purpose of having a better specification and better peer review. And if we isolate that goal, then there's actually no inherent problem in standardizing everything we want. We could just call it "papers, version 2.0". You could even have a conference where people bring papers that were already accepted, and now they specify them in a much better way, with much better peer review. So the comment I want to make is just that I think it's interesting to separate the goal of standardization as a way of improving specification detail from the potential role of standardization for the purpose of selecting the one algorithm that we want to use, at the expense of all the others. So it's more a comment than a question.

[Speaker] Yeah. Okay, so, on selecting the algorithm:
Sometimes there are so many trade-offs between algorithms. There's simplicity, there's efficiency. If you choose an algorithm that's simpler but maybe less efficient, it's easier to implement, but then there are people who will decide that they really need the efficiency. So yes, these choices can be interesting. You're right, there are different perspectives: am I an expert cryptographer, or am I somebody who just wants to use a particular building block? And it's a hard compromise; there are hard compromises to be made in deciding what to do. But having access to something that lets people build on the right thing, and be confident that they're using the right thing, would be good. And if they want to extend the solution... okay, so it depends: are you trying to standardize something that is narrow, or something that is general?

[Audience] In my role as the one asking, neither. Or both, maybe. I'm just saying that a lot of papers are not well specified, and they should have descriptions that are better specified. That goal can, to a certain extent, be modularized away from the goal of standardizing something to be the elected protocol that serves as the ultimate oblivious transfer, to be used and enforced.

[Speaker] I think the difficulty with that is that when academics publish papers, they're okay with making small compromises. They're okay with saying, "yes, this looks fine". Whereas if you want to use these protocols in industry, saying "yeah, I think this is fine" is very dangerous. And I think this is where standardization can help: really making sure that if I use a sword, it's not going to (whoops) crack. That's part of the reason why I like standardization: to make sure I have a sword that's going to stay in one piece.

[Audience] About your presentation, I have one concrete question. I appreciate you laying out all those rules to make papers more readable, more understandable. I'm just curious, because it looks to me like you're making a suggestion about the syntax of protocol papers. For the syntax to be precise, to be specific, I think you need formal models of how to express these things. Did you have any such formal model in mind when you were listing your rules?

[Speaker] Are you talking about formal specifications?

[Audience] Yeah. In UC papers, for example, we use interactive Turing machines. But when you actually write down all your rules or specifications, like in a programming language, you need some formal model. Do you have one?

[Speaker] I think at this level of specification it needs to be high-level enough that you're not bogged down by details like how to send a message to the other parties.

[Audience] Right, not those details, but I mean, at least you need some kind of semantics.

[Speaker] Are you talking about formal semantics?

[Audience] Yes.

[Speaker] No, I haven't looked at formal semantics, but it would be an interesting thing; actually giving formal semantics to these rules would be very interesting.

[Audience] Okay, thank you.

[Audience] So, two comments or questions. One goes in the same direction as the previous questioner: it would make a lot more sense if this were executable.
For most lower-level primitives, we now have a rule that you have to supply an implementation, so it's unambiguous what a certain line means; whereas here you've just changed the syntax, and it doesn't even get to what I would say is better practice, like having the different parties visually separated. When you look at, for instance, the classic crypto protocol diagrams, the normal ones where you draw an arrow for each message sent and so on, I find those a lot clearer than much of what I've seen in recent years. But that was a comment. If you plan on going in that direction, I would rather ask you to specify something which can be executed than something which still just sits on paper. So go one step further: make it implementable, have something behind it, be it Python or some script.

[Speaker] Actually, yeah, first of all, that's an interesting research question: whether you can take something like what I wrote, turn it into a higher-level description like I did, and then turn that into something executable.

[Audience] The next talk is about that.

[Speaker] Yeah, but then, okay, I will say there's a compromise here. There are solutions that do that, I think, implementations that do that. But there's a trade-off between doing that and ending up with problems where, if you want to debug, you're working in an intermediate language and you don't necessarily have access to its compiler; versus, if you write the code in C++, you actually have your compiler and it's easier to debug. And maybe, although I'm not a PL person, there are ways to do exactly what you said and still make it very easy to debug.

[Audience] And the other thing is that I don't have as much confidence in standards as you seem to. When you say you would really like to see OT extension standardized because it makes things more secure: to me, once there is a standard, it kind of closes the door on improvement. So maybe I'm more with your grandfather there in seeing the dangers of standardization. And if you have something which doesn't need interoperability, but is just something within your company, then why would you want a standard, rather than just asking for code review or getting more eyes on it? In particular with OT, we still see developments, we still see stuff getting broken; so to me, it's not ready for standardization.

[Speaker] I think one of the difficulties for us is that we don't have an infinite amount of time. We don't have a large team of twenty cryptographers who can look at all the issues, keep up to date on all the papers, and verify all the latest improvements. Sometimes the literature is kind of dangerous, and it's difficult to go through it and understand everything.

[Audience] I agree. But if some higher entity says, hey, here's a standard, it does give you cover; you can say, oh, they told me to implement it; but it doesn't make your product magically secure. And unfortunately, history has shown that there is not much attention from academics toward standards. I don't know whether you went to WACC yesterday, but there were lots of standards there. Basically every presentation showed at least one standardized and commercially deployed product getting broken and smashed, often enough years later.
These standards have been sitting around, and they just don't get the attention of academics, because standards are typically behind paywalls. So to me it's like: if ISO or IEEE makes a standard somewhere, I won't look, because I'm not going to pay for it. And meanwhile you might think, hey, it's been standardized by those people, so maybe it's better.

[Speaker] Yes, that's...

[Audience] So I see the risk that a standardization body will convey to you, "hey, this is good to implement", while actually nobody has looked at it.

[Speaker] Yeah, that's a difficult thing. Thank you for...

[Audience] I mean, this is not a comment about NIST; everything you're doing with competitions is actually getting lots of attention, and that's a good thing, I think. But otherwise, just saying, hey, could somebody please standardize this so that I at least have somebody to blame, doesn't make your product more secure. Sorry, I'm probably taking too much time.

[Speaker] No, that's fine.