So, hello everybody, thanks for coming here after the run session. My voice didn't survive, so bear with me, but I know that you survived, and that's nice. I'm going to talk to you about a bunch of nice properties that we can get in UC commitments, and the new techniques that we used to get there.

First of all, I want to give you an introduction: what are commitments, what are they good for, and what are the properties we're looking at? Then I'll talk a little bit about the related works, a long line of research that led to the result we have here today; then our results, the general framework that we used to achieve them, and our actual construction. I'm going to skip the proof, because it takes a lot of time to talk about that. Then I'm going to say a little bit about the concrete performance and the open problems we still have in this line of work.

So first of all, what are commitment schemes? You can think of a commitment as a box where you can put a message. You lock the box and send it from Alice to Bob. Now Bob has the box with the message inside, but Bob doesn't know what the message inside the box is. Later on, Alice can send Bob the keys which open the box, which allow Bob to get the message out. Bob is also guaranteed that Alice didn't change the message between the time when she sealed the box and the time when she sent him the key. We call these the hiding property, that Bob cannot see the message, and the binding property, that Alice cannot change the message after she sends the box.

Now, why do we care about these things? Commitments are useful in a number of applications in different cryptographic protocols. For example, multi-party computation. You might have heard a lot about that already, but just as a quick introduction, take the millionaires' problem: you have two rich people who want to know who's the richest,
but they don't want to tell each other how much money they actually have. So they can run a protocol that tells them who's the richest, but doesn't reveal how much money each of them has. That's one of the applications. Multi-party computation can be used in a number of scenarios, such as the sugar beet auction, which I have to mention of course since that was work done in Denmark. We also have some new results showing that we can use this kind of commitments to get very efficient garbled circuits: not adaptively secure like the scheme in the previous talk, but statically secure plain Yao, with very good efficiency. So this is a nice primitive, and we have good applications for it.

We also talk about universal composability. We get universally composable commitments, and why is that good? Well, with universal composability you can be sure that your protocol can be executed in parallel with copies of itself, or with other protocols, without losing security. That's a property you especially want for schemes like commitments, which are an important building block of other protocols. We also know that in the UC framework, commitments are actually complete, meaning that if you have commitments you can compute any other well-behaved, let's say, computable functionality. So they're very powerful primitives, and if you get them with good efficiency, that's nice.
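To make the box analogy concrete, here is a minimal hash-based commitment sketch; this is a standard folklore construction shown purely for illustration, not the UC scheme from this talk (a plain hash commitment is not UC-secure):

```python
import hashlib
import secrets

def commit(message: bytes):
    # Alice locks the box: she hashes a fresh random key together with
    # the message and sends only the digest (the locked box) to Bob.
    key = secrets.token_bytes(32)
    box = hashlib.sha256(key + message).digest()
    return box, key  # box goes to Bob now; key is revealed at opening time

def open_box(box: bytes, key: bytes, message: bytes) -> bool:
    # Bob checks that the revealed key and message match the box he holds.
    return hashlib.sha256(key + message).digest() == box

box, key = commit(b"42 million")
assert open_box(box, key, b"42 million")      # honest opening succeeds
assert not open_box(box, key, b"43 million")  # binding: the message is fixed
```

Hiding comes from the random key mixed into the hash; binding comes from collision resistance, since Alice cannot find a second key and message pair matching the box she already sent.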
You have good applications. We also know that in UC you need setup assumptions if you want to do any interesting two-party or multi-party protocol. So you need something that was there in the beginning, like a common reference string, a common random string, a public key registration facility, and so on. In our case we will use OT, as I'm going to tell you later, but just so you know, that's something you need; it's not like we're cheating.

Now a bit about the properties. The title is really big: rate one, linear time, additively homomorphic, and so on. So what is all that stuff? First of all, homomorphism. We are going to talk about additive homomorphism, which is the following. Imagine you have two boxes, one with a yellow message and one with a blue message, and you want to combine them so that you get a box with the combined message inside. You can do that if you have an additively homomorphic commitment. It means that you can give Bob both boxes with different messages, and he can add those boxes in a way that he obtains a box with the added message inside. Later on you can send him a combination of the opening information, let's say the keys added together, in a way that he can open this box and obtain only the addition of both messages, but not the original messages.

Now, rate: we talk about rate one, so what does that mean? Let's say you have a message that's n bits long and a commitment that's L bits long. The rate here is the ratio between the n bits and the L bits. So if you have a small message and a very long commitment, we get a bad, low rate, because we have a huge communication overhead: we blow up the commitment size compared to the message size. Now consider the case where you have a message,
let's say n bits, and a commitment of L bits that is close to the message size: you get a rate approaching one, which is a good, high rate, meaning you don't have a large communication overhead.

Now I'll start with the related work. Pre-2014, what did we have in terms of UC commitments? We had the work by Yehuda Lindell at first, and then an improvement on that by Blazy and others, where they build efficient UC commitments from DDH. They use a common reference string as a setup assumption. But there are some problems: they have high asymptotic computation and communication complexity. Why? Because they need to do a lot of modular exponentiations, and they compute them for every message they commit to. They need zero-knowledge proofs, which require several rounds, unless you're willing to use a random oracle, which we're not dealing with here. So you need several rounds for proving the statements; they have lots of exponentiations, and they have lots of rounds, implied by the zero-knowledge protocols. We want to get rid of all that: we don't want the high computation and communication complexity, and we don't want the high round complexity. Now, how do we do that?

This is our general framework. As a setup assumption we have OT. We assume a setup phase where we do a fixed number of OTs only, and after that we can do an unbounded (of course, polynomial) number of commitments that only use primitives in Minicrypt. After the initial OTs, which we can really minimize using OT extension (let's say we do 200 OTs), we can just use error-correcting codes, pseudorandom generators and hash functions to commit to an arbitrary number of messages. This is nice, because the complexity of those 200 OTs gets amortized over the number of messages that we commit to. Now, back to who did that?
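As a toy illustration of the two notions just defined, additive homomorphism and rate, here is a hiding-only XOR sketch (purely illustrative; unlike a real commitment, it is not binding):

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Toy "commitment" over GF(2): box = message XOR key (hiding only).
m1, m2 = bytes([0x0F] * 8), bytes([0xF0] * 8)
k1, k2 = secrets.token_bytes(8), secrets.token_bytes(8)
box1, box2 = xor(m1, k1), xor(m2, k2)

# Bob adds the two boxes himself; Alice sends only the combined key.
combined_box, combined_key = xor(box1, box2), xor(k1, k2)

# Bob learns the sum of the messages, but neither original message.
assert xor(combined_box, combined_key) == xor(m1, m2)

# Rate = message bits / commitment bits; each 8-byte message yields an
# 8-byte box, so this toy scheme has rate exactly 1.
assert len(m1) / len(box1) == 1.0
```

Because XOR is addition in GF(2), adding the boxes and adding the keys commute, which is exactly the additive homomorphism property described above.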
This started in 2014 with the work by Garay, Ishai, Kumaresan and Wadia at Eurocrypt, where they showed how to get rate one using more or less this general framework. But they didn't get linear computational complexity, nor homomorphism. Then, in the same year, we (and by we I mean me, Ivan Damgård, Irene Giacomelli and Jesper Buus Nielsen) showed how to get linear computational complexity for the receiver, and fully homomorphic commitments, meaning that we can compute both additions and multiplications of commitments. But we lost rate one, so you see we have a compromise here.

On a related note, Brandão showed this year at PKC how to construct results similar to Garay et al., where you have rate one but not the other properties, using different techniques. Garay et al. used VSS, OTs and pseudorandom generators to construct this primitive, while Brandão started from a standalone equivocal commitment and a standalone extractable commitment, bound those together with collision-resistant hash functions, and got efficiency through pseudorandom generators.

Then, last year, we showed how to get full linear complexity for both the sender and the receiver, but still no rate one, and we got only additive homomorphism, not full homomorphism; at least we got good computational complexity. Then at this year's TCC, Jesper Buus Nielsen, Tore Frederiksen, Roberto Trifiletti and Thomas Jakobsen showed how to get rate one and additive homomorphism, but they lost linear computational complexity.

So we're in a conundrum: can you actually get all these properties at once, in one single scheme? The scheme we had last year at PKC doesn't get rate one; we get the other properties, but not rate one. The scheme by Frederiksen and others at this year's TCC gets rate one and homomorphism, but they have bad computational complexity. Can we actually get both? Yes, we can.
Let's make commitments great again. We can combine the same general structure as our result from last year with a new code, actually the first rate-one, linear-time encodable code, plus a nice new technique for checking that many strings are actually codewords, and we get everything.

So on the theoretical side, what do we get? We get optimal communication complexity. Amortized communication complexity, let me be clear, because we need the initial setup-phase OTs, but after that the complexity gets amortized over the many commitments that you can make, and we get rate one asymptotically. We also get optimal computational complexity: the commitment and the opening phases are both linear-time computable. And we get additive homomorphism. Why do we put an asterisk there? Because we can actually get multiplicative homomorphism, although unfortunately when we do that we lose the other nice properties, and we have a very good reason for that: there is a basic fact, an upper bound from coding theory, that we can use to show that with our techniques you won't get multiplicative homomorphism together with rate one or good computational complexity. We would need radically different techniques to do that. That's one of the open problems I'm going to talk about later. We are also round optimal: after the setup phase, after the initial OTs, the sender only needs one round to commit and one round to open, so that's also nice.

Now, on the practical side, you can actually implement this. I mean, from the title it reads as a very theoretical paper, but you can actually implement it very efficiently, and I'm going to talk more about that later. The building blocks are readily available: we can use any PRG, based on AES-NI and so on, and error-correcting codes that let you implement this efficiently, although without the asymptotic properties, are readily available.
For example, they're in the Linux kernel.

Now, if you compare with the previous results, our result gets all the properties: we get linear computational complexity, we get rate one, so optimal communication complexity, and we get additive homomorphism. One cool thing that I hope will make you look at the paper: we can actually get rate above one if we're committing to random messages. For random messages we do a funny trick and actually reach a rate above one, which seems funny, but we have some words about that in the paper, so please look at it.

Now, our building blocks. We need error-correcting codes to get our asymptotic results. We built the first rate-one, linear-time encodable codes; we had rate-one codes and linear-time encodable codes before, but not codes that have both properties at once. We built the first such codes from expander graphs. I'm not going to elaborate much on that, because it's a complex matter, but you can of course look at the paper. And we need oblivious transfer; any UC OT will do, so you can choose your assumptions. These are basically the only public-key assumptions we need: if you like DDH, you can instantiate our scheme from DDH, and if you want post-quantum security, you can instantiate our scheme from coding assumptions or lattice-based assumptions. We need a pseudorandom generator; to get our asymptotic results, we need a linear-time pseudorandom generator.
Fortunately, such PRGs exist, due to Vadhan and others. And we need an almost universal hash function that is both linear and linear-time computable, which we also build in our paper, the first such function, building on some existing ideas.

Now a bit about the primitives. OT you're probably most familiar with: a sender puts in two messages, and a receiver gets only one of them; the receiver learns nothing about the other message, and the sender doesn't learn which one was received.

We start with a setup phase that is based on OT. We take several random seeds for the pseudorandom generator and put them into OTs: for every pair of seeds, we put them into an OT box. The receiver puts in random choice bits, and he gets half of these seeds, one seed from each pair, in random positions.

Next we proceed to a commit phase, where we commit to a batch of strings. Let me just be clear that you don't get to do only one of these batch commit phases after the setup: you can do an unbounded number of them, but of course I'm only describing one. In this phase you start by stretching each of the seeds with the PRG into big strings. These big strings are composed of some random data that we want to stay random anyway, and some trailing data that will eventually become error-correction parity bits. So you first get these random strings that you stretched from the seeds. That's one of the reasons why we actually get rate one: you don't have to send all this information at the beginning; you just send the small seeds, whose cost gets amortized later. Now, for each of these big strings (you see here I'm writing R1_0, R1_1, R2_0, R2_1), I have a pair of strings for every pair of seeds. Now, for each of these pairs, what do you do?
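The setup and stretching steps just described might be sketched as follows, assuming idealized OT boxes and using SHA-256 in counter mode as a stand-in for the linear-time PRG (both are illustrative simplifications, not the actual building blocks):

```python
import hashlib
import secrets

def prg(seed: bytes, out_len: int) -> bytes:
    # Stand-in PRG: SHA-256 in counter mode (illustrative only).
    out = b""
    counter = 0
    while len(out) < out_len:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:out_len]

NUM_PAIRS = 4  # the fixed number of OTs done once in the setup phase

# The sender samples a pair of PRG seeds for every OT instance.
seed_pairs = [(secrets.token_bytes(16), secrets.token_bytes(16))
              for _ in range(NUM_PAIRS)]

# The receiver picks random choice bits; each ideal OT hands him exactly
# one seed per pair, and he stays oblivious to the other one.
choice_bits = [secrets.randbits(1) for _ in range(NUM_PAIRS)]
receiver_seeds = [pair[b] for pair, b in zip(seed_pairs, choice_bits)]

# Both sides stretch their seeds into long strings for the commit phase.
sender_strings = [(prg(s0, 64), prg(s1, 64)) for s0, s1 in seed_pairs]
receiver_strings = [prg(s, 64) for s in receiver_seeds]

# The receiver's stretched string always matches one of the sender's pair.
for (r0, r1), b, r in zip(sender_strings, choice_bits, receiver_strings):
    assert r == (r0 if b == 0 else r1)
```

The point of stretching is exactly the amortization mentioned above: only the short seeds cross the OTs, while arbitrarily long strings are derived locally.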
You XOR them together, or more generally you sum them, because this actually works over any field. Then you get a final string, which is your final random data plus your final "almost parity" bits. Now, what do you do with this? You still don't have a codeword, because your parity bits are still just random data. So you compute the actual parity bits for the random string at the beginning of your long string, and build the final codeword. Let's say you have a code in systematic form, so you have your random data and then the actual parity bits, the error-correction data. Then you compute correction bits, let's call them that, which, when XORed with the random trailing data, give the actual parity bits. You compute all of these.

Now, as the sender, you end up with this information: you have several codewords, or at least they should be codewords if you're honest, and you have several correction strings which, when XORed onto the random trailing bits that were derived first, result in the actual parity bits.

Now here is where the magic comes in, one of the main techniques in this paper: we show how to use an almost universal linear hash function to prove that these strings are actually codewords. Because, you see, an adversary could just send you random stuff which, when XORed together later during homomorphic operations, opens up to completely arbitrary messages, unrelated to the actual messages he committed to. But we show a nice technique where you can use almost universal linear hash functions to show that these strings are actually codewords, while only sending very little information, namely the results of the hash functions. So what do we do? We take this matrix of supposed codewords, and we hash each row with the almost universal linear hash function.
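Here is a toy sketch of these commit-phase mechanics, with a trivial systematic code (a single XOR parity bit) and a random GF(2)-linear map standing in for the rate-one code and the almost universal linear hash of the actual scheme (both are illustrative stand-ins):

```python
import secrets

def xor_bits(bits):
    acc = 0
    for b in bits:
        acc ^= b
    return acc

def encode(data_bits):
    # Trivial systematic linear code: data followed by one XOR parity bit.
    return data_bits + [xor_bits(data_bits)]

def linear_hash(matrix, bits):
    # GF(2)-linear hash: each output bit is the XOR of a subset of inputs.
    return [xor_bits(b for b, sel in zip(bits, row) if sel) for row in matrix]

N = 8  # data bits per committed string

# Sender's PRG output: random data bits plus a random "almost parity" bit.
data = [secrets.randbits(1) for _ in range(N)]
random_parity = secrets.randbits(1)

# Correction bit: XORing it onto the random parity yields the true parity.
true_parity = encode(data)[-1]
correction = random_parity ^ true_parity
assert random_parity ^ correction == true_parity

# Linearity is what makes the hash check work on sums of codewords:
M = [[secrets.randbits(1) for _ in range(N + 1)] for _ in range(3)]
c1 = encode(data)
c2 = encode([secrets.randbits(1) for _ in range(N)])
sum_word = [a ^ b for a, b in zip(c1, c2)]
h1, h2 = linear_hash(M, c1), linear_hash(M, c2)
assert linear_hash(M, sum_word) == [a ^ b for a, b in zip(h1, h2)]
```

The last assertion is the key point: because both the code and the hash are linear, the hash of a sum equals the sum of the hashes, which is what lets the receiver check homomorphic combinations of commitments.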
We get the result for each row and, of course, send it to the receiver. So the receiver has received the correction bits, which turn the random trailing data he received at first into proper parity bits, and the hash values.

Why is this good? First, notice that we are actually sending less information than the size of the messages we're committing to: we are committing to these big R's, but we're only sending the little correction bits. So here we're actually getting rate above one; that's why we can get rate above one for random messages, if random messages are all you need.

Now the receiver gets this, and the receiver does some checking; that's part of the technique as well. The receiver only has some of the R values, one from each pair; he cannot reconstruct the whole committed string, he only has half of it. But we show that it is enough for him to have half of it, such that when he computes the hash values over the columns he has, which are only half of them, he can still check, with high probability, later on when the opening happens, that whatever the sender committed to was actually a proper codeword, a proper commitment. To get the actual parity bits he needs to add the correction bits, but you see that everything he's doing is computing an encoding of a linear-time encodable code and doing additions, all linear-time operations, so we get linear complexity.

All right, so what happens here? We are committed to random messages; we're committed to these R's. Well, not these R's, but the proper R's that were computed by XORing each pair. But actually, you don't want to just commit to random stuff all the time; you might want to commit to arbitrary messages. So what do you do? Standard trick:
You use the random message you committed to as a one-time-pad key to encrypt your actual arbitrary message. Then you get a commitment of the same size as the message. Again, on the note of our rate one: you've sent the correction bits, and now you send the padded message, but the correction bits can be really small because we have rate-one codes. That's why we keep rate one. In the opening, we basically check the error-correction parity bits, and then we subtract the random message from this small commitment and get the message. It's a standard trick, it's linear time, and we keep rate one by doing it.

Now, what can we do in practice? I'm here to tell you these commitments are practical. You like OT extensions, so you should like our commitments: we get basically the same performance as OT extension. For 50,000 commitments, we can do commitments in 29 microseconds and openings in 0.2. We only need BCH codes, which are implemented in external libraries, and PRGs from AES-NI. Of course, you have a trade-off: we need the OTs, and the setup for these 50,000 commitments takes 1.5 seconds. So that's where we are.

We also have some open problems, such as how to actually get multiplicative homomorphism while maintaining our nice complexities, which we don't know how to do right now. Let's work on it, and thank you for your attention.