Thank you very much, Fabrice. So this talk is about public key encryption, as the session title already suggests. The accepted security notion for public key encryption is indistinguishability against chosen ciphertext attacks, so just a brief recap of what that is. It's the same picture as in the rump session talk about fair security notions, but this time we take the perspective of the challenger: the good guy is the challenger and the bad guy is the adversary, so the roles are exchanged. The adversary gets a public key, outputs two messages, and has to decide whether he gets an encryption of the left one or of the right one. In the end, the attacker outputs a bit, and we say that the scheme is secure if no efficient attack can do better than guessing, up to negligible differences of course. The first observation is that this security definition is simple, and you can make it even simpler if you consider, for instance, key encapsulation mechanisms, but it covers only a non-practical scenario: a one-user, one-ciphertext scenario, so it's not really a realistic security notion in itself. The standard argument is that if you're only interested in polynomial security, then a hybrid argument shows that security in this IND-CCA sense also gives you security in a multi-user, multi-ciphertext scenario. However, it does not give you quantitatively the same security guarantees: due to the hybrid argument you lose a bit of concrete security, so you get security guarantees that degrade in the scenario size. In particular, if you don't know into which kind of setting, of which size, your scheme is going to be deployed, you may have a hard time giving reasonable key length recommendations in order to assure guaranteed security.
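The experiment just sketched can be written out as code. The following is a minimal illustration of the game's plumbing, with names of my own choosing and a deliberately trivial placeholder "scheme" (the public key equals the secret key, so an adversary can win every time); it is not any scheme from the talk, only the shape of the IND-CCA experiment from the challenger's perspective.

```python
import secrets

def toy_keygen():
    # Placeholder scheme: the "public" key equals the secret key,
    # so it is trivially insecure -- it only exercises the game.
    k = secrets.token_bytes(16)
    return k, k

def toy_enc(pk, m):
    return bytes(x ^ y for x, y in zip(m, pk))

def toy_dec(sk, ct):
    return bytes(x ^ y for x, y in zip(ct, sk))

def ind_cca_game(keygen, enc, dec, adversary):
    # One run of the single-challenge IND-CCA experiment,
    # written from the challenger's point of view.
    pk, sk = keygen()
    b = secrets.randbits(1)          # the challenger's hidden bit
    state = {"challenge": None}

    def dec_oracle(ct):
        # Decryption oracle; the challenge ciphertext is off-limits.
        if ct == state["challenge"]:
            return None
        return dec(sk, ct)

    m0, m1 = adversary.choose(pk, dec_oracle)
    state["challenge"] = enc(pk, m0 if b == 0 else m1)
    guess = adversary.guess(state["challenge"], dec_oracle)
    return guess == b                # True iff the adversary wins

class DistinguishToy:
    # Against the deterministic toy scheme, re-encrypting m0 and
    # comparing with the challenge wins with probability 1.
    def choose(self, pk, dec_oracle):
        self.pk = pk
        self.m0, self.m1 = b"\x00" * 16, b"\xff" * 16
        return self.m0, self.m1

    def guess(self, ct, dec_oracle):
        return 0 if ct == toy_enc(self.pk, self.m0) else 1
```

For a secure scheme, the winning probability over many runs would be 1/2 plus something negligible; against this placeholder it is always 1.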
Okay, so what we're interested in in this talk is tightly secure public key encryption, and in particular, at least for the purpose of this talk (the papers are a bit more general), multi-challenge chosen ciphertext security. This is the same as before, but the adversary gets many encryptions of message pairs that he selects: the encryption step is just repeated, and you always get an encryption of the left message, or always an encryption of the right message, and in the end the adversary has to decide which is which. Intuitively, this gives you secure communication in a setting with one user and many ciphertexts, with a tight reduction, so the security guarantees immediately relate to what you would encounter in such an application. What we want is an encryption scheme whose reduction does not lose any factor in the number of challenge ciphertexts that the adversary gets. So we're interested in a reduction to a standard assumption, for instance DDH or your favorite computational assumption, and this reduction should be tight, meaning that the reduction loss does not depend on the number of ciphertexts. In particular, this enables you to give security guarantees for scenarios of a priori unknown size. The problem is that the standard techniques to prove chosen ciphertext security for public key encryption do not give you a reduction with this property. Let me give you a few examples. First of all, the picture from before has changed a little, because I've dropped the decryption oracle, the public key, and the final decision, and just focused on the encryption queries.
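To make the quantitative difference concrete: a hybrid argument over Q challenge ciphertexts bounds the CCA advantage by roughly Q times the underlying (say DDH) advantage, which in bits-of-security terms is a loss of log2(Q); a tight reduction loses at most a small constant, independent of Q. The numbers below are illustrative only, not taken from the paper.

```python
import math

def bits_after_hybrid(assumption_bits, num_ciphertexts):
    # Hybrid argument: Adv_CCA <= Q * Adv_DDH, i.e. a log2(Q)-bit loss.
    return assumption_bits - math.log2(num_ciphertexts)

def bits_after_tight(assumption_bits, constant_loss):
    # Tight reduction: the loss is a constant, independent of Q.
    return assumption_bits - math.log2(constant_loss)

# 128-bit DDH hardness, 2^30 challenge ciphertexts:
print(bits_after_hybrid(128, 2 ** 30))   # 98.0 bits remain
# Same assumption with a constant loss of 2^8 ("seven or eight bits"):
print(bits_after_tight(128, 2 ** 8))     # 120.0 bits remain
```

This is exactly why a deployment of a priori unknown size is problematic for a non-tight scheme: the key length must be chosen for the worst-case Q.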
So the adversary makes many queries of message pairs and gets back, in each case, the encryption of either the left or the right message, and always the same side. If we want to construct public key encryption, we have a bunch of paradigms to do this with. One paradigm addresses the dilemma inherent in chosen ciphertext security: in your security reduction you need to answer decryption queries, but at the same time you should not be able to decrypt the challenge ciphertexts, or at least you should be able to randomize the challenge ciphertexts in some sense, so that the adversary tells you something new when he tells you what was encrypted. One way to do this, inspired by the identity-based encryption setting, is a reduction that knows a punctured secret key: you can answer all decryption queries, but the key is punctured in the sense that for the challenge ciphertext it will not work; it will just give you some division-by-zero error or some syntax error. So the decryption key fails for one particular ciphertext, and that's the ciphertext you can randomize in the reduction, the one for which you can argue that the adversary tells you something new if he tells you what's inside. Unfortunately, this only allows you to randomize one ciphertext at a time: in all these experiments we want to randomize all the challenge ciphertexts, but the punctured secret key approach only gives you a means to randomize them one at a time.
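The punctured-key idea has a well-known symmetric-key analogue, a GGM-style puncturable PRF: the punctured key evaluates the function everywhere except at one point, where the value stays hidden. This is only an analogy for the paradigm, not the public-key construction discussed in the talk; all names and parameters below are mine.

```python
import hashlib

def prg(seed):
    # Length-doubling PRG from SHA-256: (left child, right child).
    return (hashlib.sha256(seed + b"L").digest(),
            hashlib.sha256(seed + b"R").digest())

def prf(key, x, n):
    # GGM PRF on n-bit inputs: walk the binary tree along the bits of x.
    node = key
    for i in reversed(range(n)):
        node = prg(node)[(x >> i) & 1]
    return node

def puncture(key, xstar, n):
    # Punctured key: the siblings along the path to xstar.
    pkey, node = [], key
    for i in reversed(range(n)):
        bit = (xstar >> i) & 1
        left, right = prg(node)
        pkey.append((i, left if bit == 1 else right))
        node = left if bit == 0 else right
    return pkey

def prf_punctured(pkey, xstar, x, n):
    # Evaluates prf(key, x) for every x != xstar; prf(key, xstar)
    # stays information-theoretically hidden given only pkey.
    if x == xstar:
        raise ValueError("key is punctured at this point")
    for i, sibling in pkey:
        if (x >> i) & 1 != (xstar >> i) & 1:
            # Highest differing bit: descend from the stored sibling.
            node = sibling
            for j in reversed(range(i)):
                node = prg(node)[(x >> j) & 1]
            return node
```

The reduction-side intuition matches the talk: whoever holds the punctured key can answer every "decryption" (evaluation) query except the one punctured point, which is exactly the point that can be randomized.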
Similarly, if you construct your encryption scheme in the hash proof system regime, the strategy is a bit different: there the reduction always knows the full secret key, but makes one challenge ciphertext special, and you offload additional entropy from the secret key into the encryption of that ciphertext. It's a bit funny, because you then use the secret key to create encryptions, and the secret key works on the challenge ciphertext in a special way, such that additional entropy in the secret key is reflected in that challenge encryption. But this is an entropy argument: it argues that the secret key has more entropy than the public key, more than the adversary knows a priori, and this only gives you leverage to randomize a very limited number of ciphertexts, one at a time. So this also doesn't work in a setting where you want to randomize many challenge ciphertexts in few reduction steps, or with little reduction loss. Then there's Naor-Yung-type double encryption, and the slide already gives a somewhat complicated description of the Naor-Yung paradigm; I'll go into more detail on how it works in a few slides, so ignore it for the time being. It's a very old method to obtain chosen ciphertext security, but it requires a very strong non-interactive zero-knowledge proof, and that's the difficulty that makes everything very hard when you go to the multi-challenge setting. Okay, so what's in this work? First I should explain the table. The known schemes are in the upper part, and green color means something good (of course there should really be several shades of green). You have Cramer-Shoup and Kurosawa-Desmedt; the first two columns are in terms of group elements, and Kurosawa-Desmedt is one group element better, but it's all green: it's all efficient, all practical, all something we could live with. Then there's the red part, which marks something bad: Cramer-Shoup and Kurosawa-Desmedt, the state-of-the-art schemes in the upper part, have a non-tight reduction, which means that the security guarantees degrade in the scenario size. The assumptions on which we rely here are very mild, standard, and well investigated, so that part is again green. Then there's a bunch of works on achieving tight security, and the problem with those was that something was always inefficient. In the beginning the reduction was very tight, it just lost a constant factor, but the ciphertext was huge: it relied on tree-based signatures inside the ciphertext, which led to a very large ciphertext. This was improved, and now we're in a situation where you can choose whether you have a large public key or a still somewhat large ciphertext, while the other parameter becomes small again, and you still have a reasonably tight reduction. The loss here is λ, the security parameter, so something like a hundred: you lose maybe seven or eight bits of security due to the reduction, but that is much better than losing 30 or 40 bits if you have a loss of Q, the number of encryption queries. Okay, so what does this work do? We construct new schemes that are bad in different metrics, or bad in a different combination. One scheme still requires pairings; PG means pairing-friendly group, and this is not what we would like to have; we would like a scheme based on a standard assumption like DDH.
The Decision Linear assumption is not so bad, but this construction requires symmetric pairings, and the pairing makes it all pretty inefficient. Still, it's about six group elements in the ciphertext, something you could live with, and the public key is somewhere between bad and good. So this is a new scheme we get in the pairing regime, and we also get a new scheme from the DCR assumption. In fact, the main contribution of this work is that we present generic new techniques to solve the problem of randomizing challenge ciphertexts, and as a demonstration, this gives the first tightly chosen-ciphertext-secure public key encryption scheme from a DCR-like assumption. Maybe it's conceptually very interesting, but I'm not suggesting that it is practical in the end: you still have 30 group elements in the ciphertext. In the remaining talk I will just give you a glimpse of the techniques. The basic strategy is to start from Naor-Yung double encryption (the first part of the slide is just copied, you can ignore it). In Naor-Yung double encryption, the ciphertext consists of two encryptions of the same message under a mildly secure, that is chosen-plaintext-secure, encryption scheme. So in an honest encryption we have M0 and M1 under different public keys, and you add a non-interactive zero-knowledge proof that the two messages are equal, M0 = M1; we call this a consistency proof, and we call a ciphertext consistent when it really encrypts the same thing twice, so both ciphertexts C0 and C1 decrypt to the same message. How would you prove the chosen ciphertext security of this scheme? This is a known way to prove Naor-Yung secure; I don't think it's the first such proof, but there have been several
proofs of Naor-Yung, and this is one of them, which happens to mesh particularly well with what we want to do. You start with the CCA experiment, and in the honest scheme you use secret key 0 to decrypt: to decrypt you just need one secret key, relying on the consistency proof that M0 equals M1, so you would get the same thing with the other secret key. Then, in the security experiment, we have a few game hops and try to randomize all challenges. The first thing you do is simulate all proofs, relying on the zero-knowledge simulation property; that doesn't sound difficult. The next thing you could do is randomize all the M1 encryptions: since you simulate the proofs, you don't need a witness, so you can play around with the right part of the ciphertext. And you can randomize all of those, because it's a mildly secure scheme; think of ElGamal in the C1 part. There it's very easy to get tight security, because you don't have the decryption oracle dilemma of needing to decrypt everything except the challenge ciphertexts. You can just use ElGamal, and then it's very easy to randomize all the C1 parts in all the challenges that the adversary gets, at the same time, without any additional reduction loss, in just one step. So this is easy. The difficult part, what makes this really challenging, is the next step, where we want to randomize the left part of the encryptions as well. In order to do this we must forget the secret key sk0, because we still use it to implement the decryption oracle, and so we need to switch the decryption
key that we use to implement the decryption oracle: we use sk1 instead. To do that without changing anything that the adversary observes, we must rely on soundness, which says that anything the adversary sends to the decryption oracle we can decrypt with either secret key and get the same result. So here we rely on the soundness of the proof system, in fact on simulation soundness: we've simulated many proofs for bad, false statements, and now we need to rely on soundness anyway. This is the hard part, the red part. Then we randomize M0, and once we have randomized everything that the adversary gets, we're done. So the difficulty is outsourced into this non-interactive zero-knowledge proof: it needs to be secure with a tight security reduction in the many-challenge setting, and it seems hard to construct such creatures. In this work we give a slightly varied randomization strategy and a new way to prove Naor-Yung secure, specifically geared towards multiple challenges. One ingredient that we use is hash proof systems, so here is a short recap of what hash proof systems are. They are designated-verifier non-interactive zero-knowledge proofs: there's a public key and a secret key; with the public key you can generate proofs if you know a witness, and with the secret key you can verify proofs. How do you verify proofs in the particular case of hash proof systems? There is a proof which is uniquely determined by the instance and the secret key, and you can compute that unique proof either with the secret key, just from the instance x, or from public information using a witness. The verifier simply checks whether the proof he computed from the instance alone matches the thing he got from the prover. It's easy to simulate, because we can use the
secret key to compute proofs: you just apply the secret key to the instance, that's it. And, and this is the nice property of hash proof systems, we have statistical soundness in the following sense. The proofs are unique, so it doesn't matter whether they're simulated or honestly generated; if you know only proofs for true statements, then any proof for a false statement, in the sense of the value that the verifier compares against, is information-theoretically hidden. The best you can do is guess, and you get statistical security with exponentially small soundness error. Okay, so this is what we're going to use, and we know efficient hash proof systems both for linear languages, linear in the exponent, from Cramer and Shoup already, and, particularly relevant for this talk, for languages of disjunctions of linear statements, from a work of Michel Abdalla, Fabrice Benhamouda, and David Pointcheval. So here's the idea for a proof system. The ciphertext looks just like Naor-Yung, and the proofs look like this: we actually have two proofs and a hidden value tau, which is a random bit. This already smells a little like Katz-Wang signatures, and I'll relate it to those later on. The two proofs are simply proofs for the statement that the ciphertext is consistent, that M0 = M1, or that tau has a particular value. So pi_0 proves that the ciphertext is consistent or tau equals zero, and pi_1 proves that the ciphertext is consistent or tau equals one. This means you can always get away with simulating, or, since you created the ciphertext yourself, with generating one of those pi_b, either pi_0 or pi_1, for any ciphertext, even if the ciphertext is inconsistent, because there is
a kind of simulation trapdoor here, which lets you select whether it's the left or the right proof system that you want to simulate. But you cannot get away with simulating both; then you break soundness in some sense. As I just said, a simulated proof for a bad ciphertext breaks the soundness of exactly one of the hash proof systems, the one with secret key hsk_{1-tau}; the other one you can simulate, because its statement is simply true. Okay, so before going into the actual proof strategy of how we randomize things, here's a picture; a picture is always good. These are all the challenge ciphertexts C_i that the adversary gets (because I'm lazy, I just drew five). Our goal will be to partition the set of all challenge ciphertexts into two parts: the ones with tau = 0, about half of the ciphertexts (remember, tau is the thing that parameterizes the proofs), and the ones with tau = 1, the other half. In each step of the proof we're going to randomize one half, which means the other half is going to be untouched. Green means the corresponding messages have been randomized. In the next step we create a different partitioning, with fresh random taus in the challenge ciphertexts, and randomize another half of all the ciphertexts, and so on, until we have partitioned λ times, security-parameter many times, each time randomizing half of the ciphertexts. After at most λ steps we will be finished, at least with high probability. So that's the strategy. But how
does this work in detail? First of all, during the security reduction we guess tau*. Think of this as an experiment where the adversary tries to convince you of something false, tries to break the soundness of the proof system: he wants to submit a decryption query on which he can detect whether you use the left or the right secret key. This tau* is the tau value of the particular ciphertext on which the adversary first successfully cheats. We can guess it, and intuitively this means we have guessed for which proof system the adversary breaks soundness first, either hsk_0 or hsk_1; intuitively, the adversary breaks the soundness of hsk_{1-tau*}. Then we randomize all the ciphertexts that we can randomize without breaking that particular proof system; those are half of the ciphertexts, the ones that do not lie in the same half as the ciphertext with tau*. Then we re-randomize the partitioning, partitioning the ciphertext space in a different way, and go back to step one. The difference to Katz-Wang signatures is the following: they also have a signature scheme where, with a different tool but in a similar way, they use the soundness of a proof system with an additional bit in the proofs, if you consider signatures as proofs of a zero-knowledge proof system. But that work was in the random oracle model, and there it was easy to have this partitioning bit public while the simulation capability stayed hidden, meaning that it was hidden from the adversary whether you can simulate the tau = 0 or the tau = 1 case. So the difference is where the simulation capabilities lie. In our
case, because we're not in the random oracle model, we have to decide in advance what we can and cannot simulate, and this is what tau dictates. Okay, so here's another illustration. The only difference to before is that we now have a C* here, which is actually a decryption query, the decryption query on which the adversary breaks the proof system. The rest is as before: we randomize everything, but we randomize it around C*. Okay, I'm running out of time. There are some omitted details: how does the switching of the partitioning really work? If you change, in this picture, from here to here, how does that really work, how can we forget the bit tau? That really requires a change of the scheme, such that you don't randomize directly but decouple ciphertexts, replicating the proof system: you work your way towards a setting where you don't have two instances of the proof system but, not exponentially many, but as many as you need to handle all ciphertexts differently. This is a very technical part, and I won't go into the details. The last problem is how to get suitable hash proof systems. In the pairing setting we can rely on the work I already mentioned, and in the DCR setting, so in an RSA-type setting with composite order groups, we construct a new proof system that exploits the fact that we can compute discrete logarithms in the DCR setting, of course for disjunctions of linear languages. Okay, so that brings me to the summary. The main goal was a new strategy to obtain tightly secure public key encryption schemes. The main difference to previous approaches is the way in which we randomize many challenge ciphertexts at once: we need to randomize very many ciphertexts in very few steps, and the way we partition the set of ciphertexts is chosen adaptively at encryption time. It's not hardwired into the scheme; it's chosen adaptively in the simulation, in the security proof, via this special bit tau, which was not there before. The main benefit, the demonstration that this is useful and gives some benefit over previous work, is that we have a DCR-based solution, and the technical means is that we require, and also construct, a new type of disjunction proofs, in particular in the DCR setting. Okay, so there's more: we have a follow-up work that shows that you can actually get this efficient without pairings, in the cyclic group setting. If you're in the DDH regime, you can construct a public key encryption scheme which is green on basically the whole line: a small public key, small ciphertexts, and a good reduction from DDH. We also have a result on structure-preserving signatures in follow-up work, where we show that you can get tightly secure structure-preserving signatures which are compact, at least asymptotically, using the same ideas. Okay, that's all I wanted to say; thank you for your attention.
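The hash-proof-system ingredient recapped in the talk can be sketched concretely for the classic DDH language of Cramer and Shoup: the unique proof can be computed either privately (secret key plus instance, which is also the simulator) or publicly (public key plus witness). The toy parameters below, a tiny order-11 subgroup, are mine and chosen purely for illustration; real schemes use groups of cryptographic size.

```python
import secrets

# Order-q subgroup of Z_p^* with p = 2q + 1; toy sizes only.
p, q = 23, 11
g1, g2 = 4, 9   # two generators of the order-11 subgroup of squares

def keygen():
    a, b = secrets.randbelow(q), secrets.randbelow(q)
    sk = (a, b)
    pk = (pow(g1, a, p) * pow(g2, b, p)) % p
    return pk, sk

def instance(r):
    # A true instance: the tuple (g1^r, g2^r), with witness r.
    return (pow(g1, r, p), pow(g2, r, p))

def prove_public(pk, r):
    # Prover side: needs only the public key and the witness.
    return pow(pk, r, p)

def prove_private(sk, x):
    # Verifier (and simulator) side: needs only sk and the instance.
    a, b = sk
    u1, u2 = x
    return (pow(u1, a, p) * pow(u2, b, p)) % p

pk, sk = keygen()
r = secrets.randbelow(q)
x = instance(r)
assert prove_public(pk, r) == prove_private(sk, x)   # unique proof matches
```

On an invalid pair (g1^r, g2^s) with s != r, the value prove_private(sk, x) retains entropy even given pk, which is exactly the statistical soundness property the talk relies on: a proof for a false statement is information-theoretically hidden.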