Thank you for the introduction. Hi, I'm Lisa. I'm a PhD student at the Karlsruhe Institute of Technology, currently visiting IDC Herzliya, and I'm happy to present our results on tightly secure non-interactive key exchange today. This is joint work with Julia Hesse from TU Darmstadt and my advisor Dennis Hofheinz. So let's begin at the beginning: non-interactive key exchange. What do I mean by that? We have Alice and Bob, who want to establish a secure channel. They can do so by each generating a public key, secret key pair and publishing the public key. Then, at any point in time, they can derive a shared key from their own secret key and the other user's public key, without any further interaction. Of course, we want correctness, so Alice and Bob should derive the same shared key. And we want security: for now, we want that an adversary seeing both public keys still cannot distinguish the shared key from a random key. As the title says, we're not just interested in security in this talk, but in tight security. So what is tight security? Generally in cryptography, we prove a scheme secure by showing that if we find an adversary attacking the scheme, we can construct an adversary attacking some problem which we assume to be hard, and then showing that the advantage of the adversary attacking the scheme can be bounded by some L times the advantage that the adversary has in attacking the problem. If we're just interested in asymptotic security, we're fine with L being polynomial, because if we assume a problem to be hard, then we assume the advantage that any adversary can have is negligible, and polynomial times negligible is still negligible. But for tight security, we want more. Namely, we want the loss L to be small, for example a small constant. And why do we care about that? There are two reasons. In theory, it's interesting because it gives a closer relation between the scheme and the underlying problem.
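The bound just described can be written out as a formula; this is a sketch in standard notation, where λ is the security parameter, A is the adversary against the scheme, and B is the adversary against the problem that the reduction constructs:

```latex
\mathrm{Adv}^{\text{scheme}}_{\mathcal{A}}(\lambda) \;\le\; L \cdot \mathrm{Adv}^{\text{problem}}_{\mathcal{B}}(\lambda)
```

Asymptotic security only needs L to be polynomial in λ; tight security asks for L to be small, for example a small constant.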
So it's interesting to think about constructions and impossibility results, and that's also the more theoretical direction this work is in. And for practice, it's interesting because it gives smaller keys, since we have to account for the security loss in the key size, and therefore more efficient instantiations. And in settings like the one we will see now, where the loss depends on the number of users, and we can have a huge number of users, this can actually make a difference. Okay, I claim that every one of you knows a NIKE, namely the Diffie-Hellman key exchange. Alice and Bob each choose a scalar and just publish the group element g raised to that scalar, and the shared key will just be g^(ab), if a and b are the exponents that Alice and Bob chose. We have security from the decisional Diffie-Hellman (DDH) assumption, which states that given g^a and g^b, the element g^(ab) looks like a random group element g^c. So there's no loss here. Is that what we mean by a tight NIKE? Well, if it were, I would not be standing here; we would have had one since 1976. So we consider a different security model. First of all, we don't just consider two parties in isolation, but the scenario where we have many, many parties that want to communicate with each other. Any two parties, for example Carol and David, should be able to do so at any point in time, just given the public keys of the other parties. And even if all the other parties come together and leak all their secret keys, they should still not be able to learn anything about the shared key that Carol and David derive. This can be captured in the following simplified security model. The adversary gets n public keys, like you saw in the picture before, and then he can choose which two parties to attack, i* and j*, Carol and David. From the experiment, he gets back the secret keys of all the other parties; that's what we call extraction queries.
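As a concrete illustration, the Diffie-Hellman NIKE just described can be sketched in a few lines. The tiny 11-bit group and the function names here are purely illustrative, not a secure instantiation:

```python
# Toy sketch of the Diffie-Hellman NIKE: publish g^a, derive K = g^(a*b).
import secrets

p = 2039   # toy safe prime, p = 2*q + 1 -- far too small for real security
q = 1019   # prime order of the subgroup we work in
g = 4      # generator of the order-q subgroup of Z_p^*

def keygen():
    """Pick a secret scalar a, publish the group element g^a."""
    sk = secrets.randbelow(q - 1) + 1
    pk = pow(g, sk, p)
    return pk, sk

def shared_key(my_sk, their_pk):
    """K = (g^b)^a = g^(a*b); both parties compute the same value."""
    return pow(their_pk, my_sk, p)

pk_A, sk_A = keygen()
pk_B, sk_B = keygen()
assert shared_key(sk_A, pk_B) == shared_key(sk_B, pk_A)
```

Note that the secret key a is uniquely determined by the public key g^a; this uniqueness is exactly what the lower bound discussed later exploits.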
He can extract the secret keys of everyone except the pair he challenges on, and he gets either the shared key or a completely random key. He has to distinguish which one it is, and his advantage is how much better he is than guessing. So how does the Diffie-Hellman key exchange fare in this setting? How can we prove security? Well, the reduction can just guess the indices i* and j* that the adversary will later query, and embed the DDH challenge in those two public keys, just as you saw before. But because the reduction has to give out the secret keys for all the other public keys, whenever the adversary does not query exactly this pair i*, j*, the reduction has to abort. So we have a security loss which is quadratic in the number of users. Now maybe this is just not a good reduction; I mean, this is the simplest one we can think of. Maybe there's a better reduction and we can prove it tighter. But actually, this is not true. Bader, Jager, Li, and Schäge proved in 2016 that for the Diffie-Hellman key exchange this loss is inherent, and not only for this scheme but for a broad class of non-interactive key exchange schemes. So what are our results? First, can we do better? Yes: we actually achieve a NIKE where the security loss is only linear in the number of users. Now, linear is still far from tight; earlier I said a small constant, and if we have many users, linear is still not what we aim for. But we give some intuition that this is hard: namely, we prove that for a broad class of NIKEs, including ours, this linear loss is again inherent. Additionally, because the security model that I showed you before is not what we aim for in the end, namely active security, we give a generic transformation from any NIKE with passive security to a NIKE with active security, and a tight instantiation for our scheme. But I will probably not have much time to talk about that.
Okay, so I want to talk mainly about the first two parts of the result. In order to do so, and to understand how we can circumvent the lower bound of Bader et al., let's take a look at how the lower bound works. First of all, it applies to all NIKEs where public keys have unique secret keys. For the Diffie-Hellman key exchange, this is the case simply because the discrete log is unique. It rules out any tight simple black-box reduction. How does it do so? Here we have a reduction B, transforming an adversary attacking the scheme into a solver for the underlying problem. The technique of this lower bound is the so-called meta-reduction technique. The idea is: we don't actually have an adversary, just the reduction, and we want to simulate the adversary by employing the reduction itself. How can we simulate the adversary? Well, it's certainly sufficient to compute K_{i*,j*}, because then we can compare it with K_b and output the result. But of course, this is generally hard. How can we use the reduction to do so? Well, we can rewind the reduction, because through our extraction queries the reduction actually has to give us back secret keys. So when we rewind the reduction and query, say, the pair i*, j for some j different from j*, then the reduction has to give us the secret key for j*. And once we have that secret key, we can compute K_{i*,j*}. Why is that? Because secret keys are unique, and because a shared key is derived deterministically from the secret key and the public key. So with unique secret keys, we have unique shared keys, and we can perfectly simulate the adversary. Now assume there were such a rewound run on which the reduction does not abort. Then the meta-reduction could efficiently solve the problem, because it just uses the efficient reduction and doesn't need an adversary anymore. But this is a contradiction, because we assume the problem to be hard.
So we have a security loss of at least Omega(n^2), because there can be only one run on which the reduction does not abort. In other words, the intuition is that the reduction doesn't know the secret keys for at least two indices, because otherwise it couldn't make use of the adversary, since it's a simple black-box reduction. And because it has to give back the secret keys, it has to abort on all other runs. Okay. For explaining our construction, all you have to take from this is that uniqueness of the secret keys implies uniqueness of the shared keys, and because of the uniqueness of the shared keys, the lower bound actually works. So how can we circumvent that? In our scheme, public keys obviously cannot have unique secret keys if we want to exploit this; instead, public keys have many, many secret keys. This alone is not enough, because of correctness: all these secret keys have to be in some sense equivalent, because the correctness of the scheme requires that no matter which secret key we use, the shared key has to be the same; otherwise we would not have correctness. But what we can do now, and what reductions often do, is introduce invalid public keys. Invalid public keys have, of course, to look like public keys computationally, but for invalid public keys we don't have any correctness requirements. So what we will show for our scheme is that the shared key of an invalid public key together with a valid public key looks completely random, even knowing both public keys. And how do the many secret keys play a role here? Well, this is only possible if there's entropy left in the secret key given the public key, so if there are many secret keys that are possible for one public key. This is how we will employ our many, many secret keys.
Okay, so now the question arises how to instantiate this, to get the computational indistinguishability and to get the randomness property here. For the computational indistinguishability, we employ a subset membership assumption, which states that for a language, it's indistinguishable whether we choose a word from inside the language or from outside the language. You can see the colors here, green and red; maybe you recall that on the last slide the valid public keys were green and the invalid public keys were red. So this is exactly what the valid and invalid public keys will be. And then we have the second part, which we can instantiate using a hash proof system. How does a hash proof system work? A hash proof system gives us two methods of evaluation: public evaluation and private evaluation. For public evaluation on a word from the language, we take the public key of the hash proof system, but in addition to the word, we need a witness that the word is actually in the language. For private evaluation, we just need the secret key of the hash proof system together with the word. Whenever the word is actually in the language, we want that public evaluation and private evaluation yield the same key; that's correctness. Further, we want universality: whenever we're outside the language, then even given the public key corresponding to the secret key, the private evaluation of the secret key on a word outside the language should look completely random. Maybe you can see that this looks very much like what we needed before, and indeed it is. So our NIKE is simply this: Alice chooses a word from the language together with a witness, and Bob chooses a hash proof system public key and secret key. Now they can both derive a shared key. Alice can do so by evaluating the hash proof system publicly, using the hash proof system public key of Bob, which he published as his public key.
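Before continuing with the construction, here is a minimal sketch of such a hash proof system, assuming the standard DDH-based (Cramer-Shoup-style) instantiation over a toy group; the parameters and function names are illustrative, not from the talk's slides:

```python
# Toy DDH-based hash proof system: language L = {(g1^w, g2^w)} with witness w.
import secrets

p, q = 2039, 1019       # toy safe prime p = 2q + 1, subgroup order q
g1, g2 = 4, 9           # two generators of the order-q subgroup of Z_p^*

def hps_keygen():
    # many secret keys map to the same public key: q pairs (k1, k2) per hpk
    k1, k2 = secrets.randbelow(q), secrets.randbelow(q)
    hpk = (pow(g1, k1, p) * pow(g2, k2, p)) % p
    return hpk, (k1, k2)

def sample_word():
    """A word in the language L, together with its witness w."""
    w = secrets.randbelow(q)
    return (pow(g1, w, p), pow(g2, w, p)), w

def pub_eval(hpk, w):
    """Public evaluation: needs only hpk and a witness for the word."""
    return pow(hpk, w, p)

def priv_eval(hsk, word):
    """Private evaluation: needs only hsk and the word itself."""
    (u1, u2), (k1, k2) = word, hsk
    return (pow(u1, k1, p) * pow(u2, k2, p)) % p

hpk, hsk = hps_keygen()
word, w = sample_word()
assert pub_eval(hpk, w) == priv_eval(hsk, word)   # correctness on L
```

On a word outside the language, i.e. (g1^r, g2^s) with r different from s, the private evaluation is uniformly random even given hpk; that is the universality property just described.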
She evaluates on her own word with her own witness. Bob, on the other hand, can evaluate privately, because he has the hash proof system secret key, which he generated himself, and he has the word of Alice, which is her public key. Okay, now we have the multi-user setting, so we don't actually have just Alice and Bob. What each party will do is generate both parts and publish both, and then either you can think of multiplying both evaluations, or you always choose the hash proof system parameters of the party with the larger index; there are several possibilities to make it work for more parties. What's crucial here is, first of all, that secret keys are not unique, because hash proof system secret keys are not unique. This is what gives us universality once we switch a word x to an x outside the language L, and this is the crucial part that allows us to get a tighter reduction. Now the security proof is very easy. The idea is that we no longer have to guess both indices; we just have to guess the smaller of the two indices. So we guess one index and embed the subset membership challenge x_{i*}, which can be either in the language or outside the language, into the public key of that party. If we were indeed right, and if we always use the hash proof system parameters of the party with the larger index, then the shared key will just be the private evaluation of the hash proof system secret key of the j-th party on x_{i*}. If x_{i*} was in the language, that's a perfect shared key. But if x_{i*} was outside the language, the shared key looks completely random, because hsk_j is unknown; this is simply the universality of the hash proof system. And the intuition is that this gives us a linear security loss: we don't have to guess two indices, we just have to guess one right.
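Putting the pieces together, the NIKE just described can be sketched as follows, again inlining a toy DDH-based hash proof system. The convention of always using the HPS keys of the party with the larger index is one of the options mentioned above; everything here is a sketch, not the paper's exact scheme:

```python
# Toy HPS-based NIKE: every user publishes a language word (keeping the
# witness secret) together with an HPS key pair; the pair with the larger
# index contributes the HPS keys. Tiny group -- illustration only.
import secrets

p, q = 2039, 1019       # toy safe prime p = 2q + 1, subgroup order q
g1, g2 = 4, 9           # two generators of the order-q subgroup of Z_p^*

def nike_keygen():
    w = secrets.randbelow(q)                          # witness (kept secret)
    x = (pow(g1, w, p), pow(g2, w, p))                # word in the language
    k1, k2 = secrets.randbelow(q), secrets.randbelow(q)
    hpk = (pow(g1, k1, p) * pow(g2, k2, p)) % p       # HPS public key
    return (x, hpk), (w, (k1, k2))                    # (pk, sk)

def shared_key(my_idx, my_sk, their_idx, their_pk):
    (their_word, their_hpk) = their_pk
    (my_w, (my_k1, my_k2)) = my_sk
    if my_idx < their_idx:
        # smaller index: public evaluation of my witness under their HPS key
        return pow(their_hpk, my_w, p)
    # larger index: private evaluation of my HPS secret key on their word
    u1, u2 = their_word
    return (pow(u1, my_k1, p) * pow(u2, my_k2, p)) % p

pk1, sk1 = nike_keygen()
pk2, sk2 = nike_keygen()
assert shared_key(1, sk1, 2, pk2) == shared_key(2, sk2, 1, pk1)
```

Both parties end up with hpk_2^(w_1): the smaller-index party computes it publicly from her witness, the larger-index party privately from his HPS secret key.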
So this gives us a linear security loss. The idea is that we shifted one key from the red box to the green box: in the green box, we now have all keys except for one. And for the key in the red box, it's not only that the reduction doesn't know the secret key; it's possible that a secret key doesn't even exist anymore, because once we switch, it's generally an invalid public key. Okay, so this is our scheme. Now the question is: is this the best we can do? Can we get better than a linear security loss? As I said before, the answer is no. But why is that? Let's recap the lower bound from before. The idea was that we can obtain either sk_{i*} or sk_{j*} by rewinding, and then compute the unique K_{i*,j*}, because the secret keys are unique. This shows that the reduction has to abort on all runs except for this one run i*, j*, and this means we have a loss that is at least n^2. The problem, of course, is that now our secret keys are not unique; we did exactly this to circumvent the lower bound. But now we want to employ the techniques of the lower bound again, and of course it doesn't work directly. So what can we do? In order for the meta-reduction technique to work, all we require is that K_{i*,j*} is actually unique. Well, between valid public keys, K_{i*,j*} is unique; this is the correctness of the scheme. So we have that. Furthermore, and this is crucial for the lower bound, invalid public keys of our scheme don't have secret keys. So how can we employ this in our meta-reduction? Well, if we can extract both sk_{i*} and sk_{j*}, then we know that pk_{i*} and pk_{j*} are actually valid public keys, and then we know K_{i*,j*} is unique. And if we know K_{i*,j*} is unique, we can argue exactly as before that we can solve the problem without actually employing an adversary, just by using the reduction.
So in our case, what changed is this: before, we said there can just be this one run i*, j* on which the reduction doesn't abort. Now we know that the reduction has to abort on either all runs without i* or on all runs without j*; if it could succeed on both, then we could solve the problem ourselves. So we get a linear loss instead of a quadratic loss as the lower bound. And okay, I think I can say something very briefly about how we get from passive to active security. With active security, every user can register public keys himself, so he can register whatever he wants as a public key, and this makes proving security harder. But how can we force him to actually register something valid as a public key? We just let him prove it. And we need a rather strong kind of proof: he proves that he actually has a secret key for his public key, using an unbounded simulation-sound non-interactive zero-knowledge proof of knowledge. We need the first part, unbounded simulation soundness, because we need to be able to simulate proofs during the reduction, where we might not know the secret key, as you just saw before. And the proof of knowledge allows us to extract the secret keys of corrupted users, which is also necessary to prove security. We give a generic instantiation, as I said before, from standard components. And for our NIKE, we can actually achieve an optimized tightly secure instantiation, because, for example, we can use the linearity, and a one-time simulation-sound NIZK is actually sufficient. And this lets me get to the recap of our results.
Our construction is modular; instantiating it with the hash proof system of Cramer and Shoup, we get the first passively secure NIKE requiring only three group elements and having only a linear security loss. And using the instantiation of the generic transformation, we get 12 group elements for active security. So as you can see, there's still room for future improvement, to get not just from red to orange but actually from orange to green. But it seems inherently hard to do so, as our lower bound applies to all schemes which, like ours, have invalid public keys without secret keys. So one will need fundamentally new techniques to get there. And yes, I think that's all I want to say. Thank you very much for joining, and I'm happy to take questions.