Okay, thanks for the introduction. This was joint work with Tibor Jager, Daniel Slamanig and Christoph Striecks, who are also in the audience today, so if you are interested or want to discuss something, just approach us. Let me start directly with some motivation for why we worked on this topic. Essentially, if you want to send encrypted payload data between a client and a server, you usually have to establish a key first, which means sending several messages back and forth before you eventually have a key that can be used to encrypt payload data. If you are using TCP, you even have to exchange additional messages before being able to send encrypted data at all, and what we looked at is how to reduce the number of messages that need to be sent back and forth here. One obvious change would be to switch from TCP to UDP in order to skip one such round trip, but the probably more interesting question is how we can also get rid of the key establishment and send cryptographically protected payload data in the very first message to the server. This is the question we addressed. First off, there is of course a quite trivial protocol that achieves zero round-trip time: we simply use an asymmetric encryption scheme at the server; the client encrypts some symmetric key under the public key of the server, encrypts the payload under this symmetric key, and sends everything to the server. The server can then unwrap the symmetric key and read the data. However, such a simple protocol has some major deficiencies.
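The trivial protocol just described is a standard KEM/DEM-style hybrid: wrap a fresh symmetric key under the server's public key, encrypt the payload under it, send both in one message. A minimal sketch of that flow, using a toy Diffie-Hellman-style key wrap and a hash-based XOR stream (all parameters and names are illustrative choices of mine, and deliberately insecure):

```python
# Toy sketch of the trivial 0-RTT protocol: the client wraps a fresh symmetric
# key for the server's public key and sends wrapped key plus encrypted payload
# in a single message.  The group and the XOR "cipher" are toys, NOT secure.
import hashlib
import secrets

P = 2**127 - 1          # toy modulus (real schemes use vetted groups)
G = 3                   # toy generator

def keygen():
    x = secrets.randbelow(P - 2) + 1        # server secret key
    return x, pow(G, x, P)                  # (sk, pk)

def stream_xor(key: bytes, data: bytes) -> bytes:
    # Derive a keystream from the shared secret and XOR it with the data.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def client_send(pk: int, payload: bytes):
    r = secrets.randbelow(P - 2) + 1
    wrapped = pow(G, r, P)                  # wrapped symmetric key
    k = hashlib.sha256(pow(pk, r, P).to_bytes(16, "big")).digest()
    return wrapped, stream_xor(k, payload)  # single 0-RTT message

def server_recv(sk: int, wrapped: int, ct: bytes) -> bytes:
    k = hashlib.sha256(pow(wrapped, sk, P).to_bytes(16, "big")).digest()
    return stream_xor(k, ct)

sk, pk = keygen()
msg = b"GET /index.html"
wrapped, ct = client_send(pk, msg)
assert server_recv(sk, wrapped, ct) == msg
```

Note that the server's secret key is static here, which is exactly why the deficiencies discussed next arise.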
First of all, there is obviously no forward secrecy: if the server's key leaks, all the data previously sent to the server leaks with it. Typically we want forward secrecy, which means the following: if we divide time into many periods and a key in a certain period leaks, then we only leak the data from that point on; everything encrypted in previous periods remains protected, i.e., the leaked key does not reveal any information about the plaintexts encrypted there. Such a protocol is also vulnerable to replay attacks: one can simply capture the first message and send it to the server again, and the server will presumably reply.

If we look at existing approaches that give us more features than this trivial protocol, for instance zero round-trip time in TLS 1.3 or QUIC, they already have quite nice features for reducing the round trips required to establish a key. Both protocols have in common that the first session requires one round trip for session establishment, and upon resuming such a session we can communicate with zero round-trip time. These protocols handle replays quite nicely, so the only remaining question is whether we also achieve full forward secrecy, meaning that forward secrecy holds for all messages that are sent. The answer is that most messages are indeed forward secret, but the payload data in the first message upon session resumption only has limited forward secrecy. What we want to achieve is forward secrecy for all messages, replay protection, and zero round-trip time at the same time. For a long time it was not even clear whether such a thing exists, but there was a really nice work last year at Eurocrypt by Günther, Hale, Jager and Lauer, who showed that you can indeed achieve those properties simultaneously by relying on a primitive called puncturable encryption, which is due to Green and Miers. Puncturable encryption is pretty much like a conventional public-key encryption scheme: you have a key generation algorithm, an encryption algorithm and a decryption algorithm. The only difference is an additional algorithm called Puncture, which takes a secret key and a ciphertext and outputs an updated secret key, indicated with a prime here. The property is that this updated secret key is no longer useful for decrypting the ciphertext on which it was punctured, but is still useful for decrypting other ciphertexts, and we can apply the puncturing algorithm repeatedly and thereby puncture a key on multiple different ciphertexts. How does this help with zero round-trip time key exchange? It is actually quite straightforward: you encrypt a message under the public key of the server, the server decrypts it using the secret key, and once the decryption has been performed, the server punctures the key on the ciphertext, so that the key is no longer useful for decrypting this ciphertext, and then deletes the old key. So if we have a puncturable encryption scheme that provides nice properties in this context, we are done and have our zero round-trip time key exchange protocol. What are the downsides of the existing approach? The schemes are quite expensive when it comes to puncturing and/or decryption. The Günther et al. paper gave a generic approach for using puncturable encryption for zero round-trip time key exchange, but when plugging in the existing puncturable encryption schemes, you either get quite inefficient decryption or quite inefficient puncturing, and both have to be done online
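As an aside, the puncturable-encryption interface and the 0-RTT flow just described can be sketched as a purely structural toy: one key pair per possible tag, puncturing by deleting the corresponding key. All names and parameters here are mine, the per-tag toy ElGamal wrap is insecure, and real constructions do not keep one key pair per tag; the point is only the KeyGen/Enc/Dec/Punc shape:

```python
# Structural toy of puncturable encryption (KeyGen, Enc, Dec, Punc) with a
# tiny tag space.  Payloads up to 32 bytes; NOT a secure scheme.
import hashlib
import secrets

P, G = 2**127 - 1, 3    # toy group parameters (insecure)

def keygen(tags):
    sk = {t: secrets.randbelow(P - 2) + 1 for t in tags}
    pk = {t: pow(G, x, P) for t, x in sk.items()}
    return sk, pk

def enc(pk, tag, m: bytes):
    r = secrets.randbelow(P - 2) + 1
    k = hashlib.sha256(pow(pk[tag], r, P).to_bytes(16, "big")).digest()
    return (tag, pow(G, r, P), bytes(a ^ b for a, b in zip(m, k)))

def dec(sk, ct):
    tag, c1, c2 = ct
    if tag not in sk:                       # key was punctured on this tag
        return None
    k = hashlib.sha256(pow(c1, sk[tag], P).to_bytes(16, "big")).digest()
    return bytes(a ^ b for a, b in zip(c2, k))

def punc(sk, ct):
    sk = dict(sk)
    sk.pop(ct[0], None)                     # puncturing = deleting (toy)
    return sk

# 0-RTT flow: decrypt, puncture, delete the old key -> replays now fail.
sk, pk = keygen(range(8))
ct = enc(pk, 3, b"0-RTT payload")
assert dec(sk, ct) == b"0-RTT payload"
sk = punc(sk, ct)                           # server punctures after use
assert dec(sk, ct) is None                  # replayed ciphertext is rejected
assert dec(sk, enc(pk, 5, b"other")) == b"other"  # other tags still work
```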
which is why it is currently only a feasibility result. We asked how we can improve on this, and on our way there we looked for ways to offload those expensive operations to less critical phases. In this context we made some observations. For instance, secret keys are usually held by relatively powerful servers, so somewhat larger secret keys are not much of an issue if they help us reduce the computation time for decryption or puncturing; that is perfectly acceptable. We also made another observation that is quite unusual for public-key encryption: normally, if you encrypt something, you expect to be able to decrypt it later, because otherwise encryption would be rather pointless. In this application, however, we can live with a decryption error that is not negligibly small, just a very small, non-negligible probability that is still sufficient for our purposes. For instance, if we want to establish a key and one session in a thousand fails, that will probably be fine, and if we can adjust this probability arbitrarily, it becomes a nice trade-off: in the zero-RTT key exchange we can always fall back to a one-RTT key exchange in case of such a failure, so we pay one extra round trip once in a thousand sessions and get zero-RTT key exchange for the rest.

With these observations in mind, we came up with a novel primitive which we term Bloom filter encryption. Bloom filter encryption can essentially be seen as a puncturable encryption scheme with exactly the relaxations I just mentioned, which gives us the advantage of blazing fast decryption and puncturing. I am sure most of you know what Bloom filters are, but for those who don't, let me quickly recap how they work. We have a Bloom filter state T, which is simply a bit array of length m, initially set to all zeros, and we use k hash functions, each of which maps from the domain of the set we want to insert into the Bloom filter to an index between 1 and m. So, using such a hash function, we can obtain an index for a particular value that we want to insert into the Bloom filter. For our example, let k be equal to three. To insert some values x, y, z, we simply evaluate our three hash functions, obtain the indexes, and set the respective positions addressed by the three hash functions to one; we do the same for all values we want to insert. To check whether a certain value is in the Bloom filter, we recompute all the hash functions and check whether all the addressed positions are one. Since we never set a position back to zero, there are no false negatives. But what happens if an element is not in the Bloom filter? If we are lucky, at least one of the computed indexes points to a zero, and we learn that the element is not in the Bloom filter. If we are unlucky, the indexes may all point to ones, which would indicate that the element is in the Bloom filter even though it is not. So we have the possibility of false positives, and by adjusting the parameters of the Bloom filter, namely the number of hash functions, the size of the filter, and the number of elements we want to insert, we can tune this probability to a value we are willing to accept.

So how does Bloom filter encryption work? Upon setup, we set up
such a Bloom filter, and to each bit of the filter we associate a key pair, so we have a secret key and a public key per bit in the Bloom filter; all those keys combined yield the secret key and the public key for the Bloom filter encryption scheme. This is just a very abstract overview, not our actual construction, but it should give you the basic idea. If we now want to encrypt a message M, we associate a tag tau with the ciphertext we are about to produce, again compute the indexes hit by this tag, and then use some fancy encryption scheme which lets us encrypt the message with respect to, in this example, key number 6, key number 11 and key number m-3, such that the resulting ciphertext can be decrypted using the secret key corresponding to any one of key 6, key 11 or key m-3. To puncture a key on a certain ciphertext, we again take the tag associated with that ciphertext, tau-prime in this case, obtain the indexes, and simply delete the secret keys associated with the positions indexed by this tag. Very informally, once we have deleted all those keys, the secret key is no longer useful for decrypting any ciphertext encrypted with respect to this tag. Note that puncturing only requires deletions, an operation that is needed in any other scheme anyway; we merely delete portions of the secret key and we are done. At the end we of course also update the Bloom filter state, so that we know those key components are no longer available. Decryption is also quite simple: we again use the tag associated with the ciphertext, determine the indexes, look for the lowest index where we still have a key, and simply perform the decryption using the underlying encryption scheme.

To give you an idea of how we can adjust this, here are concrete numbers for one example. We set the maximum number of elements in the Bloom filter to 2^20; to put this in perspective, this allows approximately 2^12 puncturings per day for a full year, which is quite a reasonable setting for smaller servers. The false positive probability is set to 10^-3, as in the example before, and the sizes we get are a Bloom filter of two megabytes and an optimal number of hash functions of 10. It is important to note that the number of hash functions, which essentially determines the ciphertext size, is quite low in such a setting, so we also obtain quite compact ciphertexts in this example. In the very abstract idea I just sketched, the keys are linear in the size of the Bloom filter, and we obviously asked whether we can do better. In doing so, we looked at different schemes and ended up using ideas based on the Boneh-Franklin identity-based encryption scheme, which lets us compress the public key to constant size: instead of associating a key pair with each bit of the Bloom filter, we use a single identity-based encryption public key and associate an identity with each Bloom filter position, so that each identity-based secret key is useful for decrypting ciphertexts encrypted with respect to that position. The ciphertext is then essentially linear in the number of hash functions, which in our example is in the order of tens, and we additionally use
shared randomness and a hashed variant of the scheme to further compress this, so that in the end we end up with about 3000 bits for our setting. These numbers already reflect the recent adjustments required due to progress in solving discrete logarithms in prime extension fields. As I said before, we are willing to accept somewhat large secret keys, and in our setting, with the parameters from before, this means about 700 megabytes of key material. We additionally introduced some other technicalities, which I will not go into here, that let us achieve CCA security; the details will be in our upcoming Eurocrypt paper. We also present an alternative construction that achieves constant-size ciphertexts using attribute-based encryption, and we extended all of this to multiple time periods, so that we can do 2^20 puncturings per time period and, once we have used up all our puncturings, switch to the next time period and again have 2^20 puncturings available; for this we use a similar approach as in previous work. There is also work in progress on other instantiations: Kai Gellert and Tibor Jager are currently working on alternative instantiations in order to optimize further parameters of such Bloom filter encryption schemes.

This brings me to the conclusion of my talk. The take-home message is that the existing approaches are conceptually very nice, because they give us for the first time the possibility to achieve all the properties we want, but the current instantiations are not that practical. What we achieved with our work is, in a sense, not to make all those expensive operations disappear, but to offload them to less critical phases, namely key generation and the switch between time intervals. This gives us very efficient decryption, which very roughly requires just an ElGamal decryption in the target group of the pairing, while puncturing only requires deletions and evaluating the hash functions of the Bloom filter, which are also very cheap operations. We expect decryption and puncturing times in the order of milliseconds, so this should really be practically usable, and we also expect that Bloom filter encryption may find other applications beyond zero round-trip time key exchange. As next steps, we are planning to evaluate this in practice and to deploy it in a somewhat larger setting, and we would be really interested in finding out how this scales with more puncturings, so we are currently also looking for partners who would be interested in implementing such a thing, maybe at larger scale. If you are interested, just contact us. And now I am happy to take questions, thanks.

Q: I want to ask how you do deletions in the Bloom filter. As far as I know, if you want to do deletions you have to use a counting Bloom filter, right?

A: We actually do not do deletions in the Bloom filter: we insert the tags that correspond to the punctured positions and delete only the associated keys. So we do not delete from the Bloom filter, we insert into it, and the positions corresponding to the inserted elements indicate which keys we have to delete.

Q: Okay, thanks.

Q: Hi, thank you. My question is: practically speaking, in something like TLS 1.3, would this be a replacement for what is essentially the session ticket encryption key mechanism? And in such a case, every place where the session ticket encryption key is replicated would also have to share the same deletion state.

A: To the first part: essentially it could be seen as a kind of replacement for such a key, but I think we achieve
something stronger, because we achieve forward secrecy with respect to every message, whereas with such a session key the first messages are only protected as long as that key does not leak, if I understand this correctly. And regarding the replication, I guess you would have to replicate the whole state, yes. Okay, let's talk afterwards.

Q: Hi, thank you very much for this really nice work and presentation. Does your threat model extend to consider an attacker who is a malicious client and chooses encryption tags to exhaust your Bloom filter?

A: Currently it does not, but informally we could, for instance, use techniques that are also used in other contexts, where we would simply not perform the puncturing operation before the client sends a second message using the key established in the first round trip. We did not formally think about this so far, but informally this would probably be a potential direction to look at.

Q: Okay, thank you.

Q: Hello. Currently in TLS, the thinking is that we are going to use a timestamp to limit the amount of replays, together with per-server strike registers, and then the huge issue is what happens when you have servers on the other side of the globe which are also getting replays. It is not immediately clear to me how this work compares to that in terms of reducing the number of replays, because it seems like you get the same protection for a single server, but with multiple servers around the world you can't [inaudible].

A: Yes, this would definitely be an interesting direction to have a look at, but it is currently not in our scope.

Moderator: Last question.

Q: Hi, what techniques did you use for the time-interval-based approach? Because there is a subtle distinction between those two works, and that is actually why the second one is way less efficient than the first.

A: What we used for the time-based approach is basically a hierarchical identity-based encryption scheme, and it more or less uses the usual techniques that are always used in the context of forward security. Essentially it is very similar to what is done in previous work, but there is a slight difference regarding all this key generation and so on, which we have to do in the context of the Bloom filters. Thank you.

Moderator: Okay, let's thank the speaker again.
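The Bloom filter mechanics and the parameter example quoted in the talk (n = 2^20 insertions, false-positive probability p = 10^-3, a filter of roughly two megabytes, k = 10 hash functions) can be checked with a short sketch using the standard Bloom filter formulas; the hash construction below is an illustrative choice, not the one from the paper:

```python
# Sketch of the Bloom filter mechanics and the parameter choices quoted in
# the talk (n = 2**20 insertions, false-positive rate p = 10**-3).
import hashlib
import math

n, p = 2**20, 1e-3
m = math.ceil(-n * math.log(p) / math.log(2) ** 2)   # filter size in bits
k = round(m / n * math.log(2))                       # optimal number of hashes

assert k == 10                      # matches the talk
assert 1.5 < m / 8 / 2**20 < 2.5    # roughly two megabytes, as quoted

def indexes(x: bytes):
    # k hash functions H_1..H_k, each mapping x to a position in [0, m).
    return [int.from_bytes(hashlib.sha256(bytes([i]) + x).digest(), "big") % m
            for i in range(k)]

bloom = set()                       # set of 1-positions (all-zero array at start)

def insert(x: bytes):
    bloom.update(indexes(x))        # set the k addressed positions to one

def maybe_contains(x: bytes) -> bool:
    return all(i in bloom for i in indexes(x))   # false positives possible

for x in (b"x", b"y", b"z"):
    insert(x)
assert all(maybe_contains(x) for x in (b"x", b"y", b"z"))  # no false negatives
```

In the encryption scheme, `insert` corresponds to puncturing on a tag (the positions it sets to one are exactly the key components that get deleted), and a false positive of `maybe_contains` corresponds to the tolerated non-negligible decryption error.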