This talk is about Bloom filter encryption and its applications to efficient forward-secret 0-RTT key exchange; it is joint work with Tibor Jager, Daniel Slamanig, and Christoph Striecks.

Okay, so, introduction. Basically, as this is a key exchange session, we see TLS as the main application of our work. If you look at a protocol like TLS, you will notice that you have to send some messages back and forth before the two parties actually agree on a session key and can then use it to exchange encrypted data. The question we ask is: how can we reduce those round trips? Round trips cost time, and over long distances that latency adds up. So one interesting question is: how can we get rid of the messages which are only there to establish some kind of key? What we want is to somehow send encrypted data directly within the very first message to the server. This is what is referred to as 0-RTT, for zero round-trip time, key exchange, and 0-RTT key exchange is what this talk is about.

So first, let's start with a kind of trivial protocol: the client simply uses the server's long-term public key to encrypt a session key, and both parties then use this session key to protect the payload data. However, a problem that immediately comes up is that there is no forward secrecy: if the long-term secret key is compromised, the adversary can decrypt all previous sessions as well.
Many of you will probably know the idea of forward secrecy: ideally, if the key leaks at a certain point in time, then only the sessions after the leak are affected; everything that was completed before the key leaked remains secure. And of course, a 0-RTT message can also simply be replayed, so replay protection is something we have to worry about as well.

Okay, so which approaches currently exist? TLS 1.3, for instance, achieves 0-RTT connection establishment for many sessions via session resumption: you first establish a session with a one-round-trip protocol, and you can then resume that session later using a zero-round-trip mechanism. This does a nice job of establishing several of the desired properties, but full forward secrecy, and replay protection, for the 0-RTT messages themselves remained open. And indeed this was an interesting open question for quite a long time, until Günther, Hale, Jager, and Lauer basically came up with a solution. Their construction is based on puncturable encryption. Puncturable encryption is a public-key encryption scheme with an additional algorithm, called Puncture: it takes the secret key and a ciphertext and outputs an updated secret key. The property of this updated secret key is that, although it can no longer decrypt the ciphertext on which it was punctured, it is still able to decrypt every other ciphertext.
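To make the interface concrete, here is a minimal functional sketch of a puncturable-encryption API in Python. This is my own toy model, not the scheme from the talk: it captures only what punctured keys can and cannot decrypt, with no actual cryptography or security (a real scheme makes puncturing cryptographically irreversible).

```python
import os
from dataclasses import dataclass, field

# Toy model of the puncturable-encryption interface (functionality only,
# NOT secure): real schemes make puncturing cryptographically irreversible.

@dataclass
class SecretKey:
    master: bytes
    punctured: set = field(default_factory=set)   # tags this key can no longer decrypt

def keygen():
    master = os.urandom(16)
    return master, SecretKey(master)              # stand-ins for (public, secret) key

def encrypt(pk: bytes, message: bytes, tag: str):
    return (tag, message)                         # placeholder: no real encryption

def puncture(sk: SecretKey, tag: str) -> SecretKey:
    sk.punctured.add(tag)                         # updated key loses only this tag
    return sk

def decrypt(sk: SecretKey, ciphertext):
    tag, message = ciphertext
    if tag in sk.punctured:
        return None                               # punctured: decryption must fail
    return message
```

The point of the interface is the asymmetry: after `puncture(sk, "t1")`, the key still decrypts every ciphertext except the one tagged `"t1"`.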
And we can puncture repeatedly, one ciphertext after the other, so that the key is restricted further with each puncturing. How is this useful in the context of 0-RTT key exchange with forward secrecy and replay protection at the same time? Basically, it is very simple. The server holds the puncturable secret key. When a 0-RTT message arrives, the server decrypts it using its current secret key, and once the decryption is done, it simply punctures the secret key on that ciphertext, so that later on it is impossible to decrypt it ever again.

So the approach of Günther et al. from last year's Eurocrypt already provided a very nice solution to this problem. However, in the instantiations they proposed, puncturing required expensive operations on the secret key, taking on the order of seconds up to minutes. And since puncturing sits between decryptions, that is, it lies on the critical path before the next ciphertext can be processed, this is not really practical.

So in order to get this into practice, we basically asked: what can we maybe sacrifice in favor of more efficient schemes which are really usable in practice? The first thing is that we can tolerate a large secret key, because typically it is servers that hold those secret keys, and a server can easily store a large key. And secondly, the main observation, which may be surprising when I say it, since this is about encryption: typically, if you encrypt something, you expect that it can also be decrypted, because otherwise the scheme is pointless in most settings. However, in this context we can observe that every 0-RTT connection attempt has a natural fallback, namely a full, one-round-trip handshake.
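The decrypt-then-puncture flow with this fallback might look as follows; the key table and helper names here are hypothetical stand-ins for a real puncturable-encryption scheme, just to show the control flow.

```python
# Sketch of the server-side 0-RTT logic with fallback. The key table and
# helpers are hypothetical stand-ins for a real puncturable-encryption scheme.

def make_server():
    keys = {"t1": b"k1", "t2": b"k2"}      # per-tag decryption capability (toy)
    def decrypt_0rtt(tag):
        return keys.get(tag)               # None models a failed decryption
    def puncture(tag):
        keys.pop(tag, None)                # delete capability: replays now fail
    return decrypt_0rtt, puncture

def handle_first_message(decrypt_0rtt, puncture, tag, full_handshake):
    session_key = decrypt_0rtt(tag)
    if session_key is None:                # rare case: fall back to 1-RTT
        return full_handshake()
    puncture(tag)                          # decrypt-then-puncture
    return session_key
```

Note that the same mechanism gives replay protection for free: a replayed first message hits an already-punctured tag and simply triggers the fallback handshake.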
And in this exceptional case we simply have one round trip for the key exchange, while for all the remaining sessions we have zero round trips.

Okay, so having this observation in mind, we can move on to our construction. Before I give the idea of the construction, I want to briefly recap Bloom filters, because maybe not everyone is familiar with them. A Bloom filter is a data structure which addresses the approximate set membership problem: we can insert elements and later check whether they were inserted before. We start with a bit array of some size m, which is initialized to all zeros in the beginning, and we have k universal hash functions which map from the universe of elements to an index between 1 and m. Throughout the talk, the figures will use three hash functions as an example.

So how does this work? If we want to insert an element into the filter, we evaluate the hash functions on the element, compute the k indexes, and set the bits at all those index positions to one. And similarly, to check an element, we compute the same indexes and test whether all those bits are one. And observe that, since we only ever set bits from zero to one and never set a bit back to zero again, we can never get a false negative: an element which was actually inserted will always be recognized as inserted.
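A minimal Bloom filter along these lines, using a salted SHA-256 as a stand-in for the k hash functions (an illustrative choice, not the one from the talk):

```python
import hashlib

class BloomFilter:
    def __init__(self, m: int, k: int):
        self.m, self.k = m, k
        self.bits = [0] * m                # size-m bit array, all zeros initially

    def _indexes(self, element: str):
        # Derive k indexes in [0, m) from a salted hash (illustrative choice).
        return [int.from_bytes(hashlib.sha256(f"{i}|{element}".encode()).digest(),
                               "big") % self.m
                for i in range(self.k)]

    def insert(self, element: str):
        for i in self._indexes(element):
            self.bits[i] = 1               # bits only ever go from 0 to 1

    def check(self, element: str) -> bool:
        # True means "probably inserted"; False means definitely not inserted.
        return all(self.bits[i] for i in self._indexes(element))
```

Because bits are never cleared, `check` can only err in one direction, which is exactly the no-false-negatives property used in the talk.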
However, we can ask what happens if we check some value which was never inserted. If we are unlucky, all the bits at its indexes were already set to one by other insertions, and then the filter tells us the element is in the set although it is not: a false positive. This is why the Bloom filter only approximates set membership. We can control the probability of such false positives by choosing the number of hash functions and the size of the filter in relation to the number of elements which will be inserted. And in our construction, this false positive probability will in the end be exactly the probability that a connection has to fall back to the full handshake.

Okay, so how does Bloom filter encryption work, now that you know the Bloom filter approach? Basically, we set up a Bloom filter, and then we associate a key pair to every bit of the Bloom filter; the public parts together form the public key. To encrypt a message, we choose some kind of random tag and use the hash functions to determine indexes from this tag, and those indexes tell us under which keys the ciphertext is created. The encryption scheme needs to realize the following functionality: if we encrypt a message under the indexes derived by hashing the tag, then we want to be able, later on, to decrypt it with respect to every single key which corresponds to one of those indexes; in our example here, the keys at positions 6, 11, and m minus 3.
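The false-positive trade-off just mentioned can be quantified with the standard textbook formulas m = −n·ln(p)/(ln 2)² and k = (m/n)·ln 2; the example numbers in this snippet are mine, chosen to match that sizing rule.

```python
import math

def bloom_parameters(n: int, p: float):
    """Optimal bit-array size m and hash count k for n insertions and a
    target false-positive probability p (standard textbook formulas)."""
    m = math.ceil(-n * math.log(p) / (math.log(2) ** 2))
    k = round((m / n) * math.log(2))
    return m, k

# Example (my numbers): 2**20 insertions at a 0.1% false-positive rate
# need roughly 1.5e7 bits (about 1.8 MB of filter state) and k = 10 hashes.
m, k = bloom_parameters(2**20, 1e-3)
```

The useful feature for this application is that k stays small (around ten here) even for a million insertions; m grows only linearly in n.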
To puncture for such a ciphertext, what we do is also very straightforward: we compute the indexes determined by the tag, and then we simply delete the secret keys at the respective positions; conceptually, this corresponds to inserting the tag into the Bloom filter. For decryption, a ciphertext remains decryptable as long as at least one of the keys at its indexes is still present. We then simply choose one of the remaining keys at one of those indexes, for instance the lowest one, and decrypt with it. So puncturing amounts to nothing more than key deletion, which is what makes it cheap.

To give an idea of how large this can get, we came up with an example: if we want to support, say, two to the twenty punctures, which allows for a reasonable number of connections per day over the lifetime of the key, and we choose a false positive probability of ten to the minus three, then the Bloom filter itself has a size of about two megabytes. And what is important: the number of hash functions, which directly determines the number of keys deleted per puncturing and the number of components per ciphertext, is quite low for these parameters, about ten.

Okay, so regarding instantiations, we actually present three different ones, which come with different trade-offs. First, we present a generic black-box construction which we build from an IBE scheme; then we present a time-based version; and we also have a construction which we build from an identity-based broadcast encryption scheme. As I said, each of these comes with different trade-offs: for the IBE construction, the ciphertext contains one component per index, so its size grows with the number of hash functions; the time-based version additionally supports moving work across time slots; and for the broadcast-based version we get constant-size ciphertexts, but the construction comes at the cost of quite a large public key.
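Putting the pieces together, here is a toy sketch of the Bloom-filter-encryption bookkeeping just described: one key per filter position, encryption producing one component per tag-derived index, puncturing as key deletion, and decryption via the lowest surviving index. XOR with a stored pad stands in for the per-identity IBE encryptions, so this shows only the mechanics, not the cryptography (in the real scheme the sender uses public keys only, never the secret key table).

```python
import hashlib, os

M, K = 1024, 3                             # demo parameters: filter size, hash count

def indexes(tag: str):
    # Tag-derived filter positions (salted hash, illustrative choice).
    return sorted({int.from_bytes(hashlib.sha256(f"{i}|{tag}".encode()).digest(),
                                  "big") % M for i in range(K)})

def keygen():
    # One key per Bloom filter position; puncturing deletes entries.
    return {i: os.urandom(32) for i in range(M)}

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(keys, message: bytes, tag: str):
    # One component per index; any single surviving key recovers the message.
    # (In the real scheme these are IBE encryptions under public identities.)
    return {i: xor(message, keys[i]) for i in indexes(tag)}

def puncture(keys, tag: str):
    for i in indexes(tag):
        keys.pop(i, None)                  # deleting keys = setting filter bits

def decrypt(keys, ct, tag: str):
    for i in indexes(tag):                 # lowest surviving index wins
        if i in keys:
            return xor(ct[i], keys[i])
    return None                            # all keys gone: cannot decrypt
```

After `puncture(keys, tag)`, the ciphertext for that tag is lost forever, while ciphertexts under other tags almost always survive, failing only in the false-positive case where all their positions happen to be deleted too.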
So with that, let's take a look at the IBE-based instantiation. On the previous slide I called it IBE-based because the generic construction is black-box from any IBE scheme, but here we also exhibit a concrete instantiation based on the Boneh-Franklin IBE. In this instantiation, every position of the Bloom filter corresponds to one identity, and the secret key contains one IBE user secret key for each position. The ciphertext basically consists of the encryption randomness together with one component per index, so only a handful of elements for our parameters; the secret key, as said before, is large, which we accept because it is held by the server.

We also looked into how we can achieve CCA security. Basically, what we did was apply the Fujisaki-Okamoto transform to our scheme. The main thing we learned here is that this transform, and its analysis, requires perfect correctness, or at least a negligible correctness error. But in the context of our scheme, we deliberately have a non-negligible correctness error, which is why we had to revisit the transform and look carefully at the properties which we actually require. Essentially, what we need is a suitable set of correctness notions which capture puncturing and the error it introduces, and, in addition, we need some means to check whether the decryption algorithm will actually still be able to decrypt a given ciphertext, so that in our reduction we can simply reject ciphertexts whose tags have already been punctured.
For this, we introduce the notion of publicly checkable puncturing. For our scheme this check is actually very simple: the Bloom filter state determines which keys have been deleted, so given a tag, anyone can recompute its indexes and check whether all the corresponding positions have already been punctured. With this notion, and the refined correctness notions, we can carry the transform through.

Next, let's look at the time-based variant of the approach. In time-based Bloom filter encryption, time is divided into time slots, and within each time slot you can do the number of puncturings which you have provisioned the filter for; once a slot is exhausted, or the slot is over, you move on to the next one. The techniques we use here are similar to those used for forward-secure encryption: roughly speaking, we arrange keys in a tree so that the key material for the current time slot can be derived, and everything belonging to past slots can be deleted. For the key updates between slots we again pay something, but, as we will see, this work can be done offline.

So here we see a comparison with respect to the instantiations of the previous approach based on the Green-Miers puncturable encryption scheme. The main takeaway from this slide is that in our scheme the puncturing operation for a ciphertext is basically free online, whereas the expensive operations, in particular the key updates between time slots, can be moved offline. If you look at those key updates, they are quite expensive; but the crucial point is that the expensive parts no longer sit on the critical online path: they are done in an
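The publicly checkable puncturing property has a very direct form in this setting: given only the public bit array and a tag, anyone can decide whether decryption for that tag can still succeed. A sketch, with the same illustrative salted-hash index derivation as before:

```python
import hashlib

def indexes(tag: str, m: int, k: int):
    # Tag-derived filter positions (salted hash, illustrative choice).
    return [int.from_bytes(hashlib.sha256(f"{i}|{tag}".encode()).digest(),
                           "big") % m for i in range(k)]

def puncture_bits(bits, tag: str, k: int):
    for i in indexes(tag, len(bits), k):
        bits[i] = 1                        # public mirror of deleting the keys

def already_punctured(bits, tag: str, k: int) -> bool:
    """Public check: decryption for `tag` must fail iff every derived
    position has its bit set (all corresponding keys were deleted)."""
    return all(bits[i] for i in indexes(tag, len(bits), k))
```

This is what lets a security reduction reject undecryptable ciphertexts without knowing the secret key: the check depends only on public information.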
offline phase, which is what ultimately makes the scheme practical. So, to conclude once again: for the previous approaches, the puncturing operations are very demanding, while with our approach puncturing is essentially free online, as discussed on the previous slide. We have very efficient encryption; essentially, the sender only needs to do a single encryption operation, which should be really fast. And after decryption, we only need to do the key deletions; in particular, we do not need to update keys on the critical path, because those operations are moved to the offline phase. So we should achieve decryption plus puncturing very quickly; I say "should" because we have not implemented it yet, but I would be very surprised if it were slow, because these really are simple operations. And finally, I want to note that Bloom filter encryption might also have other applications beyond forward-secret 0-RTT key exchange. So this brings me to the end, thank you.

Question: [partly inaudible] I am wondering about your correctness conditions: as keys get deleted, at some point you would need a fresh key, or a full session again. Do you think the filter will be large enough in practice, given that you have not seen in advance how many connections you will need to handle?

Answer: [partly inaudible] That is a very interesting point, and I am not sure in the context of 0-RTT specifically; maybe you can achieve something similar there. But it might also make sense to look at this question in the context of other applications, like messaging, for example.
[partly inaudible] Because there, the way data arrives over time fits the structure quite well, so this could be a really interesting direction.

Question: Is there an efficiency trade-off between the different versions? For example, if you have more hash functions, you have more operations on the client side; what is the cost for the client, for the sender of the ciphertext?

Answer: For the sender it should actually be cheap: you simply need to do one encryption and derive the indexes from the tag, which is basically just hashing. The encryption itself is also inexpensive.

Question: [partly inaudible] Say you receive a ciphertext, and even if you cannot fully decrypt it, what do you do with it afterwards? Do you throw it away? And when you puncture, do you remove the positions from the Bloom filter, does that space ever get repopulated, or do you just never use that space again? Does that make sense?

Okay, let's move on to the next talk, and let's thank the speaker again.