Okay, so. Remember where we were: under the intercept-resend attack, Eve steals information about the key. If you want, you can quantify the stolen information, and it comes out to be about one half of a bit per sifted bit. Now, what is the Alice-Bob mutual information? Let me call this the mutual information I(α:β). This is actually simple, because it is computed from classical information theory. So remember, what is going on between the encoding and the decoding is a binary symmetric channel: the bit is flipped with probability p, which is the QBER, and for this attack p is 0.25; it is transmitted correctly with probability 1 − p. So the mutual information is the Shannon entropy of α minus the conditional Shannon entropy of α given β, okay? That's the mutual information between Alice and Bob. Now, the Shannon entropy of α is one bit, okay? Is one. So in this case this reduces to a binary Shannon entropy, okay. Now, what is the other term? Suppose that Bob gets β; then the conditional term H(α|β) measures the residual uncertainty about α once β is known.
And that conditional term is the channel entropy. For this particular process it is determined by p, the transition probability. So the probability that Bob gets α correctly is 1 − p, and the probability of an error is p, okay, in this case. So what you compute then is — what is it called — the binary entropy, so that's H2(p). For those who don't remember the binary entropy: it's just this guy here, H2(p) = −p log2 p − (1 − p) log2(1 − p), okay. So you get something like this: I(α:β) = 1 − H2(p), and in particular this p is one quarter, okay. And the problem is that when you compute this quantity and you put in one quarter, that quantity is less than Eve's stolen information, for this attack, okay. That's the problem. So somehow this QBER is too high, and it is making this mutual information too low with respect to the information that Eve has stolen, okay. You can find the exact numbers in the document; I think this is around 0.19, so it's quite a bit lower than Eve's one half, okay. So what's the point? Well, look at this. This is actually a theorem from classical information theory, by Csiszár and Körner. A key rate can be distilled — a key can be extracted — if you have something like this: the Alice-Bob mutual information minus the eavesdropper's information, and this quantity must be bigger than zero.
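To make the numbers above concrete, here is a small stand-alone calculation (a sketch; the value 0.5 for Eve's information is the half-bit-per-sifted-bit figure quoted earlier):

```python
import math

def h2(p: float) -> float:
    """Binary Shannon entropy H2(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

p = 0.25               # QBER induced by intercept-resend
i_ab = 1 - h2(p)       # Alice-Bob mutual information per sifted bit
i_eve = 0.5            # Eve's stolen information per sifted bit
print(round(i_ab, 4))  # 0.1887 -- indeed below Eve's 0.5
```

So 1 − H2(1/4) ≈ 0.19, which is the "around 0.19" figure: less than the 0.5 bit Eve holds, hence no key can be distilled at 25% QBER.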
So if you are in this situation here: say that's my Alice-Bob mutual information, and up to here is the part which is eavesdropped by Eve; then the remaining part is going to be the rate, asymptotically. I mean, there are procedures to transform the data into a key which has this rate. When I say rate here, it's basically a number of bits — it will be a number of secret bits per use of the channel, for instance, okay; you can define it in different ways, but typically it's per use of the channel. So it's the number of secret bits per channel use, and it's this quantity. Basically, you see, this is like a shortening of the data into something which is secret. Now, as I told you, in this case it doesn't happen, because the noise is too much and this R will be negative, okay: if p, the QBER, is too high, the rate is negative, which means you have to abort. But if the QBER is lower, then you can have a positive key rate. So it's very important that Alice and Bob are able to estimate the error on the line — that's the parameter estimation, okay. So, if you want: we started with long strings of values, like 0, 1, 0 and so on, from n uses of the channel, okay. Then they became about half of the size after sifting, because of the basis mismatch. Then you have to take away one part for parameter estimation — a random subset, let's say of size m — so you end up with something shorter, namely n/2 − m bits. Okay, let's put it that way.
Now it's not finished; we are doing this kind of shortening of the key. If, after this parameter estimation, Alice and Bob find that the QBER is less than a value that I'll show you in a bit — which is actually about 11%, so not 25% — this means the attack is not so strong: it's not intercept-resend, it's some weaker attack, so we can actually have a positive key rate, okay, and we can achieve the secret key. So what's going on there? It's still not finished, because these strings contain both errors and Eve's partial information. So now there are two procedures — I'm not going to explain them in detail — the first of which is error correction: assuming the QBER is acceptable and you have a positive rate, I still have to clean these strings from errors; and second, I somehow have to decouple the eavesdropper from them. The second procedure is called privacy amplification. These are classical procedures done on the data, a kind of classical post-processing, okay. So when I do these two procedures of error correction and privacy amplification, I'm basically cleaning the strings from the errors and from Eve's information. Of course there is some reduction as well — the data gets shortened again — so you get n/2 − m minus an amount for error correction, okay, and then it goes even shorter: minus an amount for privacy amplification. And then you have two final strings, okay, which are your key.
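As a toy illustration of this bookkeeping (all the numbers below are made up for the example; the error-correction and privacy-amplification leakages in a real run depend on the measured QBER and the protocol):

```python
n = 1_000_000        # uses of the channel
sifted = n // 2      # basis sifting keeps about half the bits
m = 10_000           # random subset sacrificed for parameter estimation
leak_ec = 120_000    # bits lost to error correction (hypothetical figure)
leak_pa = 80_000     # bits removed by privacy amplification (hypothetical)

key_len = sifted - m - leak_ec - leak_pa
print(key_len)       # 290000 secret bits left in this example
```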
Yes, the same on both sides. And these are your — this is basically your secret key. Now, the point is that since I'm starting from something that can be very big here, these procedures can still extract a lot of bits, okay. At the end of the day, the length of these strings could be of the order of, I don't know, 10^8, 10^9 — a lot of bits you can actually share. And this procedure is basically what happens in every QKD protocol: you have these kinds of steps, and when you are sure about the QBER, you can use error correction and privacy amplification to come up with your secret key. Sorry, I had to drink some water. Okay, so let me tell you about the optimal attack — where this 11% number comes from. The optimal attack is the attack which somehow determines the lowest threshold. So, Alice and Bob estimate the noise on the channel, and the question is: what is the maximal QBER they may tolerate? This is the value: anything which is above 11% is bad, and you need to abort the protocol; anything below is fine, and you can go on and distill a key. So what is an attack that produces exactly 11% of QBER — which is what I call an optimal eavesdropping strategy for BB84? It's not based on intercept-resend; it's a more complicated one. It's a kind of less invasive attack, okay, where the system is not directly measured by the eavesdropper. Instead she attaches ancillas — basically auxiliary systems — to the input signal, and only later measures them in some optimal way. It goes something like this. Suppose that Alice is encoding a bit: call it A, which could be 0, 1 or plus, minus — well, depends on the basis.
Okay, so it's completely symmetric, right? So for simplicity just consider the Z basis: let's say A could be 0 or 1, and let me define A-orthogonal as the other choice — if A is 0, A⊥ is 1, and if A is 1, A⊥ is 0. So basically I have two possible inputs here, which are orthogonal states, okay. And the same for the other basis: plus, minus in the X basis; what I say for the Z basis holds equally for the X basis. Okay, so she's encoding this bit here, and now suppose she's sending that state to Bob. Now what's going on is that Eve, instead of doing intercept-resend, can apply a unitary here, okay, with some ancilla input state that she has, right, and then she has some output, okay, while the signal goes on to Bob. Her output can even be stored in a quantum memory and kept there a long time. So she's collecting a lot of outputs in this way: every time she applies this unitary, she collects the corresponding output for each signal Alice sends. She stores all this in her quantum memory, and she waits. She listens to Alice and Bob communicating; she waits until they agree on the bases, until they finish sifting — she waits as much as she can. At the end, after all this waiting, knowing all the classical communication, she applies some very general measurement — a joint measurement on the whole quantum memory — so powerful that it achieves the Holevo bound, okay, which is the maximum information you can extract from an ensemble of states. So this measurement gives her this so-called Holevo bound. It's a very powerful attack, okay. So let's talk first about this: the first part I want to discuss is this interaction, the dynamics of this attack.
And then second, the information-theoretic part, which is basically the Holevo bound, and what the rate is, given that QBER of 11%. Now, what is this interaction here? It can be represented this way. It's a unitary — can we just simply call it E? Okay. It's applied to the input together with Eve's ancilla, and then you get the same input with some ancilla state, which I call f_a — that is the output state of the eavesdropper — plus a perpendicular term with another ancilla state, d_a. Okay. Yes — sorry, yes, there is no cloning, there is no cloning. But even though there is no cloning, you still have to evaluate what is the best performance the eavesdropper can achieve using some unitary. It could also be a quantum cloning machine. A cloning machine is not going to clone everything perfectly, but you still get an output, which may be imperfect. So if there is a cloning machine here, right, Eve gets some state conditioned on the input. Now, that state is not equal to the input, okay, because of no-cloning, but it may be not so far from it, and you need to quantify how far it is from the input — essentially the fidelity between that state and the input. Because realistic cloning machines — realistic, I mean they have noise — give you two outputs, right, and you may tune the machine in such a way that one output is good and one output is bad. For instance, the eavesdropper could give one clone to Bob and keep the good one for herself — that's one possibility. So you need to understand, basically, that a realistic cloning machine gives you two clones; let me call them the clone for Bob and the clone for Eve.
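The interaction just described can be written schematically like this (a sketch in unnormalized notation, reconstructed from the verbal description; the ancilla states and their relative weights depend on the particular unitary Eve chooses):

```latex
% Eve's unitary E acting on Alice's signal |a> plus Eve's ancilla |e>:
% the signal either passes unchanged, tagging Eve's ancilla with |f_a>,
% or is flipped to the orthogonal state, tagging it with |d_a>.
E \, |a\rangle_A \, |e\rangle_E
  \;=\; |a\rangle_A \, |f_a\rangle_E
  \;+\; |a^{\perp}\rangle_A \, |d_a\rangle_E .
```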
Now, a cloning machine can be tuned in such a way that either this clone is close to the input and the other one is not, or the opposite. So, for instance, it depends how she uses it. If she uses this configuration, she's actually getting a lot of information about the input, but there's a lot of noise, because what's going to Bob is very far from what Alice sent. There is always a trade-off between, say, the information that Eve is getting about the input and the noise she is inserting into the channel. So in this case she has a lot of information, but a lot of noise for Bob; if she uses the other configuration, there is little noise for Bob, but she gets the bad clone, so her mutual information is low. [Question from the audience.] Sorry — well, okay. You cannot really correct errors in that case, because of something like the data processing inequality. If you wanted to do that, you would need quantum error correction here: if there were a quantum error-correcting code there, right, then I could actually do some quantum decoding. But if you don't use a quantum error-correcting code — if it's just a protocol like this, with signals which are not protected by a quantum error-correcting code — there's not much you can do. It's just an information-theoretic comparison between one output and the other; there's no real error correction you can do, because anything here, by the data processing inequality, gives you lower information than what you have there. But we should discuss that later in more detail. Okay, so, what's the deal here? When she does this, then this part is basically what is left to Bob, and this is basically her state, okay? And her state carries what was encoded here. Okay?
Now, without going into many details, because there isn't enough time, what I want to tell you is that when you analyze this attack, it's actually quite simple, okay — if you have basic knowledge of these kinds of bounds, it's quite easy to study. So, if you see here, I'm talking about BB84, right? Exactly. And then, okay, the eavesdropper can make some choices: these choices here specify a potential form she can use for that unitary, okay? That's one potential eavesdropper and one potential unitary. There are many, actually; this just specifies one, okay? So, if you use that — sorry, I'm skipping this — what you have to compute is this one: this is the Holevo bound I'm talking about. Okay? This. And that's the maximum information she can get. So, you fix that unitary interaction, she collects all these states, and you compute the Holevo bound — that's the maximum information she can get. And if you assume this situation here, then what is the rate? The rate is the difference between the Alice-Bob mutual information and the eavesdropper's Holevo bound, okay? I'll tell you about this Holevo bound now. So, the rate of the protocol is the Alice-Bob mutual information — remember, as before — but now minus this guy, okay, the Holevo bound, which is given by this more complicated process and so on. So, if you remember, what is the Holevo bound? It is this. Suppose you have an ensemble of states in general, okay? So, these are states ρ_k with probabilities p_k, okay?
So, for instance, in the simplest case, with probability p_0 you have one state and with probability p_1 you have another state, okay? Then you compute the average state of this ensemble — let me call that ensemble E — which is given by averaging all these states, okay? Now, the point is that what Eve has collected here is exactly such an ensemble of states, okay? And the Holevo bound of this ensemble is the von Neumann entropy of the average state — this guy — minus the average of the von Neumann entropies of the single states in the ensemble. It is the quantum generalization of the mutual information, if you want. The von Neumann entropy is something like this: minus the trace of ρ log2 ρ; you have formulas to compute this, okay? So, you have to imagine that she's collecting a lot of these outputs, which form an ensemble of states for her — and which have memory of what the input was, of course, okay? This label here is the input label of Alice, the encoding if you want — in our case, A, to make it precise for the scheme, to give you an idea. So, she wants to learn the encoding A, okay? And the mutual information between that encoding A and Eve is bounded by this Holevo bound. So that is really the maximum information the eavesdropper can steal about the encoding A, okay? So: you take this ensemble of states, you compute the average state, you compute the von Neumann entropy of the average state and the von Neumann entropies of the individual states, you take this difference, and you have this number — this Holevo bound, okay?
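The Holevo quantity χ = S(ρ̄) − Σₖ pₖ S(ρₖ) can be computed directly. A minimal sketch for qubits, using a toy two-state ensemble (|0⟩ and |+⟩ with equal probability — an illustration, not the actual BB84 attack states):

```python
import math

def s_bits(evals):
    """Von Neumann entropy S = -sum lam*log2(lam) over eigenvalues, in bits."""
    return -sum(l * math.log2(l) for l in evals if l > 1e-12)

def evals_2x2(a, b, c):
    """Eigenvalues of the real symmetric matrix [[a, b], [b, c]]."""
    t = (a + c) / 2
    d = math.sqrt(((a - c) / 2) ** 2 + b * b)
    return (t - d, t + d)

# Ensemble: |0><0| and |+><+| with probabilities 1/2, 1/2,
# each density matrix stored as the (a, b, c) entries of [[a, b], [b, c]].
p = [0.5, 0.5]
rho0 = (1.0, 0.0, 0.0)   # |0><0|
rho1 = (0.5, 0.5, 0.5)   # |+><+|
avg = tuple(p[0] * x + p[1] * y for x, y in zip(rho0, rho1))

# chi = S(average state) - average of S(rho_k); pure states have S = 0.
chi = s_bits(evals_2x2(*avg)) - sum(
    pk * s_bits(evals_2x2(*r)) for pk, r in zip(p, (rho0, rho1)))
print(round(chi, 4))     # about 0.6009 bits
```

So even though each signal carries one classical bit, a measurement on this ensemble can recover at most χ ≈ 0.6 bits about the label, which is exactly the kind of bound Eve faces.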
You can find the details here in the notes; as long as you understand the concept, it is not such a big problem. But when you do the calculation — remarkably, for this protocol, BB84, with this attack, this strategy — you can compute that Holevo bound, and finally you find that the rate, defined as before, is basically one minus two times the binary entropy of the QBER: R = 1 − 2 H2(Q), where Q is the QBER. Okay, so this is the rate you achieve for BB84 under this optimal attack, when this rate is positive. Now, if you simply set this equal to zero, you get the QBER that you can tolerate — by imposing R = 0 and solving this equation. Remember, this comes out to be the binary Shannon entropy through the calculation, even though we started with the von Neumann entropy; that was quite a huge simplification. So if you put R = 0, which is really the threshold condition, it gives you a corresponding threshold value for the QBER, okay, and when you solve this equation you find that the QBER threshold value is about 11%. And a more general analysis, okay, even with more powerful attacks than this one, shows that this is actually the minimum — that is the security threshold of BB84. So no matter how strong, how good your attack is, you cannot go below that. So what does it mean? If my QBER is below 11%, okay, then I'm fine: I know that there is a positive rate, no matter what the attack is, because any attack is included, okay. So below 11% there is a positive rate; I can then do error correction and privacy amplification and extract a key whose rate is given by this positive quantity, in terms of bits per use of the protocol,
or per use of the channel. If I am at the threshold, of course, the rate is zero and I have to abort the protocol. And that's basically the content here. In the notes you can also find more general proofs about this — about the unconditional security. The same QBER threshold turns out to hold: even if you consider more general attacks, more general scenarios, whatever approach, it doesn't matter; that is the threshold, okay. So we have the break now, and after the break I hope to have enough time to talk about CV-QKD and some QKD capacities. We will go through the main concepts, but I really wanted to show you BB84 — the concepts and the mathematics of the security proofs, at least the most interesting parts, which are at the same time the easier ones to do. That's why I spent two hours instead of one, probably. Thank you, see you after the break.
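The rate formula and the 11% threshold discussed above can be checked numerically; a minimal sketch that just bisects R(Q) = 1 − 2 H2(Q) for its zero on (0, 1/2):

```python
import math

def h2(q: float) -> float:
    """Binary Shannon entropy H2(q) in bits."""
    if q in (0.0, 1.0):
        return 0.0
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

def rate(q: float) -> float:
    """BB84 key rate under the optimal attack: R(Q) = 1 - 2*H2(Q)."""
    return 1 - 2 * h2(q)

# Bisect for the threshold QBER where the rate crosses zero.
lo, hi = 1e-9, 0.5
for _ in range(60):
    mid = (lo + hi) / 2
    if rate(mid) > 0:
        lo = mid       # rate still positive: threshold lies higher
    else:
        hi = mid       # rate non-positive: threshold lies lower
print(round(hi, 3))    # 0.11 -- the 11% threshold
```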