OK. Assume that this point in phase space, at some distance from the origin, is rotating at some angular frequency omega. So you have something like this in time: the point is rotating, right? Now, suppose I look at this point.
I project that point onto the q axis — it's like looking at the motion on this plane, right? I would see this point oscillating back and forth around zero: that's the position of the harmonic oscillator, going back and forth. I would see the same if I look at the p axis — the momentum, or the velocity, if you want, of the harmonic oscillator. Now, suppose I add a time axis as well; then, as the point rotates, it traces out an oscillation — I'm creating a wave, right? So a point rotating in this plane, in the phase space, describes a classical wave with some frequency, which is related to this angular frequency omega, and also a wavelength — of course, you can compute all this stuff. And this angle here is the initial phase, which gives you the initial point. So that point completely describes the wave. Okay, now, why am I talking about this? Because if this is actually an electromagnetic wave, and I want to describe it as a quantum system, what I get is noise. Quantumly, when I look at a signal like this — suppose it is not a very strong signal, just a few photons — and suppose I am somehow able to measure the shape of this wave, what I would see is noise: something like this, okay?
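A minimal numerical sketch of this rotating-point picture (the amplitude A, frequency omega and initial phase phi below are illustrative choices, not values from the board):

```python
import numpy as np

# A point rotating in (q, p) phase space at angular frequency omega.
# Projecting it onto the q axis (or the p axis) gives a harmonic
# oscillation -- the classical wave described in the lecture.
A, omega, phi = 2.0, 2 * np.pi * 5.0, 0.3
t = np.linspace(0.0, 1.0, 1000)

q = A * np.cos(omega * t + phi)   # projection on the q axis
p = -A * np.sin(omega * t + phi)  # projection on the p axis

# The point stays on a circle of radius A: q^2 + p^2 = A^2 at all times.
radius = np.sqrt(q**2 + p**2)
print(np.allclose(radius, A))  # True: uniform rotation, sinusoidal projections
```

The initial phase phi fixes the starting point on the circle, which is why the single phase-space point (radius plus angle) fully describes the wave.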
So there is some kind of mean value, with values distributed around it — most likely a Gaussian distribution of values, okay? Now, what are these values? They are the projections onto P or Q. When I say Q and P: for a mechanical oscillator they could be position and momentum; when I talk about an electromagnetic field, they are basically the two quadrature components of the electric field. So this is my electric field, which is oscillating; this is the P component and this is the Q component. The main idea is that quantumly you have these uncertainties — you have quantum noise. So if you go back to the phase-space picture, you can say: a more reasonable description of the state of my wave, my bosonic mode, is not just a point but a distribution — a quasi-probability distribution of points in the phase space, which is this guy here. When I project it here, I get a Gaussian distribution with some variance, which gives me this kind of blob; when I project it there, I get a similar Gaussian distribution. Of course, this can all be done very precisely with the theory: Q and P become operators, so your phase space is a quantum phase space. One way to do this is to introduce the annihilation and creation operators of the bosonic mode, and from them define position-like and momentum-like quadrature operators — something like Q = a + a† and P = -i(a - a†), where the exact factors depend on the notation.
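A quick numerical sketch of these quadrature operators in a truncated Fock basis (the convention Q = a + a†, P = -i(a - a†) and the truncation size N are my choices; as noted, factors vary by textbook):

```python
import numpy as np

N = 30
# Annihilation operator in the number basis, truncated at N levels.
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
q = a + a.conj().T            # position-like quadrature
p = -1j * (a - a.conj().T)    # momentum-like quadrature

comm = q @ p - p @ q
# In these (shot-noise) units [Q, P] = 2i, up to an artifact confined
# to the last Fock level of the truncated space.
print(np.allclose(comm[:-1, :-1], 2j * np.eye(N)[:-1, :-1]))  # True
```

This non-zero commutator is exactly what forbids measuring Q and P simultaneously with perfect precision, which is the origin of the quantum noise in the blob picture.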
Now, whether there is a square root of 2 depends on the notation: that is the typical quantum-optics notation, and there are actually several conventions — in quantum information one typically works in shot-noise units. These two operators do not commute, and what you have — depending on how you define them — is a commutation relation like [Q, P] = 2i in these units. From this you can derive the corresponding uncertainty principle. For instance, coherent states are the states with minimum noise in Q and P: calling such a state |alpha>, it has a variance in Q equal to the variance in P equal to 1 — one shot-noise unit, one unit of vacuum noise. That's the minimum variance you can get, symmetrically in Q and P. If you want the precise construction: you start from a state rho on the Hilbert space; from it you write down the characteristic function, which is something like chi(xi) = Tr[rho D(xi)], where D(xi) is a suitable displacement operator; then you take the Fourier transform of this and you come up with the Wigner function, which lives in the (Q, P) plane, right? This Wigner function is in one-to-one correspondence with the state, and it is exactly the function we were drawing here — in this case a Gaussian function. A Gaussian Wigner function in x = (Q, P) is something of the order of exp[-(x - x̄)ᵀ V⁻¹ (x - x̄) / 2], where x̄ is the mean value, V is the covariance matrix, and there is a normalization factor in front. For a coherent state this takes a very simple form, because
the mean value x̄ is connected to the complex amplitude of the coherent state, and the covariance matrix is just the identity. I don't know if I'm going too fast — sorry, tell me if I am. But basically this guy here is the Wigner function, and for a coherent state it is very simple: the variance here is 1 and the variance there is 1, so the contour of this Gaussian Wigner function is a circle, and this distance here is basically alpha, the amplitude, or equivalently it is related to the mean value x̄. Why all this? Because, even if you forget the mathematics, the point is that coherent states can be used for QKD in a very good way. They can be used because they are non-orthogonal. Take the inner product between one coherent state and another: since at the end of the day each one is a Gaussian in the phase space — take a coherent state here with amplitude alpha, and another one there with amplitude beta, wherever you like — because they are Gaussian functions with tails, there is always an overlap. Because of that overlap, these states are always non-orthogonal, no matter how you choose the amplitudes alpha and beta. In fact, if I remember correctly, |<alpha|beta>|² = exp(-|alpha - beta|²). So the overlap is not zero — it is different from zero, and it goes to zero only in the limit where the states are very far apart. So they are non-orthogonal, and I have a very good ingredient, which is non-orthogonality. Now, if I want, I can exploit that for QKD, quantum key distribution. And how can I do that?
Now, one of the basic protocols is based on the fact that you may use the amplitudes to encode information. It is continuous-variable information, because these amplitudes are actually complex numbers. One way to do it is to combine a classical encoding of a variable with the use of that variable as the amplitude of a state, and then use that state over the communication channel. This protocol was developed in 2002 by Grosshans and Grangier in the French group, with a modification of the protocol developed by other people in Australia. One important version is called GG02; another important one is called the no-switching protocol. It works in this way — especially the no-switching one. Alice chooses a classical variable alpha, which is a complex variable. How does she choose it? Ideally, she picks it at random according to a Gaussian distribution with zero mean and some variance, say mu. That's classical — a bivariate Gaussian distribution over the two real components. So instead of picking 0 or 1, she is just picking a random complex number alpha according to this Gaussian distribution. Picture the phase space with a very broad Gaussian distribution of variance mu on it, and she picks amplitudes randomly from it. Okay, that's the first step, and alpha is the variable which is important for the key generation — it's the variable that Bob should recover. The next step, of course, is: now I prepare a coherent state with that amplitude. Okay.
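Alice's sampling step can be sketched in a few lines (giving each quadrature variance mu/2, so that the mean signal energy E[|alpha|²] = mu, is one common convention; mu = 20 is an illustrative modulation variance):

```python
import numpy as np

# Alice's first step in GG02 / no-switching: draw the complex amplitude
# alpha from a zero-mean bivariate Gaussian of total variance mu.
rng = np.random.default_rng(0)
mu, n = 20.0, 100_000

alphas = rng.normal(0, np.sqrt(mu / 2), n) \
    + 1j * rng.normal(0, np.sqrt(mu / 2), n)

# Sanity check: the average signal energy matches the modulation variance.
print(np.isclose(np.mean(np.abs(alphas)**2), mu, rtol=0.05))  # True
# Each alpha plays the role of the classical symbol: it becomes the
# amplitude of the coherent state sent into the channel.
```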
So she prepares a coherent state with that amplitude and sends this coherent state through the channel, and here is Bob, who wants to detect it — to understand what the encoding was. What Bob does in this protocol is a measurement called heterodyne detection. Heterodyne detection is a kind of joint noisy measurement of both quadratures — basically a noisy measurement of the amplitude. From the heterodyne, Bob gets a variable beta which is, well, not so far from alpha: as a matter of fact, it is basically alpha plus some extra vacuum noise. Now, suppose there is nobody in the middle, and you want to compute the mutual information between Alice and Bob, which is the mutual information between these two variables, right? It is something that can be computed: mu is basically the variance of the signal sent into the channel, and Bob is getting alpha plus some noise, so the variable beta has variance mu + 1, where the 1 is the extra vacuum noise. He is not able to measure alpha perfectly; there is some extra noise that gives you a spreading of his measurement, right?
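A toy simulation of this no-eavesdropper channel: Bob's heterodyne outcome is modeled as beta = alpha plus complex Gaussian noise of variance nu = 1 (one vacuum unit). The noise model and the unit conventions are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, nu, n = 20.0, 1.0, 200_000

alpha = rng.normal(0, np.sqrt(mu / 2), n) \
    + 1j * rng.normal(0, np.sqrt(mu / 2), n)
noise = rng.normal(0, np.sqrt(nu / 2), n) \
    + 1j * rng.normal(0, np.sqrt(nu / 2), n)
beta = alpha + noise  # heterodyne outcome: alpha plus vacuum noise

# Bob's variable has variance mu + 1: signal plus the extra vacuum noise.
print(np.isclose(np.mean(np.abs(beta)**2), mu + nu, rtol=0.05))  # True

# Gaussian mutual information per use for this complex channel,
# roughly log2(mu) for large modulation.
I_AB = np.log2(1 + mu / nu)
print(round(I_AB, 2))  # 4.39
```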
And so you get something of this type — a mutual information which is roughly log(mu), something of that type; not exactly that, but similar to it. The problem with the measurement is that Q and P do not commute, so you cannot measure them both exactly and simultaneously. In fact, there is a measurement that measures only Q perfectly, but does not measure P, and one that measures P but not Q: homodyne detection. Homodyne detection measures a single quadrature, Q or P, with perfect detection. It is when you attempt both measurements at once that there is a problem. One way is to use two homodynes: I take the signal, use a 50-50 beam splitter with vacuum at the other input port, and at the two outputs I homodyne Q on one arm and P on the other. I can do that — but when I do, I have to split the signal in two. So this beam splitter here: what does it do?
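A toy classical-noise sketch of what this 50-50 beam splitter does: the signal (mean amplitude alpha) mixes with a vacuum mode v, giving outputs (alpha ± v)/√2. Treating vacuum as complex Gaussian noise of unit variance, and ignoring the operator character of the modes, are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, n = 3.0 + 1.0j, 200_000

v = rng.normal(0, np.sqrt(0.5), n) + 1j * rng.normal(0, np.sqrt(0.5), n)
out1 = (alpha + v) / np.sqrt(2)  # arm where Q is homodyned
out2 = (alpha - v) / np.sqrt(2)  # arm where P is homodyned

# Each arm carries half the photons: mean amplitude alpha/sqrt(2).
print(np.isclose(np.mean(out1), alpha / np.sqrt(2), atol=0.02))  # True
# Rescaling an arm back by sqrt(2) restores alpha, but with one full
# unit of vacuum noise -- the extra noise the lecture mentions.
print(np.isclose(np.var(np.sqrt(2) * out1), 1.0, rtol=0.05))  # True
```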
If you do the transformation, it basically cuts the photons in half: you get one output here and one output there, each with a factor of 1 over square root of 2, because the mean photon number at the input is |alpha|². The splitter is 50-50: half of the photons in one arm, half in the other, right? So when you do this, you are effectively measuring something like Q over square root of 2 on one arm and P over square root of 2 on the other. And when you want to get rid of this square root of 2 — when you amplify, or rescale, the outcome back — you pick up extra vacuum noise: you are really measuring an attenuated version of the state, and going back to the original scale adds extra noise. I can tell you about this in more detail later. Anyway, the idea is that you get this mutual information, and you can write it down. Now, potentially Alice and Bob can use this — but what if there is an eavesdropper? An eavesdropper in the middle can do all sorts of things: she can do intercept-resend, performing heterodyne herself and sending a new signal on to Bob, or she can use a quantum cloning machine as well. A very simple analysis of this protocol is based on a Gaussian quantum cloning machine. If Eve uses that, the Gaussian cloning machine creates two outputs — one for Eve and one for Bob. Because we cannot clone, by the no-cloning theorem, the Gaussian cloning machine actually creates two imperfect copies of the input. Imperfect copying for continuous variables means the copies are thermalized: the input has variance 1 and 1 in Q and P, while the output clones have extra variance.
So let's say this is the situation: the plane is Q and P, the input is alpha, and its variance is 1 and 1, if you want — while over here is the state kept by the eavesdropper. Now, the cloning machine is going to add noise. If the variance of Alice's state is, say, 1, then the variance of Bob's clone is something like 1 + sigma², and the other clone also has some noise — it could be the opposite — with variance 1 + 1/sigma². In this case I'm assuming that sigma² is less than 1. So what is going on is that when the cloning machine is applied, the state gets extra thermal noise, and this affects both clones. It is up to the eavesdropper to decide which is the better clone and which is the worse one: in this case, the better clone goes to Bob and the worse clone is kept by the eavesdropper, and sigma² is just the extra noise added by the cloning machine. Of course, the eavesdropper could do the opposite: she could use sigma² bigger than 1, in which case Bob's clone would be noisier and hers less noisy. So — and I'm really trying to simplify this, because time is running — suppose you have this quantum cloning machine adding this extra thermal noise sigma², and you are in that situation here. Of course, the eavesdropper can do the same thing on her side: she can heterodyne her clone, and she is going to get some kind of
output variable gamma, which is, again, some noisy version of alpha. So when I consider the whole picture, I have a rate of this type: the mutual information between Alice's variable and Bob's variable, minus the mutual information between Alice's variable and Eve's variable — which is the information that is stolen. And I want this rate to be positive, right? That's my target. Now, in this simple attack the rate becomes a function of that noise: the higher the noise as seen by Bob, the lower the rate. There is always this tradeoff in QKD, and here it can be quantified: if I'm not wrong, the first quantity, when you take everything into account, should be of the order of log[(mu + sigma²) / sigma²], and the second one becomes log[(mu + 1/sigma²) / (1/sigma²)] — something like this. So where does this formula come from?
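With illustrative numbers, this rate changes sign exactly at sigma² = 1 (mu = 20 and the base-2 logarithms are my assumptions; the expressions are the ones written on the board):

```python
import numpy as np

def rate(mu, s):
    # R = I(A:B) - I(A:E) with the board's expressions, s = sigma^2.
    # Note (mu + 1/s) / (1/s) simplifies to (mu + 1/s) * s.
    return np.log2((mu + s) / s) - np.log2((mu + 1 / s) * s)

mu = 20.0
print(round(rate(mu, 0.5), 3))  # 1.898  -> positive: key possible
print(round(rate(mu, 1.0), 3))  # 0.0    -> threshold at one shot-noise unit
print(round(rate(mu, 2.0), 3))  # -1.898 -> Eve gets the better clone
```

The symmetry of the two values around zero reflects that swapping sigma² for 1/sigma² exchanges the roles of Bob's and Eve's clones.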
This formula simply comes from the capacity of the Gaussian channel — it is related to that. Remember, for a Gaussian channel with a single real variable you can write something like log(1 + SNR), one plus the signal-to-noise ratio. We are not writing exactly that, because here we have two real variables — it is a complex variable, not a real one — and so you get this kind of structure, signal plus noise over noise; that does not make too much difference, but the numerator is basically the signal plus noise for Bob. So this comes from that structure. When you take the difference of the two terms, it simply tells you that, at the end of the day, the rate is positive iff sigma² is less than one — you can see it from those terms. So if sigma² is less than one — if the noise added here is below that threshold — they can extract a key and go home: they have strings of complex variables, which they can digitize into binary digits, then apply error correction and privacy amplification and so on, as before with the BB84 protocol, and come up with a key. But if the noise is above one — one shot-noise unit, in terms of phase space — they can't, because it means the eavesdropper gets the good clone, and so more information than Bob, and the protocol is not secure. Now, this was about the quantum cloning machine. More generally, you want to consider Gaussian attacks and more complicated attacks, where in general there can be a much more general interaction here. You can extend the proof, also to realistic scenarios. The most realistic scenario you can consider is when you have a fiber from here to here, with some losses, and you assume the eavesdropper is actually controlling the fiber. What does that mean? It means that
she can in principle replace the fiber with an ideal fiber with no loss, but insert a beam splitter in the middle, and from this splitter she is going to get some of the input signal. That is the most typical attack: the eavesdropper puts in a beam splitter with some transmissivity eta; into the other port she injects states she can control; and she puts a quantum memory here, where she collects all the outputs. So she is inserting this loss — this splitter with transmissivity eta — meaning a fraction eta of the photons pass through and she gets the remaining 1 - eta fraction, and all her outputs are stored in a quantum memory. At the end, she can apply a completely general joint quantum measurement on all the stored outputs, and by doing this she can reach the Holevo bound here as well. At that point you don't have this simple mutual information; you have to consider the Holevo bound again, and so on. When you do that, you can compute the rate for this very general attack, and the rate depends on the transmissivity eta and also on the extra noise introduced here. What is especially important to understand is that there is a rate in terms of the transmissivity eta. For instance, you may map the transmissivity eta into decibels — typical rates, for continuous variables and actually for other protocols as well, are plotted in decibels. Given the transmissivity eta of the splitter, you compute -10 log base 10 of eta, and that is the loss in decibels; for instance, a transmissivity of 1/2 gives about 3 dB. So typically you have a plot like this, where this axis is in
decibels and that axis is the rate of the protocol, and what you expect, of course, is that the higher the loss, the lower the rate, and at some point the rate goes to zero. That zero point is the maximum loss in decibels you can tolerate with the protocol; below it, you have a positive rate. That's the typical behavior. You can also map decibels into distance, which is very useful: typically you consider about 0.2 dB per kilometer in an optical fiber, so if you count, 3 dB — if I'm not wrong — should be about 15 kilometers, and then you can make the plot in terms of kilometers. That is the range you can achieve — the maximum range of your protocol; below that range, you have a positive rate. This brings me to the final part of this lecture, which is about exactly this. This type of protocol is very good in the sense that with continuous variables you are really able to achieve the highest rates possible: the promise of continuous variables is a rate above that of, say, BB84. But in order to get there, there are some technological challenges: theoretically, the continuous-variable protocols are the best; practically, at the moment, they are not. Okay, so the final topic, in these 12 minutes, is about the ultimate rate. We want to go long distance, we want the best rate possible — long-distance, high-rate private communication between Alice and Bob, who are very far apart, over an optical fiber or a free-space link. What is the best I can do? What is the limit — the secret key capacity, the maximum rate I can achieve in quantum key distribution? That's a quite difficult question; in fact, it took years to find the solution, but the solution is very remarkable at the end. So let me tell you this very final thing. This is the
problem: what is the secret key capacity of a lossy channel — lossy QKD? You have Alice here, you have Bob there, and you have a link — it could be an optical fiber or a free-space link — with transmissivity eta. Eta means that if Alice sends n̄ photons on average, Bob is going to get eta times n̄, and this also means that (1 - eta) times n̄ photons are being taken by the eavesdropper, right? The line is tapped: as long as the transmissivity is non-unit, some of the photons go to the eavesdropper. So it makes sense to ask the general question: given a fiber or a free-space link with this transmissivity, what is the maximum secret key rate Alice and Bob can achieve? Let me call that K. That would be the absolute upper bound on the key rate of any protocol Alice and Bob can implement: whatever protocol they use — continuous variable, whatever strategy — the corresponding rate cannot be higher than that. So what is that key rate? Well, we started to work on this in 2009 and found the solution in 2015. It is known as the PLOB bound — basically myself, Pirandola, and three other people: Laurenza, Ottaviani and Banchi. It is a very simple, remarkably simple formula: K = -log2(1 - eta), in secret bits per channel use. It is remarkably simple, yet it was extremely difficult to find. And what you see is that when eta is close to zero, which means long distances, this becomes about 1.44 eta bits per use. So it tells you there is a bound here — I have a capacity going like this — and anything point-to-point, anything across a single
use — anything with a single channel between Alice and Bob and nothing in the middle helping, so without repeaters — has to be below that region; the bound also serves as a benchmark for repeaters. The only way to beat it is to use something in the middle: cutting the line here and using a repeater — a Charlie helping the communication. For instance, you can split the link into two fibers each with transmissivity square root of eta; in that situation, you may beat the bound and go into the region above it. Now I am going to show you the current state of the art — the protocols compared to this PLOB bound, including BB84 and continuous-variable protocols. What happened after the bound was derived is that a lot of people started working to beat it, and repeater schemes were found which are efficient — of course, the experimental challenge is to make them efficient in practice as well. Actually, something even fancier than this was found: even if the node in the middle is Eve — Eve herself cutting the line — so even if the repeater is untrusted, a so-called untrusted repeater or untrusted relay, even in that case you can beat the bound and go above it. As long as there is something in the middle helping, even if it is the eavesdropper herself, you can beat the bound; if there is nothing, you are stuck with that limit. Right, so there is a very nice figure here — where is it — there you go. So this is the bound I was talking about, the secret key capacity, the PLOB bound. Here is the distance in kilometers between Alice and Bob — how far apart they are along an optical fiber — and that is the secret key
rate, in terms of bits per use; that line is one bit per use. Don't be scared by these low quantities, because they are per use of the channel: some protocols actually work at the giga scale — basically 10^9 uses of the channel per second — so when you multiply, the quantity is huge. Then you see basically all the so-called point-to-point protocols: for instance, BB84 in various versions — we looked at the one-photon BB84 today, and that is the rate it can achieve. These are continuous-variable protocols — you see, they can achieve a higher rate. And that is the secret key capacity; and there is actually a continuous-variable protocol able to saturate the bound, that is, able to achieve that rate. Now, what are these other curves? They have been derived afterwards. This is the twin-field protocol; it was the first one to be derived. It is a protocol of this type, actually using a middle relay — in the most general scenario, it could be the eavesdropper helping the communication — and because of that, it is able to beat the bound here and have a better rate at long distances. All these other ones here — phase matching, sending-or-not-sending and other things — are twin-field-inspired protocols which have been developed since. At the same time, their performance cannot beat this other bound — another bound that, even in that relay situation, you cannot surpass. I call it the single-repeater bound, and it is basically the same formula with the square root of eta: -log2(1 - square root of eta). That one is not beatable, not even by these protocols — but you see it is at the same scale, so perhaps there will be some better protocol able to achieve it. And if you
want to go beyond that — above it — then you have to use two repeaters, two relays, and so on. OK, so thank you very much; that's the end of the lectures. Thanks.
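The loss arithmetic and the two bounds from this last part can be collected in one short script (the 0.2 dB/km figure is the one quoted in the lecture; the eta values are illustrative):

```python
import numpy as np

def to_db(eta):
    return -10 * np.log10(eta)          # transmissivity -> loss in dB

def to_km(db, loss_per_km=0.2):
    return db / loss_per_km             # standard fiber, 0.2 dB/km

def plob(eta):
    return -np.log2(1 - eta)            # PLOB bound, bits per channel use

def single_repeater(eta):
    return -np.log2(1 - np.sqrt(eta))   # single-repeater bound

print(round(to_db(0.5), 2))             # 3.01 dB for a 50-50 splitter
print(round(to_km(to_db(0.5)), 1))      # ~15.1 km at 0.2 dB/km
print(round(plob(0.5), 1))              # 1.0 bit/use at 3 dB loss
eta = 1e-4                              # deep-loss regime (illustrative)
print(np.isclose(plob(eta), 1.44 * eta, rtol=0.01))  # True: ~1.44*eta
print(single_repeater(eta) > plob(eta))              # True: a relay helps
```

At long distance the single-repeater bound scales like √eta rather than eta, which is exactly the advantage the twin-field-type relay protocols exploit.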