I think, but I wanted to announce, on the occasion of this final lecture, the three names who have been selected for the Spirit of Salam award for this year. Those are Hugo Rojas from the Institute of Cybernetics, Mathematics and Physics in Cuba; Tino Nyawelo, University of Utah, Salt Lake City; and Federico Rosei from the Institut National de la Recherche Scientifique in Montreal, Canada. More details will be on our website on Salam's birthday. So I just wanted to make this quick announcement, and then I give the floor to Professor Popescu. Hello. Good morning, everybody. So welcome to this third lecture. As I told you the first day, each of my lectures will be on a different subject, and the subject today involves basically the idea of the flow of time in quantum mechanics. So let me start with a very simple point about classical mechanics. Suppose that you have a particle in classical mechanics, x(t). So here's a particle, and suppose that you know the initial conditions of this particle: you know that at time t0 it has some position x0, and you know the velocity v0. Now, if you were to know all the forces that act on this particle, then you would know the entire evolution of the particle at all times. This is the worldline of the particle, and everything is perfectly well defined. So nobody would actually need to take a measurement in the future, because you can calculate what the position will be and what the velocity will be then. Performing a new measurement in the future doesn't give you any new information, provided that you know all the forces. Also, you don't need to know anything about what happened in the past, because again you can calculate it. Now, this is in total contradiction with what happens in quantum mechanics. So think of a particle that at time t0 is prepared in some initial state psi. And suppose you know the Hamiltonian.
So you know the evolution of the particle at all times; you know the Hamiltonian. And then at a later time, t_final, you perform a measurement. Suppose you measure some operator, say B, which has eigenstates b1 up to bn with eigenvalues b1 to bn. You perform that measurement. You know how the state evolves, you measure the operator, and you don't know in advance what value you will get. You could get b1, you could get b2, you could get bn. All you can know, according to quantum mechanics, is the probability to get a given outcome if you were to repeat this measurement many times: you would prepare the particle again in psi, let it evolve, measure again. But in any individual run of the experiment, you don't know what the answer will be. So in general, people think: okay, you find here b1, and from now on you know the future; or you find b2, and you know what will happen in the future. But even before that, there is one point that you have to understand. Performing the measurement and finding out the result gives you new information about the system, something that you could not have predicted knowing psi and knowing H. Classically, there is nothing new that you find out by measuring. Here, the information about which outcome occurs is only present at this time; you could not infer it from the beginning. So, as I said, people just think of the influence towards the future. But there are some very interesting things that you can do, and here is one of them. Let me write it on this side of the board. You can prepare many, many systems in identical states, all of them starting in the state psi, and on all of them, at the later time, you measure B. So suppose here you get, I don't know, b5. Here you get the value b1. Here you get the value b17. Here b5 again, and so on. Psi, you measure B, you get b8. So you get all kinds of different outputs there.
So you perform a measurement on an ensemble of particles, all of them prepared the same way. But now you look at the final result and you decide: let me split this ensemble into sub-ensembles. For example, the subset of cases when you started with psi and you got b5 — it will be this one and this one, and so on. Then you can look at the subset of cases when you got b1 — it will be this one, some other one there, and so on. So you have your initial ensemble of many particles, and now you have subdivided it. But the important thing is that if you were only to look at the initial state, you don't know which of them will belong to which sub-ensemble. You really have to wait from t0 to t_final, and only when you got to t_final and made your measurement, only then, based on this new information, are you able to separate your systems. So in this sense, these sub-ensembles, which I will call pre- and post-selected ensembles, are in some sense the most refined possible ensembles. You know, whenever you do an experiment, you would like to make a clean experiment and to prepare the state as well as you can. So what we see here is that our most detailed way of preparing a state, as far as I'm interested in events that occur in between these two times, is to know how you start and how you end. That is more refined information, because the ordinary ensemble, where you look at all of them together, you could view in the following sense: I know what happens in each of these particular sub-ensembles — in the one where I start with psi, I measure, and I got b5, and in the one where I start with psi and I got b17 — and then I forget all this information and I lump them together. So the ordinary way in which people are used to doing quantum mechanics, just looking at an ensemble of identically prepared things, throws away information.
You had more detailed information — what happens in each of these separate sub-ensembles — but you just mixed it together. So one thing you should be careful about: you may perform all kinds of measurements here in the middle, and you may look at the statistics of what you do in the middle over the entire ensemble. But that will in general be different from the statistics in each of the sub-ensembles. If you start with psi and you got b5, the statistics will be different from what happens if you start with psi and you get b1. Okay, so here is the deal. While usually people think, let me just measure here, get b5, and consider what happens with it in the future, I want to look at the statistics of various measurements in between, given that I start with psi, given that I measure B, and given that I get a particular value. So let me ask a simple question about that. Suppose I do this — let me try to use these beautiful blackboards; they are really the best that I've seen, so it's a pleasure. When I was told that they are so good, I said, okay, I've seen some good ones. All right, so let me consider that I start with psi and I measure B. Suppose I get some particular eigenvalue b, and let me call the corresponding eigenstate phi. So then, suppose that at some intermediate time t, I want to measure an operator A that has eigenstates a1 up to an and eigenvalues a1 to an. And what I'm interested in calculating is the probability to get the value ai, given the fact that I started with psi, that I measured the operator A, and that I finally measured B and the answer for B was this particular eigenstate phi. So this is the probability that I would like to calculate. This is a conditional probability, because it depends on the beginning, but it depends also on the end. How do I compute it?
Well, first of all, I have to evolve the initial state up to here and see what is the probability to get the value Ai there. So that probability, let me put it here, that is the probability. When I start with psi, I evolve it from t0 to the intermediate time t by the unitary operator. That gives me the state just before I measure A and then I have to project it on Ai. And the absolute value square of this is the probability that if I start with psi, I will get the answer Ai in the middle. Then what happens? The state collapses on Ai and Ai continues to propagate towards the end. So now I have the state Ai. I continue to propagate it. This is the unitary time evolution. Let me put a hat on it from t to t final. And here I project it on phi. And I take the absolute value square. So this is the probability that if I start with psi, I got Ai in the middle and then I get B. But I'm not interested in that. I'm interested in the probability of getting Ai given the fact that this particular value B is obtained here. So there are cases in which the particular value B that I'm looking for is not obtained. So I have to normalize over the probability that this guy is obtained. And it could be obtained because, in fact, in the middle, there could be many other values, all of them giving this final one. So all I have to do here, I have to imagine that it was, how can I get phi? Starting with psi, evolving from t0 to t, getting some value Ak in the middle, and then I have Ak that is propagated by u from the intermediate time t to t final on phi, absolute value square, and I have to sum over all possible intermediate results. So see what is happening here. Here it could be that in the middle I get A1 or I get A2 or I get An. And all of these, you know, I have to compare what is the probability that I get from here and I end up to phi or I go through this one and I end up to phi from this one to end up to phi and look at the different ratios. 
So this is the probability, conditioned on the fact that you started with psi and that you ended up with phi, to get the intermediate result ai. Okay, so I spent some time on this, but it's really trivial — a simple student exercise in conditional probability. But you already see something interesting here if you look carefully. Even if I'm interested in just a single particular value ai, I have to do a lot of work, because I have to compute that sum in the denominator for all possible k from 1 up to n. The question is how I do this work. The standard way: I take psi, I evolve it, I do the scalar product with ak. So I have to solve one Schrodinger equation for psi. Good. But then I have to take every ak and continue with it up to the final time and take the scalar product with phi. So this is a lot of work, because I have to solve n Schrodinger equations, one for every single ak. So computationally it is a hard task. But if you take a step back and look at it, you say: why do I apply the unitary operator to ak? Why don't I apply it to the final state phi? In that case, all I have to do is solve one more Schrodinger equation, the backward-in-time evolution of phi. So instead of looking at this picture here, I want to look at the other picture: I propagate the final state phi backward in time. The scalar products I have to do in any case, but I only solve a single Schrodinger equation backward. That is in fact even better, because somebody might say: okay, perhaps you did not measure A; perhaps here you measured some other operator C, with eigenstates c1 to cn. And I say: no problem. The Schrodinger equation that I solve is the one for phi.
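This conditional probability (often called the ABL rule, after Aharonov, Bergmann and Lebowitz) can be written down in a few lines of code. The following is my own minimal sketch, not from the lecture; the function names and the two-level test case are illustrative.

```python
import numpy as np

def abl_probability(psi, phi, U1, U2, eigvecs):
    """P(a_i | pre-selected psi, post-selected phi).
    U1 evolves from t0 to the intermediate time t, U2 from t to t_final.
    eigvecs: columns are the eigenstates |a_k> of the intermediate observable."""
    amps = np.array([(phi.conj() @ U2 @ a) * (a.conj() @ U1 @ psi)
                     for a in eigvecs.T])
    probs = np.abs(amps)**2
    return probs / probs.sum()   # normalize over all intermediate outcomes

# Example: spin-1/2, H = 0 (so both unitaries are the identity),
# pre-select spin up along z, post-select spin up along x,
# and ask for sigma_z in the middle: the answer is +1 with certainty.
psi = np.array([1, 0], complex)                 # |z+>
phi = np.array([1, 1], complex) / np.sqrt(2)    # |x+>
I2 = np.eye(2, dtype=complex)
print(abl_probability(psi, phi, I2, I2, np.eye(2, dtype=complex)))  # -> [1. 0.]
```

Note that the denominator sums over all intermediate paths to phi, exactly as described above: that sum is what makes this a genuinely two-time quantity.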
I will take here scalar products with the ck instead, but I don't need to change my other Schrodinger equation. If I were to do it the standard way, propagating everything forward, I would now have to solve Schrodinger equations for the states ck — a lot of work. So this already shows that if I take a view of the flow of time in which things propagate from the initial condition to the middle and from the final condition back to the middle, at least from the computational point of view it's easier. Okay, well, that already sounds nice, but that's also what you would want to do classically if you had a probabilistic theory. The issue here is in fact far more delicate. Let me just put things into perspective. I was telling you that I can divide an ensemble. Now you say: well, we know that things like that happen all the time. You go to CERN and they do it every single day. They have some scattering center, perhaps in the center-of-mass frame or whatever; you send in a particle, and the particles may end up in different places. This is how every scattering experiment is done. And generally you may be interested in what happened in the middle, given that the particle arrived only at that particular final place — you don't want to look at the whole junk. But if these were classical particles, that would just be selecting on the final outcome when you already know the initial beam. That is just a matter of convenience: in principle, if you knew exactly where the particle starts, the impact parameter, and exactly the velocity, then you could calculate where it ends. So there is no new information there, classically. Quantum mechanically, on the other hand, you have an incoming plane wave and then you have outgoing waves, and you may put detectors everywhere; you know the plane wave, but you don't know anything about where the particle is — you know only the speed.
You know that there is a scattered wave, but the fact that it clicked here or there is not determined by the incoming state. Of course, in both cases you can do a calculation of some of the predictions, but here the click encodes fundamental information which was not available to you at the beginning, however careful you were, while classically there is nothing new. Nevertheless, this by itself is trivial, and if I were to stop here, it would not be very interesting. So let's try to play with it. You know, this is one of the basic things that took me some time to understand, but after I understood it, I always told my students: if you just solve an exercise, get a result, are happy, and go away, you've done perhaps half of the work — perhaps less than that. When you have a result, you should start playing with it to try to understand it. Otherwise it is as if you were looking for a treasure, you dug, you found the treasure, and then you went home instead of using it. So here we got an idea; let's start using it. Here is a very simple example. Okay, it's important enough — let me do it on the middle part of the board. Suppose I have the simplest possible system, a spin-half particle, and I prepare it in the state up along z, so sigma_z equals plus 1. To simplify everything, I take the Hamiltonian to be 0: no magnetic field, nothing happens to it. At a later time, I measure the spin in the x-direction. In general, the spin in the x-direction could take two different values, plus 1 or minus 1. When the original state is sigma_z equal plus 1, that is of course a superposition of sigma_x plus and sigma_x minus. But I will only be looking at the cases when sigma_x equals plus 1. There will be cases when sigma_x is minus 1; I ignore them. I'm only looking at the cases when I start up along z and I find the spin up along x at the end. So at an intermediate time — this is the time arrow — I measure sigma_z.
What will be the result here? Louder? Plus 1, of course. I prepared it plus 1, I measure it again, it is plus 1. Sigma_z equals plus 1. Again, I'm looking at the cases with sigma_x equals plus 1 at the end. Now suppose instead I measure sigma_x in the middle. What am I going to get? Plus 1 — or could I get minus 1? If I only said that I start with up-z here, I could get minus 1. But if I also tell you that I get plus 1 at the end, then getting minus 1 in the middle would be impossible. So again I get plus 1 there with certainty. So you see, these pre- and post-selected ensembles are funny. In a pre-selected-only ensemble, you cannot have a situation in which sigma_x and sigma_z are both well defined — they are non-commuting observables. But in a pre- and post-selected ensemble, I can have sigma_z equals plus 1 and sigma_x equals plus 1. So I want to think that this means that at the intermediate time both things are true: the spin along z is plus 1 and the spin along x is plus 1. Okay. So let me ask now something more interesting. Sigma_z equals plus 1, sigma_x equals plus 1, and in the middle I want to measure the spin in the 45-degree direction, 45 degrees between x and z. I took 45 degrees because it's easier, but in general, for a direction theta, the spin in that direction would be cos(theta) sigma_x plus sin(theta) sigma_z. In this particular case, cos and sin are both 1 over root 2, so the observable is (sigma_x + sigma_z) over root 2. That is the definition of this observable. So I measure this, and the question is: what will be the result of this measurement? Suggestions? I want to hear them. Yes? Loud voices. Well, it's very simple. I know that sigma_z is 1 with certainty, so this is 1. I know that sigma_x is 1 with certainty, so that is 1, and 2 over root 2 is root 2. That would be the result of the measurement of the spin in the 45-degree direction. Are you happy with that? Not happy.
Why not? Well, obviously not, because the spin can only be plus 1 or minus 1. So what should I have gotten here? Well, I just spent the last 15 minutes telling you how to compute that conditional probability formula. So I have to go back to that formula — which I erased so that you would forget it; that was the reason for erasing it. But you should not forget it. Go back to it, and you'll find that sometimes the result is plus 1 and sometimes it is minus 1, with the probabilities that formula gives. But root 2 is very nice: whenever I measure sigma_z I see plus 1 with certainty, whenever I measure sigma_x I see plus 1 with certainty — this seems nice. So you see, here is what Aharonov did; this was his original idea, and then I collaborated a lot in developing it. It sounds too nice. So let's see why we don't get that. Let's try to dig a little bit more. You should not just throw out a result because it's wrong; sometimes it has an interesting grain of truth in it. So let's see why this is not what we obtain and why we have those conditional probabilities instead. Let me get back here. I just said: if I measure sigma_z, I get plus with certainty; on the other hand, if instead of sigma_z I measure sigma_x, I get plus with certainty. Let me go to a more interesting question. Suppose I start with sigma_z equal plus 1, and in the middle I measure two things, one after the other: first sigma_z and then sigma_x, with sigma_x equal plus 1 at the end. What will I get now? Well, because sigma_z is plus 1 at the start, I must get plus 1 in the first measurement, because I just measured this. And because there is sigma_x equal plus 1 at the end, sigma_x must be plus 1 in the second measurement, because otherwise I wouldn't get that final result. Or, if you want to look at it the other way, the final condition implies that result — whichever way you want to look at it. So if I measure them in this order, indeed, even measuring both of them, they give me these results.
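The conditional probability formula for this spin example can be evaluated directly. This is my own sketch (the function and basis names are illustrative, and I specialize to H = 0, so there is no evolution between the three times): sigma_z and sigma_x each come out +1 with certainty, while the 45-degree spin is genuinely probabilistic.

```python
import numpy as np

def abl(pre, post, eigvecs):
    """Conditional probabilities of intermediate outcomes, H = 0."""
    amps = np.array([(post.conj() @ a) * (a.conj() @ pre) for a in eigvecs.T])
    p = np.abs(amps)**2
    return p / p.sum()

z_up = np.array([1, 0], complex)                  # pre-selection |z+>
x_up = np.array([1, 1], complex) / np.sqrt(2)     # post-selection |x+>

# Eigenbases (as columns) of sigma_z, sigma_x, and the 45-degree spin
ez = np.eye(2, dtype=complex)
ex = np.array([[1, 1], [1, -1]], complex) / np.sqrt(2)
c, s = np.cos(np.pi/8), np.sin(np.pi/8)
e45 = np.array([[c, -s], [s, c]], complex)        # |+45>, |-45>

print(abl(z_up, x_up, ez))    # -> [1. 0.]: sigma_z = +1 with certainty
print(abl(z_up, x_up, ex))    # -> [1. 0.]: sigma_x = +1 with certainty
print(abl(z_up, x_up, e45))   # both outcomes occur, roughly [0.97, 0.03]
```

So the measured eigenvalue is never root 2; that number will reappear only in the weak regime discussed below.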
But suppose I start with sigma_z equal plus 1, I end up with sigma_x equal plus 1, and in the middle I measure them in the other order: sigma_x first and then sigma_z. What am I going to get now? Well, now sigma_x in the middle could be plus 1 or minus 1. Even if it is minus 1, the subsequent measurement of sigma_z can disturb it, and I could still get plus 1 at the end. So here I can get plus or minus — in fact, with equal probability. So you see, the idea that sigma_z and sigma_x are defined with certainty, which was valid when I measured them separately, is no longer valid when I try to measure both of them. And measuring the spin in the 45-degree direction is like measuring both of them simultaneously. Since I can switch them, and the time order matters, that is what disturbs the whole thing. So that is the deep reason why the naive way of thinking did not work. But now you have more insight: it's because the measurements of sigma_x and sigma_z did not commute in time. Now for the next step. I presume all of you have played with magnets. Yes? Take a tiny bar magnet — did you play with them, picking up paper clips or nails or whatever? So you take a bar magnet here. The bar magnet is actually made out of many, many spins inside, and you start with it aligned along z. You see it attracts something. To see how strongly it attracts, I can put a piece of iron on a string below it and see that it pulls it down. Or I can put the same piece of iron sideways. Now, whenever you have a magnet here and you put a piece of iron this way, to measure the spin in the x direction, you don't destroy the magnet. But it is a measurement of sigma_x. So how come a measurement of sigma_x does not disturb sigma_z? It is because it's not a precise measurement. Okay? You've seen many times, so you know, that x and p do not commute, and if you measure the position of a particle precisely, you completely disturb its momentum.
Then you have an infinite spread in momentum and you destroy the thing. That is what you learn in the first lessons of quantum mechanics. So take a look at this piece of chalk. You see it? Yes? So you measured its position. Is it still here? It is. So why? You measured its position and you still didn't destroy its momentum. That is because you did not measure it precisely. You measure it because light bounces off it and then comes to your eyes, but the light doesn't give it too big a kick: it has a given wavelength, which is much larger than the precision with which you locate the center of mass. That's why it disturbs it only mildly. If I had made a really good measurement, the chalk would just fly away. But this shows that I can have a measurement which is pretty good, though not absolutely precise, and which allows me not to disturb the particle when I measure something else. So now take not one spin but N spins, all of them pre-selected with sigma_z equal plus 1 and post-selected with sigma_x equal plus 1, and measure the total spin in the 45-degree direction in this gentle way. Here is the claim: you get root 2 times N, plus or minus root N. Root 2 N is much larger than root N, and the maximum value of the total spin is plus N. So here I get that my measuring device will show me a much larger value than any possible eigenvalue. Okay? So this is the prediction from thinking in terms of two-time information and, first of all, from considering pre- and post-selected ensembles: if you perform a measurement in such a way that you do not fully disturb the two states propagating forward and backward in time, then, within some small errors, which are necessary, the result can be way outside the allowed set of eigenvalues. Now, this is not a proof; it is a prediction, because I didn't prove to you that this is what will happen, but the logic of the argument is very straight. So now you have to actually prove it. How do you prove it? Well, you have to start with this state, S_z equal to N, and perform a measurement of the total spin in the 45-degree direction. But here you cannot apply the projection postulate — that is valid only for a strong, ideal measurement.
Here I said that I will make this measurement with a slightly macroscopic device. So you have to model it: you go back to this idea of the macroscopic pointer, you imagine that you have a piece of iron here, whose position has some spread, and you model how much it will move. Okay? It will move a little bit here, a little bit there. So let me look at the spin in the 45-degree direction, starting with the state S_z equal N, and see what the possible eigenvalues are. For each possible eigenvalue, which ranges from minus N to N, you have to say that your little Gaussian pointer moves from here to there. So it moves and it gets entangled with the spins. The final post-selection measurement of sigma_x removes the entanglement, and you get an interference. So there is a lot of work for you to do, and it is something that I'm not willing to do now, because the only way to actually see what happens is to try it yourself. You will have to interfere a large number of terms, 2N of them, and see what result you get. And the claim is that whenever the spread of this Gaussian is much larger than how much it moves due to a single spin being plus or minus, the result will be the one I said. This is already way beyond classical mechanics, because everything here is based on interference — on the very nature of quantum mechanics. So, in quantum mechanics, the main idea that we used here: we started from the fact that when I perform a measurement at an initial time and a measurement at a final time, the measurement at the final time gives me supplementary information.
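The interference calculation described above can be tried numerically. The sketch below is my own (the parameters N, g and sigma are illustrative, not from the lecture): each of the N spins is pre-selected in |z+>, post-selected in |x+>, and a Gaussian pointer is coupled weakly to the total 45-degree spin. The post-selected pointer amplitude is a sum of shifted Gaussians weighted by the per-spin amplitudes through the two eigenstates of sigma_45, and when the pointer spread is much larger than g*sqrt(N) the interference pushes the peak to about sqrt(2)*N*g, beyond the largest eigenvalue N*g.

```python
import numpy as np
from math import comb

N, g = 30, 1.0
sigma = 5 * g * np.sqrt(N)      # pointer spread >> g*sqrt(N): weak regime

# Per-spin amplitudes <x+|P(+-45)|z+> through each eigenstate of sigma_45
Ap = np.cos(np.pi/8)**2         # via |+45>
Am = -np.sin(np.pi/8)**2        # via |-45> (note the sign: this drives the interference)

q = np.linspace(-3*N*g, 3*N*g, 4001)
phi_f = np.zeros_like(q)
for k in range(N + 1):          # k spins through +45, N-k through -45
    shift = g * (2*k - N)       # corresponding eigenvalue of the total 45-degree spin
    phi_f += comb(N, k) * Ap**k * Am**(N-k) * np.exp(-(q - shift)**2 / (4*sigma**2))

peak = q[np.argmax(np.abs(phi_f)**2)]
print(peak)                     # sits near sqrt(2)*N*g ~ 42.4, beyond the maximum eigenvalue N*g = 30
```

Shrinking sigma toward g (a strong measurement) collapses the distribution back onto the eigenvalue range, which is exactly the trade-off the lecture describes.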
Usually people thought about the consequences of this supplementary information for the future; I asked what its consequences are for the time in between. I didn't modify quantum mechanics — this is a normal consequence of quantum mechanics, but one at which people didn't generally look. And then I looked at this simple example, and the moment you start making measurements that are gentle enough not to disturb this two-time reality — whenever you take any measurement and you tune down the coupling constant so that the pointer of the measuring device, the location of this piece of iron, moves less; for example, you take it a bit further from the magnet — you reach an asymptotic regime in which a single value characterizes the outcome. In general terms: if I start with something in the state psi, I end up in the state phi, and I measure A (let me take the Hamiltonian to be zero), then the value that generalizes the eigenvalue is something that we call the weak value, A_w = ⟨phi|A|psi⟩ / ⟨phi|psi⟩. This value can be larger than the largest eigenvalue of A, can be smaller than the smallest eigenvalue of A, and can in fact be complex. When it is complex, it is interesting what it does to the measuring device: instead of shifting the pointer's position, it will modify its momentum, by adding some phases on top of it. That is a part that you should also try to calculate — there is no substitute for doing it yourself. A hint: look at the perturbation that the measurement interaction produces in first-order perturbation theory. That would be a good start. Whenever you start analyzing a physical situation, this is the key element, because I'm looking at the most filtered ensembles. An ordinary ensemble just lumps together all the initially prepared particles.
Again, it's like performing the final measurement, splitting into sub-ensembles, and then forgetting which event corresponds to which sub-ensemble. That is losing information, and it's never a good idea to lose information. When you look in detail, this is what physics tells you: this is the variable that characterizes an observable in between two boundary conditions. So, just to give you a tiny example — without calculating anything, but one with a nice interpretation. Imagine that you have a finite potential well in one dimension, and let me look at a bound state. A bound state would look like this, with some exponential decay outside. So if the bottom of the well is zero and the potential outside is V0, then the total energy of the bound state is smaller than V0, and the wave function tunnels out there. This is the situation where, in classical physics, you say: ah, if the particle has total energy smaller than the potential, it cannot be there. Why? Because E_total is E_kinetic plus the potential, which is V0 out there. So E_total smaller than V0 would mean that E_kinetic is negative. But E_kinetic is p squared over 2m — a square of things, positive definite. So in classical physics, a particle with such a small energy cannot get outside. Quantum mechanically, it is outside. This was one of the very first challenging puzzles in quantum mechanics. People asked themselves about it at the beginning of quantum mechanics, and if you look in early books on quantum mechanics, you see people discussing it. Today it's forgotten — people take quantum mechanics as it is and don't really bother. But go and look in Bohm, which is one of the best books. People were very, very puzzled by this idea. So they asked: how is it possible that a particle tunnels? Could I catch it tunneling and find out what its kinetic energy is?
And then the answer came, a beautiful one, given by Heisenberg and others: if you find the particle out here, it is no longer in that state — you have localized it. But a localized particle has a changed kinetic energy: you gave it kinetic energy, so now it is legitimately out there. Okay, this is the story that people were told. It is correct, but it doesn't actually capture what quantum mechanics is. What they did in that argument: they started with a definite E_total, then they measured the position, and then they wanted to measure the kinetic energy — but measuring the position totally disturbed that information. There is something else you can do. You start with a state of definite E_total, and now you think this way: I want to measure the kinetic energy of the particle. The kinetic energy is built from p squared, and the momentum does not commute with the position. So in principle it is possible that the particle is in the well and, by measuring the kinetic energy, I throw the particle outside; I then find it out there, but it actually came from a legitimate place. Now suppose I measure the kinetic energy very well — as well as I want, but not infinitely well. I basically measure the momentum, not with delta p equal to 0, but with some finite small uncertainty, say epsilon. Then delta x is of order h-bar over epsilon: the measurement creates a disturbance in x, but not an infinite one if epsilon is finite. So I fix an epsilon — how well I want to measure the kinetic energy, or the momentum — and I work out how far this can throw the particle out of the well. And then I measure the position, and I'm only interested in the cases when the particle comes out much further than this delta x. So I measure the position and I select some position x0 which is further from the potential well than this disturbance.
And then I look back at what I got for the kinetic energy, and what you get — as you can see immediately from the weak-value formula — is that you are really in the weak regime, and the value is E_total minus V0, which is smaller than 0. So you really get that value for the measurement of the kinetic energy. And how does it go in practice? Well, in practice it goes like this. On this axis is the kinetic energy, and on that axis the probability to find that value of the kinetic energy. The kinetic energy cannot be smaller than 0, so if you make an ideal measurement, this probability would be, I don't know, something like this, decaying — I never calculated exactly how it looks, but definitely nothing below 0. (There were some colored chalks here; no more.) But if the measurement is not precise — if your pointer has a little bit of spread, like this Gaussian — then you convolve the ideal distribution with that Gaussian; it gets a little disturbed, and you may even get a little tail of apparent errors where the pointer moves to negative values. This is what you get at this level of precision: a spectrum like that. But then you want to collect only the cases where you find your particle at x0. So the experimental data here is a histogram that you build up out of experimental points: you go to every point and ask, for this result of the measurement at the intermediate time, what was the value of the position afterwards. And you find that in almost all the cases where the kinetic-energy reading is positive, the particle was found much closer to the well. So you throw them out — you throw out basically all this region — and what you are left with is a much smaller number of cases out here, in what you thought were measurement errors, concentrated precisely at E_total minus V0. That is what you will see in your lab. How is that related to the weak value? Well, that is pretty simple.
The weak value: I wanted to measure the kinetic energy between the state psi and the final state. The state psi was an eigenstate of E_total, and the final state is the position state |x0>. So I take <x0|K|E_total> divided by <x0|E_total>, where the operator of kinetic energy K is the Hamiltonian minus the operator of the potential, K = H - V. The E_total I can apply to its eigenstate, which gives me the number E_total; the potential can act only on <x0|, which gives me the potential at x0. These are now numbers, so this thing is simply E_total minus V(x0). That is the value you get; the details are up to you to calculate.

So you already see the reality that people missed. They said: I disturb the system, so the fact that the kinetic energy is negative makes no sense. In fact, if we look more carefully, that is the physics: you perform the measurement and you see that value. And here I can make this measurement of the kinetic energy as precise as I want; I can make this epsilon as small as I want. What happens in that case? The disturbance becomes bigger and bigger. In the ideal limit the measurement can throw the particle out of the well arbitrarily far, but the probability of finding the particle at a position x outside that range goes down to zero, because the wave function originally goes down to zero there. So when the measurement is ideal, you cannot find the particle with negative kinetic energy, but then you can also never find the particle outside the range covered by the disturbance. That is how things stay consistent in the case of very disturbing measurements.

So, to finish: weak measurement is an interesting tool that was discovered, but the main idea is that everything follows from what is probably the most important difference between quantum mechanics and classical mechanics, the fact that whenever you do a measurement you don't know what answer you will get; you receive new information, and that has implications for the flow
of time in quantum mechanics. Well, thank you very much.

Thank you very much. I see one question, so I think we can proceed.

Is there a similar thought experiment for the double-slit experiment, for instance finding which path the particle takes?

Well, you can use this idea to analyze whatever experiment you want; ask any question. In general, take any question that is paradoxical and that people dismiss by saying that if you try to make the various measurements, one disturbs the other. Take those measurements, but don't make them ideally precise, because a measurement that is ideally precise is also very disturbing. Tune down the interaction a little bit, perhaps a little bit more, and see what you get. I guarantee you will get some interesting answers.

Yes, thank you for the nice talk. I was thinking about this tuning down of the coupling between your measuring device and your system. I would think that there may be some optimal value of this tuning parameter. Is this a safe statement to make?

Well, the answer is not so simple, but one thing is clear. Let me show you what the basic issue is in tuning down the coupling constant, just to understand how things work. Imagine any measurement; say I measure some observable A with eigenvalues a1, a2, a3, a4, and so on up to an. Now suppose you have a pointer. In the case of an ideal measurement the pointer starts localized at zero, and it indicates the value you obtain. If the system starts in the eigenstate a1, with the pointer at zero, the system remains in a1 and the pointer moves to the value a1. If you start with the system in a2 and the pointer at zero, the system remains in a2 and the pointer moves to a2. But the pointer is never a delta function; the pointer always has a little spread. Now the question is how big that
spread is. Suppose the pointer has a spread smaller than the distance between the eigenvalues. Then the pointer distribution for a1 sits here and the one for a2 sits there, and you can clearly distinguish them; each has a very small tail that may reach up to the other value, but that is almost zero. On the other hand, suppose the pointer has a spread much larger than the spacing between the eigenvalues. Then it no longer differentiates between the various eigenvalues, and that allows interference between the different states of the pointer. The more extended you take the pointer, the more chance you have for this interference to occur. So it is not that there is an optimum beyond which things stop working: whenever you take the spread much larger than the difference between the eigenvalues, you enter this regime. Here I just increased the size of the pointer, but I can instead make the coupling constant smaller, which means that each of these eigenvalues is multiplied by the coupling constant in determining how much the pointer moves; it is the same thing. What is interesting is that for this phenomenon to occur you only need the spread to be much larger than the set of the relevant eigenvalues in your problem; perhaps far out there are very big gaps between eigenvalues, but your problem does not probe them. So the answer is that you always enter this regime when you tune down the coupling constant so as not to disturb the system. How much you need to tune down depends from problem to problem, but it never hurts to continue tuning it down even more.

Thank you. Actually I have two small questions. If the final measurement is not that precise (the intermediate measurement I take as precise), will the effect fade away?

It depends, because then it is as if you mix different outcomes. You would have one outcome that corresponds to the weak value of A between psi
and some final state, let me call it phi1: the term <phi1|A|psi> / <phi1|psi>, which comes with some probability P1, because with that probability you obtained the final state phi1; plus a term <phi2|A|psi> / <phi2|psi> with, say, probability P2, and so on. So if the final state is not precisely defined, the question is how big the difference is, that is, how sensitive this value is to the final state. In general it is not very sensitive, but there are cases where it is. You see, the important factor is the denominator: if the scalar product between the final state and the initial state is very small, then a small change in the final state can have a dramatic effect. There you have to be very careful how you make your final measurement. Note that we are talking about imprecision in this final measurement, the one that determines which pre- and post-selected ensemble you are in, not the intermediate measurement. If the final state is almost orthogonal to the initial one, then this little change may have dramatic effects. You are right.

And the last small question, about which I am confused. If you have this situation in which you have S_z, with N in the beginning and N at the end, then for very large N the probability will go down?

The probability does go down, again because the probability is related to this scalar product. However, what is important is that this is what quantum mechanics determines. Once you understand that limit, you may come back and ask what happens when N is smaller, and how large an N you need. You can balance things: even if N is not very large, I can make my pointer spread larger, but then my results will not be so sharp. In that sense, a large, effectively classical N is not strictly necessary. Yes, indeed, you can do it even with N equal to 1, for a single spin-half particle, but then you will have to repeat the measurement a few times.

Yes, more questions? Is it Atish? I don't know if you want to say something.
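The sensitivity to the final state near orthogonality can be illustrated with a small numerical sketch. This is a two-dimensional, spin-1/2-style example with A = sigma_z; the particular states, the parametrization by an angle theta, and the function names are all my assumptions for illustration, not anything from the discussion.

```python
import math

# Sketch: how sensitive a weak value is to the post-selected (final) state
# when it is nearly orthogonal to the pre-selected state. Two-dimensional
# example with A = sigma_z; states and angles are illustrative assumptions.

def weak_value(phi, psi, A):
    """<phi|A|psi> / <phi|psi> for 2-component complex vectors."""
    Apsi = [A[0][0] * psi[0] + A[0][1] * psi[1],
            A[1][0] * psi[0] + A[1][1] * psi[1]]
    num = phi[0].conjugate() * Apsi[0] + phi[1].conjugate() * Apsi[1]
    den = phi[0].conjugate() * psi[0] + phi[1].conjugate() * psi[1]
    return num / den

sigma_z = [[1, 0], [0, -1]]
psi = [1 / math.sqrt(2), 1 / math.sqrt(2)]  # pre-selected state |+x>

def post_state(theta):
    # Final state rotated by theta away from exact orthogonality with psi.
    a = 3 * math.pi / 4 + theta
    return [math.cos(a), math.sin(a)]

# As theta -> 0 (final state nearly orthogonal to psi), the weak value
# blows up, so a tiny imprecision in the final measurement matters a lot.
for theta in (0.5, 0.05, 0.005):
    wv = weak_value(post_state(theta), psi, sigma_z)
    print(f"theta = {theta:6.3f}  weak value ~ {wv.real:10.2f}")
```

For small theta the weak value grows roughly like 1/theta, far outside the eigenvalue range of sigma_z, while for final states far from orthogonality it stays of order the eigenvalues; this is the regime in which the final measurement must be made very carefully.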
I don't have any questions, but I just want to thank you very much, Professor Popescu. I hope your visit was interesting, even though I was not there, and I hope to see you again.

Yes, okay. Well, thank you very much for inviting me, and I hope to see you next time. Okay, I had a great time. Thank you. Thank you.