Okay, so I think it's time to start today's colloquium, and it's a really great pleasure for me to introduce today's speaker, Angelo Bassi, with whom I've been in a STREP project for more than four years; because of corona it was actually more like five years, I guess, on some of the topics that Angelo is going to talk about today. Maybe just a few words about Angelo. As you can see here, Angelo is from the University of Trieste and also the National Institute for Nuclear Physics in Italy. He did his PhD at the University of Trieste back in 2001, then he moved on to the Abdus Salam ICTP for a couple of years, then on a Marie Curie fellowship to LMU in Munich, and then eventually back to Trieste in 2006, where he has since built up a nice group on the topics that we're going to hear about today: the foundations of quantum mechanics, and specifically a lot of work on the collapse models that one can use to describe what happens to quantum mechanics at the macroscopic scale. He's a full professor there now, and I think you're going to tell us more about your research. Thank you.

Thank you, Michael, for the invitation and also for the presentation. Thank you for being here. I had a very nice day from the morning till now and I heard about many things. Here we change the topic a bit: it's about the foundations of quantum mechanics. So the basic question is: can we take the theory seriously as a fundamental theory of nature? Quantum mechanics works very well; there is no question about it, and we are not denying that. But can we think that it is the ultimate theory, or a possible candidate for the ultimate theory of nature? Is there a problem with the theory? Of course, the answer is that there is a problem, otherwise there would be no field of quantum foundations. I will try to argue about that and about the possible resolution of that problem. No resolution stands out as the resolution to the problem, but there are some, and we will talk about one of them.

So first of all, what is the problem with quantum mechanics? What is one of the big deals in the field of the foundations of quantum mechanics? Here in the picture I try to summarize the textbook formulation of the theory, which was quite explicit in the old days. Now it's a bit more implicit, but the story is still the same, and it goes as follows. Quantum theory, at least originally, was a theory to describe microscopic systems: atoms and molecules, or elementary particles. Now this is not true anymore, because of complex systems, but also because people use it to describe the universe in cosmology. But in some sense it's still a theory of particles, atoms and molecules. And then you have a wave function. The wave function is the state of the system, and it has a completely different status with respect to all other theories: different from the electromagnetic field, different from the gravitational field, different from the point in phase space of classical mechanics. The wave function is the state that an observer uses to make predictions about outcomes of measurements. And that implies, and it was clear in the discussions between Bohr and Einstein and all the business with the Copenhagen school, that in quantum theory you assume that there is a classical world, a classical world of observers. This is the textbook formulation.
Then we can argue how to replace that. So there is a classical observer with a classical device that makes measurements, and the wave function is the theoretical tool to predict the outcomes of the measurements. To borrow a quote from Steven Weinberg: the Copenhagen interpretation assumes a division between a microscopic world of microscopic systems governed by quantum mechanics, and a classical world of apparatus and observers that obey classical physics. And this works very well, no question about that. But this division is, first of all, arbitrary. Where exactly is the division? Why should something strange happen during a measurement that marks the division between the quantum world and the classical world? And second, why is there a division in the first place? Typically, if you want a fundamental theory, you would like it to be universal in some sense. It may break down at some point, but it should aim at being universal. Classical mechanics aimed at being a universal theory of nature; then it failed. Classical electromagnetism aimed at being a fundamental theory of electric and magnetic phenomena; then it failed. General relativity aims at being a universal theory of gravitational phenomena, and perhaps one day it will fail. So a theory should be universal. Here instead you have this division between the classical and the quantum world, which doesn't make much sense, and most of all, it is not well specified where this division lies. In that sense, quantum mechanics cannot be considered a fundamental theory until you fix that problem.

So this is the old debate that goes back many decades. The consensus in the community is that if there is truth in quantum theory, the universe should be quantum. Every physical system should be describable in quantum mechanical terms: atoms, molecules, many atoms, many molecules, classical objects, tables, chairs, humans as physical systems, planets, the universe, at least in principle, should be described in quantum mechanical terms. Which is fine, except for a problem: because of linearity, if everything is quantum, then superpositions, these strange states, should emerge at the level of the classical world. We should be part of a huge, massive universal wave function where every part is entangled with everything else, because of linearity, because of interactions and linear evolution. And then the question is: how do we describe the world that we see out of this huge, crazy, highly complicated and abstract wave function? How do we describe the classical world we live in? That is the big question. So there is a conflict between the linearity of the dynamics, which makes wave functions spread, become entangled and highly non-classical, and observations: we see localized objects, we see a classical world, at least with some degree of approximation. And the question is: how do we reconcile these things? Again, as I said in the beginning, there is no consensus on how this happens. One solution is given by the many-worlds interpretation.
Every time you have a superposition, you split the universe into the two or more possibilities, and then you have this new ontology, so to say, of a vastly large meta-universe made of many parallel universes, and so on and so forth. Or you can have the pilot-wave theory, the de Broglie-Bohm theory, where there are particles, like in classical mechanics, which are guided by the wave function. So the primary role of the wave function is not that of giving predictions for outcomes of measurements, but to guide particles, like waves guide boats and surfers. Or you have models of spontaneous wave function collapse, which is the subject of my talk and of my research in the past years, according to which the Schrödinger equation is not universal, not always valid. The linearity of the Schrödinger equation is not entirely correct: the equation is supplemented by non-linear and stochastic terms, which cause the collapse of the wave function. So the idea is that linearity is valid at some level, but not universally valid. And this is not a new idea in physics. If you take the gravitational field, it is linear at the Newtonian level; then if you go to general relativity, it's not linear anymore. In general, linear theories are often an approximation of non-linear theories, a first-order expansion in some interaction parameter. The idea here would be similar: the linearity of quantum mechanics is valid only under certain circumstances, but at the fundamental level it's not really true; there are non-linear terms which cause the collapse of the wave function. And in such a way, you can describe the transition from the micro-world to the macro-world. In particular, if you take again the picture that the universe is quantum, then the state of the universe is not this crazy superposition anymore. It's not in a highly entangled state where nothing comes up, because that would not be a solution of the new equation. With the new equation, you would get something like this: you have seen these movies of galaxy formation from simulations, where particles interact gravitationally and combine, and you have the formation of cosmic structures. Of course here it is for entirely different reasons, but the picture you should have in mind is similar: you have this highly quantum state, and over time the wave function collapses into regions where it's more dense and regions where it's less dense. Eventually, here where I am, the wave function is very peaked; there is less wave function here, more there. Of course, you don't solve the equations for the universe, you solve the equations for simpler systems, but that would be the picture that emerges. The non-linearity that you inject into the dynamics causes the wave function to localize in space, and then possibly to correspond to the classical world that we observe. So that's the idea of spontaneous wave function collapse models. There you have the reference, if you wish, to the first paper in 1986, where these models came out in the first place. So the dynamics is this one. For those of you who know about it, it's the basic equation of continuous quantum measurement. For the other people: the first part is the Schrödinger equation.
So that would be standard quantum mechanics. There we have the interactions. That part is linear, so you would have superpositions, and all the quantum behavior is hidden there, depending on the system. And then there are the new terms, at the phenomenological level. At least from my point of view, the aim here is not to convince you that this is the new fundamental theory of nature. It's just a phenomenological way to implement the collapse of the wave function. Then, if there is truth in these models, and someday someone finds a signal of it, the theory will have to be developed further. So these terms cause the collapse of the wave function. Why the collapse of the wave function? Because the terms are nonlinear, so the superposition principle, strictly speaking, doesn't hold anymore. If you have "here" plus "there", the superposition evolves into "here" or "there". It's a nonlinear term: mathematically, you see that the wave function enters more than once into the dynamics, via the quantum expectation value. M is the mass density operator; I'm using a second-quantized language, a-dagger is the creation operator and a is the annihilation operator of some particles. I'm neglecting spin; it doesn't play a significant role, but you can include it if you wish. The point is that it is a mass density operator, a density in space, so you have the collapse in position. The wave function shrinks in position: not in momentum, not in energy, not in spin, not in angular momentum or whatever other degree of freedom you want to consider. It's in space. And it collapses randomly, because you have a noise there: w(t) is a family of Wiener processes, which is white in time. There is no fundamental reason for it to be white in time; it is just that the equations are much simpler to deal with. And then there is a correlation function in space, capital G. Incidentally, the collapse occurs with the Born rule. This is important, because you don't see it there, obviously, but if you describe a measurement process, so you take a micro-system and a macro-system, everything is quantum, you plug them into the equation and you solve it, you see that the macro-system behaving like an apparatus produces definite outcomes because of the collapse, and also with the correct probabilities. If you do the analysis, it's not entirely trivial: it is a delicate balance of many things that recovers the Born rule. So in this way you also explain the Born rule, if you wish. Then there is an amplification mechanism for the strength of the collapse. There are some phenomenological parameters, contained here in the correlation function, one or two in most models, as I will show you later. But there is an amplification mechanism: for one atom or two atoms, the collapse effect is small, basically negligible, but when you have a complex system of many, many atoms, then you have, let's say, a roughly linear increase proportional to the size of the system. And so for large objects, the collapse is so fast that the wave function is always well localized. So the moral, the picture that you should somehow try to have in your mind, is that if you take just an isolated system, one atom, then you can neglect the new terms. They are completely ineffective.
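Since the equation is only described in words here, let me also write down a standard form of this kind of dynamics, as it appears in the collapse-model literature; this is schematic, and normalization conventions vary from paper to paper. Here lambda and m0 are the noise strength and the reference mass that will reappear later:

\[
d\psi_t = \Big[ -\frac{i}{\hbar} H\,dt \;+\; \frac{\sqrt{\lambda}}{m_0}\!\int\! d\mathbf{x}\,\big(M(\mathbf{x})-\langle M(\mathbf{x})\rangle_t\big)\,dW_t(\mathbf{x}) \;-\; \frac{\lambda}{2m_0^2}\!\int\! d\mathbf{x}\,d\mathbf{y}\;G(\mathbf{x}-\mathbf{y})\,\big(M(\mathbf{x})-\langle M(\mathbf{x})\rangle_t\big)\big(M(\mathbf{y})-\langle M(\mathbf{y})\rangle_t\big)\,dt \Big]\,\psi_t ,
\]

with \(M(\mathbf{x}) = m\,a^\dagger(\mathbf{x})a(\mathbf{x})\) the mass density operator, \(\langle\cdot\rangle_t\) the quantum expectation value on \(\psi_t\) (this is where the nonlinearity enters), and \(\mathbb{E}[dW_t(\mathbf{x})\,dW_t(\mathbf{y})] = G(\mathbf{x}-\mathbf{y})\,dt\) encoding the white-in-time, spatially correlated noise.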
They are there, but they are negligible, except for tiny deviations, and then you have the quantum wave function that spreads over space and behaves like a usual quantum wave. But when we have a table like this one here, then the wave function, for the center of mass for example, is a soliton. It is so well localized, really so well, that you cannot possibly measure the spread. For all practical purposes it behaves like a point, a point moving according to Newton's laws. And if you wish, that is a nice way to see how particle-like behavior emerges from a fundamentally wave-like theory. So there is a wave function, a theory of waves, but these waves do not evolve linearly, and in some cases they become very sharp solitons: they behave like particles. Okay, so that's it with the theory, because now you can go in many directions that have been explored. One interesting question would be: these models are non-relativistic, which is the typical scenario one considers; how can you make them comply with relativity? That's a very difficult issue because of non-locality. Since you want to explain Bell correlations for entangled pairs, the collapse of the wave function must be superluminal, and that's a delicate thing. But coping with relativity is an open problem for all interpretations and alternative formulations of quantum mechanics; it is a problem with special relativity. Incidentally, this is a problem also for standard quantum field theory. In quantum field theory, typically you speak about the dynamics of the system, so the Schrödinger equation becomes the Dirac equation. But what happens to the collapse of the wave function? You never hear about the collapse of the wave function in a quantum field theory book; they don't touch on that. How does the wave function collapse after a measurement? There is no answer to that, because it's an open problem: what happens during a measurement in a relativistic setting? If the wave function doesn't collapse, then you have to explain definite outcomes. If it does collapse, you have to explain how it collapses. So it's an open problem: also standard quantum field theory has a tension with relativity due to non-locality. So you can go in that direction; then you can have many other generalizations, to non-Markovian noises, noises that are not white, and there are many reasons to consider those. But I will go directly to the phenomenology, how to test these models, which I think is the nicest part. If there is some truth in them, then it's worthwhile working harder; if they are excluded by experiments, it's the end of that, of course. So first of all, you can test these models, because you are changing the Schrödinger equation. You are introducing a new interaction, basically, that slightly changes the dynamics of physical systems, and so in principle you can test it. The tests are very difficult, for the reasons I will tell you now, but you can do them. So how to test them? There is a direct way and an indirect way. The direct way is via interferometric experiments. Here, the issue was the superposition principle of quantum mechanics: why don't we see superpositions? Why do we live in a classical world and not in a quantum world as macroscopic objects?
And the answer of these models is that the superposition principle is approximately right, but not entirely right. To see whether this is true or not, you do an interferometric experiment. You take a massive system, possibly as large and as massive as possible because of the amplification mechanism. You create a superposition; you let the superposition live for as long as possible; the superposition should be as large as possible in space. And then, I mean, I'm talking as a theoretician; experimentally, you have to be much more clever than that. And then you do a double-slit experiment. Do you see interference? Then quantum mechanics is right and these models are wrong. You don't see interference? Of course, you've done the experiment in a clever way, so you have control of the system and you have reduced all sources of noise, but you still do not see interference. Then you have to ask yourself: why not? Is it because you have neglected some source of noise, or is it because something else is going on? So this would be the direct way. And what is the good side of this type of experiment? They are a direct test of these models; but also, whether or not you care about collapse models, they are a direct test of the superposition principle. This is a fundamental thing. Not many people care about that, but it is as important to test the superposition principle of quantum mechanics as it is to test the equivalence principle of general relativity, because the two are the building blocks of the two theories. A tiny violation of the equivalence principle would mean the breakdown of general relativity, and not only a breakdown in the sense that, like Newtonian physics, it's right but not fully right. It would mean a breakdown of the deep meaning of general relativity, of gravity being a manifestation of the curvature of spacetime. The world view of general relativity would collapse if there were even a tiny violation of the equivalence principle. Something similar holds here: if you have a violation of the superposition principle, you touch the building block of quantum theory. What is quantum theory? It's a theory of waves that superpose. If these waves superpose a bit, but not completely, then the theory has a big problem. So it's important to do this kind of experiment. I will tell you about the current limits. The problem with these experiments is that they are very difficult to realize. It's difficult to create a superposition of a really massive system in the first place; it's difficult to keep the superposition alive for some time in a clean environment; and then there are also detection problems. Increasing the mass of the system by just one order of magnitude requires a completely new technology. I will tell you where we are from the point of view of collapse models. So the alternative is non-interferometric experiments. What is the idea behind them? The idea is that the collapse of the wave function, besides collapsing the wave function, also affects the motion of the object: it changes the motion of the center of mass of the system. This is difficult to appreciate, because you need to be in the field to really appreciate the reason for it. But there is a preprint on the arXiv showing that if you want a consistent model of wave function collapse, then you unavoidably make the system fluctuate in space: you have a Brownian-like motion.
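In fact, the link between collapse and diffusion can be made explicit. Averaging the stochastic dynamics sketched earlier over the noise gives a master equation of Lindblad type; this is a standard result in this literature, with constants depending on the chosen normalization:

\[
\frac{d\rho_t}{dt} \;=\; -\frac{i}{\hbar}\,[H,\rho_t] \;-\; \frac{\lambda}{2m_0^2}\!\int\! d\mathbf{x}\,d\mathbf{y}\;G(\mathbf{x}-\mathbf{y})\,\big[M(\mathbf{x}),\big[M(\mathbf{y}),\rho_t\big]\big].
\]

The same double-commutator term that damps superpositions of different positions also makes the mean squared momentum grow in time: localization and momentum diffusion come from one and the same term, and you cannot have one without the other.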
Let me go back: this is obvious from the equation here, because you have a noise. You see that you have a Brownian-like motion, with w(t) coupled to the system, so you are shaking the particle. But one can say, and this was one critique by Roger Penrose of our models: okay, you implemented the collapse of the wave function that way; I have in mind a model where the wave function collapses but without this Brownian motion. Well, the answer of the paper on the arXiv is no, that's not possible. Any consistent model where the wave function collapses must have, in one way or another, this random motion in space; not Brownian in the strict technical sense, but random. So collapse in position means diffusion in position as well. And that opens the way to non-interferometric experiments, which are basically classical experiments. You just take a physical system, because the collapse is a universal feature, a physical system like this table here, and you just have to have very good control of the motion of the system; that's the difficult part of the story. Good control of the motion of the system, and then you check the fluctuations. Are they those predicted by quantum mechanics, or really also by classical mechanics? Unless you go to the ground state, if you are in a thermal state you can use basically classical mechanics. Are the fluctuations those predicted by standard physics, or are there extra fluctuations, which would be induced by this collapse? So what is the positive aspect of these experiments? They are way easier to perform, because you do not need to create a superposition. It's a classical experiment: you just monitor the motion of the center of mass of the system. So they are much easier. The negative side, which is the price you pay, is that they are not a direct test of the superposition principle, only an indirect one. But I like to make this kind of analogy: the first detection of gravitational waves was indirect, back in the 70s, when they saw the loss of energy in the binary system. This loss of energy was attributed to the emission of gravitational radiation, and the match between the experimental data and the theory was amazing, one of the best results in the history of physics. And then the people involved won the Nobel Prize. That was an indirect detection of gravitational waves, which was important, but it didn't take away the desire for a direct detection, which came with LIGO. Here it would be something similar: non-interferometric experiments would be an indirect test of the violation of the superposition principle, which would still eventually call for interferometric ones. They are complementary, and both are important to perform. So much of the effort is in how to test, in a non-interferometric way, the possible existence of this extra noise that causes the collapse of the wave function. The point is that you need a platform where you have good control of the system. Over the years we considered three of them. First, cold atoms. We use them in a very naive way, so I apologize: there are experts in cold atom physics here, and we use them really in a very elementary way, simply because you have good control of the cloud, so you can see whether there are extra effects or not.
Second, charged particles. Why charged particles? Because a charged particle under the influence of a noise emits radiation: noise means acceleration. So you check whether this radiation comes out; I will tell you a bit more about that. Third, optomechanical systems, which is part of the project that we did with Michael. You have to create a basically ideal harmonic oscillator: you take the right particle in the right trap, you cool it, and then you monitor the motion. And the business here is to have extremely high control of the motion of the system, to detect these extra effects. Okay, those are the three systems. I will not give you the theory behind each of them, because each would be a talk on its own. I will just show you the results, with the relevant literature. Basically, the point is to take the equation above, apply it to each of these systems, and compute the predictions: standard theoretical physics work. Okay, so I'll start with the test of the Diósi-Penrose model. I will talk about two models, Diósi-Penrose and CSL; these are the two models that have been considered in the literature. So it is again the equation from before; you can play with many things, but the few people working on these collapse models keep everything fixed except for the correlation function. That's the real degree of freedom. In the Diósi-Penrose model, the noise is white in time, but in space it is correlated, and the correlation function is given there. It is the Newtonian potential, and you see that there is a capital G, so there is a flavor of gravity, the idea that gravity plays a role in the collapse of the wave function; I will tell you something about this in a moment. And there is an h-bar, simply because it's the Schrödinger equation at the end of the day. This is a nice model because, apparently, it is without free parameters: there are just constants of nature that we know. Why apparently? Because the correlation function sits inside an integral, and when you integrate 1/r, it diverges. So if you take it literally, this model is physically inconsistent: it would imply an infinite, instantaneous collapse of the wave function of any system. You would never see a superposition in your life, which is not possible. People know about that; Diósi and Penrose know that these integrals diverge, and in fact you have to take a cutoff, either in space or in momentum. You cut the integral at high momenta, because you are in the non-relativistic regime; or, equivalently, you give an effective finite size to otherwise point-like particles. Particles are not really point-like in these models; they have a finite size. Instead of working with momentum, we prefer to work with position, so you have an effective size R0; but you could take an effective momentum cutoff, it would be the same. So there is a new effective parameter. This is not surprising: it's a phenomenological model, you cannot ask too much of it. Again, if there is some truth in it, it should come from a deeper-level theory.
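In formulas, the correlation function being described is the standard Diósi-Penrose kernel from the literature, up to numerical prefactors and conventions:

\[
G_{\rm DP}(\mathbf{x}-\mathbf{y}) \;=\; \frac{G}{\hbar}\,\frac{1}{|\mathbf{x}-\mathbf{y}|},
\]

and it is this kernel, inserted into the integrals of the dynamics, that diverges as \(\mathbf{x}\to\mathbf{y}\). The divergence is tamed by smearing the mass density over a length R0, or, equivalently, by cutting the integrals off at momenta of order \(\hbar/R_0\).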
So how do you choose the size of this parameter R0? Penrose suggested that the cutoff R0 depends on the physical system you are considering: for those of you who know about it, it would be the size of the wave function that solves the Schrödinger-Newton equation. That would be another story; just take it that there is a recipe to compute R0 for every physical system you consider. Diósi originally took the Compton wavelength, a cutoff which is independent of the system; it just depends on the type of particle. For a proton, it would be the corresponding Compton wavelength. This was the original idea; it was later abandoned. Just a note about Penrose. The full story of the Diósi-Penrose model is the following. Diósi and Penrose worked independently. Penrose put forward the idea that, according to him, in the correct quantum theory of gravity, superpositions of different spacetimes should not be stable. If you quantize a system, then you have superpositions; so in a standard quantization of gravity, you would have a superposition of one spacetime plus another spacetime, of gravity that goes one way and another way. For Penrose, instead, the true quantum theory of gravity should not be linear in that sense; the superposition principle should not be exactly right. And he gives an argument for that. It's kind of technical, so I don't really want to go into it unless you specifically ask me. So the true quantum theory of gravity, for him, does not tolerate superpositions of different spacetimes. When you have a superposition of different spacetimes, generated by a superposition of matter, then spacetime collapses; not in the sense of general relativity, but in the sense that the superposition collapses. And with the collapse of spacetime, you have the collapse of the massive superposition, and then the classicalization of the state of the system. He uses some phenomenology to compute the lifetime of a superposition, which ends up being, it's in the yellow cloud there, a lifetime of h-bar over E_G, where E_G is the gravitational energy content of the superposition. And then you understand that a superposition of an atom here plus there has a smaller gravitational content than a superposition of a Schrödinger cat here plus there. If you plug in the numbers, what happens is that the superposition of an atom lives quite long, longer than experimental capabilities, so it stays quantum. But the Schrödinger cat, instead, a superposition of a cat, would live for a split second; it would immediately collapse. And this is not because you tune some parameter; you just compute the gravitational content. It's somehow interesting to see that gravity, although weak, has exactly the right weakness, or the right strength, to guarantee that microscopic systems behave quantum mechanically and macroscopic systems behave classically. This would not work with the electromagnetic Coulomb force; the gravitational force has the right strength, or weakness, for that. So it's at least an interesting coincidence. Okay: Penrose gave the idea, and Diósi built the equation to reproduce this phenomenology; that's why it's called the Diósi-Penrose model. So, we did a test of the Diósi-Penrose model, which by now was published almost two years ago, by checking the radiation emission.
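To put rough numbers on this argument, here is a back-of-the-envelope sketch; this is my own illustration with made-up example masses and sizes, taking E_G of order G m^2/R for a superposition separated by more than the object's size:

```python
# Rough illustration of Penrose's collapse lifetime tau = hbar / E_G.
# Assumption (for orders of magnitude only): for a superposition separated
# by more than the object's size R, take E_G ~ G * m**2 / R.

G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s

def tau_dp(m, R):
    """Lifetime tau = hbar / E_G, with E_G ~ G m^2 / R (schematic)."""
    E_G = G * m**2 / R
    return hbar / E_G

# One nucleon, with R taken as its Compton wavelength (~1.3e-15 m):
print(tau_dp(1.67e-27, 1.3e-15))   # ~7e14 s, tens of millions of years: stays quantum

# A small "Schroedinger cat": a 1-microgram sphere of ~1 micron radius:
print(tau_dp(1e-9, 1e-6))          # ~2e-12 s: collapses essentially immediately
```

This is exactly the dichotomy described above: atoms stay quantum far beyond any experiment's patience, while even a tiny macroscopic object classicalizes in a split second.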
So, the idea here, as I said before, is that if you take a charged particle and you plug it into the equation there, the prediction is that the particle emits radiation, because the particle is accelerated by the noise: the particle jiggles, it's charged, it emits radiation. Of course, the spectrum of this radiation is extremely faint. Nothing strange happens; you don't see photons coming out with your eyes, but with a very sophisticated detector you would see them. And that was the experimental part: a germanium detector, let's see if I have the picture, a germanium detector of the type typically used for dark matter searches, in the underground laboratory at Gran Sasso. So it's an experiment for very rare events. Here you have the schematics of the detector and the spectrum that was taken over two months. We made a theoretical prediction. Predicting in full detail the photons coming out of this germanium detector is very complicated, so we had to make some approximations. In particular, the easiest calculation, which was good enough, was to go to the energy range indicated there, which is high enough that you only have to reproduce the tail of the spectrum. The spectrum has very complicated peaks that you don't want to analyze theoretically, or at least not as the first thing; you just check the tail of the spectrum at high energy. And there you have the 1/omega behavior. So this is the emission rate: the number of photons per unit time and per unit frequency omega. The theoretical formula is easy enough because we are in a very nice energy regime. You see that there are the constants of nature: capital G, because it's gravity; h-bar, because it's quantum mechanics; the electric charge, because we are checking the emission of radiation; the speed of light, and so on. Then N and N_a measure the size of the system: the more atoms you have, the more photons should come out, as simple as that. Then you have the dependence on the cutoff R0: the rate goes as one over the third power of R0. And then you have the tail: omega is the frequency of the photons, and the tail just goes as 1/omega. Okay. The data were taken over 62 days, and the total counts were 576, so not so many. But the important thing, and the reason the experiment was worthwhile, is that the experimentalists had very good control of the detector. They could analyze in a very accurate way the internal workings of the detector, and in particular the signals that would come out anyhow, because of radioactivity and because of the environment. And they were able to explain, in a statistical sense of course, about 500 counts out of those 576 as coming from known physical effects. Most of them were coming from internal radioactivity of the detector: it is an ultra-pure germanium detector, but not a perfect, ideal one; there is some internal radioactivity which accounts for some of the signals. If you sum up all the known effects, most of the counts are reproduced: the black histogram is the tail of the spectrum, and the green histogram is the simulation done with the software that they developed.
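Keeping only the dependences just listed (I omit the exact numerical prefactor, which is in the published paper), the predicted emission rate has the schematic form

\[
\frac{d\Gamma_t}{d\omega} \;\propto\; \frac{G\,e^2\,N N_a}{c^3\,R_0^{3}\,\omega},
\]

linear in the number of atoms and charges N N_a (the amplification mechanism), very sensitive to the cutoff through \(1/R_0^3\), which is why the data translate into a lower bound on R0, and with the flat \(1/\omega\) tail at high energies.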
So the rest, let's say, again in a statistical sense, the remaining 70 or so counts, were not explained. Not explained doesn't mean that they are unexplainable: they could come from other physical effects that were not taken into account, or potentially from unknown physics, like this spontaneous collapse. But it's only 70 counts; there are not many. So if they all come from a collapse model, the collapse cannot be too strong, because if it were too strong, there should be more counts. So you set a bound on the strength of the collapse, and in particular here you set a lower bound, because the parameter R0 is in the denominator: the smaller R0, the stronger the effect, and the effect was not that strong. And this is the conclusion of the paper. You see the value of R0: the red circle is the value of R0 computed according to the recipe suggested by Penrose, applied to this specific system, the germanium detector, so between 10^-12 and 10^-11. The blue and red lines are exclusion regions coming from previous experiments, and the green line is the one that came from this experiment. And that means that the value is experimentally excluded. So the conclusion of our paper is that the Diósi-Penrose model in its original formulation, if you just read the literature and apply the rules, is excluded by experiments. Does it mean that Penrose's idea that gravity causes the collapse of the wave function is wrong? Well, perhaps yes, but not necessarily. It just means that if it is true, it cannot be implemented in this simple way by these models; it requires a more sophisticated model that we don't have yet. We are now doing an extended analysis, also for the CSL model that I will tell you about; this is recent research that we have been working on. The blue line is the tail that you obtain if you do first-order perturbation theory in a very naive way, which gives you the order of magnitude of the effect. But if you extend the analysis to lower energies and to higher orders, you see that there is a modulation, which is the red curve. So the blue curve is just 1/omega; if you refine the analysis, you get a modulation, which is nice, because you get a specific signature of the effect. In an ideal world, you see the effect and it's a wow thing; more realistically, you can set stronger bounds on the parameter, because you have a specific behavior which is not just 1/omega but richer, so you can do a better analysis. Okay, so the CSL model. The CSL model is always the same equation, it's the third time you see it, just with a different correlation function, because the research came from a different point of view, from the works of Ghirardi, Rimini, and Weber back in the 80s. Now the correlation function is probably one of the first correlation functions you would write down when you think of a noise: it's just a Gaussian, with a finite correlation length. So you have two parameters: r_C is the width of the Gaussian, the correlation length of the noise, and lambda is basically the strength of the noise. m0 is a reference mass, the mass of a nucleon; it doesn't really play a role, it's just there to set the scale of lambda. So you have two parameters in this model.
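In the conventions of the equation sketched earlier, the standard CSL choice is (normalizations vary slightly across papers)

\[
G_{\rm CSL}(\mathbf{x}-\mathbf{y}) \;=\; \exp\!\left(-\frac{|\mathbf{x}-\mathbf{y}|^2}{4\,r_C^2}\right),
\]

with the collapse rate lambda multiplying the new terms. For reference, the historical GRW choice of parameters is lambda of order 10^-16 s^-1 with r_C of order 10^-7 m, while Adler later argued for a lambda several orders of magnitude larger; these are the theoretical suggestions that appear as points in the exclusion plots discussed below.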
Again, now you want to test whether this model is right or wrong, and now the exclusion plot is two-dimensional: r_C on the horizontal axis and lambda on the vertical axis. The bottom gray area is a region that is excluded for theoretical reasons: for those values of the parameters, the collapse of the wave function is too weak, and the model would not guarantee that classical objects behave classically. And if classical objects don't behave classically, these models are useless, and then you dismiss them. So you don't want the collapse to be too weak, otherwise the models are not interesting. Of course, the border is sharp in the picture, but you can play with what "being classical" means, so there is some freedom in the size of that gray area; but more or less it is there. The colored areas instead are excluded by interferometric experiments. The one with the entangled diamonds is not so relevant; the two important ones are atom interferometry and matter-wave interferometry, the two on the left. Atom interferometry is the result by the group of Mark Kasevich. What did they do with the atomic fountain? They created the largest superposition in space achieved so far with matter: almost half a meter. Half a meter was debated; in a later publication they spoke about 30 centimeters, but from my point of view 30 or 50 centimeters is the same. So they were able to put single atoms in a superposition over a distance of, more or less, 30 centimeters, which as far as I know is the largest superposition of matter ever created in a laboratory; with light, of course, it's much easier. So it's a macroscopic superposition, and the superposition also lived for about one second, a macroscopic time. But it's a microscopic system, a single atom. And that's the weak part of the story from the point of view of these models: because the system is microscopic, this was not really a test of the superposition principle towards the macroscopic scale. The bound, in fact, is the green area that you see in the picture, which is far away from the gray area; you can count how many orders of magnitude. By the way, the black bullets there are theoretical suggestions for possible values of lambda and r_C, based on different arguments. The other experiment is matter-wave interferometry with large molecules, by the group of Markus Arndt in Vienna. The current record is, I hope I updated it, yes, I think I did: 10,000 atomic mass units. So it's a very complicated molecule. The mass is much larger, but then you lose in superposition distance: it's not centimeters anymore, it's a fraction of a micron. And the time is milliseconds, not seconds. So you gain on one side and you lose on the other. In fact the bound is better than the one from the atomic fountain, but not so much better, just a couple of orders of magnitude; you're still far away from the gray region. So what does that mean? From the point of view of collapse models, it means that these experiments, how much time do I have?
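As a rough illustration of why these bounds land where they do (my own order-of-magnitude sketch, not the published analysis): for a system of N nucleons, small compared to r_C, in a superposition separated by more than r_C, the CSL reduction rate is roughly lambda times N squared, so observing interference after a time t implies roughly lambda below 1/(N^2 t):

```python
# Rough CSL bound from interferometry (order-of-magnitude sketch only).
# Assumptions: system smaller than r_C, superposition separation larger
# than r_C, interference observed with good contrast after time t.

def lambda_upper_bound(N, t):
    """Upper bound on the CSL rate lambda from lambda * N**2 * t <~ 1."""
    return 1.0 / (N**2 * t)

# Atomic fountain: a single Cs atom (N ~ 133 nucleons), held for ~1 s:
print(lambda_upper_bound(133, 1.0))    # ~6e-5 per second

# Macromolecule: ~10,000 amu (N ~ 1e4), superposition lasting ~1 ms:
print(lambda_upper_bound(1e4, 1e-3))   # ~1e-5 per second

# Both sit roughly eleven orders of magnitude above the GRW value of
# 1e-16 per second, which is why the excluded areas in the plot stay
# far from the bottom gray region.
```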
Yes, I'm close to the end. From the point of view of interferometric experiments, you see how much freedom you still have: the white region is the unexplored region. Perhaps you don't care about collapse models; but the point about testing the superposition principle remains. Experimentally, we know that molecules up to 10,000 atomic mass units are quantum, in the sense of being in a superposition of here plus there. But that's it. And such molecules are large from the quantum perspective, but from the molecular perspective they are not that large. Is DNA quantum? Is a virus quantum? Is a cell quantum? Is a dust grain quantum? We don't know yet. So there is a vast ignorance about the validity of the superposition principle towards the macroscopic scale, and for the reasons I gave before, it's worthwhile exploring. For non-interferometric experiments the situation is better, because as I said they are easier, or less difficult, to perform. This is with cold atoms. Again, I'm a bit embarrassed to talk about cold atoms here; it's a classical analysis at the end of the day. You take a cloud of cold atoms, you cool it, and you have good control of the cooling. According to collapse models, you should not be able to cool it too much, because there is this noise that makes the cloud expand: the cloud should not get that cold. But it was that cold. That's the theoretical reference there, and the picture is taken from a work of the group of Kasevich, again. So you exclude the orange region: for those values of the parameters, the cloud should have expanded more than what they saw in the experiment. Then the radiation experiments I told you about: before, they were applied to the Diósi-Penrose model, now they are applied to the CSL model, and you see that they exclude the theoretical suggestion by Adler. Then gravitational wave detectors: these are macroscopic experiments, LIGO, LISA, Virgo. They are huge objects, but as I said before, it's a classical experiment, and in these devices you have very good control of the mechanical motion of the system, because you want to see gravitational waves. If they had such good control, it is because the collapse noise is not too strong; and so you kill all the region on the right. Then you have specific experiments, like the cantilever experiment and updated cantilever experiments, and some other updates; I'm going quickly because I'm towards the end. That's the state of the art, together with a review that appeared early last year. You see that the idea of Adler is gone by now; it has been disproved by different types of experiments. And then you have a specific area that still needs to be explored, which looks small; in some sense it is, and in some sense it is not, because again you can count the orders of magnitude there. But that's basically the fate of these models. I am not surprised that most of the region up there has been excluded, because it would have been a miracle, basically, to see a collapse effect immediately. Now the story becomes interesting, and we will see.
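For the cold-atom case, the effect being bounded can be sketched as follows; this is my own illustration, using a commonly quoted CSL heating-rate formula which I give only up to order-one factors. The collapse noise heats the center of mass of each atom, so the cloud temperature drifts upward at a rate of roughly \( dT/dt \sim \hbar^2 \lambda N / (2 k_B m_0 r_C^2) \) for an atom of N nucleons smaller than r_C:

```python
# Rough CSL heating of a cold-atom cloud (order-of-magnitude sketch).
# Commonly quoted center-of-mass heating rate for an atom of N nucleons
# (atom size << r_C), up to O(1) factors:
#     dT/dt ~ hbar**2 * lam * N / (2 * kB * m0 * r_C**2)

hbar = 1.055e-34   # J s
kB   = 1.381e-23   # J/K
m0   = 1.67e-27    # nucleon mass, kg

def heating_rate(lam, r_C=1e-7, N=87):   # Rb-87 as an example atom
    """Temperature drift in K/s induced by the CSL noise (schematic)."""
    return hbar**2 * lam * N / (2 * kB * m0 * r_C**2)

print(heating_rate(1e-16))  # GRW value:   ~2e-19 K/s, far below any sensitivity
print(heating_rate(1e-8))   # Adler value: ~2e-11 K/s, relevant for nK clouds
                            # monitored over seconds of expansion
```

This is why the cold-atom data cut away only the large-lambda part of the parameter space rather than reaching down to the GRW value.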
It will take a long time to really check whether there is a sign of a collapse effect or not in future experiments. Okay, there was one more slide, but I can skip it because it's outdated. So I'll just go quickly through the references, if you are interested: these are related to the work that we have done on collapse models. Not only did we look at the motion of a harmonic oscillator, we also looked at rotations. We explored the possibility of doing experiments in space. We played with the mass distribution of the system. We applied collapse models to cosmology, and as a side project we applied them also to quantum computers, just to tell you that there are many things you can do with these kinds of equations. And you can play with the noise, make it non-white, or introduce dissipative effects, and the parameter space changes. So this is just a flavor of the many things that we did with these models. Okay, thank you for your attention; this is my group, and these are the funding agencies.