Thanks everybody for showing up. It's my pleasure to give this course together with Alex, as Matteo was anticipating. I will start today's lecture as follows: first a couple of technical remarks, and then something about the motivations that drove us in this direction. You can see already from the title that this is a hybrid topic connecting two, in principle, distinct fields. On one side you have data mining, which you can interpret as a specific form of unsupervised learning; this generally pertains to the fields of data science, machine learning, and artificial intelligence. On the other side you have something typically connected to physics: the many-body problem. That entails several things; the principal one is statistical mechanics, statistical physics broadly speaking, but also, related to that, condensed matter, particle physics, and so on and so forth. So why on earth are we trying to work at the interface between these two, in principle, disparate fields? One can have different sources of motivation. The first is that in the last 20 to 30 years, in all of the fields on the right, there has been an explosion of new computational techniques and also of computational power. This has changed in part the paradigm with which we approach the many-body problem, by adding a new ingredient: a huge amount of data that needs to be interpreted. This is really something that emerged from the 1980s to today. It is of course interesting for a theorist, but one should not forget that in recent years there have been experimental developments as well. We will not be dealing with those, but to some extent we had the idea also because experiments produce a large amount of data.
Another motivation, which for those of you interested in the field of complex systems is probably even deeper and more fundamental, is the following. When we think about the many-body problem, specifically in the context of statistical physics, there is one concept that we typically emphasize and utilize to understand and connect different phenomena: the concept of universality. Universality tells us that very different systems, potentially very different problems, share only very few things in common, the typical example being symmetry, and yet behave in exactly the same manner. This can happen when looking at the behavior of ultracold atoms, of trapped molecules, or of electrons in a solid; you can think of the example that best fits your specific background. This is something that really pertains to physics. The idea is that we would like to see whether this concept of universality can teach us something about data sets: whether, instead of an understanding of data sets based on the tools developed in the field of data science, we can have a different understanding that comes from the direction of statistical physics. Of course, this is just one direction of the motivation. The other direction is to exploit the power of these new methods in the field of data science, supervised and unsupervised learning, to learn something about statistical mechanics models, about many-body problems, that would otherwise be very hard to understand. These are the two motivations, and at the same time the two goals of this short course: to understand whether we can identify universality in data sets in the same way that we identify universality in physical phenomena, exactly the same way.
And whether we can actually look at data generated in experiments or in computations, analyze them with new techniques coming from the field of unsupervised learning, and understand something about these data structures and then about the physical phenomenon. So how will we be doing this? Since we have two teachers, our approach really comes from two different sides. In terms of methods, after an introduction that will take place today, we will proceed in two ways. First, I will review methods, coming from the physics side, to generate large, meaningful data structures related to physical problems. In particular, for physical complex systems, I will utilize classical statistical mechanics, and I will discuss cluster algorithms. This will be tomorrow's part. You should really not see it as a super technical lecture; I think it will be mostly conceptual. What I would like you to appreciate from a methodological side is that, as theorists, we have tools to generate data corresponding to physical phenomena. (A student asks what is written in front of "cluster", and asks to write a bit larger on the right-hand side of the blackboard, where the words are too small to see.) Cluster algorithms; is it readable now? Thank you for pointing this out: after a year of writing all my lectures on a tablet, I am finally back at a real blackboard and have to adjust to the size again, so do not hesitate to remind me to write larger during the course of the lecture. That, logically, is what the first part of the course will be about.
And then there will be a second part, where Alex will develop the tools for analysis: now that we have generated these data structures, we have to analyze them. In the last lecture, I will show you how to apply these tools to the physical phenomena and how to establish the links we targeted at the beginning: observe universality in data sets, and utilize unsupervised learning to learn something about the many-body problem. Any questions on this? None so far; very good. Let me maybe remind you of one thing: if you have a physics question, it is much better to just unmute yourself and ask straight away. "Professor, is the universality in data science also characterized by some exponents, as we see in the case of phase transitions or percolation theory?" Yes, the answer to your question is yes, but the reason why it is so I will explain two weeks from now. Are you satisfied, or do you want a primer? "Two weeks from now will be enough for me." Okay, good, then we will do it in two weeks. Cheers. Are there more questions? What we will do today, in lecture one, is set the stage. I know that you come from a very broad spectrum of backgrounds, so I decided to give a short introduction to the Ising model, which I am sure all of you have heard about; I just want to review some of its basic properties in this first lecture so that we set the language, and I will also use it to illustrate, on a very minimal example, what I will tell you in the last lecture. Before that, let me come back to the concepts of universality and of phase transitions. So what is a phase transition? Typically, when we have a substance, we have different phases at equilibrium, and the substance can be anything. Let me draw two examples. You can take water: in the pressure-temperature phase diagram of water we have liquid, we have vapor, we have ice.
These are different phases of matter, and when you move from one phase to another, you cross what is called a phase boundary. These are called coexistence lines, and some of them end at critical points; otherwise, crossing them is what we call a phase transition of first order. I will give you the definition in a second. But before that, I want to show you that this is not just something at the level of describing things that we can touch. One can also go more abstract and find exactly the same type of physical picture. Maybe this is something that every one of you knows, but if I asked you what the phase diagram of quantum chromodynamics is, you might get scared and say: what is going on here, this is very strange particle physics, I don't know what you're talking about. But at the very end, what one really draws is again a very simple phase diagram, in that case with temperature and something called baryon density, or chemical potential. And if you look at quantum chromodynamics, you will also see that there are lines, some of which are dotted, and phases; one of them is the so-called quark-gluon plasma. (Would you please write a bit larger? Yes, thank you very much.) This is just to tell you that typical substances, at whatever scale they live, whether in particle physics or in everyday scenarios, display phases and phase transitions. Phase transitions fall into two major classes. I wanted to stop in a minute, but let me say this first. The first class are the so-called first-order transitions. At these transitions there is typically coexistence along the transition lines, and they are signaled by discontinuities of the first derivatives of thermodynamic potentials, or of order parameters. These are typically not related to universality.
There is a sense in which universality appears also at first-order transitions, but we will not be dealing with that. The transitions we are interested in are second order: transitions where one observes discontinuities in thermodynamic potentials or order parameters not at first order, but at second or higher order. In the past, there was a full classification depending on the order at which a discontinuity appeared; nowadays it is more common to simply call all such transitions continuous. For the purposes of the examples we will be discussing, "second order" and "continuous" are equivalent. There are many examples; one typical example is the two-dimensional Ising model, and this is the one we will be discussing today, as one of our reference points. Now, there are also transitions that do not immediately fall into this classification. Some of you may have heard about the Berezinskii-Kosterlitz-Thouless (BKT) transition: transitions where the nature of the excitations that drive them is topological. They typically do not display any discontinuity at finite order, so they are sometimes referred to as infinite-order transitions. I will not be discussing BKT, even though some of the things I will tell you about data sets can also be extended to this class. I would like to take a short break now, just to fight Zoom fatigue, something very short like two minutes; of course, if you have questions in the meanwhile, I will be happy to answer. "Professor, may I ask a question now?" Yes, sure. "You said the second-order transition is characterized by the first derivative of the free energy being continuous and the second derivative discontinuous, and for first order the first derivative is discontinuous. But for an infinite-order transition, every derivative up to infinite order is continuous?" Correct. Yes.
In the sense that you will not find a discontinuity until you go down to infinite order. I could reply to your question in more detail, but that would take the full length of this lecture. If you are interested, a nice reference where some of these things are discussed in a way I find accessible is the book by Daniel Arovas; it is free, it is online, and I think it is a really beautiful reference for statistical mechanics problems. Some of the things you have asked me are discussed there. "Thank you. I have one more question. Is there anything characteristic about the correlation length in the infinite-order case? The correlation length does not follow a power law, but rather an exponential divergence, like we see in the BKT transition. Is that a characteristic property of all infinite-order transitions, or only of BKT?" Okay, to be honest with you, there are not so many infinite-order transitions, so one cannot fully characterize them as a class. My intuition would be that this exponential scaling with respect to the reduced temperature is a common feature of all infinite-order transitions, because they are intrinsically non-perturbative. So I would say it is generic. Of course the specific form, like the square root of the reduced temperature that appears in BKT, can change, but the fact that it is exponential should not change. "Thank you. Cheers." In the chat, somebody asks whether some reference textbooks could be helpful for this course. Of course, and I will send you everything. What we have prepared with Alex is actually a full TeX file with the text and the figures, and you will get it at the end of the course, or maybe you will get a primer after the third lecture; we have to see.
There is not really a book about this topic yet, for obvious reasons, but we can already suggest certain references that cover some of the topics we discuss; in particular, at the end of today, I will put a few references on the Matrix channel. All of you that are on the Matrix channel will get the references tonight, and those of you that are not yet there, please subscribe quickly; if you don't know how to do it, please contact the secretary of the course. Then there is a question: is this course mainly about data mining with phase transitions? Yes, it will be mostly about data mining with phase transitions. One can of course also try to data-mine problems related not to transitions themselves but, for instance, to phases of matter. This is a very interesting topic, and there are some known results in that field, but unfortunately, for reasons of time, I will not be covering it. If you are interested in that topic specifically, I suggest that you post this question on the Matrix channel tonight and I will send you references; and if you are not on Matrix, contact the secretary, who will help you with that. I think our two-minute pause went a bit longer, but that's okay. So, we have said that there are phase transitions of two different types, and we have mentioned that what we want now is a model where we can review some of the features of these transitions. The model is the two-dimensional Ising model on the square lattice. How is this model defined? Most of you already know: one defines a lattice, for us a square lattice, and then one assigns variables to each lattice site. These variables can only take two values, plus one or minus one, and our convention will be that plus one indicates an up spin and minus one a down spin.
That is the configuration space of our model. The model is defined by a classical Hamiltonian, or energy functional, and this energy functional is very simple: we sum over all pairs of nearest neighbors i and j (that is what this notation means) the product of the two spins, multiplied by a coupling J, so H = J Σ_⟨i,j⟩ σ_i σ_j. The sign of J matters: if J is positive, the energy favors neighboring spins which are anti-aligned, so that each contribution to the total energy is negative; if J is negative, one favors the opposite, that is, ferromagnetism, and this is the choice we will be making most of the time in our discussion. Now, the Ising model does not really have any other parameter, so the only thing we can think of to drive a transition in this model is temperature. When we draw the phase diagram of the Ising model, the axis will be temperature, starting from zero. Even by just looking at this energy functional, we can easily guess how the phase diagram looks. If the temperature is very small, close to zero (temperature has to be measured in units of something, and for us the unit will be J), then all the spins, because they would like to minimize this energy, will point in the same direction. Here we will have what is called a ferromagnetic phase. Of course, nobody tells me that this will be the preferred direction of my spins: there is another configuration, with all spins reversed, which is equally probable. We will not be discussing that; we are just interested in the fact that this phase is ordered. Oppositely, if the temperature overcomes this regime, in particular if it overcomes J by a lot, at some point we expect the spins not to care about the energy anymore, because they have a lot of thermal fluctuations available: disordered configurations like this will be just as probable as configurations like that.
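To make the energy functional just described concrete, here is a minimal sketch of the Ising energy on a small square lattice with periodic boundaries, following the lecture's sign convention (J < 0 is ferromagnetic); the function name and the 4x4 lattice size are illustrative choices, not part of the lecture.

```python
import numpy as np

def ising_energy(spins: np.ndarray, J: float = -1.0) -> float:
    """Energy H = J * sum over nearest-neighbor pairs of s_i * s_j on a
    square lattice with periodic boundaries. With J < 0, aligned spins
    lower the energy (ferromagnetic), as in the lecture's convention."""
    right = np.roll(spins, -1, axis=1)  # right neighbor of each site
    down = np.roll(spins, -1, axis=0)   # neighbor below each site
    return J * float(np.sum(spins * right + spins * down))

# All-up configuration on a 4x4 lattice: 2*L*L = 32 bonds, each
# contributing J = -1, so the total energy is -32.
all_up = np.ones((4, 4), dtype=int)
print(ising_energy(all_up))  # -32.0
```

Flipping a single spin breaks its four bonds and raises the energy by 8|J|, which is the elementary move that thermal fluctuations exploit at finite temperature.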
This phase is typically called paramagnetic. The two phases are separated by what is called a second-order phase transition at a critical value of the temperature, Tc, measured in units of J. This value can be computed analytically, using for instance the duality of the Ising model; if I remember correctly it is 2 divided by the logarithm of 1 plus the square root of 2, roughly 2.269, but the precise number does not really matter. These are the transitions we are interested in. So what characterizes this transition? So far this is a qualitative picture of the problem, and now we would like to put it on firmer grounds: how do we distinguish the ferromagnet from the paramagnet? The idea is that we can utilize what is called a correlation function. Let me define it and then provide intuition. We describe the system by looking at the correlation between two spins at distance r, defined for simplicity as C(r) = ⟨σ_j σ_{j+r}⟩; it does not really matter in which direction we look, so suppose we look along one direction only. How will this correlation function behave? If we are in the ferromagnetic phase, all of the spins, or a very large majority of them, will be pointing in one direction, so there will be a finite correlation between two spins at arbitrary distance. If we plot this correlation as a function of distance in the ferromagnetic case, it will decay a bit at short distances, but then it will saturate to a constant at distances much larger than one. This constant, for those of you that have already seen it, is nothing but the order parameter squared. Instead, if we are at very high temperature, C(r) at large distances decays very fast, because there is no correlation whatsoever.
In this phase the spins are essentially independent variables, and the correlation decays exponentially: if we plot it, it approaches zero within a length scale, which is the correlation length. What is interesting is what happens in the middle, exactly at the transition point. As you approach it from the paramagnetic side, the exponential decay becomes slower and slower, and at some point it transforms into something that decays as a power law. For a second-order transition it happens like this: C(r) ~ 1/r^η. Let me write this exponent precisely: in the two-dimensional Ising model it is known exactly and is called η, so at the classical critical point of this model the correlation does decay, but very slowly. This connects to the question asked by one of you minutes ago: η is one of the critical exponents. Now, all this change of behavior can be described in a systematic manner; that comes from the renormalization group and so on and so forth. But here I just want to point out one relation: how the correlation length approaches the critical point. At the critical point there is no exponential decay anymore; you can say that the correlation length is infinite. The way this infinity is approached is described by the equation ξ ∝ |T − Tc|^(−ν). The correlation length approaches infinity in a way which is not random: for a second-order transition it is dictated by another critical exponent, the exponent ν. These are not just symbols; they have actual numbers, which can be computed analytically in the case of the 2D Ising model. In particular, η is equal to one quarter.
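The numbers quoted in this stretch are easy to check numerically. A minimal sketch with k_B = 1 and temperatures in units of |J|; note that the lecture quotes only η = 1/4, while ν = 1 is the known exact value for the 2D Ising model added here, and ξ0 is a hypothetical non-universal amplitude.

```python
import math

# Exact critical temperature of the 2D Ising model from the
# Kramers-Wannier duality: T_c = 2 / ln(1 + sqrt(2)) ≈ 2.269.
Tc = 2.0 / math.log(1.0 + math.sqrt(2.0))

ETA = 0.25  # exact exponent eta of the 2D Ising model (lecture)
NU = 1.0    # exact correlation-length exponent nu of the 2D Ising model

def critical_corr(r: float) -> float:
    """Power-law decay C(r) ~ r**(-eta) right at T = Tc."""
    return r ** (-ETA)

def corr_length(T: float, xi0: float = 1.0) -> float:
    """Divergence xi = xi0 * |T - Tc|**(-nu) near the critical point;
    xi0 is a hypothetical non-universal amplitude."""
    return xi0 * abs(T - Tc) ** (-NU)

print(round(Tc, 3))  # 2.269
# Doubling the distance at criticality reduces C by a factor 2**0.25:
print(round(critical_corr(2.0) / critical_corr(4.0), 3))  # 1.189
# Halving the distance to Tc doubles the correlation length (nu = 1):
print(round(corr_length(Tc + 0.1) / corr_length(Tc + 0.2), 6))  # 2.0
```

The two ratios illustrate the scale-free nature of the critical point: only the exponents, not any microscopic scale, control how correlations change with distance and with the distance to Tc.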
The set of critical exponents, which we will not be deriving here, defines what is called a universality class. That is a whole story in itself, with different forms and different sources, but I think it is important to give you this reminder: first, because at the very end we will have to connect it with data science, so it had better be on firm ground; and also because it shows that a very naive but physical approach to this problem already allows us to capture the basic features of what is going on, even before any more refined understanding. I will take a short break now and read your questions from the chat. There is a comment here, but not really a question; if somebody has a question about this discussion, I strongly suggest that he or she unmutes and asks it. Somebody asks whether temperature is an external parameter. Okay; for our purposes, I think you can understand temperature simply as an effective parameter describing the fact that our system is not closed: the 2D Ising model is not isolated, it is in contact with an environment, and this environment, through a competition of energy scales, can be effectively described in this way. Other questions? Maybe I stop again for two minutes in case somebody has one. "Yes, I have a question." Please. "As you say, you have three different forms for the correlation function: in the para phase, in the ferro phase, and the critical one. What is the crossover of the correlation length from ferro to para? Can I see it not with these three equations, but with one equation for all temperatures?" That is an excellent question, and unfortunately we will not be touching it in full. The idea is that this correlation function is a bit weird, because it is constant in one phase, critical at the transition, and exponential in the other.
If you want to do a full-fledged analysis in statistical mechanics, what you typically look at is not the function I told you about before, but a very close cousin: the connected correlation function. It is defined as C_c(r) = ⟨σ_j σ_{j+r}⟩ − ⟨σ_j⟩⟨σ_{j+r}⟩; this will be in the lecture notes, but I will not have time to discuss it in detail. At the critical point and in the paramagnetic phase, this correlation function behaves in the same manner as the full correlation function, because the single-spin expectation values are essentially equal to zero: there is no average preferred direction for the spins. But inside the ferromagnetic phase, where the spins are aligned, the subtracted term cancels the constant part. So the connected two-point correlation function behaves in a different manner: it is still critical at the critical point, but it decays exponentially to zero both in the paramagnet and in the ferromagnet. This allows us to define a correlation length approaching the critical point from the paramagnetic side, and a similar equation approaching the critical point from the other side. You will not be surprised to learn that, in the very large majority of cases, both are dictated by the very same critical exponent. Not only this: maybe for some of you it is also interesting to note that here I wrote only proportionality relations, but one can make more rigorous statements, especially for thermodynamic potentials, and compute the coefficients of these expansions, which are called amplitudes. The values of the amplitudes on the two sides will not be the same, but in some cases the ratios of the amplitudes approaching from the left and from the right are also quantities that characterize a universality class.
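The connected correlation function just defined is straightforward to estimate from sampled spin configurations; here is a minimal sketch, where the function name and the periodic-boundary estimator averaged over all sites are illustrative choices.

```python
import numpy as np

def connected_correlation(samples: np.ndarray, r: int) -> float:
    """Estimate C_c(r) = <s_j s_{j+r}> - <s_j><s_{j+r}> along one
    lattice axis (periodic boundaries), averaging over all sites and
    all samples. `samples` has shape (n_samples, L, L), entries +/-1."""
    shifted = np.roll(samples, -r, axis=2)      # partner spin at distance r
    full = np.mean(samples * shifted)           # <s_j s_{j+r}>
    disc = np.mean(samples) * np.mean(shifted)  # <s_j> <s_{j+r}>
    return float(full - disc)

# Fully ordered samples: the full correlation is 1, but the connected
# part vanishes, exactly as described for the ferromagnetic phase.
ordered = np.ones((10, 4, 4), dtype=int)
print(connected_correlation(ordered, 1))  # 0.0
```

On real Monte Carlo samples one would fit the decay of this estimator with r to extract the correlation length on either side of the transition.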
So, to answer your question: there is a lot to be learned also by looking at the transition from the other side, but this requires the connected correlation functions, and we will not be touching this. Cheers. "I was wondering if we can also consider the transition from laminar flow to turbulence as an example of a transition." This was a direct message, so let me repeat it for everybody: a person is asking whether the transition from laminar flow to turbulence is also an example of a transition. Unfortunately, in our context we will only be dealing with equilibrium, so we will not consider that specific phenomenon. "Please, what does universality mean? Is universality applied only to phase transitions?" In our context, yes, it will be something that we define specifically for transitions. However, from the formulas you have seen, you can already guess that universality dictates not only how the system behaves exactly at the transition point, but also how it behaves in the close vicinity of the transition; what "close vicinity" means is a non-universal aspect. "May I ask another question?" Please, sure. "Thinking about what you just explained, I wonder if it means that we don't have the full two-point correlation function for the 3D Ising model, right?" No, no, forget about the 3D; let us focus on the 2D. For the 3D, the very same reasoning will also work. "Thank you." Okay. So far we have been very naive, but it is important to tell you how we compute these correlation functions, which is something we have not discussed. Of course, all of you know it; it is the elephant in the room that we have to introduce. The elephant in the room is that whenever we talk about classical statistical mechanics at equilibrium, there is one object, one quantity, which plays really the most important, or at the very least the most prominent, role. This quantity is the partition function.
The partition function is defined very simply; let me define it for the case of the Ising model, but of course you can define it for whatever model you are interested in. It is given by the sum over all possible configurations of our N spins (N being the number of spins in the system) of e to the minus βH: Z = Σ_configurations e^(−βH), where β is the inverse temperature measured in units of the Boltzmann constant, and H is the energy of the spin configuration. That is the partition function, and it is a fundamental object: it allows us to compute thermodynamic potentials, grand potentials, and so on and so forth. Most importantly, it is the object we typically utilize to compute thermal averages, that is, expectation values. If you are interested in the correlation of some observable O, the way to define it is as the sum over all possible configurations of O times e^(−βH), divided by Z, our partition function. So it is really fundamental. But if you notice, when we talk about physical phenomena, we typically never say "the partition function is doing this or that". We typically think about observables, because the partition function does not look like an observable; in fact, it is probably not one. We typically look at correlations, or energies, or responses like the specific heat; we do not look at the partition function as an object in itself. The course that we will be teaching takes, if you want, exactly the opposite perspective: we will not be interested in the details of what happens to correlations or to specific observables. We will target the partition function itself, treating it in the same way one treats an observable: we will characterize the partition function. This is our goal. How do we characterize a partition function? In general, this is very complicated, but we can already see that it can be related to a data structure.
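The two formulas just written on the board, Z = Σ e^(−βH) and ⟨O⟩ = Σ O e^(−βH) / Z, can be checked by brute force on a tiny system. A minimal sketch, assuming a hypothetical 3-spin chain geometry (the lecture's three-site example does not fix the geometry) with k_B = 1:

```python
import itertools
import math

def partition_function(beta: float, J: float = -1.0) -> float:
    """Z = sum over all spin configurations of exp(-beta * H), for a
    hypothetical 3-spin chain with H = J*(s1*s2 + s2*s3), J < 0
    (ferromagnetic), by enumerating all 2**3 = 8 states."""
    Z = 0.0
    for s in itertools.product([+1, -1], repeat=3):
        H = J * (s[0] * s[1] + s[1] * s[2])
        Z += math.exp(-beta * H)
    return Z

def thermal_average(observable, beta: float, J: float = -1.0) -> float:
    """<O> = sum_s O(s) exp(-beta*H[s]) / Z, the thermal-average formula."""
    Z, num = 0.0, 0.0
    for s in itertools.product([+1, -1], repeat=3):
        H = J * (s[0] * s[1] + s[1] * s[2])
        w = math.exp(-beta * H)
        Z += w
        num += observable(s) * w
    return num / Z

# At infinite temperature (beta = 0) every configuration has weight 1,
# so Z is simply the number of states:
print(partition_function(0.0))  # 8.0
# By the global up/down symmetry the average single spin vanishes:
print(thermal_average(lambda s: s[0], beta=1.0))  # ~0 (up to rounding)
```

Of course this enumeration is exponential in N, which is exactly why, for a real many-body problem, one samples configurations instead (the cluster algorithms of the next lecture).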
Indeed, since the partition function is nothing but a sum over all possible configurations, these configurations very much look like a data structure; but there are also coefficients, the Boltzmann weights, so it is a bit strange, and we have to decide how to deal with that. I do not want to give you the full theory, which is long in terms of notation; instead, I would like to show you how we understand partition functions as data structures through the example of an Ising model with three spins, which I think is the only example that I can fully break down here. So let us forget about the full problem: we want to characterize the partition function of three spins. We will not be able to do this fully today, but what does it mean to characterize the partition function in terms of what is called a data structure? The idea is the following. Whenever we talk about a data structure, we are typically interested in defining two things: one is an embedding space, and the other is the space of the features we are interested in. Think about when you try to do machine learning on images, and for simplicity let us think about black-and-white images. What is an image? The data space, the embedding space of these images, is nothing but the space of all the pixels of the image, and in each pixel you can have either a zero or a one: zero stands for black and one stands for white. You can also choose grayscale, or RGB, and then instead of being binary each pixel carries more values, and so on and so forth; but that is what you would call your embedding space. Now, within this embedding space, the images that you collect, which can be faces, cats and dogs, are very complicated, but they define a manifold within this huge space, and this manifold is the feature space you are interested in.
Now, this was cats and dogs; we want to do the very same thing for partition functions. So we have to define our embedding space and our feature space. The embedding space is relatively easy. Our three-site Ising model: what is it? We can write down the lattice: these are our spins one, two, and three. And how many configurations exist for this model? The configuration set is very simple: there are just eight. You have the configuration where all three spins are plus one; then you can flip the first spin to minus one; and so on. What we call the embedding space is nothing but the space defined by these eight states. So far this is just a set expression, but it is typically very nice to also have a geometrical, visual interpretation of this space. Since these are nothing but three variables with values plus one and minus one, we can draw it as a cube. Let me use a different notation, otherwise it can be confusing: this is not the Ising lattice, this is our configuration space that we are drawing as a cube. What is the criterion for drawing it as a cube? The idea is that the bottom-left corner of our cube is the state (+1, +1, +1); if we move in the x direction, we flip the first spin to minus one; if we move in the y direction, we flip the second spin to minus one; you see how it goes. So the embedding space of our partition function (I will abuse notation here, so be very careful) is nothing but {+1, −1}³, three copies of the group Z₂, and you can see it as a cube. I have designed this example with three spins because, as you can imagine, already with four spins I would not know how to draw the embedding space: for a four-spin Ising model on a blackboard I would need another direction, a four-dimensional hypercube. You can imagine how difficult it is to get a simple graphical interpretation of what is going on for a full-fledged many-body problem.
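The cube picture can be written down in a few lines: the embedding space is the set {+1, −1}³, cube edges connect states that differ by a single spin flip, and the Hamming distance counts the flips between any two corners. The helper names below are illustrative choices.

```python
import itertools

# The embedding space of the 3-spin model: the 8 corners of a cube,
# i.e. the set {+1, -1}^3. Moving along one axis flips one spin.
embedding_space = list(itertools.product([+1, -1], repeat=3))
print(len(embedding_space))  # 8

def hamming(a, b):
    """Number of spin flips separating two configurations; cube edges
    connect states at Hamming distance 1."""
    return sum(x != y for x, y in zip(a, b))

print(hamming((+1, +1, +1), (-1, +1, +1)))  # 1  (one edge of the cube)
print(hamming((+1, +1, +1), (-1, -1, -1)))  # 3  (opposite corners)
```

For N spins the same construction gives {+1, −1}^N, an N-dimensional hypercube with 2^N corners, which is precisely why the space stops being drawable almost immediately.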
We just don't know how to visualize it. And this is exactly why we will need these unsupervised learning techniques to characterize it, because it is something that evades simple physical understanding. Then we have to define the feature manifold, okay? Our cats and dogs. How can we see cats and dogs within this embedding space? This is a bit more complicated, and I think it is a good point to pause, stop this lecture, and take questions if there are any. Feel free to ask.

Yes, I had a technical question. Not everyone has access to the Matrix rooms for now, so can you also put the information on the website, please? The lecture notes, for instance.

Yes, if you wish. I will also put them on the website, but I will put them there at the end of the course.

Okay. Excuse me, can I ask a question? So, are the spins assumed to be localized in this system?

Yes, the spins are localized; we will not be dealing with itinerant spins. To be frank, I think it is possible to extend some of the things that I will be telling you to itinerant systems, which are of course also very relevant, but I will not have time to discuss this.

Okay, thank you. Professor, I have a question. Could you elaborate on your comment on unsupervised learning and how it comes into play?

Yes, I can comment, and Alex will also discuss this connection. The methods that we will be using can be classified as forms of unsupervised learning, in the sense that we will characterize features of data sets without labeling, without training anything; we will just be looking at them. If you know something about unsupervised learning, the most naive, most simple method that falls into the class of things we are using is principal component analysis, if that says something to you. Thank you.
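Since principal component analysis was just named as the simplest method in this class, here is a hedged sketch of PCA via the covariance eigendecomposition, applied to a toy sample of three-spin configurations. The sampling model (two aligned states plus 5% flip noise) is an invented illustration, not the data set used in the course.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data set: 200 samples of three Ising spins, clustered (by assumption)
# around the two aligned states (+1,+1,+1) and (-1,-1,-1), with rare flips.
base = rng.choice([-1, 1], size=(200, 1))
X = np.repeat(base, 3, axis=1).astype(float)
X[rng.random(X.shape) < 0.05] *= -1  # small "thermal" noise

# PCA via the eigendecomposition of the sample covariance matrix:
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(X) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
explained = eigvals[::-1] / eigvals.sum()
print(explained)
```

Because the samples cluster around two antipodal corners of the cube, the first principal component dominates: the effective data structure is close to a line, not the full cube.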
Yes, we can calculate so many things from the partition function, but does this characterization have any physical meaning or not?

It is a very good question. It is not guaranteed that the stuff we are doing here, and what we will show you tomorrow, tells you anything which has physical meaning. It will only be at the very end of the last lecture that I will show you how this is actually related to response functions, which we do know have physical meaning.

Thank you. Could you repeat what you meant by characterizing the partition function?

Let me repeat it next time. Not tomorrow, on Friday, I will start the lecture with a brief wrap-up of what I told you in the last ten minutes, because I will need the definitions on this blackboard again. So I will reply to you more extensively directly at the beginning of the next lecture.

Sorry, if I may ask one short question. In the embedding space we have eight states. Are all of them needed? For example, if you want to encode just cats and dogs, you have two features; you don't need eight of them.

This question is so spot-on that I can reply to you right away. Suppose that we are sampling at very low temperature. It will be very clear that there are only two of these points which are really important. This implies that for the data space we are looking at, we do not need the full cube; we just need a line. Instead, if we are at very high temperature, we will need all the states to characterize our set. So there is a fundamental change in the data structure that is needed to characterize different phases, very much akin to the fact that the more complex the images you are trying to classify, cats and dogs, but then maybe also sub-species, or colors of the dogs, and so on and so forth, the more information you need.
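The low- versus high-temperature argument can be illustrated numerically: at high temperature all eight Boltzmann weights are equal, while at low temperature almost all the weight concentrates on the two aligned states. Again, the open three-spin chain and the coupling `J` are assumptions made only for this sketch.

```python
from itertools import product
from math import exp

configs = list(product([+1, -1], repeat=3))

def weights(beta, J=1.0):
    # Normalized Boltzmann weights for an assumed open three-site chain
    # with energy E = -J (s1 s2 + s2 s3).
    w = [exp(beta * J * (s[0] * s[1] + s[1] * s[2])) for s in configs]
    Z = sum(w)
    return [x / Z for x in w]

# High temperature (beta -> 0): all eight states matter equally.
print(weights(0.0))  # each weight = 1/8

# Low temperature (large beta): the weight concentrates on the two
# fully aligned states (+1,+1,+1) and (-1,-1,-1).
w_cold = weights(5.0)
print(w_cold[configs.index((1, 1, 1))] + w_cold[configs.index((-1, -1, -1))])
```

In the cold case the two aligned corners carry essentially all the probability, which is the quantitative version of "we just need a line, not the full cube".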
So we will see. Some of the things I will be telling you on Friday can be seen as a transition in the amount of information that is required to faithfully describe the state. Okay, I see. I think I have to give you a short break before the next lecture. Yes, I think we will take a short break. We will resume at a quarter past three Central European time with the first lecture by Dominica Boeti, and we will continue with March on Friday, right? Yes. Thank you all. Thank you very much. Bye. I'll assign you to breakout rooms, so maybe you want to chat or maybe you just want to disconnect, and then we will rejoin in less than ten minutes.