Okay, so I'm sure you liked Professor Carvel Blackchurch yesterday. So, Professor Blackchurch, what did you prepare for today? [Speaker:] Good, thank you. I hope you can hear me well, also virtually. As yesterday, I'll start with a small activity, because it's good to have a little brainstorming. This time I'm asking you to write down the most unexpected or surprising application of entropy you have ever seen, maybe in an unexpected field or an unexpected place. It can be in your field or in a distant field, it doesn't matter. So I'll again pass around these post-its, and we have three minutes to write down your most surprising application. Do we write our name on it? No, no, no names, just the answer. And for the virtual participants, you can write your answers in the chat. It might be something you've seen this week or last week, something you've seen during your research, something you've heard from colleagues or from friends. So if somebody has finished their post-it, you can come up and read it out; in the meantime we will look at the chat. [Participant:] I found in the theory of partial differential equations that a principle of maximum entropy is used to prove existence and uniqueness results. [Participant:] Mine is related: comparing the compressibility of data and information. [Participant:] Mine is from analysis: there is an entropy that captures the trade-off between the loss in dynamical resolution and the gain in statistical stability. Yes. In the meantime, I'll read some chat posts. Nor is saying that the most interesting application is the entropy of a black hole. This is definitely interesting. Su Yong is referring to entropic forces in chemistry. Yes. And Gilberto: entropy played a major role in modeling spike interactions of visual signals in lizard retinas. Very interesting. Dial is using entropy in complex systems and information theory. Good. Who else? I also have black hole entropy here. Yes. Okay. There is an analogue of the rate of entropy production in first-passage processes. Oh yes. Somebody else is still writing. These are some very nice ones. Do you want to read yours? [Participant:] There are a lot of applications and a lot of interesting developments of entropy in my field. Very interesting. Good. Thank you all, these are all very interesting applications. So I'll tell you mine that I came across: entropy-based adaptive sampling. What it means is that if you want to render a picture with shiny metal structures or objects, you use a ray-tracing algorithm: you specify your scene, you specify your source of light, and then you basically send rays at random and follow them to see where they reflect and where they land. And in this case the paper uses an entropy-based sampling of these source rays, because the smarter your choice of rays, the faster and better you get a nice image. So this is one application in image processing, or ray tracing, but I've seen many other applications: in natural language processing, in imaging, as was mentioned, and also applications to analyzing legal texts, so again language processing. There are so many applications of entropy; they are really everywhere. Good.
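Purely as an illustration of the idea, and not the method from the paper mentioned above: a minimal Python sketch of entropy-based adaptive sampling, where each pixel keeps a histogram of the samples it has received and extra rays are sent to the pixels whose sample distribution currently has the highest Shannon entropy. The helper `trace_ray`, the 8-bin histogram and all numbers are assumptions made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def trace_ray(pixel):
    """Stand-in for a real ray tracer: returns one noisy luminance sample.
    Some pixels (think shiny metal) have much higher variance than others."""
    base = pixel * 0.1
    noise = 1.0 if pixel % 7 == 0 else 0.05
    return base + noise * rng.standard_normal()

def shannon_entropy(samples, bins=8):
    """Shannon entropy (in nats) of the histogram of the samples seen so far."""
    hist, _ = np.histogram(samples, bins=bins, range=(-2.0, 6.0))
    if hist.sum() == 0:
        return 0.0
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

n_pixels, warmup, budget = 32, 16, 2000
samples = [[trace_ray(i) for _ in range(warmup)] for i in range(n_pixels)]

# Adaptive loop: spend the remaining ray budget where the entropy is largest.
for _ in range(budget):
    entropies = np.array([shannon_entropy(s) for s in samples])
    target = int(np.argmax(entropies))          # most "uncertain" pixel
    samples[target].append(trace_ray(target))   # send one more ray there

print("samples per pixel:", [len(s) for s in samples])  # noisy pixels get more rays
```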
So the title of this talk is "Let's Calculate Them". I was promising you yesterday that I will show you some entropies that we can calculate and that are different from the Boltzmann-Gibbs-Shannon entropy. All of this comes down to the fact that the Boltzmann entropy is equal to the Gibbs and Shannon entropy, up to the Boltzmann constant, when the multiplicity of the sample space is multinomial. So when the multiplicity is multinomial, we get the Shannon-Boltzmann-Gibbs entropy, but there are examples of systems where the multiplicity is different, and then you get different entropies. Maybe the simplest examples are Bose-Einstein, Fermi-Dirac and Maxwell-Boltzmann statistics. This is something you might already know from your statistical mechanics courses; most of you know what the Bose-Einstein and Fermi-Dirac distributions are. What one can show is that there is an entropy function corresponding to these distributions that can be calculated very easily. So let's consider Maxwell-Boltzmann statistics: we have n distinguishable particles, with n_i particles in a given energy state, let's say epsilon_i. Then the multiplicity, similar to what we did yesterday, can be calculated, and here is another way to do it: I put n_1 particles into the first category, from the rest I choose the ones for the second category, from the rest I choose the ones for the third, and so on. So I start with n particles and choose n_1, then from the remaining n minus n_1 I choose n_2, then from n minus n_1 minus n_2 I choose n_3, et cetera. If you play with this, you see that it is n! over the product of the n_i!, which is basically what we saw yesterday. Next, you can let the state epsilon_i have degeneracy g_i, meaning several states share the same energy, as can happen; then for each particle in state i there are g_i possibilities where to put it, so the multiplicity becomes n! times the product of g_i to the n_i over n_i!. And then we end up with the regular Shannon-Boltzmann-Gibbs entropy, where the g_i enters through the degeneracy. Okay, let's now think about Bose-Einstein statistics. We say the particles are indistinguishable, and we have n_i particles in state epsilon_i with degeneracy g_i. In short, I am using the stars-and-bars theorem, which tells us, if I have n_i particles and g_i boxes, how many ways there are of arranging them, so what the multiplicity of these states is. You can see the formula, and then, by the same procedure as yesterday, I take these combination numbers, express them in terms of factorials, take the logarithm, the logarithm of a product gives a sum of logarithms, and the logarithm of n! is approximately n log n minus n. I very simply end up with this formula for the entropy, where this alpha_i is g_i over n, the relative degeneracy. Yes? [Question:] Are we using an approximation here? [Answer:] Just the regular Stirling approximation. [Question:] Okay, because I feel that in Bose-Einstein statistics you always have to jump to the microcanonical ensemble in some way. [Answer:] Yes, this is microcanonical, because we are summing up over states; we don't consider any integration over energy states, so here it's microcanonical. So yes, that's exactly the case.
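A quick numerical sanity check of what was just said, as a sketch (the occupation numbers and degeneracies below are made up for illustration): the exact log-multiplicity for Maxwell-Boltzmann, W = n! prod_i g_i^{n_i}/n_i!, and for Bose-Einstein, W = prod_i C(n_i + g_i - 1, n_i), compared with the Stirling-approximated entropy expressions.

```python
import math

def log_binom(a, b):
    """log of the binomial coefficient C(a, b), exact via log-gamma."""
    return math.lgamma(a + 1) - math.lgamma(b + 1) - math.lgamma(a - b + 1)

# Made-up occupation numbers n_i and degeneracies g_i (illustration only).
n_i = [500, 300, 200]
g_i = [400, 250, 100]
n = sum(n_i)

# --- Maxwell-Boltzmann: W = n! * prod g_i^{n_i} / n_i! ---
logW_MB = math.lgamma(n + 1) + sum(ni * math.log(gi) - math.lgamma(ni + 1)
                                   for ni, gi in zip(n_i, g_i))
# Stirling form: S = -n * sum p_i (log p_i - log g_i), with p_i = n_i / n.
S_MB = -n * sum((ni / n) * (math.log(ni / n) - math.log(gi))
                for ni, gi in zip(n_i, g_i))

# --- Bose-Einstein: W = prod C(n_i + g_i - 1, n_i) ---
logW_BE = sum(log_binom(ni + gi - 1, ni) for ni, gi in zip(n_i, g_i))
# Stirling form: sum (n_i+g_i) log(n_i+g_i) - n_i log n_i - g_i log g_i.
S_BE = sum((ni + gi) * math.log(ni + gi) - ni * math.log(ni) - gi * math.log(gi)
           for ni, gi in zip(n_i, g_i))

# The exact and approximate values agree up to small Stirling corrections.
print(f"MB: exact log W = {logW_MB:.1f}, Stirling entropy = {S_MB:.1f}")
print(f"BE: exact log W = {logW_BE:.1f}, Stirling entropy = {S_BE:.1f}")
```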
Okay, and you see that it is kind of similar to the Boltzmann entropy: this term is the same as in the Boltzmann case, but there is this additional alpha_i, and also this constant term alpha_i log alpha_i, and this is the term that changes the whole statistics. Good. What about Fermi-Dirac statistics, where I have indistinguishable particles but at most one particle can be in each sub-level, which means that n_i must be smaller than or equal to g_i? Then, again by simple combinatorics, you see that in the combination number it is not n_i plus g_i minus one but at most g_i, because I can have at most one particle in each state, and this leads to the formula that the multiplicity is the product over i from 1 to k of these combination numbers. Then it is the same story: expressing the combination numbers in terms of factorials, taking the logarithm, which turns the product into a sum, and then again Stirling's formula, and we end up with an equation for the entropy very similar to the one on the previous slide, but with a few minus signs changed. So it really differs only in a few minuses, but this difference is very important, because the resulting difference between the Bose-Einstein distribution and the Fermi-Dirac distribution is very big. We will discuss it in the next talk, when we discuss maximum entropy distributions a little bit. Good, so these are examples you might already know from your courses on statistical physics. Now I move on to some maybe less well-known examples of entropies, of systems that are not described by the Boltzmann-Gibbs entropy. And we come back a little bit to yesterday: yesterday we had dice, and instead of dice I make it a little simpler here. I have coins, so we throw coins, and each coin has two states, heads and tails. If it were just like that, we would end up with a multinomial factor and with the Shannon entropy. But let's make a small change. Let's say that these coins are magnetic and can bond together: they stick together and create one bound state. This is something you can think of as atoms creating molecules, or particles creating clusters or polymers, or even people forming groups. It seems like a minor difference, but actually you can show that the state space grows faster than exponentially. What it means is that if you express the multiplicity in terms of the number of coins, you end up with something like n to the n, which is e to the n log n; normally, for plain coins, you would get something like 2 to the n. And now the question: is it very different, 2 to the n versus e to the n log n? The answer is yes, it is a big difference, because you will see that for large n the bound states will basically be the most common states, so the states where the particles are free are then very much suppressed. This picture is from a nice paper by Henrik Jensen and his colleagues, who started to play with this simple model and discovered that the thermodynamics of this model is really a bit more complicated than was thought before. Good. So what is the multiplicity and how do we calculate it? It is basically the number of microstates that correspond to one mesostate. So here we have a mesostate, let's say, for three coins: two heads and one tail.
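The same kind of sanity check for the Fermi-Dirac case, again with made-up numbers: the exact log-multiplicity of W = prod_i C(g_i, n_i) against the Stirling form, which is the Bose-Einstein expression with a few signs flipped.

```python
import math

def log_binom(a, b):
    """log of the binomial coefficient C(a, b), exact via log-gamma."""
    return math.lgamma(a + 1) - math.lgamma(b + 1) - math.lgamma(a - b + 1)

# Made-up occupations and degeneracies, with n_i <= g_i (at most one particle per sub-level).
n_i = [300, 150, 80]
g_i = [500, 400, 200]

# Exact: W = prod_i C(g_i, n_i)
logW_FD = sum(log_binom(gi, ni) for ni, gi in zip(n_i, g_i))

# Stirling form: sum g_i log g_i - n_i log n_i - (g_i - n_i) log(g_i - n_i)
S_FD = sum(gi * math.log(gi) - ni * math.log(ni) - (gi - ni) * math.log(gi - ni)
           for ni, gi in zip(n_i, g_i))

print(f"FD: exact log W = {logW_FD:.1f}, Stirling entropy = {S_FD:.1f}")
```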
And we know that the microstates can be heads-tails-heads, tails-heads-heads, or heads-heads-tails, so the multiplicity of this mesostate is three. Similarly, if we think of one coin showing heads and the two other coins being bound, it is easy to see that the multiplicity is three again: the first coin can be heads with the second and third bound, or the second coin can be heads with the first and third bound, or the third coin can be heads with the first and second bound. Now, how do we calculate the multiplicity? For this case we will again permute all the particles, all the coins, but we have to take care of overcounting. If these are all six permutations of the three coins, we see that the permutations that only swap the two heads give the same microstate. So what we do is take three factorial, the number of all permutations, and divide by two factorial to take care of the permutations of the two heads. Similarly, for the case of one head and one bound state, the permutations 1,2,3 and 1,3,2, which only swap the two coins inside the bound pair, give the same state. So in both cases, by coincidence, it is three factorial over two factorial, which is three. Now, how to do it in general? Let n_ij be the number of what I will now call molecules of size j that are in a given state i. We do all the permutations, which gives us n factorial. Then, first, we have to take care of all the permutations of the molecules themselves: here, for molecules of size three, that is three factorial, six permutations. And then we also have to take care of the permutations of the particles within the molecules: for each molecule we have three factorial, because it is a molecule of size three, and we have three molecules, so it is three factorial to the third power. [Question:] So I should think of these balls as having labels? [Answer:] Yes, exactly. You could say this is the first molecule, the second molecule, the third molecule, but actually it makes no difference which one I call first, second or third. And the same within a molecule: I could label the particles in the molecule as first, second, third, but it doesn't matter, so I really have to take care of this overcounting. Here you see that the total multiplicity is given by nine factorial over three factorial times three factorial to the third power, which is 280. The general formula is that you take n factorial over the product of the n_ij factorials, which is what we already saw in the multinomial factor, but there is an additional factor of j factorial, where j is the size of the molecule, raised to the power n_ij, and n_ij is the number of these molecules. From now on I will display in red the difference between what we have for the multinomial factor and the usual entropy, and what we have in this case. So we do the same story. Before that, I have to say that this seems to be quite new, although it was already discussed by Boltzmann. Let me switch off these titles; can I switch them off? Yes, I just moved them. Here, Boltzmann, in his 1884 paper, which is unfortunately in German, is interested in chemical reactions of chlorine and hydrogen reacting to form hydrogen chloride.
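As a sketch, we can brute-force that 280 by enumerating all ways of splitting nine labeled particles into three unordered "molecules" of size three, and compare it with the general formula n!/prod_ij(n_ij! (j!)^{n_ij}) evaluated for arbitrary made-up molecule counts.

```python
import math
from itertools import combinations

def n_partitions_into_triples(particles):
    """Count the ways to split labeled particles into unordered groups of 3."""
    if not particles:
        return 1
    total = 0
    # The group containing particles[0] is fixed by choosing its two partners;
    # this avoids counting the same grouping in different orders.
    for pair in combinations(particles[1:], 2):
        rest = [p for p in particles[1:] if p not in pair]
        total += n_partitions_into_triples(rest)
    return total

print(n_partitions_into_triples(list(range(9))))                           # 280 by enumeration
print(math.factorial(9) // (math.factorial(3) * math.factorial(3) ** 3))   # 280 from the formula

def multiplicity(n_ij):
    """General formula W = n! / prod_{i,j} ( n_ij! * (j!)^{n_ij} ),
    where n_ij[(i, j)] is the number of molecules of size j in state i."""
    n = sum(j * count for (_, j), count in n_ij.items())
    denom = 1
    for (_, j), count in n_ij.items():
        denom *= math.factorial(count) * math.factorial(j) ** count
    return math.factorial(n) // denom

# Example: three molecules of size 3, all in the same state -> 280 again.
print(multiplicity({("bound", 3): 3}))
```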
And basically this is the multiplicity; it is in German, but it is exactly the formula I showed you before. So he was already discussing this structure-forming entropy. Later it was realized, by Gibbs, that you can treat this with the grand canonical ensemble, which is usually easier, and so this approach was somehow forgotten. But if you are really interested in the exact numbers of molecules, especially for small systems, or if you don't have many molecules, this is the formula that applies. So, taking the formula and doing exactly the same thing as before, using the logarithm to change the product into a sum and using Stirling's formula, we get that the log of the multiplicity, the entropy, is equal to n log n minus n, plus the terms with the n_ij, plus this red term, which is n_ij log j factorial. Note that the n_ij do not sum up to n; we'll discuss that in a moment. Now we introduce probabilities p_ij, and I put "probabilities" in quotation marks, because if you think about it they do not sum up to 1. Why? Because n_ij is a number of molecules, and if you sum up the numbers of molecules you do not get the number of particles: for each molecule you have to multiply by the number of particles in the molecule to get the total number of particles. So the sum of the n_ij, which is the total number of molecules, is always smaller than or equal to the number of particles. We can still write the entropy in terms of these probabilities, which is a bit easier to work with, but they do not sum up to 1. The first consequence is that the minus 1 after the log p remains there; it is not cancelled, because of this lack of normalization. And then we have this additional term that depends on p_ij and on a constant that depends on the size of the molecule. We can then also consider a finite interaction range: if not every particle can interact with every other particle, but, let's say, we have b boxes, and within a box the particles can interact with each other but not between boxes, then we get an effective concentration c, which plays the role of the number of molecules per box. So that's another example, and tomorrow I will show what the consequences are for thermodynamics, but you see that this formula is again kind of similar to the Boltzmann-Gibbs-Shannon formula, but not exactly the same. Do you have any questions about this example?
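Again a small numerical sketch with made-up molecule counts: the exact log of W = n!/prod(n_ij! (j!)^{n_ij}) compared with the Stirling expression n log n - n - sum(n_ij log n_ij - n_ij) - sum n_ij log j!, plus a check that the "probabilities" p_ij = n_ij/n indeed do not sum to 1, while sum j p_ij does.

```python
import math

# Made-up mesostate: n_ij molecules of size j (one internal state per size here).
n_ij = {1: 400, 2: 150, 3: 60}                 # 400 free particles, 150 dimers, 60 trimers
n = sum(j * nij for j, nij in n_ij.items())    # total number of particles

# Exact log-multiplicity: log n! - sum_j [ log n_ij! + n_ij * log j! ]
logW = math.lgamma(n + 1) - sum(math.lgamma(nij + 1) + nij * math.log(math.factorial(j))
                                for j, nij in n_ij.items())

# Stirling version, as on the slide:
# n log n - n - sum (n_ij log n_ij - n_ij) - sum n_ij log j!
S = (n * math.log(n) - n
     - sum(nij * math.log(nij) - nij for nij in n_ij.values())
     - sum(nij * math.log(math.factorial(j)) for j, nij in n_ij.items()))

p = {j: nij / n for j, nij in n_ij.items()}
print(f"exact log W = {logW:.1f}, Stirling = {S:.1f}")
print(f"sum of p_ij     = {sum(p.values()):.3f}  (smaller than 1)")
print(f"sum of j * p_ij = {sum(j * pj for j, pj in p.items()):.3f}  (equal to 1)")
```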
Then I have another example. This system is called the sample space reducing process; it was introduced by my colleagues Bernat Corominas-Murtra, Rudi Hanel and Stefan Thurner. The idea is quite simple. Let's have a staircase and a ball, and the ball can only jump down the staircase. Once I go from, say, state nine to state five, I cannot go back; I have to come down to the ground state, and only from the ground state is the ball driven back up to a random state. So there is a relaxation part, where you go down, and a driving part, where you go up, and these two are not symmetric. This system is interesting because it leads to the so-called Zipf's law: the visiting probability of state i is approximately 1 over i. Another interesting part is the entropy of this system. So let's think about how we calculate the multiplicity. Consider a sequence where we go down and end in the ground state: we start in a random state, do a few steps down, and end in the ground state; this is one sequence for us. For each sequence we calculate its multiplicity. How we do that: let n be the number of states, so the states go from x_n down to x_1, and let's say we sample r relaxation sequences. We can visualize it as follows: this is the first sequence, and if there is a star, the ball visits that state; if there is a blank, it does not. What we can see is that any state can, but does not have to, appear in a sequence, except for the ground state: the ball must always end up in the ground state. So it is something, something, something, ground state; something, something, something, ground state; and so on. The question is: in how many ways can the state x_j appear exactly k_j times across these sequences? Since the number of runs is equal to k_1, the number of times the ball appears in the ground state, the multiplicity for state x_j is the combination number k_1 choose k_j: it must appear k_j times, and it can appear in any k_j of the k_1 runs. The total multiplicity is then the product over the states of these combination numbers, the product of k_1 choose k_j. Then again the same trick: we apply the logarithm, which turns the product into a sum, and use Stirling's formula. Here we see that some terms change a little bit. Then we add and subtract this k_j log k_1, because it makes the formula nicer, and we introduce again probabilities p_i, which are k_i over n, where n is the total number of steps. We get a formula with terms of the form p_i log of p_i over p_1, and terms with p_1 minus p_i. If we go back, this is kind of similar to the Fermi-Dirac case, where instead of p_1 there was this alpha_i, the degeneracy term; if you plug in the corresponding multiplicity you get a very similar formula. This is very interesting, because this entropy is, for each state, somewhat entangled with the ground state; so when we do the maximum entropy principle, each state will be somewhat entangled with the p_1 state. So this is yet another formula, describing the multiplicity of a process that is history dependent: it is not just about the state itself, it is about the history. Do you have some questions? If not, let me have a look at the chat. It doesn't seem so. Shall we go on? Yes.
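A minimal simulation sketch of the staircase process to see the Zipf-like visiting statistics; the number of states, the number of samples, and the uniform restart over all states are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states = 100                                # states 1..n, state 1 is the ground state
visits = np.zeros(n_states + 1)

for _ in range(200_000):                      # many relaxation sequences
    state = rng.integers(1, n_states + 1)     # driving: restart in a random state
    visits[state] += 1
    while state > 1:                          # relaxation: only downward jumps,
        state = rng.integers(1, state)        # uniformly to any lower state
        visits[state] += 1

harmonic = sum(1.0 / i for i in range(1, n_states + 1))
p = visits[1:] / visits[1:].sum()
# Zipf's law: the visiting probability of state i goes like 1/i.
for i in (1, 2, 5, 10, 50, 100):
    print(f"state {i:3d}: simulated p = {p[i - 1]:.4f},  (1/i)/H_n = {1 / (i * harmonic):.4f}")
```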
So, another example: the Pólya urn. Here I have to say that I was corrected quite insistently by my friends that the name of this mathematician is pronounced "Pólya". Just so you know. The Pólya urn is a simple system: you have a box, an urn, with balls of different colors, and each time you choose one ball from the urn at random, look at its color, and for the next round you put back delta additional balls of the same color; then you draw again, and again put back delta balls of the same color. It means that the total number of balls increases, but also that the ratios of the colors change during the run, and we are interested in the limit of very long runs, the infinite-time limit, where the number of balls goes to infinity. The question is what happens with the ratios of the colors: does one color win, or is there some stable ratio of the colors? Again, this multiplicity was discussed in a paper. So let's think about the probability of a sequence. Now I am not talking about the multiplicity of the sequence but about its probability; the probability is just the multiplicity divided by the total number of possibilities. I do it this way because it is more convenient, but it is basically the same situation, and I will not have to divide by n in the final result. So let's say we have n_i balls of color c_i, and we have c different colors, red, green, blue, whatever; after a ball is drawn you return delta balls of the same color to the urn, and after some draws it is easy to see that the number of balls of color c_i in the urn is n_i plus delta times k_i, where k_i is the number of draws of color c_i so far. Of course the total number of balls is also growing: it is the initial number plus delta times the total number of draws. So the probability of drawing color c_i in a given round is the current number of balls of that color over the current total number of balls, and the probability of the sequence is a product over all the colors, where I introduce the notation that m to the (delta, r) is the rising factorial m times (m plus delta) times (m plus 2 delta) and so on. This makes sense, because once I draw a color, the next time I draw that color I draw it from a number of balls that has increased by delta: the first time from n_i, the second time from n_i plus delta, the third time from n_i plus 2 delta, et cetera. Okay, so this is the probability of drawing a particular sequence, but a sequence is not a histogram: the histogram only tells you how many times you drew each color. You can think of it like this: the histogram is related to the sequence by the regular multinomial factor, as before. The trick is that now we also have the probability of the sequence. Before, each sequence had the same prior probability of being chosen; for example, with dice, the prior probability of any sequence is the same. Here the probabilities of the sequences are changed, and this changes the entropy. So I can decompose the multiplicity into the multiplicity of the histogram given the sequence, times the probability of seeing that sequence. If the probability of seeing the sequence is uniform, I can forget about it, because it is just a constant; if it is not constant, it gives me this extra factor. And here you can see what that factor is; I don't want to go into the technical details.
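A small sketch checking the sequence-probability formula by brute force: for a tiny urn, the probability of a specific draw sequence computed from the rising factorials is compared with a Monte Carlo estimate. The initial counts, delta, and the particular sequence are made-up numbers.

```python
import numpy as np

rng = np.random.default_rng(2)

def rising(m, delta, r):
    """m * (m + delta) * (m + 2*delta) * ... , r factors in total."""
    out = 1.0
    for s in range(r):
        out *= m + s * delta
    return out

def sequence_probability(seq, counts, delta):
    """Probability of drawing exactly this color sequence from a Polya urn."""
    k = {c: seq.count(c) for c in counts}                 # histogram of the sequence
    num = np.prod([rising(counts[c], delta, k[c]) for c in counts])
    den = rising(sum(counts.values()), delta, len(seq))   # total grows by delta each draw
    return num / den

counts = {"red": 2, "blue": 1}       # initial urn content (made up)
delta, seq = 3, ("red", "blue", "red")

# Monte Carlo: simulate the urn many times and count how often exactly `seq` occurs.
hits, trials = 0, 200_000
for _ in range(trials):
    urn = dict(counts)
    drawn = []
    for _ in range(len(seq)):
        colors, weights = zip(*urn.items())
        c = rng.choice(colors, p=np.array(weights) / sum(weights))
        drawn.append(c)
        urn[c] += delta                      # put back delta extra balls of that color
    hits += tuple(drawn) == seq

print(f"formula:     {sequence_probability(seq, counts, delta):.4f}")
print(f"simulation:  {hits / trials:.4f}")
```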
What you see is that this probability involves k_j factorial times some other terms, so the k_j factorial coming from the multinomial factor nicely cancels with the k_j factorial coming from the probability of seeing that sequence. Then, if you do the full calculation, which I don't want to go through in detail, you end up with an entropy that is a sum of log p_i plus 1 over n terms. So you see that it is not p log p but just log p, and although it looks similar to the usual entropy, it has very different consequences, as we will see next time, for example for the maximum entropy distribution. Are there any questions? [Question:] Is there any real system that realizes this? Like a physical system? [Answer:] I am not aware of any real physical system; you would need to have this kind of control of the particle influx. It is more a system you can think about than any real physical system, I would say. But a good question. Maybe I will look at the chat. No, nothing. So then the last example, I think, because this is something that has been discussed quite a lot in our community and by other people: what are called generalized entropies. This is rather a theoretical example; I don't have a simple example of a system. What I will do is, let's say, a theoretical approach, where I try to deform the multinomial factor, but it can be useful for describing what happens if there are correlations in the sample space. The motivation can be the following: the exponential function can be defined as the limit of (1 + x/n) to the n, and let's say that for some reason we don't want to take the limit, but we consider a finite version of this exponential and logarithm; people call it the q-exponential. They don't use the n, the number of terms in the sequence, but a parameter q, and they define the q-exponential as (1 + (1 minus q) x) to the power 1 over (1 minus q). In the case where you send q to 1, you get the ordinary exponential function. Then you can define the inverse function, the q-logarithm, which is (x to the (1 minus q) minus 1) over (1 minus q). And then we can look for operations that have the same properties as for the exponential: we know that a product of exponentials is the exponential of the sum, so is there an operation such that this generalized product of q-exponentials is the q-exponential of the sum? It turns out that you can find such an operation; it is called the q-product, and it looks like this: (a to the (1 minus q) plus b to the (1 minus q) minus 1) to the power 1 over (1 minus q). In all these cases, when you send q to 1, you typically need l'Hôpital's rule, and you recover the regular product.
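A short sketch of these q-deformed functions, checking numerically that the q-logarithm inverts the q-exponential, that the q-product of q-exponentials is the q-exponential of the sum, and that everything reduces to the ordinary functions as q goes to 1.

```python
import numpy as np

def exp_q(x, q):
    """q-exponential: [1 + (1-q) x]^(1/(1-q)); ordinary exp in the limit q -> 1."""
    if np.isclose(q, 1.0):
        return np.exp(x)
    return (1.0 + (1.0 - q) * x) ** (1.0 / (1.0 - q))

def log_q(x, q):
    """q-logarithm: (x^(1-q) - 1)/(1-q); inverse of exp_q."""
    if np.isclose(q, 1.0):
        return np.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def prod_q(a, b, q):
    """q-product: [a^(1-q) + b^(1-q) - 1]^(1/(1-q))."""
    if np.isclose(q, 1.0):
        return a * b
    return (a ** (1.0 - q) + b ** (1.0 - q) - 1.0) ** (1.0 / (1.0 - q))

q, x, y = 0.7, 0.8, 1.3
print(log_q(exp_q(x, q), q))                    # ~0.8: log_q inverts exp_q
print(prod_q(exp_q(x, q), exp_q(y, q), q))      # q-product of q-exponentials ...
print(exp_q(x + y, q))                          # ... equals exp_q of the sum
print(exp_q(x, 0.999999), np.exp(x))            # q -> 1 recovers the ordinary exp
```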
So what we can do now is the following. I have n factorial, which is 1 times 2 times 3 and so on up to n, and we saw that it comes from combinatorics: at the beginning you have n possibilities, then n minus 1 possibilities, then n minus 2, et cetera. What was proposed, this was done by Hiroki Suyari from Japan, is that you can generalize this by defining the q-factorial of n as 1 q-product 2 q-product 3, and so on, up to n. This doesn't look so nice, of course, but it is good to show that the q-logarithm of the q-factorial of n is the sum over k of k to the (1 minus q), minus n, all over (1 minus q). If you think about it, this is a nice generalization, because in the limit q to 1 each term becomes log k, and the sum gives you n log n minus n, so it can play the role of Stirling's approximation in a q-deformed Stirling formula for the q-factorial. Then we can generalize the multinomial factor: let's replace all the factorials with q-factorials, because we liked them for some reason. And if you calculate it, it starts making some sense, because you get these sums of l to the (1 minus q) terms, and one way to think about it is that maybe not all the combinations in the state space are possible; maybe some of them are forbidden for reasons that are not clear at this point. So maybe this is how I can describe systems where some combinations are not possible. And then what I get is that the multiplicity is given by this q-multinomial factor, and in this case we can again use the Boltzmann formula with the ordinary logarithm, but it seems more natural to use the q-logarithm, because we have these nice properties. It is then not directly comparable, but what you get is the so-called Tsallis entropy, which is the sum over i of p_i to the (2 minus q), minus 1, over (q minus 1); sometimes there is this duality between q and 2 minus q, which is another debate. What is interesting is that this Tsallis entropy has a prefactor of n to the (2 minus q): in all the previous cases the entropy was n times something in the probabilities, here it is n to the (2 minus q) times something in the probabilities, which is called non-extensivity. It means that the entropy does not grow linearly with system size but as n to the (2 minus q). This is something we will discuss later, and whether it is useful or not we will see. So, up to now, is there a question or something? [Question from online:] What is the physical meaning of this q-deformation? [Answer:] In my opinion there is no clear, simple physical meaning, but people use this Tsallis entropy in many applications because of correlated systems; I will discuss that in the other talks. Basically, what you can show is that if there are certain intrinsic correlations in the system, then it leads naturally to Tsallis entropy. Unfortunately, this approach where you calculate the entropy from the multiplicity is not very intuitive here, so I am sorry, but I cannot give you a simple example. There is more from the chat: Gilberto commented something that sounds interesting, that this transformation, which maps a non-Gaussian random variable into a normally distributed, Gaussian one, might be the Box-Cox transformation. It might be the case; I would need to look at it a bit more carefully, but it might be interesting. Yes, thank you, Gilberto. All right, and there was another question. [Question:] Sorry, on the first slide on the q-deformation, the very last equation, the q-product: where did this expression come from? [Answer:] Basically by requiring, by looking for, the operation that fulfills the property one line above. Good. So, up to now there was no thermodynamics; that was just counting the states. What we are interested in, in connection with thermodynamics, is the relation to energy. Let me check if there is something in the chat. Good.
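One more sketch: building the q-factorial from the q-product and checking the closed form ln_q(n!_q) = (sum_k k^{1-q} - n)/(1 - q), plus the q to 1 limit, where the q-factorial becomes the ordinary factorial. The choices n = 12 and q = 0.5 are arbitrary.

```python
import math
from functools import reduce

def log_q(x, q):
    return math.log(x) if abs(q - 1) < 1e-12 else (x ** (1 - q) - 1) / (1 - q)

def prod_q(a, b, q):
    if abs(q - 1) < 1e-12:
        return a * b
    return (a ** (1 - q) + b ** (1 - q) - 1) ** (1 / (1 - q))

def factorial_q(n, q):
    """q-factorial: 1 (x)_q 2 (x)_q ... (x)_q n, built from the q-product."""
    return reduce(lambda acc, k: prod_q(acc, k, q), range(2, n + 1), 1.0)

n, q = 12, 0.5
lhs = log_q(factorial_q(n, q), q)
rhs = (sum(k ** (1 - q) for k in range(1, n + 1)) - n) / (1 - q)
print(f"log_q of the q-factorial:      {lhs:.6f}")
print(f"(sum k^(1-q) - n) / (1 - q):   {rhs:.6f}")       # same number

# In the limit q -> 1 the q-factorial becomes the ordinary factorial:
print(factorial_q(6, 1.0), math.factorial(6))            # 720.0 720
```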
So states have energy, and this gives us some intuition about how the energy is dissipated, or exchanged, between the bath and the system. We consider that the states determine the energy of the system, which can be given either by a Hamiltonian or by a more general energy functional: in the Hamiltonian case we need canonical coordinates, but for systems described in other ways it can be a general energy function, just a function that assigns a number to every state. And then we use the Boltzmann formula, so the entropy is basically the logarithm of the multiplicity, where the multiplicity is the number of states having a given energy. As you know, there are a few situations, which we call ensembles. The first is an isolated system, which we call the microcanonical ensemble: we have just the system and nothing else, so the total energy of the system is constant, and the multiplicity W(E) is the number of microstates having this energy. You might know that this is connected with phenomena like negative temperature: in some systems, for example spin systems, in the microcanonical ensemble you can observe negative temperatures. I don't want to go into too many details here, because for most applications it is more interesting to have the canonical ensemble. That means a closed system: the particles of the system are, for example, in a box, and you have a system of interest and a bath. We don't care about the bath, we care only about the energy of the system, so what we do is integrate, or sum, over all the states of the bath, and we make the assumption that the bath is much larger than the system, so the bath is effectively always in equilibrium. In other words, changing the state of the system does not change the state of the bath, so they are independent. Of course, they must be weakly coupled: I must be able to write the total energy as the sum of the two energies. If they are strongly coupled, the situation is different and much more complicated; I will not go into the details, but there is also a theory for systems strongly coupled to a bath. So the total energy is a sum over the system states and the bath states, with a delta function between the total Hamiltonian and the total energy. And then there is yet another approach, the open system or grand canonical ensemble, which means that the system can also exchange particles with the bath. There is typically some confusion between what is called an isolated system, a closed system and an open system; these are my definitions, but some people define them differently, so if you read other papers you may see other definitions. Now I am interested mostly in the canonical ensemble, because I want to show you what the entropy is then, and it basically means that I calculate this multiplicity. What I can do is say: since the total energy is constant, and the total energy is the energy of the bath plus the energy of the system, let's consider that the system has some energy E_S and the bath has the rest of the energy.
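Going back for a second to the negative-temperature remark in the microcanonical part, a tiny sketch for a system of N two-level spins: the entropy is the log of the binomial multiplicity W(k) = C(N, k) for k excited spins, and the inverse temperature 1/T = dS/dE changes sign once more than half of the spins are excited. The level spacing and k_B are set to 1 for illustration.

```python
import math

N = 1000                      # number of two-level spins, level spacing = 1, k_B = 1

def S(k):
    """Microcanonical entropy: log of the number of microstates with k excited spins."""
    return math.lgamma(N + 1) - math.lgamma(k + 1) - math.lgamma(N - k + 1)

# beta = dS/dE, approximated by S(k+1) - S(k), since the energy grows by 1 per excited spin.
for k in (100, 300, 600, 900):
    beta = S(k + 1) - S(k)
    T = 1.0 / beta if beta != 0 else float("inf")
    print(f"k = {k:4d}  (E = {k}):  beta = {beta:+.3f},  T = {T:+.2f}")
# Below half filling T > 0; above half filling beta < 0, i.e. negative temperature.
```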
And I say I integrate over all possible system energies. Because of the weak interaction, I can split the delta function into a delta function for the system and a delta function for the bath, which is useful because the double sum over the states of the system and the bath can be decomposed into a sum over system states times a sum over bath states: this part is just about the system and this part is just about the bath. So the total multiplicity is the multiplicity of the system having energy E_S times the multiplicity of the bath having energy E_tot minus E_S, and I have to integrate over E_S. This is typically very hard to calculate, so what people do is consider only the dominant contribution to the integrand: they find where this term is maximal. So you take the partial derivative with respect to E_S and set it to zero, which leads to this equation, and if you think about it, this is the derivative of the entropy, the log of the multiplicity, with respect to E_S, and this is nothing else than the inverse temperature. This is why the temperature of the bath and of the system are the same in equilibrium: equilibrium is actually the state that maximizes this contribution. And then this second term, the log of this W_B, is the free entropy, because it is the entropy minus one over T times the energy. This is called the free entropy; sometimes you also hear the name Massieu function. And this is where the maximum entropy principle emerges. We can also see it from the point of view of coarse-graining: we have a system and a much larger bath, and in principle we have a deterministic picture; we can say, for example, that I have a Hamiltonian system, so the total evolution is Hamiltonian. However, I am either not interested in the bath's evolution, or I cannot describe it, or it is simply too large. In that case I can do this coarse-graining, which is what we did with this approach, and I end up with a statistical picture where the system of interest is coupled to a bath that we basically model by its temperature. And that's it for today. Again I ask you: do you have some questions, or is there something that was interesting to you, surprising, something you didn't know, or something you don't agree with? Yes? [A question about the sample space reducing process.] So yes, in this case it is really just a toy model of a ball going down the stairs, but more generally it can be a good model for systems that do not satisfy detailed balance; I will come to that later. Basically, detailed balance tells us that there is a kind of symmetry between relaxation and driving, and in some systems this symmetry is broken; this can be an effective theory for describing those systems. Here I really wanted a very simple example that you can imagine, so in this case it doesn't represent any real physical system; it is really just a toy model. Here I have something from the chat. Okay, thank you very much. Any other questions? If not, I think we finish a bit earlier than planned, but I think you don't mind. So enjoy your evening, and see you tomorrow.