Hello everybody. To contribute to these links with geometry, quantum gravity and so on, I will talk about some links between category theory and cohomology theories and probability, in particular information theory. This is the plan: I will first present the characters, namely what cohomology is and what information is; then, in the part that will be trivial for you, I will talk about observables and probabilities; and at the end I will try to explain how all of this is related to entropy.

In geometry we like this thing called cohomology because it allows us to distinguish the shape of things. You can distinguish a sphere from a torus because there is a hole inside, and there is a mathematical, algebraic way of saying that there is a hole there: basically, the cycles a and b drawn there are not the boundary of anything. They are cycles that are not boundaries, so they are non-trivial cycles. The notion of shape in geometry is thus related to these cohomology theories, and they are stable under deformation: we can change the structure, say transform the sphere into an ellipsoid, but in terms of cohomology we have the same thing. So now we will try to define a cohomology for information theory; that is the purpose.

I also have to introduce what information theory is. The idea is that Shannon introduced in 1948 a way of measuring how much information there is in a random source. You think of a source X that can transmit n different messages, and you want to measure how much information is there. You arrive at this expression: you sum over all the outcomes the probability times the log of the probability,
$$ H(X) = -\sum_{i=1}^{n} p_i \log p_i . $$
Why? If you know the output in advance, there is no information. The idea here is communication: if you know the output, you don't have to transmit anything, you can just hard-code the message; there is nothing to transmit. But on the other side, if you are equally undecided between all the possible outputs, you have to design your communication device to respond equally well to every output. So the idea, in the end, is that information measures how undecided you are between the outputs, and that is also related to optimal ways of coding, etc.
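A minimal sketch (mine, not from the talk) of Shannon's formula in Python, checking the two extreme cases just described: a known output and a maximally undecided one.

```python
import math

def shannon_entropy(probs, base=2):
    """H(X) = -sum_i p_i * log(p_i); outcomes with p_i = 0 contribute nothing."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# If the output is known in advance, there is no information to transmit:
print(shannon_entropy([1.0, 0.0, 0.0]))  # 0.0 bits
# If we are equally undecided between all outputs, information is maximal:
print(shannon_entropy([1/3, 1/3, 1/3]))  # log2(3), about 1.585 bits
```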
The idea now is to put our problem in a setting that admits some algebraic approach. We will consider a family of possible observables; you can think of these as the possible sources of information or, in a physical system, as the possible things that you can measure. We will put an arrow between two observables if the observable X is a refinement of the observable Y, in the sense that the σ-algebra defined by X is finer than the σ-algebra defined by Y. If you do that, you can start making graphs like this one: at the top you have a constant random variable, and below it you have random variables that can each distinguish one point. By X₁ I denote a variable that takes a certain value on the point 1 and a different value on the complement. What matters is the σ-algebra that this variable defines, and I do not distinguish between two variables that define the same σ-algebra; so in the finite case observables are just partitions. Here I have three partitions (the partition that distinguishes 1, the one that distinguishes 2, the one that distinguishes 3), and immediately below them is the atomic partition. This is a very poor example, but easy to understand. I could also remove an observable: for example, maybe I am not able to perform the observation X₂. You can take a physical example from quantum mechanics: the observables related to angular momentum, Lx, Ly, Lz, cannot be measured two at the same time because they do not commute, but there is a fourth observable (the total angular momentum) that is compatible with Lx, Ly and Lz. In the end you can organize in a graph like this all the refinement relations between your observables.

Once you have done that, we put a name on it: an information structure. An information structure will be, say, a category that organizes all this data. As objects of the category (the vertices of the graph) you have the observables, and as arrows you have the refinement relations. We will also assume that each time a variable X is a refinement of two other variables Y and Z, you can make the construction called the joint variable. The joint variable is what everybody knows; at the level of partitions, YZ is the coarsest partition that is finer than the partition of Y and the partition of Z. We will see this as a product, and in fact it is the categorical product. Those are the assumptions we make on these structures. So we have a category: you have to add compositions of arrows and identities, but essentially that is all.

Now remember what entropy is: the entropy of X is a certain function of the probabilities on the partition X, so the natural space where it lives is the space of functions from probabilities on X to the real numbers. In a certain sense it is implicit in information theory that we do not want to fix one probability; we want to play with families of possible probabilities that we could define on this partition. I will call this family of probabilities Q_X. So on each X you have a σ-algebra σ_X, and you consider a certain family Q_X, which we assume is parametrized in some way. Then, each time you have a refinement, I assume there is a surjective map from Q_X to Q_Y. A trivial example: take the situation from before, Ω = {1, 2, 3}, with the partitions that discriminate just one point and the atomic partition. The set of all possible laws on the atomic partition is the simplex Δ², the triples of non-negative numbers that sum to 1; and if you have a probability on the atomic partition, you can always transform it into a probability on a coarser partition by just summing two of the components.
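A small sketch of these ingredients for Ω = {1, 2, 3} (the encoding is mine, not the talk's): partitions as sets of blocks, the refinement relation, the joint variable as blockwise intersection (the categorical product), and the surjection from laws on the atomic partition to laws on a coarser one.

```python
from itertools import product

# Partitions of Omega = {1, 2, 3}, encoded as sets of blocks (frozensets).
X1 = {frozenset({1}), frozenset({2, 3})}  # the partition that distinguishes 1
X2 = {frozenset({2}), frozenset({1, 3})}  # the partition that distinguishes 2

def refines(X, Y):
    """X refines Y iff every block of X is contained in some block of Y."""
    return all(any(b <= c for c in Y) for b in X)

def join(X, Y):
    """Joint variable XY: the coarsest partition finer than both X and Y."""
    return {b & c for b, c in product(X, Y) if b & c}

def marginalize(p, X):
    """The surjection Q_atomic -> Q_X: sum the law p over each block of X."""
    return {block: sum(p[w] for w in block) for block in X}

XY = join(X1, X2)  # the atomic partition {{1}, {2}, {3}}
assert refines(XY, X1) and refines(XY, X2)
print(marginalize({1: 0.5, 2: 0.3, 3: 0.2}, X1))  # block {2, 3} gets 0.5
```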
Now we define the last ingredient, the functional module. You have your category of variables, and to each variable you associate a certain set of probabilities; now I want to consider all the possible functions that take probabilities and give real numbers. In particular, the entropy of X lives here, so this is where I define it. There is also a natural action of conditioning that you can define here: you have a certain function f defined on the probabilities on X, and you can condition it (this was introduced by Shannon too) to obtain a new function; schematically,
$$ (Y.f)(P) \;=\; \sum_{y} P(Y = y)\, f(P \mid Y = y), $$
where P | Y = y denotes the conditioned law. One of the nice features is that all of this extends quite naturally to the quantum case. Instead of a set Ω you take a finite-dimensional complex vector space, and observables are then self-adjoint operators. There is also a notion of refinement, because each observable defines a decomposition of the space into a direct sum of subspaces; if one decomposition refines the other, you put an arrow. So the construction is very universal. That was the information part and the probability part.
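A numerical check of the conditioning action, in the same sketch style (mine, not the talk's): applied to f = H it produces the conditional entropy, and Shannon's chain rule H(X,Y) = H(Y) + Y.H(X) holds, an identity of exactly the kind that reappears below as a cocycle equation.

```python
import math

def H(probs):
    """Shannon entropy of a finite law, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# An arbitrary joint law P(X=x, Y=y) on {0,1} x {0,1}.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
pY = {y: sum(p for (_, y2), p in joint.items() if y2 == y) for y in (0, 1)}

# The action (Y.f)(P) = sum_y P(Y=y) * f(P | Y=y), applied to f = H:
Y_dot_H = sum(pY[y] * H([joint[(x, y)] / pY[y] for x in (0, 1)]) for y in (0, 1))

# Chain rule: H(X,Y) = H(Y) + Y.H(X).
print(H(list(joint.values())))         # about 1.846 bits
print(H(list(pY.values())) + Y_dot_H)  # the same number
```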
Now we want to go to this algebra and geometry thing. The idea is to define a cohomology theory related to these probabilities. I will not introduce directly the construction we did; I will make an analogy instead, because it is easier. The analogy is de Rham cohomology, in one of its simplest examples. You take two functions f and g defined on an open set U of the plane, and you ask whether there is another function, big F, such that the gradient of that function gives you these two functions, ∇F = (f, g). It is easy to see that there is a necessary condition: if the function F is smooth, the cross derivatives have to coincide, so ∂f/∂y = ∂g/∂x. Then you can ask whether this condition is sufficient. Not always. You can prove (and this is where topology first appears) that if the domain is star-shaped, radially convex, then the condition is sufficient: you will find the big F. But what if the plane has a hole? Then the answer is no, and you can consider a simple example: you take these two functions, and if you assume that there is such an F, you arrive at a contradiction by elementary computations. So in fact the answer depends on the shape of the domain; if it has a hole, for example, we know that the answer is no. We exploit this fact all the time in complex analysis.

In algebraic terms we will write this differently. We have the smooth functions; to a function we can apply the operator called the gradient, and take the result as a vector field; and to a vector field we can apply the curl operator, which maps it back to a function. We know that the curl of a gradient is zero, so the image of the gradient is contained in the kernel of the curl, but as we just saw it is not always an equality. So we consider this sequence (what we called an exact sequence), and we look at this group, the first de Rham cohomology group: the kernel of the curl divided by the image of the gradient,
$$ H^1_{\mathrm{dR}}(U) \;=\; \ker(\operatorname{curl}) \,/\, \operatorname{im}(\operatorname{grad}). $$
We just saw that if the domain is star-shaped this group is zero: everything with curl equal to zero can be written as the gradient of something. But if you remove the origin, it is no longer zero; we have a counterexample, and in fact you can prove that the group is isomorphic to $\mathbb{R}^n$, where n is the number of holes. So this group captures the shape: the answer depends on the shape of the domain.

Now, you can do a general construction. Here I will just say that there is a black box, if you want; the black box is called topos cohomology. The point is that you can put into it your category of information, your probabilities, this functional module that we defined, and you obtain a general construction of a cohomology: at the end, a sequence like the one we had before, except that the sequence in fact continues. You define the cohomology groups as before, the kernel of one map divided by the image of the previous one, and you continue. So what happens if you do this in this observation setting? I will take again the same simple example: you fix Ω = {1, 2, 3}, these one-point partitions, and the atomic partition below. If you follow the general construction, you get a characterization: a 1-cocycle is now defined by three functions f_{X₁}, f_{X₂}, f_{X₃} that satisfy these equations (and others, but the others are not important). These equations play the role of "curl equal to zero" here: instead of curl equal to zero, we have something more complicated equal to zero, but it is just this set of equations. And these equations are functional equations; remember, these objects are functions. Now, some people solve functional equations, especially in Hungary; they have been doing that for thirty years, and they know whole families of solutions. In particular, it is already known that for an equation like the one we found, the solutions are the multiples of the entropy.
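For reference (this is my gloss, not necessarily the slide's equation), the classical equation of this kind is the so-called fundamental equation of information theory, and the result alluded to here says that, under mild regularity assumptions such as measurability, its solutions are exactly the scalar multiples of the binary entropy:
$$ f(p) + (1 - p)\, f\!\left(\frac{q}{1-p}\right) \;=\; f(q) + (1 - q)\, f\!\left(\frac{p}{1-q}\right), \qquad p, q \in [0,1),\; p + q \le 1, $$
with solutions $f(p) = c\, h(p)$, where $h(p) = -p \log p - (1-p)\log(1-p)$.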
Here I selected a very simple example, just the entropy concentrated on two elements, but you can do it in general if you are patient enough. So what is the moral of the story? It is in fact a fairly general moral: the information cohomology, this general thing that we can define from geometry, is in a lot of fairly general situations one-dimensional. It is a one-dimensional vector space, and in fact it is composed of the real multiples of the Shannon entropy.

Let me conclude, because I do not have time. The idea is that we can maybe use this general, categorical approach to see probability a little differently. For example (and this is an idea of Gromov) we could take the observables as the primary thing, and Ω as a kind of model, like the selection of a basis for a vector space. Ω is evidently not really inherent to probability, because in a lot of situations (limits of random matrices, whatever) you change Ω along the way. So you could think that you have just observables, where observables are kinds of finite sets of possible outputs, and for each output you have the other outputs living in other observables, etc. You make a category of observables, and then you try to plug in an Ω that matches all those observables; but the construction is not unique, and you could study the different constructions of this kind, etc.

Maybe the perspective of all this is to try to understand what the geometry behind it allows, and how we can obtain things that are useful for probability theory. The main guess is to obtain higher information functions, ways of seeing how much information is shared between three, four, or a hundred variables, and to try to find them in the higher cohomology groups that we can define with the same approach.

These are the papers: the paper of Baudot and Bennequin, which is the introduction of the concept, and the paper of Gromov, which is this categorical approach to probability. Thank you.

Questions? No? If not, let's thank Juan Pablo again.