All right, I think that's the signal to start. Good morning, and welcome back. So today I want to talk about some Ext-computations in light condensed abelian groups. The basic claim is that this category of light condensed abelian groups is a very convenient framework for doing homological algebra. It's a very nice abelian category: you can form a derived category without any problems. Because topological spaces sit essentially fully faithfully in condensed sets, and topological abelian groups — under some mild assumptions — sit fully faithfully in condensed abelian groups, you now have a nice place to play with them. But then, because you have an abelian category, it has all Ext groups, and you can wonder what they actually are. And for this theory to be somewhat useful, you want all the answers of these Ext-computations to be reasonable and interpretable. So the first theorem, which I stated last time, is the following. You can start with a nice geometric object, for example a CW complex X, and let's say M is any abelian group. Then one can look at the Ext groups, in light condensed abelian groups, between the free condensed abelian group Z[X] on X — or rather on the corresponding condensed set — and M, where M is treated as a discrete group. If you treat X as a condensed set, then in any topos there is an internal notion of cohomology, and one way to express this internal notion is via Ext groups against the free abelian group object. So this is the internal notion of the cohomology of X, expressed as Ext groups of condensed abelian groups. And it turns out that this precisely recovers the singular cohomology of X with coefficients in M. So the first thing I'm going to do today is sketch a proof of this theorem.
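A minimal LaTeX transcription of the theorem as just stated (notation assumed: $\mathbb{Z}[X]$ denotes the free light condensed abelian group on the condensed set underlying $X$):

```latex
% Theorem (stated last time): for X a CW complex and M any abelian group,
% Ext groups in light condensed abelian groups recover singular cohomology.
\begin{theorem}
Let $X$ be a CW complex and $M$ an abelian group. Then for all $i \geq 0$,
\[
  \operatorname{Ext}^i_{\mathrm{Cond}(\mathrm{Ab})}\bigl(\mathbb{Z}[X],\, M\bigr)
  \;\cong\; H^i_{\mathrm{sing}}(X, M).
\]
\end{theorem}
```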
So maybe I should say that, to a large part, all of this lecture — and maybe also the next one or two — still follows the first course that I gave on condensed mathematics some years ago, and you can find lecture notes online. So for much of what I'm talking about today, and also last time, the reference is condensed.pdf, which you can find on my webpage. Okay, so the proof sketch is that, well, as X is a CW complex, it is somehow an increasing union of subcomplexes X_i, where the X_i are compact Hausdorff and built from finitely many cells. And both sides take these colimits to limits — one actually needs a slightly finer statement, comparing complexes and not just the individual groups — so basically you can reduce to the case where you just have a compact Hausdorff space, that is, a finite CW complex. Then there is actually a more general statement that holds for any compact Hausdorff space, which need not be a CW complex built from cells. Theorem: let X be a compact Hausdorff space, and again M may be any abelian group. Then you still have these Ext groups, in light condensed abelian groups, out of the free guy Z[X], and it turns out that in this case you can always compute this as what's known as the sheaf cohomology of X with coefficients in M. So here, sheaf cohomology: this is defined in terms of the category of sheaves of abelian groups on X. You consider the topological space X, or the corresponding site; there are sheaves of abelian groups on X, and then you have a global sections functor to abelian groups. It is left exact but not right exact, so you take its right derived functors — let me write this as R\Gamma, just to distinguish it from other cohomologies that might arise — and here you apply this to the constant sheaf on M. And it is known that sheaf cohomology, restricted to CW complexes, satisfies the Eilenberg–Steenrod axioms, so it is the same as singular cohomology for CW complexes. But not in general.
In fact — I'm not sure if "only", but basically — singular cohomology is really only the right thing for CW complexes. In general, we should rather use something like sheaf cohomology, or, which is actually the same thing here, Čech cohomology. So here's a key example to keep in mind, of relevance to us. Suppose X is totally disconnected. [Question: do you mean totally disconnected compact? Answer: yes, sorry — profinite.] So in this case, you can actually show that the global sections functor is already exact. So the sheaf cohomology is just the global sections, and the global sections are just the locally constant maps from X to M, sitting in degree zero, and there is no higher cohomology. But now consider the singular cohomology, which is defined in terms of the singular chain complex, built from just mapping points and simplices into X. But from the perspective of simplices, there's no way to tell X apart from just a discrete set of points, right? Because X is totally disconnected, any map from a connected space will just factor over a point. So we can also compute the singular cohomology of X: again it vanishes in positive degrees, but in degree zero you get all maps from X to M, continuous or not. So this means the singular cohomology doesn't really see anything about the topology of X anymore; the sheaf cohomology does. [Comment: another possibility is to consider locally contractible spaces. I believe that when you cross X with an interval, these Ext groups should not change, though I'm not completely sure what the statement looks like. Then for locally contractible compact Hausdorff spaces you get comparison with sheaf cohomology, and if in addition the space is paracompact, you get comparison with singular cohomology, by the usual results.] Yes, yes, you're right.
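A LaTeX sketch of the contrast in the profinite example (my transcription of the two computations the lecture contrasts):

```latex
% For X profinite (totally disconnected compact Hausdorff) and M abelian:
% sheaf cohomology sees the topology, singular cohomology does not.
\[
  H^i_{\mathrm{sheaf}}(X, M) =
  \begin{cases}
    \operatorname{Map}_{\mathrm{loc.const.}}(X, M) & i = 0,\\
    0 & i > 0,
  \end{cases}
  \qquad
  H^i_{\mathrm{sing}}(X, M) =
  \begin{cases}
    \operatorname{Map}(X, M) & i = 0,\\
    0 & i > 0.
  \end{cases}
\]
% In degree zero, singular cohomology gives ALL maps, continuous or not:
% every simplex mapping into X factors through a point.
```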
So actually, I think the key statement you really need about X is that it's locally contractible in this funny sense, where locally contractible doesn't mean that it's covered by contractible opens, or that any point has a basis of open neighborhoods that are contractible, but rather that for any point and any neighborhood, you can find a possibly smaller neighborhood that can be contracted inside the larger one. And there is this funny assumption — that's the official definition of locally contractible — and you can actually show that everything I said about CW complexes extends to this. But to compare sheaf and singular cohomology, the usual treatment requires paracompactness. [I'm not sure if it is — oh yeah, it's compact, hence paracompact.] Yeah, I think this paracompactness assumption only really matters for the comparison with singular cohomology; for sheaf cohomology, if you go all the way back here, I think it disappears again. Yes. [Comment: it's important to say that sheaf cohomology needs no such assumption, which is why one can prove things in other contexts.] [Question, roughly: usually one works up to homotopy, where singular cohomology is the natural invariant — why not here?] Yes, I understand the question. So often one considers topological spaces up to weak equivalence, and then everything is weakly equivalent to a CW complex; for example, our profinite X is weakly equivalent to just the discrete set of its points. But here I don't want to consider topological spaces up to weak equivalence, because otherwise I wouldn't be able to treat totally disconnected spaces at all. I'm not using topological spaces, or condensed sets, or whatever, as a model for pure homotopy theory; I really am interested in the actual topological space. Right, so. Okay.
So I now want to prove — well, sketch — that if I have a compact Hausdorff space and any abelian group, then I can compare these Ext groups in light condensed abelian groups with sheaf cohomology. And I'm not doing this purely for fun: there is a further upgrade you can do, which we will actually use later. So here is the upgrade. Fix X. There are two sites. On the one hand, you have X with its open subsets — this is what's used to define sheaf cohomology on the right-hand side. And on the other hand, you can consider the slice topos: light condensed sets over X, defined via the site of light profinite sets S together with a map to X. And once you pass from the sites to the topoi, it turns out that there is actually a geometric morphism here, \lambda, from light condensed sets over X to sheaves on X. So what I'm telling you is that whatever sheaf you have on X, you can pull it back to get a light condensed set over X; and to define the pullback, you really only have to define it on the generating objects: \lambda^*(U), for U an open subset of X, is just U itself, viewed as a condensed set over X. So you take the usual sheaves on X and pull them back to light condensed sets over X. Correspondingly, you also get a functor on derived categories: from the derived category of sheaves of abelian groups on X, which is used to define sheaf cohomology, to the derived category of light condensed abelian groups over X. And then the first theorem is that, if you're bounded to the left — so on D^+ — this pullback is fully faithful. And in particular, this means it is compatible with cohomology: for all sheaves of abelian groups on X, the cohomology
computed on the topos of condensed sets over X, with coefficients in the pullback, is the same thing as the sheaf cohomology. And if you apply this to a constant sheaf, you recover the previous theorem. So this is for X compact Hausdorff — the case I really have in mind is X a compact Hausdorff space. So if you apply this to the constant sheaf on an abelian group, we get the previous theorem. And maybe this is also a good sanity check, because the point is just that sheaf cohomology is defined in terms of derived functors on some site, and the other side is basically also just derived functors on some site. And the original claim was only about one very specific sheaf, namely the constant sheaf, and only about its cohomology; but actually something much more robust is true, namely that any sheaf of abelian groups embeds, which can be phrased in terms of such a fully faithful functor. Okay. So let me sketch the proof. We need: for any object A of D^+ of abelian sheaves on X, the adjunction map from A to the pushforward of the pullback of A, R\lambda_* \lambda^* A, is an isomorphism. So this is a certain map in the derived category of abelian sheaves on X. And then it's just a general fact that you can check whether such a map is an isomorphism by checking it on stalks. And this is really the key point, where I'm kind of using the topos formalism in order to make a reduction to checking something on points. So we want that the stalk A_x maps isomorphically — we wonder whether this map is an isomorphism — to (R\lambda_* \lambda^* A)_x. And now the key point is that you actually have suitable base change properties that allow you to commute taking the stalk with this pushforward operation. That's the key point: the base change property that taking the stalk at x commutes with R\lambda_*. And this is where you actually use something about the nature of the topos of condensed sets — namely, you use so-called coherence.
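My attempt at a LaTeX transcription of the comparison just described (the notation $\lambda$ for the geometric morphism is my label; treat the exact formulation as a paraphrase of the statement in the lecture):

```latex
% Geometric morphism from light condensed sets over X to sheaves on X,
% with pullback determined on the generating open subsets U of X.
\[
  \lambda \colon \bigl(\text{light condensed sets}\bigr)_{/X}
  \longrightarrow \mathrm{Sh}(X),
  \qquad
  \lambda^* U = U \ \text{(as a condensed set over } X\text{)}.
\]
% Theorem: the induced functor on bounded-below derived categories of
% abelian sheaves is fully faithful; in particular cohomology agrees:
\[
  \lambda^* \colon D^+\bigl(\mathrm{Sh}(X,\mathrm{Ab})\bigr)
  \hookrightarrow D^+\bigl(\mathrm{Cond}(\mathrm{Ab})_{/X}\bigr),
  \qquad
  R\Gamma(X, F) \simeq R\Gamma\bigl(X, \lambda^* F\bigr).
\]
```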
In situations of so-called coherence, you often have such base change results. So you use a general mechanism: taking the stalk is a certain filtered colimit, and cohomology in these situations commutes with filtered colimits — that's what we are using. And there are general base change results of this kind in the generality of so-called coherent topoi; "coherent" here is basically the same thing as being quasi-compact and quasi-separated. And so in the end, what you see is that the key thing you really need at this point — the key geometric input, so to say — is that this condensed set corresponding to X, in the internal language of the topos, is quasi-compact and quasi-separated. This was precisely the thing I mentioned last time: if you want to check this, for example for the interval, you need to know that there is a surjection from one of the generating objects of our site onto the interval. [Question: so here you're using that you can cover something like the interval by a Cantor set?] Yes, exactly — you cover it by a Cantor set. Thank you. [Question: why did you need the boundedness assumption here?] Because otherwise the statement is either false or at least not provable in general. Yeah, so if you don't have things that are bounded to the left, there is always some issue of how cohomology interacts with these colimits of limits, and there is also a more basic question: to begin with, there are at least three versions of the unbounded derived category. There's Lurie's version — hypersheaves — which is maybe the best one.
And then there is just the plain derived category of sheaves, and then there is the left completion. If you want the pullback from hypersheaves to be fully faithful, it always wants to take the left completion. I mean, all of this doesn't matter if X is something like a CW complex, or if it's finite-dimensional — then everything is the same. But this kind of general statement, that it holds true for any compact Hausdorff space, only works when you're bounded to the left; otherwise there are some issues. [Question: I believe you want to regard the point as a limit of its closed neighborhoods rather than open neighborhoods?] Yeah, so yes, exactly. If you actually want to execute what I gave you as a hint, then you actually have to use that. A priori, the stalk is the filtered colimit over all open neighborhoods of x, but when you're in a compact Hausdorff space, these are cofinal with the closed neighborhoods. So you can instead take the colimit over the closed neighborhoods here, which has the advantage that a closed neighborhood is itself compact Hausdorff. So I want to stay within the realm of quasi-compact, quasi-separated objects, for which you need to take the closed neighborhoods; and you can just do that. Okay, so, thus, you can commute taking the stalk at x with the pushforward. And now this basically means that we've reduced to the same statement, but where the space X has just become a point. Okay, but if it's a point, then sheaves on it are just abelian groups. Well, there is still something nontrivial — there is still the whole topos of light condensed sets — but abelian groups should sit fully faithfully in there, because, basically, the integers Z as a condensed abelian group are projective, so we expect this. [Question: maybe you have explained it, but is it obvious that taking the stalk via closed neighborhoods agrees with taking it via open neighborhoods? Answer: no, I mean, you have to somehow use that the open and closed neighborhoods of a point are cofinal in each other.]
So a priori we would just take the colimit over open neighborhoods, but the transition maps — if you have two open neighborhoods, the colimit of the cohomology over the closed neighborhoods and over the open neighborhoods interleave, each sitting in between terms of the other. So in the filtered colimit, this doesn't matter. [I was just asking.] Let me just pause a second and try to unpack what happened, more concretely. So we were interested in computing some Ext groups out of Z[X] for X a compact Hausdorff space, and let's just say against some discrete abelian group M. Now I took a very fancy approach, but we could try to do it somewhat more down to earth. How would you actually try to compute Ext groups? Like in usual abelian groups, you would try to find a projective resolution; and if you cannot find one that's projective, at least you would want to find one that's acyclic for the relevant functor. And this fancy proof actually breaks up into steps along these lines. So step one is to show that if X is actually totally disconnected, then you don't actually have to do anything. In this case X is one of the generating objects of our site, right — it is in this case some light profinite set, so let me really write it as S here. The claim is that the Ext groups Ext^i(Z[S], M) are the continuous — that is, locally constant — maps from S to M for i equal to zero, and vanish in positive degrees. If S is, for example, the one-point compactification of the integers, this really just follows from the statement of last time that Z[S] is a projective object. But in general this is not true for an arbitrary light profinite set, and you probably still have to resolve this guy. And this comes down to the following: if you have, say, some class in Ext^1 against M, then you can always split it after pullback along a profinite cover of S — by what it means to be an exact sequence of sheaves.
And similarly, if there is some higher Ext class, you can always split it after passing to some kind of iterated resolution of this guy. And so, concretely, what this amounts to is the notion of hypercovers of profinite sets. So a hypercover is a simplicial object S_\bullet over S: there is S_0, S_1, and so on, mapping down to S. Concretely, S_0 is a cover of S, and you could recover S as a quotient by the fiber product S_0 \times_S S_0; but in a hypercover, S_1 is only required to surject onto this fiber product. And so on — where the "and so on" takes a little bit of effort to unravel. So whenever you have such a guy, there is a completely general principle: in any site, if you have a hypercover of an object, then you can build the corresponding chain complex of free guys, and this is actually always exact. And when you now pretend that this should give the answer, you would want that if you pass to the corresponding complex of continuous functions, it is still exact. And this is now what you have to prove. So: we need to show, for all hypercovers, that the corresponding complex of continuous maps is also exact — the exactness of the chain complex of free guys is automatic. And there are several ways to prove this. One is to use the same argument I used before: in order to prove this exactness, everything here can be treated as a sheaf on S, because instead of taking global sections directly, you can first push everything forward to S and then take global sections. Then it's enough to prove the exactness as sheaves on S, which again you can check on stalks. And when you pass to stalks, you realize that you're in the same situation where you're now hypercovering a point; but hypercovers of a point split, so there's nothing to do. So that's one way.
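A LaTeX sketch of the two complexes in play (my transcription; $S_\bullet \to S$ is a hypercover of light profinite sets):

```latex
% For a hypercover S_• → S, the chain complex of free condensed abelian
% groups is exact by a general fact valid in any site:
\[
  \cdots \to \mathbb{Z}[S_2] \to \mathbb{Z}[S_1] \to \mathbb{Z}[S_0]
  \to \mathbb{Z}[S] \to 0 \quad \text{(exact, automatic)}.
\]
% What actually has to be proved: applying Hom(-, M) term by term, the
% resulting complex of continuous (locally constant) maps is exact:
\[
  0 \to \operatorname{Cont}(S, M) \to \operatorname{Cont}(S_0, M)
  \to \operatorname{Cont}(S_1, M) \to \cdots \quad \text{(to be shown)}.
\]
```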
The other way is to prove something at a slightly more basic level: whenever you have a hypercover of profinite sets by profinite sets, you can always write it as a cofiltered limit of hypercovers of finite sets by finite sets. And then you can write this complex as a filtered colimit of the corresponding complexes where everything is a finite set. But for finite sets, all hypercovers split, and the exactness is automatic. I think the latter argument is the one I actually used in the lecture notes. So the first argument is like cohomological descent in SGA 4. And I think they also prove — I forgot in which reference, but in one of them — the proper base change theorem for proper maps in the appropriate sense (quasi-compact, quasi-separated, universally closed maps). And then the arguments for cohomological descent go through, and the cohomological descent spectral sequence will give you this result. Right — so either that, or you write the cover as a cofiltered limit of covers of finite sets; it takes a little bit of unraveling that you can always do this, but it's okay, and you reduce to the case of finite sets, where it's obvious. Okay, so that's the first step, if you try to do it more concretely — but maybe also the less interesting step, because we're still just treating a totally disconnected space. And now, step two is to treat a general compact Hausdorff space X. And we would again like to find such an acyclic resolution. And now we have actually found a lot of acyclics, because now we know that Z[S], for S totally disconnected, is at least acyclic, right? So now it's enough to find, not a projective resolution, but one by such Z[S]'s, where S is a light profinite set. So we want to resolve Z[X] by such guys. And let's actually do that.
Well, one uses precisely the point from before: you can always find a surjection onto X from a light profinite set. So pick some light profinite set S_0 surjecting onto X, and then you form the fiber product S_0 \times_X S_0. And this is actually — I mean, if S_0 is totally disconnected, then S_0 \times S_0 is as well, and the fiber product is a closed subspace of this product, so it is actually always totally disconnected. So you don't even have to do any further resolution: you can take the so-called Čech nerve, where S_i is just the (i+1)-fold fiber product of S_0 over X. So we have a hypercover — in fact a Čech cover — S_\bullet mapping to X. And so again, by this general principle of hypercover resolutions, you now get such a resolution of Z[X] by the free guys on these terms. So this is our resolution by things that are acyclic for our purposes. And this just tells us that if you're interested in those Ext groups out of Z[X], they can be computed by plugging these guys in. And so now we know that each of these is computed by the complex where you take the continuous functions from S_i to the coefficients — which is some really awkward formula for something like singular cohomology. So you take your nice space X, maybe the interval, you cover it by a Cantor set, then you take all the fiber products, and everywhere you take just the locally constant functions to the integers; and the claim is that this Čech-type complex computes the cohomology. And now it's not so clear a priori that this really computes the right thing. And what the previous proof amounts to is to check this by again treating all of these things here as sheaves on X — as global sections of sheaves on X — and then checking that it computes the right thing, which you can again do on stalks. And then you're done.
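The resolution just described, in LaTeX (my transcription; $S_0 \twoheadrightarrow X$ is a surjection from a light profinite set, e.g. a Cantor set onto the interval):

```latex
% Čech nerve of a profinite cover S_0 → X: the i-th term is the
% (i+1)-fold fiber product, itself a light profinite set.
\[
  S_i = \underbrace{S_0 \times_X \cdots \times_X S_0}_{i+1},
  \qquad
  \cdots \to \mathbb{Z}[S_1] \to \mathbb{Z}[S_0] \to \mathbb{Z}[X] \to 0.
\]
% Since each Z[S_i] is acyclic for Hom(-, M), the Ext groups out of
% Z[X] are computed by the Čech-type complex of locally constant maps:
\[
  \operatorname{Ext}^i\bigl(\mathbb{Z}[X], M\bigr)
  = H^i\Bigl(\operatorname{Cont}(S_0, M) \to \operatorname{Cont}(S_1, M)
  \to \cdots\Bigr).
\]
```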
And on stalks, one checks that the complex resolves the constant sheaf. [Question: does this argument also work for non-constant sheaves? Answer: that was precisely the point of the fancy statement about derived categories. In particular, it says that for all sheaves of abelian groups on X, the sheaf cohomology on X is the same thing as what you compute this way.] So next I want to talk about locally compact abelian groups. So there's this category of topological abelian groups whose topology is locally compact. What are some examples of objects in here? Well, all the discrete abelian groups; the reals; something like the reals modulo the integers, i.e. the circle; or the p-adic numbers; or also something nice like the adeles — when I think about the adeles, they sit in an exact sequence where you have a profinite subgroup and a discrete quotient, after taking care of the real factor; recall that the finite adeles are the profinite completion of the integers, rationalized. And there is some kind of structure theory for these guys: each object can be broken up into three pieces. One piece is discrete, one piece is a finite-dimensional real vector space, and one piece is a compact abelian group — I'm always getting confused about the order of the extensions. Right, so as you see, there are some kind of interesting short exact sequences in there. So you definitely expect that there is, for example, a nontrivial Ext^1 group, like of R/Z against Z, where the extension is given by the real numbers, or something like this. And two things that one classically knows about this category: it is some kind of exact category, and computing Ext in this classical sense is some kind of standard exercise; one knows that all the Ext^i vanish for i at least two, and all the Ext^1's are understood.
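A LaTeX sketch of the classical structure theory just invoked (the ordering of the extension below is the one I believe is standard; the lecture itself hedges on the order):

```latex
% Structure theory of locally compact abelian groups: every such A
% splits off a finite-dimensional real vector space, and the rest is
% an extension of a discrete group by a compact group.
\[
  A \cong \mathbb{R}^n \oplus A', \qquad
  0 \to C \to A' \to D \to 0,
\]
% with C compact and D discrete.  Example of an interesting extension:
\[
  0 \to \mathbb{Z} \to \mathbb{R} \to \mathbb{R}/\mathbb{Z} \to 0,
  \qquad\text{a nonzero class in }
  \operatorname{Ext}^1(\mathbb{R}/\mathbb{Z}, \mathbb{Z}).
\]
```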
And so you can wonder whether something similar holds true if you compute the Ext groups inside light condensed abelian groups. It does — okay. Theorem: take any two locally compact abelian groups A and B which are also metrizable. (And again, metrizability is something that can basically be ignored; it only comes from the restriction to light condensed abelian groups.) Then you can compute the Ext groups, in light condensed abelian groups, between the corresponding guys. First, fully faithfulness: you already know that the homomorphisms don't change — they are just the usual continuous homomorphisms. But there could a priori be some weird Ext^1 groups that you didn't know about; I mean, here you're computing extensions within the whole category of light condensed abelian groups, and there could be some really weird objects between them. And actually later — maybe hopefully today — you'll get an example where this actually happens. But here it turns out: not so. All the Ext groups — all the extensions of a locally compact guy by a locally compact guy in light condensed abelian groups — are themselves locally compact, and really, you can identify the abstract Ext^1 with the Yoneda Ext^1: the usual thing where you're looking at short exact sequences, 0 to B to something to A to 0, in locally compact abelian groups, up to the appropriate notion of isomorphism. Let me give some key examples — actually something slightly better than the theorem is proved along the way. First of all, you can compute the Ext groups of anything against the circle group T = R/Z: these vanish in all positive degrees. So the circle is a point of injectivity, and this is the point of Pontryagin duality being some kind of exact operation on locally compact abelian groups: the Pontryagin dual of a locally compact abelian group is Hom(A, T), sitting in degree zero.
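The key computation against the circle, in LaTeX (my transcription of the statement; $\mathbb{T} = \mathbb{R}/\mathbb{Z}$):

```latex
% The circle T = R/Z behaves injectively against locally compact
% abelian groups, which is why Pontryagin duality A ↦ Hom(A, T)
% is an exact operation, concentrated in degree zero:
\[
  \operatorname{Ext}^i(A, \mathbb{T}) = 0 \ \ (i > 0),
  \qquad
  \operatorname{Ext}^0(A, \mathbb{T}) = \operatorname{Hom}(A, \mathbb{T})
  = A^\vee \ \text{(the Pontryagin dual)}.
\]
```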
Another example: you could also try to compute the Ext groups of the real numbers against the integers, where some of the intuition is that the real numbers are connected, so they can never map nontrivially to the integers, which are discrete. So you would expect all the Ext^i(R, Z) to be zero, and indeed there are no interesting extensions you can write down. And I think actually these are more or less the key examples. To understand the general computation, you somehow have to do a dévissage, where A and B are reduced to these basic cases. And so, for example, if you map out of something discrete, there's not really anything to show, because then it's projective and the Ext groups are what you expect. And so you can assume basically that you have a compact metrizable abelian group in the source — having also dealt with the finite-dimensional real vector spaces — and you dévissage the target into something discrete and something compact. For instance, for the vanishing you can assume that A is compact, and then you can try to do a similar dévissage in B, and so on. But how does one actually go about computing anything here? We need to find something that is a usable projective or acyclic resolution of A. [Recall that every locally compact abelian group is, up to an R^n factor, an extension of a discrete group by a compact group.] And for this, the key is actually a second resolution, which works in complete generality. It was used quite a bit by Breen, in a certain setting in algebraic geometry, to do some computations. But the statement we needed was never actually put in the literature by Breen; there's an unpublished letter of Deligne to Breen where he proves the result, but it stayed unpublished. It's the following very nice theorem, which is important for us: there is a resolution, of the form I'll describe, functorial in the abelian group. So what is the resolution? You're trying to resolve an arbitrary abelian group M.
I guess you're trying to find some kind of universal projective resolution of this — to resolve M by free abelian groups. And there is of course a very easy way to at least find a surjection onto M: Z[M], the free abelian group on the underlying set of M, where you send the generator [m] given by an element m of M to that element of M. Of course, this is way, way bigger than M. And the standard way to continue is to do some kind of monadic resolution, where you use the free abelian group monad: the next term — I mean, you might put here, and this would be not what I want to do — you could put the free abelian group on the free abelian group on M, Z[Z[M]]; then you have two maps here, you take their difference, and then you can continue. But then these things get uncontrollably large. So this is not what I'm going to do. And you realize that actually you don't need something as big as this to generate the kernel, because the only thing you really have to enforce is that when you add two elements of M, the sum of the generators becomes the same as the generator of the sum, right? So basically, whenever you have a pair of elements of M — so here the next term is Z[M^2], generated by pairs (a, b) — you send (a, b) to [a] + [b] - [a + b]. And it's easy to check that this actually generates the kernel of this map, because once you impose those relations, you can uniquely simplify any such element and reduce to a single generator. And then you can continue: each term will just be Z[M^{r_i}], the free guy on some power of M, and there are transition maps that are given by some universal formulas like this one — except nobody is able to write them down. [Question: do you have to take a finite sum, in each term, of such powers, or is it enough to have one power, like maybe...]
So when we originally wrote this up, we used finite direct sums of such terms, but you can actually, by some stupid argument, basically cover any such finite sum again by a single such free guy. [Oh, okay, I see, I see.] You will actually realize that there's a small issue, because in one spot the relevant group is zero, but you will figure it out. But yes, you can just choose one term in each degree. Yeah, and all the differentials are given by universal formulas — that's actually what the functoriality means: all the differentials are universal formulas. So it's a little bit of a metamathematical result, yeah? And surprisingly, the proof of the theorem uses a little bit of stable homotopy theory: it uses, in some incarnation, something like the finiteness of the stable homotopy groups of spheres. That this appears in the proof is also the reason that you don't really know how to write the resolution down explicitly: at some point you need to basically kill something like stable homotopy groups of spheres — find surjections from finite free groups onto them — and while you can do that, you won't get anything specific. But one very nice thing is that this is really functorial, given by universal formulas, and so it works in any topos. So in any topos, you can write down the same complex, with the same universal formulas: whenever you have a sheaf of abelian groups, you can write down the same complex of sheaves of abelian groups on that site, and it will automatically be exact. So by the functoriality — really, by the universal formulas — it also works for condensed abelian groups. And so we now get a resolution of our A that we're interested in, where all the terms are Z[A], or Z[A^2], or some Z[A^{r_i}]. And so, in order to compute Ext groups out of A, this reduces to computing Ext groups out of these free guys — but this is precisely the thing that I was talking about previously.
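The Breen–Deligne resolution as stated, in LaTeX (my transcription; the exponents $r_i$ and the higher differentials exist by the theorem but are not explicit):

```latex
% Breen–Deligne: a resolution of any abelian group M by free abelian
% groups on powers of M, functorial in M, with differentials given by
% universal formulas (hence valid for sheaves/condensed groups too).
\[
  \cdots \to \mathbb{Z}[M^{r_2}] \to \mathbb{Z}[M^2]
  \xrightarrow{\ (a,b)\,\mapsto\,[a]+[b]-[a+b]\ } \mathbb{Z}[M]
  \xrightarrow{\ [m]\,\mapsto\,m\ } M \to 0.
\]
% Nobody can write the higher differentials down explicitly: the proof
% kills stable homotopy groups of spheres by surjections from finite
% free groups, which is possible (they are finite) but not canonical.
```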
So Ext groups out of free condensed abelian groups on nice topological spaces: this is precisely what we already know how to do. So this reduces the computation of Ext groups out of A to Ext groups out of the Z[A^n], and this is what we already discussed. So let us say that you want to compute Ext with values in the condensed reals, R underline. Then you get a complex whose terms are continuous functions on powers of A with values in the reals. Well, you only did the case where the target was a discrete abelian group; here you also need the case where the target is the real numbers. Yes, so if you look back at the notes on condensed mathematics, there are also proofs that for any compact Hausdorff space X one can compute the Ext groups of Z[X] into R. For X compact Hausdorff, the claim is that this has no higher cohomology, whatever X is, and in degree zero you of course just get the continuous functions. This also works with R replaced by any Banach space, but it really uses local convexity, because there are partition of unity arguments in the proof: for partitions of unity to behave nicely, you need the target space to be locally convex. This is really important. So as a preview of something that will happen later: when we consider real vector spaces in our theory, we will actually have to consider non-locally convex vector spaces. And we will be really interested in such computations in situations without local convexity, where you do not have this vanishing for non-locally convex vector spaces. And this means that we will actually have to resolve differently.
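The vanishing statement being invoked can be written as follows; this is my paraphrase of the result from the condensed.pdf notes:

```latex
% For X compact Hausdorff and V a Banach space (local convexity enters
% through partition of unity arguments), the free condensed abelian
% group on X satisfies:
\operatorname{Ext}^i\bigl(\mathbb{Z}[X],\,V\bigr)\;=\;
\begin{cases}
C(X,V) & i=0,\\[2pt]
0 & i>0.
\end{cases}
```

The case V = R is the one used in the lecture; the failure of this vanishing for non-locally convex V is what forces the later detour through totally disconnected spaces.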
So when we want to resolve by acyclic objects, we really have to go to totally disconnected spaces. All right, so let me give an example of how such a computation comes out. Let us try to compute the Ext groups of the reals against the integers, Ext^i(R, Z). In degree zero this is Hom(R, Z), maps from the reals to the integers, and this is zero because R is connected and Z is discrete. So now resolve R by Z[R], then Z[R^2], and so on. We know the Ext groups of Z[R^{n_i}] into Z: by the theorem, this is the same as singular cohomology of R^{n_i}, which is of course just Z in degree zero and zero in positive degrees, since R^{n_i} is contractible. And so this means that when you compute the Ext groups out of this resolution, each term will just give you one copy of Z. You would like the final answer to be zero in all degrees. A priori you might be worried that lots of these Z's remain and that you don't know what the differentials are. But one way to control this is to observe that you can do the stupid thing of also applying this functorial resolution to the zero group, which gives a resolution of the zero group whose terms are Z[0^{n_i}] = Z. And then you realize that computing Ext out of that complex is the same computation as over here, because each term individually has the same Ext groups into Z. So Ext out of R is the same as Ext out of 0, and the latter is just zero.
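Schematically, the comparison argument just described looks like this (my transcription; the middle step is the substantive one):

```latex
% The map of abelian groups 0 -> R induces, by functoriality, a map of
% resolutions, hence a map of complexes computing the Ext groups. It is
% a termwise quasi-isomorphism: for each term,
%   RHom(Z[R^{n_i}], Z) = H^*_{sing}(R^{n_i}; Z) = Z = RHom(Z[pt], Z),
% since R^{n_i} is contractible. Hence
R\operatorname{Hom}(\mathbb{R},\mathbb{Z})
\;\simeq\;
R\operatorname{Hom}(0,\mathbb{Z})\;=\;0 .
```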
Okay, so this is how one can leverage this knowledge about Ext groups out of free condensed abelian groups on reasonable spaces into Ext groups out of locally compact abelian groups. Let me actually mention a variant of this argument. One might be worried that it is kind of weird to do very explicit computations using an inexplicit resolution, but the inexplicit nature of this resolution somehow never becomes an issue: just the existence of such a resolution with these finiteness properties is enough. But actually there is a resolution that is explicit and that can also be used. This is MacLane's Q, or rather Q', construction, which was rediscovered by Commelin in the process of the liquid tensor formalization effort. So this is an explicit complex, in fact two of them. It starts just as you expect: we take Z[M], and then Z[M^2], where the first differential is just given by sending [a, b] to [a + b] - [a] - [b]. And it is a free abelian group on a single power in each degree, where now the powers are powers of two: next comes Z[M^4], then Z[M^8], and so on. It is also called the cubical construction; the Q stands for cubical, so to say. Let me try to write down the next differential here. Given [a, b, c, d], you imagine a, b, c, d as the four corners of a square. And then you take one pair of opposite faces, add them up, and subtract the two faces: so [a, b, c, d] maps to [a + c, b + d] - [a, b] - [c, d]. I hope I am doing this right; I might be off by a sign. And then you do the same for the other pair of faces, with the opposite sign: minus ([a + b, c + d] - [a, c] - [b, d]). I hope you can check that composing the two differentials gives zero, and if not, then some easy variant of this works. And now you can imagine how you do this one step up.
So you imagine now, for M^8, the eight elements sitting on the vertices of a cube, and then for each of the three directions you take this face minus that face, with appropriate signs, and sum everything up. And then there is a theorem that this is functorial and linear in the right sense: precisely, Q'(M) is always quasi-isomorphic to Q'(Z) derived base changed from Z to M. So in particular you can read off all the homology groups from the case M = Z, where everything is degreewise finite free. And basically all the theorems we are trying to prove can be put into the form that certain Ext groups vanish for all i greater than or equal to some bound, and for this purpose we can use this explicit complex instead; it is not quite a resolution of A, but for this purpose it is just as good. Also, one can actually compute something here: using a little bit of stable homotopy theory, one can show that the homology of Q'(Z) is closely related to the stable homotopy groups of spheres, shifted around in degree. This is again where a little bit of stable homotopy theory enters; basically, whenever you write down something this explicit, the explicit answer has something to do with stable homotopy groups of spheres. Okay, so this is just to say that instead of the abstract resolution, you can also run all of these arguments with this explicit complex instead.
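The sign check just mentioned can be done mechanically. Here is a small Python sanity check of my reconstruction of the two lowest differentials of the cubical complex (the sign conventions are my best guess from the board, not an authoritative transcription of Q'): elements of Z[M^k] are represented as formal integer combinations of k-tuples, and we verify that the composite of the two differentials vanishes for M = Z.

```python
from collections import Counter
import random

def combine(terms):
    """Formal Z-linear combination: map from basis tuples to nonzero coefficients."""
    out = Counter()
    for coeff, basis in terms:
        out[basis] += coeff
    return Counter({b: c for b, c in out.items() if c != 0})

def d1(a, b):
    # Z[M^2] -> Z[M]: [a, b] |-> [a+b] - [a] - [b]
    return combine([(1, (a + b,)), (-1, (a,)), (-1, (b,))])

def d2(a, b, c, d):
    # Z[M^4] -> Z[M^2]: view (a, b, c, d) as corners of a square and take
    # [a+c, b+d] - [a,b] - [c,d]  -  ([a+b, c+d] - [a,c] - [b,d])
    return combine([
        (1, (a + c, b + d)), (-1, (a, b)), (-1, (c, d)),
        (-1, (a + b, c + d)), (1, (a, c)), (1, (b, d)),
    ])

def apply_d1(chain):
    """Extend d1 linearly to a formal combination living in Z[M^2]."""
    out = Counter()
    for (a, b), coeff in chain.items():
        for basis, c2 in d1(a, b).items():
            out[basis] += coeff * c2
    return Counter({b: c for b, c in out.items() if c != 0})

# Check d1 . d2 = 0 on random integer inputs (the case M = Z).
random.seed(0)
for _ in range(100):
    a, b, c, d = (random.randint(-50, 50) for _ in range(4))
    assert apply_d1(d2(a, b, c, d)) == Counter()
print("d compose d = 0 on all samples")
```

If a variant sign convention fails this check, that is exactly the "easy variant" situation mentioned in the lecture: flip the sign on one pair of faces until the composite vanishes.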
It doesn't really change any of the arguments, except that you have a little bit more comfort because you actually know which objects you have. Is it actually a direct sum? I think not; I think the statement is that Q splits off something extra, which you kill, and then you get Q'; so there are the two versions, that's Q and that's Q'. All right. And so finally, let me do one key computation that is really important for the theory of solid abelian groups, which is coming up. One thing you can now compute is an Ext group out of something that is not discrete anymore, but actually quite big: you can take the product of countably many copies of the integers. The product is taken in light condensed abelian groups; or if you want, you can take the product as a topological abelian group and then pass to the corresponding condensed abelian group. So these Ext groups, Ext^i of the product of copies of Z into Z: it turns out that in degree zero they are just a direct sum of copies of Z, and they vanish in higher degrees. And this is very different from the classical answer: with F_p-coefficients, say, the naive dual of a countable product of copies of F_p is an enormous vector space, but now we only see continuous maps, and the dual becomes just a direct sum. And the really critical thing is that there is nothing in higher degrees. And so this guy, the product of copies of Z, will be the compact projective generator of solid abelian groups, Solid, which is a full subcategory of condensed abelian groups. All discrete abelian groups should be solid, and if you want this generator to actually be projective, you definitely need the vanishing in the higher Ext range, and this is what the theorem provides. Okay, so let's prove it; one actually uses a weird trick. Naively you would now again try to resolve this by free condensed abelian groups on profinite sets, but the issue is that this is a really large thing as a condensed set.
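In formulas, the computation just announced reads as follows:

```latex
% P = \prod_{n \in \mathbb{N}} \mathbb{Z}, viewed as a (light) condensed
% abelian group:
\operatorname{Ext}^i\Bigl(\prod_{\mathbb{N}}\mathbb{Z},\;\mathbb{Z}\Bigr)
\;=\;
\begin{cases}
\bigoplus_{\mathbb{N}}\mathbb{Z} & i=0,\\[2pt]
0 & i>0.
\end{cases}
```

The degree zero part is a continuous analogue of Specker's classical theorem that every homomorphism from a countable product of copies of Z to Z factors through finitely many coordinates; the new content here is the vanishing in higher degrees.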
If you treat this product here as a condensed set, then it is the union, over all functions f from N to N, of the products over n of the integer intervals from -f(n) to f(n). So for each n you choose a bounded, hence finite, subset, and each such product is a profinite set. In general, the whole thing is a huge filtered union of these, indexed by a poset of size the continuum. And so if you resolved such a huge colimit, the Ext computation would become a huge limit that is extremely hard to control. So this approach would be extremely hard to execute. But there is a trick: you can resolve in the other direction. You can embed this into a product of copies of the real numbers, with quotient a product of copies of R/Z, that is, a product of circles. At this point one crucially uses that such products are exact; it might be easier to justify in this specific case. Okay. So this is exact, and now we apply RHom(-, Z): the Ext groups of the middle term sit between the Ext groups of the two outer terms. Let's do the circle factor first. The product of circles is a compact abelian group, and we know what its Ext groups are. In degree zero there is nothing: the source is connected and the target Z is discrete, so there are no nonzero maps. And in degree one, the computation shows that you get just a direct sum of copies of Z; this is what the result of the previous computation, the vanishing of RHom(R, Z), gives when applied to the sequence 0 -> Z -> R -> R/Z -> 0. And then, comparing with the long exact sequence, you realize that what you need is that the Ext groups from the product of copies of the reals into Z are zero in all degrees. And now for the product of copies of R we are in the same kind of situation, with some kind of huge thing. And now, okay, there are two ways to finish the argument.
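The resolution in the other direction, and what applying RHom(-, Z) to it does, can be summarized as follows (my reconstruction of the board):

```latex
% Resolve to the right instead of to the left:
0 \longrightarrow \prod_{\mathbb{N}}\mathbb{Z}
  \longrightarrow \prod_{\mathbb{N}}\mathbb{R}
  \longrightarrow \prod_{\mathbb{N}}\mathbb{R}/\mathbb{Z}
  \longrightarrow 0 .
% From 0 -> Z -> R -> R/Z -> 0 and RHom(R, Z) = 0 one gets
% RHom(R/Z, Z) = Z[-1], and for the product:
\operatorname{Ext}^0\Bigl(\prod_{\mathbb{N}}\mathbb{R}/\mathbb{Z},\,\mathbb{Z}\Bigr)=0,
\qquad
\operatorname{Ext}^1\Bigl(\prod_{\mathbb{N}}\mathbb{R}/\mathbb{Z},\,\mathbb{Z}\Bigr)
=\bigoplus_{\mathbb{N}}\mathbb{Z},
% so the long exact sequence reduces everything to
\operatorname{Ext}^i\Bigl(\prod_{\mathbb{N}}\mathbb{R},\,\mathbb{Z}\Bigr)=0
\ \text{ for all } i .
```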
First way: we can observe that if you do the similar presentation for the product of copies of R, then each of the terms becomes a product of intervals, so each term becomes a Hilbert cube. And the comparison with singular cohomology, which for the reals went through finite-dimensional cells, is also true for the Hilbert cube: there is no higher cohomology. So the argument we gave for the real numbers also works for this Hilbert cube variant. But there is a slightly different way of arguing, using a little adjunction trick. The source here is a module over the condensed ring R. And so this means that the RHom over Z from the product of these things into any condensed abelian group will always be the same as the RHom, in condensed R-modules, from this guy into the internal RHom over Z from R into the target; this is an internal adjunction. But now this internal RHom from R into Z is already zero by the previous theorem, and so the whole thing vanishes. So we don't really need to know exactly what this product is; it is enough to know that it is some module over the real numbers, because RHom(R, Z) = 0 means that no module over the real numbers can map to Z in any derived sense. Okay, so that's one of the key computations that we needed. Which brings me, since I have maybe five minutes left, to a fun theorem which has some set theory in its proof. So here is the setup; consider the following assertion (*). I was wondering in which generality I can pull such a product out of an Ext group and get a direct sum, so that any map out of the product is a finite sum of maps that each only depend on one of the terms. And maybe not just for products, but even for sequential limits of such things. So let's consider the following assertion.
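The adjunction argument fits in one line; writing P for the product of copies of R, which is an R-module:

```latex
R\operatorname{Hom}_{\mathbb{Z}}\bigl(P,\,\mathbb{Z}\bigr)
\;\simeq\;
R\operatorname{Hom}_{\mathbb{R}}\bigl(P,\,
  R\underline{\operatorname{Hom}}_{\mathbb{Z}}(\mathbb{R},\mathbb{Z})\bigr)
\;=\;
R\operatorname{Hom}_{\mathbb{R}}\bigl(P,\,0\bigr)
\;=\;0 .
```

The only inputs are the internal Hom-tensor adjunction and the internal version of the previous vanishing theorem; nothing about the specific structure of P is used beyond its R-module structure.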
(*): for all sequential limits lim(N_0 <- N_1 <- N_2 <- ...) of countable discrete abelian groups, and all discrete abelian groups M, possibly not countable, the Ext groups from the sequential limit of the N_n into M are the colimit of the Ext groups from the individual N_n into M. And of course this automatically vanishes in degrees at least 2, because these are just Ext groups of discrete abelian groups; for discrete groups, Ext groups in condensed abelian groups and in usual abelian groups are the same, so they vanish in degrees at least 2, and then the colimit also vanishes. (*) is actually equivalent to the following statement: the higher Ext groups from the product of countably many copies of Z into the direct sum of countably many copies of Z are zero. The point is that computing such a sequential limit can be resolved by a two-term complex involving products of the N_n, so computing this can be reduced to the case where the limit is replaced by a plain product; then the N_n can be resolved by countable free abelian groups, which reduces to products of copies of Z; and similarly M can be resolved by free abelian groups, that is, direct sums of copies of Z. So all such computations reduce to this one case. You mean the transition maps might need to be Mittag-Leffler or something like this? I think surjective transition maps are what I want here; Mittag-Leffler would also suffice. All right, so that's the assertion. And then, something that Dustin and I realized quickly is that if you assume the continuum hypothesis, this fails: the relevant Ext group is not zero. The continuum hypothesis being the statement that the cardinality of the continuum, 2^{aleph_0}, is the first uncountable cardinal. And so Dustin and I kind of thought that maybe this was just too much to hope for. But then it turns out that set theorists had independently considered precisely the same question, just in a different language. Namely, one can unravel what this Ext group amounts to.
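The two-term complex mentioned here is the standard presentation of a sequential derived limit; writing f_n: N_{n+1} -> N_n for the transition maps:

```latex
R\varprojlim_n N_n \;\simeq\;
\Bigl[\ \prod_n N_n
\xrightarrow{\;(x_n)\ \mapsto\ \bigl(x_n - f_n(x_{n+1})\bigr)\;}
\prod_n N_n\ \Bigr],
```

so the limit is the kernel, lim^1 is the cokernel, and there is nothing in higher degrees for sequential limits. This is exactly how the limit gets traded for plain products in the reduction.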
Namely, again, the source is some kind of huge colimit, over functions f from N to N, of products over n of direct sums indexed by the interval up to f(n); in each factor the direct sum is finite, hence the same as the product. So it is a huge colimit of pieces for which we know the answer, and then you have to compute a huge derived limit over the poset of all such functions. And here is the theorem. There are many people I should mention here, in particular Bergfalk and Lambie-Hanson, who have written several papers about this, and the precise results I am announcing are essentially theirs. One of the statements I am stating here has, I believe, not been published, but Bannister has figured it out. So first of all, it is not just that the continuum hypothesis is excluded by (*). In fact, (*) implies that the continuum must be really large: 2^{aleph_0} must be bigger than aleph_omega. Basically what happens, I think, is that if 2^{aleph_0} equals aleph_n for some finite n, then you get some Ext nonvanishing in a degree depending on n. But if you make it larger than aleph_omega, then you have a chance, and in fact it is then going to be at least aleph_{omega+1}, since by König's theorem it cannot be equal to aleph_omega. And what they proved is that this smallest possible value is actually consistent with (*): the first possibility after these obstructions does occur. In fact, (*) holds whenever you take any ground model and extend it by Cohen forcing, adjoining aleph_omega many Cohen reals; it holds in that forcing extension. Cohen invented this notion of forcing, which takes one model of set theory and builds another, bigger one, in order to show that the continuum hypothesis may be false. And this is like the most basic forcing.
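The cardinal arithmetic step here is König's theorem, which I am sure of independently of the lecture:

```latex
% König's theorem: \operatorname{cf}\bigl(2^{\aleph_0}\bigr) > \aleph_0,
% while \operatorname{cf}(\aleph_\omega) = \omega.
% Hence 2^{\aleph_0} \neq \aleph_\omega, and so
2^{\aleph_0} > \aleph_n \ \text{ for all finite } n
\quad\Longrightarrow\quad
2^{\aleph_0} \;\geq\; \aleph_{\omega+1} .
```

So "bigger than all the aleph_n" automatically means "at least aleph_{omega+1}", which is why that is the first candidate value consistent with (*).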
I mean, nowadays there are a billion different types of forcing, but this is still the most basic one, where you just adjoin new real numbers, so to say, to your model, and here you adjoin quite a lot of them. Aleph_omega many is the minimal number you can take in order to have a chance, and once you adjoin that many Cohen reals to your model, in this simple kind of forcing extension, (*) turns out to always become true. Okay. And why do I mention this? Well, in this course I will never actually use this principle (*), but it is kind of neat to know that you can ensure it. Often, when you try to compute certain things, it is easy to figure out what the answer would be if (*) were true. And then the things you really need, you can usually prove without invoking this general principle. But there are also some situations where you might want it, for example in order to compute certain Ext groups out of Banach spaces, where it really is the case that you get the expected answer under this principle (*), but where in general these Ext groups are just some nonsense. Time's up, so... May I ask a question? It is really just about the name: the word "light". In English, "light" has two meanings. So in German, is this "leicht", as in weight, or "Licht", as in illumination? It is weight: as I said in the first lecture, it comes from these objects having the smallest possible weight. A basic question: when you computed the Ext groups for locally compact abelian groups, you said that this is zero for i at least one, and you said it is because of some property. No, I mean, sorry, I didn't want to refer to this.
I mean, there is no simple explanation; there is a computation, and it comes out that way. But in the end it is somehow nice that it matches what the other theory would give. Thank you. So I didn't catch the previous answer, but I want to ask again about this computation: for example, if you have a compact abelian group and you take the Ext into the reals, viewed as condensed. You have a complex that computes it, whose terms are continuous functions on various powers of this compact group into the reals. Right. And you have to show that this is acyclic in higher degrees. I don't see exactly how: it looks like the group cohomology complex, where you have this averaging by integrals, but I don't know exactly what the measure is; I didn't actually look it up in preparation for this. How does it work? So instead of mapping into the reals, it might also be that you want to map out of the reals... sorry, I can't reconstruct it right now, but you can find it in the lecture notes. All right, I don't see any further questions, so let's... In that last theorem, what was the colimit on the blackboard you are now looking at? The Ext^i of this is the colimit of what? I mean, you write this thing as this huge colimit over all functions from N to N. And then you can write the Ext group as a derived limit indexed by the same index category, where the individual terms are not something you can write down explicitly. And it is precisely these derived limits that the set theorists study. Okay, I understand now, because we know that there is a cohomological dimension result for this; I see how it is related, because of cofinal subsets of the functions N to N, ordered by eventual domination. For example, under the continuum hypothesis there would be a cofinal subset of order type omega_1.
And then such Ext groups tend, in general, to be bad. But yes, the specific order type of a cofinal subset of these functions, ordered by eventual domination, depends extremely much on the specific model of set theory.