Thank you very much, Nate. So, recap. So far we have the category of stable maps. Then we showed that the subcategory of pseudo-holomorphic objects is given by a theta_J, which just associates the weight 1 to such an object and otherwise 0. And the idea is to deform this theta_J into a theta so that a certain number of properties hold. We discussed that in the first lecture. Then in the second lecture we showed that there is actually a smooth structure on this category, and explained what that means. And what we would like to find are smooth things which do this. Now, how do we get them? That is going to start to happen in this lecture: we will define another category lying over it, for which the Cauchy-Riemann operator gives a section functor. The fibers will actually be Hilbert spaces; over each object there is a Hilbert space, and the Cauchy-Riemann operator can be viewed as a section of this. And then the theta will be obtained from another kind of functor, this time defined on this bundle category, which you can view as a multisection. I will explain all this. Here we already know there are some kind of smooth objects, smooth functors; there will also be some kind of smooth functors up here. And if you choose this thing in general position, satisfying some properties like these, which we still have to formalize for these objects, then this one will actually have these properties. And it will be of a good type, namely of the smooth weighted category type: locally it is sort of represented by manifolds divided by finite group actions, carrying rational weights. That is good enough that you can actually integrate forms over it and so on, and can actually define SFT. But there is something we still have to discuss, namely orientations. And orientations are better done by going to a covering, where one introduces numberings of the punctures and so on.
OK, so that is what we did so far. So, this bundle category: we have our category of stable maps, and we take a functor into Hilbert spaces. When I formulate things I usually give a general formulation, but here it is for the category of stable maps; you can think of other categories you put in, and this scheme actually works for a lot of theories. So we associate to an object a Hilbert space. Two different objects might get two different Hilbert spaces, but the morphisms are lifted to linear maps between them. Then you can define a new category. The objects are pairs: an object and a vector which lies over that object in the Hilbert space. And what are the morphisms? A morphism is a pair (phi, e): phi is a morphism in your category of stable maps, and e belongs to the Hilbert space lying over the source of phi, which is, say, alpha. And what is the target of this morphism? It's just the vector obtained by applying the lift to the original vector e. So you lift each morphism to a linear map between the fibers, the objects are the vectors in the fibers, and the target is the image under this linear map. I wonder who did that; I don't want to destroy this piece of art. OK. So here we have the objects alpha and alpha prime. Here's the zero, there's the other zero. There's a morphism phi between these guys, and we have a vector e. And the lift, mu(phi), is a linear isomorphism which maps this here to mu(phi) of e. So we can identify a morphism in this bundle category with the underlying morphism phi and the vector e: the source of the morphism is e, and the target is the image which you get here. So what do we do in our case? Well, if you have a building of height one, say, then here you have an equivalence class of maps up to R-action, and we take a representative.
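The pair construction just described can be sketched concretely. This is a minimal illustrative encoding, assuming nothing beyond the description above; all names (BaseMorphism, BundleMorphism, mu) are my own, not part of the lecture's formalism.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class BaseMorphism:
    """A morphism alpha -> alpha' in the base category of stable maps."""
    source: Any
    target: Any

@dataclass(frozen=True)
class BundleMorphism:
    """A morphism (phi, e) in the bundle category: phi downstairs, e a
    vector in the Hilbert space over the source of phi, and mu the lifted
    linear isomorphism between the fibers."""
    phi: BaseMorphism
    e: Any
    mu: Callable[[Any], Any]

    @property
    def src(self):
        # source object: the pair (source of phi, e)
        return (self.phi.source, self.e)

    @property
    def tgt(self):
        # target object: the pair (target of phi, mu(phi) applied to e)
        return (self.phi.target, self.mu(self.e))

# toy example: fibers are the real line, the lift scales by 2
phi = BaseMorphism("alpha", "alpha_prime")
m = BundleMorphism(phi, 3.0, lambda e: 2.0 * e)
```

The point is only that a morphism upstairs is determined by the morphism downstairs together with the vector over its source; the target vector is forced.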
Now you see, when you take the tangent space here, then because of the R-action you can identify it with R cross the tangent space of the underlying V-component of the map: u-tilde has the R-component and the V-component. And our Hilbert space for this object consists of all maps which are complex antilinear from the tangent space of the underlying Riemann surface at the point z into this thing, which is identified as above. I take a representative here mod the R-action, but the first factor is defined independently of which representative I take. And this map should have a certain regularity property. Namely, away from the nodes it should be of class H². That is because the Cauchy-Riemann section will act on H³ stuff, which goes down to H². And at the punctures you also take exponential decay: if you take a puncture and cylindrical holomorphic coordinates, you want exponential decay of the partial derivatives up to order 2. That matches precisely the stable maps, which are asymptotic to cylinders and are of class H^{3,delta_0}, so three partial derivatives with exponential decay; if you apply the Cauchy-Riemann operator, they go to two derivatives with the same decay. So that's the Hilbert space. More generally, if you have a building, then over each of the floors alpha_0 up to alpha_n you take such a thing. It's clear then how the morphisms act: if you have a (0,1)-form, these morphisms come from biholomorphic maps between the buildings, and you map e to e composed with T(phi) inverse, the inverse of the tangent map of the biholomorphic map. Last question for this slide. You said that you wanted a bundle of Hilbert spaces, but the smooth structure on this bundle has not been given yet.
So in my talks, I first did the underlying space just as a category without smooth structure, and we looked at all the relationships. Then I put a smooth structure on it and could say more about it. Now I put just an algebraic structure on this bundle and discuss that, and then I put a smooth structure on these two things. And at that point you are at the level where you can just unleash some abstract perturbation results, which bring things into general position. So that's the structure of the talks. So here we inherit a lot of the structure which comes from the stable maps. We had the evaluation maps, the functions E plus and minus; we just compose them with the projection down here, and then we get such an evaluation map for E. The grading we take from the underlying object, and then we can decompose this. That is as before, except that this thing now has a little more structure: over each object there lies a Hilbert space. The next thing is to lift the data from S to E, and that means in particular the covering business which we had. So how does it look? Actually rather trivially. In the base we have this chopping functor, and then, over each of these parts, you have the (0,1)-form and you just push it forward. That's it. And it's fiberwise a linear isomorphism. So I have this object, which is a building, and over each of the floors I have a (0,1)-form; I chop it here and just take that forward, and that is an isomorphism on the fibers. It then satisfies precisely the relationships which we had. So that's basically completely on the nose; you don't even have to think about it. And then there's a functor: if alpha is given, consisting of different buildings, then for each map coming from each building you just apply the Cauchy-Riemann operator. So that's a functor.
And you might have noticed, it looks really rather like this. The color is better; that's Joel's color, yeah? It's not purple. Well, it's sort of better than that one. OK, let's not elaborate on this any further. So now let's first discuss algebraically what we can do here. The idea is to perturb theta_J. And theta_J limits my options, yeah? If I put a pseudo-holomorphic object into this, then the Cauchy-Riemann part vanishes, and this thing gives the 0 vector weight 1 and otherwise 0; so that's precisely this one. So we want to perturb this. And what we see here: this gives just the weight 1 to the zero section. The idea is now, if this is a zero section locally, I want to sort of take a partition of unity of it: view the zero section as several zero sections, but each with a rational weight, and then move these individual parts away to achieve transversality. That is what you ultimately want to do. So that's the idea. So this one will be perturbed, let's say. But when you move this away, you want to keep the symmetries; it should stay a functor. For example, locally you have the action of the automorphism group. So this will be turned into something like this: if this is a zero section, here's the base S and here's sort of E, it would look like this, and you want to turn it into this, where these pieces each have a fractional weight, and you want the whole collection to be invariant. Then, of course, you do this at different places, so it becomes a little messier: if you go globally, the things you constructed might then be bifurcated further, and so on. So that is what you want to do, and you need to develop a machinery to be able to pick such things up.
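The weighted-splitting idea can be made concrete in a toy model. Everything here is my own encoding, a sketch under the assumption that a multisection is just a function assigning rational weights to fiber vectors: the zero multisection lambda_0 puts weight 1 on the zero vector, and a perturbation splits it into several single-valued branches of equal weight, each moved away from zero.

```python
from fractions import Fraction

def lambda_0(e):
    """The unperturbed multisection: weight 1 on the zero vector, else 0."""
    return Fraction(1) if e == 0 else Fraction(0)

def split_and_move(offsets):
    """Replace the zero section by len(offsets) branches of equal weight,
    the i-th branch moved to the constant value offsets[i]."""
    n = len(offsets)
    def lam(e):
        hits = sum(1 for c in offsets if c == e)
        return Fraction(hits, n)
    return lam

# three branches of weight 1/3, two of them moved off the zero section
lam = split_and_move([-1, 0, 2])
```

The total weight in each fiber stays 1; only its distribution over the branches changes, which is what lets the perturbed problem remain a functor while individual branches are moved into general position.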
So you see, if lambda_0 is replaced by some lambda which consists of such sections, then this is only positive if the Cauchy-Riemann operator perturbed by such a section is 0. What does that mean? If this puts weight on the graphs of different sections, then this here will only become positive if you solve: Cauchy-Riemann of alpha equals one of the things in the graphs. And this you want to achieve transversally. And if this is transversal, then that will actually be a smooth object, a smooth functor. So that's the idea, and for this you then have to develop a little bit of machinery. Is it clear what the aim is? You're the chair; you are not asking questions. OK, here's the most basic possible question I should ask: what would happen if your perturbations were not equivariant? Well, then you lose some symmetry. And I think there's still a theory, but it's not the theory we want to do. I think you can actually do some really brutal stuff and ignore some of the structures; that's another way to produce data, and out of this, presumably, you can produce some invariants. Whether they're interesting, I don't know. It's like doing S1-invariant Morse theory and just forgetting the fact that it's S1-invariant: then you have usual Morse theory, something like that. So it's on that level. But if you work locally, then what the functor is really doing is saying: you choose something locally, and it has to be consistent with what you choose somewhere else. Yeah, so you have to keep that. But you might not; in some sense, when you patch it together, you want that it fits, but it could fit in some slightly complicated way. It could be. Yeah, so you have to think about it.
Because just saying "this should fit with that" is easier with the whole lot of structure which we have than if I relax it somewhat locally, say by allowing it not to be invariant under the action of the isotropy group and then trying to match things up globally. Because the constructions are local: as you'll see, you write a perturbation as a sum of a lot of perturbations, you construct them locally, and then you have to transport them all over the place by the morphisms. But I think it's possible, at least in a general framework, if you have a criterion, to forget some of the structure. For example, you could definitely forget the requirement that when you perturb, and you add trivial cylinders, they should still appear as pseudo-holomorphic cylinders; you could use them in your perturbation. That would stay consistent as long as it's invariant under the morphisms. You could also disregard the fact that things are disjoint unions: if this is a solution to this, you could go away from this. However, I don't think for the latter you would get a new theory, because since you can do it, there's at least a cobordism from not doing it to doing it, and when you arrange the data you might actually get a lot of cancellations. But I haven't carried this out. So there are a lot of things you can think about. OK, so what are the requirements on lambda guaranteeing the desired properties for theta? Here they are; actually they are even easier to formalize than for theta. First of all, we have to define something on the fiber product. So on the fiber product, and I don't write down that e_0 lies over alpha_0, you just take the product of the things. You don't want any zero, do you? The zero? Oh, yeah, I guess I want everything; that was my keyboard, I guess.
I tapped the wrong button. So this is a zero, sorry. So this is a restriction. Then we have the covering functor. Here's the algebraic version of this, which corresponds to the version in the first lecture. But these things can also be lined up according to the underlying faces, which was lecture part two, with a lot of confusion which hopefully decreased in the discussion session; it would be the same formula. So this is the algebraic version. Is the right-hand side not equal to just taking the maximal breaking of any one object? Right, yes, if you have this property. So what it says is basic; maybe I should never have written this formula. I guess mainly I'm asking, is there some interesting sign that I need to be aware of? No, it's just a lot of cancellation; ultimately it's one term, if you have this. Say lambda always takes values plus 1 or 0; the result here is always going to be plus 1 or 0. So now, here is something important. If I have an object alpha, then it has an associated Riemann surface, and the Riemann surface can be decomposed into components. Let's first think of a building of height 1: there are the parts of the surface which carry the components that are not trivial cylinders, and the parts which carry trivial cylinders. And if you have a building, a trivial cylinder building is just a vertical line of those guys. So when you look at this thing, you can see the trivial cylinder buildings and the rest of the components. That's a natural decomposition, and you have a forgetful functor, namely the one that forgets the trivial cylinder buildings. OK, we have that already. So now there is, first of all, a Whitney-type decomposition of E. If I look at my Hilbert space and have a (0,1)-form, I can set it to 0 over the trivial cylinder buildings, or I can set it to 0 on the complement.
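The Whitney-type splitting just described can be sketched in a toy encoding. The component labels and the per-component storage of the form are my own illustrative assumptions; the content is only that the two projections zero out complementary sets of components and sum back to the original form.

```python
def split_form(form, trivial_cylinders):
    """Split a (0,1)-form, stored per component, into its trivial-cylinder
    part and its complementary part."""
    e_tc = {c: (v if c in trivial_cylinders else 0) for c, v in form.items()}
    e_rest = {c: (0 if c in trivial_cylinders else v) for c, v in form.items()}
    return e_tc, e_rest

# toy example: two non-trivial components and one trivial cylinder
form = {"comp_a": 3.5, "tc_1": 1.25, "comp_b": -2.0}
e_tc, e_rest = split_form(form, {"tc_1"})
```

The decomposition is direct: the two pieces add back componentwise to the original form.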
So this here is the part which is defined on the original building but is 0 over the trivial cylinder components, and this one is perhaps non-zero over the trivial cylinder components but is 0 on their complement. So you have this decomposition; here it's written. And I have a question: if you have a two-level building with no trivial buildings in it, so it's non-trivial on every level, and suppose on the bottom level you have a trivial cylinder and something non-trivial, then what's the corresponding splitting? Is E^TC just the restriction to trivial cylinder buildings? So: if I have something like what you said, a non-trivial cylinder here and a trivial cylinder there, and then this, which is a trivial cylinder building, and a (0,1)-form over all of this, then the non-trivial-cylinder part would just put the value 0 here, not here. But it turns out, when you do the inductive steps for actually constructing lambda, that this bit already appeared earlier, and the thing already had the required properties here to begin with. And the trivial cylinder there is something which is interior; it's sort of trivial, biholomorphic to a standard cylinder, yeah. So here is a picture. Here is a lift of the little c: it just forgets the underlying trivial cylinder building, restricts, and gets this new object here in E. So before, you just forgot part of the stable map; now you throw away part of your object in E. So now you have two functors. One, pi, is just the projection of this Whitney decomposition of E onto this part; in particular, it covers the identity on objects. But this one, c, does not cover the identity; it covers the forgetful functor below, where you actually throw away trivial cylinder buildings. Why are we having such a careful discussion of the trivial cylinder components? Because if you want to do SFT, OK.
So if you think of this here as a preparation for ultimately producing data by integrating forms over components and so on, then the next step is: what can I do with the data? Can I represent it as a chain complex or something like that? If you want this thing to have certain properties, in this case an algebra property, you have to discuss the cylinders. They look, of course, completely trivial, but since you make concatenations, you add something into them and so on; they actually play a non-trivial role. If you disregarded them in some way, you would presumably also get some theory, but it would be different, or possibly different. So now this is, of course, a projection here. And here, if the underlying object doesn't have trivial cylinder components, then you obviously have this identity, because there's nothing to put to zero. Also, this functor is a retraction; it's linear on the fibers and covers the other retraction. So that's the structure which we have, and these things commute in this way. And then, what is important: if you restrict c to the non-trivial cylinder part, you actually get fiberwise an isomorphism. That's important because it allows me to pull back perturbations by this: I have the linearity, the isomorphism, in the fibers. So if you go through this list of things here, you find it's actually rather trivial in the concrete example, but this kind of structure, the way I write it, occurs in all these problems, like Floer theory and so on. It's always this kind of structure. OK, requirements. So there's this pullback operation. And here, what does this say? Lambda of e should satisfy this: if lambda of e is positive, then this has to be 1. And what does that mean? This being 1 means that over the trivial cylinder part, the component of e is 0. But that means I actually don't perturb over trivial cylinders.
So if you don't perturb over trivial cylinders, then d-bar over a trivial cylinder is 0, which means it's actually a pseudo-holomorphic cylinder; I don't have to perturb there. Is that clear? If lambda of e is positive, here I have lambda of e, then this one has to be 1. This here is the part of e over a trivial cylinder building, and lambda_0 is our original thing, which puts weight 1 on the zero section and otherwise 0. So if this is 1, this means that part is the 0 vector, and that means e over a trivial cylinder component is 0. So for the d-bar part: if d-bar equals that e over a trivial cylinder, it's a pseudo-holomorphic cylinder. So you see how this already produces one of the properties of our theta. Going back to that picture: for the cylinder that's in the middle of the bottom level, can that be perturbed? No, because of the inductive nature of things. On some level, like a level-1 building, of course, that is something which would satisfy this property. So whatever perturbation you construct by this algorithm will actually not perturb over trivial cylinders; you always get pseudo-holomorphic cylinders out after the perturbation. So then, of course, there's what we had before: if I have two stable buildings and I put them together, and I can move them against each other, then you want that property here, if I take one of the representatives. So what does it mean if this is positive? Let's just discuss what it means if this is positive on an object. First of all, it means there exists a positive rational number sigma and a vector e in the fiber over that object such that lambda of e is sigma, and alpha satisfies this equation here with the weight sigma; sigma is the number associated to this object alpha. So now, if alpha is actually a building of height k plus 1, so the top floor is k, then the sigma can be written as a product of positive rational numbers.
And this e, of course, is a sequence from e_0 to e_k for the floors alpha_i, and each of those satisfies this equation with a weight sigma_i. So what's the interpretation of lambda composed with this? It means, since in the fiber there are different vectors, that the object satisfies one of the equations coming from the vectors with non-zero weight, and they carry a weight coming from the underlying alpha. So I get a sequence of equations, each of them carrying a weight, and if I add all the weights up, it's 1; that's precisely the splitting of the zero section. But the equation itself doesn't depend on the weight? No... yes. Ultimately, in some sense, we count solutions, but we don't count them 0 or 1; we count them with a rational weight. And of course there might also be a sign. So if I have two solutions, each topologically counting 1, and the equation has weight one half, then the total thing I see is 1: one half from that equation plus one half from the other. So it's a system of equations where, if an equation holds and you have a solution, it contributes according to its weight. It's like the S&P index: how big is the capitalization of a company, something like this. I was also once interested in buying stocks; it's sort of this kind of thing. But you're also going to say that lambda of e equals sigma? That's what that means, lambda of e, yes. This is, of course, what happens here, which I said somewhere here: lambda of e is sigma. OK, there it is, here it is. So that means: if I solve the equation d-bar equals e, this equation counts, and it's taken into the general bookkeeping with the weight sigma. Then this one here, if there's a building, is decomposed into a certain number of e_i's, and we have this product structure. Each equation here has its weight, and I can put this together into this one.
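The weighted bookkeeping above can be sketched numerically. This is a toy encoding of my own: a building's weight is the product of its floor weights, and the count of solutions is a signed sum of rational weights, so two solutions of weight one half contribute 1 in total, exactly as in the example just given.

```python
from fractions import Fraction
from math import prod

def building_weight(floor_weights):
    """sigma for a building: the product of the floor weights sigma_i."""
    return prod(floor_weights, start=Fraction(1))

def weighted_count(solutions):
    """Signed weighted count: each solution is a pair (sign, weight)."""
    return sum(sign * w for sign, w in solutions)

# a two-floor building with floor weights 1/2 and 1/3
sigma = building_weight([Fraction(1, 2), Fraction(1, 3)])

# two solutions, each counting +1 topologically, equation weight 1/2 each
total = weighted_count([(+1, Fraction(1, 2)), (+1, Fraction(1, 2))])
```

Exact rational arithmetic matters here: the weights are rational by construction, and the invariance statements are about these rational totals, not floating-point approximations.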
And the sigma comes from the individual weights of these parts. Then, because of this property here, each e_i vanishes on the trivial cylinder components, actually for every i, because this one was already perturbed. And this e_i, on a building of height 1, already satisfies the property that it is 0 over a trivial cylinder component. So all the trivial cylinder bits, for example this one here on the blackboard: over all of these, if there are solutions, they would actually be real J-holomorphic cylinders. Then, if the alpha_i has different components, so I'm looking at one floor, you can decompose it according to the different components, some of which have trivial cylinders in them, and each of them actually has a weight; the sigma_i would be a product of the weights of the individual components. So this is what the perturbation does: individual components are perturbed separately, trivial cylinders turn out to be pseudo-holomorphic, and so on. So now we come to the smooth structure. Algebraically, I think, it's now all clear; now we have to put a smooth structure on this thing, see that we can define what a smooth lambda is, and so on, and then we are ready for the perturbations. Can I ask a question? At the very end, do you end up with something which is a Q-linear combination of manifolds, or something which is only locally a Q-linear combination of manifolds, whatever that means? No. OK, so a model of the following, which is actually one of the things you said, is what I'm explaining now. If I have two manifolds, the weight of each being one half, I could view them as four manifolds by taking two copies of each of them with weight one quarter; that would be considered equivalent. So then, if you have overlaps, so you have here some manifolds with weights and here some manifolds with weights, what does it mean that they fit together?
Basically, it means that if you take a certain number of copies here and a certain number of copies there, you can match them up so that the weights are the same; then they fit together smoothly. That's one of the things which I said. But, for example, in the middle you can't break it up into manifolds and then give each of them a weight, because there is no natural identification; you can only identify locally, not globally. I mean, you can put some artificial structure on it which says what you have to do at any given moment on the overlaps: a structure on top which says you have to take so many copies and identify them with these. Which is actually what you have to do when you prove extension properties, like for sections. Because look at how, if I have a section defined over the boundary, I extend it to the interior: the only method is to make local extensions and take a partition of unity. But if I don't know what to identify with what, what do I actually add up? So you need that structure of identifications to extend, and then, according to the identifications, you glue the things together to get an extension from the boundary. Thank you, good question. Any more? OK. So now the theorem is: there exists a natural (the star means up to fixing some discrete set of data) strong bundle structure for this thing here. It is basically like the polyfold structure before, except that here we have a strong bundle lying over O. We have the associated translation groupoid part; the objects on top are vectors in a strong bundle over E. Then we have an action by the isotropy group on this, and it covers this thing, which is one of the uniformizers from the polyfold structure on S. And we have all this in a coherent way.
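The "taking copies" equivalence discussed above can be made concrete. This is my own finite encoding, a sketch under the assumption that a weighted collection of local pieces is just a list of (label, rational weight) pairs: refining to a common denominator turns each piece into several equal-weight copies, and two collections fit together when their refinements agree as multisets.

```python
from fractions import Fraction
from math import lcm
from collections import Counter

def refine(pieces):
    """pieces: list of (label, Fraction weight).  Returns the common
    denominator d and a multiset of equal-weight (1/d) copies."""
    d = lcm(*[w.denominator for _, w in pieces])
    copies = Counter()
    for label, w in pieces:
        copies[label] += w.numerator * (d // w.denominator)
    return d, copies

def equivalent(p1, p2):
    """Two weighted collections are equivalent if, after refining both to a
    common denominator, the copy counts match."""
    d1, c1 = refine(p1)
    d2, c2 = refine(p2)
    d = lcm(d1, d2)
    scale = lambda c, dd: Counter({k: v * (d // dd) for k, v in c.items()})
    return scale(c1, d1) == scale(c2, d2)

# two manifolds of weight 1/2 versus four copies of weight 1/4
two_halves = [("M", Fraction(1, 2)), ("N", Fraction(1, 2))]
four_quarters = [("M", Fraction(1, 4)), ("M", Fraction(1, 4)),
                 ("N", Fraction(1, 4)), ("N", Fraction(1, 4))]
```

This captures only the combinatorics of the equivalence; the geometric content, that the matched copies actually fit together smoothly on overlaps, is exactly what the extra identification structure in the lecture provides.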
And then, if you have two of those guys, we get the transition set, and that transition set is actually a bundle over the transition set here, and it has a strong bundle structure, as defined in one of the lectures last week. So this is the generalization of the structure which we had here. And at this point you can start talking about smooth multisection functors. Is that sort of clear? OK. So now take the Cauchy-Riemann functor; it goes from here to here. And just look at this composition here: it lies in the image of this one, you get a local representative, and it turns out it's sc-Fredholm, which was defined by Katrin and Joel. And the solution category, where this is 0, has the property that its orbit space, intersected with each connected component of the underlying orbit space, is compact; that's Gromov compactness. So now we are in a smooth setting. The Cauchy-Riemann section is an sc-smooth section; that property means the local representatives are Fredholm. Then, where this thing vanishes, we have pseudo-holomorphic objects, the associated solution category: if I take its isomorphism classes and intersect with a connected component of the orbit space, it's compact. That's Gromov compactness; that's by definition what it means to have a Fredholm functor. I forgot what O and S are. O and S: S is stable maps, which is what we have been talking about for a while now. Good. Then O: this functor defines the polyfold structure. These are the things which are injective on objects; if you pass to the orbit space, you get... so there's an object here which is mapped to the originally given object alpha, and so on. But O is a retract; it's just an M-polyfold. So then this is a strong bundle over an M-polyfold, which was also introduced. So that is the model. So is the statement that if I take...
So E over S is some bundle, as we already said. Is the statement that whenever you pull back to a polyfold chart, you get a strong polyfold bundle? Is that what it means? It means that, given any object here, there is a selection of a set of those guys, and if you put that in, then that's the local structure near the object alpha. Let me try that again: so is d-bar defined by pullback? Yes, yes, right. So in this case, for this construction, if you take any of those guys, it is a Fredholm operator in the sense we have discussed. So the picture is, as I described before: you have the smooth functor theta, and when I put my hand in, I see sort of manifolds. Now I have a bundle over this, lying here, and when I look at the Cauchy-Riemann functor, its trace, what it does here and where it maps to, that's actually a real Fredholm operator. And I can put my hand in anywhere, and this structure means that if I know something here, I can always transport it to a neighborhood of any isomorphic object. So here is the polyfold packaging of the SFT data. We have a strong bundle structure over a polyfold; the Cauchy-Riemann section functor is sc-smooth and Fredholm; we have sc-smooth covering functors with compatibilities and some additional structure, where for each face, and I haven't defined this, but it's self-explanatory since we defined it on the level of S: if I have a face here, then this is just the part of E lying over that face. Then there were the covering functors, and a certain number of compatibility conditions, which we discussed in the last lecture and also yesterday in the discussion session. So you have these diagrams of these things. I have suppressed here the moving of components against each other. Out of this data one can then write down, as we did before, the requirements for the perturbation you want to do.
But that is basically the smooth packaging of the data which you need to produce the data for SFT. Is the last diagram also commutative if you replace p by d-bar? All these functors commute? No, I mean in the first two, this or this or this. So there are three diagrams, and then there are two equalities below, which I read as: reverse the arrows labeled p and label them d-bar instead. Yeah, yeah, here. So it's compatible: if you put the Cauchy-Riemann section in, the local representative, then this is the restriction of the Cauchy-Riemann operator, and here the other one. Then why are there not three equalities at the bottom of the three diagrams? Yeah, OK, that's good, because I forgot to write them. So what do I want to say? Well, this one controls this one. You see, here is the identity, so there's not too much happening with respect to d-bar; it's just controlled by the c, but the c has certain properties with respect to the p. That's what I want to say here. But is it true that p composed with d-bar is simply d-bar? No, no. On the solution set of what you're interested in, identity minus pi, composed with d-bar, would be zero, which means on the trivial cylinders you would be pseudo-holomorphic, yeah? Identity minus pi, composed with d-bar, equal to zero means that on the trivial cylinders you are pseudo-holomorphic. Right, so that's exactly the equation here, right, so pi composed with d-bar is in fact equal to d-bar? It would be; but you might have a non-trivial cylinder which is not holomorphic. If you have a non-trivial, non-holomorphic cylinder, then the other part would not be zero. No, then it would not be zero; I just said: only on the solution set, ultimately.
So I think you can say certain things under that assumption. So identity minus pi composed with D-bar equals zero — what does that actually mean? It means precisely that the trivial cylinders you see are pseudoholomorphic. Right, but that is exactly your right-hand diagram today. Ah, OK, good. Something in that direction — there is some truth to it. So now: constructions of sc⁺ multisection functors, which are a particular class of the multisection functors. This is an important class; you can view them as compact perturbations of the Cauchy-Riemann section. A multisection functor is of that particular kind provided it has the following properties. If you take the uniformizers, and the underlying point q0 is the object you are looking at, then this composition counts indices: it is the number of indices i for which the local section s_i satisfies this. So h lies in K, p(h) lies in O — the s_i are defined on O. And if s_i of the underlying base point is equal to the vector you put in, you count the number of such indices and divide by the total number of indices. And these s_i should be locally sc⁺ sections. Let me remind you what that was. The strong bundle comes with a double filtration: it makes sense to talk about K_{m,k}, where 0 ≤ k ≤ m+1. In particular, you have a K_{0,1} lying over O_0. And the sc⁺ sections go from here to here: they are defined on O_0, but on level m they take values in the fiber of K_{m,m+1}. And then, of course — this is why I said compact —
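To make the counting concrete, here is a finite-dimensional toy sketch (not the polyfold machinery itself; the base point `q`, the sections `s1`, `s2`, and the helper `multisection_weight` are all invented for illustration). Locally, a multisection is given by finitely many sections s_i, and the weight it assigns to a vector e over a point q is the number of indices i with s_i(q) = e, divided by the total number of indices:

```python
from fractions import Fraction

def multisection_weight(sections, q, e, tol=1e-12):
    """Weight the local multisection assigns to the vector e over the point q:
    (number of indices i with s_i(q) == e) / (total number of indices)."""
    hits = sum(1 for s in sections if abs(s(q) - e) < tol)
    return Fraction(hits, len(sections))

# Two hypothetical local sections over a 1-dimensional base.
s1 = lambda q: q     # graph: e = q
s2 = lambda q: -q    # graph: e = -q
sections = [s1, s2]

assert multisection_weight(sections, 0.0, 0.0) == 1               # both graphs pass through
assert multisection_weight(sections, 2.0, 2.0) == Fraction(1, 2)  # lies on one graph
assert multisection_weight(sections, 2.0, 5.0) == 0               # lies on no graph
```

Note the weights over any fixed base point automatically sum to 1, matching the earlier definition of a multisection functor.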
this is a gain of fiber regularity. And if you view them with respect to the different norm, that is a compact inclusion. That is why I call this a compact perturbation. So these are sections of some kind, but they are constrained by having this property. And I think this was mentioned last week. I don't know what the definition is saying. It's saying that there exist s_i's. Yes, there exist finitely many s_i's, indexed by the set I, and you look at the coincidences. So basically the picture is: if this is O here, and this is a fiber, then locally you have a certain number of s_i's, i in I, and each of them carries the weight 1/|I|, one over the number of elements in the index set. And for a given vector here, you just count in how many of the graphs it lies. I just don't understand — we had a definition of multisection functor before. The only difference now is that we require the thing to be locally represented by sc⁺ sections. So locally, in a chart or a uniformizer. So first of all, for the multisection functors, in each fiber there were finitely many vectors, with weights adding up to 1. Now, if I put the chart in, these different vectors should lie on graphs of sc⁺ sections. Is that clear? If you put your hand in and you see the different points in the fiber, they line up as lying on graphs of sc⁺ sections. And these sections should be compatible with the group action — that is the compatibility condition. So there is an action of the automorphism group on the index set I, and you have the orbits under conjugation. So let me first state some properties, what you can do with such a thing. You can build this sum here, which is really a convolution. And this is smooth.
So if each of the two is sc-smooth, or rather sc⁺ — I forgot the plus here — then the sum is sc⁺. Because what is the local section structure of this thing? You have the sections s_i for the one and t_j for the other, and you take all possible sums s_i + t_j, and as weight you take 1 over the number of indices of the first times the number of indices of the second. Do you have lambda_1 and lambda_2 sitting on different bundles here? No, no, they are both on our bundle E. Why do you call this a sum, in that notation? Well, it's a convolution — that's the better word. Because on the level of section structures you take all possible ways of adding things up. So if locally the first is given by the s_i, with index set I, and the other by the s'_j, with index set I', then the sum is given locally by all these combinations, where the index set is I × I'. So it is a "plus" because of that. Then this one here just replaces the sections locally by t times the section — and of course the weight formula has 1/t on the other side. This is a smooth family: if t is 0, you just get lambda_0. So I should put the lambda_0 case up here, and otherwise this. What is lambda_0? It is the multisection which puts weight 1 on the zero section: lambda_0(e) is 0 unless e = 0, in which case it is 1. And this is a smooth procedure. The indicator function here just means that the local structure is t times s_i, and as t goes to 0 you get the zero section. And all of that works exactly because the total sum of the weights over any fiber is 1. So this is a smooth family as you change t. Then, if you have an sc-smooth functor into R — it is clear what that means: composed with a uniformizer it is sc-smooth — you can put that in front as a factor.
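The two operations just described — the convolution sum over the product index set I × I', and the t-scaling with lambda_0 at t = 0 — can be sketched in the same finite-dimensional toy model (again, `weight`, `convolve`, and `scale` are illustrative helpers, not part of the actual theory):

```python
from fractions import Fraction

def weight(sections, q, e, tol=1e-12):
    """Weight of the vector e over q: fraction of local sections through it."""
    hits = sum(1 for s in sections if abs(s(q) - e) < tol)
    return Fraction(hits, len(sections))

def convolve(secs_a, secs_b):
    """Sum of two multisections: all sums s_i + s'_j, index set I x I'."""
    return [lambda q, si=si, sj=sj: si(q) + sj(q)
            for si in secs_a for sj in secs_b]

def scale(t, secs):
    """Family t -> t * lambda: rescale each local section; t = 0 degenerates
    to lambda_0, weight 1 on the zero section."""
    if t == 0:
        return [lambda q: 0.0]
    return [lambda q, s=s: t * s(q) for s in secs]

A = [lambda q: 1.0, lambda q: -1.0]
B = [lambda q: 2.0]
C = convolve(A, B)                      # index set of size |I| * |I'| = 2

assert weight(C, 0.0, 3.0) == Fraction(1, 2)     # reached only via 1.0 + 2.0
assert weight(scale(0, A), 0.0, 0.0) == 1        # lambda_0: all weight on zero
assert weight(scale(0.5, A), 0.0, 0.5) == Fraction(1, 2)
```

The product index set is exactly why the weights of the sum still add up to 1 over each fiber.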
So you can use partitions of unity to cut off such multisections smoothly. And this makes sense as long as, locally near a point in a uniformizer, the family is locally finite: at any point you have only finitely many non-zero vectors, and you add them all up in this way. Then this is again a good multisection. These are the important facts for actually constructing perturbations: they allow you to construct things locally and then add them up. And then there is a good existence fact: given a smooth object and a smooth vector over it, there exists an sc⁺ multisection functor lambda with lambda of that vector positive. I am going to show you how to prove this. So, for example, remember when Katrin was describing the transversality and perturbation result. If you have a Fredholm section — or even in finite dimensions, if you have a section of a vector bundle and you want to make it transverse by a small perturbation — you add to it a sum of t_i times perturbations to fill up the cokernel. Then you solve with respect to the additional parameters, you get a manifold, you project onto the parameters you added, and take a regular value. For every regular value, that is a good perturbation. So what do we have locally? What we want to achieve locally is to break the symmetry — that is generally what we have to do to achieve transversality; of course, sometimes we can avoid it. Then we take the orbit of this perturbation, which is also transversal. And then we may have some more perturbations. But for each of these local problems, it is precisely that argument. You just have to make sure that each of the local problems is transversal: you get a set of full measure for the parameters, you take the intersection, and you pick your values there. That is the only additional complication.
But otherwise you use precisely this argument. What that means is: rather than taking local sections, you construct local multisections and take their sum. Each local multisection depends on a few real parameters t. You take the right sum so that it fills up the cokernel. Then you get this branched manifold, you have the projection onto t, and for each local branch you require that the projection is regular. These are countably many conditions, so you find common regular values. That is the only difference — it is a straightforward adaptation of the finite-dimensional argument. Then you can even go further. For example, when you have a boundary point and the kernel sits badly with respect to the boundary — say it is tangential to the boundary — you can introduce multisections with a particular linearization: you can actually tilt the kernel into the manifold and make it transversal. For the first construction, you only have to produce sections which take enough values to fill up the cokernel; for this one, you have to think about a section which may have value 0 at the point but has a particular derivative which, together with the linearized operator, does the right thing. But that is the same problem as in finite dimensions — nothing new. This is of course not so surprising, because Fredholm theory is locally a finite-dimensional problem times something you don't have to care about, and for that finite-dimensional problem these perturbations are as rich as in the finite-dimensional theory. OK. So let me explain how I construct such a section. I want to construct a section which has a certain property at an object alpha, in a neighborhood. Didn't you just explain to us how you construct a section? That was on some level; now I do it on a precise level — and I still have 10 minutes. OK. Five.
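The cokernel-filling step has a simple finite-dimensional shadow, which may help fix ideas. This is only a toy sketch: the section `s`, its differential `Ds`, and the chosen perturbation direction `v` are invented for illustration. One checks that after adding the parameter direction, the extended linearization is surjective, so the extended solution set is a manifold and one can project to the parameters and take a regular value:

```python
import numpy as np

# Toy section s: R -> R^2; its linearization alone spans only a line in R^2,
# so the section can never be transverse to zero by itself.
def s(x):
    return np.array([x, x**2 + 1.0])

def Ds(x):
    return np.array([[1.0], [2.0 * x]])

# One extra real parameter t, with perturbation direction v chosen to
# fill up the cokernel (an assumed choice for this toy example).
v = np.array([[0.0], [1.0]])

def D_extended(x):
    """Linearization of the extended section (x, t) -> s(x) + t * v."""
    return np.hstack([Ds(x), v])

# The extended linearization is surjective at every sample point, so the
# extended problem is transverse; Sard then supplies regular values of t.
assert all(np.linalg.matrix_rank(D_extended(x)) == 2 for x in [-1.0, 0.0, 2.0])
```

In the polyfold setting the same scheme runs with local sc⁺ multisections in place of the parameterized perturbations, branch by branch.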
OK, good. So you just have to put something on the table and then you get a good answer. I want to construct something at a smooth object alpha with a given smooth vector over it. What do I do? I take a uniformizer. Here is the picture: the underlying thing is the orbit space; the image of this under psi, if I pass to the orbit space, is this red stuff. There is a point somewhere here which corresponds to the object. I take a neighborhood U there. Now, what do I need? In Hilbert spaces you always have smooth bump functions, and on certain Banach manifolds as well — but unfortunately there are Banach manifolds and Banach spaces where you don't. I think C^alpha, for instance, does not have smooth bump functions. There was a study of this 30 years ago; people were interested in it, so there is a lot of literature on which Banach spaces admit smooth bump functions. And sc-smooth bump functions are a little more delicate, because the requirement is levelwise. But in any case, in the Hilbert space setting where we are set up, you don't have to worry. So here is my set O, and here is a neighborhood. Say this is a point which goes to the object alpha. Now you just construct, with a bump function, this local section: you take a bump function which at the point corresponding to alpha takes the value e0 — the given vector in your fiber — with support in the small set. And then you rotate it around by the group action. So now you have a local thing, defined on the image of psi-bar of K. Then we define it by this formula, which is precisely the definition. And now we extend it to the whole category: if you have any vector, and there is no morphism bringing the underlying base point into the image of O, you just say it's the multisection which has weight one on the zero section.
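The "bump function times the given vector" step looks like the following in a one-dimensional toy chart (a sketch only; the profile, the chart coordinate `q`, and the names `bump` and `local_section` are illustrative, not from the lecture):

```python
import math

def bump(x, r=1.0):
    """Standard smooth bump profile: equals 1 at 0, vanishes for |x| >= r,
    smooth everywhere (the classical exp(-1/(1 - x^2)) mollifier, normalized)."""
    if abs(x) >= r:
        return 0.0
    return math.exp(-1.0 / (1.0 - (x / r) ** 2)) * math.e

def local_section(q, q0, e0, r=1.0):
    """Local section: prescribed vector e0 at the point q0, support in |q - q0| < r."""
    return bump(q - q0, r) * e0

e0 = 3.0
assert abs(local_section(0.5, 0.5, e0) - e0) < 1e-12  # hits the prescribed vector at q0
assert local_section(2.0, 0.5, e0) == 0.0             # support stays inside the chart
```

In the actual construction the chart is a retract in a scale-Hilbert space, but the Hilbert space setting is exactly what guarantees such smooth bump functions exist.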
And if you can actually reach this patch by a morphism, then you define it by what you reach. And that is a smooth functor, because going from one chart to the other I have this smooth transition: if I have a local section structure here, I can move it over there. So that is the local construction. Now you take a finite number of those, parameterized by a parameter set, to fill up the cokernel at alpha. The Fredholm property then guarantees that nearby the same is true. You do this at different spots, covering the compact solution space, and then you have enough to do precisely what Katrin described some time ago. OK? So I generously got five minutes and I only used three of them, so I stop here. Are there any questions for our speaker? Can you go back to the last slide, please? My computer is very entertaining. OK. Can you go through this again and tell me which spaces are which in this picture? OK. In this argument — it wasn't so apparent, but it is actually important that the underlying space is at least paracompact. I look at psi of O and take the associated isomorphism classes, which is this red stuff. That sits here, yeah? In this set of isomorphism classes lies the class of the original object alpha, and I take a neighborhood around it. This space is metrizable, hence normal, so I can find a smaller neighborhood whose closure in the whole space is still contained in it. That is important, because otherwise the construction would not even be continuous. Then you take the preimage of U in O, which is this blue thing. So this red stuff is the image of O, and this is the preimage of U. Now on this, you take a bump function with support in the blue region. And here somewhere is the point which corresponds to the object alpha, which lies here — it's the object alpha; this is a category.
The object alpha is somewhere here. It comes from a point which lies in the blue region. So over the blue region there is this point representing alpha, say q0. Over alpha there was the fiber, and the vector e corresponds to some vector lying over q0 in the bundle K. So you take a bump function which is one in a neighborhood of this point, times this vector, and extend it. I haven't talked about extension results, but on the polyfold level they are quite easy — you can ask me on Friday and I can show you how to construct them. So anyway, there is a section with support in the blue region. Now you want to construct a functor. What I do is transport this section around by conjugation. Then I get several sections — of course, if you started with a symmetric section, some of them could coincide, but that doesn't matter; the group is your index set, and you give each of them the weight one over the order of the group. In the definition, suppose this q0 has isotropy — how many distinct sections do you still get? Yeah, so this index set is coming from the isotropy group of this element. If it has large isotropy, then in general I would, for example, construct some section which achieves transversality nearby. Since D-bar is a functor, conjugating it doesn't change it; but the perturbation by the conjugated section is then the conjugation by g of the perturbed thing, so it is also transversal, at least in the region nearby. So moving this around doesn't destroy transversality. And then, of course, in general you might see some other sections coming from the overlaps. But for the construction, this is the minimal thing you have to do. And you give each of them the weight one over the number of elements in the group — that's precisely the requirement.
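Transporting the local section around by the group action and weighting each copy by 1/|G| can be sketched in the toy model as well (a hypothetical Z/2 isotropy group acting on a 1-dimensional base, with trivial fiber action; `weight` and `symmetrize` are illustrative helpers):

```python
from fractions import Fraction

def weight(sections, q, e, tol=1e-12):
    """Weight of the vector e over q: fraction of local sections through it."""
    hits = sum(1 for s in sections if abs(s(q) - e) < tol)
    return Fraction(hits, len(sections))

# Toy isotropy group Z/2, acting on the base by q -> -q (trivial fiber action).
group = [lambda q: q, lambda q: -q]

def symmetrize(s, group):
    """Transport the local section around by the group: one conjugated copy
    per group element; each copy later carries weight 1 / |G|."""
    return [lambda q, g=g: s(g(q)) for g in group]

s = lambda q: q + 1.0            # some non-invariant local section
multi = symmetrize(s, group)     # index set = the group, weights 1/2 each

# The resulting multisection is invariant under the action: the weight of a
# vector over q equals the weight of the same vector over -q.
assert weight(multi, 2.0, 3.0) == weight(multi, -2.0, 3.0) == Fraction(1, 2)
```

If the original section happens to be symmetric, the conjugated copies coincide — as noted in the lecture, that does not matter, since the weights still add up correctly.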
So that means that on that slice, the section is now defined. Now I have to extend it. So I take an object here, with some vector over it. Either I can reach this slice by a morphism or not. If I cannot reach it, I put weight 1 on the zero section. And if I can reach this slice by a morphism, then I define it like this: I look at what point is reached and give it the same value. And that is an sc⁺ multisection, now defined globally. So it's not very difficult — it is really always local constructions. And the language is at such a high level that you basically always see everything on the nose; you don't have to go into complicated coordinates and spell out what it actually means. Of course, there could have been an equivalent theory, but the work is much easier if you stay at that high level — in particular since at that high level all the information is there, and there are abstract results which produce whatever you want. OK, so good? So I will ask a question where maybe I kind of know the answer, I'm not sure. What is the reason that we need to go to M-polyfolds instead of working with retracts as the local model for this kind of category? Well, you could put retracts there if you want. I mean, M-polyfolds are a little bit larger: an M-polyfold is locally modeled on retracts. So rather than taking something which has one global chart given by a retract, you could replace it by the actual retract. I do this. It also has an advantage when I discuss the coverings: at some point — this has been suppressed in the whole discussion — I have to give a definition of what a covering functor actually is, and I have to give a local model for it.
Then on the top, generally, I have more points, so it is actually a union of retracts going down to the other thing. So things are easier this way. But one could — though I think it would be unnecessarily restrictive to insist on that. I mean, it's a fair question. The analogue in manifolds would be: I define a manifold as something which locally has charts homeomorphic to open sets in R^n, with smooth transition maps — rather than defining it as something which is locally homeomorphic to some already-given smooth manifold. Yeah, so the manifold is like locally a manifold. No, no, no. Right, that's what this is. Also, one side is a category, and the other is some well-defined, smooth kind of object.