Right. So I guess I want to start by saying that I'm trying to walk a fine line here. My main goal this week, jointly with Katrin, is to set up the analytic foundations that we need to make sense of what Helmut is going to talk about next week. Now, I could do that completely abstractly, because the polyfold theory is designed to be completely abstract, and that abstraction is part of what makes it useful. But if I did it completely abstractly, I think everyone would get lost pretty quickly and wouldn't have much to connect it to. So I have to bring in elements that are hopefully familiar and pair them in just the right way with certain ideas coming from the polyfold theory, to try to illuminate why the definitions are the way they are. Yesterday's talk would probably have been a good 80-minute talk with questions, or 60 minutes without, and so things got a little out of hand at the end. So I want to quickly review what I had hoped to say yesterday and then build off of that today. Briefly, then: what did I try to do yesterday? The first thing was that we wanted to parametrize a neighborhood of a nodal map. We're not dealing with anything pseudoholomorphic here at all; we're really trying to build a big ambient space of functions. So we're trying to construct some sort of parametrization, to be able to write down some sort of chart for this. And the first observation we made was that this pre-gluing map basically yields the right topology on this really big ambient space, which has the nice property that it contains our compactified moduli space.
So even if you're dealing with nodal curves or broken trajectories or whatnot, this big ambient space has a topology such that your compactified moduli space is a subset, and the correct topology on your compactified moduli space is induced from the ambient space. So this pre-gluing map was rather useful, and we defined it like this [on the board]. And we said, OK, if this gives us our neighborhood in the right space, maybe we can use it as a parametrization. But we can't, because it's infinity-to-one: there's all this information loss. So what do you do? Well, HWZ introduce this minus-gluing map, or anti-pre-gluing map, however you'd like to refer to it. Again it's given by some unpleasant formula, but it has the nice property that when I pair it with the plus-gluing, the pre-gluing map, then for each fixed gluing parameter a, the paired map becomes a linear bijection. So how do we make use of that? Well, what we can do is define this set script O sitting inside this subspace, consisting of all those maps which give zero when you anti-pre-glue, that is, when you minus-glue. And that's good, because it means that when we restrict our pre-gluing map, the plus-gluing map, precisely to this set O, it necessarily becomes an injection, a bijection onto its image. That kills the problem: we no longer have the information loss, as long as we're happy to restrict our attention to O. But now O might be some weird set, and you might say: so what? What have we actually gained? And so there was this idea, and it is only an idea at this stage, though in the end it's going to work out: maybe, in some way, this set O supports the SC calculus.
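Schematically, and suppressing the precise cutoff formulas (which live on the board and in the HWZ papers, and are not reproduced here), the structure just described can be summarized like this, in something close to the standard HWZ notation:

```latex
% For each fixed gluing parameter a, pairing the pre-gluing \oplus_a with
% the anti-pre-gluing \ominus_a gives a linear bijection:
\boxplus_a := (\oplus_a, \ominus_a).

% The set where the anti-pre-gluing vanishes:
\mathcal{O} := \{\, (a, u) \;:\; \ominus_a(u) = 0 \,\}.

% Why \oplus_a restricted to \mathcal{O} is injective: if
% \ominus_a(u) = \ominus_a(v) = 0 and \oplus_a(u) = \oplus_a(v), then
% \boxplus_a(u) = \boxplus_a(v), and bijectivity of \boxplus_a forces u = v.
```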
And maybe there's some way to treat it as if it were something like a Banach space, or a scale Banach space; maybe you can define a notion of smooth maps between such sets. So how might you do that? Well, you have to do a little bit of manipulation to rewrite the problem. We did that by writing down this map R. Its domain is the same, but its image isn't in the target space anymore: it's a map from your domain of unglued maps together with gluing parameters back to itself. And we define it in the following way: we take a gluing parameter and a pair of maps, we apply this box-gluing map, which is a bijection; then, and this was asked yesterday, we apply the projection onto the first factor, or rather the map zeroing out the second factor; and then we apply the inverse box-gluing. Then we made some observations, rather rapidly; I would have spent more time on this if I had had the opportunity. Remember the set O that we defined right up here? Once we take this definition of R and see how it's defined, we see that O is precisely the set of fixed points of R. We also checked quickly that R composed with itself is just R again: it's a projection, basically, a nonlinear projection. That's easy to check here, because you have a bijection, its inverse on the other side, and a projection in between. And then a quick computation, a little one-line proof, shows that these two properties together guarantee that this set O is in fact the image of R. And then there's this theorem, which says: after giving the space C cross E a suitable scale Banach space structure, this map R is SC infinity. I didn't really give you any justification for that, and I'm not going to.
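The observations just listed amount to a three-line computation. Writing P for the linear map that zeroes out the second factor, and suppressing the gluing parameter a (which R leaves fixed):

```latex
R = \boxplus_a^{-1} \circ P \circ \boxplus_a, \qquad P(x, y) := (x, 0).

% R is a (nonlinear) projection, since P \circ P = P:
R \circ R = \boxplus_a^{-1} P \boxplus_a \boxplus_a^{-1} P \boxplus_a
          = \boxplus_a^{-1} P^2 \boxplus_a = R.

% Fixed points of R are exactly \mathcal{O}:
R(u) = u \iff P(\boxplus_a u) = \boxplus_a u \iff \ominus_a(u) = 0.

% The one-line proof that \operatorname{Fix}(R) = \operatorname{Im}(R):
% if u = R(v), then R(u) = R(R(v)) = R(v) = u, so
% \operatorname{Im}(R) \subseteq \operatorname{Fix}(R);
% the reverse inclusion is immediate.
```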
You can look in the literature and see why something like this is true. If I remember correctly, the proof essentially boils down to this: you write out the explicit formula for R, and you get some terrible equation, some awful composition of additions and products and so on; you break it down into all the little pieces, and at the end of the day you prove each little piece is SC smooth, essentially by going to the list in the lecture notes for day one, which tells you where to look in the SC calculus paper to prove each of these components is SC smooth. And consequently you rapidly build back up to the fact that R is in fact SC smooth. Now, that by itself doesn't really seem to buy us anything. Sure, please do. [Question:] You say you have a suitable SC Banach space structure, so that E itself is an SC Banach space. Yes. [Question:] And for C, are you going to rescale near 0 by some gluing parameter, to change the structure near 0? No, no. I'm thinking of C effectively as R2, which is a nice finite-dimensional Banach space. [Question:] So it's just a standard product? Absolutely; when I say "suitable," that's really applying to E, not C. [Question:] But you gave an example, the weighted space with the delta. Yes. OK, so that's the suitable structure; it's not a mysterious thing. No, no, sorry; I think it ended up being a slight modification. I don't know if in my lecture notes I actually defined the scale Banach space structure on E; I gave you something very close to it, and I told you the base topology on E. And if you understand those two, you can guess what this structure needs to be on E.
I don't know if I said it explicitly, which is why I'm saying "suitable" now. But thank you. Are there any other questions? I don't want to blow through this; it's actually fairly important. Yes. [Question:] What was the purpose of the averaging terms that you put in the anti-gluing? Right, so let me explain it this way. O is defined to be the set of all points where the minus-gluing is 0, and we define it precisely this way so that the restriction of the plus-gluing ends up being a bijection. Now, script O is essentially going to parametrize a neighborhood consisting of nearby non-nodal maps plus some nodal maps as well. What happens is, if you think about having this nodal map, when you set up a base problem you typically model it so that the nodal point goes to, say, the origin in R2n, for instance. But then you ask: what are all my nearby maps? Some of them are going to be nodal, some non-nodal. For those which are nodal, you want to allow the nodal point to move around. Allowing the nodal point to move around forces you to add in the extra constant c that I had in the definition over here, in the corresponding scale Banach space. That's all fine; you can do everything with the pre-gluing, there's no problem with the pre-gluing. But in the minus-gluing?
Well, if you're not subtracting off those averaging terms, then it turns out that the only way for the equation defining O to hold is if those asymptotic constants are 0, and therefore the node can't move around in the image. [Comment from audience:] Think about it this way: if you take the nodal value, and now you glue and get this long cylinder, what is the best approximation for the nodal value? It's the average over the middle loop. That's the approximation for the nodal value. Yeah, and that's where the averaging term shows up. Sure, any other questions? All right, so this is where we left off. Remember, what we're trying to do is investigate whether script O supports the SC calculus; that's the idea. And then, and I don't know how this was developed in practice, but reading through the material, you make the following observation. If I have this map R, mapping essentially an open set in a scale Banach space to itself, which is SC smooth and satisfies R composed with R equals R, then I can do this trick. I can say that a function f, defined from the image of R to the image of some other such map R' in some other space, is SC-k if and only if, and this is my definition, f pre-composed with R is SC-k as a map from this open set in a scale Banach space to the other scale Banach space. See, we already have an SC calculus defined on SC Banach spaces. But now the question is: what about subsets? If these subsets happen to arise as images of these special maps, which we're going to call retractions, then we can make this definition. And O prime lives inside U prime; O prime is a subset of U prime. So there is one good thing about the terrible HWZ notation: it's fairly consistent.
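Written out, the definition just stated reads as follows, with R from U to U and R' from U' to U' the ambient retractions, O = R(U) and O' = R'(U'):

```latex
f \colon \mathcal{O} \longrightarrow \mathcal{O}' \ \text{is } \mathrm{sc}^k
\quad :\Longleftrightarrow \quad
f \circ R \colon U \longrightarrow E' \ \text{is } \mathrm{sc}^k.

% The right-hand side is a map from an open subset of an sc-Banach
% space into an sc-Banach space, where sc^k already makes sense.
```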
And so it always ends up being that O is the image of U, and U is a subset of E, open, at least until, in five or ten minutes, we add in an additional piece, something called a partial cone, or you can think of it as a partial quadrant, at which point U will be relatively open in a partial quadrant. But U is always the domain of R, and R always goes from U to U. It has to go from one set to itself, absolutely necessary, as you would want for a retraction. Any further questions? OK. So now we have a definition of smooth functions between these weird images, these weird subsets. That's the first bit of magic. The second bit is that these subsets really are rather strange. Even when you write down toy examples, and I think there's a homework example on this, you can see these things might have finite dimension but with the dimension jumping from point to point: locally varying dimensions, locally varying codimensions. For instance, if you have an SC Banach space which fibers over some other finite-dimensional space, the retract might be full-dimensional on one region and have infinite codimension on another. A priori it's quite wild. But nevertheless, despite all the strangeness, it does have a tangent bundle, and you can just see what it has to be. I have a map R which defines it, mapping U to U, with R composed with R equal to R. So it must be the case that the tangent map TR maps TU to TU, and you apply the chain rule, and the chain rule says TR composed with TR is TR, which gives you again a map precisely of this form.
And so then you just take as the definition: the tangent of this subset is the image of the tangent of U under the tangent map TR. It's kind of functorial; it's sort of the only thing it could be. And I should say, it doesn't actually depend on the choice of R, only on O. Now, I haven't yet officially defined scale smooth retractions, but this is the first observation. And once you've done this, and these are almost stupid things, you're just tinkering and you find this stuff, once you see it, you should have this idea, and this is the big conclusion: let's try to build manifolds locally modeled on subsets like O. So what are all the characteristics of O that we need in order to build something like a manifold locally modeled on it? This is what I would have liked to convey in my lecture last time, and I think it's important to see this story completely laid out like this. These things are called SC smooth retracts, the maps are called SC smooth retractions, and these are essentially going to form the local models for M-polyfolds. What I'd like you to see is this: you start with just pre-gluing, which shows up in whatever framework you want, any framework where you have nodal or broken elements in your moduli space, in some sort of classical analysis. Then you do this trick: you introduce the minus-gluing, repackage everything into this weird nonlinear projection, and show that it's SC smooth. And then you are necessarily led to the idea of these SC smooth retracts, which are going to provide local models for our big ambient space.
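The tangent construction just described, in symbols:

```latex
% TR is again an sc-smooth retraction, by the chain rule:
TR \circ TR = T(R \circ R) = TR, \qquad TR \colon TU \longrightarrow TU.

% So one defines the tangent of the retract to be its image:
T\mathcal{O} := TR(TU).
```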
So, any questions about this outline? [Question:] These retracts are really different from the usual setting, right, where you have, close to the pre-glued map, pre-composition with smooth maps, in Sobolev spaces? Sorry, what was that? I missed it. [Question:] This is really different from the standard setup, this feature of R. Which feature, the SC infinity? [Question:] No, that pre-composition with R exactly preserves the regularity you want; compare with Sobolev spaces and pre-composition with smooth maps. Well, if I have a classically C-k function f and I pre-compose with a smooth function, the chain rule tells me that regularity in that sense is preserved. The same is happening here, so that's not so surprising. The surprising thing, and this was commented on yesterday, is that if you strengthen the hypothesis from SC infinity to classically C infinity, then the image can't have these weird properties: it necessarily must be a Banach manifold, and you reduce to the standard calculus in that case. Yeah. [Question:] Am I supposed to think there's a difference between "locally modeled on O" and "locally modeled on pairs (E, R)"? OK, that's a great question; I want to get to that right now, actually. Can you repeat the question? So there's this question: when I say "locally modeled," what actually constitutes a local model? I keep saying O, and the first time you see this, the natural reaction is to say: you don't mean O, you mean R and E in there somewhere. So let's see. OK, a quick definition. E is an SC Banach space, and U contained in E is open, where, recalling my standing ambiguity,
I really mean open in the base topology. R is a map from U to itself, satisfying R composed with R equals R, and R is SC infinity. That is the definition of an SC smooth retraction. Then here's another definition: E is an SC Banach space, and a subset script O of E is an SC infinity retract provided there exists an SC infinity retraction R such that O is equal to the image of U under R. My handwriting is a little sloppy, but everything here is just a formalized version of what we did; there's no essential change. [Question:] I'm not sure I entirely understand the second definition. An SC Banach space comes with a whole lot of structure. How much of that structure is O supposed to remember? Right, so that's your second question; I want to answer the first question first, which was: what is the local model? Here's the answer. The local models are sets of the following form: the pair (O, E), not (R, U) or (U, E); we don't want R in there, which may seem strange. And in terms of notation, I'll admit that even recently I've been irritated by the fact that, if you want to be as honest as possible, you can never just say O is the local model; you have to give both O and the ambient space. But it turns out that you don't have to specify R. And the reason is that if you look at this definition and suppose I have one retraction which makes this function here SC smooth, and I choose a different retraction which defines the same set, then the function has the same regularity with respect to either.
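The independence claim has a short proof, worth recording. Suppose R and R' are two sc-smooth retractions on U with the same image O:

```latex
% Any retraction restricts to the identity on its image: if
% x = R'(y), then R'(x) = R'(R'(y)) = R'(y) = x. Hence
R\big|_{\mathcal{O}} = \mathrm{id}, \qquad
R'\big|_{\mathcal{O}} = \mathrm{id},
\qquad\text{so}\qquad R \circ R' = R'.

% Therefore f \circ R' = f \circ R \circ R' = (f \circ R) \circ R'.
% If f \circ R is sc^k, pre-composing with the sc-smooth map R'
% preserves this, so f \circ R' is sc^k as well; by symmetry, the
% definition does not depend on the choice of retraction.
```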
So the regularity is independent of the retraction, and the retraction doesn't really tell you any other information. All of the structure that O has, all the scale structure, is induced from E, and the differentiable structure is induced from the fact that there exists an R whose image is O; as long as there's one, you're fine. And so in the literature this pair then becomes your local model, replacing, say, open sets in Rn or open sets in Banach spaces. And I can make that even more precise now, unless there are further questions. [Question:] Can we talk about O as also having a scale structure? Yes, it has one. [Question:] The images of the appropriate levels of E? Yes. Remember, O has to be a subset of E, and any time you have a subset, the natural thing to do is to say that the k-th level of O is just O intersected with the k-th level of E. [Question:] But as a set, it lives at level zero. Well, remember, my standing ambiguity from the beginning of yesterday says that any time I make a set-wise statement like this, I always mean the base level. [Question:] Explain what it means to be an isomorphism of such objects. An isomorphism of such objects? Well. [Question:] For instance, if I have (O, E), and I include it as O comma zero inside E cross C, is that an isomorphism? Oh, I see. Well, I'll be honest and say that I don't quite know what you mean by isomorphism in this case. In general O doesn't have a linear structure, so consequently isomorphism would have to mean diffeomorphism, and that is a concept I'll define. But in fact, from just what's on the board, you should be able to conjecture what it would have to be.
You would say: I've got two such retracts, and they're SC diffeomorphic provided there exists a bijection between the two which is SC smooth with SC smooth inverse; or SC-1 if you like, but we'll stick to the smooth category for simplicity. [Question:] So in other words, the notion of isomorphism doesn't care about U? Well, E is still running in the background, because I can't talk about a local model unless I have both the ambient space and the subset; the definition of SC smooth passes through U. [Aside:] This is of course related to a Facebook post I had about whether it's typographically acceptable to write statements like this. Most people hated statements like this, but writing down what these things are subsets of is useful, I think. Vertically, it's OK; slanted, it's terrible. OK, right. [Question:] What you just said is that the obvious candidate for isomorphism is an F which is a bijection and SC infinity, but don't you need to require the inverse to be SC infinity as well? Right, I said bijection, and I want the function and its inverse both to be SC smooth; I wouldn't want just one direction, for sure. Sorry if I omitted that. [Question:] I don't entirely understand why you need to keep E in the local model. Can't you just take the knowledge of O plus the knowledge of what all the SC functions on it are? That's above my pay grade; I'm not sure. [Question:] You're defining SC smoothness for a map between O's via U's, which sit in E's; that is where E comes into the picture, right? Because you are always just composing all of these maps.
Yes, yes, I think that's right. I'm not entirely sure what the question is at this point. [Question:] You can't define smoothness unless you know E, because the definition depends on it. Ah, yes, yes, thank you; that was my fault. So now I'm going to do something which is sort of obvious, but I think sometimes doing obvious things on the board solidifies how obvious they should be. What I want to do is give you the definition of an M-polyfold, minus one ingredient. Definition: let X be a topological space and x a point of X. A chart around x is a tuple (V, phi, (O, E)), where V is open in X, (O, E) is an SC smooth retract, or a local model if you like, with O a retract sitting inside E, and phi is a map from V to O which is a homeomorphism; at this point we have no further structure on X. Definition: an SC infinity atlas on a topological space X consists of charts of the form we've just seen, such that they are pairwise compatible, "compatible" being the word I have yet to define, and such that the collection of V's covers X. Two atlases are equivalent if their union is an atlas. [Question:] Is E fixed, or does it vary in the definition? No, E need not be fixed. A chart is any tuple of this form where E is some scale Banach space, O is some SC retract sitting inside that E, V is any open set, and phi is any such homeomorphism. They could a priori be different. And things like that are actually important, because when you want to consider maps whose domain is, say, a Riemann surface, you could ask: what do we really mean by a Riemann surface?
Someone might want S2 to be the set of all points at unit distance from zero in R3, but someone else might want something slightly different, sitting inside R4 or something silly. So strictly speaking those are different spaces, but you would want to allow that in such a definition. Definition: charts are compatible if the associated transition maps are SC smooth. Yes, and there's an exercise about the domains of those transition maps, which I'll come back to. Now the important definition: an M-polyfold is a paracompact Hausdorff topological space equipped with an equivalence class of SC smooth atlases. I claim this definition should be sort of obvious, but also necessary for me to write down. There's a question over here, I think. No? OK, good; those are the easy questions to answer. Any questions about this? [Question:] Why is it reasonable to ask that it's paracompact Hausdorff? Well, because I would like it to be the case that if I restrict to finite dimensions, I recover the usual notion of a differentiable manifold. And it turns out that when you build these M-polyfolds, for instance for Gromov-Witten theory and SFT and so on, they are paracompact Hausdorff; all the standard constructions are. [Question:] When you say the charts are compatible, you're asking the transition maps to be SC smooth, and you need, if I remember well, that the k-th level goes to the k-th level. Are you assuming that a priori, or does it come from the homework? Right, that's a good point, actually. What's happening here is that you're starting with something that's nothing more than a paracompact Hausdorff topological space. It seems to have no additional structure; it has no level structure.
It has nothing else. And it turns out that this is the structure you need so that, once you equip it with an equivalence class of atlases, or even one atlas in particular, because all of your associated transition maps have to be SC smooth, they have to preserve levels, they carry this differentiable structure, then any information that you see in the local model that you would like to see in the M-polyfold, and that is preserved by the transition maps, which is essentially everything we've discussed, is induced on X via the local models. So yes, after you have an atlas, this thing has what I would call a scale topology and a variety of other structures. [Question:] But the transition maps will be defined from a subset of O to a subset of O prime, right? So what does it mean for such a map to be SC smooth? That subset isn't obviously the image of a retraction. This was the exercise Helmut stated: it turns out that an open subset of a retract is again a retract. Yes, that's a good point, though. Any other questions? [Question:] Is O open? It seems like it's probably not; it's defined by a closed condition. In general it's certainly not going to be open, because to be open it would have to fill up an open set inside E, and it's much wilder than that. But it is the image of an open set in a scale Banach space under a scale smooth retraction, and that's enough to make this definition work. [Question:] And the topology on O is the subspace topology, right? Well, strictly speaking O has scale topologies, but the base one, yes, is induced from the base topology.
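For reference, here are the three definitions from the board assembled in one place; this is just a transcription of what was said, in the usual HWZ notation:

```latex
% Chart: for x in a topological space X, a chart around x is a tuple
% (V, \varphi, (\mathcal{O}, E)) with V \subseteq X open,
% (\mathcal{O}, E) an sc-smooth retract, and
\varphi \colon V \longrightarrow \mathcal{O} \quad \text{a homeomorphism.}

% Compatibility: two charts are compatible if the transition maps
\varphi' \circ \varphi^{-1} \colon
  \varphi(V \cap V') \longrightarrow \varphi'(V \cap V')
% are sc-smooth; this makes sense because an open subset of a
% retract is again a retract (the exercise mentioned above).

% M-polyfold: a paracompact Hausdorff space X together with an
% equivalence class of sc-smooth atlases (pairwise-compatible charts
% whose domains cover X; two atlases are equivalent if their union
% is again an atlas).
```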
Yes, yeah. OK, so now I can tell you that I'm also lying, in the following sense. If you're not studying Gromov-Witten theory, then pretty much any other moduli problem you're likely to come across is going to have some richer algebraic structure, which relies on your moduli space having boundary and corners. And currently, with this definition of M-polyfolds, nothing has boundary or corners. So we need to fix that. OK. Definition: a linear SC isomorphism is a linear SC-0 map T, from E to F, which is an isomorphism on all levels. Now, a partial, and tell me which word goes here, is it quadrant or cone? A partial quadrant is a closed convex set, let me give it a name, C, such that C equals T applied to the model, where T is a linear SC isomorphism. So this model here I'm sure is called a partial quadrant; are we calling the images partial quadrants as well? Fantastic: we'll call this one the model partial quadrant. And you might ask why I'm doing this. Yes, absolutely, just a finite number of corner directions. Well, what I really want is a notion of boundary and corners sitting inside a scale Banach space, and this is your model for that, the same way the obvious candidates even in finite dimensions have this shape, where this factor is R to the n, say, or you could replace it with a Banach space, that's fine. And then we allow this isomorphism so we can move things around. So in particular, this region here is now a partial quadrant. [Question:] W is just some other scale Banach space? Yes. [Question:] C is in E, and there exist a W and a T? Yes, yeah.
So C, like I said over here, and this notation seems to always be standard, is going to be some partial quadrant inside some scale Banach space, and we want it to be the image of the model one under a linear SC isomorphism. [Question:] So is a box allowed? With all its corners? OK, so first of all: not the box, but just a corner of the box is allowed; you can't have all four corners. [Question:] No, no, I understand. I'm asking the following: is a finite-dimensional box an M-polyfold? A box, yes. An octahedron? Probably not. So for instance, if I have a pyramid with four sides, that's not allowed, right? [Question:] Is there a theory for that? You could imagine one, but the current theory doesn't allow it. [Question:] Is there any reason for not allowing that, actually? I would say that it's non-generic: if I think of my faces and move them around generically, that's a non-generic position. [Comment from audience:] Actually, you could take any convex set with nonempty interior and set up the theory for that; spaces with boundary would then be modeled on the boundary of the convex set. You need the interior, you need a cone condition: enough directions so that the relevant corner data is determined. [Question:] But I'm curious, are there relatively trivial extensions of polyfolds allowed in those directions? Yeah, I might believe that. OK, so, observation: the SC calculus extends to partial quadrants, the same way differential calculus extends to manifolds with boundary and corners. This is not a surprising statement; just look at the definitions. And now I can make a slight generalization, so I can be less dishonest.
An SC-infinity retract, and this will replace our previous definition, is really a triple (O, C, E). Previously we didn't have the C here. So E is a scale-Banach space, C is a partial quadrant, and O here is once again the image of a retraction, but now U is a subset of C which is relatively open, meaning it's open in the subspace topology induced on C from E. Is the C sitting inside E here? Yes, sorry. And r is SC-smooth. In the definition of linear SC isomorphism, is it important that you wrote SC-zero instead of SC-infinity? It's linear, so it should be the case that SC-infinity follows from SC-zero. So why bother with this T at all and not just take the standard C all the time? Take the standard what? Why not just take the standard C all the time? The standard quadrant, you mean? Yeah, that's a good question for Helmut. I think it's useful to have this definition at your disposal when you need it, because otherwise you might come across a collection of functions where the easiest description you can write down isn't in the standard form; I think it's convenience. So I didn't get the question. The question is why we have bothered to define partial quadrants this way; why don't we just always work with the model partial quadrant? Because if you have a tangent space, there's a natural notion of a partial quadrant in the tangent space at a point, and there's no preferred isomorphism to the model one, okay? So maybe I didn't get it, so look: why have we defined C instead of working with the model partial quadrant? Yes, so for example, when I take the tangent space at a corner point, then there's a naturally defined partial quadrant in that tangent space. Yes, it is naturally defined, but... Is there something more you can say? Well, I'm not convinced either way yet, but I have some more to talk about, I guess, so.
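Written out, the generalized definition just stated reads (my notation for the retraction r and its domain U):

```latex
% SC^\infty retract with boundary and corners: a triple (O, C, E) where
% E is a scale-Banach space, C \subset E a partial quadrant,
% U \subset C relatively open, and r : U \to U an SC-smooth map with
r \circ r = r, \qquad O = r(U).
```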
And how do we know that taking finite k will be enough for applications? Because, conveniently, our moduli spaces should be finite-dimensional, and therefore so are their boundaries; I mean, the highest-order corner you could possibly have is finite. So the corners will always be of finite codimension; that's what we expect. No, no, no, they're not full-dimensional. But the corner structure is usually finite: in how many ways can the stuff break? And usually, if the energy is fixed, that number is finite in the models we care about. Yeah, usually. I mean, in order to have an infinite-dimensional corner, you would need an infinite level building, and I don't know of any standard analysis which covers that. Maybe the applications generalize; I'm sure it's not difficult to generalize, for some of them. Right, but that's the case when we make things holomorphic, right? When we make them holomorphic, there is a lower bound on energy. But since nothing is holomorphic now, we don't have any lower bound on energy, so potentially there could be infinitely many levels. Right, but what ends up happening is, you're looking at some big union of spaces B, so these are non-nodal curves, and then, I guess what I want here is some sort of fiber product, right? So in other words, you're looking at B, union B fiber product with itself, union B fiber product three times, and so on; you take this big long union, and no point in here is ever a point with an infinite number of levels. I mean, you could imagine that if you look at arbitrary maps, you can break them infinitely often, even if you have a finite energy window, you know? You could... But no one map has an infinite number of levels. I mean, you could think of... Oh, like one over n squared? No, no, no, look at the...
A sum of one over n squared. But you're trying to build an ambient space that just contains this stuff. So in the ambient space, there's no need to have more corners built in than your moduli space requires. But you could, of course, do so in certain cases. Okay, okay. So since this works, eventually, when we do the holomorphic case, this will work. Yeah, okay. Can I add a voice from the back? Please. So usually your space B is all things in one homology class. So if you define the energy as the symplectic area, you actually have finite energy on all of B. I mean, you could imagine, if you look at functions on R^n, you put in a lot of critical points going off to infinity; you could arrange it in such a way that the flow cannot escape but you have infinitely many critical points, so that at infinity the difference of the energies goes to zero, and then you could actually break things as often as you want. So that's possible, but for each of the spaces we fix, it wouldn't happen on the level of the gradient flow. So we have a polyfold discussion this afternoon, so I think Joel should wrap up. Right, which Nate has generously let me take about ten minutes of, I think. So let's see, what would I like to do? My notes are more detailed than what I'm going to write on the board here, but I'll draw some pictures just to present the idea. So what you want is to say, okay, if X is an M-polyfold... let's see, did I say this? Okay, I should say this: once you have SC retracts modeled more generally, then you just run through the same collection of definitions in terms of charts and atlases to get a new definition of M-polyfold which allows these as your local models. That's the honest definition of an M-polyfold, right?
So now we have boundary and corners, and it will be the case that in an M-polyfold you have corners of whatever order you like, not infinite, but any finite order you like. Okay, so then, suppose we have an M-polyfold. What we'd like to have is a measure of cornerness, if that's a word. So what do I mean by that? What we really want to do is define something Helmut will probably use a lot, called the degeneracy index, which you want to be a map, I think we'll just say d_X, from your M-polyfold into the natural numbers including zero. And how do you define this? It's not too difficult. Since M-polyfolds are effectively defined through their local charts and transition maps, let's just define it for the model case. And in the model case, here's a nice partial quadrant, and if you want, you can direct-sum it with another scale-Banach space. The points in here that are interior points are where the degeneracy index should be zero; these points here have degeneracy index one; and these are points where your degeneracy index is two. And once you have it in this local model, and you see how this generalizes, you can do it for partial quadrants; you make that definition, and then you can define it in a chart. So in the notes, for instance, you'll see a degeneracy index defined like this, right? Some natural number.
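In the model case the index really is just a count: in the quadrant [0, ∞)^k direct-sum R^(n-k), the degeneracy index of a point is the number of its first k coordinates that vanish. A minimal sketch (the function name and tolerance handling are my own, not HWZ notation):

```python
def degeneracy_index(x, k, tol=1e-12):
    """Degeneracy index of a point x in the model partial quadrant
    [0, infinity)^k x R^(n-k): the number of the first k coordinates
    that vanish.  d = 0 means an interior point, d = 1 a boundary
    face, d = 2 a codimension-two corner, and so on."""
    return sum(1 for xi in x[:k] if abs(xi) < tol)

# Interior point of [0, inf)^2: index 0
print(degeneracy_index([1.0, 2.0], k=2))   # 0
# On one face: index 1
print(degeneracy_index([0.0, 2.0], k=2))   # 1
# At the corner: index 2
print(degeneracy_index([0.0, 0.0], k=2))   # 2
```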
And then from the charts you want to define it for an M-polyfold, and there's just this little trick here, because a priori you can write down local models where the degeneracy index is different depending upon the local model. So you take the minimum over all your possible charts. And this allows you to define the degeneracy index, measuring which stratum of the corner structure you're in, right? And after one defines this, it's important to state the following proposition. If X and Y are M-polyfolds, U is contained in X, some open set, V is contained in Y, open, and f maps U to V and is an SC... does it have to be smooth? Let's say it's an SC-smooth diffeomorphism; then the degeneracy index on X of a point x equals the degeneracy index on Y of f(x). Here we're assuming that x is in U and f(x) is in V. So the conclusion is that SC-smooth diffeomorphisms of M-polyfolds preserve the corner strata, which is exactly what you'd expect from finite-dimensional manifolds with boundary and corners. I also want to say that you can take X equal to C itself, with the obvious index you'd define for C. I'm sorry, I missed what you said. The partial quadrant: you can view an open subset of C as an M-polyfold. Yes. But when you look at the corner in this picture, the naive assignment is just some definition; it's not necessarily this definition, because you took a minimum over all possible charts. So you're saying that in every chart, at that corner, you recognize that d is two; you're always going to be in the corner. But do you really have to take a minimum? Yeah, because you could have a retract which is this line going like this. Then the naive definition would be that the degeneracy of that corner point is two, with respect to this chart.
But the line itself can be the image of a retract; a retract need not be in general position. So you can have a bad retract here. Yeah. Because you can take a big space, put that thing somewhere, and retract onto it. Yeah, this would retract onto that. So the point is, you might have some set like this, and you have a retract sitting inside here, and the image of the retract, even in this two-dimensional case, might be this line. And then you would ask, well, what's the degeneracy index here? Well, d should be zero, of course. And what's the degeneracy index here? Is d equal to one, or is d equal to two? In this model, it sits in an ambient space where it looks like it should be two, but if you take a different local model, you can find one where it's one. All right, and that's why you need this minimum. Any other questions? How do you know that you actually have the... So, like I said, you told me that I can make all kinds of weird spaces as these SC retracts. Maybe I was able to also make the corner of the quadrant, I mean, the quadrant itself, appear already. How do you know that's not possible with the old definition? How do you know that I can't make something like an M-polyfold with boundary and corners using only the old definition, which doesn't allow for boundary and corner structure? I mean, presumably you have to know that, otherwise the degeneracy index just breaks. No, yeah, so... Well, you cannot do better than for the real quadrant: if you look at the corner, you cannot make the corner structure better. That's a theorem. Yeah, can you say anything about the proof of that theorem? Well, I'll ask later. I think, yeah... I started late, so I'm only two minutes over based on when I started. But yes, I can go ahead and stop here.
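The diagonal example just discussed can be made concrete: the ray A = {(t, t) : t >= 0} sits inside the quadrant [0, infinity)^2, and the naive corner count at its endpoint depends on the chart, which is exactly why the minimum is taken. A toy sketch (the chart representations and names are mine):

```python
def naive_index(coords, k, tol=1e-12):
    """Chart-wise count of vanishing coordinates among the first k."""
    return sum(1 for c in coords[:k] if abs(c) < tol)

# The endpoint (0,0) of the ray A = {(t,t): t >= 0}, seen in two charts:
ambient_chart = ([0.0, 0.0], 2)   # A inside [0,inf)^2: both coordinates vanish
intrinsic_chart = ([0.0], 1)      # A parametrized by t in [0,inf): one vanishes

indices = [naive_index(c, k) for c, k in (ambient_chart, intrinsic_chart)]
d = min(indices)    # degeneracy index: the minimum over all charts
print(indices, d)   # [2, 1] 1
```

So the "bad" ambient chart reports a corner of order two, the intrinsic chart reports order one, and the minimum correctly records that the point is only an order-one boundary point of A.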
Well, I mean, you could, if you want, if you have a final question for me. Yeah, so I guess the last thing I wanted to state about this is that finite-dimensional manifolds with boundary and corners have this nice property; here's a toy example, right? If you look along the boundary of your finite-dimensional manifold with boundary and corners, you can do the following: you can look at the set of all points of degeneracy index one, and then you can take the closure of this. And the closure is, again, a manifold with boundary and corner structure. You can see that from this picture: it's just this segment here. That's a property that finite-dimensional manifolds with boundary and corners have. You'd like to lift that up to M-polyfolds as well, and that can be done, but there's an additional condition that one needs to place on the retractions that define your local models. This is called the taming condition. It's in the lecture notes; I'm not going to present it here. The key thing to know is that it guarantees for you this nice property: you look at the closures of your faces, the degeneracy-index-one portions, and these are again M-polyfolds with boundary and corners. That's one thing. The other thing to remember is that essentially all retractions that occur in practice are of a special type; they're splicing-type retractions, and in that case the additional taming condition is automatically satisfied. So it really ends up being something you don't have to check too much; I mean, it should be completely straightforward. And then you have this additional property. Okay, and that's where I'll finish. Thank you for letting me run over. Are there any more questions? When should people read the lecture notes by?
I assume it's done already, right? The lecture notes are online; read them at your leisure. And of course, if you have any questions, you can ask. And I will be speaking briefly in Nate's discussion. I'm sorry for that, but I have to make sure enough stuff is done for Katrin to start tomorrow. So yeah, says Katrin. Any other questions? If I take, I suppose, an SC manifold, and I look at the... You want, sorry, I hate to interrupt, but SC manifold, or do you want M-polyfold? Because there is a... okay, thank you. And suppose I take the algebra of SC-smooth functions. Okay, continue. Does that determine the M-polyfold? I think, logically, that puts us in the world of a very large class of spaces. No, I mean, this is like Gelfand's theorem; what would a Gelfand-type theorem say here? Okay, so I think there are a lot of spaces which don't even have many smooth functions. I think this is a good topic for tea time. Let's thank Joel again. So Nate was very generous and let me talk for a few additional minutes, just to finish up the stuff that I wanted to make sure I had presented before Katrin speaks tomorrow. And as my voice is just about to give out, I think that's the universe saying it's really time for me to stop talking, so I'll try to be as quick as possible. So, the main thing; we'll start over here. Strangely, that's just how things got laid out, so it goes one, two, three, four, five. The main thing that I want to tell you about is the statement of this theorem. What I've spoken about mostly so far is M-polyfolds, which I said are like a generalization of manifolds. And it's a generalization in such a way that it's going to have boundary and corners, and it has, both topologically and differentially, enough structure that it will contain the compactified moduli space as a subset, with the Gromov topology induced from the ambient scale topology, or base topology even.
Great, so now you have M-polyfolds, but of course if we want to study pseudoholomorphic curves, say, or other moduli problems, we want these to arise as zero sets of a nonlinear Fredholm section. And so that means we need some notion of a bundle. The main theorem that I wanted to point out here is this implicit function theorem. This is the first step, right? What would you expect a finite-dimensional version to say? The finite-dimensional version, without boundary and corners, would say: I have a finite-dimensional manifold, I've got a nice smooth bundle over the top of it, I take a section of that bundle, and assuming that on the zero set the linearization of the principal part of that section is surjective, then the implicit function theorem guarantees that the zero set is in fact a manifold. And if the base of your bundle is a manifold with boundary and corners, then you'd like the corresponding solution set, or zero set, to also have boundary and corners. That's essentially what this theorem says; from the top of this board down to this line here, that's essentially the M-polyfold version. Yes. Is x equal to y? No, Y is the bundle and X is the... There's a little x and a little y. Little x and little y. The solution set contains little x; this is capital X, I mean, I could rename this, well, X if you like. Is that the issue? But X was a specific set here, after we made it the zero set. Oh, it's that x and y look too much alike. It's okay; it's at every point x in the solution set, at every point x you want surjectivity. Okay, thank you. Right, so the finite-dimensional version of what you would expect for this sort of implicit function theorem is exactly what this says. But I've underlined all the words that at this point in the course of lectures we don't know yet, so I wanted to go over those somewhat quickly.
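In symbols, the statement being described is roughly the following (my paraphrase of the board; the precise hypotheses are in the lecture notes):

```latex
% M-polyfold implicit function theorem (rough form):
% p : Y \to X a tame strong bundle over an M-polyfold X,
% f : X \to Y an SC-Fredholm section such that at every x \in f^{-1}(0)
% the linearization f'(x) is surjective, with kernel in good position;
% then
f^{-1}(0) \subset X
% is a sub-M-polyfold, and in fact a finite-dimensional manifold
% with boundary and corners.
```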
And some of them I can go over quickly and some of them I can't. So for instance, a big one here is SC Fredholm section. That requires an entire lecture, but that's the point of tomorrow's lecture by Katrin; she'll be introducing SC Fredholm sections. Then there's tame strong bundle, which also takes a minute, and that's essentially what takes up the bottom two boards here. So I want to pause on that in just a second; the remaining two pieces are good position and, where is it, sub-M-polyfold. So let me go ahead and drag this top board down just so I can point a little more easily. Good position has a precise meaning. It is in the lecture notes, as are essentially these pictures. The point is that in order to guarantee that the submanifold that gets cut out has a nice boundary and corner structure, its behavior, before you've shown it has that structure, needs to interact nicely with the boundary and corners of the ambient space. General position would be, somehow, that it intersects transversely; that's the ideal, that's what you would expect. I should probably mention something about HWZ's terminology. When they say general, I think generic, right? The general case, in my opinion, covers all possible cases, but the generic one is the one that happens most often, though not necessarily always. And when HWZ say good, what they really mean is good enough, right? Because, in general, I would say that this position is better than whatever I've drawn right here, which I'll explain in a second.
But I mean good enough because it's good enough to get the result that you want, which is that your submanifold has a nice boundary and corner structure induced from the ambient boundary and corner structure on the M-polyfold, say for instance. Right, so that's terminology. So here's general position, and what does good position mean? Well, basically it means that you have a tangent plane, and it's a little trickier because, of course, I've drawn everything with the subspaces being one-dimensional, and it's a little trickier to make these statements precisely, especially since your ambient space is not finite-dimensional. But the point is, you want it to be the case that you have your tangent plane, passing through the origin, and you can wobble it a little bit, right? Take an open neighborhood of nearby planes, and you want that open neighborhood, or the corresponding rays if you will, to stay in the interior of your ambient partial quadrant. That's what this line right here is supposed to represent, this finite-dimensional subspace, and it's good because I can wobble it, right? Keeping everything passing through the origin, all the corresponding planes still pass through the interior here. That would not be the case here. This is a not-good case, where, say, this ray travels precisely along the boundary, because now you wobble a little bit and suddenly it's gone, right? Suddenly it's outside the partial quadrant. I see, so in your picture, though, it's not a two-dimensional cone coming into this two-dimensional quadrant; it's a one-dimensional thing, the red thing, and the two-dimensional triangle is the wobble. Yes, oh yeah, I think that's right.
Right, so when you say it's not good, it's because that one-dimensional thing is running along the edge. Absolutely, absolutely. And then of course here's something particularly not good, which apparently happens in the theory sometimes and has to be dealt with: you have some line which passes like this, just through this point, and you can see that no small perturbation of this line brings you into the interior. So this is just a picture definition; the precise definitions are in the notes, but I wanted to make sure that you're at least aware of something like this. Oh, and then a sub-M-polyfold. I can read this. Let X be an M-polyfold; we're thinking of this as an ambient space. A is a subset in there, and we want to know when A is a sub-M-polyfold, right? Well, that's the case if for every point in A you can find an open neighborhood V around it and an SC-infinity retraction r taking that open neighborhood to itself, so that the image of r is A intersected with that neighborhood. I mean... I like your retraction, handsome. Of course; it just retracts right down. It's much easier. So, right, if you think about this definition for just a moment, you should hopefully convince yourself that this is the only possible definition it could be. In some sense, replace X with a scale-Banach space and you essentially get, I mean you nearly get, the definition of a retraction in the model case. So all we're really doing is saying that you can also have retractions in M-polyfolds, and that retractions in M-polyfolds can cut out sub-M-polyfolds. And that's good, because the theorem tells you that the solution set that you get in your M-polyfold is a sub-M-polyfold, and then the bottom end of the theorem guarantees that that sub-M-polyfold is essentially just a finite-dimensional manifold with boundary and corners.
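The definition just read can be illustrated in a finite-dimensional toy (everything here is my illustration, not from the lecture): orthogonal projection onto a line is an idempotent smooth map whose image is the would-be sub-object intersected with the neighborhood.

```python
def r(v):
    """Toy retraction of V = R^2 onto the x-axis A = {(x, 0)}:
    it is smooth (here: linear), idempotent (r o r = r), and its
    image is A intersected with V, as the definition requires."""
    x, y = v
    return (x, 0.0)

v = (3.0, 4.0)
assert r(r(v)) == r(v)   # idempotent: r is a retraction
print(r(v))              # (3.0, 0.0)
```

In the M-polyfold setting the only change is that r is required to be SC-smooth and defined chart-wise; the algebra (idempotence, image equals A intersect V) is the same.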
Any questions about this? So having a retraction on your M-polyfold means that locally, in the charts, it is a retraction. Right; so for a sub-M-polyfold, because of the definition, it must be the case that you can find a local model in which it's actually an M-polyfold as well. No, but I mean, when you define your charts, if I'm not wrong, your retraction is defined on an open set. Right. But you should see that as long as you have something which supports the SC calculus, as long as you have a domain which supports the SC calculus, then you can have a notion of a scale retraction, right? It's just a one-step generalization from that. Good. So the last thing that I haven't explained, and need to, is the notion of a tame strong bundle. The tameness I'm not really going to talk about too much, although it is discussed in the lecture notes; it's basically additional conditions on your M-polyfolds to make sure they have a nice boundary and corner type structure. What I do want to talk about is this term strong bundle, which the first time you see it might seem a little strange. So I want to make you aware of something, so that when you see the definition, it doesn't seem like, what the heck is going on here, right? And to do that, I'll review some things that hopefully we all recall from the classical theory. The classical theory says a linear Fredholm operator, I should have said linear, is a linear map between Banach spaces with closed image and finite-dimensional kernel and cokernel. So this is fine. And it's the case that Fredholm operators are stable under compact perturbation. I've written what I mean by this: you can add on compact perturbations, you still stay Fredholm, and it doesn't change the index.
So then if you have a classically smooth nonlinear map, we say it's Fredholm provided that the linearization of the map at any given point is Fredholm. Again, this is classical. And so then we come down to this point here. Let's not worry about domains too much; I don't want to deal with the topology of my domains, I want to keep this as simple as possible. In that case we see that the Cauchy-Riemann operator is Fredholm as a map from something which takes maps of regularity k plus one to maps of regularity k, sections of pullback bundles and so forth. But the main thing I want to focus on is the fact that this goes from regularity k plus one down to regularity k. Now, if we see that, and it's true for any k we choose, then the natural thing is to try to fit this into the SC structure, the SC-type calculus. So even in toy cases, we think of the Cauchy-Riemann operator as acting from one scale space to another, where in the source the k-th level has regularity k plus one, and in the target the k-th level has regularity k. That's just this statement here, translated into the language of the SC calculus. So that's not surprising, but then something a little strange happens, which is: well, what are our compact perturbations? That's what we had right here in the classical sense. And now we remember, from scale-Banach spaces, that the higher levels embed compactly into the lower levels. And as a consequence, if we're going to perturb by a compact operator, we expect it to be a map from one scale-Banach space to the other scale-Banach space, except shifted by one: it moves one level up.
And if you write this down and look back at this case, that just means we're tacking on a perturbation which maps H-k-plus-one to H-k-plus-one, right? So you have a differential operator which drops you down a level, and a compact perturbation is just adding on a lower-order term. So that's not so surprising. But then there's one last complication, which in some sense is a notational complication that you just have to push through and carry with you: the reality is that in our applications, the Cauchy-Riemann operator isn't just a map between scale-Banach spaces. We really have to think of it as a section. Consequently we have to think of it, locally at least, as a section from E into E plus F; this is our total bundle. And if this has regularity H-k-plus-one, then this has to have regularity H-k, which is essentially what I've written right here. And then you have your perturbation, but your perturbation lifts you one level up in the target. So I've written that down here: your perturbing sections have to go from E-k to E-k plus F-k-plus-one. And this is frustrating. It's frustrating because, in local coordinates, in a local chart, it's irrelevant; you say, okay, this is easy enough to do. But the problem is that you then have to build a bundle. And what that means is that you essentially have a bundle with two different structures: you have this structure that you have to keep track of, and you have this structure that you have to keep track of. And the fact that you essentially have two scale structures to keep track of, in order to have a good notion of perturbations together with your differential operator, motivates the word strong in strong bundle.
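The level bookkeeping just described, written out in one place (my notation, consistent with the boards):

```latex
% Cauchy-Riemann operator between scale spaces, at level k:
\bar\partial : E_k = H^{k+1} \longrightarrow F_k = H^{k},
% while an admissible (compact) perturbation gains a level in the target:
s : E_k = H^{k+1} \longrightarrow F_{k+1} = H^{k+1}.
% So the section \bar\partial respects the diagonal filtration (k \mapsto k),
% and perturbations respect the shifted one (k \mapsto k+1): two scale
% structures on the same bundle, hence "strong".
```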
So anywhere in the HWZ literature, any time you see the word strong, that means you have two scale topologies, two scale structures, that you have to pay attention to, essentially because you're dealing with bundles. Does that make sense? This is in order to have a notion of compact perturbations. Yes. Doesn't it automatically have two structures? I mean, if you have F-k sitting over E-k or something in the bundle, then you can ask whether F-k-plus-one sits inside F-k. In practice this is actually always apparent; once you see one example, you know it. But on the abstract level, you have to formalize it; that's the point. So, if you couldn't hear Helmut: just to highlight something about the polyfold theory, on one hand it's built on this totally abstract level, and on the other hand, we make sure it contains all the theories that we want it to. And the downside is that in practice, when you actually write things down, you could probably get away with not keeping track of these two structures, because they would be fairly apparent. But in order to abstractify that, and then have this abstract theory in which everything fits, you need this notion of two different scale structures on your bundle. Please. So, just as a question: can you always get a strong bundle structure on a bundle, where the strongness comes from this shifting by one, like in this example? I, um... I think that would be too much to ask. You can't expect the transition maps to preserve both scale structures; that's the difficulty, the quality of the transition maps. They're not, comparably, that nice.
For instance, you might say, well, maybe you're lucky, and you've got a scale structure here and a scale structure here; so maybe instead of taking the usual diagonal filtration, we take a double filtration, keeping track of both simultaneously. But then you write down a transition map, and you see that you get some of that, but not all of it, right? There's this unfortunate fact that if I consider a map of classical regularity C-k, it doesn't make sense to talk about a vector field along that map of regularity k plus two or something, right? I mean, you can't; that doesn't make sense, and it doesn't hold up in transition maps. Consequently, in your transition maps you can't keep track of the full double filtration, and if you look at what's the best you can possibly do in applications, well, in general the best you can do is keep track of precisely these two filtrations. And from this, you get a partial double filtration, which is the way HWZ do it. So in the end, what ends up happening is you end up with local charts written like this, where we use this left triangle to denote the fact that, well, HWZ will tell you that you have a double filtration with indices m and k, where m is... I'm going to forget these; is this right, Helmut? Damn it. Right, yes, yes, of course. So you get this double filtration, but it doesn't extend all the way up. And again, the reason is essentially that it doesn't make sense to talk about vector fields which have much higher regularity than the map along which they're defined. That's essentially what this problem boils down to.
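For completeness, the partial double filtration alluded to can be recorded as follows (reconstructed from the discussion; the index convention is the one used in the HWZ notes):

```latex
% Strong bundle E \vartriangleleft F: a bundle carrying the bi-filtration
(E \vartriangleleft F)_{m,k}, \qquad m \ge 0, \quad 0 \le k \le m+1,
% i.e. base regularity m and fiber regularity k, where the fiber
% regularity may exceed the base regularity by at most one. The two
% "diagonal" scale structures are k = m and k = m+1.
```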
But also, simultaneously, you need to have a canonical class of compact perturbations in which to do the theory. So, yes. So this is for the implicit function theorem. Right. And this tame, strong bundle condition is a condition ensuring that we have compact perturbations. Yes. So how is it related? We're not going to perturb, right? It's already perturbed so that it's transverse. For today, yes. Tomorrow, no. So tomorrow, you will need to have a perturbation. And the point of me presenting this is that I have to tell you what this strong bundle is, precisely so that you have a space in which you can make perturbations. So yes, precisely so you can make perturbations. For this statement, I suppose I could remove the strong condition and work with something weaker. I mean, you could probably work through the implicit function theorem; it's not clear to me where that would be a problem. But how do you define Fredholm? I think it enters in one respect: Fredholm actually requires that, when you bring it into normal form, you use the strong bundle property. On the target. Yeah, that's right. Okay, yeah. So inherent in the definition of an sc-Fredholm section is the notion of a strong bundle. It's inherently built into it, which you haven't seen the definition of yet, so I'm just being vague at the moment. But I guess the one thing that I wanted to clarify here, which it doesn't seem I was successful at, is that when you see strong bundles, they're going to have this weird new symbol, where you're thinking of this as being the total space over the base E. And everywhere you see strong, it's essentially because you have to keep track of two different sc-structures. And the need for having these two different sc-structures is the fact that you have both the Fredholm section and you need to have a compact perturbation.
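For concreteness on what "canonical class of compact perturbations" means here (standard HWZ terminology, stated as background rather than as part of the talk): the perturbations are the sc⁺-sections, i.e., sections taking values in the shifted filtration:

```latex
% An sc-smooth section s of a strong bundle p : W -> E is an
% sc^+-section if it gains one level of fiber regularity:
s(E_m) \;\subset\; W_{m,m+1} \qquad \text{for all } m \ge 0.
```

Because the inclusion of level m+1 into level m is compact in the sc-setting, such sections behave like compact perturbations, and perturbing an sc-Fredholm section f to f + s preserves the Fredholm property.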
Hopefully I can get that message across; if nothing else, well, better luck next time. It's a little bit stronger, because the Fredholm problem, as I said, involves this and constrains the flexibility of your coordinate changes. So Fredholm is defined by looking sufficiently nice in certain coordinates. And since on the target you can only use things which preserve the double filtration, the class of allowed coordinate changes is actually quite restricted. So it enters there. And then, concerning your question, having these perturbations comes in when you lose surjectivity. Yeah. But if you have surjectivity, I mean, it seems like you were done. Even with surjectivity, the Fredholm property involves this double filtration in its definition. Yeah. And if you don't have surjectivity, then the perturbation lets you achieve surjectivity generically. I'm a little confused. So if you have a bundle, then you have a total space and you have a base. And if you have one filtration on the total space and one filtration on the base, why isn't that sufficient to recover the other filtration in the fiber direction? I think this was essentially your question earlier, right, Dusa? And I think the answer is that if you write everything down in terms of the applications that you want to study, the problem becomes much more concrete. Why is it not formally possible? Because you have coordinate changes. In one local chart, you have E_m and F_k. And the coordinate changes are, say, C^k. But the fiber regularity might be, say, C^{k-1}, or C^k, or C^{k+1}; that's why k is only allowed to go up to m+1. And a C^k coordinate change preserves C^k things at the bottom; basically, C^k things are all that C^k coordinate changes preserve. So the tangent bundle is not a strong bundle, for instance, right? Yeah.
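The classical bookkeeping behind that closing remark can be sketched as follows (a sketch of the standard regularity-loss argument, not a quote from the talk): a C^k coordinate change differentiates to something only C^{k-1}, so the induced tangent transition map cannot push fiber regularity one level above the base the way a strong bundle requires:

```latex
% If \phi : U -> V is a coordinate change of class C^k, the induced
% transition map on the tangent bundle is
T\phi(x, v) \;=\; \bigl(\phi(x),\, d\phi(x)\,v\bigr),
% and the map x \mapsto d\phi(x) is only of class C^{k-1}.
% So T\phi cannot preserve a fiber filtration running one level
% above the base filtration, which is why the tangent bundle of an
% sc-manifold does not inherit a strong bundle structure for free.
```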
Okay, so I went longer than I wanted to, and I really wanted to hand things over to Nate, so I'm sorry for running over.