Yeah, so the plan had been for Helmut to lead a discussion, but that's been shifted to Wednesday, because the organizers felt we should make sure everyone is on the same page from last week, so that people aren't just getting nothing out of this week of polyfolds. So the main point of this session is to take questions from you, but I thought I'd start off by recalling the definition of polyfold Fredholmness, mostly as a way of putting a dictionary on the board between some polyfold concepts and classical concepts. OK, so here's the definition of polyfold Fredholmness, and I'll underline the polyfold-specific terminology. An sc-smooth section f of a strong bundle Y over X, where X is a tame M-polyfold, is sc-Fredholm if the following conditions hold. The first condition is that f is regularizing, and the second is that it has a filled version which, up to an sc+ perturbation, is of the following form. Actually, to be precise: at each smooth point it has a filled version, because the filled version can be rather different at different points. Yes, thank you, Helmut. So let's say: at each smooth point, or near each smooth point. Gosh, OK, yes, thank you — it's the German manner of precision; I'm trying to give a slightly imprecise version of this, so I'm sorry. We'll get to it — I have a picture on the next slide. OK, so the filled version, up to an sc+ perturbation, is of the following form. Our bundle is going to look locally like this: W is some sc-Banach space, the bundle is locally trivial, the base is R^n ⊕ W, and the fiber is R^N ⊕ W. And the section is supposed to have principal part some map g, and on the right board I'll tell you what property g has to satisfy — this is the meat of sc-Fredholmness. The property is: take g, subtract off the value of g at the point we're centering at, post-compose with the projection onto the W factor of the fiber, and apply this to a point (v, u), where v lives in R^n and u lives in W. Then it's of the form u − B(v, u) — sorry, let me switch the order of u and v, so v, the variable living in the finite-dimensional space, comes first — where you can think of B as a family of contraction mappings parametrized by v. And specifically what I mean by that is that for any choice of m and epsilon, the following inequality holds for v, u, and u′ close to 0, where the notion of close to 0 may depend on the choice of m and epsilon. So it's this ginormous definition, but the point of writing it on the board is not for you to completely understand it right this second. The point is to remind you of these words, which were introduced last week — let's see if I can get all of them: tame, M-polyfold, regularizing, filled, sc+. I think that's everything. And now I'll recall for you the dictionary between those terms and concepts you're used to. Yes? You don't have a boundary appearing here — is this stated in the boundaryless context? Yes, this is stated in the boundaryless context; I should have said so. There is a version of this with boundary, but so that we don't have too many concepts at the same time, let's assume that there's no boundary.
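Written out in symbols — with the conventions just described, and modulo signs and orderings — the local normal form and the contraction property read roughly as follows, where \pi_W denotes the projection of the fiber R^N ⊕ W onto W:
\pi_W\left( g(v,u) - g(0,0) \right) = u - B(v,u),
and, for every m \ge 0 and every \varepsilon > 0,
\| B(v,u) - B(v,u') \|_m \le \varepsilon \, \| u - u' \|_m
for all v, u, u' sufficiently close to 0, where how close is allowed to depend on m and \varepsilon.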
Nick, can I ask you about the inequality on B? The element B(v, u) is inside W, which sits inside R^N ⊕ W — so should the norm have an m attached to it? Yes. Yes, it should. And the quantifier attached to the epsilon — do you really mean for every single epsilon, as epsilon goes to zero? Yes, but it will be true on smaller and smaller neighborhoods of zero. When you have this scale structure, as you go higher up, it just takes smaller and smaller neighborhoods, so it's really some kind of germ condition. So the closeness is allowed to depend on epsilon and on m. Yes. Otherwise there would be no applications for this — yeah, well, that's what I was going to say; I was just trying to understand what it said. Yeah, and if there's time at the end I can motivate this, because there's a similar property satisfied by classical Fredholm maps. In that context you can think of B as something whose differential vanishes, and that's why, in that case, the inequality is satisfied on small balls, where the smallness depends on epsilon. Yeah, I think the point is that in classical Fredholm theory you could give this as an alternative definition, and if you take that point of view you see that this has to be the definition. Thanks. OK, before I move on to the dictionary, any other questions about this statement, or complaints? OK, dictionary. All right, so here's the scale setting, and here's the classical version. The first word is sc+; when you see sc+, or more generally when you see sc^k, you should think of C^∞ and C^k. When you see the words strong bundle, you should think of a bundle where the notion of compact perturbation makes sense. OK, let me not put tame into this dictionary. When you see M-polyfold, you should think of a Banach manifold — no orbifolds anywhere in this. When you see an sc+ section, you should think of a compact perturbation. And the only thing I left off is filled, because it's not exactly a classical notion. Let me recall what filled means: a priori, this section f isn't defined on an open subset of a sc-Banach space; it's defined on a retract sitting inside such a thing, which is not such a nice space as we're used to thinking of spaces — the dimension can vary locally, and so forth. Therefore, in order to have a meaningful Fredholm theory, you have to beef it up to a map that actually goes between open subsets of sc-Banach spaces, and that's what this filled section is. It really — yes, question? Oh, yes, I should put that in here: regularizing means you should think of elliptic regularity. So if our section comes from the del-bar operator, the regularizing property comes from the elliptic regularity satisfied by that del-bar operator. Could you say something about tame, if you're going to put it in the picture? Tame — yeah. So I don't know if I remember the second condition of tameness, but the first condition says: you look at your retraction, say r, which goes from X to X, is sc-smooth, and is a retraction. Then the first condition of tameness is that the degeneracy index of r(x) is equal to the degeneracy index of x, for all x in X. So I don't really want to get into this, because I think it's a bit more technical than the rest of the stuff on the board.
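So, collected in one place, the dictionary on the board is roughly the following, and below it is the first tameness condition in symbols, with d denoting the degeneracy index:
sc^k, sc^infinity   <->   C^k, C^infinity
M-polyfold          <->   Banach manifold
strong bundle       <->   bundle in which "compact perturbation" makes sense
sc^+ section        <->   compact perturbation
regularizing        <->   elliptic regularity
filled              <->   no classical counterpart (extends f from the retract to an open set)
d(r(x)) = d(x) \quad \text{for all } x \in X.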
Have I stated that first condition correctly? And what's the second condition? At a point of the retract, the tangent space of the retract has a complement inside the reduced tangent space of the ambient partial quadrant — which, in this case, since there's no boundary, you don't see. Yeah, right, yes. So at the last minute I changed this theorem to the boundaryless setting, and therefore I could have erased the tameness hypothesis. The thing is, when you have a retract and the ambient space has a boundary, you could have a lot of retracts which don't reflect the boundary and corner structure of the ambient space at all. So you want the retracts to reflect that there actually is a boundary, in this corner sense — you have to force it. So, for example, if you take a quadrant and retract it onto the diagonal, that's not a tame retract, because when you retract onto it, near zero you have problems: you retract onto a point that should be a boundary point. OK, so the picture you're thinking of is: this is your quadrant C, and you retract onto the diagonal. Near zero — I mean, if you take an open neighborhood around zero, then it contains points that get retracted there. Right, so the point is that if you look at this point right here, the corner, then the degeneracy index on the retract is one, but the degeneracy index in the ambient space C is two, I think. But then nevertheless this line has an induced structure as a half-line, which doesn't come from the ambient structure in the right way. OK, so I suggest that we leave it there. Great. So we have this dictionary, just to bring everyone back up to speed. And the last thing I want to say before I move on to whatever questions you have is the most basic reason — the real reason — for using this wonky definition of Fredholmness. In the classical setting, the reason that all of the theorems you're used to from finite dimensions carry over to the Banach setting for Fredholm maps is that Fredholm maps satisfy this contraction normal form, and therefore things like the inverse function theorem hold. You want to use that in the polyfold setting, but there are problems with the levels, so that if you just assumed that your linearized operators are all Fredholm, the theorems you want to be true — like the implicit function theorem — would not be true. Therefore we build this normal form directly into the definition. Like I said, I'll come back to this at the end if we have time, but I want to stop talking about the Fredholm property now. Can I ask another question? I think someone asked already: when you say close to zero, do you mean that there is a fixed open set which doesn't depend on the level? It does depend on the level. So for every m and epsilon, there's an open set so that the inequality holds on that open set. Because if it didn't depend on epsilon — if it was the same fixed set for every epsilon — then you could put a zero in there, and the condition would be far too strong. So clearly — you passed it to me, sorry. Sure. OK. So let me open up the floor for questions, which I will either answer or deflect. So I think there are a lot of questions that will come. The point of being a regularizing sc-Fredholm section is that it cannot make too rapid moves locally — it's a regularization property which constrains how rapidly things can move.
Whereas if you have a general sc-smooth section — first of all, for a general sc-smooth section, the linearization usually does not depend continuously, as an operator, on the point at which you take the linearization. So that's one of the features: when you are near nodes or near broken orbits, there's something rapidly changing which makes the linearized operators usually not continuous as operators. So the linearizations can move very rapidly depending on which direction you go, and that kills the implicit function theorem in general, unless you have a taming device saying you have a little bit more regularity, which is this normal form — this is the extra condition. If you have this condition, then it turns out you have an implicit function theorem in the usual way: if you linearize and it's onto, then nearby you have a solution manifold. So that means you have nearby a solution space which actually gets, from the ambient space, a structure which turns it into an honest smooth manifold. Can we actually have that written down? Yes. What's the implicit function theorem in general? Yeah, sure. Presumably the operators have indices and an index, right? Presumably, yes, that's right. Whoops, I should — gosh — all right. So if you want to make some money, challenge me to shuffleboard this week. Right, so here's the implicit function theorem in the polyfold setting. OK, so let's say that Y over X is a tame, strong bundle. And let me just point out, before Helmut does, that this is the boundaryless setting, but for some reason I put the word tame there — I suppose it's cut and paste. All right, so it's a tame, strong bundle: a strong bundle over X, an M-polyfold with no boundary. OK. And f is a sc-Fredholm section with the property that all of the linearizations are onto — so, such that for every x in the zero set... No, no, you only need it at one point. OK, so — you want me to give the version at a point, with a solution f(x) = 0? OK, I can attempt to; you can correct me. So let's see. So let's say that at this particular point x_0 the linearization is surjective — just a second — and this is supposed to go from the tangent space of X at x_0 to the fiber over x_0. OK, so then the theorem is that there is going to exist an open set — here's where I might make a mistake with the topology — there exists U, an open set in, let's say, the 0th level of X, containing x_0, with the property that if we look at the zero set of f intersected with this open set, we get a finite-dimensional sub-M-polyfold. And then it's a theorem in HWZ's papers that it automatically inherits the structure of a finite-dimensional C^∞ manifold. So it's really stronger than sub-M-polyfold: it's a sub-M-polyfold which is so good that it also has a C^∞ structure. Is x_0 a smooth point? Yeah, by the regularizing property it's a smooth point, because it maps to 0. OK. Oh, I'm sorry — so you're saying that — yeah, so the sub-M-polyfold actually carries rather strong structure. But it's correct. So if you put it together: it's a sub-M-polyfold which, in addition, has a smooth manifold structure on the solution set. That's good. And the linearization at any other solution nearby is also onto, in a natural way. Yes, which is how you're going to prove it — surjectivity is an open condition along the solution set.
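So, written out compactly, the statement on the board is roughly the following (boundaryless case, notation as above): let Y → X be a strong bundle over an M-polyfold X without boundary, let f be a sc-Fredholm section of it, and let x_0 \in f^{-1}(0) be a point at which f'(x_0) : T_{x_0}X \to Y_{x_0} is surjective. Then there exists an open neighborhood U of x_0 (open in the level-0 topology) such that f^{-1}(0) \cap U is a finite-dimensional sub-M-polyfold of X which, in a natural way, carries the structure of a finite-dimensional C^\infty manifold.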
Yeah, so there's a small open neighborhood on which the full solution set carries the structure of a smooth manifold in the classical sense, and the linearization at every solution in it is surjective as well. Yeah, so let me add that point: for all other x in U intersected with the solution set, f'(x) is onto. Oh, yeah — here's a good exercise which you can write down, for everybody here: if you have a retraction r from U to U which is sc-smooth as a map from U into U with the index lifted by 1, then the image has a natural smooth manifold structure. OK. So, just to give you some idea — if you find a good proof of this, I would be interested. I only remember that when someone wanted to prove it, at some point he started to throw stuff around, and he wouldn't stop. But it's just the implicit function theorem — some honest implicit function theorem. So tell me if this is what you just said: r is a retraction, but as a map from U into U^1, it's sc^∞? But U is sitting inside some sc-Banach space here? Yeah. So U is as before. Yeah — OK, so let's say you have a retraction here as usual — and I told you not to let him get to the chalk; there you go — and r from U into U^1 is sc^∞ as well. Isn't that the original statement? No, it's a little bit stronger. So this is a retraction, everybody understands this, right? But if you lift the index by 1, it means you're going to the more regular space. It's still sc^∞, so that's the stronger condition. Then r(U) is a smooth manifold, in a natural way, with the structure induced from the ambient space. That's a nice exercise. OK, great. So, Dusa, does that answer your question? Yeah, that's very helpful. And the proof is actually to construct such an r such that f composed with r is 0, and such that the tangent map of r at each point maps onto the kernel of f'. So you have to construct an f, you said — construct an r? No — r is given — no, no, no: given f, let me go back to the implicit function theorem. Oh, the proof of the implicit function theorem. Yeah, so what they actually construct is an r whose image is precisely the solution set. So you're saying that the idea of the proof of the implicit function theorem here is: use the exercise. Construct such an r, so that f composed with r is 0, and so that the tangent map of r has precisely the kernel of f' at that point as its image — the image of the tangent map of r is equal to the kernel of f'. OK, I have a question about this. Can I think of this as implying the various ordinary gluing theorems that we know? If I have a configuration of four linked spheres where I can verify some transversality conditions, and I don't want to read that chapter of Dusa and Dietmar's book, can I just apply this theorem? Yes — so when you set this up — in the previous week we had this discussion on how to glue at nodes and so on — if you set this up, you would get, say, the retract X. So we have a nodal sphere, we look at what's nearby, we get the retract X, we construct this bundle, and then we look at this Cauchy–Riemann section. And then in the nice case, where the linearization is surjective, you have this implicit function theorem and the nearby solutions — the glued solutions. So what's the precise transversality condition that needs to be verified?
So in this case, for a broken thing, it would actually be, for each piece, classical surjectivity. So you have two spheres and so on. I think what I remember from their book is that it says a little bit more — I think there's also a transversality condition for the evaluation maps. Yeah, OK. So, well, you need the right index for that, actually. I'm sure you need the same conditions; you just have to check them in the polyfold setup. You have to check them and refer to them — I mean, it's precisely the condition that they state; that's what it's going to be, and I can just translate it. No, sure. I mean, whatever is true classically is true here; whatever good thing you can say classically, you'll find here as well, with the same modifications. So I didn't understand what you said about how the transversality of the evaluation map, in that case, would get built in. Well, when you set things up, the two operators don't move independently, because they're defined on spheres which have common points. So that gives you some algebraic constraints, and you have to verify that the constrained operator is surjective — which you get if you have transversality of the evaluation maps across the components, and so on. So that's going to change the scale Banach spaces that you're working with — the condition that the values at the nodal point have to agree? No, that is in the setup. So — just, can I try this? Precisely this — which I think you explained, or Joel did — is why you have the anti-gluing appearing in these terms. So in this case, the space is precisely maps from the two spheres, the two distinct parts, where the nodal values coincide — it's already built in that they actually coincide there. So because of this, you cannot look at the two operators completely separately, because when you linearize, you only look at variations which coincide over this point. And the transversality condition on the evaluation maps precisely means that this constraint doesn't matter. Yes. OK. Of course, if you understood the answer, I don't need to add anything — but I would be happy to hear whatever you were about to say. OK, so I wrote down sort of the putative filled Fredholm section in the Floer case, and you might remember that it was just d-bar, d-bar in both components. So in that case it really is just a Cartesian product of two classical Fredholm operators, and if those are both transverse... So in the Floer setting, you don't — yeah, it's a true product. In this Floer setting, however, I already assumed — I didn't tell you, but I was just assuming — that the Hamiltonian trajectories are cut out transversally. So if you now did the same shenanigans in a more general situation, for example Gromov–Witten, then the pre-gluing map — even the chart map for the ambient polyfold — is not defined just on the product of spaces of maps from spheres, but on the fiber product, where you have to match the evaluation maps at the nodes. And this fiber product is going to carry all the way through to what your d-bar — your filled section — is. So you can either put that constraint into your operator or into your domain; you can pick. But essentially, exactly what Helmut says comes out: it's clear already from the pre-gluing construction that you have to require equality at the node. And so if you went through my whole setup, you would get the two operators d-bar and d-bar, but not on the Cartesian product — only on the pairs in the Cartesian product which have the same value at the node.
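Just to have it in symbols: in this Gromov–Witten type situation, writing z_1 and z_2 for the two preimages of the node (my notation here), the domain and the linearized problem look roughly like
\{(u_1, u_2) : u_1(z_1) = u_2(z_2)\}, \qquad \left(D_{u_1}\bar\partial \oplus D_{u_2}\bar\partial\right)\big|_{\{(\xi_1,\xi_2) : \xi_1(z_1) = \xi_2(z_2)\}},
that is, the two Cauchy–Riemann operators restricted to pairs of variations that agree at the node.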
So that constraint is the thing that will need to be transverse. Or alternatively, you could say: the big moduli space being cut out transversally is equivalent to the pair being cut out transversally and, from the pair of moduli spaces, the evaluation map to the nodal point being transverse to the diagonal — which is exactly the classical condition. OK, so I strongly suspect that someone besides Helmut, Chris, and Katrin has some basic confusion from last week, so I want to encourage questions like that. Anyone? So what about the dimension in that setting that I mentioned? Yes, good question. Right, so that's correct, and this is going to sound trivial, but nonetheless I think it's useful to note: let's say that we start out with — oops, yeah, the chalk — we start out with a sc-Fredholm section f which we don't assume to be surjective anywhere. Then it's a theorem that the filled section — the filled section means you put together f with the isomorphism that you assume you have between the complement of the retract cutting out X and that of Y — so then we get this filled section; let me call it capital F. So then the fact is that this filled section, which is now going between honest sc-Banach spaces, or open subsets thereof, has classically Fredholm — no? I'm sorry — yes, thank you — sc-Fredholm linearizations at every x in X_∞. And then, Helmut correct me if I'm wrong, the Fredholm index of — well, it has a Fredholm index. OK, and let's say that the index of this linearization at some point x_0 is equal to i. Then you can first apply a theorem saying that you can always perturb, using sc+ sections, to get transversality at this point — such that... I think I might be mangling this, so hold on for a second. Let me say it in a safer way. Usually, when you talk about a filled section, it comes from having chosen a point x first, so there's usually one point. Because when you look at that condition there, as you go higher and higher up, the filled section basically only exists, on higher and higher levels, near the original chosen point. So you're complaining about the fact that I should have been clearer about the local nature of the filling. Like you said — for every x — usually it only makes sense at one point at a time. So the thing that I wanted to get across, and clearly I'm not saying this correctly, is that there's a notion — thank you, Helmut — there's a notion of Fredholm index. And even if you don't assume surjectivity, you can perturb to get surjectivity, and then the dimension of the finite-dimensional C^∞ manifold that gets cut out is equal to the Fredholm index of the original thing. So rather than try to make that precise, let me just erase this. So I think what Helmut is saying is that for this filled section, you fix the x in X_∞ first, and then you look at the filling there, because, by the theory, every point a priori could have a completely different filling. All right, so it's just a matter of the order. That's it — think of a really wild set; then at each point where you have to fill, you have to put in something else. So the filling is actually only of auxiliary nature. It doesn't have any intrinsic structure. It just exists, and that's it.
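Loosely, the fact being gestured at is this: at a smooth point x_0 the filled section F has a well-defined Fredholm index i = \mathrm{ind}\, F'(x_0), and after a small sc+ perturbation achieving transversality near x_0, the local solution set is a C^\infty manifold of dimension i.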
And of course it looks like a complicated definition, but the nice thing is that in applications, once you've seen one example, I think all the others follow. It's almost canonical in applications, including the filler, actually — it's usually the Hessian at the node of the linearized operator, or at a periodic orbit, or so. So it's usually standard stuff. Right, the idea being that, like I mentioned on Friday, if you're studying Floer cylinders or something, the asymptotic operator — if you look at your operator and you take the limit as you go off to plus or minus infinity — that asymptotic thing is going to be an isomorphism, and that's what you use to define the filler. And the filler is defined on the whole of this infinite cylinder that you sort of lost when you glued. It's defined on the cylinder coming from the node — and actually, in general, it's not the standard cylinder: if you look very precisely, the identification of this cylinder depends on the gluing parameter. You slide it over further and further; it always looks like a cylinder, but it's not canonical. But — I mean, you're just saying that there are sort of canonical choices of coordinates, but those coordinates depend on what your gluing parameter is? Yeah, there are actually two choices, for each cylinder, of canonical coordinates, which depend on the gluing parameter. Okay, so any more questions? Yes? So in applications, is it clear that this property of g is satisfied, with this form, or is that something...? Well, my understanding is that the easiest or most natural way to prove it in applications is to use this alternative definition that Katrin came up with. So she came up with this definition, which sort of looks more complicated, but in fact is easier to use. But I'm not the expert on this stuff — do you two care to comment? Is that correct? Well, I saw all of Katrin's alternatives again — maybe hers would be easier. Yeah, so far it's the obvious way to prove these things: you show that things are smooth in all directions other than the gluing parameters, and then some uniformity of these derivatives in the good directions, with respect to the bad directions. So it's actually not that difficult — I mean, I think putting things in this framework is sort of on the level of proving a gluing theorem in a simple situation, like two caps, or two spheres. I think the key to what Katrin said is that when we were looking at this reparameterization action, it was not differentiable, but it was not differentiable exactly because of what was going on with the gluing parameters. So you could differentiate in the function directions as much as you wanted, but the bad stuff was happening in the gluing-parameter direction. Yeah, but the reparameterization is there in all directions — I mean, if you slide the — yeah. But I mean, if you fix the gluing parameter, then it's — yeah, but the thing is, of course it has something to do with the domain, but when you look at the change of coordinates, it's usually by a diffeomorphism depending on the gluing parameters. So the group now enters through a family of diffeomorphisms of the domain. That's — okay, so.
My impression was that philosophically that was built directly into Katrin's equivalent definition of Fredholmness — this thing I just said about the reparameterization action — but I'm not quite sure. And there's a really great write-up of the proof of polyfold Fredholmness of the del-bar operator in the Hamiltonian Floer case, in a paper of Katrin's on the arXiv. I don't remember the title, but it was 2012. Is Katrin's alternative definition for splicings only, or for retractions? It's for splicings only. The definition is — here, your equivalent definition — the equivalent definition is for maps between open subsets of sc-Banach spaces, once they're filled. And so I then just — I don't explain the filling; I just write down the filled version in that paper. Okay. Do you expect any applications which really need retractions, and not just splicings? I think that the answer is no. Why? Well, what's an example? How would you predict the splicings? I'd hesitate to say expect — in all current applications, my understanding is that splicings suffice; I don't think any are known at the moment for which genuine retractions are going to be necessary. Predicting the future in mathematics, I find, goes really badly; it usually doesn't work, so I stopped doing it. But anyway. What's the difference between a splicing and a retraction? Okay, good question, great. Right, so first, retraction. A scale retraction is a sc-smooth map r which goes between open subsets of sc-Banach spaces with r composed with r equal to r. So, scale retraction — that's it; it has this really simple definition. The definition of a splicing takes slightly longer to write down, but it's a special case of a scale retraction. So a scale splicing is a map of the following form — let's call it P, going from, and let me write down the case without boundary, R^d ⊕ E to itself — of the following form: it sends a point (v, e) to (v, π_v(e)), with the following properties. The first is that these π_v form a family of linear retractions: for all v, π_v, which goes from E to itself, is a — what do you want to call this, a scale projection? — well, let me just say it's a linear sc^0 map from E to itself which squares to itself. And then the second property is that P itself is smooth. Did I say that correctly? Yeah, okay. So — that's not to say the family v ↦ π_v is smooth. That's right — no, no, it's not even continuous; that's the whole problem. Not classically smooth — sc-smooth. It's smooth in the sense that when you put them all together, capital P is sc-smooth. But I think your question was about the individual operators: yeah, v ↦ π_v is not even going to be continuous as a map from R^d to L(E, E). By the way, Joel reminded me that there are retractions which are not splicings coming up in good applications. Which applications? Like constructing the manifold of maps from one manifold into another. I can construct that basically in 60 seconds — maybe that's good for Wednesday. All right, yeah, that's your hour, Helmut. Right, so let me just remind you that there's this example that we went through on Wednesday. Sorry, just before you write — is the word linear the crucial thing that distinguishes a splicing from a retraction? I would say that v being the first component is crucial — I mean, both of them.
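So, on the board, roughly (boundaryless case, notation as above): a scale retraction is a sc-smooth map r : U \to U, with U open in a sc-Banach space, satisfying r \circ r = r; a scale splicing is a sc-smooth map
P : \mathbb{R}^d \oplus E \to \mathbb{R}^d \oplus E, \qquad P(v, e) = (v, \pi_v(e)),
where each \pi_v : E \to E is a linear sc^0 map with \pi_v \circ \pi_v = \pi_v — and the family v \mapsto \pi_v need not be continuous as a map into L(E, E).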
So it's a family of linear projections, whereas this r, a priori — you have no idea what kind of form it has. Yeah, there's no R^d that splits off and parameterizes some kind of family of maps, even nonlinear ones. Is the splicing form enough to get tameness? Yes. Yes? Yeah, and I think — I haven't double-checked this — but I think the crucial element is the fact that the first component is v; it's sort of an identity there. Because v here is kind of acting like a boundary-defining function, and it's not mixed in with the rest of what's going on. And when you can separate it out like that, I think that is the essential feature which guarantees tameness and a lot of these other properties, which follow relatively easily for splicings and not so easily for retractions. Because the boundary is all seen in R^d — you take the boundary structure from the R^d factor; that's the only place it comes from. So the boundary is there, and everything is nice. Exactly, that's what I would think. The boundary is nice to see. So let me recall for you that this example I talked about last Wednesday, where the retract was homeomorphic to this subset of R^2, is an example of a splicing. In this case, v is running in the horizontal direction, and for any positive v, the projection π_v is the projection onto the one-dimensional subspace spanned by a bump function centered at e^{1/v}; for v less than or equal to zero, π_v is zero. And, as was alluded to, every single retraction that has come up when constructing moduli spaces of holomorphic curves has been a scale splicing. So the big one is when you're projecting onto the kernel of the anti-gluing — that's going to give you a scale splicing. So, whoever asked that, are you happy now? Yeah. Okay. When you said this, I remembered — but you might need a retraction that's not a splicing? In theory, yeah. I guess Joel actually has an example that he just recalled, when you're constructing a manifold of maps. Me? No. Oh — that there might be a scenario — apparently they have a scenario where you need to consider a retraction that's not a splicing. Okay. "Me" is a strong word, but very useful. And the example and its scope will be shown on Wednesday. Think about the following, how it seems different. If you look at differential geometry, you already have a somewhat hard time finding a book which actually talks about manifolds with boundary. But if you then want to find somebody talking about manifolds with boundary and corners, I think it's basically impossible. Why is that? Because — before, someone talked about sub-manifolds, which already cause problems — but if you come now to boundary and corners and you want to talk about sub-manifolds, it gets a little bit messy, yeah? What you can say is: if you define a sub-manifold, in the classical differentiable setting, as a set which is locally the image of a retraction, then in the interior it will be a real manifold, and near the boundary you have a tangent space to the set, and you can say more about the boundary behavior if you know how that tangent space lies with respect to the rest. So my proposal is that differential geometry works better if you actually build everything on retracts, because it's an absolutely easy formalism — much faster constructions of manifolds of maps, degree theory, everything.
So that's my proposal, to get the boundary right. Okay, great. Right, so I think one more question, then I want to say something. What's that? Can you say — you wanted to say something about the classical analogue of the Fredholm condition, how it looks in the classical case? Yeah — I think I have something more important to say. If you want to read about that, it's about half a page long, it's super easy, and it's nice, and it's in the introduction of the paper titled A General Fredholm Theory II, by Hofer, Wysocki, and Zehnder. Any other last questions? Okay, so what I want to do in the last 10 minutes — I hope I can fit it in — is prove the easiest possible version of a regularization theorem. And the reason this is relevant is that the polyfold version of it has exactly the same proof, basically with "scale" stuck in front of some words. Right, and I should say that this is lifted from Katrin's course from a couple of years ago. Right, so here's the setup. Let's take a finite-dimensional vector bundle E living over B, and let S be a section of it. So here, B is a finite-dimensional manifold, E is a finite-rank vector bundle, S is a C^∞ section, and the zero set is compact — which turns out to be crucial to the proof of this theorem. So what does the theorem say? It says that you have perturbations. So the conclusion is that there exists a set, called P, sitting inside the compactly supported C^∞ sections, with the following properties. Okay, the first one says that there are elements of P and, in fact, you can find arbitrarily small ones: there exists a sequence p_i such that p_i goes to zero in the C^∞ topology. Okay, the next one is transversality, so that's pretty important: for every p in this curly P, if we perturb S by p, then that section intersects the zero section transversally, which is to say that for every b in the zero set, the linearization is onto — D_b(S + p) is onto, as a map from the tangent space of B at little b to the fiber. Yeah, yes, thank you. Okay, and then the last one is that compactness is preserved. Yeah, so the proof is short and it's pleasant. And like I said, if you know the proof of this theorem, then you also essentially know how to prove it for polyfolds. Okay, so let's fix b_0 in the solution set. So while our section isn't necessarily transverse at that point, we know that the cokernel of the linearization is finite-dimensional, since E has finite rank. So then let's choose a basis e_1 through e_m for the cokernel of the linearization of S at b_0, and let's extend these guys to compactly supported sections t_1 through t_m. Okay, and so then, using these finitely many sections, let's beef up our original vector bundle: let's look at the bundle E sitting over B × R^m, where the projection is the obvious projection, and now we get this new section called S-tilde. And it's defined by setting S-tilde applied to (b, x_1, ..., x_m) equal to S(b) plus x_1 t_1(b) plus ... plus x_m t_m(b). Okay, and then we're just, of course, doing the trivial thing in the R^m direction. Now the point of this is that we've now killed off that cokernel at b_0, so we know that S-tilde is transverse to zero at the point (b_0, 0). Okay, and it follows from that that there exist delta greater than zero and U sitting inside B, a neighborhood of b_0, with the property that S-tilde is actually transverse to zero on all of U times the ball of radius delta centered at zero.
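To record the construction in symbols: with t_1, \dots, t_m the chosen compactly supported sections, one sets
\tilde S : B \times \mathbb{R}^m \to E, \qquad \tilde S(b, x_1, \dots, x_m) = S(b) + \sum_{i=1}^m x_i\, t_i(b),
so that \tilde S is transverse to 0 at (b_0, 0), and hence on a set of the form U \times B_\delta(0) with U a neighborhood of b_0.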
Okay, and then the point of this is that we're now going to exploit the compactness of the original solution set, to say that we can more or less cover the original solution set by finitely many of these sets U. Any questions about this so far? So let's cover the zero set, which is compact, by finitely many of these open sets U, since we can do this original process at any b_0 in the solution set. It must not be B that's compact — it's the zero set that's compact, is that what it's supposed to be? Yes — S^{-1}(0) is compact. Thank you very much, it's a good question. Great, so then what this allows us to do, if you write down what this implies, is construct a fattening-up, E × R^K living over B × R^K, and a section S-tilde with the property — let me say this correctly — that S-tilde is transverse to zero on U times the ball of radius delta, where — oh, and I'm sorry, this U should have been a neighborhood of the zero set; U is supposed to contain S^{-1}(0). So now it makes sense. So it's now crucial that there were only finitely many of these sets, so that I could choose a uniform delta. Okay, so now we're almost done. So let's set Sigma to be (U × B_δ(0)) intersected with the solution set of S-tilde. Okay, so Sigma, because the zero set of S-tilde is cut out transversally on this set, is going to be a finite-dimensional manifold. Okay, and now we're essentially done, because we can consider what happens when we include Sigma into the base and then project down to R^K; so let's call this map Q. So then we can apply Sard's theorem to Q — note that we're in the totally finite-dimensional setting, so no problem with Sard's theorem. So Sard's theorem tells us that there exists a regular value of Q, let's call it y in R^K, which is as small as we like, though I won't prove that; that's what's going to allow us to prove the first part of the theorem. And so it follows that if we look at S plus y_1 t_1 plus ... plus y_K t_K — this guy, which is a section of our original bundle E over B — it is transverse to zero, which is all we wanted in the first place. So anyway, that proves the theorem. And let me tell you what you need to do to put this into the polyfold setting. So the first thing is that you need the contraction part of the definition of sc-Fredholm in order to be able to say that solution sets of things transverse to zero are finite-dimensional smooth manifolds. And then the other thing that you need is for your sc-Banach spaces to actually be sc-Hilbert spaces, in order for bump functions to be defined. Yeah, you just need sc-smooth bump functions, and they exist on other spaces as well — not on all Banach spaces, but some are fine. Okay, so anyway, you need bump functions, and besides that, the rest of the proof carries through. Any questions about the proof? And then your perturbations would be S plus combinations of the t_i's? Yes. And the t_i's are sc+ sections, so otherwise you can just copy the proof. Yeah, so you can conclude that you can get transversality by perturbing only with sc+ sections. Yeah, Felix? Yeah, I guess this is a different kind of regularizing — it's regularizing in the sense that you end up with a smooth manifold. Yes, the language is confusing: there are two kinds of regularizing. Okay, I will stop now. I suppose we were supposed to have a full hour of questions, but is there any chance there are some last-minute ones? Yeah.
Could you say that one more time? Yeah — okay. Could you say once more how I get cobordisms for different perturbations, in this infinite-dimensional case? Let's see — can I say anything sensible? So I haven't worked it out, but I think that you're just going to need to — so you're going to start off with these things that are transversally cut out, and you're going to basically need to extend whatever perturbations you made to get transversality over this whole cobordism. I mean, you just need to prove a version of this theorem where, on a neighborhood of some closed set, you already have a regular perturbation, and then you want to extend it in a regular way. Yes. But you see immediately that that works as well, because then, in the other case, you have two boundary components which are already regular, and then you just extend that in a regular fashion. And it's the same idea: you just add one parameter, but you use the same reasoning. So the one thing I'm wondering about is the initial step, when you take your perturbations and you want to extend them to something which is not necessarily transverse — but you certainly at least need Fredholm. So, yeah, you need, of course, that the extended problem is still Fredholm, but that is not an issue. But in the one-parameter family it's a different Fredholm problem; the two boundary components are already regular, because they have regular perturbations, and then you just have to do the thing inside to fill out the cobordism — you just add sc+ perturbations with your parameter, and that's it. So it stays the same. Well, the one thing that's confusing me is: certainly if you've gotten this homotopy through Fredholm operators, you'll be okay, but why is it immediate that you can do that? Well, you look at your principal part: this v just gets one additional parameter t, which doesn't affect — oh, I see, I see. Okay, right. So the key is that we use this contraction form for Fredholmness, right? I mean, the extra parameter goes immediately into that.
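In symbols, loosely (with g_t denoting the principal part of the problem at homotopy parameter t — my notation here): one considers
\pi_W\left( g_t(v,u) - g_t(0,0) \right) = u - B(t, v, u),
i.e. t is simply absorbed into the finite-dimensional parameter alongside v, and the same contraction estimate in u applies, so the one-parameter family stays sc-Fredholm.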