Very good. Thank you. So first, right off the bat, I want to apologize for this juggling of the schedule. I have some personal obligations which are making it difficult to schedule, so hopefully this will be the last of these switches. But who knows, I suppose. OK. Right. So a couple of people pointed out a couple of things to me after my talk yesterday that I wanted to bring up. And it was sort of funny, because these people raised exactly the sort of point that I've raised with Helmut on a number of occasions, which is that oftentimes you start talking about this stuff and there's just ambiguity in the language; once you're brainwashed enough to understand what's going on, that ambiguity starts to feel natural, and then I committed those same sins yesterday. So I wanted to point out something about that. There's an ambiguity in how I write, and to some extent in how HWZ write, between E, which is meant to be a scale Banach space, and E_0, which is just a level — in particular the base level, the zero level, of that sc-Banach space. In particular, this can suddenly look confusing: what does it really mean? If E is a scale Banach space, then it's a whole sequence of Banach spaces, so what do I mean by saying there's an open set sitting inside it? The convention has always been that E means both. When I write E, it means the whole sequence of Banach spaces, but any time you see a set-wise statement like this, I'm talking about the zero level. So an open set in a scale Banach space always means: you take an open set U on the base level E_0, and then it carries the scale structure induced by intersecting that open set with all the other Banach spaces in the scale, so the levels are U_m = U intersect E_m. Does that make sense? Hopefully that clears things up. The more you go on with this sort of stuff, the more you discover you need more and more notation, and at some point it just gets too difficult to keep track of it all, so you have to allow some amount of ambiguity to make anything understandable. That at least is my preference. So if at any point you get stuck, even on these basic questions — because believe me, the first ten times I read this stuff, this is exactly the sort of thing that would drive me nuts — please ask. Yes? So, being open in the base level E_0 — the higher levels embed compactly, right? So shouldn't this be equivalent to U intersect E_k being open in E_k for all k? That will certainly also be the case. Whether it's actually equivalent, I'd have to think about; I'd need to see a precise definition. Any other questions? OK. So the first thing I really want to do during today's talk is recap what we did yesterday, because we're going to build on it. The first main thing I wanted to point out was that the action of the reparameterization group is not classically smooth. We had a toy problem, and we showed that the issue shows up in transition maps when one tries to set up Banach manifolds for Morse homology. And then I said that the same sort of action shows up in Gromov-Witten and essentially any moduli problem you want to consider. The action of reparameterization is not classically smooth.
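Since the toy problem from yesterday isn't restated here, let me flag a standard illustration of this phenomenon (this is a stand-in example, not necessarily the one used in the previous lecture): the translation action

\[ \Gamma : \mathbb{R} \times L^2(\mathbb{R}) \to L^2(\mathbb{R}), \qquad \Gamma(r, f) = f(\cdot + r). \]

This map is continuous, but it is not classically C^1: differentiating in r formally produces f', which need not lie in L^2, so the differential only exists after giving up a level of regularity. That trade of regularity for differentiability is exactly what the scale calculus is designed to accommodate, and on a suitable scale — for instance levels of increasing Sobolev regularity with increasing exponential weights, like the ones in the homework problem below — such an action becomes sc-smooth.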
And then it was pointed out that, although the action is not classically smooth on the total space — the ambient space of functions you want to work with — it is a smooth action if your moduli space is cut out transversely and you restrict the action to that space. So then we said: what we'd really like is for this action on the total ambient space to be smooth in some sense. So we introduced scale Banach spaces and scale differentiability, and as a consequence we had two key facts. One is that the reparameterization action is now sc-smooth. The other is that the chain rule holds. The chain rule holding basically means that we now actually have a new notion of differential calculus in some sense. And reparameterization acting smoothly is nice, because it means you can build transition charts with some notion of smoothness between them, so you can build something like a scale Banach manifold in some toy cases. The last thing I wanted to point out, which I didn't mention last time: for me, if someone tells me a function is smooth, that always means C-infinity. I know there are a number of people who think smooth should just mean C^1. So for all of my talks, smooth means C-infinity, and in particular sc-smooth means sc-infinity, not sc-1. And then you can ask: did we actually show that the reparameterization action is sc-smooth? Strictly speaking, no. But the proof carries over — you just iterate it — and then you can prove that the reparameterization action, at least in the toy case I presented, is in fact sc-smooth. Any questions about that? So, new material. Today we want to parameterize — maybe I'll say it this way — we want charts near nodal maps, let's say. We're thinking in some sort of Gromov-Witten setting, but we're not going to worry about quotienting out by automorphism groups; we just want to understand what charts near nodal maps should be. So the image of the map we're thinking of looks something like this: two spheres, a map from a nodal sphere into some manifold, or into R^{2n}, say, for instance. And even to state this, we're implicitly assuming something in the background: that we know what it means to be near a nodal map in a reasonable sense. So I'm already assuming we have some sense of how to put a topology on the space of nodal maps inside this larger space. In particular, you would like it to be the case that near this nodal map is, say, this map, which is not nodal — you've glued it a little bit. We'd like this one to be close to that one. That's what we mean by near. And that, in particular, is what we want to find a chart for, assuming our chart is centered at this sort of map. So that's our main goal for today: how this appears, particularly in the polyfold framework. The first step, I guess, is to take this picture here, and for simplicity we really want to cut away as much of the topology of the problem as we can. If we do that, then it turns into a problem that looks like — my drawing on the fly is a little bit poor, so my apologies for that. But I hope Joe's delirious.
But I hope that you at least — very good. So this is what I want to do. I want to chop away the interesting topology and turn it into this sort of problem here. And we're going to see, with pictures and then more precise statements, how this ends up being useful for us. What I would like to do first, though, is a little warm-up problem. Or rather, I want to give a warm-up problem, which goes like this. Definition: for fixed delta > 0 and k a natural number, define H^k_delta(R x S^1) — I'm going to use subscripts to denote coordinates on these guys when it gets convenient, if I want to be precise. What is H again? We haven't used it in a while — right, H^k is just W^{k,2}. So I define H^k_delta to be the set of all f in H^k_loc such that e^{delta |s|} d^alpha f is in L^2 for all |alpha| between 0 and k. And the norm in this case is the sum over |alpha| between 0 and k of the integral over R x S^1 of e^{2 delta |s|} |d^alpha f|^2 ds dt. And then the homework: for 0 < delta_0 < delta_1 < delta_2 < ..., all of this less than 2 pi, show — let me write it this way — show that the scale G, with levels G_k defined to be H^k_{delta_k}, is an sc-Banach space. There are some hints in the lecture notes that I have online, and of course you can ask Nate; he's worked through this, so that's an option as well. It's mainly showing that the inclusions are compact — that is the key step. And I guess the hint I provide in the notes is that this exponential decay, in particular the fact that you have increasing weights in your exponential decay, is crucial to guaranteeing that compactness result. However, I would add an addendum. As an open-ended question, explore what other possibilities one might have. In particular it is, for instance, important that the deltas strictly increase — they don't have to go to infinity, although that would also work — but do you need exponential decay? That also raises a natural question: why do I have 2 pi here? This seems completely strange, and you can choose a different cap there if you like. Because I'm talking about problems in Gromov-Witten, this 2 pi ends up being the relevant number; for those who've done more analysis in the subject, this number has to do with the spectral gap of an appropriate asymptotic operator, and 2 pi is what shows up in the Gromov-Witten case. If you're working in SFT or Floer homology or something, then these caps have to be different, and there will be corresponding changes there. But in any case, this is a prototypical scale Banach space that occurs in a lot of the polyfold literature, I think; everything is sort of a modification of this. Good. OK. So now what are we going to do? Is there some intuition as to why that's the norm you write down? What else could it be? Starting from what? Starting from here, I would say this is the obvious norm. So why is this the choice? I would say the point is that you have a non-compact domain, and if you want to have a scale Banach space, then you need compact inclusions of higher levels into lower levels.
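Since the compact inclusion is flagged as the key step, here is the rough heuristic behind it (a sketch only, not the homework solution; the cutoff parameter T is just for illustration). On a bounded piece [-T, T] x S^1 the inclusion G_{k+1} into G_k is compact by the usual Rellich-type theorem, while on the tails {|s| >= T} any f with \|f\|_{G_{k+1}} <= 1 satisfies

\[ \sum_{|\alpha| \le k} \int_{\{|s| \ge T\} \times S^1} e^{2\delta_k |s|}\, |\partial^\alpha f|^2 \, ds\, dt \;\le\; e^{-2(\delta_{k+1} - \delta_k)\, T}, \]

which is uniformly small once T is large, precisely because delta_{k+1} > delta_k. A bounded piece where you can extract convergent subsequences, plus tails that are uniformly small, is exactly what a diagonal argument needs to produce a G_k-convergent subsequence; this is also where the strict increase of the weights from level to level gets used.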
So increasing the regularity alone won't do it; you need some additional information. The exponential weights that we put on heavily weight things on the outside, and — you can tinker around with it and see — that guarantees compact embeddings. Another way to think of it is that, morally, those exponential weights in many ways allow you to treat the problem as if you were dealing with a bounded domain instead of an infinite one. Why not take even more than that — a cutoff? Well, you wouldn't; I don't think you would want a cutoff. What about double exponential weights? Yes — so again, my addendum to the homework problem was to explore the necessity of this exponential form. And the solution is basically that it doesn't have to be exponential; it could be a lot weaker, it could be a lot stronger, but there's an important key in there. Why choose exponential? I think one of the reasons is that it plays very, very nicely with the corresponding asymptotic analysis, and it's nice for that sort of stuff. If you had a double exponential, you'd be ruling out some of the things that you wanted to count as solutions — the ends decay exponentially, but not necessarily double exponentially. But if you just want to construct a space, you have a lot of freedom. If you want to construct a space that's useful for applications, then you have to be a little bit more careful, and they all tend to follow this form. But these are all good questions. So is it reasonable, then, to be thinking about ends of pseudoholomorphic curves here, with this definition? For this definition, the most natural thing to think of is — well, let me draw the domains right now, and then hopefully that'll start to clear things up. So let me draw what I erased. You just have to buy time. That's better, Joe, that's what I expect from you. It is supposed to be a cylinder. A half cylinder. OK. So here's what we have. I said, with a picture that I erased previously, we had a nodal curve that you might see in Gromov-Witten. And then I said, OK, I want to forget about the topology as much as possible and reduce it to this pair, this nodal disk pair, basically. So this is what I have. And then you say: well, look, this is a disk, but I had this nodal point, and I want to treat it like a puncture. If I treat it like a puncture, then I have holomorphic coordinates which take me to a positive half cylinder, so this ends up being R^+ x S^1. Over here, you do the same thing but in the opposite direction: R^- x S^1. So now, if this is your domain and you have a map defined on it, you can pull that map back to maps on these two half cylinders, and in particular you'll have an asymptotic matching condition. So then, because our goal for the day — although I erased it — is essentially to find charts for neighborhoods of nodal curves, what we have to do — let's see, where is this in my notes — the right thing to do is that it's going to involve these pre-gluing maps.
So I have to define what those are for you, and I like to start with a picture. The idea is: if you have the nodal map — the nodal disk map — and then you pre-glue, you end up with a cylinder of finite modulus. That's the picture I'm drawing here, and in fact I'm going to name it: it's going to be called Z_A. Let me define it precisely somewhere on the board here. Z_A is equal to [0, R] x S^1 (with an s-coordinate and a t-coordinate) disjoint union [-R, 0] x S^1 (with an s'-coordinate and a t'-coordinate), quotiented by the equivalence relation which identifies the point (s, t) with (s' + R, t' + theta), where R = e^{1/|A|} - e and A = |A| e^{-2 pi i theta}. So what am I doing here? I don't think this is significantly different from what's done in the McDuff-Salamon book. The idea is that if you have a nodal pseudoholomorphic curve and you want to find nearby pseudoholomorphic curves, there's an argument you make: you find this pre-gluing map, which allows you to construct nearby maps from nodal ones, provided you've given me this complex gluing parameter, which I'm calling A. Which thing is called the pre-gluing map? The pre-gluing map hasn't been written down yet; I'm about to do that. OK, why is it called pre-gluing? I'm using Katrin's terminology — this is where I've acquired it from. The idea, my understanding of Katrin's idea — she can yell at me if I'm wrong — is that in something like Gromov-Witten or the various Floer homologies there's a gluing map, and the gluing map should be understood as: you take a broken solution to your problem, and the gluing map takes you from that broken solution plus a gluing parameter to another solution. Pre-gluing says: give me two a priori non-solutions, but put them in a function space that's close to where the solutions should lie. It's called pre-gluing because usually, my understanding is, you take solutions to the nodal or broken problem, you pre-glue those solutions together, and then you run an iteration — a Banach fixed point argument — to say that there has to be a nearby genuine solution. Is that roughly correct, Katrin, or am I misstating something? No, that's correct, and it's not my terminology, I think. Well, that's where I learned it. It's in our books. That's a good reference, I think. OK, so this is standard language that I'm just not aware of. OK, very good. So what's going on here is that we have to define that pre-gluing map, but in order to define it, you need a domain for it, and in order to define the domain, you have to do a bit of this type of pre-gluing on your domains; then we're going to find maps on this new domain. So let me do that now. Given — OK, I can be precise here — given A in C, where we're really thinking of |A| as being near 0, and given u^+ and u^- as maps from R^+ x S^1 and R^- x S^1 into, for convenience, R^{2n} — we can change things later if necessary, although with retracts you can do some interesting tricks — we have this pre-gluing map, the plus-gluing of u^+ and u^-.
Well, let's see. It's defined as a map from Z_A into R^{2n}: you give me a gluing parameter and two of these maps, and I'm going to construct a map from this finite cylinder into R^{2n}. It's given by the formula ⊕_A(u^+, u^-)(s, t) = beta(s - R/2) u^+(s, t) + (1 - beta(s - R/2)) u^-(s - R, t - theta), where beta is a cutoff function of essentially the following form — this is beta(s) here — so that (1) beta', the derivative, has compact support, (2) beta' <= 0, and (3) beta(s) + beta(-s) = 1 identically. With this beta defined, this gluing map — pre-gluing map, rather — is well defined. And of course, if you haven't done any gluing analysis before, something like this looks really unpleasant. If you have done gluing analysis, you can at least see what's going on here, I think, hopefully fairly clearly. Basically, on this side you have a cutoff function, and over here you have 1 minus a cutoff function. What that means — and you'll see it if you tinker with it, if you haven't done it before — is that you're essentially interpolating between u^+ and u^-, modulo these shifts in the corresponding domains. And in the picture, that's essentially what happened; Katrin actually drew a similar picture during her lecture. The idea is that you take your two domains, you shift them, you add in this relative twist, and then you identify this truncated region, which we defined to be Z_A. On Z_A, on the far left-hand side, it's u^+, which is defined on the top part; on the far right-hand side of Z_A, it's u^-, which is defined on the bottom part. Does that make sense? This is just the pre-gluing part. Is this all smooth? This is just, yeah, this is just pre-gluing. And is everything done here smooth for A away from 0? I think — no, I'm not going to say that. I have to think about what smoothness means in this case. Smoothness is a little bit strange here, because the domains of your maps are changing, so you would already need to build a space where it makes sense to even compare them. So I won't make any claims about smoothness. Good. And so? Do you mean the variation in A right now? Well, no, I mean: how do you compare? Suppose you even fix u^+ and u^-, and then I compare A = 1 and A = 1/2. The point is that if you look at the definition of Z_A, your domain is changing. And in fact, with the way we've carefully defined Z_A, even if we compare, say, A = e^{i pi / 2} to A = e^{i pi / 4}, for instance — even in this case, with this careful definition, your domains have changed. They're all diffeomorphic, of course, and in this case they even have the same modulus, but the domains have changed. So all I meant to say was: what does it even mean to say smooth at this point? The domains of your functions are changing, and if I want to compare two functions, I want them to have the same domain, I think. Any other questions? OK.
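To spell out the interpolation (this is just unpacking the formula above with the stated cutoff conventions; s_0 is a label for a bound on the support of beta', and |A| is taken small enough that R/2 ± s_0 lies inside [0, R]): since beta is identically 1 to the left of its transition region and identically 0 to the right of it,

\[ \oplus_A(u^+, u^-)(s,t) \;=\; \begin{cases} u^+(s,t), & 0 \le s \le \tfrac{R}{2} - s_0, \\[2pt] u^-(s - R,\, t - \theta), & \tfrac{R}{2} + s_0 \le s \le R, \end{cases} \]

with the actual interpolation confined to the band of width 2 s_0 around the middle of the neck. This is exactly why, as comes up below, everything that u^+ and u^- do beyond the middle of the neck gets thrown away by the pre-gluing.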
So now I want to keep moving — sorry, can you repeat what the role of this gluing parameter A is? Sure. When A = 0, it's as if your maps haven't been glued at all: it's the nodal map. What A controls is, in some sense, how you construct Z_A; it controls, say, the modulus of the neck at this nodal point, and a bit more than that. Why do you have to twist? Well, for one reason, if you didn't twist, then you'd have boundary, and then your Deligne-Mumford space of Riemann surfaces would have boundary, and you'd know you'd done something wrong. I think if you want to get all nearby maps, you have to include that twist parameter. OK, so here are some ideas. The first idea is the right idea. And I should say — Katrin's told me this about a million times, and it was only yesterday that the light bulb went off in my head that this was really the right idea; it takes a while for stuff to sink in, let me tell you. The idea was to use this pre-gluing map to define a neighborhood of a nodal map. I bring this up specifically because during Dusa's talk, someone said: wait a second, what's the topology on the space of pseudoholomorphic spheres, or whatever it was — once you compactify, what's this topology? And the immediate answer was essentially, oh, I don't want to talk about that. Which is perfectly reasonable, because the way you would define the topology is essentially via Gromov compactness, and if you read the definition of Gromov compactness anywhere, it's complicated, so you kind of don't want to do that. However, if you look at the pre-gluing map on this ambient space that you're trying to build, then, restricted to your moduli space, it essentially gives you the Gromov topology. And it's not so difficult to see how this pre-gluing map ought to give you neighboring curves: you say, OK, I take a nodal map and a gluing parameter, and I find these approximate non-nodal curves nearby. That gives you a bunch of sets, and that bunch of sets defines a topology for you, as long as everything is open in the suitable sense. And it's, I think, a very clean way to define the topology — or what you expect the topology to be — on this ambient space. So this is a good idea. The second idea — I have to be a little bit careful here; it's right but wrong — is to use this pre-gluing map to build a chart for these neighborhoods. Let me make that second statement more precise, as this will be useful to keep in mind and explore. What do I mean? Well, via the pre-gluing, I have a map from C x E into capital Z. C provides the input A, the gluing parameter; in reality we're thinking of it as just an open neighborhood of 0, but for convenience let me say C. And E consists of these pairs (u^+, u^-). And we want to take these to their image under the pre-gluing map. So let me define these function spaces for you. E equals the set of all pairs (u^+, u^-), defined on R^± x S^1 — sorry. What is capital Z?
Capital Z is another function space, which I haven't defined yet, because I'm busy defining the first one. So, E is the set of pairs (u^+, u^-) such that there exists a constant c in R^{2n} such that — let me try to write this clearly — e^{delta_0 |s|} d^alpha (u^± - c) is in the corresponding L^2 for |alpha| from 0 up to 3. And Z is the union over A in C — where again I'm being sloppy with my notation — of the H^3 maps from Z_A into R^{2n}. Do I want to regard — what? What kind of object is this? Oh, right: what kind of object is this, by definition? Well, each one of these is a set, and I can take a union over a set and get another set, so it's a set at the moment. But you can see what Z should be: in this context, if we actually capped off our domains by those disks, then we would think of Z as the function space built from a bunch of different pre-glued Riemann surfaces, basically. In particular, if you throw in some additional marked points, your modulus can change, right? So now you have this very large function space of H^3 maps, one from each of these pre-glued Riemann surfaces, but they each have a different gluing parameter, so they're each thought of as completely different. Does that make sense? So is this a disjoint union, then? Well, none of them are contained in each other, so I don't think it matters that much, but maybe I'm wrong; I can put a disjoint union. It's the fiber over the parameter A, right? That's true. Yeah, that's true also. Any further questions? Yeah? Why are you taking the regularity to be bounded, with 3? Right — because I want to try and follow the Gromov-Witten paper as closely as possible, and they do that because these functions then end up being in H^3, which guarantees that they're C^1, and C^1 is important once you want to impose transverse constraints, these transversal hypersurfaces, right? Further questions? Why not just take it to be much bigger, and not lower it? How much bigger? I don't know, like arbitrarily large, you know what I mean? Well, you can. Arbitrarily large in what sense? Ah, I see. That works as well. Oh, I see — this number, yeah, that's a minimum regularity. Yeah, that works, although I would imagine that choice might change things; it carries a scale anyway. I wouldn't be surprised, though, if somewhere along the line making that choice forces you to do some extra work — say, for instance, proving convergence in terms of Gromov compactness: now you have to make sure things converge not just in low regularity but in higher regularity. I don't know how the argument goes exactly in the polyfold framework; it's not clear to me. But in general, the point is you want this to be as low regularity as possible, just to make sure you're capturing all your curves. Otherwise you could run into the same mistake that was already suggested: put in a double exponential, and now a priori you're excluding something. By making this large you're restricting your ambient space, and by being cavalier about this you might lose some information. In this particular case, yeah, you're right, it's probably not a problem. You have to make a choice. This way is easier to write, that's all. Wisdom, yes.
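Written out, the two spaces just described are (my transcription of what I believe was on the board; delta_0 is the smallest weight from the scale above, the regularity 3 is the choice just discussed, and c plays the role of the common asymptotic value at the puncture):

\[ E = \Big\{ (u^+, u^-) \ \Big|\ u^{\pm} : \mathbb{R}^{\pm} \times S^1 \to \mathbb{R}^{2n},\ \exists\, c \in \mathbb{R}^{2n} \ \text{such that}\ e^{\delta_0 |s|}\, \partial^{\alpha}\big(u^{\pm} - c\big) \in L^2 \ \text{for all } |\alpha| \le 3 \Big\}, \]

\[ Z = \bigcup_{A \in C} H^3\big(Z_A, \mathbb{R}^{2n}\big). \]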
So just to recap these cylinders — what was it again? You had the two nodal disks, with the node in the interior, and these represent polar coordinates near the node? Well, what happened was: we started with a pair of nodal disks. From that, you have holomorphic coordinates around the node that you fix, and those give rise to these two half cylinders. Then those two half cylinders plus a gluing parameter A give you Z_A, which is the cylinder of finite modulus. And then, once we had this domain, we also wanted to know: if we had maps defined on that pair of nodal disks, what should the corresponding map defined on Z_A be? We arrived at that with this formula right here. And now, of course, we said that there are some ideas running around here. One is to use this pre-gluing map to define a neighborhood of a nodal map. So what I'm saying is that, in some sense, this ends up being part of your neighborhood of a nodal map, and I wanted to be able to make that precise here. And we had the second idea, which is right but wrong — and for the most part we should think of it as wrong, and I'll show you why — which is to use this pre-gluing map to build a chart. I mean, if this pre-gluing map defines a topology, that's essentially telling you that you're finding all nearby non-nodal curves to a given nodal one using the pre-gluing map. If this defines a topology for us — which it does — then you're finding all nearby maps. So why not try to make use of this pre-gluing to actually parameterize all your nearby — sorry, all your nearby non-nodal maps? But I thought the pre-glued maps are not solutions. Correct. Correct. So which of my statements is confusing? I think it's when you say a neighborhood of a nodal map — you really mean it's still in the ambient space. Still in the ambient space, yes. It's not a nodal pseudoholomorphic map. Right. And to get the neighborhood you have to complete it — this is just a cylinder; you have to add back in the disks that you forgot about. That's true, although I would say that most of today's talk is about trying to forget as much topology as possible. When you're talking about the neighborhood of the map — a neighborhood of a nodal map — a nodal map from what, right? I mean, I could have changed my definition from this pair of half cylinders to the nodal two-disk version. Is it completely obvious that when I then want to discuss neighborhoods of things with many nodes, I'm not going to run into additional trouble with interference between the different nodes? Well, you can see from this definition of pre-gluing that it's a purely local phenomenon, and your nodes are always a bounded distance away from each other — a thick-thin type decomposition tells you this. And when you make things holomorphic, it will not be so local. That's a different question. At this point nothing's holomorphic; I'm building ambient spaces. In some sense, that's all we're doing for this entire first week: building ambient spaces, with appropriate bundles and structure and so forth.
And so a benefit is that we don't run into these problems of — what — associativity in terms of gluing, for instance. Good. Just one more question: the c is there to account for the shift, because it only matters in the alpha = 0 case? Right, so what's going on here — that's actually a good point. If you look at this, if I don't put this c here, then this function space is essentially the same as the previous one, modulo truncating your domain to half cylinders. So why do we have this c in here, and what does it do for us? Well, think of it this way: I've got a pair of nodal disks as my domain, and I think about maps from there. For simplicity, once you're writing down problems, you're going to assume, say, that your base map sends the node to 0. But then you're going to want that node to be able to move around, and that's what this c allows for, basically. And so yes, if I were constructing purely toy problems for this lecture, I could have said, OK, let's kill this. But I really want to keep things as close as possible to Gromov-Witten, because I think that should enable Nate to recycle a lot of the stuff that I've done and even write down charts in that case. So now I want to address this point up here, the second one: it's right but wrong. And I want to emphasize for the moment that this idea is wrong for sure. Why is it wrong for sure? Injectivity. Yeah, that's exactly right — Katrin even mentioned this yesterday as well. When you look at this problem, if you want to use this pre-gluing map to build a chart or a parameterization or whatnot, you'd want it, at the very least, to be injective. And we have a problem: because of the way the pre-gluing map is defined, it truncates your domain, and you lose a ton of information. Consequently, this map ends up being infinity-to-one in general — general meaning A nonzero. And an infinity-to-one map is a terrible thing to use as a parameterizing map. So in this case too, you're saying we lose data? Yeah, absolutely. Where's my hook? I get to be a pirate for my talk too. Let me move this down so we can still see this definition up here. You can see what's going on here: this bit here is u^+. So u^+ is certainly defined on this region here, and beta in this region — I should really say — is 1. So u^+ gets weight 1, but 1 - beta is 0 in this region, so the map defined on this region right here is just u^+. The same argument tells you that on this region here, it's just u^-. But when it's just u^-, u^+ is still defined on all of this region over here. So all the information of u^+ beyond here is just being killed. It's gone. It's killed because this is a cutoff function — I erased it, but it looks like this — it cuts off u^+ after some finite amount of time, and whatever the map was in this region is lost. Does that make sense? Good. OK. But is it obvious that there is no way to not lose information when you do this? From this setup, from what we've done so far, are you absolutely forced to lose information — do you always have to truncate here?
I mean, yes, it's always going to be the case that you lose information from this setup. Actually — no, wait. No, no, no. No, because I got really confused about this: there's another way of making pre-gluing maps by which you don't lose information, which is that you rescale everything — that whole entire infinite cylinder, you can rescale it to half of the cylinder, and then just attach the two to each other. So, puzzle: what's wrong with that as a chart map? That's a good question. Homework problem. That's excellent, I love this. OK. So I said right but wrong, and now we've seen why this idea here is wrong. So then the question is: why is it kind of right, though? The key thing that fails, in this setup anyway, is that you lose a bunch of information. So there's a fix that one would like to try, and the idea is to find a way to keep track of the lost information. The way we're going to do that — I want to write this — OK, I'm going to define another cylinder. This one is a doubly infinite cylinder, and I'm trying to draw it suitably parallel to the other ones because it's related. I'm going to call this C_A. Actually, let me fix that: if I were only considering this finite portion of the cylinder, it would essentially be Z_A; the whole thing is called C_A. So Z_A is still my finite cylinder, and C_A is my doubly infinite cylinder. I can write a definition for C_A, and you should be able to guess it from what we've seen so far: R^+ (with s-coordinate) x S^1 (with t-coordinate), disjoint union R^- (with s'-coordinate) x S^1 (with t'-coordinate), modulo the equivalence relation, which is the same as the one we had before: (s, t) is identified with (s' + R, t' + theta). And then I can define the minus-gluing — or rather minus pre-gluing; what is it, anti-pre-gluing? Sure, that's what we're calling it — as the following. You're not talking about holomorphic curves at all; there's already a map which looks like your broken map, namely the one that takes that central circle to a point. So why are we mucking around with making a broken map? It's already there. What? I mean, aren't we allowed to have a map collapsing a circle to a point? Right — we don't require maps to be holomorphic here, so you are allowed to collapse a circle to a point. Right. What I'm trying to do is motivate — here's the goal of my talk, and I'm running short on time here — to say: look, here's a sequence of things. I wanted to introduce the pre-gluing map, which is standard in this sort of analysis; you're not going to get away from doing some sort of pre-gluing. Then what I want to do is say: OK, we'd like to use that map to make a parameterization near something nodal. The reason you can't do that directly is that there's information loss. So now what I want to do is keep track of that information. And the punchline is that if I keep track of that information in a clever way — and the clever way is the way HWZ defined it — what ends up happening is that you land in a weird subset of a scale Banach space. It's an incredibly strange subset — fairly difficult, I think, jumping dimensions and so forth.
But nevertheless, it has a straightforward smooth structure on it. And with this, you can then build essentially something like a manifold — an sc-smooth manifold in some sense — which has enough structure: it has a differentiable structure, you can build a Fredholm theory on it, et cetera, et cetera. And so — was there any way to answer my question? Yes, because what you're suggesting is — I'm going to say: what next? How do you build a Fredholm theory on your problem? How do you prove a perturbation theory, how do you prove regularization, with any other option that you're going to try? The point is that I'm following HWZ, so I know all that stuff is going to appear. If you want to make a change at some point, then I'm going to say: OK, you've got another 1,000 pages to write. That's how I would think of it. But Dusa probably has a more polite answer. If you have a decomposition where you just say on one half use u^+ and on the other half use u^-, and you don't worry about them converging to the same thing, assuming they both go to some constant c, then you lose control of the analysis — I think that's the problem. The whole idea is that you've got to stay in control of the analysis, and these kinds of cutoff functions allow you to do that: they give you a smooth transition you can estimate to the order you're working at. In our book we use the plus-gluing and the minus-gluing for the polyfold theory, but with the plus-gluing you show that if you start off with holomorphic things, then the plus-glued holomorphic things are sufficiently close to being holomorphic that there's a Newton-type process that makes them holomorphic — you do need to carry out the process, and that requires an estimate, but you do it. So what he's saying is that you need to remain in some analytic framework; if you dump the one they've set up, you're welcome to make your own. I think that's also in the minus-gluing. Do you have a typo in it? No — I think you subtract the mean. No, this minus should factor through, so this becomes a plus, and this should be a plus. This is a minus? You've got a minus times a minus, so it's one minus beta times a plus. No, you have to correct the end terms — subtract the mean value from them. I had a minus from the last time I gave this talk. Oh, it's the beta, and there's the beta here. Yeah, you're right, you're right — you want them to add. Thank you. What is what? Pardon? What, in just words, do each of these pieces mean? It's not obvious. I'm getting there, right? So, yeah — even if you've seen pre-gluing before, this should look terrible, and what I want to do is try to make it a little more understandable. It looks awful, and there's no way around the fact that it looks awful, but once you open it up and see where it comes from, it makes a decent amount of sense. In the definition of beta_A — the first one is the u^+ term, right? Sorry, what have I made a mistake on? You've written the plus — and it should be s minus R over 2, presumably, not R over 2. Yeah, yeah, that's right too. There's like a hidden matrix here. Yeah, there is — I'm showing that in a second. It makes life a lot easier, right? So, what do I want to say? All right, I guess the first claim is that this keeps track of the lost information. I'll make that precise in a second.
One big simplification — and we'll see this in just a second — is that if you're seeing this for the first time, just kill these average value terms. Pretend they're not there and write the same thing down; that at least gives you a toy problem to tinker around with. I'll tell you why those terms are needed in a second. Is that any different from the thing that was there before — I mean the one right above it? It's clearly different, but only by some sort of obvious symmetry? Yeah, there is a lot of symmetry. The idea is to keep track of precisely the lost information, so whereas on one side you see a beta, you want to see a 1 - beta here, and where you see a 1 - beta, you want to see a beta. So yes, there is symmetry: it's designed precisely to keep track of the lost information, and I can show you how it does that. This is giving you the other side: it's keeping track of the portions that were killed by your cutoff functions previously. You could have done one interpolation or the other, and this is the other one. The point is that you have a domain split into two pieces. The pre-gluing map keeps track of this information and this information and interpolates in between; the minus-gluing map keeps track of the other pieces of information and interpolates in between. And my claim is that you can actually reconstruct the first two maps from the latter two. May I make a suggestion? We need to end promptly at 3:15. Yeah. Why don't we let Joel finish what he had in mind, and then we have our wonderful TA to answer questions — and of course I can be assaulted with questions as well. The voice upstairs. So... is that a Hofer reference? Oh, the voice upstairs — right, of course. Helmut's so tall, I just thought. So here's a trick, and this trick is really, I think, where all this comes from. I'm going to write this as a matrix; I want to keep track of both pieces simultaneously. If I do that, what do I have? I have the matrix with rows (beta_A, 1 - beta_A) and (beta_A - 1, beta_A), composed with the diagonal matrix with the identity map and the shift map of A on the diagonal and zeros off the diagonal, applied to the pair (u^+, u^-), plus the vector (0, (1 - 2 beta_A) times the averaging term). And now I have to tell you — you can kind of guess — beta_A(s) is equal to beta(s - R/2), and the shift map of A applied to some map u, evaluated at (s, t), is u(s - R, t - theta) — are those pluses or minuses? Indeed. Thank you, those are minuses, good. OK, writing it this way: the first step is just to verify that this is true, that you have this equivalence. Second — and I should have the av term written in here — I said: assume the av term is zero, for a toy problem. If you do that, you just have this matrix. And the first thing I want to point out is that this thing is just obviously invertible, right? Compute the determinant: it's beta_A^2 + (1 - beta_A)^2, which is bounded away from zero by the properties these cutoff functions have. So you can invert this; and then in the other factor I've got an identity and I've got the shift operator, which is also clearly invertible. So writing it this way, it's clearly invertible, I think, for any fixed A.
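Since the anti-gluing formula itself never gets read out cleanly (the signs were being corrected live at the board), here is the version consistent with the matrix just described — take it as a reconstruction rather than a verbatim transcription, with av denoting the averaging term, beta_A(s) = beta(s - R/2), and the convention that a term multiplied by a vanishing cutoff is simply dropped where u^± is not defined:

\[ \ominus_A(u^+, u^-)(s,t) \;=\; -\big(1 - \beta_A(s)\big)\,\big(u^+(s,t) - \mathrm{av}\big) \;+\; \beta_A(s)\,\big(u^-(s - R,\, t - \theta) - \mathrm{av}\big). \]

Expanding this gives (beta_A - 1) u^+ + beta_A (shifted u^-) + (1 - 2 beta_A) av, which is exactly the second row of the matrix form. Where the pre-gluing keeps u^+ on one end of the neck and the shifted u^- on the other, the anti-gluing keeps (up to the subtracted average and a sign) the shifted u^- on the first end and u^+ on the second, which is how it records precisely the parts that the pre-gluing throws away.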
And then, OK, it's a touch more work — in the lecture notes there's a homework problem, and I provide the hints which let you walk through it — to show that this is still a bijection even with the averaging terms added in. In fact, it's a linear bijection. So I think writing it this way at least cleans up the mess; I prefer it written this way. And then — sorry, I'm going through my lecture notes; by the way, this will run over by just a couple of minutes. Thank you. Just quickly: you promised to say what the averaging terms were for. Right. I thought I said it at least once, but I can say it again. The purpose of the averaging terms is to allow — right, we had a question up front about why we had this constant c in the function spaces I just erased; oh good, I put them up here. We wanted to allow this constant c in there because you want your node to be able to move around. And consequently, when you write down this map, you need these averaging terms so that it is still the case that the node can move around. You'll see why, hopefully. Because if you do the same thing for the closed theory, it will not be there — essentially your orbit at the end is fixed there. Yeah. So here's basically what happens, and why any of this is relevant for anything. We had this idea that we would like to use this plus-gluing to build a chart for our neighborhoods, and the problem was that the map we wanted to use is infinity-to-one. So then I said: well, let's keep track of the lost information. Now you keep track of the plus-gluing and the minus-gluing together, and we said that that is a bijection. And that's good, because what it means is that if I set O to be the set of all triples (A, u^+, u^-) such that the minus-gluing ⊖_A(u^+, u^-) is zero, then the plus-gluing maps O into Z essentially bijectively — this is for A not equal to zero. So now at least we have a bijective correspondence. And once you have a bijective correspondence, the next natural question is: is it possible to give this space a differentiable structure? You look at this and you say, OK, the zero set of the minus-gluing map is going to be some set — who knows what it looks like; a priori it might be complicated — so why should it have any sort of differentiable structure? And the magic is that it does — or rather, let's say, it supports the sc-calculus. In just a brief second I can tell you why. Let's call this map here the total gluing, ⊡_A(u^+, u^-) — the box gluing — and what we said was that this thing is invertible. So now you can define the map r(A, u^+, u^-) — lowercase r, not to be confused with the gluing length R — which is basically: you take the total gluing, compose it with the projection onto the first factor, and compose that with the inverse of the total gluing; that's for A not equal to zero, and otherwise, for A equal to zero, r(A, u^+, u^-) is just (A, u^+, u^-). So you define this map, and it turns out — just to go over this very quickly — that it has a very nice property, which is that r composed with r is equal to r, and it's sc-smooth.
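To have the retraction in front of us in symbols (again a reconstruction of the board formula; ⊡_A = (⊕_A, ⊖_A) is the total gluing from the matrix above, and pr_1 is the map (x, y) ↦ (x, 0) mentioned in the questions below):

\[ r(A, u^+, u^-) \;=\; \begin{cases} \Big(A,\ \boxdot_A^{-1}\big(\oplus_A(u^+, u^-),\, 0\big)\Big), & A \neq 0, \\[4pt] (A,\ u^+,\ u^-), & A = 0, \end{cases} \]

so that r ∘ r = r, r is sc-smooth, and the image of r is exactly the set O = {(A, u^+, u^-) : ⊖_A(u^+, u^-) = 0} defined above.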
And a consequence of this — it's a very fast consequence, which I would go over if I weren't already five minutes over my time — is that from these two simple facts you can define a notion of an sc-smooth function on, say, the image of r. The image of r is precisely the set of points where the minus pre-gluing is zero. You do that simply by saying: f, defined from O into some other space of the same sort, say O', is sc-smooth — or say sc^k — precisely if f composed with r, on the slightly larger space, is sc^k. The point is that this r acts as a retraction, and images of retractions are called retracts; if the retraction also happens to be sc-smooth, that guarantees that the retract supports an sc-calculus. I'm going to talk about this more at the beginning of my lecture tomorrow. But the whole point — let me summarize very briefly — is this: we said that what we really wanted was a parameterization, a chart, near our nodal maps. The pre-gluing map is infinity-to-one, so we made use of this minus-gluing map to cut out a bunch of garbage and make the map one-to-one. And now, if you're clever with this rewriting, you can see that that set can be written as the image of an sc-smooth retraction — which I'll talk about more next time — and as a consequence it supports the sc-calculus, meaning we have a notion of a differentiable map from one set of this form to another. And these then provide the local models for M-polyfolds. But we'll talk more about this next time. I can answer a couple of questions, yeah. OK, we have a few minutes for questions, but again remember that we have many voices upstairs. Any questions? Is it a fact that for a usual manifold, the image of a smooth map that squares to itself is a manifold? It's a manifold, yes — that came up last year; it's a supplementary problem. But in the sc-calculus it's something more general. Yeah, for Banach manifolds it's also true. In the sc-calculus, these things can have locally varying dimension. They still have tangent spaces, because by the chain rule the image of Tr is the tangent space, and it doesn't depend on the choice of the retraction, since the image is the same — so it's a definition. You can even have one-dimensional stuff, or finite-dimensional stuff, sitting inside an infinite-dimensional space, with the right dimension of tangent space — which you would never see classically. Any other questions? Sorry, just a question about the projection onto the first factor. If you have — well, you zero out the second term; I should have said that pr_1(x, y) = (x, 0). And what's the image of r? Right, so r retracts onto precisely O. Oh yeah? It's a subset of C x E. I'll make this clearer next time — I went really fast in the last few minutes, I'm sorry about that — but the claim is that the image of r is O. Tinker with it. Isn't O sitting inside this funny space of maps on the domains Z_A? No, no, O is in C x E. Yeah, O should be a subset of C x E. O is a subset of C x E — have I written something that says otherwise? So there's a subset of C x E which is a smooth retract, and if you do the pre-gluing, it maps bijectively onto this union over the arbitrarily long cylinders. But O itself is in fact sitting inside C x E as the kernel of some map. C x E is not the kernel; it's the domain of a map.
It's the domain of r. So actually, if you fiber it over C, then fiber-wise it's a linear retraction? Yeah, for fixed A, yeah. But if A is zero, it's the identity — it does nothing — and if A is nonzero, it actually maps onto a proper subset. Yes. Yeah, yeah. Yeah, I mean, that's what I was confused about; I was confused about A equals zero. It's changing coordinates and doing projections, yeah. OK. If A is zero, nothing happens; it's just (u^+, u^-). So is the minus gluing when A is zero defined to be zero? The minus gluing when A is equal to zero — well, it can be defined as zero. Yeah, I guess it has some sort of definition in that case. C_A is empty in that case. Pardon? C_A is empty in that case, I think. Right — but there's exactly one map from the empty set, and if it takes values in a vector space, it's zero. OK, fantastic. OK, with that, I do actually have to go. So thank you for your questions.