Thank you very much, Joe. All right, so a preliminary bit to my talk here. Oh, I guess the first thing I should do is just apologize to everyone that we actually have two polyfold lectures on the same day. I sort of feel that's a little bit cruel, but what can you do? The second thing I want to point out is that at this web address here, there's an outline of all the polyfold talks, and it has a decent amount of information. In particular, it's got abstracts, which can also be found on the IGS website, but also a decent amount of lecture notes. So my lecture notes for today's talk, for instance, are on there already, and tomorrow's talk, and soon my talk on Wednesday will be there as well. We're hoping to add more: if someone writes up Katrin's lecture notes in a semi-readable fashion, I can put those up there. And I'm fairly certain I'll have Helmut's up there by the end of the week, so you can follow along. In particular, last I heard, Helmut's talk was going to be a slide talk, so I should actually be able to have the slides up there as well. So when he goes too fast, you can quickly backtrack on your laptop or iPad to see what the heck he was talking about. So there's a fair amount of stuff on there. In particular, there are going to be homework problems, which are already posted, and references. So there are links, for instance, to all the HWZ papers on polyfolds, so you can quickly access a variety of stuff that's referenced. So that's the preliminary information for my talk. Right. And then, just briefly, before I actually get into the stuff I'm going to talk about, before I start talking about the sc-calculus: there are these words being thrown around, like polyfold, polyfold theory. So I think it's a natural question to ask: what is a polyfold? Can I have some vague idea before we continue? And so I thought I would write this down just as a kind of guide in case you're confused.
So I want to answer this question: what is an M-polyfold? I want to answer it vaguely, but give you some idea. And so to do that, I'm going to say, well, an M-polyfold is like a Banach manifold in the sense that it has local charts and so forth, and it potentially has boundary and corners. And it's a suitable ambient space in which to do this regularization procedure. It's kind of the space of all possible maps with the right topology on it, which allows curves in Gromov-Witten theory to develop nodes and allows Floer trajectories to break. And in particular, it's possible to build bundles over these spaces for which the Cauchy-Riemann operator is a section, and then we can do this regularization procedure. So the key idea is that it's like a Banach manifold with boundary and corners, but it has better structure. And we'll see, well, better in some sense: it has more general structure, which allows us to do more things. And then there's also this key point: an M-polyfold is to a manifold what a polyfold is to an orbifold. So what's the difference between a manifold and an orbifold? Well, essentially, it's having a local group action. And in the same way, that's the difference between an M-polyfold and a polyfold. So the discussions this week are all going to be about M-polyfolds, and then this gets built into the larger, more general polyfold framework next week when Helmut talks. So this is just to give you some sort of vague idea as we move forward. So in my lecture here, I want to talk about the sc-calculus. And to get there, I want to start with a motivating problem. And I guess let me just make a quick note to myself as to when I'm supposed to finish by. OK, so motivating problem. Here it is. We want to consider C^1(S^1, R), the continuously differentiable maps from the circle into the real line, real-valued maps from S^1. And I want to put a group action on this.
And I'm going to define this group action essentially by saying that g applied to a pair (s, f) is the function f(s + ·); that is, g(s, f)(t) = f(s + t). So it's essentially just a shift map in the domain of your function f, or you can think of this as a reparameterization action. And so now I can state a claim, and I think Katrin will probably tell you that this is the most important part of my talk: g is not classically smooth as a function. And by classically, I mean it's not Fréchet differentiable, for instance. So in fact, we can make a slightly stronger statement. Claim: g is nowhere C^1. Proof: well, it's a homework exercise. OK, now that's a little bit cruel, simply because if you start with this, especially if you don't think doing a lot of analysis is fun, this might seem just a touch overwhelming. So I wanted to give you some idea as to why it's not anywhere differentiable, so that you start with at least a little bit of intuition when you look at this, right? And your intuition should basically be the following. If I look at this map and I say, OK, well, if I want to vary f, that's fine; I vary f. But if I vary s, then I'm varying in the domain of f. And in particular, if I differentiate with respect to s, then I end up having to differentiate my argument, and that's a bit of a problem. I mean, in particular, if f is just C^1 and no better, and I try to differentiate with respect to s, then I'm going to have to differentiate f, and that's going to kick me out of my space. Problems. So this, at least, is the intuition, right? But we can be more precise than that, because this is a stronger claim: this is nowhere differentiable. So why not? Let me give the ideas. Outline. Well, step one is to say, let's suppose it is differentiable. Then the derivative dg has to exist, and it can be computed by computing directional derivatives.
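A small numerical check of that intuition (my own toy illustration, not from the talk; the loop sin and base shift 0.7 are arbitrary choices): at a pair (s, f) with f smooth, the difference quotient of g in the s-direction really does converge to f'(s + ·), so any candidate derivative of g must involve a derivative of f.

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 2001)

def g(s, f):
    """The shift action g(s, f) = f(s + .), for f given as a callable."""
    return f(s + t)

f = np.sin          # a smooth representative loop
s = 0.7             # an arbitrary base shift

# Difference quotient in the s-direction vs. the formal answer f'(s + .).
for sigma in (1e-1, 1e-2, 1e-3):
    quotient = (g(s + sigma, f) - g(s, f)) / sigma
    error = np.max(np.abs(quotient - np.cos(s + t)))
    print(sigma, error)
# The error shrinks with sigma, consistent with the directional derivative
# in s being sigma * f'(s + .) for smooth f; when f is merely C^1, this
# f' term is exactly what causes the trouble.
```

For f merely C^1 the same quotient still formally converges pointwise, but the resulting expression has one derivative less regularity than f, which is the loss the talk is pointing at.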
So then in particular, if you make that computation, you see that dg at the point (s, f) in the direction (σ, φ) has to be given by dg(s, f)[σ, φ] = σ f'(s + ·) + φ(s + ·). Great, OK, so what? Well, differentiability guarantees that, yes, the φ term appears because I'm differentiating with respect to f, and the σ f' term appears because I'm differentiating with respect to s, which forces a derivative of f. So the following equation, call it (star), must be true: the limit, as (σ, φ) tends to (0, 0), essentially in the C^1 topology, of the norm of g(s + σ, f + φ) minus g(s, f) minus dg(s, f)[σ, φ], divided by the norm of (σ, φ), is equal to zero. This is essentially what it means for our derivative to be an approximating linear map. So this equation has to be true. However, what you then do is tinker a little bit. You consider a family of functions, which I'm going to write as φ_σ for σ in (0, 1], and it essentially has the following form: a little tent of height σ², with its peak at distance σ from the left endpoint of its support and total width 2σ. In particular, as a consequence, you see that the slope of my function on one side is φ'_σ = σ and on the other side φ'_σ = −σ. And then we want this family of functions to be C^1, so we round out the corners just a little bit. That point is more or less irrelevant to the key fact, which is the following: (σ, φ_σ) goes to (0, 0) in C^1, but (star) does not hold. In other words, if I plug this family of pairs into the quotient in (star) and pass to the limit, I don't get zero the way I should. And here I'm checking that (star) fails at the point (s, f) = (0, 0), which is actually not too difficult to check.
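Since the board sketch is hard to reproduce in text, here is one way to organize the computation at (s, f) = (0, 0); the tent family follows the talk, and the norm estimates are a sketch rather than a full proof.

```latex
% Counterexample sketch at (s,f) = (0,0), using the tent family above.
% phi_sigma: piecewise-linear tent, corners rounded to make it C^1.
\[
\varphi_\sigma(t) =
\begin{cases}
\sigma t, & 0 \le t \le \sigma,\\
\sigma(2\sigma - t), & \sigma \le t \le 2\sigma,\\
0, & \text{otherwise,}
\end{cases}
\qquad
\|\varphi_\sigma\|_{C^1} = \max(\sigma^2, \sigma) = \sigma \to 0 .
\]
% At (0,0): g(0,0) = 0 and dg(0,0)[\sigma,\varphi] = \sigma\cdot 0' + \varphi
% = \varphi, so the remainder in (star) is phi_sigma minus its shift:
\[
g(\sigma, \varphi_\sigma) - g(0,0) - dg(0,0)[\sigma, \varphi_\sigma]
  = \varphi_\sigma(\sigma + \cdot\,) - \varphi_\sigma .
\]
% Its C^1 norm sees the derivative jump across the peak: for t slightly
% below sigma, phi'_sigma(t) = sigma while phi'_sigma(t + sigma) = -sigma,
\[
\big\| \varphi_\sigma(\sigma + \cdot\,) - \varphi_\sigma \big\|_{C^1}
  \ \ge\ \sup_t \big| \varphi'_\sigma(t+\sigma) - \varphi'_\sigma(t) \big|
  \ \approx\ 2\sigma ,
\]
% so dividing by \|(\sigma,\varphi_\sigma)\|_{C^1} \approx \sigma leaves a
% quotient bounded below by roughly 2, and the limit in (star) is not 0.
```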
So if you get bored with the rest of the talk, you can go ahead and try to work out this example. And then the point is that to finish off my claim that g is nowhere dense, sorry, that g is nowhere differentiable, it's just a matter of shifting this base point around, making it more general. So it's not too difficult; especially if you can see why this case is true, then you can do the work necessary to prove the more general case. And the conclusion, just to emphasize, as I suggested Katrin would think it's the most important part of my talk, is that this reparameterization action, in this toy case, is not classically differentiable, period. So if you have a reparameterization group acting on your space of functions, in general that will not be smooth in a classical sense. So something has to be done about that. Any questions so far? I'm not sure I entirely agree with Katrin's estimate, maybe, but I suppose I have spent a lot of time locked in Starbucks working on this stuff. That may count for something. OK. So here's a question that, if I were seeing this stuff for the first time, I might ask. My question is: so what? I have this toy problem, I said that this reparameterization action isn't classically smooth, well, so what? What's that relevant to? So let me give the following example. Here's a point p, and here's a point q, and you can put these in some fixed manifold or R^n, whatever you want to do. And suppose you're considering trajectories that limit to p as time goes to minus infinity and to q as time goes to plus infinity. In particular, maybe you define some space of functions. So let's consider B, the space of C^1 functions γ mapping R into, well, whatever my target is here, I'll just name it W, such that γ(−∞) = p and γ(+∞) = q.
And the reason why one might want to do that is, say, for instance, you have some Morse function, and p is a Morse index one critical point and q is a Morse index two critical point, and you want to count gradient trajectories between the two. So maybe here's one, but there's some sort of ambient space running around here. And then, of course, in this setup there's a problem that arises. And that is that if I have some γ, which is a solution to the gradient flow problem, then I can also shift it in the domain by t and get another solution. And since I can do that, I have a whole R's worth of reparameterizations of a single solution showing up here. But of course, if we're doing Morse homology, we want to be able to count these things, so you want to quotient out by them. And how might you quotient out? If you ask naively, without trying to do anything too complicated, what might I do? What I think one could naively consider doing is to say: I choose a local patch, or, let's see, I choose a codimension-one submanifold H, locally anyway, near the image of this trajectory, which this trajectory crosses through transversely. And then what I can do is define B_H to be those γ in B such that γ(0) is in H. And this cuts down my domain a lot. This essentially allows us to choose a unique representative, assuming that this trajectory in here is injective; and even when it's not, you can make appropriate modifications. This cuts down your function space by exactly one dimension. In particular, what this looks like, at least locally, is that B_H is really B mod R, where I'm thinking of R as automorphisms of my domain, which in this case is R. So great. And so that's good.
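To collect the construction just described in one place (notation as above; the transversality and injectivity caveats are the ones just mentioned):

```latex
% The ambient space, the shift action, and the local slice.
\[
B = \{\, \gamma \in C^1(\mathbb{R}, W) : \gamma(-\infty) = p,\
\gamma(+\infty) = q \,\},
\qquad
(t \cdot \gamma) = \gamma(t + \cdot\,).
\]
% Choose a codimension-one submanifold H through the image of a trajectory,
% transverse to it, and cut down by the slice condition at time 0:
\[
B_H = \{\, \gamma \in B : \gamma(0) \in H \,\},
\qquad
B_H \cong B / \mathbb{R} \quad \text{locally.}
\]
```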
You can say, as it turns out, that B_H in fact has a nice Banach manifold structure. So you can put a bundle over the top of it, you can talk about Fredholm sections of that bundle, and show that the gradient equation essentially gives you a Fredholm section. And you can do perturbations, blah, blah, blah; you can do a fair amount of stuff at this point. But then you say, OK, well, now the problem, though, is that I have a Banach manifold which depends on my choice of H. So what you want to do is just double check really quick that, well, if I chose a different hypersurface H', since in general there's no natural or canonical choice of this hypersurface, then let me just choose a different slice and go ahead and make sure B_{H'} is well-defined. And that's fine, that's not an issue; in fact, this ends up being a nice Banach manifold too. And then you say, well, this is a nice Banach manifold and that is a nice Banach manifold, so I should probably just check that the transition maps are smooth. So I write down a transition map Φ from B_H into B_{H'}. And it necessarily has to have a form of the following type: Φ(γ) = γ(τ_γ + ·), where τ_γ is the shift depending on γ. The point is that if I have a function in B_H, then 0 gets mapped into the hypersurface H; I want to keep the same image and just shift it along, so I reparameterize my domain so that now Φ(γ) applied to 0 is in H'. So this is my transition map, and that's no problem. And then you want to check that it's smooth. It is a problem. And the problem is that if I differentiate with respect to γ, well, this τ_γ is a reparameterization which depends upon γ.
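Schematically, with τ_γ the shift just described, the transition map and the problematic chain-rule term look like this (a formal computation, sketched under the assumption that τ_γ depends differentiably on γ):

```latex
% Transition map between the two slices: shift so that time 0 lands on H'.
\[
\Phi : B_H \to B_{H'}, \qquad
\Phi(\gamma) = \gamma(\tau_\gamma + \cdot\,),
\quad \text{where } \gamma(\tau_\gamma) \in H' .
\]
% Formally differentiating at \gamma in a direction \xi produces, among
% other terms, the derivative of the shift time hitting the argument:
\[
D\Phi(\gamma)\,\xi \ =\ \dot{\gamma}(\tau_\gamma + \cdot\,)\,
  (D\tau_\gamma\,\xi) \ +\ \xi(\tau_\gamma + \cdot\,),
\]
% so differentiating \Phi costs one derivative of \gamma: exactly the loss
% of regularity seen in the toy reparameterization example.
```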
So differentiating with respect to γ means that I have to differentiate this shift function with respect to γ. But then the chain rule says I have to differentiate the whole expression with respect to its argument. And now I'm in the scenario where, in order to differentiate Φ, I have to differentiate my argument. Problem. It's exactly the problem we had with the first example that I gave, the one showing that the action was not classically differentiable. Any time you're in this sort of scenario, where I have a function defined between function spaces, and in order to differentiate that function I have to differentiate the arguments of that function, that's going to be a problem for classical differentiability. And so that shows up here, it showed up in the first problem, and effectively it shows up in all moduli problems. This is a problem that needs to be resolved. Any questions about that so far? Now, what's your fixation with C^1? Why not just write C-infinity? Well, what's the Banach space structure on C-infinity? So this is the problem. Yes. So the point running around here is that at the end of the day, you have this regularization idea that Katrin outlined. And in order to do this, you say, well, what do I really need to come out of that? I need some sort of implicit function theorem to show up. So whatever spaces I make use of, whatever ambient spaces or bundles I use, I need to have an implicit function theorem at my disposal. And the most analytically easy setup to do that in, I think, is one in which you're dealing with Banach spaces, Banach manifolds, Banach bundles, and so forth. That's the easiest way. There are hard implicit function theorems that you can do, and they share a number of similarities with things that HWZ have done.
But my understanding is that you really want an implicit function theorem, and the right way to have your implicit function theorem is by considering Banach manifolds. Yes, Chris? The problem that you're illustrating would go away, I think, if you did things in a slightly different order and first restricted your attention to the space of actual gradient flow lines, which is a manifold, and then acted on it with this reparameterization; after that you're fine, because those things are smooth. So the question, again, is about the order of things. Yes, so I suppose there's also this issue of order. So, to reiterate essentially what Chris said: if you consider your moduli space, whatever it might be, gradient flow lines in this example, more generally pseudo-holomorphic curves, you first consider that, and then consider the automorphism group acting on it. Well, that is a smooth action, for sure. But then the difficulty is: what if those spaces aren't cut out transversely? Or they're not compact, right? They might not be compact; if you need to glue in nodal curves, for instance, or glue in broken trajectories, then you want a larger ambient space in which you can perturb and then have this regularization theorem, as Katrin suggested, right? So the fact that we want a large ambient space in order to make our perturbations is essentially why I'm focused on this presentation. But that's probably a good point of clarification. Any other questions? Let me maybe make two remarks. I'm already failing at seeing this. No, no, no, the space. Even if you take C-infinity, really the question is what norm you're going to put on it; it doesn't matter whether it's complete or not.
I challenge you to find the norm on C-infinity in which this is differentiable. And Chris's question is exactly why I spent so much time trying to say why we need to quotient first before we solve. Good, okay. So I'm going to spend a second here erasing; if you have further questions, please raise them. You wouldn't get any mileage out of using a Sobolev space instead of C^1, would you? No, I mean, I would say that you end up with all the same problems there. Well, since we're waiting, I'll just say: if you do it in the other order, so you solve the equation, or some kind of thickened-up equation, first, then you have finite-dimensional spaces and then you quotient out. It's probably possible to do it, but it leads to all kinds of complications. I mean, that's sort of the Kuranishi approach, which probably works, but it's not easy; it's not straightforward. This is much more conceptually straightforward, because you're putting a nice structure on an ambient space, so you don't have to piece lots of little bits together; you've got one whole object. And of course the downside, I mean, there is sort of a downside, is that in order to do the polyfold approach, you do have to carry a lot of analytic overhead with you. As you're going to see, you need to know about the sc-calculus and retracts and some language and how the stuff all fits together. Conceptually, I think it is easier than, well, my understanding is that it's easier than the various Kuranishi approaches, but there is additional overhead that you have to keep track of as you go because of that. Good, so okay, so there's this problem. So what's the solution? And the idea, this was HWZ's idea, is to change the notion of differentiability. Okay, so how do we do that? Well, what we need first is the definition of an sc-Banach space.
So I would say it's four things. First, a collection, really a sequence, of Banach spaces E_0, E_1, E_2, and on it goes. Second, as vector spaces, we have that E_0 contains E_1, which contains E_2, et cetera; why the containments go this way, we'll be able to see in just a second. The third condition is that the inclusion of E_{k+1} into E_k is a compact embedding. And for the infinity level, E_infinity is defined to be the intersection, over k in the natural numbers, of the E_k. The fourth property is that E_infinity is dense in each E_k. So before trying to describe these properties, I think it's probably good to see some examples. So here they are. To keep this fairly simple: E_k equal to the C^k maps from S^1 into R, real-valued maps from the circle of regularity C^k. Or maps of so-called Sobolev class W^{k,p} from Σ into, say, R^{2n}, or R^n, it doesn't matter so much, with, say, p bigger than two. And then lastly, E_k equal to V for all k, if the dimension of V is finite. So here are some examples. Yeah, Σ is compact; it's a closed Riemann surface in this case, the sort of thing you would want for Gromov-Witten theory. Okay, so, right, okay, so good. So now the idea is that the first step is somehow to replace a Banach space, which is just a complete normed linear space, with this particular collection of complete normed linear spaces satisfying some properties, some relationships. And what do I want to say? Yeah, I guess I'll just leave it at that for the moment. So now, I said the idea was to change the notion of differentiability; so we first have to define what it means to have continuous maps between scale Banach spaces. So here's the definition. F, a map between E and F, which are sc-Banach spaces, is, well, there are a couple of different ways to say this. You can say either it's scale continuous, and I should say, oh yeah, sc is short for scale, I guess.
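A toy numerical illustration of why the levels of such a scale genuinely differ (my own example, not from the talk; the functions sin(nt)/n are my choice): the sequence g_n(t) = sin(nt)/n is bounded on level one, its C^1 norms stay near 1, while it converges to zero on level zero. Bounded sets on a higher level becoming convergent-friendly on a lower one is exactly the flavor of the compact embedding E_{k+1} into E_k.

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 20001)

def c0_norm(f):
    """Sup norm over the sampled circle."""
    return np.max(np.abs(f))

def c1_norm(f, fprime):
    """C^1 norm: max of the sup norms of f and f'."""
    return max(c0_norm(f), c0_norm(fprime))

# g_n(t) = sin(n t)/n, with derivative cos(n t), written in closed form.
for n in (1, 10, 100):
    g = np.sin(n * t) / n
    gp = np.cos(n * t)
    print(n, c0_norm(g), c1_norm(g, gp))
# The C^0 norms shrink like 1/n while the C^1 norms stay at 1:
# bounded in E_1 = C^1, convergent (to 0) in E_0 = C^0.
```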
Scale, because you have sort of this scale of Banach spaces; this was terminology that was used by, I can't remember whom, Helmut remembers, but it was terminology in the literature elsewhere. Interpolation theory, thank you. So you either say sc or, if you want to pronounce it, scale, whatever; these conventions haven't completely been established yet. But in any case, F is sc-continuous, or sc^0, provided F maps E_k into F_k continuously for all k in the natural numbers. And with my examples here, just to be clear: you might think of this as just one Banach space, but I really want to think of it as a whole sequence, if perhaps that wasn't clear. And the finite-dimensional one is sort of a constant sequence, okay? Good. So now we have a definition of continuity. We'll get to the definition of differentiability in just a second, but I have to mention a few properties first. So one property is that if you have one scale Banach space and you take the direct sum with another scale Banach space, then the level structure is just what you might expect it to be. Another property is that, given one scale space, you can construct other scale spaces, essentially by forgetting a finite number of terms off the front end, the base end, of the sequence. So in particular, I can define E with a superscript one; subscripts denote levels in your scale Banach space, superscripts mean you kill off some finite number of levels. So I can write this as: the k-th level of the E^1 space, assuming E is an sc-Banach space, is equal to E_{1+k}. And more generally, the k-th level of E^n is equal to E_{n+k}, right? So you can extract some others this way. What other properties do we have? Let's see. Maybe I'll just try to conserve board space and write over here.
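In symbols, the two constructions just mentioned, direct sums and level shifts, are:

```latex
% Direct sum of sc-Banach spaces: levels are summed levelwise.
\[
(E \oplus F)_k = E_k \oplus F_k .
\]
% Level shift: E^n forgets the first n levels, so subscripts index levels
% and superscripts shift the whole scale.
\[
(E^n)_k = E_{n+k}, \qquad \text{in particular } (E^1)_k = E_{1+k} .
\]
```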
So, three. This is more definition than property, but I'll list it here anyway, as it's a good place to. If U contained in E_0 is an open set, then U has an sc-structure given by U_k defined to be U intersect E_k. And I should point out that this notion here is at least slightly vague, I think. What does it mean to have an sc-structure on a topological space which isn't a linear space? And I complained about this a lot to Helmut and his co-authors, but they never seemed to clarify this point. And then in the user's guide that Katrin and Roman and Oliver and I wrote, we tried to clarify this a little bit by bringing in this notion of scale topology. And it's sort of interesting that an attempt to make this less vague sort of makes the concept less clear. So I think the easiest thing to do is just allow this ambiguity and say: what's going on here is that an open set in the base level of a scale Banach space also ends up having this filtration, and a lot of the nice features you want, certain local properties anyway, are then induced onto these open sets. And it's best just to leave it a little bit vague, I think. And the last property is that if the dimension of, say, your base level is finite, then E_k is equal to E_0 for all k. This essentially has to do with the density requirement, which is up over here: you need the infinity level to be dense, and the only way to do that with finite-dimensional vector spaces is if they're all the same. OK, any questions so far? So now I need to do something a little bit strange. I have to define the tangent bundle of E, some scale Banach space. And unfortunately, it's not what you would naturally think it should be. In particular, maybe what I'll do, let me do the following, as this is what will show up.
I want the tangent bundle of an open subset U contained in some scale Banach space E. So this is a little bit strange; I can just write it down: TU is equal to U^1 ⊕ E. So remember, the superscript 1 means we're shifting the scale structure a bit. If you're following along, then I think your natural question is to say: why isn't this equal to U ⊕ E? That seems far more natural, I think. So here on this side, U^1 is the base, and the E here is the tangent fibers. And this is simply kind of odd, because now it means that your tangent bundle isn't defined everywhere; it's only defined at those points which, in some sense, have at least one level of regularity. In other words, if I jump back to my examples, which have now been erased, I think, of scale Banach spaces, we said, for instance, the set of real-valued C^k maps from the circle. So now you can say: OK, suppose the base space is the set of continuous maps. What's the tangent bundle to this? Well, the base of the tangent bundle is only defined at those points which are at least C^1, not just C^0. So this is strange, and I can't really tell you why this is the right definition to take until I tell you what it means to be scale differentiable; then this definition will become a bit more clear. So you should be confused. That's a good thing, I think, if you're confused, because you're at least following along enough to be confused. Yes? Isn't this supposed to be U^1 ⊕ E^1? No, that's another point: you'd also think that this would actually be E^1 up here, but it's not; it's just E. So it's also a scale Banach space, right? Right. So if I want to write down, and I can even be clear here, if I want to write down the scale structure on this, then the k-th level has to be (TU)_k = U_{1+k} ⊕ E_k. Yes? I object to your question.
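Written out with the level-shift notation from before:

```latex
% Tangent bundle of an open set U in an sc-Banach space E:
% base points need one extra level of regularity, fibers do not.
\[
TU = U^1 \oplus E, \qquad (TU)_k = U_{1+k} \oplus E_k .
\]
% E.g. for E_k = C^k(S^1,\mathbb{R}): the base of TU consists of the
% (at least) C^1 loops, while a tangent vector at such a loop is only C^0.
```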
I think it makes sense, because the tangent space is a linear approximation, and that's only going to make sense at once-differentiable points. But the question is, a linear approximation to what? I mean, this implies, I think, a notion of differentiability, and we haven't talked about scale differentiability. That's what I said was the necessary component in order to tell you why this definition makes sense. In the back of our minds, yeah, we know what's coming up: what we'll see in a second here is that the reparameterization action needs to be sc-smooth, and in particular it needs to be at least sc^1. And there, that's a situation where I need to be able to differentiate this function, and in order to differentiate this function, I have to differentiate an argument, which means I have to lose regularity. And that's essentially the key. I'll get to that in a second; I'll just restate it, I guess. But that, I think, is the key idea as to why you need something like this. The other point is, if you take, in a traditional sense, a space of C^1 maps, and then you want to know what's the tangent space to that: well, that actually consists of C^1 sections of a bundle. That I always find confusing, because you'd think the tangent space would consist of C^0 sections, and it doesn't; it consists of C^1 sections. So this here is really not the standard notion. But I have to say, this is where the key of the sc-calculus is: the fact that we have this new notion of differentiability. Because all the scale stuff was used, if you look at old implicit function theorems in KAM theory and such; they used Sobolev spaces, scales of Sobolev spaces, but with the traditional notion of differentiability. And the whole point here is that you've changed the notion of differentiability. That's what makes it new. Yeah, absolutely. So should I think of this filtration as anything other than just an increasing amount of regularity?
Is there any other example? I don't know of any other examples. Helmut? Katrin? No, I think regularity is a good picture. I mean, yeah. And it also kind of makes sense, I think, in terms of Fredholm theory. Because Fredholm sections, in particular those arising from elliptic PDE, behave nicely between levels: they work nicely from, say, Sobolev regularity k plus one to k, and they do this for arbitrary k, even though the map itself doesn't change. So the fact that it works nicely across all these levels is also an indicator of why it would make sense to keep track of all this information: because it behaves nicely on the whole scale. Yes? So you were saying that one should live with this; what should I understand when you say U as an open set has this sc-structure? Just a filtration of the set, or? Well, it's exactly what I've written. I mean, in some sense, I would argue that what's happening here is that because you have this sequence of Banach spaces, you really have a sequence of topological spaces, each of which is dense in every space below it. And so in some sense, the way I see this is just a sequence of open sets, all arising from taking this intersection, and they each have their own topology. But the topologies interact nicely, in the same way that the topologies in the scale Banach spaces interact nicely. That's about the best I can do. So when you talk about the direct sum of an open set with an sc-Banach space, do you mean the Cartesian product? Sort of, yeah. Helmut, what's the right notation here? It's a product, is that what you're trying to say? The notation just indicates you're using the vector space structure. Yeah. I mean, it's just notation.
It's U cross E as a set, yeah, maybe that's it, yeah. Although, well, it needs to have a local notion of sums, an affine structure or something. It needs this because it shows up in the definition of sc-differentiable, so it can't just be the Cartesian product; it has to have sort of a local linear structure: if I take a point inside, I need to be able to add other small vectors and stay inside the space. No? Maybe you didn't like that there's an open set on one side and a Banach space on the other. There are lots of things to not like about this definition. But again, by the way, I think this is a point where it really makes more sense to tinker with this and say, well, what's really meant, how is this being used, and then allow for the ambiguity there. Again, if you try to make it precise, I think it gets messier and it becomes less clear what's really meant. So in particular, what I tend to do is just think of this U as sitting inside an entire Banach space, and I'm restricting myself to a neighborhood inside that larger Banach space; that's kind of what's going on. So now I think I can provide the definition here. Unfortunately, I had to erase that, but I can write it up again. Ah. Definition: an sc-continuous function f, which maps U to V, where these are open sets in corresponding scale Banach spaces, is sc-differentiable, or sc^1, provided: one, for each x in U_1, so the base points have regularity at least one, there exists a bounded linear operator, which I will write as df(x), depending on your base point x, in the bounded linear operators from E_0 to F_0, such that the limit, as h goes to 0 in the 1-norm, of the quantity f(x + h) minus f(x) minus df(x) applied to h, all of that in the 0-norm, divided by h in the 1-norm, is equal to 0. So that's going to be our first condition.
Second condition: Tf, defined to map TU into TV, given by Tf(x, h) = (f(x), Df(x)h), is SC0. So in condition one, you mean for each x in U_1, right? Yes, that's what I've written. Sorry, well, it says U, but there is a subscript 1 there. And it follows that Df actually takes Ek to Fk. What does? The Df, your bounded linear operator; you said it takes E0 to F0, and in fact the second condition implies it's got to take Ek to Fk. Not necessarily. It depends on the regularity of x. If x is only on level 1, then you can only say that on level 0. If x is on level 2, then Df(x) actually works on level 0 and level 1. If x is smooth, then that's true. Any other questions about this? Can you say a few words about why we are doing this stratification? It seems like a lot of people don't really know what's going on. Yes, I will answer that question, and I'm getting there in just a second. I want to add one piece of information here that I think would have been helpful the first time I saw this; by first time, I mean the first several times I saw this. This definition seems kind of technical. I'm looking at this, I'm easily overwhelmed by too much information, and so this just looks complicated to me. So before moving on, the first thing I want to do is remind us: what does it mean for a function to be differentiable? Well, you need two things. One, you need an approximating linear map. And two, it needs to, in some sense, vary continuously. Now, you have to make these notions precise, but these are the essential ideas in the classical setting. And I just wanted to make clear that that's exactly what I'm doing here. The first condition is guaranteeing the existence of a linear map, and it has to yield an approximation to your function; that's essentially what this limit is saying.
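Since the board is gone in the transcript, here are the two conditions just stated collected into one display. Notation follows the lecture, with U_1 = U ∩ E_1 and L(E_0, F_0) the bounded linear operators; the scale structure on the tangent is the standard HWZ one:

```latex
\textbf{Definition (sc$^1$).} An sc$^0$ map $f : U \to V$ between open
subsets of sc-Banach spaces $E$ and $F$ is \emph{sc-differentiable}
(sc$^1$) provided:
\begin{enumerate}
  \item for each $x \in U_1$ there exists
        $Df(x) \in \mathcal{L}(E_0, F_0)$ such that
        \[
          \lim_{\|h\|_1 \to 0}
          \frac{\bigl\| f(x+h) - f(x) - Df(x)\,h \bigr\|_0}{\|h\|_1} = 0 ;
        \]
  \item the tangent map
        \[
          Tf : TU \to TV, \qquad Tf(x, h) = \bigl( f(x),\, Df(x)\,h \bigr),
        \]
        is sc$^0$, where $TU = U_1 \oplus E_0$ carries the scale structure
        $(TU)_k = U_{k+1} \oplus E_k$, and similarly for $TV$.
\end{enumerate}
```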
So the first condition is just guaranteeing the existence of an approximating linear map. And then the second condition is essentially demanding that those maps vary continuously in some sense. And our new notion of continuity is showing up: we make use of scale continuity instead of, say, classical continuity, which would require something different. But if you believe this in the classical sense, then you should at least be able to stare at these two conditions and say: OK, I don't really understand everything that's going on, but at least I can see condition 1 is an approximating linear map, and condition 2 is saying that those linear maps vary continuously in some sense. So now I can try to address your other question, which is: why are we doing this? I think it's instructive to consider a couple of examples. The first example is a cheap one, but it's useful to consider. Define Ek to be C^{k+1}(S^1), real-valued maps on the circle, and Fk to be C^k(S^1). Then the derivative map, f mapping to f', is SC1. Now, this is kind of a cheap example, because it's classically differentiable. But that raises a good point, which is essentially that if you have a function which is scale continuous and is classically differentiable on every level, then it's going to be SC1. So, morally, classical differentiability is going to imply scale differentiability. So we really are generalizing the notion of differentiability. But again, in terms of examples, this is a cheap one. Let's do something more interesting. Yes? Sorry, Ek is what? In your previous example. Ek is C^{k+1}(S^1). And the next map is the action by reparameterization; I'm doing that example right now. I know. I think that means it's a brilliant talk, right? That's the only thing one could possibly conclude. Well, I like brilliant. That's such a better word. Yeah, I suppose that's true.
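The cheap example above can be checked in one line. Writing out the computation that is implicit in the lecture:

```latex
\textit{Worked example.} Take $E_k = C^{k+1}(S^1,\mathbb{R})$,
$F_k = C^{k}(S^1,\mathbb{R})$, and $f(u) = u'$. Then $f$ is linear and
bounded on every level, since $\|u'\|_{C^k} \le \|u\|_{C^{k+1}}$, so the
candidate derivative is $Df(x)h = h'$ and the numerator in condition (1)
vanishes identically:
\[
  f(x+h) - f(x) - Df(x)\,h \;=\; (x+h)' - x' - h' \;=\; 0 .
\]
Condition (2) holds because $Tf(x,h) = (x', h')$ is again levelwise bounded
and linear, hence sc$^0$.
```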
So here's an example that we've already seen: the action by reparameterization. Did I make a mistake? Yeah, I want a Banach space, just so everything's compatible with the definitions that I provided. You can change the domain to S^1 and then work with scale manifolds and so forth, but I'll keep it as R for now. Claim: this is SC1. Proof: exercise. In the lecture notes that I have online, I provide some hints just to get you moving in the right direction, but this is a good one to work out. And this is really important because... Can I interrupt for a minute? Absolutely. Just to explain, because I would never be able to do this exercise if I didn't have a really big hint. And the really big hint is: if you look at the definition at the top for what limit you're taking, the numerator is taken in the zero norm, and you're dividing by h measured in the C1 norm. That's the point of it. The C1 norm of h, of course, is much bigger than the C0 norm of h, because it includes the derivative. So you're actually dividing by something which is much bigger, which tends to make the limit zero when it wasn't zero before. And it's that fact, that you have the two scales in that limit, which is making the whole thing work. If you actually just read that and then do a very simple case, one has a hope of doing the exercise. Yeah, I think so. Thank you. Right. So why is this example important? This goes back to the very beginning, the motivating example that we had. The motivating example was: we have this function space, and we have this reparameterization action on it. And the key fact, the thing which Katrin will tell you is the most important in my talk, is that this action is not classically differentiable. It simply is not classically differentiable, and that causes a problem.
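The hint about the two norms can be seen numerically in the simplest shift-action setting. The sketch below is my own illustration, not from the lecture: the test function f(t) = |sin t|^{3/2} is an arbitrary choice of a C^1-but-not-C^2 function. It checks that the first-order Taylor error of the shift, measured in the C^0 norm, is o(h), exactly the level-drop the sc^1 limit asks for, while the C^0 difference quotient of f', which classical differentiability into C^1 would need to converge, blows up like h^{-1/2}.

```python
import numpy as np

# My own illustration (not from the lecture): the shift action loses one
# derivative. f(t) = |sin t|^(3/2) is C^1 but not C^2, since
# f'(t) = 1.5*|sin t|^(1/2)*cos(t)*sign(sin t) is continuous but only
# Hoelder-1/2 near the zeros of sin.
f  = lambda t: np.abs(np.sin(t)) ** 1.5
fp = lambda t: 1.5 * np.abs(np.sin(t)) ** 0.5 * np.cos(t) * np.sign(np.sin(t))

theta = np.linspace(0.0, 2.0 * np.pi, 2001)  # sample grid on the circle

def c0_taylor_error(h):
    """sup over theta of |f(theta+h) - f(theta) - h*f'(theta)|: the
    numerator of the sc-limit, measured on level 0 (the C^0 norm)."""
    return np.max(np.abs(f(theta + h) - f(theta) - h * fp(theta)))

def c0_diff_quotient_of_fp(h):
    """sup-norm of (f'(.+h) - f')/h. Classical differentiability of the
    shift into C^1 would need this to stay bounded as h -> 0; here it
    grows like h^(-1/2) instead."""
    return np.max(np.abs(fp(theta + h) - fp(theta))) / h

for h in (1e-1, 1e-2, 1e-3, 1e-4):
    print(h, c0_taylor_error(h) / h, c0_diff_quotient_of_fp(h))
```

The first column of ratios decreases toward 0 while the second grows without bound: differentiating the error on level 0 against an increment on level 1 succeeds, but staying on one fixed level fails. That is exactly the role of the two norms in the limit at the top.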
And it causes a problem because it shows up any time you want to give something like a manifold structure to, say, a Banach space of maps modulo a reparameterization group action. Any time you try to write down transition maps, this action by reparameterization shows up. It's not classically smooth; we need something else. And now this tells you, at least in this toy case, that the reparameterization action is actually SC differentiable. So now there's hope. And of course, the idea is that the corresponding maps, if you're setting up, say, Morse homology, Floer homology, Gromov-Witten theory, et cetera, those reparameterization actions are also SC1. In fact, you can prove that they're SC smooth. So let me give you a quick definition: f, mapping from one SC Banach space to another, is SCk if Tf is SC(k-1); that is, your derivative has one degree less of regularity. So now I should say something important, which is: what do we do now? We've said: look, it shows up a lot in these moduli problems that you have this action by reparameterization, and it's not classically smooth, but it is scale smooth. And so you might think: aha, we've solved a big problem just by changing the notion of differentiability. Instead of working with classical differentiability, we now have this scale differentiable notion, and it seems like everything should be great. But you can't celebrate yet. And the reason you can't celebrate yet is, well, suppose someone says: look, I have a function that I'd like to think is differentiable, but in fact, it's not.
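For the record, the inductive definition just stated, in the notation used above:

```latex
\textbf{Definition (sc$^k$, sc-smooth).} Inductively, a map $f$ is
\emph{sc$^k$} if it is sc$^1$ and its tangent map $Tf : TU \to TV$ is
sc$^{k-1}$; and $f$ is \emph{sc-smooth} (sc$^\infty$) if it is sc$^k$ for
every $k \ge 1$.
```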
And then this person tells me: oh, but I changed the definition of differentiability, and now everything's fine. You should say: yeah, I don't think so. So what you really need is a theorem, which is the following. It's essentially the chain rule. The chain rule says that if f is a map from E to F, and g is a map from F into, I ran out of letters, let's say G, and both are SC1, then g composed with f is SC1, and the tangent map T(g compose f) is Tg compose Tf. Exactly what you would expect. And once you have the chain rule at your disposal, then essentially any construction from, say, finite-dimensional differential geometry that doesn't involve the implicit function theorem carries over into the scale setting. So you can talk about scale smooth manifolds, scale differentiable functions between them, scale differential forms; there's quite a bit that you can do. And in particular, because one of the key features of finite-dimensional scale Banach spaces, which I've erased by now, was that the scale is always constant, it turns out that any time you're dealing with finite-dimensional manifolds, the scale calculus is just the classical calculus. So what we've really done is generalize classical differential geometry on Banach manifolds to these scale Banach manifolds, with a chain rule, and in such a way that the action by reparameterization is now SC smooth. Sorry, but a question: all of this buys me nothing unless I know that it gives me enough to do the gluing that I want to do. I want to know that this is sufficiently regular to do the gluing that I want to do.
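The chain rule statement, written out. The hypothesis f(U) ⊂ V is implicit in the lecture's phrasing:

```latex
\textbf{Theorem (sc-chain rule).} Let $E, F, G$ be sc-Banach spaces and let
$U \subset E$, $V \subset F$ be open. If $f : U \to F$ and $g : V \to G$
are sc$^1$ and $f(U) \subset V$, then $g \circ f : U \to G$ is sc$^1$ and
\[
  T(g \circ f) = Tg \circ Tf ,
  \qquad\text{i.e.}\qquad
  D(g \circ f)(x) = Dg\bigl(f(x)\bigr) \circ Df(x)
  \quad\text{for } x \in U_1 .
\]
```

As discussed later in the questions, this is only true because each map drops just one level rather than two, which in turn relies on the compactness of the embeddings E_{k+1} into E_k.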
So Katrin said that there are two key features such that a smart graduate student, given these things and locked in a basement for a year (I assume with food, water, and oxygen being provided), could produce the whole theory. And this is half the story. The first half of the story is that even before you allow for breaking or anything, we have this problem: if I take the Banach manifold of maps from R into a manifold between two critical points of a Morse function, and I try to quotient out by the R-action, does that quotient have a Banach manifold structure? Before you even talk about breaking, bubbling, anything else, does it have a Banach manifold structure? And the answer is no, but it does have a scale Banach manifold structure. And exactly what we wanted to do before was cut out by a hypersurface. Then you ask: if I cut out by two different hypersurfaces, are the transition maps smooth? Well, they're not classically smooth, but they are scale smooth. So now you're in a category where things look promising. So then what's the next concern? Well, how about breaking and bubbling and gluing analysis and so forth? That's essentially tomorrow's lecture. And how about the implicit function theorem in this calculus? Katrin would tell you there are two key ingredients to the whole polyfold machinery that are really new; I would say there are two and a half, and the Fredholm theory actually constitutes the half. You really have to rethink what Fredholm theory gives you in the classical setting. You have to provide a different definition, something that defines the same thing for you in the classical setting, and then use that modified definition, which is equivalent to the standard one.
And you need to port that over into the polyfold scale calculus framework. It does get a bit more complicated, but there is a version of that. So, as an organizer, I'm going to remind the speaker, myself, that I'm out of time. But I do want to mention just a couple of things. You look at this definition, and I can tell people are already thinking about this, and you say: oh my god, how am I ever going to prove something is scale differentiable, is SC1, let alone SC infinity? This looks really hard; this looks like something I don't ever want to do. And to some extent, that should be true for most people. But... Why is it more difficult than the old notion of differentiability? I think that's already very hard. Well, that's essentially my point. If I said, look, here's a smooth function, show this function is C infinity in the usual sense, what are you going to do? Well, you develop some basic maps. You say: I have some collection, polynomials, for instance, and I've got sums, products, compositions, these sorts of things, and you prove a little bit in each one of these cases. But look, you already have the chain rule at your disposal, and you already have that anything which is classically smooth is essentially also scale smooth. So this gives you a lot of structure when you're proving things are scale smooth. And then someone says: well, if you're going to really make use of C infinity functions which, say, aren't analytic, then you have to do one difficult thing, one painful thing, and that painful thing is to show that cutoff functions are smooth, something which is constant on an open set and then increases up to something else.
So this is a hassle, and one does it. But once you have it, you have a ton of other things that you can do, because you can interpolate between things, you can approximate; you have a lot of tools at your disposal. And I would say that if you look in the HWZ SC-smoothness paper, they provide you many, many building blocks, essentially all the building blocks you need to prove that transition maps are SC smooth in an appropriate sense. And those difficult things, the analogue of showing that a cutoff function is classically C infinity, they have these sorts of arguments showing that these building-block-type functions are SC smooth. And these building-block functions are the ones that always show up in, say, pre-gluing maps; the shift map, for instance, shows up a lot. So lots of transition maps, which look very complicated to prove are SC smooth, once you break them apart into bits and pieces, you see that all the bits and pieces have been shown to be SC smooth by HWZ already. So making use of this stuff is actually not as difficult as it might seem a priori. I was going to provide a list, but as I'm almost 10 minutes over, I'll simply say that that list of properties, and references to the literature, are in my lecture notes, which are online on the website that I stated at the very beginning of class. And with that, I'll end. That clock is five minutes fast, but maybe I shouldn't have pointed that out; I kept track of the time that I started, and I'm eight minutes over based on that. So, any questions for the speaker? Yeah. When you stated the definition of the derivative, you take the 1-norm of h, which means that h is in E1, and yet the differential maps E0 to F0.
So why do you need to have it defined on the whole E0 space, not just E1, or is it a matter of the norms? I don't see it; you're always applying it to h, which is basically in E1. Well, I would say the point is that if you look at this estimate, the estimate has to hold in the 0-norm. So it's like saying: I've got a bounded linear map, and I'm defining it on some dense set, for instance, but the bounds have to correspond to the 0 level, not the 1 level. Pardon? The estimate is in F; you're taking norms in F. I'm taking the 0-norm of the numerator. But in F: f(x+h) is in F0, not E0. I'm not sure I understand. I think the point is that the arguments you're feeding into f are in E1. Yes, so the h is in E1, OK. No, no: the h in that formula can be in E0. Oh, but then how can we take the E1-norm of h? That's the definition: Df(x) is a linear operator from the 0 level to the 0 level, from E0 to F0, and you can feed in h in E0. Yes, right, but in condition 1 the h is on the 1 level. But the operator itself goes from the 0 level to the 0 level, so Df(x)h makes sense for h in E0, because it's a linear operator from level 0 to level 0. Well, OK, I get that it makes sense; maybe the point is that the thing has a continuous extension. So maybe the question is: why do we want this to be defined on all of E0? Well, you wouldn't get a chain rule if you did not, for example, require that Df(x) goes from E0 to F0. Oh, OK. There's no chain rule otherwise. I mean, the chain rule is a little bit of a miracle, because you go down by one step with the first map, and then you go down another step with the second map, and the chain rule says you actually only have to go down by one step in total. And that is only true because the embeddings from E(k+1) into Ek, and the same for F, are compact.
If you have just continuous embeddings, you don't have a chain rule. So you can't really relax any part of this condition. Yeah, all right. So the quotient from the motivating example is again a scale manifold, right? Well, how do you prove that? The way you prove it, I would say, is by taking different local slices and then showing the transition maps are scale smooth. Like in the second example? Yes, I think so, yeah. And a naive question: since you're modding out the reparameterizations and giving this space a certain structure that one can look at, is it possible to do something like a Morse theory on this space of strings using this technology? Maybe it's a very general question. Yeah, I don't have an answer to that. But in general, I think the SC calculus really hasn't been fully exploited. Everything that we're going to talk about, certainly in the first week here, is really the analytic foundations that HWZ needed in order to do everything for SFT. And based on the way the pieces fit together, you can say: OK, I really want to do Fukaya category stuff, or relative SFT, or I want to attach gradient flow lines to things. There are a lot of things which actually extend and fit into this nice framework. But even that, I think, doesn't fully exploit what HWZ have done here. I think there are a lot of theories, maybe outside of symplectic geometry, where this sort of stuff can actually be applicable. But that's not being looked at so strongly at the moment, because SFT comes first and then everything else. All right, so I'll recommend that any further questions be asked later this evening or tomorrow. Let's thank the speaker.