My name is Nate, if you haven't met me already. A couple of announcements: solutions to the homework exercises will be posted online, I think later today. There are no official office hours, but I'll hang out after Michael Hutchings's lecture today in case you have any questions. All right. So welcome to the polyfolds discussion. The premise is that I suspect that if you hadn't seen polyfolds before Monday, then you're bewildered by at least part of what's been said: some piece of terminology that doesn't make sense, like maybe you think, what the heck is this partial quadrant for, or whatever else. So I'd like to take your questions and clarify whatever I can. Without further ado, let me turn the floor over to you. What questions do you have?

It's not a question, but can you show some examples of polyfolds which are weird? Yes, and I can do that right now. This was a question from my quals; where'd it go? Sorry, these are not my notes from quals. Okay, here we go. So this is a retract which is locally finite dimensional, but where the dimension varies within connected components. This is something that Joel alluded to earlier today, and it's also homework question number five. The fact is that there exists an sc-smooth retract which is homeomorphic to the following subset of R^2: the right half plane, union the x-axis, not including the y-axis. So does this satisfy your definition of weird, Helmut? I think I can ask for weirder: this thing is built up out of a union of things that are themselves manifolds. Are there examples which are not like that? Can we first look at this one all together? This is the point where I need to say something. If you built this from finite dimensional manifolds, you would take the real line and glue it to the half plane, identifying it with the positive real axis.
And this is exactly what happens when you throw it into a Kuranishi-style regularization. The quotient topology that you then get on this space is not good: it is not even first countable, and in particular it is not metrizable. Whereas the topology that polyfold theory puts on this, as a subset of an sc-Banach space, is perfectly metrizable. And it's absolutely key that we have a metrizable topology here. So I must have been mistaken when I said homeomorphic to this? No, no, it is homeomorphic to that subset of R^2. Okay, and you're saying that, for instance, Kuranishi structures would have put a different topology on it, which would not have been... Right; the quotient topology is not the subspace topology.

All right, so the idea is this: we fix some bump function beta, and for t a real number we write down a retraction on a function space, which is just projection onto beta, but shifted by a certain amount depending on t. And I should have said, I'm sorry, this is for t greater than zero; for t less than or equal to zero, we just take the zero map. The funny thing going on here is that t lives on the x-axis, and as you come in from the right, t gets smaller and smaller, and e^{1/t} shoots off to infinity. So you're projecting onto a family of bump functions which are running off to infinity, and the interesting thing about scale smoothness here is that these maps behave smoothly at zero. Something funny happens at zero, but we're going to get an sc-smooth map.

What is a bump function? You've convinced me that your bump function goes from R to... I'll write it down now; it goes from R to R. But then what is this bracket thing? This is the subspace of L^2(R, R) spanned by that shifted bump function. Oh, so that's sitting inside... Yes, you didn't explain what it's sitting inside.
That's sitting inside the infinite dimensional space L^2(R, R). Yes, and I'll write a precise definition down now. Let's set E to be the sc-Banach space whose k-th level is the weighted Sobolev space H^{k,delta_k} of functions from R to R, where we've chosen an increasing sequence of weights delta_k starting from delta_0 = 0. And now the retraction, or the candidate retraction, little r, goes from R x E to R x E, and it's defined by sending the pair (t, f) to the projection onto beta_t when t is greater than zero; I'll write it in a second. Beta_t means the bump function shifted by e^{1/t}.

What do you mean by the subscript t? It depends on the little t. But you have two e's; is that the exponential? Oh, I'm sorry, I should call the function f, excuse me. And that bracket is the L^2 inner product? Yes. So we've now set a world record for bad notation; I'm sorry. Give yourself an R coordinate also. Thank you, yes: this is supposed to map to R x L^2. Good. OK, any other objections? Great.

So I think it's clear that the image of little r is homeomorphic (or maybe you don't believe the continuity, but it's at least plausibly homeomorphic) to the subset I drew above, where the x-axis is the t-coordinate. The fiber living above any particular positive t in the fixed point set is the subspace of H^0 spanned by beta_t, and the fiber over anything non-positive is just 0. Sorry, Nate, I'm going to be really, really annoying: could you rewrite what you wrote in the bracket there? You mean right here? Yes; I think you wrote something one-sided. It has to be a retraction, so you need the first coordinate as well.
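Since the board notation got mangled, here is a small numerical sketch of the candidate retraction as I understand it (the grid, the window, the particular bump, and the sample function are all my own choices for illustration, not from the lecture); it checks the defining identity r(r(t, f)) = r(t, f):

```python
import numpy as np

# Discretize a window of L^2(R); the window must contain the shifted bumps
# for the t values tested below (shift = e^{1/t} <= e^4 ~ 54.6 for t >= 0.25).
s = np.linspace(-5.0, 60.0, 200_000)
ds = s[1] - s[0]

def bump(x):
    """A smooth bump supported on (0, 1)."""
    out = np.zeros_like(x)
    inside = (x > 0.0) & (x < 1.0)
    out[inside] = np.exp(-1.0 / (x[inside] * (1.0 - x[inside])))
    return out

def beta_t(t):
    """The bump shifted by e^{1/t}, normalized to unit L^2 norm (t > 0 only)."""
    b = bump(s - np.exp(1.0 / t))
    return b / np.sqrt(np.sum(b**2) * ds)

def r(t, f):
    """Candidate retraction: r(t, f) = (t, <f, beta_t> beta_t) for t > 0,
    and (t, 0) for t <= 0."""
    if t <= 0.0:
        return t, np.zeros_like(f)
    b = beta_t(t)
    return t, (np.sum(f * b) * ds) * b   # L^2 inner product <f, beta_t>

# r is a retraction iff r o r = r; check on a sample decaying function.
f = np.exp(-0.1 * np.abs(s)) * np.sin(s)
for t in (-1.0, 0.0, 0.3, 0.5):
    _, g = r(t, f)
    _, h = r(t, g)
    assert np.allclose(g, h, atol=1e-12)
```

Pushing t much closer to 0 pushes the bump outside any fixed window, which is the numerical shadow of the bumps escaping to infinity.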
Yeah, I had accidentally written only the second component of the retraction. It's supposed to be r(t, f) = (t, <f, beta_t> beta_t), the projection of f onto the subspace spanned by beta_t. Yes, it's the L^2 inner product. All right, so before I move on to proving some amount of smoothness, any confusion about this definition? What does beta look like, did you mention? It doesn't really matter: let's say it's a C-infinity bump function with H^0 norm equal to 1. Do you need delta_0 to be 0, or could it be positive? The question is, if delta_0 were positive and not 0, would this still work? I think probably, if delta_0 were positive and you also made the zeroth level H^{0,delta_0}, it would still work. But then the H^0 norm of beta_t would not actually be constant: there's a shift here, and with a weight you would see the decay. Anyway, it really doesn't matter, because it's an example, so let's just say that delta_0 is 0. OK.

And before I prove that this thing is somewhat smooth, is everyone happy with the claim that the image of little r is the set I drew in R^2? I'll take your word for it. All right. Now, I'm not going to prove that little r is sc-infinity everywhere, because that would take forever, but I can at least show that the difference quotient condition for sc^1-ness is satisfied at points of the form (0, f). But before that: how would you even dream that this could be shown? You draw this picture; is there a criterion to say it's a polyfold, and what type of polyfold? How do you think about this, that it smells right? Does it come up somewhere? Yeah.
I mean, this is actually an important example, because it's related, not exactly the same but related, to plus gluing. In plus gluing you're not projecting onto some bump function, but you are multiplying by bump functions while something shoots off to infinity, and you need smoothness of the retraction that corresponds to that. So the same reason that this is sc-smooth is at least part of why the retraction given by projection onto the kernel of minus gluing is sc-smooth. What's the significance of the positive or negative axis in that picture? Yeah, it's not exactly analogous, but the idea is that in the projection onto the kernel of minus gluing, when the gluing parameter a equals 0, you're doing nothing. I'd have to think about whether it's exactly the same, but something drastic happens at a = 0 in that case, and when t goes non-positive in this case. You could, of course, take the identity minus the projection onto beta_t; then over every positive t the fiber would be the complementary hyperplane, essentially the whole space. Great.

So it turns out to be true that r is sc-infinity, and therefore an sc-smooth retract, with the differential defined like so: in the case t > 0,

Dr(t, f)(sigma, phi) = (sigma, sigma (e^{1/t}/t^2) [<f, beta_t'> beta_t + <f, beta_t> beta_t'] + <phi, beta_t> beta_t).

It's trivial for t negative and not trivial for t positive, but I think the most interesting thing happens at t = 0. OK. So this looks horrible, but it turns out not to be hard to come up with this candidate. Anyway, I'm not going to use the general formula, because I'm only going to look at what happens at t = 0.
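For what it's worth, here is my reconstruction of where that candidate differential comes from, with the convention beta_t(s) = beta(s - e^{1/t}); the overall sign depends on that convention, and two chain-rule minus signs cancel:

```latex
% Convention: \beta_t(s) = \beta(s - e^{1/t}), and \beta_t' denotes the
% shifted derivative \beta'(s - e^{1/t}).  Two signs cancel:
%   \tfrac{d}{dt} e^{1/t} = -\tfrac{e^{1/t}}{t^2},
%   \partial_{(\mathrm{shift})}\,\beta(s - \mathrm{shift}) = -\beta'(s - \mathrm{shift}),
% hence \partial_t \beta_t = \tfrac{e^{1/t}}{t^2}\,\beta_t'.
\begin{aligned}
p(t,f) &= \langle f,\beta_t\rangle\,\beta_t,\\[2pt]
Dp(t,f)(\sigma,\varphi)
 &= \sigma\,\partial_t\bigl(\langle f,\beta_t\rangle\,\beta_t\bigr)
    + \langle\varphi,\beta_t\rangle\,\beta_t\\[2pt]
 &= \sigma\,\frac{e^{1/t}}{t^2}
    \Bigl(\langle f,\beta_t'\rangle\,\beta_t
        + \langle f,\beta_t\rangle\,\beta_t'\Bigr)
    + \langle\varphi,\beta_t\rangle\,\beta_t,
 \qquad t > 0.
\end{aligned}
```

For t <= 0 the candidate is simply Dp(t, f)(sigma, phi) = 0, and the whole point of the example is that the two formulas fit together sc-smoothly across t = 0.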
So let's write r(t, f) = (t, p(t, f)), so p is just the second component of r, the component mapping to E, and let's show that p is sc^1 at a point of the form (0, f); that is, let's show that the difference quotient for that statement indeed goes to 0. The difference quotient is uninteresting when sigma is negative, because then it's identically 0. So what you end up needing to prove is that, in the limit as sigma decreases to 0 and phi goes to 0 in H^{1,delta_1},

|| <f, beta_sigma> beta_sigma + <phi, beta_sigma> beta_sigma ||_{H^0} / (sigma + ||phi||_{H^{1,delta_1}}) -> 0.

The numerator here is just p evaluated at (sigma, f + phi); the other terms of the difference quotient vanish. And we'll bound this by the limit of the following two quantities summed: |<f, beta_sigma>| / sigma and |<phi, beta_sigma>| / ||phi||_{H^{1,delta_1}}.

Why does this involve the expression you wrote at the top, and not the one down here? Because the differential that appears in here is the differential at t = 0. Is there some way to calculate the differential in general? For example, can you actually differentiate something and obtain the expression above, or did you have to come up with it? You're asking how I got that candidate for the differential. I claim that if you pretend to be a calculus student who believes that everything is true, and you just write down what the differential is supposed to approximate and do the Taylor approximation, then this candidate easily pops out. If you're tired of this example, I promise I'm almost done. OK, so we need to show that this limit is 0, and let's just prove it for one of these two terms; the other one is really similar. So look at this guy here: the numerator is the inner product of f with beta_sigma, times beta_sigma, in the H^0 norm.
But the H^0 norm of beta_sigma is 1, so we just get the absolute value of the inner product, |<f, beta_sigma>|. And again because the H^0 norm of beta_sigma is 1, that's bounded by the H^0 norm of f; sorry, by the H^0 norm of f restricted to the support of beta_sigma. The only thing I'll use about the inner product is that beta_sigma is a bump function supported on the interval [e^{1/sigma}, e^{1/sigma} + 1]. So this expression is bounded by ||f||_{H^0([e^{1/sigma}, e^{1/sigma}+1])} / sigma, and now we can bound this by an exponential. We're using the fact that f is on the first level of E, and therefore lies in H^{1,delta_1}, so we can bound this by

e^{-delta_1 e^{1/sigma}} ||f||_{H^{1,delta_1}} / sigma.

And ||f||_{H^{1,delta_1}} is just some constant, fixed before we take the limit, while e^{-delta_1 e^{1/sigma}} goes to 0 much faster than sigma does. So this quantity goes to 0.

You said a bunch of stuff about that last inequality. Yeah, what I was saying is that beta_sigma is supported on this interval, so the inner product only sees f there. I'm OK with the second and last steps; justify this one here. If you write down the definition of H^{1,delta_1}, this just pops out immediately, because the H^{1,delta_1} norm is the H^1 norm of e^{delta_1 |s|} f(s). If you just write down the definition of this weighted Sobolev space, this pops out. I think you have to say that f is on the first level, in order to take the differential on the first level. Yes, that's right; I didn't write it down, but we only need this difference quotient to go to 0 for f in the first level.
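Assembled in one line, the estimate being narrated is (my reconstruction from the definitions above):

```latex
\frac{\bigl\|\langle f,\beta_\sigma\rangle\,\beta_\sigma\bigr\|_{H^0}}{\sigma}
 \;=\; \frac{|\langle f,\beta_\sigma\rangle|}{\sigma}
 \;\le\; \frac{\|f\|_{H^0([e^{1/\sigma},\,e^{1/\sigma}+1])}}{\sigma}
 \;\le\; \frac{e^{-\delta_1 e^{1/\sigma}}\,\|f\|_{H^{1,\delta_1}}}{\sigma}
 \;\xrightarrow[\ \sigma \searrow 0\ ]{}\; 0,
```

using that the H^0 norm of beta_sigma is 1, that beta_sigma is supported on [e^{1/sigma}, e^{1/sigma}+1], and the weighted-norm definition ||f||_{H^{1,delta_1}} = ||e^{delta_1 |s|} f||_{H^1}.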
And therefore this norm is finite. OK. And this also highlights the importance of the fact that we chose this particular gluing profile, with our functions shooting off to infinity like e^{1/sigma}. If they had been going off to infinity much more slowly than that, this limit might not have ended up being 0. Faster is OK for this limit, yeah. Does it remain OK for all the other limits, or just this one? That is, if you choose some gluing profile going to infinity much faster than exponential, would there ever be a problem? Well, it shouldn't. I mean, if it oscillates a lot, maybe it's not so good, but if it grows fast, without oscillation in all derivatives, it's fine. The bad case is the logarithmic profile, -(1/(2 pi)) log sigma, which unfortunately is precisely what one would naturally use for gluing in Deligne-Mumford theory; that's why the general theory works with the exponential profile instead.

OK, next question. Actually, could you reiterate the information about the topology of this thing? Because I think that is an important point. Yeah. I don't think I'll be able to say anything sensible offhand about why this retract is actually homeomorphic to that subspace of R^2, but if you can... Well, what is the topology on a retract? Where does it come from? By definition, it's inherited from the topology of the ambient sc-Banach space, the zeroth level of it. But I don't know if that's the sort of thing you had in mind. So presumably it's a fairly easy exercise to show that this image is homeomorphic to that subspace of R^2. All right. Before we move on: what about three legs? Oh, you're asking, could you get a three-legged branching as a retract? I don't know; I can try that.
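To see the role of the gluing profile concretely, here is a small numerical sketch (the weight delta_1 = 0.5 and the sample sigma values are my own choices): it compares the bound e^{-delta_1 * shift(sigma)} / sigma on the difference quotient for the exponential shift e^{1/sigma} used above against a logarithmic, Deligne-Mumford-style profile.

```python
import math

delta1 = 0.5  # a sample weight delta_1 > 0 (my choice, for illustration)

def exp_shift(sigma):
    """Shift used in the example: e^{1/sigma}."""
    return math.exp(1.0 / sigma)

def log_shift(sigma):
    """Logarithmic (Deligne-Mumford-style) profile: -(1/2pi) log sigma."""
    return -math.log(sigma) / (2.0 * math.pi)

def bound(shift, sigma):
    """The bound on the difference quotient: e^{-delta1*shift(sigma)} / sigma."""
    return math.exp(-delta1 * shift(sigma)) / sigma

sigmas = [0.5, 0.4, 0.3, 0.25]
exp_bounds = [bound(exp_shift, s) for s in sigmas]
log_bounds = [bound(log_shift, s) for s in sigmas]

# Exponential shift: the bound collapses super-fast as sigma -> 0 ...
assert exp_bounds[0] > exp_bounds[1] > exp_bounds[2] > exp_bounds[3]
assert exp_bounds[-1] < 1e-10
# ... while the log profile gives sigma^{delta1/(2 pi) - 1}, which GROWS as
# sigma -> 0 whenever delta1 < 2*pi: this difference quotient does not vanish.
assert log_bounds[0] < log_bounds[1] < log_bounds[2] < log_bounds[3]
```

With the logarithmic profile the bound is exactly sigma^{delta_1/(2 pi) - 1}, so this particular estimate only closes when delta_1 > 2 pi, while the exponential profile works for every positive weight.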
So the thing to wonder about is this: look at retracts where the image lies in the smooth part, so that it has a tangent space everywhere; remember, that's the filtration. Look for a retract whose image is in the smooth part, which usually happens for low dimensional things, so you have a tangent space everywhere, and assume that the tangent space is one dimensional everywhere. A one dimensional manifold would be the obvious example. But it seems to be possible to have branches bifurcating off with the same derivative: two things coming together which have the same tangent space where they meet. So there could be retractions which have such a branched set as their image. The exercise would be to take this thing, embed it into some Banach space, and see if you can retract onto it. So it's possible that something like branched one dimensional manifolds are actually retracts.

But even then it wouldn't be a problem, because d squared = 0 wouldn't be ruined by this, for example; in the end we actually get better results. The thing is that the solution sets of our Fredholm problems are strong retracts: the retraction gains a level, so the retraction you get goes from E into E^1. So yes, you're right, you could presumably set up a Fredholm theory where this kind of thing comes out, and then you would have this kind of issue, unless you have some other structure like weights or whatnot. But the Fredholm theory which Joel told you about actually produces honest manifolds, as the zero set of a transverse Fredholm section, so that branching phenomenon doesn't exist. Maybe the last thing I can mention about this example is that it's a good exercise to compute the tangent spaces of this sc-smooth retract.
In particular, you might wonder: what is the dimension of the tangent space at the origin? I'll leave that as a mystery. Yeah, so any other questions? What are some things that are definitely not polyfolds? A non-measurable set; I would be surprised if that were a polyfold. I mean, it hasn't really been explored, because in order to do SFT or so, the world isn't that bad; it's bad enough, but not that bad. But as far as the local structure is concerned, I think there can be really weird stuff. You can have polyfolds which at some point look two dimensional, and at the next point are zero dimensional.

Do you want to state what a splicing is, in the language of retractions, and how that guarantees that you generally have better structure than for a bare retract? I don't know if it helps in this case, but at least... So what Joel's referring to is that this particular retract I've written down has special structure: it's a family of linear retractions. And there are certain things that are true for these special kinds of retracts, which are called splicings, that aren't true for general sc-smooth retracts. And in all applications currently, splicings are what appear. Yeah. And my philosophical point of view, correct me if I'm wrong, Helmut, is that perhaps there are truly awful sc-smooth retracts, like a Cantor set, I don't know; but I don't think we should worry too much about them. We should worry about what we need for our applications, and whether polyfolds are flexible enough to encompass those. So I think this example is relevant because it's really similar to the retract that you get from plus gluing; and I don't need a great tangent space theory for the Cantor set. Is there more to settle on that mystery? Maybe I'm asking something stupid.
About the mystery: isn't the dimension of the tangent space at the origin just one? If I ignore the fact that everything is pasted together, the origin is coming from the real line only. That's correct, and I guess you can actually see it very easily from the formula there. OK, if there are no questions, I can say something hopefully useful. If you ask about semi-continuity of the local dimension: can it only go up, can it only go down, or does neither property hold? It depends on the direction you're coming from; you're always asking about the tangent space. Well, OK, here's the answer for the dimension of the tangent space in this example: for every point with t less than or equal to 0, it's 1, and for every point to the right, it's 2. So you could conjecture that you have, what is it, semi-continuity going down: the dimension can jump down. I don't know the answer to the general question, but there is a complementary example: the one Dusa mentioned, where you replace the projection onto beta by the complementary projection. There the fibers would be infinite dimensional, and you can see that you'd have the opposite kind of semi-continuity.

All right, last chance for questions, or else I'll say something for the remaining five minutes of lecture-slash-discussion. Can I ask? Maybe this is completely stupid, but could polyfolds, do you think, go beyond SFT; could you use them for something else? I think that's a hope of H-W-Z, but I don't know whether there are particular examples they have in mind. Well, I'd be happy when I'm done with SFT, for the moment. What's the scope of the question: are we talking about J-holomorphic curves? I mean, I don't know if you would include Lagrangian Floer theory in whatever notion of SFT you have in mind, but I interpret the question as: can you construct things that are not moduli spaces of J-holomorphic curves, the way we're using them? Right.
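For the record, here is the dimension count just quoted, read off from the differential of r (again under my sign conventions from above; the count does not depend on them):

```latex
% Tangent space of the retract at a fixed point = image of Dr there.
\text{For } t \le 0:\quad
Dr(t,0)(\sigma,\varphi) = (\sigma,\,0)
\;\Longrightarrow\;
T = \mathbb{R}\times\{0\},\quad \dim = 1
\quad(\text{in particular at the origin}).
\\[6pt]
\text{For } t > 0,\ \text{at the fixed point } (t,\,c\,\beta_t):\quad
Dr(t, c\beta_t)(\sigma,\varphi)
 = \bigl(\sigma,\ \sigma\,v_t + \langle\varphi,\beta_t\rangle\,\beta_t\bigr),
\qquad
v_t = c\,\frac{e^{1/t}}{t^2}
      \bigl(\langle\beta_t,\beta_t'\rangle\,\beta_t + \beta_t'\bigr),
\\[6pt]
\;\Longrightarrow\;
T = \operatorname{span}\{(1, v_t),\ (0, \beta_t)\},\quad \dim = 2.
```

So the local dimension drops from 2 to 1 exactly as you cross into t <= 0, matching the jump-down semi-continuity just discussed.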
Could it stand on its own? Sure. I mean, actually, for a lot of bubbling-off phenomena, geometric flows and so on, it's precisely this kind of language which you need, if you want to describe it this way. So I think there are a lot of problems in nonlinear analysis which have this bubbling-off, but where, I think, a good language for it was never developed. These problems are also non-generic, and you would like to perturb. For minimal surfaces and such things this hasn't been carried out, or only under certain geometric conditions, whereas an abstract transversality machinery which would at least allow you to count certain things was never developed. So I think there are actually a lot of possibilities for applications of something like this. And I guess in a strict sense the answer to your question is yes, because Morse theory certainly could be framed in terms of polyfolds.

You have a question? OK. All right, so the last thing I wanted to say: I thought it might be mysterious to some people what the point of partial quadrants is, so I thought I'd say just a couple of words about that. So what's the point of looking at spaces like this; can everyone see this right here? The answer to the question of why we consider partial quadrants is what happens when we try to construct a moduli space of J-holomorphic curves and want a local chart near something with nodes; boundary nodes, specifically. So say we're doing Lagrangian Floer theory: we're looking at holomorphic curves with boundary on Lagrangians, and we try to construct a local chart near something with a boundary node. So something looking like this, and then the local chart near a map like this is going to be of the form [0, 1) x (some space of functions).
And the reason that this is the right local chart is that we of course want the maps near this guy to include the ones where we've smoothed out the node, and smoothing out the node is a one parameter operation, corresponding to how much it gets pinched. When the gluing parameter, which lives in the [0, 1) factor, goes to 0, you get actual nodal maps. So anyway, that's where the partial quadrant definition is coming from, or at least it's one place that it comes from.

And as a closing question for the audience, if you haven't thought about this before Monday, let me ask you the following. Say I take some map with an interior node. For comparison, the boundary-node map is going to have degeneracy index 1, just because [0, 1) x (functions) is the local chart, and when you go all the way to 0 you get a nodal map. But the question is: what should the degeneracy index, or the expected degeneracy index, of a map with an interior node be? That is, if you construct a local polyfold chart near a map like that, what do you think the degeneracy index of this map is? And let me remind you that there are two gluing parameters for gluing at an interior node: one neck-length parameter and one angular twisting parameter. I'm going to take a vote between 0, 1, and 2. Yeah. Zero? You can't vote if you knew it before Monday. OK, ones and twos. Do we all know what the degeneracy index is; should I recall it quickly? So: if you take a local chart, then within this particular chart, the degeneracy index corresponds to how many of the parameters living in the [0, 1)^k factor are 0, and to get the degeneracy index of a point in a polyfold, you take the minimum over all charts. Anyway, the answer is 0, and the reason that it's 0 and not 2 is that the gluing parameters for an interior node live in a disk: the distinguished point 0 corresponds to nodal maps, and as you move away from 0 you're gluing the node.
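The min-over-charts bookkeeping can be made concrete in a toy way (everything here, the chart lists and the coordinate convention, is an illustration I made up, not notation from the lecture):

```python
def degeneracy_in_chart(quadrant_coords):
    """In a chart modeled on [0,1)^k x (functions), the degeneracy index of a
    point counts its vanishing quadrant coordinates."""
    return sum(1 for c in quadrant_coords if c == 0.0)

def degeneracy_index(charts):
    """Degeneracy index of a point in a polyfold: the minimum over all charts
    containing it, each chart contributing its list of quadrant coordinates."""
    return min(degeneracy_in_chart(c) for c in charts)

# Boundary node: the gluing parameter lives in [0,1); at the nodal map it is 0.
assert degeneracy_index([[0.0]]) == 1

# Interior node: neck-length + twist combine into a point of a DISK, so there
# is a chart with no quadrant factor at all (k = 0) covering the nodal map,
# and the minimum over charts is 0.
assert degeneracy_index([[0.0, 0.0], []]) == 0
```

The second assertion is exactly the point of the vote: a naive chart might show two vanishing parameters, but the disk chart with k = 0 wins the minimum.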
So in fact 0 is an interior point of the disk, and the degeneracy index will be 0. Well, actually, also: we were just putting a single chart there, a single chart, so it shouldn't have any corner structure, right? With the retract that we built with plus gluing, we were putting on a single chart which had no quadrant structure at all. Yes. Yeah. Indeed. All right.

So I have a question as well. When you define the degeneracy index, you take the minimum, right, because you can have different charts, so the same point can belong to different ones. So even in this case, if you had a hypothetical chart which looked like, I don't know, [0, 1)^2 or something, then you could only say that the degeneracy index is at most 2; is that correct? Yeah. So you would need to make some kind of argument, in a particular situation, that there isn't some other chart in which you'd have a lower degeneracy index. Sure. OK. I will stop there.