Was this like when I was in, I forget where it was, some country, and I wanted to get a Diet Coke. And there's this refrigerator with Diet Cokes right there, but I'm not allowed to touch it, so I have to ask this one person, who also is not allowed to touch it, but that person can then call in a supervisor or something who is actually allowed to get it out and bring it over. It's a very complicated process. Right, anyway. Okay, so today I'm going to finish with the Morse theory and then try to get onto some things which involve holomorphic curves and are related to the polyfold foundations of SFT, which we'll be hearing about next week. So the Morse theory example I had was: you have (f_t, g_t), a one-parameter family of pairs of a function and a metric on a manifold, and at time t = 0 this fails to be Morse-Smale. So at time t = 0 there is an index zero flow line from a critical point q to a critical point r, so these both have index i. We'll call this thing u_0, and we'll assume that it's a sort of generic situation. When I say the word generic, that means satisfying various conditions which hold for data in a countable intersection of open dense sets, and the set of conditions may increase as I go along. So the linearized operator for this u_0 is not surjective, but in the generic case its cokernel is one-dimensional. And the other generic thing is you can look at the derivative of the gradient flow equation with respect to t on u_0. So if I look at d/dt of the gradient flow equation, that's d/ds minus V, on u_0, then this derivative at t = 0 should have a nonzero projection onto this cokernel. So this projection is an isomorphism. So this is the generic way in which an index zero thing can appear in a one-parameter family. And then for t nonzero but small, this will actually be Morse-Smale. Yeah, it could be real-valued or circle-valued; it doesn't really make a difference for this discussion.
So if you're more comfortable with real-valued, let's say it's real-valued. That's an assumption. What's an assumption? The thing about the derivative. Yeah, so these are assumptions; this whole board is assumptions. And then what I want to analyze is: what is the change in the chain complex as we go from negative t to positive t? So in particular, if I have... Are you assuming that there's only one flow line from q to r? Yeah, so just this one failure of Morse-Smaleness arises at time zero, which is that this one index zero flow line appears. Okay, and let's say you have another flow line, excuse me, another critical point p of index i + 1 and a flow line u_+ from p to q. Then we have a configuration like this at time zero. And I can say, well, we expect that this can be glued to an actual flow line from p to r for time small, maybe positive, maybe negative. So you want to analyze when that can happen. So last time we wrote down the equations for this. We have gluing parameters R, which is very large, and t, where t is nonzero but its absolute value is small. And what we do is we translate these flow lines. My maps are all parameterized, so u_+ is a map from the real line to X and u_0 is a map from the real line to X. So the first thing we do is we translate these: we move u_+ up by R/2, we move u_0 down by R/2, and, to the great confusion of everyone last time, I continue to denote u_+ and u_0 by the same letters. So I'm translating them, but denoting them by the same letters. So we translate u_+ up, u_0 down. Up and down, is u_0 close to q or far from q? The image of zero will be... The image of zero will be far from q. So this is breaking; I mean, the inverse of gluing is breaking. And what's breaking? So you have a flow line like this for some nonzero time, and when it's close to breaking, what it does... well, actually I'm going to have my things going up.
So it goes up from r and it's near q and then it just sits there for a really long time and then it goes up to p. So it's like a camel with two humps, where there's some action happening and these two humps are getting ripped apart. That's why we talk about breaking; it's very dramatic. What? The camel knows what to do. It's a tofu camel; no animals are harmed in this talk. Okay. All right, so we translate by this and then we're going to change the time to t, okay? So in the equation I should put a t here: V_t means the upward gradient vector field for time t. And then we have psi_+ and psi_0. These are perturbations of u_+ and u_0: psi_+ is a section of u_+^* TX and psi_0 is a section of u_0^* TX. And then we looked at beta_+ times (u_+ + psi_+) plus beta_0 times (u_0 + psi_0), where beta_+ and beta_0 are cutoff functions whose derivatives have order 1/R. This expression really means you start with u_+ and you apply the exponential map to psi_+, and then I choose some coordinate chart in a neighborhood of q such that, where both of these cutoff functions are nonzero, it makes sense to add them like this. And then the gluing equation: let's call this whole thing capital U, so this is some map from the real line to X, and then we have the equation for this to be a flow line. So (d/ds minus V_t) of U has the form beta_+ times Theta_+(psi_+, psi_0, t) plus beta_0 times Theta_0(psi_+, psi_0, t). Did that go to the right place? You can't see it. So here Theta_+ is the linearized operator for u_+ applied to psi_+, plus (d beta_0/ds) times (u_0 + psi_0), plus some other stuff which doesn't really matter, so I'm not going to write it.
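For reference, the pregluing ansatz and the resulting equation just described can be written out as follows (a sketch in the talk's notation; the "sums" are shorthand for applying the exponential map and working in the chosen chart near q):

```latex
% Preglued map (u_+ translated up by R/2, u_0 translated down by R/2):
U \;=\; \beta_+\,(u_+ + \psi_+) \;+\; \beta_0\,(u_0 + \psi_0),
% where u_+ + \psi_+ abbreviates \exp_{u_+}(\psi_+), and the sum of the two
% terms is taken in a chart near q where both cutoffs are nonzero.
% The flow line equation for U, rewritten via the cutoffs:
\Big(\frac{d}{ds} - V_t\Big)(U)
\;=\; \beta_+\,\Theta_+(\psi_+,\psi_0,t)
\;+\; \beta_0\,\Theta_0(\psi_+,\psi_0,t).
```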
And then Theta_0... well, I guess I could write this here although it's not going to matter, but in Theta_+ we also have a term: t times the derivative of V with respect to t, because the equation's changing, because the vector field is changing. Well, the equation is d/ds minus V, so I should put a minus sign here. And then there are some error terms. And then Theta_0 is D_0 psi_0, plus (d beta_+/ds) times (u_+ + psi_+), minus t times partial V_t / partial t, plus some error terms. And then there were some lemmas. So the lemma: fix R and t, where R is sufficiently large and the absolute value of t is sufficiently small. Then there exist unique psi_+ and psi_0 such that... well, you have to work in some Banach space where these things are required to decay, and then they're unique such that: psi_+ is L^2-orthogonal to the kernel of D_+; psi_0 is L^2-orthogonal to the kernel of D_0; Theta_+ is equal to zero; and Theta_0, I can't quite get it to be zero, but I can get it to be L^2-orthogonal to the image of the operator D_0. So if I could actually get Theta_0 equal to zero, then I would have a gluing, because that whole expression up there would be equal to zero. I can't quite do it; there's a failure, which is this Theta_0. So you can think of Theta_0 as living in the cokernel of D_0, which is a one-dimensional vector space. So I have an obstruction to gluing which lives in a one-dimensional vector space, and when that obstruction is zero, this construction will work and I can glue. And some people asked me last time: why do we want Theta_+ and Theta_0 both to equal zero? We just need this linear combination of them to equal zero. And the answer is: because then you get this uniqueness of psi_+ and psi_0. And as Helmut pointed out to me after the talk, this is quite analogous to the anti-gluing in polyfolds.
You want to make the gluing unique, or make it an isomorphism, by keeping track of this anti-gluing. So by making both of these equal to zero, it's analogous to saying that I'm making the anti-gluing equal to zero. Okay, so now we have an obstruction bundle picture. So this is the general picture. In our particular case, we have a bundle over the set of all pairs (R, t). It's a trivial bundle: the fiber over any point is the cokernel of that operator D_0, so this is some one-dimensional vector space. And we have a section of this bundle: s(R, t) is defined to be Theta_0, the Theta_0 provided by this lemma. So we try to glue, and maybe it doesn't quite work. As I learned, the way mathematicians can be successful is you turn failure into success by calling it an obstruction and writing a paper about obstruction theory. So that's the obstruction. And then this construction defines a map from the zero set of s to the set of gluings, the set of flow lines that are close to breaking. And then a nontrivial fact, which I won't attempt to explain here, is that this is a homeomorphism. So all gluings can be obtained by this construction, and this is a bijection. So then, well, that's nice, but then we have to understand what this section is. And the trick for that is we're going to approximate it by a different section. So let's try to understand what's going on here. Let's look at this expression for Theta_0. So s(R, t), to be a little more precise, is the projection of Theta_0 onto the cokernel of D_0, right? So this first term is in the image of D_0, so it doesn't come up at all. And I've got this term which measures the derivative of the equation; that's going to stay there. This psi_+, it turns out this psi_+ is very small, much smaller than everything else. And the reason why... so I didn't draw the picture of the cutoff functions, but the cutoff functions look like this.
So here is, you know, s; here's R/2; here's minus R/2. So beta_0 is sort of going like that, and beta_+ is going like this. And we can arrange that the region where it's actually getting cut off goes from, say, R/6 to R/3; so this is minus R/6 and minus R/3. Okay, and then if you look at the other equation, the one for Theta_+, that thing is actually equal to zero. And the upshot is that, away from this region where the derivative of beta_0 is nonzero, psi_+ satisfies an equation which forces it to exponentially decay as you go this way, right? So psi_+ is something over here, but then it has distance R/3, which is a very large amount of time, to exponentially decay before it gets over there. So the psi_+ that you see here is heavily exponentially decayed, so it sort of doesn't matter much. So let's ignore that. There's some other stuff here, and let's ignore that too. And then, what? This is great for math. Yeah, so then we have an approximate section. And then this u_+, we can also simplify a little bit, because we have to think about the asymptotics of u_+. So before translating, for s very negative (sorry, not t, s), what is u_+(s)? Well, it's approximately e to the minus... which way is it going to go? Is that s going to be for the section? No, this has nothing to do with the section; I'm just talking about Morse theory right now. Oh, that. Oh shit, this is supposed to be a fraktur S. Actually, if I do this, it's going to give me an error and tell me I have to use a \mathfrak. Yeah, sorry, there are not enough letters. You'd think with a Greek financial crisis they could sell us some letters for cheap, except it seems like they already did that, and all the other letters look the same as Roman letters, so we're out of luck. I have to import some letters from somewhere else. Okay, so this is going to look like the following.
So u_+(s) is approximately q plus e^{lambda_- s} times eta; I have my reasons for writing it like this. So, generically (we're adding conditions to "generically" as we go), lambda_- is the smallest positive eigenvalue of the Hessian. I wrote p here, but this is q, not p, so it's the Hessian H(q). I put a minus on lambda because it's associated to the negative end of a flow line, and I guess I don't need an absolute value sign because it's positive. And eta is some eigenfunction. And I'm going to add to my assumptions: let's assume that the eigenvalues of the Hessian are all distinct. Okay, so that's what it looks like. And then after translating, well, now we have to pair this with a cokernel element, but we've translated by a total translation distance of R. So the upshot is that this (d beta_+/ds) times u_+, when I project it to the cokernel, is approximately e^{-lambda_- R} times some number. So I can think of eta as a number; well, it's paired with some fixed element of the cokernel. So it decays like this, right? Yeah. Is that the eigenfunction corresponding to that eigenvalue? I shouldn't have said eigenfunction, because it's just an eigenvector; this is a freaking vector space here. Okay, I'm sorry. So eta is the eigenvector corresponding to this eigenvalue. Sorry, I said eigenfunction because I'm really thinking about holomorphic curves; this is just a model case for holomorphic curves. All right, and then, to make the equations a little simpler: I know the derivative of the equation at u_0 projects isomorphically onto the cokernel, so let's use this map to identify the cokernel with R. So this eta is actually just a real number now, and generically it's going to be nonzero. And then I can define what, in the paper with Taubes, is called the linearized section. "Linearized" is not really a very apt word, because it has nothing to do with linear or nonlinear.
It's just a sort of leading-order approximation to the section, and I'll call it s_0. So s_0(R, t) is e^{-lambda_- R} times eta, minus t. That's my section. And then the next fact you need, well, let's put it this way: for a fixed t, the number of zeros of the actual section, counted with, say, Z mod 2 coefficients or counted with signs, is equal to the number of zeros of this linearized section. If you want to do this carefully, there are a billion technical things you need to check. Another fact you can check is that s is smooth and, generically, s is transverse to the zero section. So the upshot is that if I want to count the number of gluings, I have to count the number of zeros of this thing. But this is now quite simple: it's just an exponential times a constant, minus t, not very complicated. So we can put together the conclusion from this. The conclusion is: if eta and t have opposite signs, well, then you're in big trouble, because there's no way this can ever be zero, so then you can't glue. And if eta and t have the same sign, then there's a unique solution R to the equation s_0(R, t) = 0, so the number of gluings is equal to one. And probably with a little more work you could show that there's actually just one gluing, so there's no cancellation, right? So this is sort of what we expected to get: we expected that you would be able to glue this configuration for either positive time or negative time, but not both. And that's what we found. And the analysis actually tells us whether t will be positive or negative: to figure it out, you have to look at the asymptotics of u_+ and pair that with the derivative of the equation. So it tells us completely explicitly for which sign of t you'll get a gluing. Any questions about this? It's about to get harder. Can you say a little bit more about what goes into this second fact?
That the number of zeros is the same for s and s_0? Well, basically you need to show that the things that I casually crossed off are small. So you're restricting attention to some set of R and t, and you want to show that as you deform s to s_0, no zeros of the section can escape outside the boundary of this region. To do that, you need to show that on the boundary of the region, s_0 is very big, and the stuff that I threw away is very small, so the section can never vanish there. How much more complicated is it if you have a flow line from q to itself, in the S^1-valued case? We're about to get to that. There is another genericity condition, because you're assuming that the eigenvector corresponding to the eigenvalue of H(q) has a nonzero projection to the cokernel, right? Because you're assuming that the projection of eta... Does that follow, or is it an assumption? Yes, okay, so there's something I forgot to say. This is not a genericity assumption. It turns out that if you look at a cokernel element, you can identify it with an element of the kernel of the formal adjoint. And if you look at the asymptotics of an element of the kernel of the formal adjoint, it has exactly the same asymptotics as this flow line u_+. So the leading eigen... well, I guess that's something one needs to check, but I think one can just prove that. Well, anyway, if you look at the asymptotics of the cokernel element, it has a leading term which has the same form as this leading term over here. So I guess it might be a genericity assumption to say that that's nonzero. In the holomorphic curve case, it actually... yeah, you're right, we maybe need to assume that. What? I'm sure it works exactly the same in the holomorphic curve case. So, yeah.
So it's an additional genericity assumption, and probably most people in the audience got lost and don't know what I'm talking about. So there's some additional genericity assumption in there. It's basically saying that the flow line is approaching the critical point with the slowest possible asymptotic decay. But I was assuming that about u_+. But I'm also assuming something about the cokernel for u_0. Yeah, because you're assuming that the projection of the cokernel element for u_0 is not zero, right? Yes. Right. So that'll be true if the leading asymptotics behave in a generic way. Right. I think I'm going to not explain that point any further. That's fine. But, yeah, you're absolutely right; I forgot to mention that. All right, other questions about this? Yeah. Isn't there some choice of orientation of the cokernel? Because you identify it with R, and then the sign matters. Right, so I'm choosing the identification in which this thing is identified with one. The derivative of the equation at u_0 is some nonzero element of the cokernel, and I'm just going to call it one. I did that just to simplify the equations; otherwise, instead of thinking of these as real numbers, I'd have to think of them as cokernel elements and use more notation. All right. Well, let's up the difficulty level just a little bit. Where's the hook? It's hiding. I need a hook to get the hook. Come on. And I probably planned this poorly; it's too complicated. I was trying to do it so that the boards wouldn't have the shadow on them. So the next example is: let's say we're in this S^1-valued case, so there can be an index 0 flow line from q to itself. So here p has index i + 1, q has index i, and at time t = 0 there's a flow line from q to itself. So I already know how to glue this, because it's the same analysis I just did.
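The analysis just recalled reduces to the single equation e^{-lambda_- R} eta - t = 0. Here is a toy numerical sketch of its conclusion (not from the talk; the function name, the sample values lambda_- = 1, eta = 1, and the large-R threshold r_min are all illustrative assumptions):

```python
import math

def count_gluings(t, lam_minus=1.0, eta=1.0, r_min=10.0):
    """Count zeros R > r_min of the linearized section
    s_0(R, t) = exp(-lam_minus * R) * eta - t.
    Large R corresponds to flow lines close to breaking."""
    if t == 0 or (t > 0) != (eta > 0):
        # eta and t have opposite signs: s_0 never vanishes, no gluing
        return 0
    # unique solution of exp(-lam_minus * R) * eta = t
    r = -math.log(t / eta) / lam_minus
    return 1 if r > r_min else 0
```

For |t| small with the same sign as eta this reports exactly one gluing, and none for the opposite sign, matching the conclusion above.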
But what if we want to glue u_+ to two copies of the flow line from q to itself? So at time t = 0, I have this configuration. And we can ask: if t is nonzero but close to 0, does there exist a flow line which is obtained by gluing these three things together? Maybe there is, maybe there isn't. We can use a similar analysis. In this situation, there are three gluing parameters: t, because as before I'm just going to replace time 0 with time t, and then there are going to be R_1 and R_2. So what I'm going to do is translate everything: the total translation distance between the upper two pieces will be R_1, and the total translation distance between the lower two pieces will be R_2. So I translate everything and then do the same business. Let's call these u_+ and u_0. And the gluing equations are going to look like this. So now the obstruction will involve two copies of the cokernel of D_0: you're going to have a theta for u_+ here, which you can get to be 0, a theta here, which lives in a one-dimensional vector space, and a theta there, which lives in a one-dimensional vector space. We continue to identify this one-dimensional vector space with R. And the gluing equations are going to look like: e^{-lambda_- R_1} eta (this is the same eta as before) plus e^{-lambda_+ R_2} eta_+ minus t equals 0; and then e^{-lambda_- R_2} eta_- minus t equals 0. So where does this all come from? What's lambda_+? Lambda_+ is the largest negative eigenvalue of the Hessian at q. And eta_+ and eta_- are determined by the asymptotics of u_0 at the positive and negative ends of u_0. So the second equation is the same kind of thing I had before; here we're looking at the cokernel of this piece.
And there's a term coming from the negative asymptotics of u_0, and the negative asymptotics are measured by this eta_-. And then this exponential here comes from the fact that I'm stretching these two things apart by distance R_2. And in the first gluing equation, this eta is determined by the negative asymptotics of u_+, and this eta_+ is determined by the positive asymptotics of u_0 over here. So those are the equations you get. These are simple equations, and now you can just analyze them. Why is there a thing with two exponential terms now when there wasn't before? I didn't understand that. OK, so the first equation comes from this middle piece here. The thing is, we're gluing things to the middle piece on both sides, and that's why there are two exponential terms. And the second equation comes from this lower piece; we're only gluing one thing to it, which is why there is one term. So in general, each piece has a gluing equation, where for everything that's glued to it there is a term involving the asymptotics of that other thing. So if we were gluing these three chalkboards together, the gluing equation for this chalkboard would have a term for the asymptotics of this chalkboard and a term for the asymptotics of that chalkboard. I'm confused as to what exactly the obstruction bundle and the section are here. OK, so in this case, it's a bundle over the set of all triples (R_1, R_2, t). And the fiber over a point is the cokernel of D_0 direct sum the cokernel of D_0, because now we're gluing two pieces which have cokernel, so for each of those pieces you have a summand. Oh yeah, it gets much worse, but I won't do the worst part. We're going to see later an example where this bundle is not trivial anymore; it's actually a really interesting vector bundle. Anyway, with these equations, maybe it was a little confusing how I got to them.
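To see the system and its solvability in one place, here is a toy numerical solver (a sketch, not from the talk: lam_minus and lam_plus are taken as positive decay rates, so lam_plus here stands for the absolute value of the largest negative eigenvalue, and all sample values are made up):

```python
import math

def solve_gluing_system(t, eta, eta_minus, eta_plus,
                        lam_minus=1.0, lam_plus=1.5):
    """Try to solve the two gluing equations
        exp(-lam_minus*R1)*eta + exp(-lam_plus*R2)*eta_plus - t = 0
        exp(-lam_minus*R2)*eta_minus                        - t = 0
    for R1, R2 > 0. Returns (R1, R2) or None."""
    # Second equation: forces sign(eta_minus) = sign(t).
    if t == 0 or (t > 0) != (eta_minus > 0):
        return None
    r2 = -math.log(t / eta_minus) / lam_minus
    # First equation: exp(-lam_minus*R1)*eta = t - exp(-lam_plus*R2)*eta_plus.
    # Whether the right-hand side keeps the sign of t depends on which of
    # lam_minus, lam_plus is bigger; this is where the sign of the
    # difference of the two eigenvalues enters.
    rhs = t - math.exp(-lam_plus * r2) * eta_plus
    if rhs == 0 or (rhs > 0) != (eta > 0):
        return None
    r1 = -math.log(rhs / eta) / lam_minus
    if r1 <= 0 or r2 <= 0:
        return None
    return (r1, r2)
```

Enumerating the sign choices of eta, eta_minus, eta_plus, t and of lam_minus minus lam_plus recovers the case analysis that comes next.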
But now that I've written them down, you can just solve them; it's completely elementary. So you can solve the gluing equations if and only if all of the following hold. First of all, eta_- and t had better have the same sign; that's just looking at the second equation. So we're fixing t, and the question is: can we find R_1 and R_2 solving these equations? The second one can be solved if and only if these things have the same sign, and then there's a unique R_2. Then you have to look at the first equation, which is a little more confusing. So we're saying that that's one; doesn't that make d/dt of V_t equal to minus 1, which is what we're multiplying t by in the equation? This reminds me of some nightmare I had once; I had some math nightmare about this, and this is starting to remind me of it. So I've sort of abstracted all of that away, and this minus sign has long since disappeared. I mean, you could just forget about this; let's just call it F or something. So this is the equation that I'm trying to solve, and then I'm identifying this with the cokernel. So I trivialized the cokernel and identified everything with 1. And then these are some real numbers determined by the asymptotics. I don't know if that helps or not. A priori, I have no idea what the signs of these numbers are; it depends on the asymptotics of the flow lines. So basically you're asking, am I approaching the critical point from this side or from that side? And then, in some obscure way, by that identification over there, one of them is actually declared to be positive and the other is declared to be negative. Anyway, you can figure this out. So, well, maybe it's not necessary to go through the whole thing, but there are some more conditions to be able to solve the first equation. So in this case, when you look at this first equation, it's actually going to matter...
...it's actually going to matter which of these two eigenvalues is bigger than the other. So let me spare you the whole thing, because I'm just going to confuse myself trying to do it. But there are more conditions, and in some cases these depend on the sign of the difference between these two eigenvalues, so I'll add to my list of genericity assumptions that this difference is not 0. So the question of whether or not you can glue depends on the relative signs of eta, eta_-, eta_+, and t, and also on the sign of this difference of eigenvalues. So it's four different signs, so it's sort of 16 different cases, and you can just go through and check each one by elementary methods. I worked out all the details in a blog posting, July 2, 2014. Or at least I worked out the algebraic details, not the analytic details. If anybody likes analysis and wants to understand this stuff better, it could be a good exercise to do the analytic details, to justify all these facts and work it all out. We did this in the paper with Taubes for holomorphic curves, but not in this Morse theory case. But I should also mention that the thesis of Jiali does some polyfold version of this for certain holomorphic curves. I haven't actually seen the thesis, but there's some related polyfold thing by Jiali. OK, so anyway, the upshot is: we have this configuration we want to glue, and we reduce the question of whether we can glue it to solving some elementary equations involving real numbers, which you may or may not be able to solve. In general, I can tell you what happens; the three possible outcomes are kind of curious. So let's suppose I want to glue u_+ to k copies of the flow line from q to itself. Case one is: you can glue only for k = 1, and only for one of the possible signs of t. That's the simplest case.
Case two is: you can glue for all k for one of the signs, and for none for the opposite sign. And case three is: you can glue for k = 1 for one sign and for k = 2 for the other sign. Those are the three possible outcomes. Remember, we were supposed to get a polynomial, which was supposed to be (1 + t)^{plus or minus 1}. So in case one, you're sort of seeing a 1 + t there. In case two, you're seeing 1 + t + t^2 + ..., which is the inverse of 1 + t. And in case three, you're seeing something like (1 + t^2) divided by (1 + t), which with Z mod 2 coefficients is 1 + t. So you are getting the correct polynomial. Well, the third case is quite weird, but it can happen, depending on what these signs are. Which of these three cases you get may depend on which flow line u_+ you're trying to glue, but you always get the same polynomial, the same power series. So I didn't go through all the calculations, but I hope I explained the basic idea of how you try to glue these things and reduce it to some elementary equations, which you can then solve. Are there questions about this? I think I have 10 more minutes, is that correct? Yeah. All right. Let's just forget all this and start over. So if you got totally lost, we'll just start over; we'll just forget about all this. I guess there's one question. Oh, yeah. You said it depends on whether the thing comes in from one side or the other side of the critical point. What does that mean? How does the critical point have sides? Well, so here's the critical point q. Here is an eigenspace of the Hessian. And if this is the eigenspace for the smallest positive eigenvalue of the Hessian, then the flow line u_+, generically, is going to either come in like this or come in like that.
So it will come in along this eigenspace, plus some exponentially decaying error terms; those are the two sides. This is a little counter to your usual intuition in Morse theory, because usually you think of a stable or unstable manifold, which has dimension bigger than 1. But when these eigenvalues are distinct, there's a preferred direction, a preferred eigenspace, a preferred line along which you approach the critical point. So that's actually the generic situation. Well, usually you want to assume the opposite in Morse theory; usually you want to assume all the eigenvalues are the same, which makes it easier, but it's not generic at all. Before working all this out, how did you know what you would get? It just seems like you'd have a lot of algebra and equations to solve before you could be sure. Oh, but I knew that d squared equals 0, because it's the same as Seiberg-Witten; d squared equals 0 in Seiberg-Witten theory. I mean, sometimes you know something's true even though you don't have a proof, and then you just have to work out the proof and it comes out. Whoops. OK, so now let's switch gears and talk about the holomorphic curve problem I care about. I'm going to introduce the problem, and then we'll talk about how to solve it tomorrow. So we're going to look at a three-dimensional contact manifold. So Y is a three-manifold, lambda is a non-degenerate contact form, R will denote the Reeb vector field, and we'll choose J, a suitably generic (meaning satisfying an ever-increasing list of desired conditions) almost complex structure. Desirable almost complex structure. Well, desired might not be generic; you have to make sure your desires are achievable. What? Come here, you know. I can imagine doing it like a dating website: I'm looking for a comeager partner. See how many responses you get to that. OK, right.
Anyway, so J satisfies the usual conditions (this is a special case of Chris Wendl's talk): J sends the derivative in the s direction to the Reeb vector field; J sends the contact structure to itself, rotating positively with respect to the orientation on it; and J is R-invariant. OK, then we're looking at holomorphic curves, again a special case from Chris Wendl's talk. So Sigma is a punctured compact Riemann surface, and the ends are asymptotic to Reeb orbits as s goes to plus or minus infinity. So these are the kinds of things that one wants to count in the SFT differential. In my case, I count them in the embedded contact homology differential, but I'm not going to assume any knowledge about either of those things; I just want to talk about gluing these things in a particular situation. [inaudible] Yeah, I put that here. That's OK. Sorry? Make a dating site for theorems and proofs. Oh, that's a good one; I could put a bunch of entries on that one. What? Theorems. I could put a bunch of theorems on that one looking for proofs. I don't really have so many proofs looking for theorems. OK, so here's the problem we've got: gluing along a branched cover of a trivial cylinder. A trivial cylinder means R cross a Reeb orbit. So you could have a holomorphic curve like this, with index equal to 1, so it lives in a one-dimensional moduli space, which means that after you mod out by the R translation, it lives in a zero-dimensional moduli space, assuming transversality. So the usual kind of gluing would be: two of these things might break like this, along some Reeb orbit, and you want to glue them. Or maybe they break along multiple Reeb orbits like this, and you want to glue them. And that's OK; you can glue these two things to get an end of the moduli space of index two curves.
And the problem that can come up is that you could have a sort of three-part curve, where you have an index one curve on the top and an index one curve on the bottom. In the simplest case, this top index one curve would have a negative end at gamma^2, where gamma^2 means the double cover of some simple Reeb orbit gamma, and the lower curve has two positive ends, both at the Reeb orbit gamma, and this curve is index one. So it looks like you can't glue these, because they have different kinds of ends; there's no way to put them together. However, you can insert in here a pair of pants which is a 2-to-1 branched cover of the cylinder R cross gamma. And it turns out that 50% of the time, depending on some conditions which I'll tell you later, this pair of pants has index 0; in general, it has index either 0 or 2. So sometimes it has index 0, and then this whole configuration has index 2. And you could ask: can this whole configuration be glued to an end of the index 2 moduli space? And the answer is: yes, it can. And you could ask how many ways you can do it; in this particular example, the answer is one way. And you can see that by looking at an obstruction bundle. So I'm out of time for today. What I'm going to do next time is discuss just this simple example of how to glue this using an obstruction bundle, which in this case will actually be a nontrivial bundle: it's not just the same fiber over every point, it's actually interesting. And then I'll mention the issues that this raises when you think about the definition of the SFT differential, because this means that certain configurations involving branched covers of trivial cylinders must contribute something to the SFT differential. So it'll be interesting to compare when we see the definition of the SFT differential next week. All right, so next time I'll talk about how to glue this. That'll be my last example. Thanks.