Thank you very much for the invitation to speak here, and for organizing a summer school in my living room — it's super convenient, excellent. Okay, so the title is maybe a little bit cryptic, but hopefully I will convince you that the result is not. So let's try to get there. First of all, a few things which I want to say even though they may be obvious if you've been listening for more than five minutes. What I want to talk about is some results in motivic homotopy theory. So I'm interested in the category of motivic spaces. What's a motivic space? Well, it's a special kind of presheaf valued in spaces: a presheaf on the category of smooth varieties which happens to be a sheaf. I will always be dealing with Nisnevich sheaves — the Nisnevich topology is some topology which is the correct one for various reasons; it doesn't matter hugely right now what it is. And then of course there's going to be some extra condition, because it would be silly to have two names for the same thing. The extra condition is so-called A1-invariance: you look at all those sheaves F such that if you evaluate on some smooth variety X, or on X × A1, you get the same thing. And this is the so-called category of motivic spaces. There are of course supposed to be lots of motivic spaces, because supposedly studying this category will tell us good things about the world — by which I mean polynomial equations. Right, so for example, if I take any smooth scheme whatsoever, it will not usually be A1-invariant. Of course it will live in the category of Nisnevich sheaves, but it may not live in the subcategory of motivic spaces. But you can just sort of brutally move it into this category, because this inclusion has a left adjoint, which is called motivic localization.
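In symbols, the definition just sketched might be written as follows (the notation Spc(k), Sh_Nis, Sm_k is standard but not from the talk itself):

```latex
% Motivic spaces: A^1-invariant Nisnevich sheaves of spaces on smooth k-varieties
\mathrm{Spc}(k) \;=\;
\Bigl\{\, F \in \mathrm{Sh}_{\mathrm{Nis}}\bigl(\mathrm{Sm}_k,\, \mathcal{S}\bigr)
\;\Bigm|\;
F(X) \xrightarrow{\;\sim\;} F(X \times \mathbb{A}^1)
\ \text{for all}\ X \in \mathrm{Sm}_k \,\Bigr\}
```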
And that's the obvious thing to do. But usually I will not even write this L_mot, and I will not write the Yoneda embedding; I will just say: view X as a motivic space. And this just means that you should do this localization, which in general is of course highly non-trivial — I want you to please imagine doing it. And then there's another class of examples which is going to be very important for my talk. Let's say you take some sheaf of abelian groups — so in this category of Nisnevich sheaves, you take some of the simplest possible objects: not sheaves of spaces but zero-truncated ones, with some extra structure, namely an abelian group structure. So that seems like a pretty reasonable thing. And then you could ask: is it the case that this sheaf actually lives in the category of motivic spaces? That's not always true. There's a condition, and it's just the A1-invariance condition from before, but for a sheaf of abelian groups this looks a little bit more familiar: it just says that if you evaluate F on X, or on X × A1, you should get the same abelian group. Okay, so this maybe was not very exciting. So what else can you do with sheaves of abelian groups? Well, you can look at their Eilenberg-MacLane spaces, or in this case the Eilenberg-MacLane sheaf K(F, n). This is going to define some sheaf of spaces, by definition now. And then I can ask again: does it live in the category of motivic spaces? And again you need this homotopy invariance property. And what does it mean to evaluate an Eilenberg-MacLane sheaf on a smooth variety? You get some space, the homotopy groups of which are cohomology groups.
So in this case, the condition is that the cohomology H^i of X × A1 with coefficients in F should be the same thing as the cohomology of X with coefficients in F, and this should hold for all i less than or equal to n. And so you see, in principle, as you make this n bigger, you get a more interesting object and a more stringent condition. And then it's natural to single out those sheaves of abelian groups where I can do this for all n at the same time. Those are called strictly A1-invariant. Right, so let's just leave it at that. Okay, so now we come to something which is close to the heart of the whole thing. Classically, you have this notion of connectivity of a space — I already implicitly used it by saying that an Eilenberg-MacLane space is particularly simple. What this has to do with is that you have some sort of sphere and you build spaces by attaching spheres, for example. Motivically, we have two spheres. Again, I'm sure you know this: there's the usual sphere S1, the usual circle if you want. And then there's an algebraic circle, which is denoted Gm — you just take the complement of zero in the affine line, and let's say you point it at one. And these spheres define pointed motivic spaces which are a bit like spheres. For example, the complex points of Gm form C minus the origin, which has the homotopy type of S1. So it looks like they're just the same thing. But this is a very topological observation, and algebraically they're different. And what this tells you is that you get two notions of connectivity out of this whole thing: you can measure connectivity in terms of S1, or you can measure it in terms of Gm. And there's no reason for these to agree in general. Now, I just said there's no reason for them to agree.
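To fix notation for later, the condition on K(F, n) and its n → ∞ limit can be written as follows (a paraphrase, with Nisnevich cohomology; conventions vary by source):

```latex
% A^1-invariance of K(F,n): cohomology is homotopy invariant in a range
H^i_{\mathrm{Nis}}(X \times \mathbb{A}^1;\, F) \;\cong\; H^i_{\mathrm{Nis}}(X;\, F)
\quad \text{for all } i \le n;
\qquad
\text{strictly } \mathbb{A}^1\text{-invariant} \;\iff\; \text{this holds for all } i \ge 0.
```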
So let me contradict myself. Here's an important example. I take a smooth scheme X and some closed subscheme Z — it doesn't have to be smooth, but I will assume that it has codimension at least d everywhere. And if you do this, then you can look at sort of the formal tubular neighborhood of Z in X: I form the quotient X/(X ∖ Z). Right, so X is a smooth scheme, so it defines a motivic homotopy type, and X ∖ Z is a smooth scheme, so it defines a motivic homotopy type too. So I can take the cofiber of the inclusion, which defines some pointed motivic homotopy type. And if Z was smooth, then you should think of this as some sort of algebraic version of a tubular neighborhood; and if Z is not smooth, then it's still some algebraic version of a tubular neighborhood, just maybe a bit harder to imagine. And it turns out that this guy, viewed as a pointed space, is (d − 1)-connected — and that's in the S1 direction and in the Gm direction: it turns out both. Okay, so how do you see this? Well, first let's say that Z is smooth. Then there's something called the homotopy purity theorem, which basically says that this really does behave like a tubular neighborhood: it tells you that it looks basically like the Thom space of some vector bundle. And so locally it looks like this sort of thing: A1/Gm, which is P1, which is S1 ∧ Gm. So the point is that locally, if Z has codimension d, it's going to look like S^d ∧ Gm^∧d. And so locally it looks (d − 1)-connected in both directions.
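The purity statement just used can be recorded as follows (the Morel-Voevodsky homotopy purity theorem; N_{Z/X} denotes the normal bundle — my notation, not the speaker's):

```latex
% Homotopy purity: for Z ⊂ X closed, both smooth, of codimension d
X/(X \setminus Z) \;\simeq\; \mathrm{Th}\bigl(N_{Z/X}\bigr),
\qquad
\text{locally } \mathrm{Th}\bigl(\mathcal{O}^{\,d}\bigr)
\simeq (\mathbb{P}^1)^{\wedge d}
\simeq S^{d} \wedge \mathbb{G}_m^{\wedge d}.
```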
And then, by the way the Nisnevich local-to-global machinery works, this also implies that it's globally (d − 1)-connected in this sense. And what if Z is not smooth? It turns out that there's some sort of filtration argument, and you find that this result is still true. Okay. So this will play a big role in what is to follow. And now we can look again at our favorite strictly homotopy invariant sheaves. So my F defines a particularly simple type of motivic space, and what happens is that this space is discrete in the S1 direction. Okay, so it looks like the simplest possible thing you could deal with in that direction — but it doesn't have to be discrete in the Gm direction. So there's some structure going on which is hidden; it's not immediately obvious, but it turns out that there's more going on. Okay — but not, in general, in the Gm direction. [Question] Yes, I saw the question: is there a nice reference for this filtration argument when Z is not smooth? Probably, but not off the top of my head. Also, when I make this claim, I'm saying that it's going to happen in the category of S1-spectra, if you know these sorts of details — I'm not claiming it in the completely unstable sense, if I want to be very precise. So I'm sure I can find you one; if you email me, I can find something, but not right now. If I remember correctly, there is something at the end of Röndigs' Advances in Mathematics paper — this filtration argument is at the end of that paper. Okay, thank you. Where was I? Yes, okay, so I was trying to explain why it's not always the case that these guys are connected in the Gm direction. And I think it's just because there's no reason for it to happen, right?
There are these two directions, and you picked something which by definition was discrete in the one direction — why would it be discrete in the other direction also? So you can just work out some examples, and you see that this fails. So let me introduce some notation. In general, a spectrum, for example, is discrete if and only if its loops vanish. So it seems important to study the Gm-loops of this guy. If this were always zero, then we would learn that these guys are always discrete in the Gm direction — but it's not. So let's give it a name: this is called F_{−1}; that's Voevodsky's contraction construction. And the point is that this need not be zero. Right, so now let me just give you an example. What you can work out — this is not totally trivial — is that this loop space is again just a sheaf, and its sections over, let's say, a field K are given by F(A1_K ∖ 0)/F(K). You can put X instead of K, with the obvious modification of this formula. And there are some strictly homotopy invariant sheaves which we know — for example, the Witt sheaf: you take the presheaf which assigns to a smooth variety its Witt group, or Witt ring, and then you take the associated sheaf in the Nisnevich topology. This turns out to be strictly homotopy invariant — not obvious. And what one can manage to compute is that if you do this contraction business, you just get the Witt sheaf back again. And this is not zero. Another very famous example is the Milnor-Witt K-theory sheaves K^MW_n — unramified Milnor-Witt K-theory. If you contract them down, they're never zero either; they are Gm-loops of one another, forming an infinite tower. Okay, so this is just something which happens, and it may be surprising at the beginning, but okay, we've got to live with it.
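The contraction and the two examples can be summarized like this (a paraphrase of the formulas just described in words; twists suppressed):

```latex
% Voevodsky's contraction: sections of the G_m-loop sheaf over a field K
F_{-1}(K) \;=\; \bigl(\Omega_{\mathbb{G}_m} F\bigr)(K)
\;\cong\; F\bigl(\mathbb{A}^1_K \setminus 0\bigr) \,/\, F(K);
\qquad
W_{-1} \cong W,
\qquad
\bigl(K^{MW}_{n}\bigr)_{-1} \cong K^{MW}_{n-1}.
```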
Now I want to put on my topologist's hat for a little bit and think about loop space theory. One slogan which I have picked up is that taking loops increases structure. Let me just throw this out here: taking loops increases structure. So what do I mean by this? Well, if X is a topological space, let's say pointed, and I take the usual loop space, then this has more structure, because loops can be composed. In other words, this is some sort of monoid — and if you want to be fancy, this is an E1-monoid, or A∞. Okay, so this is the beginning of classical finite loop space theory. If you take higher loops — so if I put an n there — then I get an E_n-monoid, and the point is that in fact this extra structure recovers everything. That's the crux of finite loop space theory, or at least its most basic result; I'm sure there are very difficult things which one can study. So now what I want to claim is that if I do the same thing with my strictly homotopy invariant sheaf F, and I do this in the Gm direction, I still get more structure. So let's take F strictly homotopy invariant, and let me look at the first loop, which I will denote F_{−1} for reasons of tradition. And now what's clear — well, what's almost clear — is that this is going to be a module over the stable maps from Gm to Gm. And the point is that this thing has been computed by Morel to be the Grothendieck-Witt ring GW(K). Okay, so what you learn is that this F — it was just some arbitrary strictly homotopy invariant sheaf which someone has given you — you contract it down once, and now suddenly everything becomes a module over the Grothendieck-Witt ring, which can be a moderately complicated ring. It's just some extra structure which drops out of the sky, and maybe it will be useful. And there's also more.
So there's also something called — well, I would call it monogenic transfers, and I don't want to go into too much detail about exactly what this is. What it does is: if you have some field K, let's say finitely generated over k, and then you have some finite monogenic extension — so a finite extension with a chosen generator x — then this yields a transfer map τ_x : F_{−1}(K(x)) → F_{−1}(K). Right, so this is some fancy way of adding things together in some special way, and we will see another incarnation of this later. But all I'm saying is that also in our motivic setting, it turns out that if you take Gm-loops, you find more structure. And so what I was saying is that classically, if you remember enough structure on the loops, then you can reverse this loop operation. So this Ω^n thing goes from (n − 1)-connected spaces to E_n-monoids — E_n-structure being, as I said, some way of encoding the fact that the first loop space has a multiplication, on the second loop space the multiplication becomes commutative, and higher loop spaces become more and more commutative. And this is actually an equivalence. So if you give me only the E_n-monoid which was the loops of your space, then I can actually reconstruct the space — you just gave me everything, okay? And what this tells you, for example, is that if X is (n − 1)-connected and Y is arbitrary, then the maps from X into Y only depend on the (n − 1)-connected cover of Y — and of course I've already learned that the (n − 1)-connected cover only depends on the loops.
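The classical recognition statement being invoked might be written as follows (this is the standard May-style formulation, not the speaker's notation; Mon^grp_{E_n} denotes grouplike E_n-monoids):

```latex
% n-fold loops: an equivalence between (n-1)-connected spaces and grouplike E_n-monoids
\Omega^n \colon \mathcal{S}^{\,\ge n}_{*} \;\xrightarrow{\ \simeq\ }\; \mathrm{Mon}^{\mathrm{grp}}_{E_n}(\mathcal{S}),
\qquad
[X, Y] \;\cong\; \pi_0\,\mathrm{Map}_{E_n}\bigl(\Omega^n X,\, \Omega^n Y\bigr)
\ \text{ for } X\ (n-1)\text{-connected}.
```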
So I can put an Ω^n there and forget about the cover — it doesn't change anything, as long as I keep the E_n-monoid structure. Okay, so this is maybe a bit of an esoteric observation, but if you do make this observation: we have motivic homotopy theory, and how does this usually work? Well, something which works classically hopefully has a motivic analog. And sometimes, if you're very lucky, you can prove this analogous thing, and then it gives some interesting result about algebraic varieties. That's secretly the plan of how motivic homotopy theory is supposed to be useful. So now let's try to think about this analog; you might guess the following thing based on that. All right, so I put some highly connected motivic space here — what do I take? Well, I take my X/(X ∖ Z) from the first example, with the codimension of Z greater than or equal to d. And now I'm going to map it into something else; let me put the Eilenberg-MacLane sheaf K(F, d) here. And so now remember, this guy X/(X ∖ Z) is (d − 1)-connected in both directions. So if you believe that this analog should hold, it tells me that this set of maps — these homotopy classes of maps — should only depend on the d-fold loop space of the target in both directions, with its extra structure remembered, okay? So it only depends on: well, if I take the d-fold loops of K(F, d) in the S1 direction, I just get back F, and I remember that it's a sheaf of abelian groups; and then if I take the d-fold loops in the Gm direction, I get F_{−d} — F_{−d} with its extra structure. Okay, so we have to somehow figure out what we think is all the possible extra structure — i.e., the transfers plus the GW-module structure. I mean, of course, we don't know this.
Maybe there's more, which nobody has discovered, but unless you provide me with more structure which it could possibly depend on, I will just guess that maybe that's all there is. And it turns out that that's true. Now, we do not have motivic finite loop space theory — certainly we do not have this — but this is an extremely special case, and you might imagine tackling it by other means, and you can: this follows from a theorem of Morel. Ah — so before I do this, let me observe that this set of homotopy classes actually has a more classically algebraic interpretation, because mapping into this Eilenberg-MacLane sheaf is just taking cohomology. So this is some H^d of something with coefficients in F. And I'm mapping out of X/(X ∖ Z), which means I take cohomology of X with support in Z. Okay, and so now this topological statement — or this statement coming from topological inspiration — says that this cohomology group only depends on the contraction. And once you know some things about motivic homotopy theory, you will see that this is true — because of some difficult results. So Morel proves that there is the so-called Rost-Schmid complex — and that's how we get to the title. It looks a bit like this: you take your F, you view it as a sheaf on the small Nisnevich site of X, and then you can resolve it: C^0(X, F) → C^1(X, F) → ⋯, and it keeps going. And this computes cohomology. Okay, so this is not super helpful yet. I will try to answer your question in a second, Sean, because I need to tell you something about what these C^i are. So the point is that this computes cohomology — and so roughly, what are these C^i?
Roughly, what I want to say is that C^d(X, F) is the sum over all the points of codimension d in X of the d-fold contraction evaluated at those points. So this is not literally true, but it is morally true. And then there's a differential, and it uses transfers and stuff. But the point is that you can check things explicitly. So now, what does it mean to take cohomology with support in Z? It just means that I have to sum not over all points of codimension d, but only over those which are also inside Z. And the effect is just that the terms in degrees below d go away — this one goes away, this one goes away, and so on and so forth. And so you find some complex whose terms start with F_{−d} of something, then F_{−d−1} of something, and so on and so forth. And so you can easily convince yourself that also this differential, which has some explicit form — you do something, you put it back here, you transfer and you multiply, and it's a big mess — never uses anything which is not already encoded in F_{−d}. And well, that's how you see that this topological guess turns out to be true. So now, what was Sean's question? Can we view the extra structure as coming from this? I don't think so. It's important that you do this with support — it comes from the fact that you're supported in high codimension. As for the extra structure on F_{−d}, I still don't see how to get this out of the fact that H^d with coefficients in F is a group. It comes from the fact that special stable maps of Gm give you special structure on the loop space. So I don't see how this is really related to cohomology.
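The shape of the complex and its supported version can be sketched as follows (morally true only, as stressed above: the twists in the terms are suppressed; X^{(i)} denotes the set of points of codimension i):

```latex
% Rost–Schmid complex (twists suppressed)
C^i(X, F) \;=\; \bigoplus_{x \in X^{(i)}} F_{-i}\bigl(\kappa(x)\bigr),
\qquad
H^*_Z(X, F) \;=\; H^*\Bigl(
\bigoplus_{x \in Z \cap X^{(d)}} F_{-d}\bigl(\kappa(x)\bigr)
\to \bigoplus_{x \in Z \cap X^{(d+1)}} F_{-d-1}\bigl(\kappa(x)\bigr)
\to \cdots \Bigr).
```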
So I feel like that would more give you extra structure in the S1 direction, but we're dealing with sheaves of abelian groups, so it already has all the extra structure in the S1 direction. Okay, so now we had this topological guess, and I explained to you that maybe for some esoteric reason you want to convince yourself that this is true, and Morel has proved that it is true. But categorically minded people that we are, of course we expect all of this stuff to be functorial. All right, so if I change X to some other variety, then there's going to be some pullback map, and clearly this should also only depend on the contraction. So that seems obvious enough. So this is the obvious question: what about pullback? Right, so let me amplify — what do I mean? So we let f : Y → X be some morphism of smooth schemes, and let Z ⊂ X be closed — closed, sorry — of codimension greater than or equal to d. And then I have to assume, of course, that the pullback f^{−1}(Z) also has codimension greater than or equal to d; this is not automatic. And I let my F be a strictly homotopy invariant sheaf. Okay, so then there's this pullback map — that's because cohomology with support is functorial by definition, whatever abstract definition you use, injective resolutions or whatever. And this group here only depends on the contraction F_{−d}, so clearly, hopefully, this map also only depends on F_{−d} with the extra structure. Okay, so this only depends on "F_{−d} plus transfers" — let me just use that as a shortcut for all this extra structure which we discovered. And right, so when I came to this little exercise about the topological expectation and I noticed that Morel has already proved it, I was very happy, because this would allow me to solve some interesting problem.
And I assumed that surely, just by souping up a little bit whatever we've done so far, it should give you this. And I will tell you in a little bit what I was hoping to do, but it turned out that I spent many hours, days, and nights trying to do this obvious thing. And I could not do it — or it took me a very, very, very long time. So it turns out that either I'm a bit dumb, or this is much harder, or equally hard as proving the original statement: it does not at all follow obviously. So let me note this theorem. This is the theorem which the title of the talk is about: pullbacks for the Rost-Schmid complex. What it's supposed to say is: you want to think that there's some map which you write down on this Rost-Schmid complex which should tell you how to do the pullback, and then you should easily deduce this result here. And it's true — at least if d is at least two, say, there is a map which you can write down on this partial Rost-Schmid complex which you think should be the pullback, and it would have the desired property — but it's very difficult to prove that this map which you do write down is indeed the map which you're supposed to write down. And basically that's what the theorem is about. Okay, so I hope the statement makes sense. So let me interject for a little bit: who the hell cares? I feel like that would be a very reasonable question. I mean, to some extent, of course, you're just testing the waters of a motivic finite loop space theory. I think by itself this is a reasonable thing, but maybe it would also be fair to say it's a little esoteric. So let me give you one corollary which one can obtain from this. This is joint work with Maria Yakerson, and it says the following. We're working over a perfect field k — I should have said this from the beginning; k is always perfect for a lot of these results. Okay, so let me fix some pointed motivic space X.
Now what can I do? What I can do is try to stabilize this guy. Spaces are hard, so let's make things simpler, and the way you make them simpler is basically by smashing with P1 — that is, with both directions of connectivity at once, since P1 ≃ S1 ∧ Gm. So what I can do is take my X and map it to Ω_{P1} Σ_{P1} X. Okay, and in some sense, classically, what happens is the Freudenthal suspension theorem, which tells you that this map — depending on the connectivity of X — will induce an isomorphism on some range of homotopy groups, and this is the stabilization phenomenon. And once you observe this, what you in fact do is iterate this a bunch of times — let's say n + 1 times here and n times there — and you take the colimit of this whole system. Right, and that colimit is going to be the stabilization, and this space at the end is supposed to be the simpler version. And so the point of the Freudenthal suspension theorem is that if you just do this — you're starting with X and then you're doing a bit more and a bit more — these homotopy groups will stabilize. So there is this nicer stable answer infinitely far out, but you actually already reach it at a finite stage. That is, I would say, a very important result in classical topology, which unfortunately we do not have a motivic equivalent of. And this esoteric result here can be used to prove the following about this map — where Freudenthal says you should get an isomorphism on the first couple of homotopy groups, depending on how high up you go: we can do this on π0, which of course is much, much weaker than what you would hope for. It says that this is an isomorphism if n is greater than or equal to three.
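The π0-statement of the corollary, in symbols (a paraphrase of the result as stated in the talk):

```latex
% Stabilization on \pi_0: the P^1-Freudenthal map is an isomorphism from three suspensions on
\pi_0\bigl(\Omega^{n}_{\mathbb{P}^1}\Sigma^{n}_{\mathbb{P}^1} X\bigr)
\;\xrightarrow{\ \cong\ }\;
\pi_0\bigl(\Omega^{n+1}_{\mathbb{P}^1}\Sigma^{n+1}_{\mathbb{P}^1} X\bigr)
\quad \text{for } n \ge 3,
\qquad \text{hence} \;\cong\;
\pi_0\bigl(\Omega^{\infty}_{\mathbb{P}^1}\Sigma^{\infty}_{\mathbb{P}^1} X\bigr).
```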
So you can imagine: you have your X, then you have Ω_{P1} Σ_{P1} X, then Ω²_{P1} Σ²_{P1} X, and so on and so forth, and then you reach the stabilization, Ω^∞_{P1} Σ^∞_{P1} X. And π0 of the stabilization is going to be the same thing as π0 of Ω³_{P1} Σ³_{P1} X. So this Freudenthal-type stabilization, which is of course supposed to happen for all homotopy sheaves, does happen at least on π0, and it happens after three steps. So I would like to believe that this is quite a nice result, and that it hopefully justifies expending energy on trying to prove the theorem. Okay, so the rest of my time I would like to spend trying to indicate to you how to prove this theorem. I will not indicate how you get the corollary from it — that would be another half hour — but I will try to do this. And at least for my taste, this is basically some sort of pretty hard-core algebraic geometry, but okay, let's try. So how do we prove the theorem? Well, it's going to be a struggle; we have to fight. Now, first of all, there are some easy cases. There's an easy case, which is when f is smooth, or more generally flat — though flatness somehow does not help. [Question] I don't believe this is known to fail. Maybe it's known not to work in the most optimal bounds which you could guess, but I don't think anyone knows these things to fail; we just can't prove them. The question was whether this stabilization result, which I proved for π0, will also hold — or is known to be false — for higher homotopy sheaves. And I believe it's not known to be false. I'm sure everyone believes it to be true, but we don't know how to prove it. Okay, so the easy case is when f is smooth.
And the reason is that in this case there's a pullback map on the entire Rost-Schmid complex, without supports, and it's compatible with the pullback on F — because basically you build it to do that, and then the universal property of the resolution immediately tells you that everything works. So smooth maps: super easy. There's another easy case, which I don't want to treat, which is when your F is what's called a homotopy module, or an infinite Gm-loop sheaf. And the reason is that in this case — I told you there's this formula which you want to write down, but it uses transfers, and then you have a problem, because you don't have transfers on C^0, so you cannot make this map. But if you have an infinite loop sheaf, then you do have transfers on C^0. And again, you can then use this construction of Rost to write down the pullback map, and you can check that it does everything that you want. And so basically the problem in general is that you cannot write down this map on the whole Rost-Schmid complex, because in degree zero you don't know what to do. You can write it down in higher degrees, but then you have to somehow argue that it's still correct. So here's a key observation. I'm not going to use the fact about homotopy modules, because the whole point of this result is to prove it for all strictly homotopy invariant sheaves, and they're definitely not all homotopy modules. So the key observation is as follows; very simple. Let's say I have my Z ⊂ X of codimension greater than or equal to d. Then what I do is look at the generic points of Z, and I only look at those, let's say, of codimension exactly d in X. Right — every point of Z has codimension at least d in X, so I'm looking at those which have the smallest possible codimension.
Not every generic point needs to do this, because there could be stupid things — like Z being a union of two pieces, one much smaller, so of higher codimension. Okay, but typically X is some smooth guy, Z is some closed integral guy, and then there's just one generic point, and whatever. And then what you do is look at this map: H^d_Z(X, F) → H^d with support in Z of the henselization. Okay, so what I do is henselize my X at such a generic point. That need not be a closed point of X at all, so this is maybe some sort of slightly odd algebraic thing to do — but okay, that's the beauty of algebraic geometry: we can do some slightly odd things. And okay, here the support should really be Z intersected with X^h_x, but it's going to get really annoying to write. And so the point is two-fold. One is that this map is an injection. And the reason is just that you look at the Rost-Schmid complex: if I take cohomology with support in Z, I chop off the terms in degrees below d, so it's going to inject into the degree-d term. And what do I see there? Well, I see exactly those points of codimension d which lie in Z, with the d-fold contraction applied. And since henselization does not change the residue field, this map is an injection. And the other point is that this map from the henselization to X is basically pro-étale — so up to some pretension, pretend it's étale. So in particular, it's basically one of these smooth maps which we understand, so pullback along it is understood. Okay, so what this means is that in general, whenever I have to pull back along some arbitrary map, I can at least shrink the target to some étale neighborhood of certain points, and this hopefully will allow me to make things simpler.
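The key observation might be recorded as follows (the notation X^h_x for the henselization of X at x is mine, not the speaker's):

```latex
% Localize at the codimension-d generic points of Z: an injection along pro-étale maps
H^d_Z(X, F) \;\hookrightarrow\; \prod_{x} H^d_{Z \cap X^h_x}\bigl(X^h_x,\, F\bigr),
\qquad
x \in \bigl\{\text{generic points of } Z \text{ of codimension } d \text{ in } X\bigr\}.
```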
That's what I want to say, or how I want to summarize it: somehow this problem is local in some specific sense, okay? So now we come to the real thing. I want to sketch the proof of the following lemma. It says as follows. I assume that the field has characteristic zero, and I let Y and X be essentially smooth. This is just some trickery which allows me to look at something like the Henselization; it's not quite a smooth scheme, but it's reasonably close to one. And I give myself some map from Y to X which is in fact a closed immersion of codimension one. And then of course I have to give myself a Z contained in X of codimension greater than or equal to d, and I have to assume that its intersection with Y, inside Y, still has codimension greater than or equal to d, and I give myself some F strictly homotopy invariant. And this is what I want to prove: this pullback map i^* from H^d_Z(X, F) to H^d with support in Z intersect Y of (Y, F) — and I will call this thing Z intersect Y here W — this only depends on F_{-d} plus transfers, that's it. Okay, so this is a special case of the main theorem. It's a special case, (a) because I'm assuming that the characteristic is zero, and (b) because I'm assuming that this map is a regular immersion — a closed immersion — of codimension one. This is not a huge deal because we've already dealt with smooth maps, so I can reduce: every map can be written as a composite of a smooth map and a regular immersion. So then you have to deal with regular immersions, and the problem is also local, so I can factor it locally as a composite of codimension-one immersions. So you reduce to this codimension-one case easily. The characteristic zero assumption is somehow serious. So I'm going to say some things, and they don't quite work in positive characteristic.
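Written out, the lemma being sketched is the following:

```latex
% char k = 0; i : Y -> X a codimension-1 closed immersion of
% essentially smooth schemes; Z <= X closed with codim_X(Z) >= d
% and codim_Y(Z \cap Y) >= d; F strictly homotopy invariant.
% Then, writing W = Z \cap Y, the pullback
i^* :\; H^d_Z(X, F) \;\longrightarrow\; H^d_W(Y, F)
% depends only on the contraction F_{-d} together with its
% transfers.
```

This is the special case of the main theorem obtained by assuming characteristic zero and codimension one.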
You have to argue more carefully, but something along the same lines also works, except you have to deal with annoying things like, well, regular not being the same as smooth, and blah, blah, blah. So this is, yeah — I have only a finite amount of time and energy to explain things to you, and I think focusing on characteristic zero — the argument is somehow already complicated enough. Okay, so we have 15 minutes. I hope that I will be able to convey some ideas to you. So what do we do? In the first step we reduce to the case where X has dimension d plus one. Then of course Y has dimension d and Z has dimension one. And of course then W has dimension zero, and I want to assume that it's just one point, and it's a rational point. Okay, so how do you do this? Somehow this is some sort of standard trickery. What you do is you replace X by the Henselization, right? So I mean, what I do is, of course, I pick my w in W, a point of codimension d in Y. Because the problem was local somehow in Y, because of this business here — the problem is local on points of codimension d in Y, in W. So I pick this point, and I just have to look somehow locally around this point, and I replace X by the Henselization in this point. And of course I also replace Y by the Henselization in this point, and Z by the Henselization in this point. Okay, and then it's a sort of standard fact that this inclusion here then admits a retraction. Okay, so this is not immediately obvious, but I mean, there are algebraic ways of seeing this, geometric ways of seeing this. Just please believe me that this is the case. And now, of course, because we're in characteristic zero, this guy is regular, and over any field whatsoever regular implies smooth — so this map here is going to be smooth. Well, essentially smooth. Okay, so now what have I done? Well, I have localized in this point, so you will see that these dimension things happen automatically, right?
And I replace k by, of course, the residue field of w. And so by this trickery, I can now assume that this w is indeed a rational point. It's just — I mean, the statement is so general, and it's going to be hard enough, so we make our lives a bit more reasonable. Okay, so now I still want some further reductions. I can assume that actually X and Y are smooth. Right, so in the assumptions, and also because I did the Henselization business here, I only said essentially smooth. So it's some cofiltered limit of smooth schemes. But I mean, all of these things are going to happen sort of at a finite stage in this cofiltered limit, and there's some continuity thing, that the value of F will then also be the colimit. So things always happen at some finite stage, and you can assume that X and Y are really bona fide smooth schemes. And then, this Z in principle could intersect Y in many points, but I can just throw out all the others. So I can assume that Z intersect Y is just W — I guess that's already included here. And I can assume that Z is smooth away from W. Right, because, I mean, Z is a curve in characteristic zero, it's only going to have finitely many singularities, and I just throw out all of them except for w. So then Z will be smooth away from w, no problem. So now this Y into X is a regular immersion of codimension one, so it's locally principal. Again, by working locally around this point w, I can assume that Y is the vanishing locus of a single function f. And finally, I can assume that X is affine. Again, all of this because it's somehow a local problem. Okay, so I will admit that maybe it's not quite clear where we're going, but let me try. I'm going to now make two claims which are basically the heart of the proof and which I'm not going to prove, because, well — time, attention span, various reasons.
So the first claim is the following. I claim that there exists a function ū from Z to A^1 with a bunch of good properties. First of all, I want it to not be zero at w. Secondly, I want this product ū·f̄ — so it's also a function from Z to A^1 — and this is going to be finite, okay? And also I want ū not to have double roots on Z, of course. Okay, so how do we do this? It's basically a Riemann–Roch argument. You can use Riemann–Roch somehow because Z is mostly a smooth curve, and this allows you to basically get all of this. And then how do you make sure it doesn't have double roots? That uses the fact that fields of characteristic zero are infinite, right? Because I can always add some constant, and then generically there are no double roots. Okay, so let's believe this for now. And then what I do is — well, sorry, let's do that: I use the fact that this X is affine, so I pick some u from X to A^1 extending ū, and then I put φ_1 equal to this product u·f, from X to A^1. Okay, and now here's the second claim. The second claim is that there exist more functions: there exist φ_2, φ_3, and so on up to φ_{d+1} from X to A^1, such that the following holds. I write φ for all of them together, (φ_1, ..., φ_{d+1}). So this is now a map from X to A^{d+1}, and it will satisfy a whole bunch of properties. So φ(w) equals zero, and φ is étale at all points of φ_1^{-1}(0) intersected with Z. And then the very crucial thing: there exists an open neighborhood U contained in A^d such that if I base change Z to U and map it via φ to U × A^1, this is going to be a closed immersion.
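Collecting the two claims in one place — here ū, f̄ denote the restrictions of u, f to Z, and the precise distribution of the conditions is as I understood them from the talk:

```latex
% Claim 1 (via Riemann–Roch and infinitude of k):
\bar u : Z \to \mathbb{A}^1,\qquad
\bar u(w) \neq 0,\qquad
\bar u \cdot \bar f : Z \to \mathbb{A}^1 \text{ finite},\qquad
\bar u \text{ without double roots on } Z.
% Claim 2 (via a general projection argument):
\varphi = (\varphi_1, \dots, \varphi_{d+1}) : X \to \mathbb{A}^{d+1},
\qquad \varphi_1 = u \cdot f,\qquad \varphi(w) = 0,
% phi étale at all points of \varphi_1^{-1}(0) \cap Z, and
\exists\, U \subseteq \mathbb{A}^d \text{ open with }
Z_U \longrightarrow U \times \mathbb{A}^1 \text{ a closed immersion.}
```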
And so, if you know about these things, this maybe looks like Gabber's lemma, or like part of Gabber's lemma, and indeed that's where I got it from. And you can prove this, since the field is infinite, by a general projection argument. Okay, so now I'm sure you all remember all the fifteen functions and schemes and everything that we had so far, but just in case you don't, and also so that I feel like I'm telling you something, let me try to give you an artistic impression of what's going on. So I hope this is going to work and help, but let's try. So what had we done, right? We had our smooth scheme X. Now I only have two dimensions, so everything is only going to be two-dimensional. And so X is just going to be some blob here, okay? So that's X. Now inside there, we had the Y, right? This was our smooth scheme of codimension one, so it's going to be some sort of curvy thing here. So this is my Y, okay? And then we had the Z, right? So we had some closed subscheme, also of codimension d, but I mean, the only choice I have is it's also going to be some one-dimensional guy. But it could have singularities, and it turns out that the problem mainly arises if the singularities of Z meet Y. So this is the kind of thing which is going to happen. So this is going to be Z. And then we had the intersection with Y, which we call W. And the big deal is that our W is this point here, which is the singularity of Z. So this is what makes everything difficult. Okay, so this is what we had started with. Okay, so what else did we do? Well, the first thing which we did was, right, we cooked up this u. So let's see, let me draw this in here. The u will give me some other guy, right? So this is going to be the vanishing locus of u. And this Y thing is, by the way, the vanishing locus of f, okay? And then we had lots of maps, the φ_1 up to φ_{d+1}. So there's some map which I call φ = (φ_1, ..., φ_{d+1}).
And this now goes to A^{d+1}, of course. So let me try to draw this. Again, I only have two dimensions. So this is somehow an A^1 here with coordinate x_1, and there's an A^1 here with coordinate x_{d+1}, and there are some more coordinates in the middle, of course, but I cannot draw them. And so what happened here? Well, what happened is that the φ_1 was the product of u and f. So the preimage of this axis is going to be the union of the blue bits, right? It's going to be the union of Y and some extra thing. And what else? We had the Z. Now the Z is going to do something — all right, let's try to draw this. Maybe like this, something like this. Okay, so what's going on here? Well, the W is still here, and it's still this singular point; we couldn't do anything about that. But also what can happen is that this u — right, this thing here — could have some new intersections with Z, right? So for example, here. Should I use some color? Let's use black, right? So here there's some new intersection; maybe I call it little y_1. So this I cannot avoid. So this is some new intersection, but I had asked that ū should not have double roots, so this is a new transverse intersection. And I said at the beginning that the problem is basically when Z has a singularity on the intersection. So basically what I'm telling you is: there's a new intersection, but it's good, we can understand it, right? And then also, what we have said is that on some open neighborhood of this guy here, the Z should be a closed immersion, right? So now my Z — what was it? It was this sort of curve here, and it turned into that sort of curve here, but now it has a new double point, right? So there's a problem here. That's one of my bad points somehow, a new problem, but that's fine, right? So what this is saying here is that it should be a closed immersion locally in this coordinate.
So I just throw out this point, right? So then I have my U. So this is just going to be an open in A^d, which in this case is A^1, I suppose, and I'm removing these finitely many bad guys, no problem. And then away from that, it's supposed to be a closed immersion. So maybe that's what I'm drawing. And so somehow, morally, what has happened here is that we managed to straighten things out, right? The X and the Y were some arbitrary smooth schemes, and now we've somehow managed to straighten them out into a full A^1. That's somehow what Gabber's lemma always does: it gives you enough room to get a full A^1, at the expense of having this new guy here and these new guys here, which we now have to deal with. But if you take nothing away from my talk, then please somehow remember that the point of Gabber's lemma is to straighten out into an actual A^1, whatever that means. So now I have four minutes to try and finish the proof. So what do I do? One thing which I do is I let U_0 be the intersection of U with the locus where the first coordinate is zero. So in our picture that's just this point here, but in general there could be more dimensions. And then it turns out that there's some open subset here which has to be well-chosen, and I will not tell you what the choice does. And then the following thing is going to happen. So maybe I will abbreviate this, and what I will say is — more steps — it suffices to understand the pullback, somehow like this: H^d_{Z_U}(A^1_U, F) goes to H^d_{Z_{U_0}}(A^1_{U_0}, F). Right, so the whole point of this game was that I replace the situation on the left by the situation on the right. So how is this any better? One thing which we can observe is that this Z_U is finite over U, right? So this is some proper morphism. So what I can do is — Z_U goes to U finite means that I can embed A^1, of course, into P^1, and then Z_U will remain closed in P^1.
And so I can instead do this with P^1, and it's also enough to understand that. And now what can I do? Okay, so I have this H^d_{Z_U}(P^1_U, F), and I'm supposed to pull it back to H^d_{Z_{U_0}}(P^1_{U_0}, F). However, this Z_{U_0} is basically just finitely many points. So this group here I can work out using the Rost–Schmid complex, right? That's just F_{-d}(w) — so that's this point — and then maybe there are some stupid other points which we have to deal with, plus a whole bunch of points, a sum of F_{-d}(y_i), these annoying new points. And now is where the transfer comes in. So there is a transfer map here, and — yes, let me write it. It goes to H^{d-1} with support in p(Z_U), by which I mean just the image under the projection in this direction, over U, with coefficients in F_{-1}. Right, so if you ignore the supports here — ah, that's it, of course, I did it wrong, yeah — if you ignore the supports here, this is just saying: what is H^d of P^1 over something with coefficients in F? You just use the fact that P^1 is S^1 smash G_m, so you should remove one from the d and you should remove one from the F, and that's how I got this. And you can check that you can play with the supports here. And this, it turns out, is basically the nature of the transfer — this is what the transfer is. And I can do the same thing down here: I can put H^{d-1} with support in p(Z_{U_0}), over U_0, with coefficients in F_{-1}, and there's another transfer here. And this diagram commutes; it's very elementary. And I can also work out this thing here: this is F_{-d}(w) again, because w is a rational point, plus maybe some other things. Okay, and so now I'm out of time, but also I'm at the end, right? Because — what does the transfer do? I told you about this abstractly defined map which has something to do with the transfer, and, well, you can see it here in this simple case, right?
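The commuting diagram being described, with p the projection away from the P^1 factor and supports written as subscripts, is:

```latex
% Pullback (vertical) commutes with the transfer maps (horizontal);
% the amscd package provides the CD environment:
\begin{CD}
H^d_{Z_U}(\mathbb{P}^1_U,\, F) @>{\mathrm{tr}}>> H^{d-1}_{p(Z_U)}(U,\, F_{-1}) \\
@VVV @VVV \\
H^d_{Z_{U_0}}(\mathbb{P}^1_{U_0},\, F) @>{\mathrm{tr}}>> H^{d-1}_{p(Z_{U_0})}(U_0,\, F_{-1})
\end{CD}
```

And the Rost–Schmid identification of the lower-left corner is H^d_{Z_{U_0}}(P^1_{U_0}, F) ≅ F_{-d}(w) ⊕ ⊕_i F_{-d}(y_i), with the F_{-d}(w) summand mapping isomorphically under the transfer since w is a rational point.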
So this is some sum of F_{-d} evaluated at some points, and they map down to some other points, and so what you need is some kind of transfer. And I mean, this construction here gives you a transfer, and that's exactly what this is. But now the point is, we had designed this so that — right, this w here, the same thing, was a rational point, so there's no field extension. So this component is actually an isomorphism. Okay, so now I'm done, right? So I had something here, and I'm supposed to figure out its image here. So I need to figure out what it is here and here. Now, suppose that I know this map here, right? So I had something here, I know it here, I know it here, so I know everything here. And also, I had told you that these new points are somehow easy. So also, I know everything here. So basically, in this big group here, I know everything except for this one value here. But I also know what happens if I transfer it down all the way to here, okay? But now, well, if I just pretend there's a zero here for a bit, transfer all of these things down and subtract that off from what it's supposed to be, and then invert this map here, I will figure out what the last component is. And so then, what have we done? Well, it suffices to understand this map. And now we're done, because by induction, right? I have now managed to reduce the d, and so now you can just go again, and eventually you get to d equals zero, and d equals zero is trivial. Yes, so I'm sorry for running over time, but I hope I have given you some idea of how algebraic geometry proves this interesting result in motivic homotopy theory. Okay, thanks a lot for a wonderful talk, Tom. Let's see, are there any questions? Maybe I can start with a question on the corollary, the result with Maria Yakerson. I think there was a number three showing up. Yes. Is it possible to somehow explain why it's the number three?
Yes, so this number three has something to do with the following thing. Yeah, okay, so what about this three? So I'm not sure if it's optimal. So what is this, right? It has to do with the following: you take F — F_{-1} has some transfers, and F_{-2} has transfers, they all have transfers. And the feeling is that this guy has transfers, better transfers, better and better transfers, right? So just like with E_n, right — if you take iterated loop spaces, classically you get a group structure, an abelian group structure, E_2, E_4, whatever. And so the feeling is that eventually, the F_{-∞} should have framed transfers. Now actually, it turns out that already F_{-3} does, right? And F_{-2}, if the characteristic is equal to zero. And the conjecture is that F_{-1} has framed transfers. But so, okay, that was a lot of waffling, but the point is that somehow the contractions give you more and more structure, and we believe that you somehow get all the structure already after three steps, or after two steps, or maybe after one step. And how many steps you need — this is the number, this is where this number comes from, right? So this number three is there because you can prove that after three steps you are somehow at the full structure. Okay, thanks. Then there's a question from Sean Tilson. He asks: is there extra structure that you now have from this corollary, like more than the abelian group structure? Yes, I mean, yes. So one thing which you do learn is that if you take π_0 of Ω^3_{P^1} Σ^3_{P^1} F, right, this is actually a homotopy module. So this has all the structure which you could possibly ask for. In fact, in the proof we learn this, right? Because, I mean, if you take a three-fold loop space and then you have good structure, this is not a super exciting thing. But actually the point of the proof is that this already — right?
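Schematically, the picture of how much structure the contractions carry, as stated in the answer, is:

```latex
% Contractions acquire more and more structure; "framed transfers"
% is the full structure one hopes for:
\pi_0\bigl(\Omega^3_{\mathbb{P}^1}\, \Sigma^3_{\mathbb{P}^1}\, F\bigr)
\text{ is a homotopy module;}\qquad
F_{-3} \text{ has framed transfers,}
% F_{-2} has them if char k = 0, and conjecturally already F_{-1}.
```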
Even without taking loops, you already have a lot of extra structure. So the answer is yes, we definitely do get more structure on various things, and this is basically the heart of the proof. Then there's a question: is there an analog of May's recognition principle for multiple loop spaces? Well, does there exist one in the world? I hope so. Can we prove it? I believe not, right now. So there's sort of an S^1 loop space theory, which says: can you deloop in the S^1 direction? And I would argue that this is probably reasonably well understood. I'm not sure if it's written down in this language, but this sort of thing we can probably do. But the delooping in the G_m direction is very, very hard. So we do not have a recognition principle for even something like P^1 loop spaces. I mean, you could ask for G_m loop spaces, but the problem is that, because G_m is not connected itself, these things are sort of — it seems likely to me that you might have something for P^1, or P^1 to the smash two; for this you might have it, but we do not have any of these. We do have a recognition principle for motivic infinite loop spaces — so, like saying that group-like E_∞-monoids are infinite loop spaces. We have something like that: group-like spaces with framed transfers. But we definitely do not have this at finite stages. If we did, then my result would be much easier — or, I mean, as easy as proving this recognition principle. Yeah, please work on that and prove it, that would be great. You used characteristic zero to prove your theorem, but characteristic zero doesn't appear in the hypotheses of the theorem. Yes — no, I mean, I used characteristic zero to make the already probably long and hard-to-follow argument a little less annoying.
So in positive characteristic, you do something which is roughly the same story, but this reduction at the beginning — this point here — then becomes a few pages, to argue that you can still do something like this in positive characteristic. But yeah, I did this only to simplify the exposition and only prove a special case. I have another question. Can you comment on the zero, the π_0, in your corollary with Yakerson? Well — could you change it to π_1? No. I would like to believe that, but what we do is very much confined to π_0. So I would conjecture, or you would guess, that I can do something like π_i here, and then maybe n greater than, I don't know, something like 3i, or 3 plus i, I don't know how it goes. So that would be what you or I would think, but I suspect that this probably requires different kinds of attacks. I don't know. So I definitely cannot do it; I wish I could, but I can't. Can you explain where the zero comes up in going from the transfer result to the corollary? Yes, I think I can do that. So how does this work? The way this works is that I look at the category of S^1-spectra, and then I have the category of motivic spectra, right? So I can go to here and I can go to here, and basically I want to figure out what this composition is. And I can imagine this happening in spaces, right? So I can first smash with G_m and go somehow to this — S^1, okay, one — and then I go to two, and I keep going, and I can always factor it like this, right?
I mean, I'm just saying that if you do an infinite iteration of G_m loops and G_m suspensions, you can do one, and one, and one, and then eventually you always have to do infinitely many. And what the proof eventually shows is that if you look at the hearts here — right, so these categories in some sense, the zeroth one and the first one and so on and so forth — in a way which I find difficult to make precise, they approximate this category. Well, I guess I can take some limit of these categories or something, but whatever, right? And so what we prove is that SH^{S^1}(k)^{(n),♡} — right, it has this natural functor to SH(k)^{eff,♡} — and this is an equivalence for n greater than or equal to three. And then eventually you get the corollary from that by some formal manipulations. And the way this works is that basically we know very explicitly what this guy is, and so we have to somehow describe these things very explicitly, and we just fight our way through, and eventually we see: what are the objects in here? They're these sheaves, and they have some extra structure, and then eventually we argue that this is all the structure. And so if you want to do it for, let's say, π_1, right — then you don't have to look at the heart, but you have to look at something like SH^{S^1} concentrated in degrees zero and one — so, I don't know, some sort of motivic 1-types — and you could try to do the same analysis, but it's going to get much more complicated, right? And then you have to do it with these 1-types, and — yeah, so that's my attempt at answering your question. Okay, great. Are there other questions? Yeah, yeah, just to understand more precisely: so you don't build pullback maps on the Rost–Schmid complex, but instead you prove — that's it — instead you prove this independence result, is that right? Yes — well, I haven't really thought about that in detail. So maybe I should say, right?
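The statement being referenced here reads:

```latex
% The equivalence of hearts behind the corollary:
\bigl(\mathcal{SH}^{S^1}(k)^{(n)}\bigr)^{\heartsuit}
\;\xrightarrow{\;\simeq\;}\;
\mathcal{SH}(k)^{\mathrm{eff},\,\heartsuit}
\qquad \text{for } n \ge 3,
```

from which the corollary about π_0 follows by formal manipulations, as the answer goes on to explain.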
Well, if you have this thing, you have C^0(X, F) goes to C^1(X, F) goes to C^2(X, F), and so on and so forth, right? Now you can do the same thing with Y, okay? And then the dream would be that you have some maps here which make everything work. But the problem is that there is no transfer here, in degree zero, and then you cannot write it down. So that's the problem, okay? But instead, I mean, you can do the support-in-Z thing, okay? And then this guy just goes away — I mean, this term just doesn't exist, and maybe some more don't exist. And then the point is, there's this formula which Rost has written down for how to do it. So there exists a map here, and there exists one here, and everything commutes, right? And so the ideal thing would be to prove that this is the correct thing. And I think what my result shows — but I have to think about that — is that in the lowest terms, if these early ones are all zero, right, then this map here does the correct thing. So there's this fantasy map which you write down, but it will actually give you the thing which comes, sort of implicitly, out of what I'm doing with the transfer, or whatever. And my feeling is that you might be able to soup this up, to learn by some induction that it does actually the correct thing — so that these maps here, which you can write down, all induce the correct map on cohomology — but I have not actually tried to do that. And I think it would be maybe annoying, but it's not out of the question. So I'm not sure. But yeah, as stated, it's pullbacks for the Rost–Schmid complex with supports, and I definitely don't do this in general. I see. Okay, thanks, thanks. Okay, anybody else? I think that concludes the questions. So thanks again, Tom, for a wonderful talk. And the last question, that's from Dylan — he also asked about the negative stable homotopy sheaves, and the answer is no. Okay, thank you. Thank you.
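The dream, and the problem with it, in diagram form — the maps labeled "Rost" in positive degrees come from Rost's formula, while the degree-zero map marked "?" would need transfers on C^0 that are not available:

```latex
% Pullback along Y -> X on Rost–Schmid complexes (amscd package);
% with supports in Z of codimension >= d, the terms in degrees
% below d vanish, so the missing degree-zero map is not needed:
\begin{CD}
C^0(X,F) @>>> C^1(X,F) @>>> C^2(X,F) @>>> \cdots \\
@V{?}VV @VV{\text{Rost}}V @VV{\text{Rost}}V \\
C^0(Y,F) @>>> C^1(Y,F) @>>> C^2(Y,F) @>>> \cdots
\end{CD}
```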