It's an honor and a pleasure to give a talk at this conference, so thank you very much for inviting me. My talk will be about logarithmic aspects of resolution of singularities, both de Jong's altered resolution and the classical one. I should also mention that I posted a link to the slides on my web page and sent the link in the chat, so you can go back a few pages and not be stuck with the current one; it might be convenient during the talk to view the slides separately. Okay, let's start.

I was lucky in the sense that my first project where I seriously used the logarithmic structures of Fontaine-Illusie was actually a joint project with Luc Illusie, so I could study things from Illusie himself. Our project was about Gabber's version of de Jong's altered resolution; I'll discuss it a bit later. The intuition for log geometry, and the confidence in log geometry, which I got during this project were very helpful for the recent advances. The main part of the talk will be about recent advances in classical canonical resolution. It may seem unrelated, but I'll try to explain the connection. The recent advances are a joint project with Dan Abramovich and Jaroslaw Wlodarczyk. We extended classical canonical functorial resolution to morphisms: canonical theorems of semistable reduction type. We also obtained a much faster and simpler algorithm for resolution of singularities, which we call the dream algorithm. It will be only tangential in this talk: I'll mention it a bit, but the main part will be about logarithmic aspects. Ironically, the dream algorithm does not use log geometry at all; it was discovered because of log geometry, but it does not use it. This is one of the reasons why we won't concentrate on the dream algorithm in this talk. It does have a log variant, developed by a student of Abramovich, so it can also be done in a logarithmic setting. Good.

Now the plan. We'll talk a bit about altered resolutions, and I'll also mention the joint project I worked on with Luc. After that, the main body of the lecture will be about logarithmic resolution, starting with motivation and formulations; then I'll describe Hironaka's approach, and after that I'll explain the logarithmic twist one has to apply to the classical approach.

Okay, let's start with altered resolutions. To be brief, I'll just formulate one more or less final result, which is about as general as many things known about altered resolutions. We need the notion of an alteration of a morphism. What does it mean? Given a dominant morphism f: Y -> X of integral log schemes, or schemes (say, schemes with trivial log structure), by an alteration of f we mean a morphism f': Y' -> X' where both Y and X are altered: a compatible pair Y' -> Y and X' -> X of proper, generically finite morphisms whose rank, or degree, is not divisible by any prime l invertible on X. Ideally we would like the degree to be one, but the best we can do at present is prime to every l invertible on X. The theorem from 2017, altered resolution of morphisms, says the following: suppose we are given a finite type morphism Y -> X of integral FS log schemes with generically trivial log structures, and assume also that X can be resolved in the classical sense; for example, it is a point, or a curve, or even a quasi-excellent surface, because there is classical resolution for surfaces.
Then there is a log smooth alteration: given any such f, we can alter Y and X and get a log smooth morphism f': Y' -> X'. So we can resolve morphisms in the log category.

Now a bit about history. Altered resolution was first discovered by de Jong in 1995. He considered the case where the dimension of X is at most one, mainly a point or a trait: so resolution of varieties, and semistable reduction over a trait. He also proved an equivariant version, with a group action. After that, Abramovich and de Jong, in '96, proved the result in characteristic zero with X a point. So they resolved varieties in characteristic zero by a completely new approach: they showed that de Jong's method is also able to resolve varieties in characteristic zero. Gabber announced around 2005 that in positive characteristic one can also control the degree of the alterations, at least at a single prime l: one can get a prime-to-l alteration, with the dimension of X still at most one. In our project with Illusie, in 2014, we actually worked out Gabber's program. It was not that easy, but we managed, and we proved moreover that one can take any X, not only X of dimension bounded by one. This required a slightly different deduction scheme, but we used many ingredients of Gabber's program. And in 2017 a few more valuation-theoretic techniques were used to strengthen the method.

Q: Is this the statement from the Paris seminar? A: Yes. Q: When you write "integral," it's slightly ambiguous, because you probably don't mean integral in the sense of log geometry, but integral in the sense of scheme theory. A: That's correct; though all my log structures will be FS, so in particular integral. You're right: here "integral" just means integral on the level of varieties. Q: And when you make the morphism nicer using an alteration, are X' and Y' again supposed to be integral, or could there be several irreducible components with the degrees summed? A: In this case we assume them to be integral. Q: Another question: is there a version of this theorem where you don't have log structures, but the altered morphism is literally semistable? A: Soon, in a couple of slides.

Now the method. The proof of all these results, as found by de Jong, runs by direct induction on dimension: a morphism of relative dimension d is split into d relative curves, which are resolved one by one; see the sketch after this paragraph. We start with X_0, which can be resolved, being of small dimension or by some inductive assumption; then we resolve f_1 and get X_1, which is log smooth; then we resolve f_2 and get X_2, which is log smooth; and a bunch of alterations is collected along the way. The idea is very simple: just resolve dimension by dimension, one by one. This requires resolving morphisms of relative dimension one, and the role of log geometry here is crystal clear: a relative curve can be resolved only in the log category. You cannot make such a morphism smooth by any alteration, only log smooth, or semistable in the best possible case. The proof of resolution of morphisms of relative dimension one is more or less classical. It's based on properness of the moduli space of stable curves, in one of the proofs (by now there are a few), and on the semistable reduction theorem for relative curves, which actually is the first relative resolution result that was discovered. Okay. And the control of the rank is done by quotients.
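Going back to the inductive scheme, here is a schematic rendering of the tower; the notation is mine, condensed from the description above. A morphism of relative dimension d is factored, after suitable alterations, into relative curves:

\[
Y = X_d \xrightarrow{\;f_d\;} X_{d-1} \xrightarrow{\;f_{d-1}\;} \cdots \xrightarrow{\;f_2\;} X_1 \xrightarrow{\;f_1\;} X_0 = X,
\]

where each f_i is a relative curve. One first resolves X_0, then alters the tower so that f_1 becomes log smooth (semistable in the best case), then does the same for f_2, and so on up to f_d. The control of the degree by quotients, just mentioned, has to be exercised at each stage of this tower.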
So we resolve something equivariantly, and we divide back so that log smoothness is preserved. This works if the action is so-called toroidal; Gabber calls such actions very tame. An observation: the classical context worked with regular schemes and log structures given by SNC divisors, but everything works even more easily if we generalize to log smooth, or log regular, log schemes. Moreover, this generality is critical when we want to divide by a toroidal action, because making an action toroidal, the so-called torification theorems, works only in the general context of log regular log schemes. By the way, torification was discovered by Abramovich and de Jong in their work of '96, and the word "torification" is just a joke: when it was discovered and they saw that it works, Abramovich wrote an email to de Jong saying it is toroidal, it is terrific. So it's just a play on words. Okay, good.

Now, what can we take from this? A principle, which I think works often: once log structures are used, there is no reason to be stuck with smooth schemes and SNC divisors. You should go to the general context of log smooth or log regular schemes and morphisms. In a sense, from the point of view of log geometry all FS monoids are equal, like all animals are equal. And if needed, you can afterwards improve the combinatorics by a separate routine.

And here is the theorem I was asked about, a theorem with Adiprasito and Liu: semistable reduction for morphisms. In the altered resolution theorem which I formulated two slides ago, one can in addition achieve that Y' and X' are regular and the log structures are given by SNC divisors. So you can achieve more; literally, this is the best possible resolution of morphisms: locally, parameters on X' pull back to products of parameters on Y'. It is deduced from the theorem two slides ago by hard combinatorial methods. All one has to do is improve the monoids by blowups and subdivisions, but it's a really difficult combinatorial problem: a sort of relative version of the main combinatorial result of KKMS, and it uses polytopes, which is also difficult. Okay. That's all I wanted to say about altered resolution; we take one principle with us. Now let's see how this worked its way into classical resolution.

The rest of the talk is about the joint project with Abramovich and Wlodarczyk on resolution of singularities over a field k of characteristic zero. For simplicity we always work with varieties of finite type over the field k; we can deal with larger generality, but for lecture purposes we stick with this. Our goal is to resolve morphisms and log varieties, and I'll say a bit about the dream algorithm. References for the talk: logarithmic resolution is done in two papers. First, we resolved logarithmic varieties in '17; this is already published. And there is now a submitted paper about the extension to morphisms. In addition, there are two papers on dream algorithms: a paper without log structures, and a paper with log structures by Quek, Abramovich's student. Okay.

Now the motivation for this project. The main motivation is as follows: we wanted to improve the result about resolution of morphisms which, in characteristic zero, is due to Abramovich and Karu. De Jong's method is not canonical, and even if I am given a morphism with a large smooth locus, we have no control over that smooth locus: it can be destroyed, because we have to choose fibrations. It's not canonical, and we have no control.
So the main goals of the new project were, first of all, to resolve morphisms so that the log smooth locus is preserved; in particular, to prove semistable reduction over a non-discrete valuation ring. Hironaka's theorem implies semistable reduction over a discrete valuation ring (say, a quasi-excellent one); but over a non-discrete one, the only thing you can do is spread out, get a family over a higher-dimensional base, and try to resolve there. And you want your generic fiber, which is smooth, to be preserved, so one needs something new. Second, do this as functorially as possible: try to do it canonically, compatibly with base extensions. Hironaka's semistable reduction is not compatible with ramified extensions of the trait, and our method will be. So, functoriality. Third, clarify the role of log geometry in classical resolution; in a minute I'll explain what this means.

Now, the only hope was to use Hironaka's embedded resolution method. Why? Because this is the only canonical method we have. As I explained, there are essentially two methods to prove resolution in any dimension, de Jong's and Hironaka's, and de Jong's is not canonical for sure. So we hoped to use Hironaka's method, but for log smooth ambient varieties rather than smooth ambient varieties: to shift Hironaka's method completely into log geometry. And why did we hope this is possible? Not only because we had no other tool; we had some indication. In Hironaka's approach there are signatures of log geometry (I will point out where), and the hope was that, by the monoidal democracy principle, if there is log geometry in Hironaka's method, it should work for general log smooth objects. This principle gave us the hope to start.

Okay. Now a couple of words about classical resolution. Classical resolution aims to take an integral variety Z (this time just a variety, not a log variety, so that no confusion is possible) and to find a modification Z_res -> Z with Z_res smooth. Hironaka proved in 1964 that it exists, and got a Fields medal for this. Then many people tried to understand what Hironaka did and to simplify it, and Hironaka himself also worked on this a lot. In the 70s, Hironaka and Giraud found the notion of maximal contact, which will be important later. Villamayor and Bierstone-Milman, independently, in the 80s and 90s, constructed an algorithm, not just an existence proof: a canonical algorithm for resolving singularities. And since then, essentially the only available algorithm was this algorithm of Villamayor or Bierstone-Milman; it's essentially the same. Many different proofs, or constructions, were given, but the algorithm is the same. So our logarithmic algorithm was, in a sense, the first really new one. And Wlodarczyk in 2005 proved that the algorithm in fact satisfies a stronger property: it is not only canonical, it is functorial for all smooth morphisms. If Z' -> Z is smooth, then the resolution of Z' is the pullback of the resolution of Z. This is a stronger claim, and it is easier to prove, as often happens with inductive arguments. It also yields equivariant resolution, so it's useful for applications.

Now about our results. In '17 we constructed an analog of the classical algorithm in the logarithmic world. If we want to resolve morphisms, it's clear that we should go to the logarithmic world, and I also gave a few more reasons to do so. Now, morphisms are complicated things.
So if you want to do something logarithmic, start with varieties; just develop the theory there. So here, unlike Hironaka, our theorem resolves log varieties: it resolves the variety together with the divisor giving the toroidal structure, and you get the resolution. But we constructed an algorithm which is not only logarithmic: it is functorial with respect to all log smooth morphisms. This functoriality is completely out of reach for Hironaka's algorithm. It's something new, and it's important: in the logarithmic world you must work logarithmically, so functoriality here is also much stronger. This was the main novelty, together with the method itself.

And then, in the next paper, in the sequel, we proved that this algorithm developed in '17 actually works for morphisms. The very same algorithm constructs a modification of X so that X_res -> B is log smooth; but it may fail if the dimension of B is larger than one, and it fails for a good reason: it can happen that you also have to modify B. So a new ingredient was to prove that there exists a modification of B such that, after the modification, the base change can already be resolved by the algorithm of '17. Once you modify B enough, you can resolve. Moreover, this is then compatible with any further base change; it's independent of the base. So far, in the arXiv version, the modification of B is not canonical, so the resolution is canonical only relatively, once you fix some B. But we are working on a canonical modification of B too; we are in the middle of this work, and it's clear that it will be done. So these are the new things about the algorithm.

I gave motivation and formulations; now I'll describe the classical algorithm, and at the end I'll explain how it can be twisted into the logarithmic version. All canonical methods before our work actually constructed essentially the same algorithm. You can work locally, because you are building something canonical: if you do it locally, it glues automatically. The resolution is embedded: one locally embeds X into a manifold (by manifold I always mean a smooth variety in this talk) and then works with the pair. One looks for a sequence of blowups of the ambient manifold such that a certain transform of X, namely the pullback minus a few copies of the exceptional divisor, becomes smooth; this transform of X is then the resolution of X. Functorial embedded resolution implies functorial non-embedded resolution, because the embedding is essentially unique. I will not dwell on this point; the reduction from non-embedded to embedded is simple.

Main choices. It turns out that this classical algorithm makes a lot of choices, which look so natural that people are often not even aware they are being made. First choice, the most natural one: we only blow up smooth centers. Why? Because we want the ambient space M to stay smooth throughout the algorithm. So we construct a sequence of blowups: M_i is blown up at a smooth center V_i and we get a smooth M_{i+1}; see the display below for the notation. And by the way, I want to stress that already this is a decision: in our algorithm the centers will be different, and in the dream algorithm the centers are different. So one can play even with this choice; it's essential. Transforms: in Hironaka's approach, you pull back X and subtract a multiple of the exceptional divisor, the most natural thing you can do.
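In symbols, the blowup sequence and the transform just described look as follows; this is a minimal rendering of the standard notation, with letters of my choosing rather than from the slides:

\[
M = M_0 \xleftarrow{\;\sigma_1\;} M_1 \xleftarrow{\;\sigma_2\;} \cdots \xleftarrow{\;\sigma_n\;} M_n,
\qquad \sigma_{i+1} = \mathrm{Bl}_{V_i}(M_i), \quad V_i \subset M_i \ \text{smooth},
\]

and if the ideal I_i has order at least d along V_i, one subtracts d copies of the new exceptional divisor E_{i+1}:

\[
\sigma_{i+1}^{*} I_i \;=\; \mathcal{I}_{E_{i+1}}^{\,d} \cdot I_{i+1}.
\]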
Why must you subtract something? If you pull back completely, you definitely get something which cannot be smooth, because it has several components: it contains copies of the exceptional divisor. So at the very least you must remove some copies of the exceptional divisor. Choice of centers: there is an invariant in the algorithm, which I'll describe a bit later, but the main component of this invariant is the order of the ideal defining X. I'll explain later what the order is, but it's something very natural; a very crude primary invariant. History: in addition, the usual algorithm would run into a loop if you just used this primary invariant (I'll give an example in a couple of minutes). Because of this, it has to use history; it cannot work without history. History is recorded by the exceptional SNC divisor E, and the number of its components at a point will be another primary invariant. And finally, induction: the algorithm runs by induction, but not induction on dimension by fibrations; it is induction on codimension, on a hypersurface, then a hypersurface inside the hypersurface, and so on. In the ambient manifold we choose a maximal contact hypersurface so that the problem can be restricted to it, and so on; this is the mechanism of the induction. So the actual invariant is (d_1, s_1), then (d_2, s_2), the invariants on the maximal contact, then the invariants on the next maximal contact, and so on: a sequence of pairs. Okay, good.

Now, history. The classical algorithm, in addition to its subtle inductive structure, must encode history; with the choices described, no history-free algorithm exists. Here is an example of no progress. Take the ambient manifold A^4 and the hypersurface given by the vanishing of x^2 - yzt. Its singular locus is just the union of three coordinate lines: the y-axis, the z-axis, and the t-axis. There is a symmetry permuting y, z, t, an S_3-symmetry, and inside the singular locus the only S_3-equivariant subscheme containing the origin is the origin itself. So if we want something canonical, it must blow up equivariant centers, and then it can only blow up the origin. If we blow up the origin and look at a chart of the blowup, the pullback looks like (y')^2 times the same expression in new coordinates: the total pullback consists of a strict transform, which looks exactly like the original equation, and two copies of the exceptional divisor. So after removing the exceptional divisor we are stuck with the same equation; it does not improve. And if we have no memory, we will do the same thing again and never stop. A similar computation shows that even for the Whitney umbrella, when you blow up the pinch point you again get a pinch point. So Hironaka's algorithm must use history. But using weighted blowups, and not just blowups, we constructed in 2019 the dream algorithm, which is as simple as possible: it defines an invariant, the center with maximal invariant is blown up, and the invariant drops. There is no history. And because there is no history, one does not even have to consider the exceptional divisor in that algorithm; and it works.

Now about the boundary: why is history encoded in the boundary in Hironaka's approach? It's very simple. Once we blow up M and get some M', any point x on the exceptional divisor has a God-given coordinate t, unique up to a unit; see the computation below.
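To make both the no-progress example and this canonical coordinate concrete, here is the chart computation; a routine check, in the coordinates x', y', z', t' of the y-chart:

\[
x = x'y', \quad y = y', \quad z = z'y', \quad t = t'y'
\;\Longrightarrow\;
x^{2} - yzt \;=\; (y')^{2}\bigl((x')^{2} - y'z't'\bigr),
\]

so the total pullback is two copies of the exceptional divisor \{y' = 0\} times a strict transform given by the same equation as before. And the equation y' of the exceptional divisor is exactly the coordinate that is canonical up to a unit at points of the new boundary.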
And it comes from the history of the resolution. So if we want to make fewer choices and to remember history, we should use this coordinate, always, in all our computations; and this is what Hironaka does. Inductively, for a sequence of blowups of manifolds, we define the total boundary E_{i+1} to be the preimage of the i-th boundary plus the new exceptional divisor; call it the accumulated boundary of M_{i+1}. We always work with a coordinate system t_1, ..., t_n such that both the new center and the boundary at this stage can be expressed in these coordinates. In such a case one says that E_i and V_i have simple normal crossings: V_i lies in a few components of the exceptional divisor and is transversal to the union of the other components. We call the boundary coordinates exceptional, or monomial, and even denote them differently, m_1 up to m_r. So our coordinate system has some usual coordinates, where we have choices, and exceptional coordinates, which are God-given up to units. If one only blows up such V_i, the boundary automatically stays simple normal crossing at every stage; whereas if I blew up a smooth center not transversal to the boundary, I could destroy the boundary, and the next boundary would fail to be SNC. So it's a must: if we want to use the boundary as an SNC divisor, we must blow up only such centers. This restricts our smooth centers.

Now, the role of the boundary. The good news is that once we use monomial coordinates we have fewer choices, which is what we wanted: we avoid loops. Also, the boundary can accumulate part of I: we split I as I = I_mon * I_pure, where I_mon is the maximal invertible monomial ideal dividing I, and I_pure cannot be divided by monomials. This splitting will be essential in just a minute. The bad news, really the other side of the same coin, is that we must treat E and the monomial coordinates with special care, and there are fewer possibilities for coordinates; so sometimes it's also a problem. Okay, good. Many technical complications of the classical algorithm are actually caused by the fact that it separates regular and exceptional coordinates badly, and I'll point out where this happens. First of all, in the definition of the order: we have two classes of coordinates, but in Hironaka's approach they are mixed, whereas in our approach we will separate them completely, as you will see. Good.

Now, principalization. The idea of splitting I into a monomial part and a pure part is reflected as follows. By the principalization problem we mean the following reformulation of embedded resolution. First, once we embed X into M, we replace X by its ideal I on M and work only with the ideal. From now on we ignore the geometry of X completely; we work with the geometry of M and an ideal on it. And we solve the following principalization problem: find a sequence of blowups as above, of manifolds with boundary, such that the pullback of I to M_n is invertible and monomial. It becomes just what I called I_mon, and I_pure is completely killed; no pure part. This means the pullback is supported on the boundary E_n. It looks like a different problem, but it turns out to be equivalent to embedded resolution; actually, sorry, it's stronger. The magic is that the last non-empty strict transform of X, let's denote it X_l inside M_l, is actually a component of E_l.
Hence X_l is smooth and has simple normal crossings with the other components of E_l. So the magic is that if you can solve the principalization problem, you automatically solve the embedded resolution problem. From now on we'll discuss principalization: we have replaced a geometric problem by an algebraic problem about ideals. Moreover, principalization does not only resolve X_l; it also takes care of the history divisor: since X_l and E_l have simple normal crossings, the restriction of E_l to X_l is SNC. So we wanted to solve one problem, and we solved a stronger, logarithmic problem. This gives a strong smell of log geometry, and it was one of the indications that log geometry is lurking behind Hironaka's approach. A great profit: working with ideals provides a lot of flexibility, as we will immediately see.

Okay, order reduction. The main invariant of the algorithm, as I said, is the order: the order of the pure part, because the monomial part is sort of our friend, while the pure part is our enemy; we want to decrease the pure part. The order of an ideal at a point is defined as the minimal order of vanishing of its elements. It's as natural as you can imagine: at the origin, the order of x^2 - yz^2 is two, witnessed by the monomial x^2, and the order of the other example on the slide is five, because of the corresponding monomial. In addition, one works not just with ideals but with so-called weighted, or marked, ideals (I, d), where d is a number. This number indicates what type of transform we want to do: d says that we want to remove d copies of the exceptional divisor. So we only use blowups along centers contained in the locus where the order of I is at least d; we call this locus the (I, d)-singular locus, the singular support of the marked ideal (I, d). If we blow up such a center, we can automatically update I by pulling it back and dividing by the d-th power of the exceptional divisor: blowing up inside the locus where the order is at least d guarantees at least d copies of the exceptional divisor in the pullback, so we can subtract them. We already saw such an example: we blew up x^2 - yzt and removed two copies of the exceptional divisor.

Order reduction finds a sequence of blowups with boundaries (to save space I did not write the boundaries here) which are (I, d)-admissible in the sense above, that is, blowing up only such centers, and such that the (I_n, d)-singular locus is empty. So it arranges that the order of I_n at every point is strictly less than d: we blow up the points where the order was at least d, and we drop below d; we reduce the order of I below d. In principle, the existence of such order reduction immediately implies principalization: just take d = 1, start with the ideal, and kill it completely, using such transforms and factoring out monomial parts at each step. A remark: the main case is actually not d = 1; the main case is d equal to the order of I_pure, the most natural choice. Our invariant says that the worst problem happens where the order is maximal: reduce the maximal order, then the next one, and so on. So the main case is the maximal order case. But for inductive reasons we also have to deal with the case where d is not the order of the pure part but something smaller; a sort of bad karma inherited on the maximal contact from the general problem. Okay, good. And now we go to the concrete part; first, a small worked illustration of orders and admissibility.
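Here is the promised illustration; the definitions are standard, and the concrete check reuses the example above:

\[
\operatorname{ord}_p(I) \;=\; \min_{f \in I} \operatorname{ord}_p(f),
\qquad
\operatorname{sing}(I, d) \;=\; \{\, p \in M \;:\; \operatorname{ord}_p(I) \ge d \,\},
\]

and a blowup \(\sigma = \mathrm{Bl}_V\) is (I, d)-admissible when \(V \subseteq \operatorname{sing}(I, d)\); in that case \(\sigma^{*} I = \mathcal{I}_{E}^{\,d} \cdot I'\) defines the controlled transform I'. For instance, for I = (x^2 - yzt) and d = 2, the locus \(\operatorname{sing}(I, 2)\) is exactly the union of the three coordinate axes, and the chart computation earlier shows \(\sigma^{*} I = (y')^{2} \cdot I'\) after blowing up the origin.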
Just one or two slides about the concrete work. Maximal contact, the miracle which enables the induction on dimension, and the miracle which only happens in characteristic zero (we have no idea what to do in characteristic p; there is no analog of this phenomenon), is the following: in the maximal order case, when d is just the order of I, the order reduction of (I, d) is equivalent to the order reduction of the so-called coefficient ideal C(I), restricted to a hypersurface H of maximal contact, with order d factorial. So any blowup sequence which reduces the order of C(I) on H gives rise to a blowup sequence which reduces the order of (I, d): you blow up something in H, then something in the strict transform of H, and so on; just the same sequence of centers. Here C(I) is the coefficient ideal and H is a hypersurface of maximal contact.

Now the main example of how this looks. Assume I is generated by a single equation, a hypersurface. In such a case we can always choose coordinates t = t_1, t_2, ..., t_n so that the element looks like t^d + a_2 t^{d-2} + ... + a_d, where each a_i depends only on t_2, ..., t_n, at least formally locally. Then H is very simple: it's just the vanishing locus of t. And C(I) is also very simple: it's the ideal generated by the coefficients (hence the name coefficient ideal), but with correct weights. We want a_2 to have weight 2 and a_d to have weight d; we take integral weights which put them all in the same graded degree. Remarks: why such a definition, why the coefficient ideal? The reason is as follows. If I tried to take just the restriction of I to H, I would just keep a_d restricted to H; this loses a lot of information, and there is no way it would be equivalent to my original problem. I want to restrict all the coefficients to H; but once I kill t, I must somehow remember which degree each coefficient came from, and it's clear that the weights should be the ones I wrote. So it's just a way to keep all the information about the equation on H. And a_1 = 0: this is the place where we really use the characteristic zero assumption; otherwise it is not possible to kill the coefficient of t^{d-1}, and it will be clear in a moment why this is so important. Okay, good.

This example completely illustrates the main mechanism, but it involves choices, a lot of choices: I just chose some coordinates. So the question is whether it's possible to do this without choices. Yes, by the use of derivations. The main tool for a choice-free description is the derivation ideal of I: D(I) is generated by I and by all derivations of its elements, and the iterated derivation ideal is denoted D^n(I). Note that taking derivations decreases the order of an ideal exactly by one; there is at least one partial derivative which decreases the order, which is obvious. Because of this, derivations provide a conceptual way to define all the basic ingredients. The order is just the minimal d such that the d-th derivation ideal is the trivial one, of order zero. Maximal contact: if I derive my ideal d - 1 times, its order becomes one, so it contains an element of order one; an element of order one defines a smooth hypersurface, and any such smooth hypersurface is a maximal contact hypersurface. In the example, where we have no a_1, deriving d - 1 times in t kills all the other terms and leaves t itself, so t gives a maximal contact. Maximal contact, too, is defined using this derivation ideal; let me display the dictionary.
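The dictionary, displayed; these are the formulas as I know them from standard presentations of the algorithm, so the precise exponents should be checked against the slides:

\[
\operatorname{ord}(I) \;=\; \min\{\, d \;:\; \mathcal{D}^{d}(I) = (1) \,\},
\qquad
H = V(t) \ \text{for some} \ t \in \mathcal{D}^{\,d-1}(I) \ \text{of order } 1,
\]

\[
\mathcal{C}(I) \;=\; \sum_{i=0}^{d-1} \bigl(\mathcal{D}^{\,i} I \bigr)^{\frac{d!}{d-i}},
\]

so every contribution sits in the same degree d!, matching the weights d!/i given to the coefficients a_i in the hypersurface example.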
And the coefficient ideal, again, is just a weighted sum of derivation ideals, essentially as in the display above. A remark: the only serious difficulty in proving independence of choices is now the independence of the choice of t. There might be several maximal contacts, and one must prove independence. It's the headache of the algorithm, its most subtle point; I won't discuss it in this talk, but there is something to do there. Up to the choice of maximal contact, I have more or less described all the ingredients. Okay, good.

Now, the complications of the classical algorithm. It has two complications, and both are related to the use of usual derivations instead of logarithmic ones. The module of logarithmic derivations is spanned by the logarithmic derivations m_j d/dm_j and by d/dt_i for the regular coordinates t_i. These are precisely the derivations which preserve the exceptional divisor: they take its ideal into itself. For almost all needs it is easier, more conceptual, better for computations, whatever you want, to work with logarithmic derivations, once we want to keep E in the picture. But we cannot compute the order using logarithmic derivations; this is the problem, and we must use all derivations. See the toy computation below.

Because of this, Hironaka's approach runs into the following two complications. First, the approach tells us how to choose H: the maximal contact is chosen using the derivation ideal, and the derivation ideal has no idea what the exceptional divisor is; no relation at all. Because of this, it might happen that E is not transversal to H. In such a case I cannot restrict E to H and get something SNC; I can restrict as a log scheme, but it won't be log smooth. So we have no control over transversality to E: the algorithm we run on H would produce centers not transversal to E, and that would destroy the whole inductive scheme. How does one resolve this? It turns out that once we start blowing up inside H, all new boundary components will be transversal to H; the problem is only with the old boundary. So the solution is to work with the stratification of H by the old boundary, by the number of old boundary components: we define a second primary invariant, s_old, the number of old components of the boundary through the point, and work first where s_old is maximal, then at the next value, and so on. I won't go into details, because in our algorithm we get rid of all this mess, but in the classical algorithm it's there: the headache of the usual algorithm. And this is the reason why the primary invariant is not just d but the pair (order, number of components): at this stage E is our enemy, and we must somehow bypass the complication.

The second complication is that it can happen that the order of I is at least d while the order of the pure part is smaller than d, because monomial coordinates contribute to the order. In such a case we cannot proceed by looking only at the pure part; we cannot just say "take the pure part and reduce it", because it's already reduced below d. We then have to take into account the order of I_mon and work with the stratification by the locus where the order of the monomial part is large enough; again we have to stratify the picture and run something different. There is a solution, outlined here on the slide; I won't discuss it, because again it's not essential for our new algorithm. Let me only mention that even when I_pure is trivial, still, for inductive reasons, one has to get rid of the monomial part.
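And here is the toy computation showing why usual derivations compute the order while logarithmic ones cannot; the example is mine, not from the slides. For the monomial yz,

\[
\partial_y(yz) = z, \qquad \partial_z(z) = 1
\;\Longrightarrow\; \mathcal{D}^{2}\bigl((yz)\bigr) = (1), \quad \operatorname{ord}(yz) = 2,
\]

while logarithmic derivations only reproduce it:

\[
y\,\partial_y(yz) = yz, \qquad z\,\partial_z(yz) = yz
\;\Longrightarrow\; \mathcal{D}_{\log}^{\,n}\bigl((yz)\bigr) = (yz) \ \text{for all } n.
\]

So with respect to logarithmic derivations a monomial behaves like zero, and its log order is infinite; we will meet this again in a few slides, where it turns from a bug into a feature.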
Getting rid of the monomial part is done by a purely combinatorial step, but again, something has to be done. This combinatorial step has an analog, a much simpler one, in our new algorithm. Okay, good. So we are done with the classical algorithm, and we have about 10-15 minutes to discuss the logarithmic twist and the logarithmic algorithm.

So, what is the boundary? Before we go further, let's really understand what the boundary is, because so far I only hinted that in Hironaka's algorithm there are logarithmic ingredients: sometimes they help, sometimes they work against us, but they are there. Let's think about the boundary. Typically, and this was my own thinking before I started, even though I was familiar with logarithmic geometry, one views it as a divisor. I now think it is wrong to view the boundary as a divisor. Unlike the embedded scheme X, you should not think of E as a subscheme, if only because there is no map of pairs (M', E') -> (M, E): when you blow up, the boundary grows beyond the preimage, so E' does not map to E. It may even happen that E is empty while the new boundary is not. So it's not a map of pairs of schemes: already by functoriality, E is not a subscheme, and it's not good to view it as one. But if you view (M, E) as a log scheme, everything makes perfect sense: it is just the log scheme with the log structure associated to the SNC divisor E; moreover, an excellent one, a log smooth log scheme. And the sheaf of monomials, the functions invertible outside of E: this log structure is precisely what we need from E. In Hironaka's algorithm we factor the ideal into monomial and non-monomial parts, and to factor out the monomial part we use exactly this sheaf of monomials. So, in a sense, Hironaka invented the notion of a log scheme in this very particular case.

Okay; logarithmic parameters. We will work with log smooth log varieties; for shortness I'll say toroidal varieties, and classically, toroidal varieties are the same as log smooth log varieties. Locally they are of the form Spec k[M][t_1, ..., t_l], where t_1, ..., t_l are regular parameters and M is a sharp FS monoid; I collect this in a display at the end of this paragraph. We view the t_i as the regular coordinates, and all elements of M as monomial coordinates at the origin of T. Note that now we do not have good monomials and bad monomials: this M can be complicated. As for logarithmic derivations: the module of log differentials of (T, M) is freely generated by the differentials dt_1, ..., dt_l and by the dm_j/m_j, where the m_j can be any basis of M^gp. I don't care whether they form a basis of M or not; M does not have to be free, and any basis of M^gp is good for me. Please pay attention: I am in characteristic zero, and this is why I can take any basis; even a basis of M^gp tensored with Q would do. I prefer to state this fact as a principle of monomial democracy; we will come back to it. From now on M does not have to be free, there is no canonical basis of M^gp, and all monomials for us are equal: just as all FS monoids are equal, all monomials inside such a monoid are equal. A remark: the most interesting feature of the new algorithm is functoriality with respect to Kummer log étale covers. I said that it's compatible with any log smooth morphism, but the Kummer log étale case is probably the most surprising and the most interesting one.
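Here is the promised display of the local model and its logarithmic differentials, condensed from what was on the slide:

\[
T \;=\; \operatorname{Spec} k[M][t_1, \dots, t_l], \qquad M \ \text{a sharp FS monoid},
\]

\[
\Omega^{\log}_{T} \;=\; \bigoplus_{i=1}^{l} \mathcal{O}_T \, dt_i \;\oplus\; \bigoplus_{j=1}^{r} \mathcal{O}_T \, \frac{dm_j}{m_j},
\]

where m_1, ..., m_r is any basis of M^gp (or even of M^gp tensored with Q; this is where characteristic zero enters). Dually, the module of log derivations is spanned by the d/dt_i and the m_j d/dm_j.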
Why is functoriality for Kummer log étale covers surprising? Because in the usual situation they look like ramified covers; they are not smooth, so why would you expect any compatibility? For example, extracting roots of monomial coordinates: our resolution is compatible with such an operation, while Hironaka's obviously is not. Or, in the case of semistable reduction, we can extract roots of a uniformizer of the base, or consider a ramified ground field extension, and still this is compatible with our algorithm. It is out of reach, and also unnatural, for the classical algorithm, but very natural for the logarithmic one.

Now the main results about the logarithmic algorithm, ignoring the orbifold aspect, which I hinted at in the beginning and will discuss a bit on the last slide. Log principalization says that, given a toroidal variety T and an ideal I on T, we can find a sequence of admissible blowups of toroidal varieties (I'll tell you later what admissibility means this time) T_n -> ... -> T, such that the pullback of I to T_n is monomial and invertible. So it's a direct generalization of principalization to the logarithmic setting, and the sequence is compatible with log smooth morphisms; again, log smooth functoriality is essential. And, as in the classical situation, this implies log resolution: given any integral logarithmic variety X, there exists a modification X_res -> X such that X_res is log smooth. This is functorial, again in the strong sense; the main novelty is the strength of the functoriality. And, as I mentioned, both principalization and log resolution also work in the relative situation, for morphisms. Good. Now about the method; and please pay attention, we have about seven minutes and just four slides, but after the work we have done it will be very simple. In brief, we want to "log" all parts of the classical algorithm: we put "log" in every place we can.

Q: I was confused about the ideal in log principalization; maybe I didn't quite understand what the toroidal varieties are. A: Normal; and the pullback should also be invertible, I forgot to say invertible. Q: But the toroidal variety has a log structure on it. A: Yes. Q: And the ideal is any coherent ideal, or is it related to the log structure? A: No, any coherent ideal. Q: But then if you want to... okay, you haven't explained all that yet. A: I have not; you'll see. I can increase log structures, as you can imagine.

In brief, we want to "log" all the parts. So how do we do it? The log order of I is the minimal d such that the d-th logarithmic derivation ideal of I is trivial; we just replaced D by D_log. Maximal contact is any hypersurface given by the vanishing of a regular coordinate t, that is, a coordinate whose log order is one: in D_log^{d-1}(I) there are elements of log order one; take any of them, and it defines a maximal contact. Such a maximal contact is automatically toroidal: if I took the vanishing locus of a monomial coordinate I would not get something toroidal, but the vanishing locus of such a regular element always is. The coefficient ideal is again a weighted sum of logarithmic derivation ideals. The only really new thing is what it means for a blowup to be (I, d)-admissible. This time we allow blowing up any J such that, first, I is contained in the d-th power of J. This is the admissibility: if I is contained in J^d, then the pullback of I can be divided by the d-th power of the pullback of J, that is, I can remove d copies of the exceptional divisor. So this condition is just there to allow removing d copies.
And second, J is generated by a few regular coordinates and a few monomials; and I don't care which monomials. It's democracy: you can take any set of monomials, so any monomial ideal can be blown up. Obviously this destroys smoothness, but it preserves log smoothness; in the log smooth context I am allowed to do such a thing, and I have more possibilities for blowups. In fact, I blow up what we call submonomial ideals: monomial ideals on a logarithmic submanifold given by the vanishing of some t_1, ..., t_n. After blowing up such a center, I add its exceptional divisor to the monomial structure; the monomial structure grows, as in the classical algorithm. Good.

Now, infinite log order. A strange new thing which happens is that the log order of each t_i is one, but the log order of monomials is infinite by this definition: when I apply log derivations to a monomial, I get back multiples of the same monomial; monomials are eigenfunctions of the log derivations. So monomials behave like zero as far as log order is concerned. This is the main novelty, and it is exactly the novelty which allows functoriality with respect to extracting roots of monomials, Kummer covers. On a Kummer cover my monomial, say m, becomes the square of something else, but its order must stay the same if my algorithm is to be compatible with Kummer covers: all invariants must be compatible, and the only way to achieve this is to say that its order is infinite. Derivations are not able to treat monomials, and you should give up and not insist, as Hironaka's approach does. As a price, we have to do something special when the log order of I is infinite. But this something special is very simple; in fact, it was discovered by Kollár a few years before our work. It just says that you should consider the ideal I_mon, the minimal monomial ideal containing I. For example, if I is generated by an element that is a sum of terms m_i t^i, we take the ideal generated by the monomial coefficients m_i, blow it up, and divide by its pullback. What do you get? You kill one of these coefficients, so on the pullback the order becomes finite. It's a very simple, completely combinatorial, monomial blowup which makes the order finite; after that you proceed as usual: take a maximal contact and run the induction on dimension.

Our algorithm is much cleaner: it avoids both complications I mentioned. The maximal contact is always given by a regular coordinate, so it is always transversal to the monomial structure: automatically toroidal, on the nose. In a sense, we completely separate dealing with regular coordinates, which is governed by the order, from dealing with monomial coordinates, which is done by combinatorics, by toroidal, monomial blowups. The invariant is now also much simpler: it's just a tuple of orders (d_1, ..., d_n), each d_i a natural number, and the last one may also be infinity. Okay. If it's all so elementary, where is the cheating? There is one, and it is this: a drawback of monomial democracy is that the algorithm has no idea when one monomial is a power of another, and sometimes, because of the weights, it insists on blowing up a fractional power of a monomial. We call it a Kummer monomial: it is a monomial on a Kummer cover, but not on T itself; a monomial in the Kummer log étale topology. A sketch follows below. How can we blow up such a thing? Well, we can try to work Kummer log étale locally: pass to the Galois cover where the root exists, blow up there, and then divide by the Galois group.
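A minimal sketch of a Kummer monomial, in a hypothetical rank-one example of my own: suppose the weights ask for the square root of a monomial m. On T = Spec k[m][t], with the log structure along \{m = 0\}, the element m^{1/2} does not exist, but it does on the Kummer cover

\[
\widetilde{T} \;=\; \operatorname{Spec} k[m^{1/2}][t] \;\longrightarrow\; T,
\]

which is Galois with group \mu_2 acting by m^{1/2} \mapsto -m^{1/2}. The center (t, m^{1/2}) is an honest ideal on \widetilde{T}, that is, an ideal in the Kummer log étale topology of T, and the plan is to blow it up upstairs and divide the result by \mu_2.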
Passing to the Galois cover is an excellent idea, and we did not expect complications here, because of log functoriality; but it turns out that the action after the blowup becomes non-toroidal, so when we divide back, we get something which is not log smooth. Because of this, we must divide back as a stack: a so-called non-representable modification, which we call a Kummer blowup, the blowup of a Kummer ideal, an ideal in the Kummer topology. Such an ideal can be made invertible, but only by a non-representable Kummer blowup. And this is okay for applications, because afterwards we can remove the stacky structure by a torification algorithm, the same kind of algorithm as used in Gabber's approach and by Abramovich-de Jong and others: by torification, or just destackification, we can actually remove the stacky structure. But this last step, the torification, is compatible only with smooth morphisms. So in order to be log functorial we must also work with stacks, with non-representable modifications: the stage which is log smooth functorial works, in fact, only in the world of stacks. So we must enlarge our context first to log smooth objects and then also to stacks. On the last slide there is an example showing the difference between the classical and the non-classical situation, and where a non-representable Kummer blowup is needed; I'll not stop on it because I'm out of time.

A last remark: this new type of blowup we discovered here, blowing up t_1, ..., t_n together with a monomial taken with weight d, can be done more generally. Once we discovered weighted blowups in the stack-theoretic context, we asked what can be done for the classical algorithm. It turned out that the usual weighted blowup of coordinates t_1, ..., t_r on affine space, with weights d_1, ..., d_r, is in fact the coarse space of a non-representable modification which is smooth. And if one works with weighted blowups, takes just the usual centers predicted by Hironaka, the maximal contact centers, with their weights, but performs the correct stack-theoretic blowup, one gets the dream algorithm I talked about. So it was also always hidden in Hironaka's approach; people just did not know the correct tools to work with. One has to work with the correct weights, one has to work with stacks, and then it's possible to get the simplest algorithm one can imagine. Thank you for your attention.

[Applause.] Thank you. Q (from the Q&A, from Darko): Which paper is cited as [T17], if any? A: '17 is the paper in JEMS: "Principalization of ideals on toroidal orbifolds". Not just [T17]; Włodarczyk is everywhere in all these works, so, okay, unintentionally: it was [ATW17]. Are there other questions? Q: I wanted to ask some questions. A: Sorry, just a minute, let me see; this question was asked already, at 10:35 a.m. Okay, so please, ask your questions. Q: Just concerning some comments in your talk related to my work: you mentioned torification, which you said is used in my work. As far as I remember, you mentioned it in 2012 as a possible simplification. What I did is use canonical desingularization at some point, and then you suggested to use torification. It actually works, but it was not done in the book. I don't know if this is what you meant about torification in relation to my work.
A: Yeah, I meant that there are a few algorithms for torification. Your algorithm indeed used resolution; the initial algorithm of Abramovich-de Jong used another trick. But in all the arguments I know, you must go to the log smooth setting: you cannot do it only with smooth schemes and SNC divisors. It would have been possible also in your approach to use the torification of Abramovich, but okay, it wasn't done. And Bergh, and now also Bergh with Rydh, work on the so-called destackification, a generalization of this to stacks; their algorithms are similar. But all of them must somehow work with log smooth objects and not just smooth ones.

Q: Okay, now concerning the classical desingularization. You have Villamayor, then Bierstone-Milman, and I think there was another paper by Encinas and Villamayor. I think I read some time ago, in the Math Reviews of some of these, that the algorithms are not exactly the same; sometimes the steps differ. A: It's not exactly the same, let's say so. In a talk I allow myself to sweep unimportant things under the carpet, just to save time and to make it simpler for the listeners. But you are right. You can do the combinatorics in a stupid way, you can do it more or less efficiently, and people have played with this a bit; there are obviously some choices in the combinatorics. Moreover, the difference between the various algorithms is like compiled programs: there are more effective compilers and less effective ones. A less effective one tells the processor to stop and wait until it is sure it can do the next operation. The same happens with these algorithms: in some versions, when they are not sure they can proceed, they do many more combinatorial steps than needed; they blow up a divisor a few times, say, an idle operation. So there are some nuances, but the main engine of the algorithm is the same; the choice of maximal contact and so on is completely the same. Q: Which is due to Hironaka. A: Yes, this is in Hironaka; but in Hironaka it was implicit, and Hironaka himself worked for many years to make it simpler, and it took a lot of time. Q: There is this paper on idealistic exponents, introduced in 1977. It introduces certain things; it doesn't give the algorithm, but one can actually extract from it much of what you described. A: Yes, this is closely related. Idealistic exponents are marked ideals: the idea to consider marked ideals and not just ideals; the idealistic exponent is precisely this mark. Q: Yeah, and the reduction step you mentioned is there. He also wrote a paper later, in the early 2000s, about this.

Q: I'd like to come back to your result about making a morphism good by modifying the base. You have an f, and then f' is obtained from f by some modification of the base, and it is good. A: Is it this slide? Q: I'm not so sure. There was no log involved at the beginning, maybe, and then... no log. A: Was it in altered resolution or in classical? Q: In altered resolution, maybe. I don't remember which assumptions you have on your X over Y, or maybe none. A: No, it's not so technical.
Q: So my question is: you have an f: X -> Y which is not good, and you make a modification Y' -> Y such that the pullback somehow acquires log goodness; or maybe you also have to modify the source a little, I don't remember. A: Look, is it this theorem? Q: Maybe, yes. Yes. So it's a modification, or alteration, of both schemes. But what do you start from: log schemes, or is there some assumption on the underlying scheme, over a field or not? A: Well, I assume that X is of finite type over a quasi-excellent surface. Q: Over a quasi-excellent surface; I see. So I was wondering whether you could use this sort of result, or a related one, to prove, for example, Fabrice's theorem, Orgogozo's theorem, about the nearby cycles RPsi of f becoming good after a modification of the base. I wonder whether this sort of result could apply. Of course, in the theorem of Fabrice you have a sheaf, but maybe already a constant sheaf is difficult. You have this f: X -> Y, and RPsi for X over Y is not good, but by modifying Y you make it become good; and if the morphism becomes good after the modification, then RPsi will of course become good. So I was wondering. A: Okay, I am not prepared to answer; I don't remember whether I heard the talk or was told what he thought about this. Q: No, no. Anyway, there is an approach: in Orgogozo's paper he uses relative dimension one, so it is closely related to what you're doing, with the fibration in curves and things like that. What I'm saying is that instead of that complicated induction, with fibrations and nodal curves, one could perhaps prove the same thing by reducing to the case where you have FS log structures on both X and Y and a log smooth, maybe saturated, morphism; the sheaf being constructible relative to some stratification compatible with the log structure, with a suitable tameness condition on the sheaf; and then prove that for this class of sheaves RPsi commutes with base change, uniformly, the content of the uniformity being the constructibility of RPsi, again compatibly with the stratification. So it might be possible to set it up this way; but of course one has to develop a lot of log machinery to state it, and then actually compute RPsi in certain situations, and sometimes it is very useful to use the already existing results. So I think, in theory, it should be possible to do it just by improving the morphism; that would be rather useful to me. A: Are you saying that Michael's result might have an issue? Q: No, no; all I'm saying is that these ideas don't use as much as this. It just uses de Jong's approach, I mean, to alter, but without all of the rest. Here you want control: I would be satisfied with just a log smooth saturated morphism, while here he wants more: he also wants control on the degree, and he wants log... well, I don't know, he has more things that he wants. Let's stop here, because it becomes too technical. A: Yeah. I also wondered about this theorem, which is why I asked the question about whether X' and Y' are integral.
Q: Now, if you have this, then when you étale-localize, of course, being irreducible is not preserved; so it's a bit strange. A: I see what you're saying; if the statement is not étale-local, then okay, maybe you are right, maybe one has to consider components. You are right. Okay, so I think that's all. Yes, I think we should