Thank you very much, Hugo, for the introduction. First of all, I would like to thank the organizers for this great opportunity to give a talk at this workshop and school. The topic stands somewhat apart from the general line of mixing and control: it is about controllability and stabilizability. As was already mentioned several times, controllability is an important property in control theory, used in proving many fundamental results during this workshop and summer school. I would therefore like to address another important application of the controllability property, namely, how controllability can help in studying the long-time behavior of nonlinear control systems. By long-time behavior I mean stability or asymptotic stability in the sense of Lyapunov. The structure of my talk comes from the observation that many nonlinear systems are not stabilizable in any regular sense, so Lyapunov asymptotic stability cannot be achieved by classical feedback, and some kind of fast oscillating controls naturally appear in this situation. Here you see the outline of my presentation. I will talk about at least three interesting classes of nonlinear systems which pose a challenge in the stabilization problem. First, let us consider a very simple example coming from nonholonomic mechanics, an example of unicycle type. As a naive illustration of what is going on, assume that we have a wheel rolling on the plane, say with coordinates x1 and x2, subject to a rolling-without-slipping condition.
If we write the kinematic equations of this nonholonomic system, denoting by x1, x2 the projection of the center of the wheel onto the (x1, x2)-plane and by theta the angle between the plane of the wheel and the x1-direction, the rolling-without-slipping condition says that the velocity vector (x1', x2') must be parallel to the direction given by the angle theta. We may write this constraint as a Pfaffian equation, x1' sin(theta) - x2' cos(theta) = 0, and we immediately observe that the constraint is nonholonomic in the sense that it is not integrable: there is no first integral of this Pfaffian equation. To make this clearer, we may introduce functions u1, u2, treat them as controls, and write the first equation on this slide: x1' = u1 cos(theta), x2' = u1 sin(theta), theta' = u2. This is the simplest control system to be studied in this talk, but we would also like to see some general classes behind this example. So it is a system on R^3 with two controls: underactuated, in the sense that the number of controls is less than the number of states, but good in the sense that it is controllable; I will give precise facts about that later on. For us this is a motivation for studying the stabilization problem. Of course, there are many other interesting examples. Let me mention another finite-dimensional one, coming from the Euler equations of rigid-body dynamics: a rotating rigid body described by the coordinates of its angular velocity. Sorry? Yes, yes, sorry, I was going fast because I intended to denote theta by x3.
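As a concrete reference for the later slides, the unicycle kinematics just described can be written down directly. Here is a minimal sketch in Python; the function names and the state ordering (x1, x2, theta) are my own conventions, not from the slides:

```python
import numpy as np

def unicycle_rhs(x, u):
    """Right-hand side of the unicycle system on R^3:
    x1' = u1*cos(theta), x2' = u1*sin(theta), theta' = u2,
    with state x = (x1, x2, theta) and controls u = (u1, u2)."""
    x1, x2, theta = x
    u1, u2 = u
    return np.array([u1 * np.cos(theta), u1 * np.sin(theta), u2])

def constraint_residual(x, u):
    """The Pfaffian constraint x1'*sin(theta) - x2'*cos(theta) = 0
    holds identically along the control vector fields above."""
    dx = unicycle_rhs(x, u)
    return dx[0] * np.sin(x[2]) - dx[1] * np.cos(x[2])
```

Whatever controls we substitute, the residual of the nonholonomic constraint vanishes identically, which is exactly the rolling-without-slipping condition.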
Thank you very much. Yes, it is essentially a system of three ordinary differential equations with quadratic right-hand side plus control actions lying in some two-dimensional space: another example of an underactuated system with three states and two controls. The terms in red indicate that if you linearize at the origin, these terms vanish, so you cannot conclude anything about controllability of this system from the linearization. You may also think of a more general class of systems strongly related to this course: if you write the Euler or Navier-Stokes equations for an incompressible fluid, say on a two-dimensional torus with periodic boundary conditions for simplicity, then this equation also has a quadratic right-hand side, and again the linearization gives no information about controllability. We may also consider the case of zero viscosity, so the Euler equation of fluid dynamics, as an infinite-dimensional version of the finite-dimensional Euler equation of rigid-body dynamics. I would like to consider the problem of making such systems asymptotically stable in the sense of Lyapunov. More precisely, suppose we are in the context of finite-dimensional control theory, with a control-affine system with drift, denoted by Sigma. We assume that zero is an element of the domain D and that the drift term vanishes at zero, so that the origin is always an equilibrium of the system with zero controls. We also assume the system is underactuated, in the sense that the number of controls is strictly less than the number of states. A fundamental problem to be studied for system Sigma is the stabilization problem shown here.
The problem is as follows. Our goal is to find a continuous map k from D to R^m, or more precisely to U, if we denote the set of control values by U; in our case it is just R^m. We assume this function is continuous on D and vanishes at zero, such that if you substitute the control u = k(x) into the right-hand side of the system, the closed-loop system x' = f(x, k(x)) has an asymptotically stable zero solution in the sense of Lyapunov. That is our goal. If we are able to find such a function, we may denote the right-hand side of the corresponding autonomous differential equation by F(x). What are the fundamental challenges here? Of course, there are many publications in this area, and it is hardly possible to mention all significant contributions; some of them are summarized here. An important fact is that there is a necessary and sufficient characterization of the stabilizability property, but this is definitely not enough for the problems I mentioned previously. Let me give some hints, for people from different backgrounds, about what is going on. I will assume that everybody knows what stability and asymptotic stability in the sense of Lyapunov mean for the closed-loop system; let us refer to this closed-loop system as (*). Another ingredient: in order to characterize this property we may use Lyapunov functions, and in control theory there is the well-known notion of a control Lyapunov function. I would like to introduce this notion in order to claim that control Lyapunov functions do not exist for the examples that I consider.
So we have considered another kind of approach to the stabilization problem. A function V of class C^1 is a control Lyapunov function for system Sigma, our control-affine system, if two properties are satisfied. The first property is that V is positive definite in the standard sense: V(0) = 0 and V(x) > 0 whenever x is in D \ {0}. The second is that if we consider the expression corresponding to the time derivative of V along the control system, with the control still free, then the infimum over the whole set of control values should be negative definite: inf over u in U of the derivative of V along f(x, u) is negative for x not equal to zero. In principle you may consider any control set U, but in the simplest situation you may take R^m. So a control Lyapunov function is a function satisfying these two properties, and a fundamental result in this area is that the stabilization problem formulated on the previous slide is solvable if and only if there exists a control Lyapunov function. I would like to show you that in many cases, like the ones considered as motivation for this talk, these conditions are not satisfied. To be precise, let me formulate Artstein's theorem, which is essentially an equivalence: the system is stabilizable by a continuous feedback law if and only if there exists a control Lyapunov function. In fact, the control Lyapunov function may be taken infinitely differentiable even if the original system has vector fields only of class C^1, and this is valid whenever the vector fields of our system are of class C^1 in D. As I mentioned, the existence of a C-infinity smooth Lyapunov function is always guaranteed because of the converse Lyapunov theorem due to Jaroslav Kurzweil.
Essentially this implication comes from Kurzweil's famous converse Lyapunov theorem, which provides smooth Lyapunov functions for systems with C^1 vector fields. Why am I using this observation? Because there is another description of stabilizable systems which does not use control Lyapunov functions, but instead the concept of topological degree, the rotation of vector fields. For this purpose let me recall another famous result, due to Krasnosel'skii and Zabreiko. It goes as follows. Suppose that for system (*) we have asymptotic stability of the trivial equilibrium in the sense of Lyapunov. Since stability is asymptotic, zero is clearly an isolated zero of the function F. The original right-hand side is supposed to be at least of class C^1 and k(x) is continuous, so F is continuous, which means the continuous map F has a well-defined rotation on each small sphere around the origin. To be clear, the epsilon-sphere is defined as usual as the set of all x, centered at zero, such that |x| = epsilon, and the rotation is an integer defined by F on S_epsilon: a well-defined object which is homotopy invariant. So whenever F is continuous and non-zero on an epsilon-sphere, we may define an integer called the rotation of F on the epsilon-sphere. The Krasnosel'skii-Zabreiko theorem, given in their famous book on geometric methods of nonlinear analysis, says that for asymptotically stable systems of the form (*), this number equals (-1)^n, where n is the dimension of the state space. Of course you may ask how to compute this characteristic of asymptotically stable systems, but there are many good formulas in this area.
For instance, for a planar system with two variables x1 and x2, the rotation has a clear geometric meaning: with the usual orientation of the epsilon-sphere, we watch how the vector F(x) = (F1(x1, x2), F2(x1, x2)) turns, say through an angle phi, as the argument traverses the epsilon-sphere once. The rotation is just the number of full turns that the vector F makes. This is well known, but I am recalling these facts. In this case the rotation of F on S_epsilon is the integral (1/2 pi) times the integral over S_epsilon of d phi, where phi is the argument of F. By the way, we may also complexify the situation to get a nicer computation: define z = x1 + i x2 and F = F1 + i F2. Then, for holomorphic F, this formula can be viewed as a logarithmic residue: (1/2 pi i) times the integral over |z| = epsilon of F'(z)/F(z) dz. A consequence of this fact is that if F(z) is a homogeneous function, say of degree two, three, et cetera, so a genuinely nonlinear system, then the Krasnosel'skii-Zabreiko necessary condition for asymptotic stability fails. That is a motivation saying that it is not possible to expect Lyapunov asymptotic stability from systems of this form: the equilibrium is not asymptotically stable for systems of the form z' = alpha z^q whenever q is not one, for example for systems with quadratic right-hand side or for small perturbations of such systems; in many cases we may not achieve asymptotic stability in the sense of Lyapunov. By the way, this is not yet the main difficulty. Here you see a picture where the rotation equals one, which is consistent with asymptotic stability.
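The winding-number formula can be checked numerically. Here is a small sketch (function name and discretization are mine) that approximates the rotation of a planar vector field by accumulating the argument of F along the epsilon-circle:

```python
import numpy as np

def rotation_number(F, eps=1.0, n=2000):
    """Approximate the rotation (winding number) of a planar vector
    field F on the circle |x| = eps: the total increment of arg F(x)
    along one positively oriented loop, divided by 2*pi."""
    ts = np.linspace(0.0, 2.0 * np.pi, n + 1)     # closed parameter loop
    pts = eps * np.column_stack([np.cos(ts), np.sin(ts)])
    vals = np.array([F(p) for p in pts])
    ang = np.unwrap(np.arctan2(vals[:, 1], vals[:, 0]))
    return int(round((ang[-1] - ang[0]) / (2.0 * np.pi)))

# F(x) = -x: rotation +1 = (-1)^2, consistent with asymptotic stability.
# F(z) = z^2, i.e. (x1^2 - x2^2, 2*x1*x2): rotation 2, so the
# Krasnosel'skii-Zabreiko condition rules out asymptotic stability.
```

For the linear stable field the count is +1 = (-1)^2, while the quadratic field z^2 winds twice, which is exactly the obstruction discussed above.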
In some cases the rotation is two, which is incompatible with asymptotic stability, and in some drift-dominated cases we do not have asymptotic stability either. We may also deduce a famous necessary condition for stabilizability, Brockett's necessary condition: if the rotation of the vector field is non-zero, and the right-hand side is continuous, then the right-hand side always has a zero inside the ball. So as a simple application of this theory we arrive at the well-known Brockett necessary stabilizability condition: if the original system Sigma is stabilizable, then the map corresponding to the right-hand side is locally onto, that is, its image covers a neighborhood of the origin. This is a very important observation, and I would like to convince you that it is not satisfied for this system and many other systems. Let me repeat: we have our system of ordinary differential equations with controls and with drift, x in D, u in U, which by our choice is just R^m, with f0(0) = 0, and the system is underactuated in our case. The right-hand side of this system is denoted by f(x, u). If system Sigma is stabilizable, so if the stabilization problem is solvable, then the map f defined here, from D x U to R^n, is locally onto. But you can easily see that this is not the case for our example, so there is no hope of finding a continuous control depending on the state only such that the zero solution of the closed-loop system is asymptotically stable. This is straightforward, but I would like to repeat the very simple argument. Since we are looking at purely local properties, like stabilization of the zero solution, we may restrict ourselves to a domain D which is just some neighborhood of zero, or some domain containing zero.
Suppose we take D to be the domain in R^3 where the angle theta is bounded, say |theta| < pi/3. Then cos(theta) is clearly greater than one half in D. Denote the right-hand side of the control system by f(x, u). If this map f(x, u) were onto, then for any vector y = (y1, y2, y3) in R^3 it would be possible to solve the equation f(x, u) = y. But you see that this is not the case. Why? Because cos(theta) is bounded away from zero in this domain. If the equation were solvable for every y, take y1 = 0: the first equation, u1 cos(theta) = 0, then forces u1 = 0, and the second equation, u1 sin(theta) = y2, has no solution for y2 nonzero. So the map f(x, u) is not onto, not locally onto, and therefore the unicycle-type system, this simple example of a nonholonomic system, is not stabilizable by a state feedback law, by a control depending on the state only. Of course, there are many more interesting examples; I am giving just a few where a more delicate structure appears. There is the famous Brockett example, probably the first well-known example in this area, which is also a canonical system in R^3 with two controls. Here you also see that the map corresponding to f(x, u) is not locally onto: the corresponding equation is not solvable in this particular situation, and therefore there is no hope of achieving stabilizability of this system. I already mentioned the unicycle example, and I would like to convince you that this is not merely an academic situation.
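The non-solvability argument can also be probed numerically. This brute-force sketch (the grid bounds and helper names are my own, ad hoc choices) shows the residual |f(x, u) - y| staying bounded away from zero for the unattainable target y = (0, 0.1, 0):

```python
import numpy as np

def f(x, u):
    """Right-hand side of the unicycle system, state x = (x1, x2, theta)."""
    return np.array([u[0] * np.cos(x[2]), u[0] * np.sin(x[2]), u[1]])

def min_residual(y, n=81):
    """Brute-force lower estimate of inf |f(x, u) - y| over the domain
    |theta| <= pi/3 with bounded controls (grid bounds are ad hoc)."""
    best = np.inf
    for th in np.linspace(-np.pi / 3, np.pi / 3, n):
        for u1 in np.linspace(-1.0, 1.0, n):
            for u2 in (-1.0, -0.5, 0.0, 0.5, 1.0):
                r = np.linalg.norm(f((0.0, 0.0, th), (u1, u2)) - y)
                best = min(best, r)
    return best

# For y = (0, 0.1, 0) the residual never drops below 0.1*cos(pi/3) = 0.05:
# u1*cos(theta) = 0 forces u1 = 0, and then u1*sin(theta) = 0.1 is impossible.
```

The positive gap of size 0.1 cos(pi/3) is exactly the quantitative version of the argument on the slide: the image of f misses a neighborhood of the origin, so Brockett's condition fails.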
It also happens very often in real-life applications. For instance, for the Euler equations coupled with orientation equations, the equations of motion of a rotating satellite, it is also known that the right-hand-side map is not locally onto. Therefore we should solve this problem in some other way, and we should solve it by using the usual assumption that the system is controllable. The concept and notion of controllability were already given in the lecture by Professor Agrachev: a system is controllable if for any states x0, x1 there exists an open-loop control, a control depending on t as a measurable function, such that the corresponding trajectory steers the system from x0 to x1. And we would be interested in the problem of looking for a control depending on the state which gives asymptotic stability of the zero solution in the sense of Lyapunov. I already mentioned that for our example this is not possible. As a brief survey, and this was the first part of my talk: if the original system were linear or linearizable, everything would indeed work, and from controllability we could derive stabilizability, even exponential. But in the nonlinear case, as follows from simple examples like the unicycle, and from many other delicate examples like the Euler equations, this is not so. Even with a controllability condition in terms of Lie brackets, and even if the system is driftless, we are not able to achieve this kind of stabilizability. Therefore, as we are interested in stabilizing the system, we should change the approach, and we change it by using time-varying controls with fast oscillating functions.
As a precondition for this construction, I cite a general result by Jean-Michel Coron, just to convince you that we are on the right track, in a direction that requires the construction of time-varying controls. Despite the negative results that I showed previously, there are at least two general positive results. The first result is due to Coron, stating that if we have a general control system which is small-time locally controllable at zero, together with some regularity assumptions, namely that the right-hand side is smooth and the Lie algebra rank condition is satisfied at zero, then there exists a smooth time-varying feedback law. So if we change the formulation from a pure state feedback law to a control depending on both time and state, the situation improves. The main problem is that Coron's theorem is not constructive: its proof depends on the Whitney embedding theorem, and it does not allow one to construct the control functions for a given system. We would like to propose a constructive approach to Coron's theorem, of course under some additional assumptions. To be honest, there is also another approach: if we avoid continuous controllers and use some kind of discontinuous controls, then it is also possible to stabilize the system. But the problem is that in this case we have to define solutions of the closed-loop system not in the sense of Filippov, as in the well-known theory of differential equations with discontinuous right-hand sides, but in the sense of pi-trajectories, pi-epsilon solutions, which are based on the concepts of partition and sampling. Let me give an idea of this construction, because we also use it in some of our own constructions.
Okay, so on this slide you see the ingredients, which look natural after several talks at this school. We start by fixing an epsilon greater than zero, some positive number, and splitting the time axis into subintervals of length epsilon, producing a partition. If we have a feedback, a control u depending on time and x, and if we have this sampling, the pi-epsilon solution starting from x0 is defined as the solution of the sequence of ordinary differential equations in which the measurement of the state is frozen at the left endpoint of each interval of the partition: we start from x0, integrate up to time epsilon to obtain x(epsilon), substitute x(epsilon) into the expression for the control, and then repeat the process. The solution obtained in this way is called the pi-epsilon solution of the system. This is essentially what is done in the Clarke-Ledyaev-Sontag-Subbotin theorem; they used a feedback h independent of t, but we may also use h depending on t, as the definition stays the same. Now we are in a position to state the main problem, after the sequence of negative results stated in the first part of my talk. The problem goes as follows. We have a control-affine system with drift. Our goal is to construct a family of time-varying controls of the form (C) such that the zero solution of the closed-loop system is asymptotically, or even exponentially, stable. We would also like the coefficients of these expansions to be good functions, in the sense that they are at least piecewise smooth and continuous at zero; that is a minimal requirement from our side. Of course, we also require that all these functions are real-valued. The goal is to actually compute them. It is clear from Coron's theorem that such periodic functions should exist.
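The sample-and-hold construction just described is easy to state in code. Here is a minimal sketch (the RK4 sub-stepping for the inter-sample integration is my own choice; the definition of the pi-epsilon solution is as on the slide):

```python
import numpy as np

def pi_eps_solution(rhs, feedback, x0, eps, T, substeps=20):
    """Sample-and-hold (pi_eps) solution: on each interval
    [k*eps, (k+1)*eps) the control is frozen at u = h(t_k, x(t_k));
    between samples the ODE x' = rhs(x, u) is integrated with RK4."""
    x = np.asarray(x0, dtype=float).copy()
    traj = [x.copy()]
    t = 0.0
    for _ in range(int(round(T / eps))):
        u = feedback(t, x)            # measure the state at the left endpoint
        dt = eps / substeps
        for _ in range(substeps):     # hold u constant over the interval
            k1 = rhs(x, u)
            k2 = rhs(x + 0.5 * dt * k1, u)
            k3 = rhs(x + 0.5 * dt * k2, u)
            k4 = rhs(x + dt * k3, u)
            x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += eps
        traj.append(x.copy())
    return np.array(traj)
```

For the scalar system x' = u with h(t, x) = -x, the sample-and-hold solution contracts by the factor (1 - eps) per interval, which tends to the continuous exponential decay as epsilon goes to zero.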
Since the asymptotic stability property is robust, we can perturb the system a little and perturb the controls; therefore we can truncate the Fourier series, and such controls certainly exist. But the main problem is how to compute those coefficients, how to compute those expansions. The simple idea here is to use tools from geometric control theory, from nonlinear control theory, in the sense that there should be a relation between expansions of solutions with complex exponentials, or periodic functions with sines and cosines if you like, and controllability properties. The simplest such relation is shown on this slide. It was already visible in the lecture by Professor Agrachev, and these properties have numerous applications in different control problems, from motion planning to optimal control, et cetera. Let me first present this particular viewpoint: if we have a system without drift and we substitute two controls like sine and cosine functions and compute the solution of the system, we get a very nice expression: over a time interval from 0 to epsilon, the solution moves in the direction of the Lie bracket of the vector fields, plus some higher-order term. This expansion is in fact a very particular case of the following: take system Sigma and look at an observable of the system, a function h, assumed to be a good (smooth) function with values in, say, R^k. We define differential operators: given a function h, we may differentiate it in the direction of any vector field f_j, denoted L_{f_j} h. The Lie bracket is written already here, and we may also formally express the Lie bracket of any vector fields in terms of these directional derivatives: L_{[f1, f2]} = L_{f1} L_{f2} - L_{f2} L_{f1}, as shown here.
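To see the Lie bracket mechanism concretely, here is a numerical sketch on the Brockett (nonholonomic integrator) system. The amplitude normalization A is my own choice, not the one on the slides: one period of circular controls closes the (x1, x2)-loop and displaces the state purely along [f1, f2] = (0, 0, 2):

```python
import numpy as np

# Brockett integrator: f1 = (1, 0, -x2), f2 = (0, 1, x1), [f1, f2] = (0, 0, 2).
def rhs(t, x, eps, A):
    w = 2.0 * np.pi / eps
    u1 = -A * w * np.sin(w * t)      # one full circle of (u1, u2) per period
    u2 = A * w * np.cos(w * t)
    return np.array([u1, u2, x[0] * u2 - x[1] * u1])

def end_of_period(eps, A=0.1, steps=4000):
    """Integrate over one period [0, eps] with RK4, starting at the origin."""
    x = np.zeros(3)
    t = 0.0
    dt = eps / steps
    for _ in range(steps):
        k1 = rhs(t, x, eps, A)
        k2 = rhs(t + 0.5 * dt, x + 0.5 * dt * k1, eps, A)
        k3 = rhs(t + 0.5 * dt, x + 0.5 * dt * k2, eps, A)
        k4 = rhs(t + dt, x + dt * k3, eps, A)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return x

# The (x1, x2)-components return to 0 while x3 moves to 2*pi*A^2:
# a net displacement along the bracket direction (0, 0, 2) only.
```

The controls are periodic and average to zero, yet the state moves: that is precisely the "motion in the direction of the Lie bracket" used throughout the rest of the talk.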
The way to obtain this formula, and many of the results in our computations and others, is to use Chen-Fliess expansions of solutions with time-varying controls. I am using the terminology Chen-Fliess expansion, but of course this is almost the same as a Volterra expansion, and in the lecture by Professor Agrachev it was presented as a kind of Taylor series expansion for time-varying vector fields. I use the name Chen-Fliess following the book by Nijmeijer and van der Schaft, which gives a good exposition. The idea goes as follows. We want to look at the observable at time t, with a control u which is, say, L-infinity, for our control system, and in this simple situation we may assume that the vector fields are of class C^{N+1} in D. Then we get h(x(t)) = h(x0) plus an expansion: a sum over nu from 1 to N of iterated directional derivatives L_{f_{j1}} ... L_{f_{j nu}} h(x0), each multiplied by a term depending on time, and this term is an iterated integral of the controls u_{j1}, ..., u_{j nu}. A very important fact is that the remainder r(t) of this series expansion admits an estimate: since we consider a system with drift, r(t) = O((t * max{1, ||u||_{L-infinity}})^{N+1}). Sorry? Ah, yes, thank you very much, you are totally right.
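As a sanity check of this expansion, consider the scalar system x' = -x + u(t) with u(t) = cos(5t), where both the exact solution and the iterated integrals of the second-order truncation are available in closed form. This toy system and all the names are my own choices, not from the slides:

```python
import numpy as np

x0 = 1.0

def exact(t):
    # closed-form solution of x' = -x + cos(5t), x(0) = 1
    return (25.0 / 26.0) * np.exp(-t) + (np.cos(5 * t) + 5 * np.sin(5 * t)) / 26.0

def fliess2(t):
    """Chen-Fliess truncation with words of length <= 2 in the letters
    {f0, f1}, where f0(x) = -x, f1(x) = 1, h(x) = x.  Drift words give
    x0*(1 - t + t^2/2); control words give the iterated integrals
    I1 = int u = sin(5t)/5 and I01 = int int u = (1 - cos(5t))/25,
    the latter with coefficient L_{f1} L_{f0} h = -1."""
    return x0 * (1.0 - t + t**2 / 2.0) + np.sin(5 * t) / 5.0 - (1.0 - np.cos(5 * t)) / 25.0

# the remainder behaves like O(t^3) for this bounded control
```

Comparing the truncation against the exact solution on short horizons shows the cubic-order remainder predicted by the estimate above.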
So this is a well-known expansion, and our goal is to play with it, with a very special choice of controls, and to obtain results that help us establish the exponential stability property with this family of controls. Let me first present a general construction for a control system without drift; it will be called Sigma_0, with f0 identically zero. We assume the simplest possible controllability condition in this case: the vector fields themselves, together with the set of their first-order Lie brackets, span the whole n-dimensional tangent space to the state space. The notation is as follows: S is a set of pairs (j, l) encoding the Lie brackets that we have to take into account; in addition to the m original control vector fields we take n - m elements of S, so that m plus the cardinality of S equals n, and all together we have this bracket-generating condition at the point zero. This is what is known as the step-two bracket-generating condition: at step one we take just the vector fields f_i, and at step two we add their mutual Lie brackets. For simplicity we assume for this construction that this condition is satisfied, which by the Chow-Rashevskii theorem of course gives us the controllability of the system. Good representatives of this class are the unicycle example already defined and the Brockett example, and many other interesting examples fit this formulation.
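The step-two condition is easy to verify numerically for the unicycle. In this sketch (the helper names are mine) the Lie bracket is computed via finite-difference Jacobians and the span is checked by a rank computation:

```python
import numpy as np

def jac(f, x, h=1e-6):
    """Central-difference Jacobian of a vector field f at x."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2.0 * h)
    return J

def lie_bracket(f, g, x):
    """[f, g](x) = Dg(x) f(x) - Df(x) g(x)."""
    return jac(g, x) @ f(x) - jac(f, x) @ g(x)

# Unicycle vector fields: f1 = (cos theta, sin theta, 0), f2 = (0, 0, 1).
f1 = lambda x: np.array([np.cos(x[2]), np.sin(x[2]), 0.0])
f2 = lambda x: np.array([0.0, 0.0, 1.0])

x0 = np.zeros(3)
M = np.column_stack([f1(x0), f2(x0), lie_bracket(f1, f2, x0)])
rank = np.linalg.matrix_rank(M, tol=1e-4)   # = 3: step-two bracket-generating
```

At the origin the bracket [f1, f2] equals (0, -1, 0), which is exactly the missing sideways direction, and together with f1(0) and f2(0) it spans R^3.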
The next step is a very explicit and elegant representation of the controls: we have controls u_i^epsilon, depending on the parameter epsilon to be specified later, depending on t through these trigonometric functions, and depending on the state, because we will substitute the functions v_i and a_{jl} into this expansion as functions of x, and then, after some computation, conclude stability or even exponential stability of the closed-loop system. Here you see the exact parameterization of the controls, called (C), and now the goal is to see what is really going on, how we can prove exponential stability in this case. You see that there are frequency multipliers here, and we should be careful about them; I would like to mention that the deltas in this notation are just Kronecker symbols. Our goal is to specify v_i and a_{jl} as functions of x, so that this is genuinely a time-varying feedback control. To motivate the construction, let me present an illustration of what is going on here. Originally we had this system without drift as a model of a nonholonomic system: a wheel with the rolling-without-slipping condition cannot be steered perpendicular to its direction of motion. On the other hand, we already know that because of the bracket-generating condition we have a number of Lie brackets giving us additional directions; in this illustration they correspond to the possibility of motion perpendicular to the plane of the wheel, which is, roughly speaking, what omniwheel devices in mobile robotics actually implement. Then, for this extended system, the only problem is how to implement these artificial controls u-bar.
These artificial controls u-bar are realized by fast oscillating controls in the original system. That is the idea of what is going on. We take any positive definite quadratic form as a Lyapunov function candidate for the extended system and do the control design there; but knowing that the controls u-bar_{jl} do not exist in the original system, we approximate them by using the fast oscillating controls shown previously. The construction is on this slide. The idea was already explained via the extended system, so let me give a couple more comments. You saw that the controls are parameterized by the functions a and v. These parameterizations are computed by solving the system of algebraic equations (Sigma_a): once you fix an arbitrary quadratic form, any positive definite function V, you may compute its gradient and consider (Sigma_a) as a system of n algebraic equations in n unknowns. I will claim later on that this system is always solvable. The other assumption that we keep in mind concerns the integer frequency multipliers in the formulas for the controls on the previous slide: we assume there are no resonances between these numbers, that is, the magnitudes of the distinct frequencies are pairwise distinct, so there are no second-order resonances in the usual terminology of nonlinear oscillation theory. The design then goes as follows: we take the output as the identity map, and we would like to achieve the following goal, starting from any point x0 and evaluating the Lyapunov function candidate at x0.
So we would like to go from x0 to x0 minus epsilon times the gradient of V at x0, plus some higher-order terms. And this expansion is essentially obtained from the Chen–Fliess series, formula (3), with the controls plugged into that Chen–Fliess expansion. There are some technical formalities here, but at the end of the day the idea is clear. Altogether we come to the first result in our series of results. It goes as follows: suppose that we have a C² function V such that, with the gradient of V, we can solve the system of algebraic equations shown previously, and that its solutions, depending on the point x and on the parameter epsilon, admit this kind of asymptotics for small x and small epsilon. Then the conclusion is that there exists a ball in the state space, inside the domain D, such that for any small enough epsilon the solution converges exponentially to zero. That is our result in this area, and the construction of the controls was already shown on the slide. Of course, two points should be answered here: first, is the system of algebraic equations solvable, and second, can we always guarantee that this kind of asymptotics holds for the solutions of the algebraic equations? Sorry for using the terminology "algebraic" naively, but this is really algebraic: these are quadratic equations with respect to a and v. It is not important that the vector fields are polynomial; it is a system of quadratic equations, essentially. If we had higher-order brackets, we would have higher-order algebraic equations. Okay, this is the first result, and I claim that this system is always solvable.
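The target expansion, one step of length epsilon moving the state essentially along minus the gradient of V, forces V to decay geometrically. Here is a toy discrete sketch of that mechanism, with the higher-order terms simply dropped and V an assumed quadratic form, not the actual closed-loop system:

```python
import numpy as np

def V(x):
    return 0.5 * float(x @ x)   # an assumed positive definite quadratic form

def strobe_step(x, eps):
    # one "stroboscopic" step: x(eps) ≈ x0 - eps * grad V(x0), higher-order terms dropped;
    # for this V, grad V(x) = x
    return x - eps * x

x = np.array([1.0, -2.0, 0.5])
eps = 0.1
vals = []
for _ in range(50):
    vals.append(V(x))
    x = strobe_step(x, eps)
# V decreases geometrically: V_{k+1} = (1 - eps)^2 V_k, i.e. exponential decay
```

In the real construction the gradient step is produced on average by the oscillating controls, but the resulting contraction of V per epsilon-interval is the same effect.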
I am using a nonzero-rotation (topological degree) principle to conclude that the system of quadratic equations is solvable; the implicit function theorem is not directly applicable, precisely because we have quadratic equations, and if we linearize at the origin we get a singular linearization. Therefore we use a suitable reformulation of this system and apply the principle of nonzero rotation, and we can say that the asymptotics of these roots is of order square root of x over square root of epsilon. Altogether, if we plug this into the requirements of the previous theorem, we arrive at this result: whenever we have a system satisfying the step-2 bracket-generating condition, our procedure works, in the sense that the system is solvable and the obtained controllers achieve exponential stability of the equilibrium. That is what we have, and I would like to stress the idea behind it: there is no Lyapunov function for the original system, as I showed previously. We have to proceed with something like a stroboscopic map, in the sense that over time intervals proportional to epsilon we decrease the value of the Lyapunov function V by using oscillating controls, and therefore, from this representation, treating the smallness of epsilon and the other requirements carefully, we get exponential contraction to zero. This works perfectly for model examples like Brockett's example: you see that we have just two vector fields, and the first-order Lie bracket is enough to span the whole tangent space; then the given system of algebraic equations reduces to one algebraic equation, and the solution is very simple. We have this explicit parameterization; we just plug those functions in as coefficients of the trigonometric polynomials and conclude exponential stability. It also looks perfectly good in numerical simulation, even with values of epsilon which are not particularly small.
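To see the stroboscopic mechanism numerically, one can drive a Brockett-type integrator with fast sinusoidal controls over one period: the directly actuated states return to where they started, while the bracket direction picks up a net displacement of about 2πA². This is a sketch; the amplitude, frequency, and integrator are my choices, not the controller from the slides:

```python
import numpy as np

A, omega = 0.1, 2 * np.pi   # oscillation amplitude and frequency (illustrative values)

def f(t, x):
    # Brockett-type system: x1' = u1, x2' = u2, x3' = x1*u2 - x2*u1
    u1 = A * omega * np.cos(omega * t)
    u2 = A * omega * np.sin(omega * t)
    return np.array([u1, u2, x[0] * u2 - x[1] * u1])

def rk4(x, t0, t1, n=20000):
    # classical fourth-order Runge-Kutta integration
    h = (t1 - t0) / n
    t = t0
    for _ in range(n):
        k1 = f(t, x); k2 = f(t + h/2, x + h/2 * k1)
        k3 = f(t + h/2, x + h/2 * k2); k4 = f(t + h, x + h * k3)
        x = x + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        t += h
    return x

x = rk4(np.zeros(3), 0.0, 1.0)   # one period of the oscillating controls
# x1 and x2 return (almost) to zero; x3 has moved by about 2*pi*A^2
```

The net motion along x3, produced purely by oscillations in u1, u2, is exactly the bracket direction that the feedback design exploits.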
Our result is that there exists some small epsilon such that we have exponential convergence, but numerical simulation confirms that it also works with values which are not usually treated as small numbers. This is naturally extendable to other systems like the unicycle, and to higher-degree nonholonomic systems, where we have to take iterated Lie brackets into account. There are examples in this area, but the expansions become huge. Of course, we have to generate iterated Lie brackets here, but at the end of the day we also arrive at a system of algebraic equations, of order three in this case; we can solve it, and we can also integrate numerically to confirm that it works. But this is not the main message of my talk. The main message is that this scheme also applies to a class of systems with drift. I mentioned the Euler equations precisely because they are a good motivation to extend this method to systems with drift in this setting. Suppose that we also have a drift f0, which is nonzero, and suppose that we have this rather general controllability condition. This is step three, but anyway, you see the Lie brackets appearing here; this is enough to conclude small-time local controllability at zero. We also use sets of indices corresponding to iterated brackets and to iterated brackets with f0. And then our second result in this area, for systems with drift, is as follows: under some technical assumptions based on the smallness of these brackets — which are satisfied naturally if the drift f0 is quadratic and the control vector fields are constant, as for the Euler equations — we can again do a control design with cosine functions, as here.
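For the rigid body the drift really is quadratic: in the absence of control torques, Euler's equations conserve both the kinetic energy and the norm of the angular momentum. A quick numerical sanity check of that structure (the inertia values and initial condition are illustrative assumptions):

```python
import numpy as np

I = np.array([1.0, 2.0, 3.0])   # principal moments of inertia (illustrative)

def euler_rhs(w):
    # Euler's equations for the angular velocity: a quadratic, uncontrolled drift
    return np.array([(I[1] - I[2]) * w[1] * w[2] / I[0],
                     (I[2] - I[0]) * w[2] * w[0] / I[1],
                     (I[0] - I[1]) * w[0] * w[1] / I[2]])

def rk4(w, h, n):
    for _ in range(n):
        k1 = euler_rhs(w); k2 = euler_rhs(w + h/2 * k1)
        k3 = euler_rhs(w + h/2 * k2); k4 = euler_rhs(w + h * k3)
        w = w + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    return w

energy = lambda w: 0.5 * float(I @ (w * w))
momentum = lambda w: float(np.linalg.norm(I * w))

w0 = np.array([0.1, 1.0, 0.1])
w1 = rk4(w0, 1e-3, 5000)
# both invariants are preserved by the drift, so stabilization must come from the controls
```

Because the free dynamics preserve these quantities, no smooth static feedback built on them alone can push the state to zero, which is one more reason oscillating controls enter the picture.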
And under this particular assumption, we can reduce our analysis to the inversion of the matrix composed of these vector fields. Then I would like to show you the rigid body example, just in a couple of minutes. This case is three-dimensional, but we can also do six. For dimension three it goes very simply: the matrix is very easy to define, you can solve the system by hand, and you can compare simulations with previous results in this area. In the concluding part of my talk, I would like to touch on a very interesting question which is far from being solved, and which is inspired by the work of Agrachev and Sarychev and other people working in this area. I mentioned the Euler equation in fluid dynamics not by accident: once we are successful in solving the stabilization problem for the Euler equation of rigid body dynamics, we can try to solve the similar problem for a Galerkin approximation of the Euler equation in fluid mechanics. Up to now I have only very academic and very preliminary results in this area, where we have a two-dimensional flow on a torus. This is a very standard setting, already treated in many publications, and the Galerkin approximation of this system is a system of ordinary differential equations, written as (Γ) here. Everything comes from the derivation given in this paper and in other books. We are interested in a very specific case: the bracket-generating condition does not depend on the viscosity, to our knowledge, so we just cancel this term, consider the Euler equation, and ask whether it is possible to stabilize the system by using time-varying controls. We can rewrite the previous complex notation in real variables, so x here is a real variable; I will not go through the whole derivation, of course, though I could. And for simplicity we again consider the step-3 bracket-generating condition.
This is the easiest case, but we can do it. So what do we have here? We have m control vector fields, and we have some iterated brackets with f0, as was done in these works. We denote the sets of indices corresponding to the vector fields and to the iterated brackets by S1 and S2, of this structure, and we assume that all of those together are enough to have the controllability condition in this form (B3), the step-3 bracket-generating condition. Of course, there is an illustration that this somehow corresponds to the idea that we control the low-frequency modes of a big, high-dimensional system, and through the brackets we also gain access to the high-frequency modes, which would be related to the cascade, but I will not touch on that here. We concentrate on the Euler equation without viscosity, the simplest situation. The control design goes very similarly to the previous cases: the controllers are decomposed as a sum of two components; we actuate the original vector fields with oscillating terms, and in addition we use faster oscillating controls with cosine functions, with this parameterization, to gain access to the iterated Lie brackets. At the end of the day we arrive at a system of algebraic equations of this form: if we have a positive definite quadratic form, we can compute its gradient, transpose it, and invert the matrix — that is, we can solve the system with respect to the a_j here. Finally, we arrive at the following proposition: if we use our controls with frequency multipliers k_j, assuming that there are no resonances of order up to three, and if the parameter epsilon is small enough, then, in this very specific situation where the drift is quadratic and the control vector fields are constant, we get asymptotic stability of zero.
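One plausible formalization of "no resonances of order up to three" (an assumption on my part, not a definition from the slides) is that no integer combination of the frequency multipliers, with coefficient magnitudes summing to at most three, vanishes. A brute-force check:

```python
from itertools import product

def no_resonances_up_to_3(ks):
    # returns True if no combination c1*k1 + ... + cm*km = 0 exists
    # with integer coefficients ci satisfying 1 <= |c1| + ... + |cm| <= 3
    m = len(ks)
    for coeffs in product(range(-3, 4), repeat=m):
        order = sum(abs(c) for c in coeffs)
        if 1 <= order <= 3 and sum(c * k for c, k in zip(coeffs, ks)) == 0:
            return False
    return True

print(no_resonances_up_to_3((2, 3, 7)))  # no low-order resonance
print(no_resonances_up_to_3((1, 2, 3)))  # resonant: 1 + 2 - 3 = 0
```

Order-2 non-resonance is exactly the earlier requirement that the magnitudes be mutually distinct; order 3 additionally excludes relations like 2k_i = k_j and k_i + k_j = k_l.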
For this formulation we can produce some naive examples: if we take the system in Fourier modes and write it in real coordinates, with, say, eight real coordinates and four controls, there is already a very simple illustration that we can control an eight-dimensional system by using four controls with iterated Lie brackets of this form, and we can even write them explicitly. Assuming that there are no resonances of order up to three among the frequency multipliers, we get asymptotic stability of the zero solution. As my time is running out, let me move towards the conclusion. I understand that this work is far from being complete, in the sense that, in order to deal with a higher-dimensional or even infinite-dimensional Euler equation, we would have to iterate this procedure, as is done with the saturation technique and its inductive construction. Unfortunately, to arrive at a tractable computational approach, we would have to compute too many Lie brackets and huge expansions. I can give brief results: if we write it formally, we have sets of words of length one, three, five, et cetera, corresponding to iterated brackets via this formula, and then we may think about our dream of using controllers as a sum of components corresponding to brackets of length one, three, five, and so on. Inductively, we then need several time scales: for this relatively simple example, where we would like to control a fifth-order Lie bracket, we would like to have three time scales. The first time scale, epsilon, corresponds to the usual requirement that we decrease our Lyapunov function over a step of length epsilon. Then we have a pair of fast and slow parameters, so that we generate the intermediate Lie brackets by using the fast frequencies and then generate the fifth-order bracket by using the slow ones.
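The number of words grows quickly with the bracket length: with the drift plus three control vector fields, there are 4⁵ words of length five, one iterated integral per word under this rough count (my reading, for illustration; the labels are assumed):

```python
from itertools import product

letters = ["f0", "f1", "f2", "f3"]        # drift plus three control fields (assumed labels)
words = list(product(letters, repeat=5))  # all words of length five over this alphabet
print(len(words))                         # 4**5 = 1024
```

This is where the computational cost quoted next comes from: each word of length five contributes an order-five iterated integral to the expansion.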
And then, in this particular situation, these scales are related as squares. In order to compute the whole expansion for a fifth-order bracket with, say, three control vector fields, as in this situation, we have to compute 1,024 iterated integrals of order five; in Maple this takes more than one day, something like that, but at least it is doable in this situation. So I should conclude. Some technical formalities can be found here. Thank you very much for your attention.