Thanks. Thank you very much. So since this is a seminar on basic notions, I decided to start from the very beginning, and I will try to explain in one hour what the calculus of variations is and what tools we use there. So the calculus of variations is essentially about minimizing functionals, and a functional means a function whose argument is itself a function. The typical example is a functional of the following form: the integral over an interval from a to b of some function f(x, u(x), u'(x)) dx. In this expression f is a given function of three real variables, u is the unknown function defined on the interval [a, b], and u' is its derivative. So for every admissible u I can compute this integral, and the problem is to minimize it among all functions u in some given class, for instance among all functions which take prescribed values at the two endpoints of the interval. This is the simplest setting, but there are also multidimensional versions, where instead of an interval the function u is defined on an open set Omega in R^n. In that case the functional is the integral over Omega of f(x, u(x), gradient of u at x) dx, where Omega is an open subset of R^n, u takes real values, and the gradient of u replaces the derivative; again one minimizes among all functions which agree with a given function on the boundary of Omega. Let me start with a classical example. Suppose that we have a curve y = u(x) in the plane, and suppose that u is strictly positive. If we rotate this curve around the x-axis, we obtain a surface of revolution. So the question is: what is the area of this surface?
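In symbols, and with notation of my own choosing for the integrand, the one-dimensional model functional and its multidimensional analogue described above read:

```latex
% one-dimensional model functional: u : [a,b] \to \mathbb{R}, integrand f = f(x, y, \xi)
F(u) = \int_a^b f\bigl(x,\, u(x),\, u'(x)\bigr)\, dx

% multidimensional analogue: u : \Omega \subset \mathbb{R}^n \to \mathbb{R}
F(u) = \int_\Omega f\bigl(x,\, u(x),\, \nabla u(x)\bigr)\, dx
```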
The answer is classical: the area of this surface of revolution is 2π times the integral from a to b of u(x) times the square root of 1 plus u'(x) squared, dx. So it is exactly a functional of the form written before, and it has a clear geometric meaning. Now, to turn this into a minimum problem, I prescribe two boundary conditions: u(a) = alpha and u(b) = beta, where alpha and beta are two positive numbers. Geometrically this means that at a I fix a circle of radius alpha and at b a circle of radius beta; the two circles lie in parallel planes, and they have the same symmetry axis. And now I am considering all surfaces of rotation that connect this circle to the other circle, and I want to find the one with minimal area. So I am led to a problem that can be written precisely in this form. Another interesting example among the classical problems is the Dirichlet integral, which is the integral over Omega of the gradient of u at x, squared, dx. This is a very classical functional, and one can see that its minimizers are exactly the harmonic functions, so minimizing it is one way to produce harmonic functions with prescribed boundary values. And there is a third example, closer to geometry: the area of a graph. If u is defined on a domain Omega contained in R^n, its graph is an n-dimensional surface in R^{n+1}, and the area of the graph is given by the integral over Omega of the square root of 1 plus the gradient of u squared, dx. So also the problem of minimizing the area of a graph fits into the same framework: we minimize this functional among all functions defined on the domain Omega which take prescribed values on the boundary of the domain.
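For the record, the three examples just described can be written (the symbols A, D and the script A are my own labels, not the lecture's) as:

```latex
% area of the surface obtained by rotating the graph of u > 0 around the x-axis
A(u) = 2\pi \int_a^b u(x)\sqrt{1 + u'(x)^2}\, dx, \qquad u(a)=\alpha,\; u(b)=\beta

% Dirichlet integral, whose minimizers are the harmonic functions
D(u) = \int_\Omega |\nabla u(x)|^2\, dx

% area of the graph of u over a domain \Omega \subset \mathbb{R}^n
\mathcal{A}(u) = \int_\Omega \sqrt{1 + |\nabla u(x)|^2}\, dx
```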
So here the surface is represented as a graph, and for the area of a graph we have this explicit formula, so minimizing the area means minimizing a functional of the type written above. Now, given such a minimum problem, what is the first thing one can do? The classical step is to derive necessary conditions. So essentially you say: suppose that u is a minimizer; then it must satisfy some precise conditions, and this is done by a method, the method of variations, which gives the name to the field. So the idea is this. Now I will consider just the simplest case, the case where the function is defined over an interval. And so suppose that u is a minimizer. And then the idea is, I consider a variation of u. Variation means what? That I add to u some other small function; for this reason I put an epsilon in front of it, and then epsilon will go to zero. And this phi will be just a function defined on the same interval, with the property that it vanishes on the boundary. And so now the idea is: u is a minimizer, and this function, the variation of u, satisfies the same boundary conditions, because phi is zero at the boundary. And so because of the minimality we must have that this inequality is satisfied. And now the point is that this is satisfied for every epsilon, not just for a single epsilon. And so one can define a function g of the real variable epsilon in this way, and its value at zero will be, of course, g of zero. And so when you read this, you see that the function g defined in this way has a minimum at zero, because for every epsilon it is larger than g of zero. And here is where we can use the standard methods of calculus. The function g has a minimum at zero; this means that necessarily the derivative of g at zero must be equal to zero. But what is this derivative?
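The variation argument just described can be summarized as:

```latex
% variation of the minimizer u: \varphi vanishes at the endpoints, \varepsilon \in \mathbb{R}
g(\varepsilon) = F(u + \varepsilon\varphi), \qquad \varphi(a) = \varphi(b) = 0

% minimality of u gives g(\varepsilon) \ge g(0) for every \varepsilon, hence
g'(0) = 0
```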
This derivative can be computed, because the functional is written in this form, and so you have to imagine that in the integrand I now write u plus epsilon phi and u prime plus epsilon phi prime. And I can take the derivative under the integral sign. And so very quickly I arrive at this expression: f_y, computed at (x, u(x), u'(x)), times phi of x, plus f_xi, computed at (x, u(x), u'(x)), times phi prime. And everything here, we know, must be equal to zero. So now the next step is to integrate by parts, because I want to move this derivative onto the other term. And since the function phi is zero at the boundary, there is no boundary term when I perform the integration by parts. And so this integral here can be written in this form, which is more convenient: it is f_y(x, u, u') minus the derivative with respect to x of f_xi, so of this term, and then everything is multiplied by phi, because I moved the derivative, so here what remains is phi. And this, as I said before, is equal to zero. But remember that this is true for every phi which satisfies this condition. So now the fact that the integral vanishes for every choice of phi implies that the term in the curly bracket is zero. And this is good news, because this is an equality between the two terms that I write here: d over dx of f_xi computed at (x, u, u') is equal to f_y(x, u, u'). And this is now an equality that must be satisfied at every point of our interval. So this argument shows that if u is a minimizer, because we started from a minimizer u, then u must satisfy this differential equation. And so what one can do then, since we are looking for minimizers, is first try to solve this differential equation. In some cases this will be very difficult, but in some cases it is easy. And then we restrict the attention just to the solutions of this differential equation.
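Written out, the computation just performed is (with f = f(x, y, ξ), so f_y and f_ξ are the partial derivatives in the second and third slots):

```latex
% differentiating under the integral sign:
g'(0) = \int_a^b \Bigl[ f_y(x, u, u')\,\varphi + f_\xi(x, u, u')\,\varphi' \Bigr]\, dx = 0

% integrating by parts (no boundary term, since \varphi(a) = \varphi(b) = 0):
\int_a^b \Bigl[ f_y(x, u, u') - \tfrac{d}{dx}\, f_\xi(x, u, u') \Bigr]\, \varphi\, dx = 0

% this holds for every such \varphi, so the Euler--Lagrange equation follows:
\frac{d}{dx}\, f_\xi(x, u, u') = f_y(x, u, u') \qquad \text{on } (a, b)
```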
So it's like what you do in calculus when you want to minimize some function: you look at the points where the first derivative is zero. This is just a necessary condition, it's not sufficient in general, but then usually you are left to consider just a finite number of points. So also here this equation rules out a lot of functions, and then you focus the attention just on the solutions of this equation, and you try to prove that what you found as a solution of this equation is actually a minimizer. So, just some words about the structure of this equation. This equation in general is nonlinear. It is linear when this derivative with respect to xi is linear in xi, which means that we start from a functional which is quadratic with respect to xi; outside that case it is nonlinear. But if you develop this derivative, you get the second derivative of f with respect to xi, times u second, plus lower order terms, by which I mean terms which do not contain the second derivative of u, and this is equal to zero. And so you see that the structure of this equation is that it is linear in the derivative of highest order, but nonlinear with respect to the rest. So it is a quasilinear equation. Also, this formula here shows that if this second derivative with respect to xi is different from zero, which is something that happens in many interesting examples, for instance in this example, then one can write this equation in a better way, because one can isolate u second, and the equation is written in normal form. And then one can apply all the machinery of ordinary differential equations to solve this kind of problem. Now, the very same argument shows that if we start with the same problem in dimension n, then we end up with a partial differential equation, where n is the dimension of the space, which has this form.
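Expanding the total derivative in the Euler-Lagrange equation by the chain rule makes the quasilinear structure just described explicit:

```latex
% expanding d/dx f_\xi(x, u, u') exposes the coefficient of u'':
f_{\xi\xi}(x,u,u')\, u'' + f_{\xi y}(x,u,u')\, u' + f_{\xi x}(x,u,u') = f_y(x,u,u')

% when f_{\xi\xi} \neq 0, one can isolate u'' and write the equation in normal form:
u'' = \frac{f_y(x,u,u') - f_{\xi y}(x,u,u')\, u' - f_{\xi x}(x,u,u')}{f_{\xi\xi}(x,u,u')}
```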
You see that it is very similar to the previous one; there is just some small change, due to the fact that now we have the n components of the vector xi. So we have that this expression is equal to f_y(x, u, gradient of u), and this must be satisfied for every x in Omega. In this case the minimality leads to a partial differential equation in Omega. And this is in general a more difficult problem than the previous one, because in a sense there are a lot of methods to solve ordinary differential equations, while for partial differential equations the problem is in some cases much harder. So in some sense what happens, and this is what happened historically, is that if you want to solve a minimum problem in dimension one, the best you can do is to try to solve the ordinary differential equation, because there is a well-developed theory and in many cases there are also a lot of tricks to find explicitly the solution; then you try to show that this solution is a minimizer. On the contrary, in the case of a partial differential equation you can act conversely: maybe you have some method to find directly that there is a minimizer, and then this method will provide you with a solution of the partial differential equation. So now, just to have a quick idea of what you get from this method in the one-dimensional case, let us consider the special case of the area of surfaces of rotation. By the way, this equation, I have not mentioned it, is called the Euler equation, or sometimes the Euler-Lagrange equation, and also its extension to dimension n is called the Euler-Lagrange equation of the problem. If u is vector-valued then this becomes a system, but let us stay with the easier case where u is scalar. So, in this case of the surfaces of revolution, the solutions of the Euler equation are explicit.
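The n-dimensional Euler-Lagrange equation just mentioned, in the same notation as before (now with xi a vector in R^n), is:

```latex
% Euler--Lagrange equation in dimension n, with f = f(x, y, \xi), \xi \in \mathbb{R}^n:
\sum_{i=1}^{n} \frac{\partial}{\partial x_i}\, f_{\xi_i}\bigl(x,\, u,\, \nabla u\bigr)
  = f_y\bigl(x,\, u,\, \nabla u\bigr) \qquad \text{for every } x \in \Omega
```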
So the solutions of the Euler-Lagrange equation are not only explicit but also very easy to remember: they are of the form u(x) = a cosh((x - b)/a), a times the hyperbolic cosine of (x minus b) divided by a. So, more or less, they all have the same shape, and they depend on two parameters. The shape is the one of a catenary: it is the hyperbolic cosine of x, except that with b you shift it a little bit, and with this a you play with the amplitude. And you have to expect that the solutions depend on two parameters, because this is a second order differential equation that, as I told you, can be written in normal form; so this is exactly what you expect. Then, to make a long story short, I can explain what the situation is in the case of this kind of functional, because it is clear that there will be some problems. So suppose that we have fixed here a and b, and suppose for simplicity that a is fixed at zero and alpha is fixed at one; we fix this just to normalize. Then what happens? You have your point (b, beta), and you would like to find a solution of the Euler equation, written like this, that passes through these two points. But if you try to do this, you discover that this is not always possible: it is possible only if the point (b, beta) is above some curve, that I call g of x. So there is a well-defined curve with the property that if this point is above it, then we have not only one but even two solutions. One is like in this picture, and the second one is tangent to this curve at some point. If you are exactly on this curve, there is only one solution of the Euler equation that passes through these points, and if you are below, there is no solution.
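As a quick sanity check (my own illustration, with arbitrary sample parameters; the shift is called c here to avoid clashing with the endpoint b), one can verify numerically that the catenary satisfies the Euler equation of the area integrand f(x, y, xi) = y*sqrt(1 + xi^2):

```python
import math

# Check numerically that the catenary u(x) = a*cosh((x - c)/a) satisfies
# the Euler-Lagrange equation of f(x, y, xi) = y*sqrt(1 + xi^2):
#     d/dx [ u u' / sqrt(1 + u'^2) ] = sqrt(1 + u'^2).
# The parameters a, c below are arbitrary sample values, not from the lecture.

a, c = 0.7, 0.3

def u(x):  return a * math.cosh((x - c) / a)   # the catenary
def du(x): return math.sinh((x - c) / a)       # its derivative

def f_xi_term(x):
    # the quantity u u' / sqrt(1 + u'^2), whose x-derivative appears in the equation
    return u(x) * du(x) / math.sqrt(1.0 + du(x) ** 2)

h = 1e-6
for x in [0.0, 0.5, 1.0, 2.0]:
    lhs = (f_xi_term(x + h) - f_xi_term(x - h)) / (2 * h)   # d/dx f_xi, central difference
    rhs = math.sqrt(1.0 + du(x) ** 2)                        # f_y
    assert abs(lhs - rhs) < 1e-5, (x, lhs, rhs)
print("catenary satisfies the Euler equation at all sample points")
```

Both sides reduce analytically to cosh((x - c)/a), which is why the finite-difference check passes with so small a tolerance.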
So this maybe is the first surprise, because if there is no solution of the Euler equation, this means that there will be no solution of the minimum problem, and indeed this is what happens, really. You have to think that when I formulated the problem in this way, I assumed that the function u is strictly positive everywhere. But the point is that it might happen that with functions that are strictly positive everywhere you can approximate this curve here, in this way: these will be my approximating functions, and the limit is not the graph of a function; it is anyway a curve, a U-shaped curve. And then, what happens if you rotate this? What you get in the limiting case are two disks connected by a segment. This is the point: this is not something that you can write in this form, unless you give a generalized meaning to what is written here, because you cannot compute the area when this is vertical using that formula; in some sense in that formulation you neglect this piece, which is the limit of functions that can be written in this way. So one can suspect that when the point that you have to reach is in this region, then the best you can do is to approximate this curve, and in this case you see that if you approximate it by functions like this you never reach the value that you get in the limit. And this is precisely what happens: the infimum of the area is well defined, and it is given by the area of the two disks, but this infimum is not attained by any particular function among the functions defined in the domain. And so the point is, we have to admit that there is no solution of the problem of minimizing this expression among regular functions which take the value alpha, that is one, at zero, and the value beta at the other point. So even in this elementary case you have examples of nonexistence. Now, what happens if you are above? Then you have two possible solutions of the Euler equation, and then a more refined argument rules out the one that is tangent, so the smaller one, and you can prove that the larger one is always a local minimizer: if you perturb it slightly, you can only increase the value of the area. But it is not always the global minimizer. You have to compare the values: you take the solution u which passes through these two points, the larger of the two, and then you have to compare the value of the functional on this solution with what you can obtain by this construction. And what you can obtain with this construction is clearly pi times (one plus beta squared): the area of the two disks, one of radius one and one of radius beta. So you have to compare this with this. If it happens that the inequality is the one that I have written, so that the area of the two disks is less than the area generated by the catenary, then it is more convenient to approximate this function rather than to take this solution, and so this is a case where you don't have a solution; if on the contrary you have the opposite inequality, then the solution is given by this catenary. And one can make this analysis more precise: one divides the region by a second function, that I call h, with these properties. If you are above this second function, then you consider the only catenary that joins these two points and does not touch the line y = g(x), and this will be the minimizer. If you are here, between the two curves, there will be no minimizer, and the best you can do is to approximate like this; and if you are even below, again there is no minimizer. So in some sense the Euler equation is just the starting point of the analysis. And so even in this elementary one-dimensional problem one has to do other comparisons. So, more or less, the situation is similar, of course more difficult, to the case when you have to find the minimum of a function: you look for the points where the derivative vanishes, but then there is no guarantee that each one of these points will be a minimizer, and you have either to consider second order conditions or to do other arguments and compare these points one with the
other one. OK, so this was just to give an idea of what you can do if you first solve the equation and then, using the fact that you have an explicit solution, you find the minimizer. I have to add a remark about the case of dimension n. In dimension n, if we consider this functional, the Dirichlet integral, and you do this computation, you see that this equation becomes simply the Laplace equation. So in the case of the Dirichlet integral the Euler equation is exactly the Laplace equation, because here, when you compute this, this is the derivative with respect to x_i of the derivative of u with respect to x_i, and then you sum them, so you get the Laplacian. And so what you get is that in that case a minimum point satisfies the Laplace equation. In the case of the area of the graph, which we also considered, you can write down what you get; no, I don't write this explicitly, but from this one finds the condition that on the surface the mean curvature must be equal to zero, which is a classical result saying that all area-minimizing graphs satisfy this geometric condition. Well, now, to come to a more recent approach to problems in the calculus of variations, I have to mention what is called the direct method, which is the method that you use if you want to prove directly the existence of a minimizer. Because here what we did is something a little bit indirect: we first look for necessary conditions of minimality, then we solve the corresponding equation, and then we try to prove that what we get is a minimizer. So this is an indirect approach. The advantage is that in many cases you have explicit solutions, and then not only do you know that a minimizer exists, you also know more or less explicitly what it looks like. If instead one wants to prove directly the existence of a minimizer, then the method is called the direct method. Historically, this Euler equation was of course introduced by Euler in the 18th century, and probably something similar was even used at the end of the 17th century, and then Lagrange also used it some years later, while the direct method was introduced in the 20th century, more or less in 1920, by Leonida Tonelli. And the tools for the direct method are two notions. One is semicontinuity. Semicontinuity means the following: if I have a sequence of functions that converges to some function, then I want that F on the limit, so my functional computed on the limit function, is less than or equal to the lower limit of the functional along the sequence. Of course this notion depends on the topology that you choose in the functional space; so when you speak of convergence, it will be convergence with respect to some topology that has to be introduced in the space of functions where you want to minimize. Then the second condition for the direct method is a compactness condition, more precisely compactness of the set of functions u for which F is bounded by some constant. To be precise: for every real number t, and for every sequence u_k such that F(u_k) is less than or equal to t, there exists a subsequence u_{k_j} which converges to some function u, and converges with respect to the same topology that was used for the notion of semicontinuity. Now, these are the two basic notions, and the direct method is very simple. Why are these important notions? Because now I can even prove, right now, that if you have these two properties, semicontinuity and compactness for every t, then F has a minimizer. And the proof is really easy. First of all, you know that there exists a sequence of functions such that the limit of F along this sequence is the infimum of F, because this is a property of the infimum: if there is a minimum, then you can take the sequence constantly equal to a minimum point; otherwise, by the definition of infimum, you always have a sequence like this. This is called a minimizing sequence. Then, of course, when we prove the existence of a minimizer we assume that the infimum is finite, because otherwise there is nothing to prove. So if the infimum is finite, you fix some t larger than the infimum, and then you apply the compactness condition to the minimizing sequence: since the limit of F(u_k) is the infimum, which is less than t, we have that F(u_k) is less than or equal to t, at least for k large enough. Now you use compactness, so you have a subsequence which converges in tau to some u, and finally you use lower semicontinuity: you have that F(u) is less than or equal to the lower limit with respect to j of F(u_{k_j}). But this was a limit, and this limit was equal to the infimum, and so you have found a function u on which F is less than or equal to the infimum. This means that u is a minimum point. So this is a very easy argument. The difficulty comes when you want to apply it, because the problem is the choice of the topology. You see that these two conditions are in some sense in opposition. If you replace this topology by a stronger topology, then it is clear that it is easier to get lower semicontinuity, because if something converges in a strong way then it converges also in the weak way, so the inequality remains valid; in some sense, the stronger the topology, the easier it is to prove lower semicontinuity. On the contrary, with compactness, the weaker the topology, the easier it is to have compactness, because you have more converging sequences. So in order to apply this method to functionals like this, we have to find a topology which satisfies both conditions, and I wanted to give an idea of this in the one-dimensional case, for the functional written on the left. This is a theorem proved by Tonelli. Suppose that this f satisfies a condition like this, so from below it is controlled by the square of the last variable; then suppose that it is continuous, and that with respect to this variable xi it is convex. So these are the hypotheses: continuity with respect to all variables, convexity with respect to xi, and a control from below. Then the point is: what
about semicontinuity? Because the first idea that can come to mind is: well, this depends on u and on the first derivative, so the natural topology to choose is something that involves both, for instance the C^1 topology, something like this. The functional is then even continuous with respect to C^1 convergence, because we have convergence of u and convergence of the derivatives; and of course this is true, but then you are in trouble with compactness. Indeed, if you have a sequence which satisfies this inequality, from this estimate you get that the integral of (u_k')^2 is bounded, because you can control this integral by the value of the functional itself, so it will be bounded exactly by the constant t. But can I deduce from this compactness in C^1? There is no way: you can have sequences where this is bounded which have no convergent subsequence in C^1. On the contrary, if you use the Ascoli-Arzelà theorem, and you play a little bit with the Hölder inequality, you see that from this condition you get compactness in the C^0 topology, so the topology of uniform convergence of continuous functions. And so the problem is: in order to have compactness I should use the topology of uniform convergence and neglect the convergence of the derivatives; but if I do this, then the risk is that when I pass to the limit, the limit function has no derivative, so I cannot even write the functional. So there is a problem here, and this was solved by Tonelli by considering the problem in the space of absolutely continuous functions on the interval [a, b]. Because, and this is an exercise, it is not so difficult: if you have a sequence of absolutely continuous functions which satisfy this estimate, and suppose that they converge uniformly, you can prove that the limit is still absolutely continuous and satisfies the same estimate. Absolutely continuous functions have a derivative not everywhere but almost everywhere, and this is enough to give a definition of the functional. So the idea of Tonelli was not only to introduce this argument, but then to say that the correct space where you are minimizing the functional should be the space of absolutely continuous functions, and the correct topology is the topology of uniform convergence. Then the hard part is to prove semicontinuity, which uses the fact that f is convex in xi: you need semicontinuity with respect to a topology which is weaker than what you would like to have, because you don't have convergence of the derivatives, and so in order to control the derivatives in some sense you have to use the convexity assumption; this is an important remark. The compactness, instead, is obtained just by using the Ascoli-Arzelà theorem, because from this estimate you find immediately that the sequence is equicontinuous, and the boundary condition gives you also that it is equibounded, and so you obtain compactness. Now, this introduces one typical feature of the direct method. When you want to apply the direct method of the calculus of variations, unfortunately you cannot use spaces of smooth functions, which were the spaces used when you write the Euler equation, but you have to consider functions like these, where the derivative is not defined everywhere but just almost everywhere. And this is not a big problem, because you can write the Euler equation also for functions that are absolutely continuous, and then you can use some regularizing properties of the solutions of ordinary differential equations in order to prove that when you have a minimizer in this space, it is really a function of class C-infinity which satisfies this equation. So the direct method gives you in some sense a weak solution, a solution in a very large space, but then using this you can prove that it is actually more regular, so that what you solved is the problem that was proposed at the beginning, in the space of smooth functions. So this was done in the twenties of the last century, and then of course there was immediately the problem of how to
extend this to the case of dimension n, which was particularly important because, for instance, when you study the Dirichlet integral you solve the problem of harmonic functions, and when you study the problem of the area of the graph you solve a problem of minimal area. So there was an attempt to extend the notion of absolutely continuous functions to functions defined over domains in R^2, in R^3; but the first attempts were very, very complex, because they tried to give one-dimensional definitions along all the different directions, so it was very complicated to generalize this to higher dimension. But then it turned out that there is a quicker way, which is the one of the Sobolev spaces. So when you want to pass to dimension n larger than or equal to 2, the best is to study the problem in the Sobolev space corresponding to the domain Omega. And the Sobolev space is defined by taking the functions u in the big space L^p such that for every i from 1 to n there exists some function v_i in L^p for which you can write the integration by parts formula: the integral of u times the derivative of phi with respect to x_i, dx, is equal to minus the integral of v_i times phi, dx, for every phi with compact support in Omega. So this is the standard definition of derivative in the sense of distributions, but at that time this was completely new. So the idea was: I consider only those functions u for which there exist functions v_i which satisfy this property, and then this function v_i is by definition the weak derivative of u, and then you can write your functional also if u is just in the Sobolev space. So what happened next? It was then possible to prove some lower semicontinuity theorems in the case of dimension n, when the functional is defined on a Sobolev space like this. For instance, if you have this kind of hypotheses, the same theorem holds also in dimension n: convexity with respect to the variable xi, which is now replaced by the gradient, and, for instance, continuity with respect to the other variables (one can do something better, but this is just to have an idea). So if you have these conditions and this inequality, you have semicontinuity, taking as tau the weak topology corresponding to this Sobolev space, or the strong topology in L^p, which is more or less almost the same, and you also have compactness. So in order to apply the direct method, what you have to do is to formulate the problem in a larger space, and in that space you are able to find the solution. Of course there is then the issue of the regularity of the solution, but this also can be done, because one has a way to write the Euler equation even if the function is only in the Sobolev space, and it is possible to prove that under suitable conditions all solutions of the Euler equation are indeed smooth. In dimension n this is a very long and very deep story, with a lot of steps, but at the end it is possible to prove that if the function f is, for instance, real analytic in all variables, satisfies these assumptions, plus other assumptions on the derivatives that for the moment I don't write, then the minimizer u that you find in this weak formulation is not only in this space but is also real analytic. This is a result that in this form was obtained by De Giorgi more than 50 years ago. So this settles the problem: you have the weak approach, and at the same time you can prove the regularity of the solution. So now I still have 5 minutes, so I see that the new facts will be very compressed. What I told you so far was the beginning, the beginning of the calculus of variations as developed in the 18th century; then this fact of the Sobolev spaces is the 20th century, and the regularity is the second half of the 20th century. Now, one of the classes of problems that were studied more recently are the so-called free discontinuity problems, and here I think that I will just give an example and explain what the new facts are. The problems we have considered so far deal with functions
that are continuous have a continuous derivative if not there is a generalization of it which is the sobolev space which is more or less the same because functions in the sobolev space cannot be very discontinuous in this case you consider a problem where the unknown is discontinuous and the functional depends also on the discontinuity set and the idea is that you have a function defined on some set omega with values in R suppose that omega is in dimension 2 or 3 and the function is discontinuous along a set that I call ju the set of jumps of u and I think that since I'm lacking time the best is to give the prototype example that was introduced by mom for the and the Shah is a function that is used in the problem of image segmentation when you want to detect the relevant contours in a picture so the idea is this if you have a picture so this means that you have rectangle of pixels and then for every point just suppose that the picture is black and white to simplify the exposition you have a gray level so this means function g of x between say 0 and 1 that says whether it is white, black or in between so the point is that so this is the rough output of the camera but you would like to recognize the objects that are in the picture so the idea in real life you will see several objects that are separated by some contours so the idea is that if you want to make a sketch of what is in the picture the most relevant information is contained exactly in the contours so there should be a method to detect the contours starting from g now the naive approach would be take just the points where g is discontinuous so if here I'm seeing an object that covers another one then of course here there will be a discontinuity in black and white the point is that this is not stable because the image there is a lot of noise in general even if you don't see so you would introduce as contours points that are unrelated to it that are just due to noise so you need something that is robust with 
respect to noise. The idea proposed by Mumford and Shah was the following: I want to replace my function g by a function u which is smooth in some set and discontinuous along some one-dimensional set, which will be called J_u, and they proposed to do this by minimizing a functional that depends on three parameters. Now I write the functional and then I make the comments. There is a first term, the gradient term; then a second term, which is the length, say the one-dimensional measure, of J_u; and then a third term, the integral of (u minus g) squared. So this is our functional F(u). It depends on u, but not like the functionals I presented before, because it depends in a crucial way on the discontinuity set, and in this problem that set is exactly what you want to look for.

What is the meaning of these terms? The first term says that out of the discontinuity set, which is the set of contours, the function u should be smooth: in some sense, where g does not oscillate very much, you want a smoother version of g. The length term says that you must not introduce too many contours, because otherwise the length will be too large and you will pay a big cost in this term. What you want to do is separate the different objects that appear in the picture, but without making a very fine subdivision: otherwise, if you were allowed a very fine subdivision, you would just divide each pixel from the others, and then you would have no problem making the first term equal to zero. The last term is called a fidelity term; it forces u to reproduce g. From the point of view of image analysis it is very important to have these three coefficients, and they have to be tuned very carefully in order to obtain a good segmentation of the image you consider.

So this is the problem; now, how was it attacked? The point is that you have to replace the Sobolev space by something else, because you have to consider functions
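The three terms just described can be written out explicitly. This is the standard written form of the Mumford-Shah functional, reconstructed here with the three coefficients called alpha, beta, gamma as in the talk:

```latex
F(u, J_u) \;=\; \alpha \int_{\Omega \setminus J_u} |\nabla u|^2 \, dx
\;+\; \beta \, \mathcal{H}^1(J_u)
\;+\; \gamma \int_{\Omega} (u - g)^2 \, dx ,
```

where Omega is the image rectangle, g is the gray level, and H^1 denotes the one-dimensional Hausdorff measure, that is, the length of the contour set. The minimization is performed jointly over the function u and the set J_u.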
that keep this functional finite but may also jump. This requires a function space that contains functions that are discontinuous along sets of codimension one. The space that was used for this is a variant of the space BV; it is called SBV. BV is the space of functions of bounded variation, and SBV consists of BV functions with particular properties into which I cannot enter in detail now. In some sense, in the direct approach, this replaces the Sobolev space. Then one has to prove a compactness result, which says that if I have a sequence of functions for which the functional is bounded, then I have a subsequence converging to an element of this space. This also was proved, and in this way it was possible to get a weak solution of the problem.

Then there is also a regularity theory, which says that what you get as a weak solution is more regular than a generic function of this space. In particular, the set J_u is a closed set, and it has the property that, even in dimension 2, there may be a point with a triple junction; but except for a singular set of very small size, the rest of this set is smooth, and if g is analytic it is even analytic. So this kind of problem, where the unknown is the set of contours, has a solution, and a smooth one.

I wanted to conclude by saying that this is related to fracture mechanics, because in fracture mechanics the functional one considers is similar to this one. There is no term like the fidelity term: instead, you minimize prescribing some boundary conditions. The gradient term is replaced by the stored elastic energy, which is not given simply by the Dirichlet integral, because it is a little bit more complicated, u being vector valued and so on, but it is still a quadratic term. And there is a term like the length term, which measures the crack: in the fracture mechanics interpretation, the set of discontinuity points is interpreted as a fracture generated in the interior of the
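The particular property that singles SBV out of BV, not spelled out in the talk, is usually stated through the decomposition of the distributional derivative: a function u in BV(Omega) belongs to SBV(Omega) when its derivative has no Cantor part, that is,

```latex
Du \;=\; \nabla u \,\mathcal{L}^n
\;+\; (u^+ - u^-)\,\nu_u\, \mathcal{H}^{n-1}\llcorner J_u ,
```

where the first term is the absolutely continuous part with density the approximate gradient, and the second is the jump part, carried by J_u with one-sided traces u^+ and u^- and normal nu_u. This is exactly what makes the gradient term and the length term of the functional well defined on this space.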
domain, because u represents the displacement, and if the displacement is discontinuous along this line it means that along this line there is a crack. The length term then represents the energy you have to spend to produce the crack, which from the mechanical point of view should be proportional to the measure of the crack. So, using the techniques that were developed for the Mumford-Shah functional, we are now also considering this different application of a functional of the same kind to a different field: fracture mechanics is a part of mechanics, while the functional introduced by Mumford and Shah was intended for image analysis, a completely different application. With this I stop, and thank you.

Here there is a microphone. In cases where the conditions of the direct method are not satisfied, is there hope of solving the Euler-Lagrange equations directly, that is, is it possible to solve the Euler-Lagrange equation?

It really depends on the problem. In the case of dimension n, it is usually better to find a solution through the direct method, and then to use the minimizer in order to prove the existence of a solution of the equation. But in the case of an ordinary differential equation it is possible, under some conditions, to solve the equation even without the help of the direct method. For instance, if you consider one of the most classical problems, the problem of the brachistochrone, the direct method cannot be applied so directly, because there are some problems. I didn't say it before, but the direct method works very well when the power of the gradient in the growth condition is strictly larger than one, while if you write problems related to area, length and so on, the power is equal to one, so there are some troubles. This means that in some cases you first solve the equation and then you prove the minimality, while in higher dimension usually you do the opposite: you prove the minimality and then, using that, you prove the existence of a solution of the equation.

Is it true that in dimension one we always have a solution of the Euler-Lagrange
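The growth condition alluded to in this answer can be made concrete. The direct method runs smoothly under a coercivity assumption on the integrand of the form

```latex
f(x, u, \xi) \;\ge\; c\,|\xi|^p \qquad \text{with } p > 1,
```

because then minimizing sequences are bounded in the reflexive space W^{1,p} and one has weak compactness. Integrands connected with length and area, such as the element sqrt(1 + |xi|^2), grow only linearly, so p equals one; W^{1,1} is not reflexive, compactness fails, and this is the trouble mentioned above.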
equation?

No, this is not true; there are many difficulties. First of all, you can hope to have a solution if you can write the equation in normal form, and this is not so difficult, because in many problems the relevant coefficient is different from zero. But then the big problem is that you do not have to solve a Cauchy problem but a two-point boundary value problem. In fact, even in the case of the minimal surface of revolution, which fits in this framework and is quite elementary, you have explicit solutions, and you see that there are cases where you fix the two boundary values and there is no solution of the Euler equation that satisfies the two conditions. The problem is that for ordinary differential equations there is a very clear, easy theory for the Cauchy problem, but things become more difficult for the two-point boundary value problem, which is the one we need here. Thank you.

Any other comment or question?

In Riemannian geometry, I believe these methods are used to show the existence of geodesics between any two points. Are there other applications of these methods in geometry in general, other than, say, the existence of geodesics?
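The explicit solutions mentioned in this answer are the catenaries; the classical computation, filled in here for concreteness, is the following. For the minimal surface of revolution one minimizes

```latex
F(u) \;=\; \int_a^b 2\pi\, u(x)\sqrt{1 + u'(x)^2}\, dx ,
\qquad u(a) = \alpha,\; u(b) = \beta,
```

whose Euler-Lagrange extremals are the catenaries u(x) = c cosh((x - x_0)/c). For some boundary data (alpha, beta) no choice of the constants c and x_0 satisfies both conditions, which is exactly the failure of the two-point boundary value problem described above: the Cauchy problem is always solvable, but prescribing the values at both endpoints may be impossible.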
Yes, there are. For instance, we study problems of surfaces with prescribed curvature; usually you reduce the problem to a problem of minimization, or at least of finding critical points. The geodesic problem is just the most classical one, and a lot of the theory of the calculus of variations in the 18th and 19th centuries was developed around it; but there is, for instance, the problem of minimal surfaces, and even more, because then you have problems like the Willmore problem, where you study a functional that is not just the area of the surface but depends on higher order derivatives. So there are a lot of other geometric applications.

The beauty of the calculus of variations is that there are applications almost everywhere. In some sense it started from physics: historically, the first problem treated by these methods is the problem of the brachistochrone, which should be classified as a problem of the mechanics of a single material point. Well, in ancient Greek times there were the isoperimetric problems, but those were solved by different methods, essentially. So one started from mechanics, and then there are of course a lot of applications to geometry, geodesics and things like that; then there is the whole of Lagrangian dynamics, which also interprets the motion of a system of particles as a critical point. What is remarkable is that there are then applications that can be unexpected, like this variational approach to image analysis. And of course there are a lot of other applications to engineering: for instance, you can consider shape optimization problems, where you have a structure that you have to optimize. This cannot always be written in this language, but it is still a problem of the calculus of variations.

Yes, he is the same person; he wanted to study also this kind of
problems, but I think that after a while he moved on, and he did most of his activity on this. I remember that this was just the first proposal; then they proposed something else, depending also on the curvature and so on, because there are a lot of other related problems. When you want to analyze an image, you don't just have to separate the different objects: you also want to say which object is in front and which object is in the back. So there is another functional that takes into account also this ordering and also tries to reconstruct the hidden regions, which is important from the point of view of perception: when you do some tests, sometimes you are convinced that you see something that is not in the image, just because it is the continuation of what you do see in the image. They also proposed a possible variational approach, saying that the lines you reconstruct should satisfy some conditions connected with the minimization of some power of the curvature.

Actually, since your question had a geometric flavor: another huge point where all this story is critical is, for example, Einstein's general relativity. The Einstein equations can be written as the Euler-Lagrange equations of the Hilbert functional, after a long work, and then you can try to see whether you can construct solutions with these methods. I mean, we are at ICTP, so we should not forget the Poincaré conjecture and its solution.

But there was another question.

In the free discontinuity problem, this set J_u is an uncountable set, because if you are in dimension 2, J_u is the set of discontinuity lines of the function. When you deduce the Euler-Lagrange equation, you use a test function phi; is it any regular function, so that you substitute u by u plus epsilon phi with phi any regular function?

In the classical approach this is true, of course. If you want to extend that approach to this case, you should consider a
phi with its own jump. This can be done, and in a sense here you have two Euler conditions, because you can do the following kinds of variation. Suppose that you have your solution, and suppose that this is J_u. The first thing you can do is consider a classical variation of the function, but only in the rest, out of J_u: you take a phi with compact support which does not meet J_u, and then you perform the usual computation. In the case of the Mumford-Shah functional, with the jump set fixed, I compute the derivative, integrate by parts and so on, and what I get is that minus alpha times the Laplacian of u, plus u minus g, is equal to zero on Omega minus J_u. This is nothing new: it is simply the method of the usual variations applied to the sum of the gradient term and the fidelity term. I told you that the gradient term alone gives the Laplacian, and the fidelity term gives rise to the zero-order part; so you get an elliptic equation out of the jump set.

Then you have to deal with the jump set, and here it is not convenient to consider this kind of variation. You use another form of variation: you perturb the set through a vector field, that is, you consider a one-parameter family of diffeomorphisms and perturb u and also the set. After some computation you get the following condition. I forgot to say that the first kind of variation also gives that the normal derivative of u is equal to zero on J_u; but then you have a condition on the tangential derivatives, namely that the square of the tangential derivative on one side minus the square of the tangential derivative on the other side is equal to the mean curvature, in the two-dimensional case the curvature, of the surface. This is used to regularize the solution, because then we have an elliptic equation on an unknown domain, and on the boundary of this domain, that is, on the discontinuity set, we have this condition, which gives a connection between the derivatives of u and the geometric
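Away from the jump set, the elliptic equation minus alpha Laplacian of u plus (u minus g) equal to zero can be discretized on the pixel grid and solved directly. The following is a minimal illustrative sketch, my own and not from the talk, which ignores the jump set entirely, takes grid spacing one and reflecting boundary conditions, and uses a plain Jacobi iteration; the function name `smooth` is a choice made here:

```python
import numpy as np

def smooth(g, alpha=1.0, iters=500):
    """Jacobi iteration for  -alpha * Laplacian(u) + (u - g) = 0
    on a rectangular pixel grid (spacing 1), with zero normal
    derivative at the boundary imposed via edge padding."""
    u = g.astype(float).copy()
    for _ in range(iters):
        # reflecting boundary: pad by repeating the edge values
        p = np.pad(u, 1, mode="edge")
        # sum of the four nearest neighbors at every pixel
        neighbors = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
        # solve the 5-point stencil equation for the center value
        u = (g + alpha * neighbors) / (1.0 + 4.0 * alpha)
    return u
```

Each update writes u at a pixel as a convex combination of g there and the neighboring values of u, so the iterates satisfy a discrete maximum principle: the smoothed image never leaves the range of the data.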
quantities related to J_u. Then there is a way to do a sort of bootstrap to obtain that J_u is more and more regular, and consequently u is also more and more regular, because the regularity of u depends on the regularity of the boundary; they play together. In this way you can prove that they are C-infinity and even real analytic, but of course only after omitting points like triple points and so on. There is an old conjecture by Mumford and Shah which says that in the two-dimensional case the set J_u should be a network of smooth curves, where smooth means even analytic, that meet only at triple points with 120-degree angles, and meet the boundary at 90-degree angles; they also conjectured that the set of singular points is finite. This is the missing point: it has not yet been completely proved. For the set of singular points we have some estimates on the Hausdorff dimension, and we know that the Hausdorff dimension should be zero, and so on, but this is still an open problem. For the rest, a lot is known: the arcs really are very smooth, and so on.

What are the restrictions on the values of alpha, beta and gamma?

From the mathematical point of view, the three constants must be strictly larger than zero, and you see quite immediately that if one of them is zero the problem collapses and is completely trivial: either you don't have a solution or you have only stupid solutions. From a mathematical point of view, as soon as all three are positive you have a solution. Of course there is no uniqueness of the solution, because there is a very high complexity due to the length term: uniqueness is not true, existence is always true. The constants also have a precise interpretation in terms of what you want to obtain when you perform the segmentation using this method. I don't remember the details exactly, but when you consider segmentation you have to decide: if g is
something like this, then there will be a critical slope beyond which you decide that it is better to introduce a discontinuity instead of a gradually increasing function, and this is related to the interplay between these three constants. Another problem: if you have a very thin object, can you see two contour lines that are close together, or does the object disappear? This also depends on the interplay between the three constants. But from the point of view of mathematical analysis, that is, existence of a solution, regularity and so on, they are irrelevant, also because by rescaling u or rescaling the domain you can make them equal to one. If you look in the papers of analysis or calculus of variations on this subject, the coefficients are always one; but of course in the applications it is crucial to choose the good ones.

Let us thank Professor Dal Maso again for this beautiful afternoon. We have a little refreshment outside, so if someone
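The rescaling remark can be made precise; here is one way to do it, my own reconstruction in two dimensions. Change variables x = lambda y and scale the values, u = c v and g = c g-tilde. In two dimensions the Dirichlet integral is invariant under the spatial dilation, the length scales like lambda, and the fidelity term like lambda squared, so the functional becomes

```latex
F \;=\; \alpha c^2 \int |\nabla v|^2 \, dy
\;+\; \beta \lambda \, \mathcal{H}^1(J_v)
\;+\; \gamma c^2 \lambda^2 \int (v - \tilde g)^2 \, dy .
```

Dividing by alpha c^2, which does not change the minimizers, and then choosing lambda = sqrt(alpha/gamma) and c^2 = beta lambda / alpha makes all three coefficients equal to one, which is why the analysis papers can take alpha = beta = gamma = 1.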