So m(z) is analytic in the complex plane with the contour Σ taken away, analytic in each of these sectors, and m_+(z) = m_-(z) v(z) on Σ. Now m_+ (or m_-) is the limit of m(z') as z' goes to z from the plus (or minus) side. I'll explain: if I'm sitting here, m(z) is defined out here and defined out here; m_+(z) is the limit as I take z' down to z from the plus side, and m_-(z) is the limit of m(z') as z' goes to z from the minus side. So m, an n-by-k matrix, is a solution of the Riemann-Hilbert problem (Σ, v) if it satisfies these conditions. Sometimes people call this a matrix factorization problem: of course, if you take v here, v will equal m_-^{-1} times m_+, so you've factored the matrix in some way. Now if n = k, so the matrix m is square, we say m solves the normalized problem (Σ, v) if m(z) goes to the identity as z goes to infinity; that's the normalization. Now I'm going to write down some of the analytical issues one faces with Riemann-Hilbert problems. There are a ton of them, and there's no time to go through them in any systematic fashion; we'll just deal with them as they come up. So here are some issues, just to get you thinking, which arise. First, how smooth do the contours have to be; what contours can you put in? Secondly, what is the measure theory, or what are the function spaces? Third, points of self-intersection: if you've got a point where contours cross, how do you understand what's happening at that point; what do you mean by m_+ and m_- there? Fourth, the sense of the limits: m_+ and m_- are limits as you come down to the contour from one side; in what sense are those limits taken, and in what sense is m(z) going to the identity? Fifth, does a solution exist, and in the normalized case is the solution unique? And most importantly (this is something which, as analysts, you want to get your hands on): what kind of a problem is it?
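To fix notation, the definition just given can be written compactly; this is simply a restatement of the conditions described above.

```latex
% Normalized Riemann-Hilbert problem (\Sigma, v), with m(z) a k x k matrix:
\begin{aligned}
 & m(z) \ \text{analytic in } \mathbb{C}\setminus\Sigma,\\
 & m_+(z) = m_-(z)\,v(z), \qquad z \in \Sigma,
   \qquad m_\pm(z) = \lim_{z'\to z \ (\pm\ \text{side})} m(z'),\\
 & m(z) \to I \ \text{as } z \to \infty .
\end{aligned}
```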
I'll tell you what the answer is, and we'll soon see it in detail: a Riemann-Hilbert problem really is equivalent to the analysis of singular integral operators on the contour, and these will typically be Fredholm operators. So in the end, if you ask what kind of a problem it is, you're looking at singular Fredholm integral operators. So let me spend the remainder of the time explaining how things actually work. How do we see these Riemann-Hilbert problems coming up in what we've been speaking about? We'll go back to the example on panel number two. Now there's a very nice book by Fokas, Its, Kapaev, and Novokshenov, from which you can learn a great deal about the Riemann-Hilbert problems associated with the Painlevé equations, and in which the references are all written down. So you look at six rays Σ_k = {ρ e^{i(k-1)π/3} : ρ ≥ 0}, k running from 1 up to 6. There is Σ_1, Σ_2, Σ_3, and so on up to Σ_6, and Σ is the union of these; you orient these rays outwards. On each of these rays you put down a constant triangular jump matrix built from three numbers p, q, r: matrices of the form (1, p; 0, 1) and (1, 0; q, 1), alternately upper and lower triangular as you go around the six rays, with the entries p, q, r repeating. So you've got six jump matrices, and they satisfy this relation: p + q + r + pqr = 0. So these are three numbers satisfying that one condition. Now fix x, any complex number, and let v_x(z) = diag(e^{-iθ}, e^{iθ}) v diag(e^{iθ}, e^{-iθ}), where θ = 4z^3/3 + xz. Thus, for example, on Σ_3 you would have v_x = (1, 0; r e^{2iθ}, 1). Now let m_x(z) solve the normalized Riemann-Hilbert problem (Σ, v_x).
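Collecting the data of this Painlevé II problem in one place (the precise assignment of p, q, r to the individual rays follows the standard references; only the general shape is shown here):

```latex
\Sigma_k = \{\rho\, e^{i(k-1)\pi/3} : \rho \ge 0\},\quad k=1,\dots,6,
\qquad \theta(z) = \tfrac{4}{3}z^{3} + xz,
\\[4pt]
v_x(z) = e^{-i\theta\sigma_3}\, v\, e^{i\theta\sigma_3},
\qquad \sigma_3 = \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix},
\qquad p + q + r + pqr = 0,
\\[4pt]
\text{e.g. on } \Sigma_3:\quad
v_x = \begin{pmatrix}1 & 0\\ r\,e^{2i\theta} & 1\end{pmatrix}.
```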
So m_x is going to be analytic in the six sectors, with these jumps across the rays. And m_x(z) will look like the identity when z becomes large; we can write its next term, m_x(z) ~ I + m_1(x)/z as z → ∞. And then the astounding fact is this: if we set u(x) = 2i (m_1(x))_{12}, this solves Painlevé II. So there you begin to get the picture. We've got a collection of lines; we've got jump matrices across them. We find the solution of the normalized Riemann-Hilbert problem for every x, we evaluate the residue term m_1 in the solution, we take its (1,2) entry, we multiply by 2i, and we obtain a solution of Painlevé II. So you've got a representation. The issue is: what can you do with it? The answer is that there is a version of steepest descent, similar to the classical case, which will enable you to evaluate the asymptotics. So let me show you what the asymptotics look like. This Riemann-Hilbert problem really goes back to the Japanese school, Sato, Miwa, and Jimbo, and also, in this country, to Flaschka and Newell. And you find the following. Let's take the specific case r = 0, with q real and -1 < q < 1. As x goes to minus infinity,

u(x) = (2ν)^{1/2} (-x)^{-1/4} cos( (2/3)(-x)^{3/2} - (3/2) ν log(-x) + φ ) + O( log(-x) / (-x)^{5/4} ),

where ν = -(1/(2π)) log(1 - q^2) and φ = -3ν log 2 + arg Γ(iν) + (π/2) sgn q - π/4. I'm writing this out so you have a very clear view that the precision you obtained in the case of the Airy function you now obtain for the solution of this nonlinear equation, Painlevé II. And it's obtained not from an integral representation but from a Riemann-Hilbert problem, by applying a nonlinear version of the classical steepest descent method.
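As a sanity check on the asymptotic formula just described (this check is mine, not part of the lecture): the leading term is self-consistent with Painlevé II in the normalization u'' = xu + 2u^3, because the log term in the phase exactly cancels the resonant part of 2u^3, independently of the constant φ. So in the sketch below the phase constant `phi` is set to an arbitrary value.

```python
import math

# Parameters: q with |q| < 1, nu as in the lecture; phi arbitrary, since
# the leading-order consistency does not depend on it.
q = 0.5
nu = -(1.0 / (2.0 * math.pi)) * math.log(1.0 - q * q)
phi = 0.7  # arbitrary; the true phi involves arg Gamma(i*nu)

def u(x):
    """Leading-order asymptotics of u(x) as x -> -infinity."""
    s = -x
    psi = (2.0 / 3.0) * s ** 1.5 - 1.5 * nu * math.log(s) + phi
    return math.sqrt(2.0 * nu) / s ** 0.25 * math.cos(psi)

def pii_residual(x, h=1e-4):
    """Residual u'' - x*u - 2*u^3 computed by central differences."""
    upp = (u(x - h) - 2.0 * u(x) + u(x + h)) / (h * h)
    return upp - x * u(x) - 2.0 * u(x) ** 3

# The individual terms u'' and x*u are of size (-x)^{3/4}; the residual
# should be smaller by a factor of roughly (-x)^{-3/2}.
x0 = -200.0
scale = (-x0) ** 0.75 * math.sqrt(2.0 * nu)
print(abs(pii_residual(x0)) / scale)
```

Running this shows the residual is tiny compared with the size of the terms in the equation, which is exactly the self-consistency one expects of a leading-order asymptotic solution.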
There's a similar explicit formula when x goes to plus infinity, and you can read one off from the other: if you know the behavior at plus infinity, you can immediately read off the behavior at minus infinity. So this sets the standard for what one wants. Now, I want to say a little more about the modified KdV equation, but maybe I have enough time to actually write down the behavior at plus infinity. This is again for the solution of Painlevé II: as x goes to plus infinity, u(x) is asymptotic to q times the Airy function, u(x) ~ q Ai(x). Very simple. So you see that if we know the behavior at plus infinity, that means we know q. Once we know q, we can compute ν, and hence φ, this gamma-function expression; we've got the sign of q, we've got everything here, which means that we know exactly what the behavior is as x goes to minus infinity. Conversely, if you know the behavior as x goes to minus infinity, you know ν, which means you know q^2; and because you also know φ, you know the sign of q, so you can recover q. Hence you can plug it into the other side and see what the solution looks like at plus infinity. Let me say a little now about the Riemann-Hilbert problem for the modified KdV equation we started off with. Here the contour Σ is just going to be the real line, going from minus infinity to plus infinity. And for x and t fixed, we define v_{x,t}, a 2-by-2 matrix function on the line, given by

v_{x,t}(z) = ( 1 - |r(z)|^2, -r̄(z) e^{-2iτ} ; r(z) e^{2iτ}, 1 ),   where τ = xz + 4tz^3.

Here r is called the reflection coefficient, and you should think of it at this stage as being in one-to-one correspondence with the initial condition for mKdV. So mKdV, I remind you, is y_t - 6y^2 y_x + y_{xxx} = 0, with y(x, t=0) = y_0(x).
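One basic structural fact about this jump matrix, worth checking for yourself: det v_{x,t}(z) = 1 for every real z, since the oscillatory factors e^{-2iτ} and e^{2iτ} cancel. A minimal numerical check; the reflection coefficient `r` below is a made-up illustration, not the one attached to any particular y_0.

```python
import cmath

def v(z, x, t, r):
    """mKdV jump matrix from the lecture, with tau = x*z + 4*t*z^3."""
    tau = x * z + 4.0 * t * z ** 3
    rz = r(z)
    return [[1.0 - abs(rz) ** 2, -rz.conjugate() * cmath.exp(-2j * tau)],
            [rz * cmath.exp(2j * tau), 1.0]]

def det2(m):
    """Determinant of a 2x2 matrix stored as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Illustrative reflection coefficient (hypothetical, just for the check).
r = lambda z: 0.3 * cmath.exp(1j * z) / (1.0 + z * z)

for z in (-2.0, 0.1, 1.7):   # the contour is the real line
    print(abs(det2(v(z, x=2.0, t=5.0, r=r)) - 1.0))  # each ~ 0
```

Unimodularity of the jump matrix is what you expect from a jump built out of a scattering problem, and it survives all the later deformations.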
So in fact there's a scattering problem associated with y_0(x), and that gives you the reflection coefficient r; there's a one-to-one correspondence between r and y_0. If I know y_0, I know this function r, and if I know r, I know y_0. So that's the jump matrix; remember, x and t are frozen. We take the solution of the normalized Riemann-Hilbert problem (Σ, v_{x,t}): m_{x,t}(z) is asymptotic to I + m_1(x,t)/z, with a residue term, as z goes to infinity. Then u(x,t) = 2 (m_1(x,t))_{21}. So it's again this truly remarkable fact. Let me just say again what you're doing. You fix x and t, and you build this matrix out of the reflection coefficient, which is equivalent to the initial data for the equation. You solve this normalized Riemann-Hilbert problem, getting a function m. You look at the residue term m_1 when z gets large, you take its (2,1) entry, you multiply by 2, and that's the solution of the modified KdV equation. If you can analyze this Riemann-Hilbert problem asymptotically when space and time, x and t, become large, you have then got your hands on the long-time behavior of the solution. Part of that behavior we saw in this region here (this is x, this is t; I'm freezing t and looking at the solution in x): here we had Painlevé II, and that region has width of order t^{1/3}. Somewhere out here, the solution looks like a modulated sine wave. And you can describe what the solution looks like everywhere using these techniques. Just a word on the direction this takes: one of the main features of the classical steepest descent method is the deformation of contours. We picked a contour (you remember, going up and down at basically 120 degrees), but you can then deform the contour by Cauchy's theorem in a way which is advantageous for the stationary phase analysis. In exactly the same way, there's a deformation procedure for Riemann-Hilbert problems.
And to show how Painlevé II is going to come into the solution (because we said over here that in this region the solution is given by Painlevé II): how does this Riemann-Hilbert problem have Painlevé II built into it, hidden in it? It comes out in the following way, and we'll see more about this later. In the case we were writing out, q was between -1 and 1 and r was 0. If r is 0, that ray isn't there; you've lost a line of the contour. And in p + q + r + pqr = 0, the terms with r drop out, so q is just -p. So you've got a Riemann-Hilbert problem left on something like that, and this will be the Painlevé II problem. But for mKdV, you've got a Riemann-Hilbert problem just on the line. Now what you do is take your jump matrix and factor it into two pieces; I'll say more about that later. That enables you to deform this problem on the line into a problem on lines looking like that: this piece is going to be associated with this line, and this piece with that line. And then, if you rotate this around a bit, you begin to see this contour coming through, just to give you the idea. So let me end with the following. The way I've motivated Riemann-Hilbert problems is their use in asymptotic analysis: I want to know what the solution of KdV or mKdV or the NLS equation looks like when x and t become large; I want to know what this probability distribution looks like, say the gap probability that there are no eigenvalues in a gap, when the gap gets large. Those are analytical problems. But Riemann-Hilbert problems are also useful in other contexts, so let me finish off with this. We've spoken about the analytical uses, but there are also algebraic uses.
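The factorization alluded to here can be made explicit for the mKdV jump matrix defined earlier: v splits as an upper-triangular piece (carrying e^{-2iτ}, to be deformed into one half-plane) times a lower-triangular piece (carrying e^{2iτ}, deformed into the other). This is only the algebraic identity behind the deformation, not the deformation itself; the sample values below are purely illustrative.

```python
import cmath

def matmul2(a, b):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Hypothetical sample point on the real line and value of r(z) there.
z, x, t, rz = 0.8, 1.0, 3.0, 0.2 + 0.1j
tau = x * z + 4.0 * t * z ** 3

# Upper/lower triangular factors of the jump matrix.
U = [[1.0, -rz.conjugate() * cmath.exp(-2j * tau)], [0.0, 1.0]]
L = [[1.0, 0.0], [rz * cmath.exp(2j * tau), 1.0]]
v = matmul2(U, L)

# U * L reproduces [[1 - |r|^2, -conj(r) e^{-2i tau}], [r e^{2i tau}, 1]].
print(abs(v[0][0] - (1.0 - abs(rz) ** 2)))  # ~ 0
```

In the deformed problem, the factor whose exponential decays in the upper half-plane is moved onto a contour above the real axis, and the other factor onto a contour below; rotating that picture is how the six-ray Painlevé contour begins to appear.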
In other words, Riemann-Hilbert problems are a canonical way of giving rise to other differential equations which are of interest, sometimes called, in the physics community, string equations, and other things besides. Also, I should correct myself: rather than 'analytical' up there, I should really put 'asymptotic', because this next use is the purely analytical one. I've motivated all this by speaking about asymptotic issues, but you'll see that Riemann-Hilbert problems are also useful for algebraic purposes, and finally also for purely analytical issues. For example, there's something called the Painlevé property, which distinguishes the Painlevé equations from all other equations. How do you prove that a Painlevé equation is in the Painlevé class? It's a little like Gérard's question to Ivan about when something is in the KPZ class. You can do this by using the Riemann-Hilbert problem, purely analytically. OK, thank you. Thank you very much for a beautiful lecture. Are there any questions? So how do you know that there is a Riemann-Hilbert problem associated with the problem of interest? OK, now that is the question in all of mathematics, really. Let me phrase your question like this: once you know that the problem has a Riemann-Hilbert representation, you know that you can solve it. So the question is, how do you know a priori that the problem is solvable? There are many different partial answers: if the problem has this property or that property, you'll be able to do it. But the real answer is that it's really just hit and miss. That is the real answer. People feel that they can write down a systematic procedure, some sort of Galois-theoretic point of view, which will tell you when you can solve a problem, and in particular when you can write the solution down. I don't believe it really cuts the mustard. In the end, you just have to be lucky, and if you're lucky, you should use it. I have a short question.
So are the contours always fixed in this Riemann-Hilbert problem, or can you also study what happens when you deform these contours? Absolutely. Deformation is a fundamental part of the technique, and in fact it's essential to get bounds which are independent of how you deform the contours, just as in the classical steepest descent method. Are there any other questions? No? Then let's thank the speaker again. OK.