consistency condition is exactly that Q should solve NLS. So this is the way a Riemann-Hilbert problem encodes differential relations, and that's one of the big uses of it. Now, there is another. So far we've spoken about the occurrence of a Riemann-Hilbert problem which comes out of the blue. This is an occurrence of a Riemann-Hilbert problem which is systematic: you give me a differential equation, an ODE, or an ordinary difference equation, and there will always be some Riemann-Hilbert problem associated with it. If you're looking at the Boussinesq equation, you'll again get a Riemann-Hilbert problem on a union of six lines, like the Painlevé II situation, but now your matrices will be three by three, and so on. Now, here's another absolutely remarkable source of Riemann-Hilbert problems, which is a systematic source. Once you recognize one fact, you know that the whole problem is amenable to Riemann-Hilbert analysis, and that's through a class of operators which are called integrable operators. I should mention, of course, that when it comes to the NLS equation, many people have worked on it. The first understanding of what the asymptotics should be when t becomes large was due to Zakharov and Manakov. They didn't use these techniques, and the work was not rigorous; it took the introduction of Riemann-Hilbert techniques to prove the result rigorously. But let me just leave it at that. So now I want to speak about integrable operators. So again, you have some contour Σ which is oriented, the usual business. And we say K is an integrable operator on Σ if K has a kernel K(x, y) = Σ_{i=1}^n f_i(x) g_i(y) / (x − y), for points x and y on the contour. And to see how the operator acts, you just do the usual kind of integration: (Kh)(x) = ∫_Σ K(x, y) h(y) dy, integrating along the given orientation.
Any operator like this is called an integrable operator, where the f_i and the g_j are bounded. Now these operators have come up in special forms in many, many different situations. They first began to be investigated as a group by Sakhnovich in the 1960s, but the people who systematically investigated their properties were Its, Izergin, Korepin, and Slavnov. For example, if we look at the old sine kernel, that's of course just (e^{ix} e^{−iy} − e^{−ix} e^{iy}) / (x − y). It's exactly of that form. And many, many other operators which you meet have this particular form. Now the most remarkable fact about these operators, which is completely unclear in the beginning, is that attached to such an operator, in a canonical fashion, is a Riemann-Hilbert problem. And what that means is that any problem which involves such operators — say, an asymptotic problem — gets transferred over to an asymptotic problem about a Riemann-Hilbert problem, for which we have technology. So this is what I mean: you see an operator like this coming up, and you know you've got a tool in your hand which will enable you to analyze it. So what is this Riemann-Hilbert problem? It's the following. Such integrable operators form an algebra: if you add them, you again get one of the same form; if you multiply them, you get one of the same form. But also, very importantly, the class is invariant under taking inverses: if K is integrable and 1 − K is invertible, then (1 − K)^{−1} = 1 + R, where R is again integrable. So here's the fact of the matter. On Σ, the jump matrix v(z) is defined as the identity minus 2πi f(z) g(z)^T, where f is the vector (f_1, ..., f_n) and g is the vector (g_1, ..., g_n) — so a rank-one perturbation of the identity. Then (1 − K)^{−1} is equal to the identity plus the operator with kernel Σ_{i=1}^n F_i(x) G_i(y) / (x − y).
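As a numerical aside (not from the lecture): the inverse-stability claim can be checked directly. The sketch below, assuming the sine kernel written in the integrable form above, with an arbitrary scale 0.5 on the interval [−1, 1], discretizes K by Gauss–Legendre quadrature and verifies that (x − y) R(x, y), where R = K(1 − K)^{−1} is the resolvent, has numerical rank 2 — that is, the resolvent kernel is again of integrable form with the same n = 2.

```python
import numpy as np

# Sine kernel sin(x - y)/(pi (x - y)): integrable with n = 2,
# f ~ (e^{ix}, e^{-ix}), g ~ (e^{-iy}, e^{iy}) up to constants.
# Scale by 0.5 so that 1 - K is safely invertible on [-1, 1].
nodes, weights = np.polynomial.legendre.leggauss(60)
x = nodes
K = 0.5 * np.sinc((x[:, None] - x[None, :]) / np.pi) / np.pi   # kernel values

# Nystrom discretization of the operator: A_ij = K(x_i, x_j) w_j
A = K * weights[None, :]
R_op = A @ np.linalg.inv(np.eye(60) - A)        # resolvent operator K(1-K)^{-1}
R = R_op / weights[None, :]                     # back to kernel values R(x_i, x_j)

# (x - y) R(x, y) should be (numerically) a rank-2 matrix
M = (x[:, None] - x[None, :]) * R
sv = np.linalg.svd(M, compute_uv=False)
print(sv[:4])   # two O(1) singular values, then machine-precision noise
```

The quadrature is spectrally accurate here because the kernel is analytic, so the third and later singular values sit at roundoff level.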
So if K has that form, its resolvent has this form. And what F is, which is (F_1, ..., F_n), is just M_± acting on the vector f. And G, which is (G_1, ..., G_n), is equal to (M_±^{−1})^T acting on g, where M solves the normalized Riemann-Hilbert problem (Σ, v). So: I've got my contour, I've got my integrable operator, I construct this matrix v, and I solve this Riemann-Hilbert problem. In other words, M is analytic off the contour, it has jumps given by v, and it's asymptotic to the identity at infinity. I compute the boundary values of the solution on the contour — take either the plus side or the minus side, it's not going to matter — and act on f. That gives me F; acting with (M_±^{−1})^T on g gives me G. And this object is the inverse of the original operator. So, if you think about it from a sort of categorical point of view within mathematics, you're seeing that the problem of inverting an operator becomes a problem in complex analysis. That's a very strange mixture of things. So in my remaining time — I've spoken again and again about the steepest descent method — let me just give you a flavor of it by looking at the strong Szegő limit theorem. How do you use it to prove the strong Szegő limit theorem? Persi Diaconis spoke a lot about it during his talk. So here's the theorem. You've got a function φ = e^{l(z)} on the circle |z| = 1, and you assume Σ_{k=1}^∞ k |l_k|² < ∞, where the l_k are the Fourier coefficients of l. And you look at the Toeplitz matrix T_n(φ) with entries φ_{j−k}, j and k running from 0 up to n; these φ_j are the Fourier coefficients of φ. So you build up a matrix of that form. And we're interested in D_n(φ), which is just the determinant of T_n(φ): what does this go to as n goes to infinity? And the answer: it looks like e^{(n+1) l_0}
times e^{Σ_{k=1}^∞ k l_k l_{−k}}, times (1 + o(1)); here l_0 is the 0th Fourier coefficient of l. Now, what makes the Szegő limit theorem so canonical and so interesting is this. When your initial motivation, in going from finite-dimensional systems to infinite-dimensional systems, from linear algebra to functional analysis, is to see what ideas you can push forward, there are going to be topological ideas: x_n goes to x, therefore f(x_n) goes to f(x), so the function is continuous. Can you do something similar in this particular case? Now, what's happening is we're going to a situation where n is getting larger and larger. So you haven't got a fixed space, and you haven't got a limit theorem of that kind; there's no clear idea of what you mean by continuity. You could think of this as a matrix filling in the first (n+1) × (n+1) block of a semi-infinite matrix, getting bigger and bigger. Does the determinant of those things go to the determinant of this very big semi-infinite matrix? Obviously not, because such a determinant won't exist. It's not clear how to take this limit; you don't have continuity in any obvious functional-analytic sense. That's what makes the problem so interesting. Now, there are many ways of proving it, but we're speaking about integrable operators, and this is the way you'd approach the problem using integrable operator theory. You observe that there is a map taking e_k to z^k, where the e_k are the standard basis vectors, k going from 0 up to n — so e_0 is the vector (1, 0, 0, ...), and so on. So there's a mapping taking you from the finite-dimensional complex space C^{n+1} onto P_n, the polynomials of degree at most n. Now, this mapping will take the operator T_n(φ) and induce an operator on P_n.
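As a numerical aside (not from the lecture): the strong Szegő asymptotics stated above can be checked directly for a toy symbol. The choice l(θ) = 2c cos θ below is mine; then l_0 = 0, l_{±1} = c, and the predicted limit of D_n is e^{(n+1)·0} · e^{1·c·c} = e^{c²}.

```python
import numpy as np

# Symbol phi = e^l with l(theta) = 2c cos(theta), so l_0 = 0, l_{+-1} = c,
# and the strong Szego limit predicts D_n -> e^{c^2}.
c = 0.5
N = 512
theta = 2 * np.pi * np.arange(N) / N
coeff = np.fft.fft(np.exp(2 * c * np.cos(theta))) / N   # coeff[k] ~ phi_k

def D(n):
    """(n+1) x (n+1) Toeplitz determinant with entries phi_{j-k}."""
    j, k = np.meshgrid(np.arange(n + 1), np.arange(n + 1), indexing="ij")
    return np.linalg.det(coeff[(j - k) % N]).real

print(D(5), D(30), np.exp(c**2))   # D(n) approaches e^{c^2}
```

For an analytic symbol like this, the convergence is extremely fast, so even small n is already close to the limit.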
So, if you just do the usual mapping back and forth, the finite-dimensional mapping, this gives a mapping τ_n, which is the conjugate of the operator T_n(φ) under this identification. τ_n takes P_n to P_n — it's just a change of basis, really. It looks like this: τ_n(z^k) = Σ_{j=0}^n φ_{j−k} z^j. No surprise about that. But now, take some polynomial p belonging to P_n, and consider τ_n(p). If you just do the algebra — remember, φ_l is just ∫ e^{−ilθ} φ(e^{iθ}) dθ/2π, the l-th Fourier coefficient of φ — you can substitute this integral in here, and you'll find that τ_n(p)(z) is just the integral over the unit circle of φ(z′) p(z′) times [(z/z′)^{n+1} − 1] / (z − z′) d̄z′, where d̄z′ = dz′/2πi. You can just verify that. And one finds, since τ_n goes from P_n to P_n, that D_n, the determinant of T_n(φ), is just the determinant of this operator τ_n — which is the determinant of 1 − K_n. Well, OK, maybe I'm jumping the gun a little. Let me get back to τ_n(p). It's just (1 − K_n)(p), where K_n is given by a kernel which you see immediately is of integrable type. Not to jump the gun too much: this K_n(z, z′) looks like this. It's equal to Σ_{i=1}^2 f_i(z) g_i(z′) / (z − z′), where f = (z^{n+1}, 1), g_1(z′) = z′^{−n−1} (1 − φ(z′)) / 2πi, and g_2(z′) = −(1 − φ(z′)) / 2πi. So this operator is of the integrable form; that's what it looks like on this particular space. Now, I want to see how the operator 1 − K_n acts on the whole of L² of the circle.
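As a numerical aside (not from the lecture): the identification D_n(φ) = det(1 − K_n), with the kernel as reconstructed above (my normalization; signs may differ from the blackboard), can be sanity-checked by comparing the Toeplitz determinant with a Nyström discretization of the Fredholm determinant on the circle. The symbol e^{c(z + 1/z)} and all sizes below are arbitrary choices.

```python
import numpy as np

c, n = 0.5, 4
phi = lambda z: np.exp(c * (z + 1 / z))        # analytic symbol on |z| = 1

# Toeplitz determinant D_n from Fourier coefficients (via FFT)
N = 256
zN = np.exp(2j * np.pi * np.arange(N) / N)
coeff = np.fft.fft(phi(zN)) / N
j, k = np.meshgrid(np.arange(n + 1), np.arange(n + 1), indexing="ij")
Dn = np.linalg.det(coeff[(j - k) % N]).real

# Fredholm determinant det(1 - K_n), Nystrom with the trapezoid rule:
# K_n(z,w) = (1 - phi(w)) [(z/w)^{n+1} - 1] / (2 pi i (z - w)),
# with diagonal value (limit z -> w):  (1 - phi(w)) (n+1) / (2 pi i w)
M = 200
s = np.exp(2j * np.pi * np.arange(M) / M)
Z, W = s[:, None], s[None, :]
den = 2j * np.pi * (Z - W)
np.fill_diagonal(den, 1.0)                      # avoid 0/0; diagonal fixed below
Kn = (1 - phi(W)) * ((Z / W) ** (n + 1) - 1) / den
np.fill_diagonal(Kn, (1 - phi(s)) * (n + 1) / (2j * np.pi * s))
A = Kn * (2j * np.pi / M) * s[None, :]          # quadrature weights: dw = i w dtheta
det2 = np.linalg.det(np.eye(M) - A)

print(Dn, det2.real)                             # the two should agree
```

The kernel is finite rank and analytic, so the periodic trapezoid rule converges spectrally and the agreement is essentially to machine precision.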
So I must see what it does, not just to polynomials of degree at most n, but to z^k when, say, k is less than 0 or k is bigger than n. Then you can see that (1 − K_n)(z^k) is z^k plus some combination Σ_{j=0}^n φ_{j−k} z^j. That's for k bigger than n or k less than 0. So it means that if you write down a matrix for the operator in the basis {z^k}, there'll be an identity block here and an identity block here, these blocks will be 0, here you'll have the action of τ_n, and there's going to be some stuff over here in the remaining block. That's what 1 − K_n looks like on the whole space. But then the determinant of the whole thing is just the determinant of τ_n. And so the conclusion from this fact is that the original D_n is just the determinant of 1 − K_n, now as an operator acting on the whole of L² of the circle. K_n is a finite-rank operator, so it's certainly trace class. So now, how do we use it? We use the old trick of looking at the log determinant: log det(1 − K_n) = tr log(1 − K_n) = ∫_0^1 d/dt [tr log(1 − tK_n)] dt = −∫_0^1 tr[K_n (1 − tK_n)^{−1}] dt. Now you see we're in business. Because K_n is an integrable operator, we know that the inverse of 1 − tK_n can be expressed in terms of a Riemann-Hilbert problem. And you can show that tK_n is exactly the same as K_n, except that we replace φ by φ_t = (1 − t) + tφ; this is because the kernel involves φ only through 1 − φ, and 1 − φ_t = t(1 − φ). If you just put that in there, you'll see that that's what you have. Right. So tK_n is the same thing as K_n with φ replaced by that particular object.
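As a numerical aside (not from the lecture): the log-determinant trick above is an identity for any trace-class K with 1 − tK invertible for t in [0, 1], and it can be checked on a small matrix stand-in for K_n. The size, scale, and random seed below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 0.1 * rng.standard_normal((6, 6))            # small-norm stand-in for K_n
I = np.eye(6)

lhs = np.log(np.linalg.det(I - K))               # log det(1 - K) = tr log(1 - K)

# rhs = - integral_0^1 tr[ K (1 - tK)^{-1} ] dt, Gauss-Legendre mapped to [0, 1]
nodes, w = np.polynomial.legendre.leggauss(40)
t, w = 0.5 * (nodes + 1), 0.5 * w
rhs = -sum(wi * np.trace(K @ np.linalg.inv(I - ti * K)) for ti, wi in zip(t, w))

print(lhs, rhs)    # the two values should agree to high precision
```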
So it means that the operator we need there, tK_n (1 − tK_n)^{−1}, is just (1 − tK_n)^{−1} − 1. So this object here is exactly what is expressed right over there in terms of the solution: this operator is exactly expressible in terms of the solution of a Riemann-Hilbert problem. So I just want to give you a little hint of how this is going to work — and this will take two minutes. The Riemann-Hilbert problem involves a jump matrix v_t which has a particular form, and the key fact is that it can be factorized in the following way: a lower triangular factor with 1's on the diagonal and −z^{−n−1}(1 − φ_t)/φ_t below, times the diagonal matrix diag(φ_t, φ_t^{−1}), times an upper triangular factor with 1's on the diagonal and z^{n+1}(1 − φ_t)/φ_t above. Here φ_t is as I gave it there, and we're looking at the Riemann-Hilbert problem on the circle. Now you see what the motivation is. If we can somehow move the factor with z^{n+1} inside the circle, that part will be exponentially decreasing; on the other hand, if we move the factor with z^{−n−1} outside, that part will be exponentially decreasing too. Then we'll just be left with a Riemann-Hilbert problem on the circle at large with the diagonal jump — and this is just a direct sum of scalar Riemann-Hilbert problems, and we saw how to solve scalar Riemann-Hilbert problems by explicit formula. Now this is done the following way. Here's your original unit circle; you introduce a smaller circle and a bigger circle. Outside the big circle, and inside the small circle, you define M̃ to be M. In the annulus between the small circle and the unit circle, you define M̃ to be M times the inverse of the right matrix — the upper triangular one. In the annulus between the unit circle and the big circle, you define M̃ to be M times the left matrix. And this M̃ will solve a Riemann-Hilbert problem on the union of these three circles. But the jump matrix on the unit circle is just diag(φ_t, φ_t^{−1}), if you think about it. On the small circle, the jump will have the term which involves z^{n+1}.
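As a numerical aside (not from the lecture): the jump matrix v_t = I − 2πi f g^T built from the f and g given earlier, together with its triangular factorization (reconstructed here; conventions may differ from the blackboard by signs and orientation), can be verified pointwise in a few lines.

```python
import numpy as np

z = np.exp(0.7j)          # any point on the unit circle
p = 0.8 + 0.3j            # a sample (nonzero) value of phi_t(z)
n = 5

# v_t = I - 2 pi i f g^T with f = (z^{n+1}, 1) and g as in the kernel above
v = np.array([[p,                       z**(n + 1) * (1 - p)],
              [-z**(-n - 1) * (1 - p),  2 - p               ]])

lower = np.array([[1, 0], [-z**(-n - 1) * (1 - p) / p, 1]])
diag  = np.array([[p, 0], [0, 1 / p]])
upper = np.array([[1, z**(n + 1) * (1 - p) / p], [0, 1]])

print(np.allclose(lower @ diag @ upper, v))   # True
```

Note that det v = 1, consistent with the middle factor diag(φ_t, φ_t^{−1}) carrying the whole determinant.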
The jump matrix on the big circle will involve the term z^{−n−1}. So now you see that as n gets large, this jump matrix is just going to the identity, and this one is just going to the identity. What you're left with is just a Riemann-Hilbert problem across the unit circle, which is a direct sum of two scalar Riemann-Hilbert problems, and completely solvable. Filling in the details there is what gives you the precise asymptotics for the problem. The message, just as in the scalar case: you must deform your problem to a place where things are obviously exponentially decreasing, and this you've got to do in this particular way. So let me just make a summary statement. I've shown you some things about Riemann-Hilbert problems. What I have not shown you: in addition to the algebraic properties and the asymptotic properties, what about the analytic properties? Well, for example, we know something very special about the Painlevé equations — they have certain properties. But if I write down a Painlevé equation, how do I know that its solutions actually have the Painlevé property, other than by name? That you can prove using Riemann-Hilbert analysis. Another systematic source of Riemann-Hilbert problems I haven't spoken about is the so-called Wiener-Hopf problems; it's an old form of Riemann-Hilbert theory. And I've only given you just an inkling of how the steepest descent method works, in one particular case; it's a much more extensive theory. And I'll just leave it at that. OK, thank you for this. I think perhaps it's time for a quick question, if anybody has one, before we go for tea. Any quick question or comment? It's an exercise — it's not a difficult exercise. You do it row by row. You see, in the jump matrix there's a 1 and a 1 and a w there, right? So that means that if you look at the jump relation for the first component, the 1-1 component, it has no jump. So it must be entire, but it has polynomial growth, so it's a polynomial.
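As a numerical aside (not from the lecture): here is the scalar explicit formula in action, for a toy symbol of my choosing with winding number zero. The solution m(z) = exp((1/2πi) ∮ log φ(s)/(s − z) ds) solves m_+ = m_− φ on the circle with m → 1 at infinity, and for log φ = 0.15(z + 1/z) the inside and outside values are known in closed form, so the Cauchy integral can be checked exactly.

```python
import numpy as np

# Scalar RHP on the unit circle: m_+ = m_- phi, m -> 1 at infinity.
# Explicit solution: m(z) = exp( (1/2 pi i) ∮ log phi(s)/(s - z) ds ).
# Toy symbol: log phi = 0.15 (s + 1/s) = 0.3 cos(theta) on |s| = 1, so
# m(z) = e^{0.15 z} inside and m(z) = e^{-0.15/z} outside (exactly).
N = 400
s = np.exp(2j * np.pi * np.arange(N) / N)
logphi = 0.15 * (s + 1 / s)

def m(z):
    # trapezoid rule: (1/2 pi i) ∮ logphi(s)/(s - z) ds = mean of logphi*s/(s-z)
    return np.exp(np.mean(logphi * s / (s - z)))

print(m(0.5), np.exp(0.15 * 0.5))      # inside:  both ~ e^{0.075}
print(m(2.0), np.exp(-0.15 / 2.0))     # outside: both ~ e^{-0.075}
```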
When you look at the second component, you see that this polynomial must be orthogonal with respect to the weight, and hence it's the orthogonal polynomial. That's an exercise; once you do it, it's fun, and I recommend you do it. OK, well, I do want people to have time for tea. But before we go, let's thank Percy again for this lecture and for the whole series. Thank you. Thank you.