So, finally, in this lecture I can put a little more meat on the bone and show how you actually use Riemann-Hilbert problems for interesting problems, and maybe give you some idea of how the steepest descent method actually works. First of all, Riemann-Hilbert problems arise in many different ways. Some of them are systematic, and others come to you out of the blue. The first one I want to speak about comes out of the blue and concerns orthogonal polynomials. So you've got some measure dμ(x) on R with finite moments. One knows that Gram-Schmidt applied to 1, x, x², ... produces a set of monic polynomials π_n(x) of degree n such that ∫ π_n π_m dμ = 0 if n ≠ m. One can then introduce what are called the normalized polynomials, p_n = γ_n π_n with γ_n > 0, which have the property that ∫ p_n p_m dμ = δ_nm. Now, orthogonal polynomials are an extraordinarily important and useful part of analysis. What we're interested in, for various reasons, is the asymptotics of the orthogonal polynomials as n gets large. There are special cases, special orthogonal polynomials, for example the Hermite polynomials, the Krawtchouk polynomials, the Legendre polynomials, many, many others, and if you open up Szegő's famous book on orthogonal polynomials you'll find them written out there. Everybody in random matrix theory knows about these polynomials, but they come up in so many different problems and in so many different ways. Just as we started off these lectures speaking about special functions: what's special about special functions is that they have integral representations, from which you can then infer their asymptotic behavior by using the classical method of steepest descent.
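To make the Gram-Schmidt construction concrete, here is a small numerical sketch (my own illustration, not part of the lecture). It builds the monic π_n for the Hermite weight e^{-x²}, taken here as an illustrative choice of dμ, using Gauss-Hermite quadrature for the inner products:

```python
import numpy as np

# Gauss-Hermite quadrature: sum(wts_i * f(x_i)) ≈ ∫ f(x) e^{-x^2} dx
nodes, wts = np.polynomial.hermite.hermgauss(60)

def inner(p, q):
    """<p, q> = ∫ p(x) q(x) e^{-x^2} dx, with p, q as highest-first coefficient arrays."""
    return np.sum(wts * np.polyval(p, nodes) * np.polyval(q, nodes))

def monic_ops(n_max):
    """Gram-Schmidt applied to 1, x, x^2, ...: returns the monic pi_0, ..., pi_{n_max}."""
    pis = []
    for n in range(n_max + 1):
        p = np.zeros(n + 1)
        p[0] = 1.0                      # start from the monic monomial x^n
        for pi_k in pis:                # subtract the projections onto earlier pi_k
            c = inner(p, pi_k) / inner(pi_k, pi_k)
            p = np.polysub(p, c * pi_k)
        pis.append(p)
    return pis

pis = monic_ops(4)
print(pis[2])   # ≈ [1, 0, -0.5]: the monic Hermite polynomial x^2 - 1/2
```

The normalizing constants of the lecture are then γ_n = inner(π_n, π_n)^(-1/2).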
For example, the Hermite polynomial H_n(z) has the representation H_n(z) = (n!/2πi) ∮_C w^(-n-1) e^(2zw - w²) dw, where C is any contour around the origin. If you want to know how the polynomial behaves when n becomes large, for any fixed z, you just use the classical steepest descent method: open up any book on orthogonal polynomials and you'll find their asymptotics obtained in this way. In fact, the diligent student should take a volume like Abramowitz and Stegun and regard it as an exercise book in which you derive all the asymptotics you see there, using the one tool, steepest descent. But this is what you have here. Now these polynomials are orthogonal with respect to the weight e^(-x²), that is, ∫ H_n H_m e^(-x²) dx = 0 for n ≠ m, and they arise, as we know, in analyzing the Gaussian Unitary Ensemble; in fact the first computation of the sine kernel, for example for the gap probability, was obtained by using the asymptotics of these polynomials. Then people became interested in universality, and the first work on universality was really for unitary ensembles, where instead of this weight you have a weight like e^(-x⁴), or a weight like e^(-V(x)) where V(x) is some polynomial, x^(2m) and so on. For ensembles of this kind you want to prove universality, and this boils down, as we know, to a question about the asymptotics of the polynomials orthogonal with respect to such weights. But for such weights there is no known integral representation. So the question is: how are you going to obtain the asymptotic information that you need to prove universality in random matrix theory?
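As a quick sanity check on this representation (my own sketch, not from the lecture), one can evaluate the contour integral numerically on the unit circle and compare with the classical formula H_2(x) = 4x² - 2:

```python
import numpy as np
from math import factorial

def hermite_contour(n, z, m=256):
    """H_n(z) = n!/(2*pi*i) * contour integral of e^{2zw - w^2} w^{-n-1} dw,
    evaluated on the unit circle w = e^{i theta} by the trapezoid rule
    (spectrally accurate for periodic integrands)."""
    theta = np.linspace(0.0, 2 * np.pi, m, endpoint=False)
    w = np.exp(1j * theta)
    integrand = np.exp(2 * z * w - w**2) * w**(-n - 1)
    # dw = i w dtheta, so (1/2*pi*i) ∮ ... dw becomes (1/2*pi) ∫ ... w dtheta
    return factorial(n) * np.mean(integrand * w)

# compare with the classical H_2(x) = 4x^2 - 2, so H_2(1) = 2
print(hermite_contour(2, 1.0).real)  # ≈ 2.0
```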
Now comes what I still regard as an absolutely remarkable fact, and that is: although there is no known integral representation, there is a representation in terms of a Riemann-Hilbert problem. So let's look at weights w(x) which are absolutely continuous with respect to Lebesgue measure, and consider the following Riemann-Hilbert problem. The contour Σ is just the real line, oriented from minus infinity to plus infinity, and the jump matrix v is the 2×2 matrix with rows (1, w(x)) and (0, 1). The Riemann-Hilbert problem one wants to solve: you seek Y(z), a 2×2 matrix-valued function depending on a parameter n, analytic in C \ R, with Y_+(z) = Y_-(z) v(z) on the real line. The way it is normalized is not the standard fashion; it's not what we call a normalized Riemann-Hilbert problem in the standard sense, but it has the property that Y(z) diag(z^(-n), z^n) goes to the identity as z goes to infinity. Now you look at that Riemann-Hilbert problem, and here is the fact. The solution Y_n(z) has first column (π_n(z), -2πi γ_(n-1)² π_(n-1)(z)) and second column (C(π_n w)(z), -2πi γ_(n-1)² C(π_(n-1) w)(z)), where C denotes the Cauchy transform. That's the solution of that Riemann-Hilbert problem. The key thing you should be focusing on: π_n is the monic polynomial orthogonal with respect to the weight w, π_(n-1) is the polynomial of order n-1, and γ_(n-1) is the normalizing coefficient for π_(n-1). So you see immediately that the (1,1) entry of Y is the polynomial you are looking for. If you solve the Riemann-Hilbert problem, you have obtained the orthogonal polynomial. Now, just as it stands, you've converted one problem into another; have you made progress? You have made progress, because the Riemann-Hilbert problem is exactly of the kind for which you can use the steepest descent method.
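Written out in standard notation, the problem and the Fokas-Its-Kitaev solution read as follows (a sketch reconstructed from the description above; w is the weight and C denotes the Cauchy transform):

```latex
% Riemann--Hilbert problem: Y analytic in C \setminus R with
\begin{align*}
  Y_+(x) &= Y_-(x)\begin{pmatrix} 1 & w(x) \\ 0 & 1 \end{pmatrix},
  \qquad x \in \mathbb{R},\\
  Y(z)\begin{pmatrix} z^{-n} & 0 \\ 0 & z^{n} \end{pmatrix} &\to I,
  \qquad z \to \infty,
\end{align*}
% with Cauchy transform (Cf)(z) = \frac{1}{2\pi i}\int_{\mathbb{R}} \frac{f(s)}{s-z}\,ds,
% solved by
\begin{equation*}
  Y(z) = \begin{pmatrix}
    \pi_n(z) & C(\pi_n w)(z) \\[2pt]
    -2\pi i\,\gamma_{n-1}^2\,\pi_{n-1}(z) & -2\pi i\,\gamma_{n-1}^2\,C(\pi_{n-1} w)(z)
  \end{pmatrix}.
\end{equation*}
```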
So the steepest descent method applies to this, and you can obtain asymptotics for the orthogonal polynomials which are as precise as you could get from an integral representation, in fact even more precise, it turns out. This can then be used as a basis for proving universality, first of all for unitary ensembles (this was done by a number of us) and then for symplectic and orthogonal ensembles; Shcherbina also did work on this. Now, why does this representation exist? It is just something which comes to you out of the blue. That's one of the features. You know, when we learn mathematics we learn simple problems and we get more and more sophisticated; we build up bigger and bigger machinery, which we then hope we can direct against the problem. But one must never lose sight of two things. One is luck: in the end, you're going to have to be lucky. And the other is that there's absolutely nothing wrong with just being clever. You're allowed to be clever, and this is what these people did: this Riemann-Hilbert problem was found by Fokas, Its, and Kitaev. Earlier on I made a remark of the following kind. I said there are many uses of Riemann-Hilbert problems. I've been focusing very much on the asymptotic applications, but there are also analytical applications, and there are also algebraic applications. One of the algebraic applications is how to obtain differential equations, or difference equations, using the Riemann-Hilbert problem. So let me show you here. In this particular case, the key fact is that v is independent of the parameter n. Whenever you have a Riemann-Hilbert problem whose jump matrix is independent of a parameter, that will give rise to either a difference or a differential equation in that parameter. This should remind you of Noether's theorem in Lagrangian mechanics.
Whenever you have a Lagrangian which is independent of a parameter, you obtain an integral of motion. It's a similar kind of situation here. So how does this work? Let's look at R(z) = Y_(n+1)(z) Y_n(z)^(-1), where Y_(n+1) is the solution of the Riemann-Hilbert problem normalized as above with n replaced by n+1, and Y_n is the solution with n itself. Now observe that R_+(z) = Y_(n+1),+ (z) Y_(n),+(z)^(-1) = Y_(n+1),-(z) v(z) v(z)^(-1) Y_(n),-(z)^(-1): the v's cancel out, so you are just left with R_-(z). In other words, R has no jump across the axis, and taking into account that there are no singularities of the kind I mentioned last time, that means R is an entire function. But now observe that I can insert factors for free: R(z) = [Y_(n+1)(z) diag(z^(-n-1), z^(n+1))] · diag(z^(n+1), z^(-n-1)) diag(z^(-n), z^n) · [Y_n(z) diag(z^(-n), z^n)]^(-1). The first bracket goes to the identity, the last bracket goes to the identity, and the middle factor is diag(z, z^(-1)). So R(z) is an entire function which grows linearly as z gets large. Any entire function which grows like a polynomial is equal to a polynomial, so R(z) must have the form Az + B. But if you just interpret what that means, you see that Y_(n+1)(z) = (Az + B) Y_n(z). You can get A and B explicitly. And the way to understand it: that's a difference equation, and the difference equation is very familiar to you. If you work it out, it's nothing more than the familiar three-term recurrence relation for orthogonal polynomials. Here you see a Riemann-Hilbert problem being used to derive an equation. Now, let me give a different example.
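For the Hermite weight e^(-x²), the recurrence produced this way is π_(n+1)(x) = x π_n(x) - (n/2) π_(n-1)(x); that is the classical fact, not something derived in the lecture, and one can check it numerically (my own sketch):

```python
import numpy as np

# Three-term recurrence pi_{n+1}(x) = (x - a_n) pi_n(x) - b_n pi_{n-1}(x)
# for the Hermite weight e^{-x^2}: classically a_n = 0 and b_n = n/2.
nodes, wts = np.polynomial.hermite.hermgauss(60)

def pi(n, x):
    """Evaluate the monic orthogonal polynomial pi_n via the recurrence."""
    p_prev, p = np.zeros_like(x), np.ones_like(x)   # pi_{-1} = 0, pi_0 = 1
    for k in range(n):
        p_prev, p = p, x * p - (k / 2.0) * p_prev
    return p

# orthogonality: ∫ pi_3(x) pi_1(x) e^{-x^2} dx should vanish
print(np.sum(wts * pi(3, nodes) * pi(1, nodes)))  # ≈ 0
```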
Consider what is called the AKNS operator. It arises in the theory of integrable systems; in particular it is very famous for the nonlinear Schrödinger equation, and for many other systems like it. It looks like this: (∂_x - izσ - Q(x)) φ = 0, acting on functions φ, where Q(x) is the off-diagonal matrix with entries q(x) and q̄(x). So it's an operator on the line; q(x) is some function on the line decaying at infinity, z is a complex variable, and σ = (1/2) diag(1, -1). This operator is important for the following reason: if you define L = (iσ)^(-1)(∂_x - Q), this is a self-adjoint operator, and it is the isospectral operator for the nonlinear Schrödinger equation. In other words, if q, which here we have just as a function of x, is thought of as a function of x and t whose evolution in t is according to the NLS equation, then the spectrum of that operator stays fixed. Those spectral data give the integrals of motion for NLS and make it an integrable system. Now, it turns out that for such a differential operator, and more generally for ordinary differential operators, you can always canonically associate a Riemann-Hilbert problem. How do you get this Riemann-Hilbert problem? You introduce solutions of the equation, the so-called Beals-Coifman solutions. There exist solutions φ(x, z) of this equation, for z in C \ R, with the property that φ(x, z) e^(-ixzσ) goes to the identity as x goes to plus infinity (this is for fixed z), and φ(x, z) e^(-ixzσ) is bounded as x goes to minus infinity. Such a solution exists for every z which is not real, and it is unique. From this normalization we have the expansion φ e^(-ixzσ) = I + m₁(x)/z + O(1/z²) as z gets large.
That expansion is part of the normalization; I'm just picking up the next term. Now, φ(x, z) for fixed z is a solution of this differential equation, so its boundary value as z comes down to the real axis from above is still a solution. The same is true if I take the boundary value from below; I get a different solution. Any two solutions of an equation of this kind must be related by some matrix which is purely a function of z, independent of x. And here is the point: instead of freezing z and letting x run, as we were doing, now freeze x and let z run. We find the following: φ(x, z) is analytic in C \ R and continuous up to the boundary. Therefore φ e^(-ixzσ) solves a normalized Riemann-Hilbert problem (Σ, v). So immediately you get a Riemann-Hilbert problem, and this is true for all ordinary differential operators of this kind. You can work out what v looks like: it will depend on x as a parameter, and it is built out of the entries 1 - |r(z)|², r(z), and -r̄(z). And you have that the sup norm of r is strictly less than 1. This r(z) is called the reflection coefficient, and there is a bijection from the initial data, from functions q, to the reflection coefficient r associated with q. So there's a one-to-one mapping between q's and r's, in an appropriate function space, of course. You know q, you know r; you know r, you know q. r is an encryption of the initial data. In the linear case, r would just be the Fourier transform of q. Now, as I said, in an abstract sense you can show that L is isospectral; in other words, you can show that the spectrum of L remains fixed. But to turn that information into something which is analytically useful, you have to have a method. And the method which works is this: the Riemann-Hilbert problem turns this integrability into a viable tool. It works in the following way.
Let q be the solution of NLS, i q_t + q_xx - 2|q|²q = 0, with initial data q(x, t = 0) = q₀(x), some given function. So q_t, the solution of this equation at time t, is for each time t a function of x. Now you can map q_t to the reflection coefficient of q_t, which we call r_t. And this turns out to have the following behavior: r_t(z) = e^(-itz²) r₀(z), where r₀ is just the reflection coefficient of the initial data q₀. So the Riemann-Hilbert problem is really providing the variables which linearize the NLS equation: if you take a logarithm of this variable, you see it just moves linearly in time. That gives you a solution procedure for NLS. You start with your initial data q₀. You apply the scattering map R to obtain the reflection coefficient r₀. You multiply by e^(-itz²), and then you take R inverse. So there is this very elegant solution procedure for NLS, and it involves two operations. How do you compute the reflection coefficient? That's the direct scattering problem, which you solve by constructing these solutions and then seeing what the jump matrix is. So you've got a jump matrix; you update it by this phase factor. Then you've got to pull it back by solving the Riemann-Hilbert problem with r replaced by r_t. So everything depends, first of all, on efficient computation of r at time t = 0; that's the direct scattering problem. But the inverse scattering problem is the question of solving this Riemann-Hilbert problem as t evolves, and for that you need some method; of course, the nonlinear steepest descent method applies in this particular case. I won't be getting into that fully, but what I want to do is show you again this use of Riemann-Hilbert problems to get differential equations.
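The linear analogue the lecture alludes to (r is just the Fourier transform, evolving by a pure phase) can be sketched in a few lines. Here the free equation i q_t + q_xx = 0 stands in for NLS, and the FFT stands in for the scattering maps R and R^(-1); this is an illustration of mine, with an arbitrary Gaussian as initial data:

```python
import numpy as np

# Linear analogue of the inverse-scattering solution procedure:
# for i q_t + q_xx = 0 the Fourier transform plays the role of the
# reflection coefficient and evolves by a pure phase, q_hat(k, t) = e^{-i k^2 t} q_hat(k, 0).
N, L = 256, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
t = 0.7

q0 = np.exp(-x**2)                        # initial data (illustrative choice)
q_hat0 = np.fft.fft(q0)                   # "direct scattering": q0 -> r0
q_hat_t = q_hat0 * np.exp(-1j * k**2 * t) # linear evolution of the scattering data
q_t = np.fft.ifft(q_hat_t)                # "inverse scattering": r_t -> q_t

# the L^2 norm is an integral of motion, since the phase has modulus 1
print(np.allclose(np.sum(np.abs(q_t)**2), np.sum(np.abs(q0)**2)))  # True
```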
So, here is an interesting sort of structural question. I present you with a Riemann-Hilbert problem looking like this, and I let it evolve in t: there will be a factor e^(-itz²) in the appropriate entries. And I know that if I can solve this Riemann-Hilbert problem, I will be able to recover the solution: in fact, q(x, t) is just read off from m₁(x, t); I think it's the (2,1) entry times 2i, something like that. Here is my m. This object solves the Riemann-Hilbert problem; I look at its asymptotics as z goes to infinity and pick up the 1/z term, which now carries not only the parameter x but also the parameter t. I take the (2,1) entry, and that reconstructs q for me. Now, this is the thing: where in this complex-variable problem is the fact hidden that I really have a PDE there? How did that piece of information, the differential equation, get encoded into the Riemann-Hilbert problem? It comes about in exactly the same way we were speaking about. Look at φ e^(-i(xz + tz²)σ): the effect of the extra time factor is to remove the t-dependence from the jump. In other words, if I call this object H, then x and t are parameters, but as a function of z, H is analytic in C \ R, with H_+(z) = H_-(z) times the matrix with rows (1 - |r|², r) and (-r̄, 1), and H goes to the identity. So H solves a normalized Riemann-Hilbert problem looking like that: all the dependence on the external parameters has been removed from the jump matrix. And the mantra which we spoke about is that whenever your jump matrix is independent of a parameter, that will give rise to a differential equation. How? You'll find that if you differentiate with respect to x, then because the jump doesn't depend on x, the derivative also satisfies the same jump relation.
So, if you look at (∂H/∂x) H^(-1), that will again be an entire function, and you can show from the asymptotics that it is a polynomial in z, say of the form Az² + Bz + C. You can compute all of these coefficients, which tells you that you've got a differential equation, ∂H/∂x = (Az² + Bz + C) H. But there's nothing special here about x. You can do the same thing for t, and then you'll get another relation, which will look like ∂H/∂t = D H, for some other polynomial coefficient D. So we've got ∂H/∂x equal to one thing and ∂H/∂t equal to another: two differential equations. Now, these relations must be consistent, so ∂/∂t of ∂H/∂x must equal ∂/∂x of ∂H/∂t. You substitute one into the other, and you need a consistency condition. And that consistency condition is exactly the NLS equation for q.
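The consistency condition the lecture ends on is the standard zero-curvature computation; written out in generic form (with A and B standing for the two polynomial coefficient matrices above, whatever their explicit entries):

```latex
% Cross-differentiating the compatible pair
%   \partial_x H = A(z)\,H, \qquad \partial_t H = B(z)\,H,
% equality of mixed partials H_{xt} = H_{tx} gives
%   A_t H + A B H = B_x H + B A H,
% i.e. the zero-curvature (compatibility) condition
\begin{equation*}
  \partial_t A - \partial_x B + [A, B] = 0,
\end{equation*}
% which, for the AKNS pair, is the nonlinear Schr\"odinger equation for q(x,t).
```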