on the inverse of the operator. And conversely, if I had an a priori bound on this, you can reverse-engineer it to conclude that you have an a priori bound in L^p on these objects over here. So let me not belabor this point. I want now to give you a little bit of a sense of how you would see, or understand, when the solution of a normalized Riemann-Hilbert problem exists and is unique. Suppose F(z) = C(h_F)(z) with h_F in L^p, and G(z) = C(h_G)(z) with h_G in L^q, where 1/r = 1/p + 1/q ≤ 1. Then (FG)(z) is the Cauchy transform of some function h, and it's a little exercise to show that h, on the contour, is h(s) = −(1/2i)[h_G(s)(Hh_F)(s) + h_F(s)(Hh_G)(s)], where H is the Hilbert transform. It's a simple little computation. In other words, FG belongs to ∂C(L^r): if h_F is in L^p, then the Hilbert transform of h_F is in L^p; h_G is in L^q; so each product is in L^r, and we have what's written over here. And, connected with this fact, if we take (FG)+ at the point z minus (FG)−, that difference is just the function h, because C+ − C− = 1. Now, remember, C+ and C− are bounded operators only if p > 1. But nevertheless, C+ and C− are defined even if the function h is just in L^1 — we just don't know the operator is bounded. You still have that jump relation. So now we have a theorem. Suppose M± solves the normalized Riemann-Hilbert problem (Σ, v). Suppose M±⁻¹ exists almost everywhere, and M±⁻¹ = I + C±(h) for some h in L^q, where 1/r = 1/p + 1/q ≤ 1. Then M± is unique. This is the uniqueness part. Let me show you how you'd prove this, because it's instructive. Suppose M̂± = I + C±(ĥ) is a second solution.
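The little exercise can be written out in a few lines. Using the Plemelj-type relations C+ − C− = 1 and C+ + C− = iH — the signs here depend on one's normalization of C and H, so treat them as an assumption:

```latex
\begin{aligned}
F_\pm = C_\pm h_F &= \tfrac12\,(iHh_F \pm h_F), \qquad
G_\pm = C_\pm h_G = \tfrac12\,(iHh_G \pm h_G),\\[2pt]
h = (FG)_+ - (FG)_- &= F_+G_+ - F_-G_-
   = \tfrac12\,\bigl[(iHh_F)\,h_G + h_F\,(iHh_G)\bigr]\\
 &= -\tfrac{1}{2i}\,\bigl[h_G\,Hh_F + h_F\,Hh_G\bigr].
\end{aligned}
```

Hölder's inequality then puts each product in L^r, which is the statement that FG lies in ∂C(L^r).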
You don't need to assume the inverse condition for M̂. Then we have that M̂± = I + C±(k) for some function k in L^p. Now consider the following: M̂± M±⁻¹ − I, and write it like this: (M̂± − I)(M±⁻¹ − I) + (M̂± − I) + (M±⁻¹ − I) — just writing it out. Now, by what we showed about F and G, the first product is C± of some function in L^r; the second term is C± of an L^p function; the third is C± of an L^q function, by assumption. So altogether M̂± M±⁻¹ − I = C±(h), where h belongs to L^r + L^p + L^q. So it follows that M̂+ M+⁻¹ − M̂− M−⁻¹ = h, because C+ − C− is the identity. But M̂+ = M̂− v, and M+⁻¹ = (M− v)⁻¹ = v⁻¹ M−⁻¹, so M̂+ M+⁻¹ = M̂− v v⁻¹ M−⁻¹ = M̂− M−⁻¹: the v's cancel out. So this is equal to that, and therefore h is zero, which means that M̂± M±⁻¹ − I is zero, which means that M̂± = M±. So now, what is really the issue here analytically? The issue is really this. Suppose I've got a Riemann-Hilbert problem, and I find that M+ = M− on the contour — something from above equal to something from below. Then I would know that M has an analytic continuation from above and an analytic continuation from below, and that they match across the contour. So, by Morera's theorem, you'd be able to conclude that the function M is entire, and you can work from that. But can you? Let me give you a trivial example — remember, the boundary values match only almost everywhere.
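The algebra in this uniqueness proof is just the identity AB − I = (A − I)(B − I) + (A − I) + (B − I); schematically:

```latex
\begin{aligned}
\hat M_\pm M_\pm^{-1} - I
  &= (\hat M_\pm - I)(M_\pm^{-1} - I) + (\hat M_\pm - I) + (M_\pm^{-1} - I)
   = C_\pm h, \qquad h \in L^r + L^p + L^q,\\[2pt]
\hat M_+ M_+^{-1} &= \hat M_- v\,v^{-1} M_-^{-1} = \hat M_- M_-^{-1}
  \;\Longrightarrow\; h = C_+ h - C_- h = 0
  \;\Longrightarrow\; \hat M_\pm = M_\pm .
\end{aligned}
```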
Let's suppose that M+ = M− for all z belonging to R \ {0}. So there are two functions, one analytic above and one analytic below, and they match across the contour at all but one point. Can I then conclude that M is an entire function? The answer is no: I could just take M to be the function 1/z. So the whole analysis is about controlling the sense in which the function approaches the boundary — that's where the L^p theory really comes in. I just wanted to bring this example to your mind, to show why there's an analytic component behind all this. Now, there are special features of Riemann-Hilbert problems for one-by-one matrices — the scalar case, which I'll say more about later — and also for problems which are two-by-two. Many Riemann-Hilbert problems, though by no means all, involve two-by-two matrices. Here is a corollary. Suppose n = 2 and p = 2, and the determinant of v is identically 1. Then you have uniqueness. Why? For the following reason: you have M+ = M− v, which means det M+ = det M−, so we're in exactly that kind of situation. But because of our assumption that M± − I is the boundary value of the Cauchy transform of an L^2 function, if you write out det M+, it's a sum of products of two such terms, and each product is in some L^r with 1/r = 1/2 + 1/2 = 1. That will immediately tell you that det M is entire, and because M± = I + C±(L^2 function), you immediately conclude that det M(z) is identically equal to 1. So you know that. But now, of course, you're dealing with a two-by-two matrix with entries m11, m12, m21, m22, and because the determinant is 1, its inverse has entries m22, −m12, −m21, m11.
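The point about the two-by-two case can be checked mechanically. Here is a minimal numerical sketch (my own illustration, not from the lecture): for a 2×2 matrix of determinant 1, the inverse is built from exactly the same four entries, swapped and signed.

```python
import cmath

def inv_unimodular(m):
    """Inverse of a 2x2 matrix, assuming det(m) == 1: swap the diagonal,
    negate the off-diagonal -- the same four functions reappear."""
    (a, b), (c, d) = m
    return [[d, -b], [-c, a]]

def matmul2(m, n):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Take any complex 2x2 matrix and rescale so its determinant is exactly 1.
m = [[2 + 1j, 1 - 1j], [0.5j, 3.0]]
det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
s = cmath.sqrt(det)
m = [[x / s for x in row] for row in m]   # now det(m) == 1

p = matmul2(m, inv_unimodular(m))         # should be the identity
assert all(abs(p[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))
```

So if the entries of M − I all lie in ∂C(L^2), the entries of M⁻¹ − I automatically do too, which is the content of the corollary.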
And because the determinant is 1, you see you've got the same functions lining up here. So if M belongs to I + ∂C(L^2), the same is true for its inverse, and hence you can apply that lemma. In other words, the condition on the inverse is automatic. Okay, now let me try to apply this to a real problem. Suppose Σ = R and v_{x,t}(z) is the 2×2 matrix with rows (1 − |r|², −r̄ e^{−2iτ}) and (r e^{2iτ}, 1), where τ = xz + 4tz³, and x and t — space and time — are real. This Riemann-Hilbert problem arose, you will remember, in the very first lecture, in analyzing the modified KdV equation. How do we know that the normalized Riemann-Hilbert problem exists here? There are different ways of doing this, but one way is to use a factorization: remember, v = v−⁻¹ v+, where v+ = I + w+ and v− = I − w−. So we define w− to have the single nonzero entry −r̄ e^{−2iτ} in the (1,2) slot, and w+ the single nonzero entry r e^{2iτ} in the (2,1) slot. We're choosing a particular factorization to achieve a particular goal. And the goal we obtain is this: if you compute C_w applied to a function h — and the norm we use on 2×2 matrix-valued functions is the one where the squared L^2 norm is the sum of the squared L^2 norms of the components — then the entries of C_w(h) turn out to be C−(h12 r e^{2iτ}) and C−(h22 r e^{2iτ}) in one column, and −C+(h11 r̄ e^{−2iτ}) and −C+(h21 r̄ e^{−2iτ}) in the other. So, with this judicious choice, the whole action of the operator separates out, component by component, and the squared L^2 norm of C_w(h) is simply the sum of the squared norms of these four components.
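As a numerical sanity check on this factorization — with the caveat that the exact placement of r e^{2iτ} versus −r̄ e^{−2iτ} in w± is my reconstruction of the blackboard — one can verify v = v−⁻¹ v+ at sample real points:

```python
import cmath

def matmul2(a, b):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def check_factorization(r, x, t, z):
    """Max entrywise error in v = v_-^{-1} v_+ for the mKdV jump matrix
    at a real point z, where tau = x*z + 4*t*z**3 and r = r(z) is treated
    as a given complex number with |r| < 1."""
    e = cmath.exp(2j * (x * z + 4 * t * z ** 3))   # e^{2 i tau}
    rb = r.conjugate()
    v = [[1 - abs(r) ** 2, -rb / e], [r * e, 1]]
    # v_- = I - w_-, with w_- strictly upper triangular;
    # v_+ = I + w_+, with w_+ strictly lower triangular.
    v_minus_inv = [[1, -rb / e], [0, 1]]           # (I - w_-)^{-1}
    v_plus = [[1, 0], [r * e, 1]]                  # I + w_+
    prod = matmul2(v_minus_inv, v_plus)
    return max(abs(prod[i][j] - v[i][j]) for i in range(2) for j in range(2))

err = check_factorization(r=0.3 + 0.4j, x=1.5, t=0.25, z=0.7)
assert err < 1e-12
```

The triangular structure of w± is exactly what makes the action of C_w decouple entry by entry.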
But these are bounded operators, and C+ and C− both have norm 1 on L^2, because they are projections — you can check that in this particular case. So the squared norm of each component is bounded by the squared norm of the corresponding entry of h times the squared sup norm of r. So you find that ‖C_w(h)‖² ≤ ‖r‖∞² ‖h‖². And for this particular problem, you may remember that the sup norm of r is strictly less than 1, and hence the norm of the operator C_w is strictly less than 1, and therefore (1 − C_w)⁻¹ exists. So this is a judicious choice of factorization — if you did a different one, you wouldn't get that. This is one way of proving existence and uniqueness. A different way of proving uniqueness is just to observe that the determinant of v here is 1, so by the previous corollary we get a second proof — not of existence, but of uniqueness. Now, this proof is really a judicious and fortunate combination of circumstances; it is not the general way you would try to prove that the problem has a unique solution. The way you'd approach it more generally — and this makes the connection to what the theory is really about — is that the operator 1 − C_w, which is a bounded operator, is in fact a Fredholm operator. To remind you: Fredholm theory is the natural limit, within functional analysis, of the ideas of finite-dimensional linear algebra. Finite-dimensional linear algebra is distinguished by the fact that for an operator A, the null space is finite-dimensional, and the codimension of the range is also finite-dimensional — obviously. A Fredholm operator picks up on exactly that: an operator is Fredholm if and only if its null space is finite-dimensional and the codimension of its range is finite-dimensional. It's the most natural extension of the ideas and methods of finite-dimensional linear algebra to the infinite-dimensional case. And the operator 1 − C_w is a Fredholm operator.
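The step from ‖C_w‖ < 1 to the existence of (1 − C_w)⁻¹ is the Neumann series 1 + C_w + C_w² + ⋯. Here is a toy version with a small matrix standing in for the operator (illustrative numbers only, not from the lecture):

```python
# If ||A|| < 1 then (I - A)^{-1} = I + A + A^2 + ... converges.
# A 2x2 matrix plays the role of C_w.

def matmul2(a, b):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

A = [[0.2, 0.1], [0.05, 0.3]]            # "small" operator, norm < 1
I = [[1.0, 0.0], [0.0, 1.0]]

# Partial sums of the Neumann series I + A + A^2 + ...
S, term = I, I
for _ in range(200):
    term = matmul2(term, A)
    S = madd(S, term)

# Check that S really inverts I - A.
ImA = [[I[i][j] - A[i][j] for j in range(2)] for i in range(2)]
P = matmul2(ImA, S)
assert max(abs(P[i][j] - I[i][j]) for i in range(2) for j in range(2)) < 1e-10
```

When ‖r‖∞ = 1, as happens generically for KdV, this series argument is exactly what breaks down.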
Now, I just don't have the time to show that, but I'll leave it to you as an exercise, using the following fact: T is Fredholm if and only if there exists an operator S such that TS = identity + compact and ST = identity + compact. S is called a pseudo-inverse. And what I leave to you as an exercise is this. Take a trivial factorization: v+ = v and v− = identity. Then C_w(h) is just C−(h(v − 1)) — this v − 1 is w+. Now introduce a new operator, C̃_w(h) = C−(h(v⁻¹ − 1)). This is very much in the spirit of pseudodifferential operators: if you've got an operator, the inverse of its symbol should tell you something about an inverse of the operator. And indeed, if you let 1 − C_w be your operator T and 1 − C̃_w be your operator S, you can just verify these conditions. It relies on the fact that r is a continuous function which goes to zero. Now, the general approach to proving that a Riemann-Hilbert problem has a solution, and a unique solution, proceeds in the following way — I wish I had time to follow it through in this case, but I don't. It's a three-step process. Step one: show 1 − C_w is Fredholm. That's the first thing; you do a computation like the one above — in other words, there's a natural choice for the pseudo-inverse. Step two: compute the index of 1 − C_w, which is the dimension of its null space minus the codimension of its range — the codimension being the dimension of the target space factored by the range — and show that this index is zero. Step three: prove that the dimension of the null space of the operator is zero. These facts together immediately imply that the codimension is zero as well, so the operator is a surjection — and with trivial null space, a bijection.
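In symbols, with T = 1 − C_w and the candidate pseudo-inverse S = 1 − C̃_w, the exercise and the three-step scheme read as follows (a schematic summary, in my notation):

```latex
% T Fredholm  <=>  there exists S with
TS = I + K_1, \qquad ST = I + K_2, \qquad K_1, K_2 \ \text{compact};
\qquad
\operatorname{ind} T \;=\; \dim\ker T \;-\; \operatorname{codim}\operatorname{ran} T .
% Scheme: (1) exhibit S, so T = 1 - C_w is Fredholm;
%         (2) show ind T = 0 (deform to a trivial operator);
%         (3) show ker T = {0}; then codim ran T = 0, so T is a bijection.
```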
Now, the way it would work in this particular case is this. You show that this operator here is Fredholm. But now you replace r by a multiple of r — t is a bad symbol, since t is already time, so let me call the parameter γ, and replace r by γ·r. The same proof shows that for any γ the operator is Fredholm. And the fact of the matter is that if you have a family of operators parametrized by γ, each of which is Fredholm, and the family depends continuously on γ, then the index is independent of γ. So the index at γ = 1 must equal the index at γ = 0 — and when γ = 0, r just disappears from the story, your operator C_w becomes the zero operator, so 1 − C_w is clearly a bijection, and in particular invertible with index zero. Then you're left with showing that the kernel is trivial — that the dimension of the kernel is zero. I don't have time to do that. But why is this important? For example, this is the Riemann-Hilbert problem you get for the modified KdV equation. Modified KdV is, for want of a better word, the poor relation of the KdV equation — and the KdV equation is the king of all nonlinear equations, right? And for KdV, this condition breaks down. For KdV, the Riemann-Hilbert problem looks exactly like that one, and |r(z)| < 1 as long as z belongs to R \ {0}, but r(0) is generically equal to −1. So the sup norm of r is 1. Now, this is not just a technical matter. If you look at the long-time behavior of solutions of KdV using the steepest-descent method and so on, you find that there is an asymptotic region which arises specifically from this point — a very interesting region called the collisionless shock region. So there's real physics associated with this phenomenon. But what it means is that you can't use this argument to prove that the norm of the operator is strictly less than one — whereas the Fredholm approach still works.
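The finite-dimensional shadow of this deformation argument: for an m×n matrix, rank–nullity forces dim ker − dim coker = n − m whatever the entries are, so the index cannot jump along a continuous family. A small self-contained illustration (mine, not the lecture's):

```python
# For an m x n matrix A, rank-nullity gives
#   index(A) = dim ker A - dim coker A = (n - rank) - (m - rank) = n - m,
# independent of the entries -- so deforming A continuously (here the
# family A_gamma = gamma * A, down to the zero matrix) leaves it fixed.

def rank(mat, tol=1e-10):
    """Rank of a small matrix via Gaussian elimination with pivoting."""
    a = [row[:] for row in mat]
    m, n = len(a), len(a[0])
    r = 0
    for col in range(n):
        pivot = max(range(r, m), key=lambda i: abs(a[i][col]), default=None)
        if pivot is None or abs(a[pivot][col]) < tol:
            continue
        a[r], a[pivot] = a[pivot], a[r]
        for i in range(r + 1, m):
            f = a[i][col] / a[r][col]
            for j in range(col, n):
                a[i][j] -= f * a[r][j]
        r += 1
    return r

def index(mat):
    m, n = len(mat), len(mat[0])
    rk = rank(mat)
    return (n - rk) - (m - rk)            # dim ker - dim coker

A = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]    # a 2 x 3 matrix
for gamma in [1.0, 0.5, 0.1, 0.0]:        # deform A down to the zero matrix
    A_gamma = [[gamma * x for x in row] for row in A]
    assert index(A_gamma) == 1            # always n - m = 3 - 2
```

In infinite dimensions the index is no longer determined by dimensions alone, but the same homotopy invariance is what lets one slide γ from 1 to 0.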
So that's a key example of how things go. If you look at the notes, you'll find this argument laid out in full — I'm sorry that I just can't do it here. Okay, now let me just take two minutes, if I could. I mentioned that Riemann-Hilbert problems of size one and two are special. We saw in what sense the two-by-two case is special: the inverse is essentially built out of the same kind of functions as the matrix itself. The scalar case is special because you can take the logarithm of m: the jump relation becomes log m+ = log m− + log v, and that kind of problem is solved directly by the Cauchy transform, log m(z) = (1/2πi) ∫_R log v(s)/(s − z) ds, because then log m+ − log m− is just log v. So you've got an explicit solution: m(z) = exp( (1/2πi) ∫_R log v(s)/(s − z) ds ). It seems you've solved the problem by a formula. You know the logarithm of a product of two scalars is the sum of the logs — but the logarithm of a product of matrices is what? In fact, the whole theory of Riemann-Hilbert problems can be viewed as trying to understand what the logarithm of a product of two non-commuting matrices is; it's all about getting around that issue. But have you really solved the problem, even in the scalar case? We would have a situation where v(z) goes to 1, so this looks good: log v goes to zero. But does it? It may go to zero on one side, but there could be winding: at minus infinity, the continuous branch of log v could go to 2πi. So you see a very important point here, which should really alert you that there are subtle, fundamental obstacles to solving a Riemann-Hilbert problem: topological obstacles. And when you look at the matrix case, you'll find that these obstacles are present there too. So far we've spoken about when you can solve the problem; we haven't spoken about what happens when you can't, and there are very subtle factors which come in. Okay, thank you.
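The winding obstruction is easy to see numerically. Take, for instance, v(s) = (s − i)/(s + i) — my example, not the lecture's — which tends to 1 as s → ±∞, yet whose continuously tracked argument changes by 2π along R, so no single branch of log v can vanish at both ends:

```python
import cmath, math

def total_arg_change(v, s_min, s_max, n):
    """Accumulate the continuous change in arg v(s) over [s_min, s_max]
    by summing the principal argument of successive ratios (valid as
    long as each step changes the argument by less than pi)."""
    total = 0.0
    prev = v(s_min)
    for k in range(1, n + 1):
        s = s_min + (s_max - s_min) * k / n
        cur = v(s)
        total += cmath.phase(cur / prev)
        prev = cur
    return total

v = lambda s: (s - 1j) / (s + 1j)          # v -> 1 as s -> +/- infinity
winding = total_arg_change(v, -1e4, 1e4, 200_000) / (2 * math.pi)
assert abs(winding - 1.0) < 1e-3           # winding number 1
```

A nonzero winding number is exactly the topological obstacle: the explicit exponential formula needs a decaying branch of log v, and here none exists.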
Okay, so maybe there's just time for one very short question. Very short. Let's thank Percy again. And we resume in 10 minutes.