This lecture is part of an online graduate course on Lie groups and will be about the exponential map. So the exponential map is a map exp from a Lie algebra L to the corresponding Lie group G. And it's very easy to define in the special case that the Lie group G is a matrix group, by which we mean a closed subgroup of the general linear group. In this case we can just define the exponential of a matrix A to be 1 + A + A^2/2! + A^3/3! + ..., where A is an n by n matrix over the reals. We should check convergence of this series, but this is very easy to do: we just choose a norm on R^n (I don't really care which norm) and define the norm of a matrix A by ||A|| = sup ||Av||/||v|| over nonzero v. This easily implies ||AB|| <= ||A|| ||B|| and so on, and you can then check that the series for exp(A) is absolutely convergent in much the same way as for the usual exponential function on the reals. We can also check it has some fairly obvious properties, for example exp(A + B) = exp(A) exp(B) if AB = BA. The proof of this is very similar to the proof of the corresponding relation for real numbers, except you do need to use convergence at some point. In particular exp((λ + μ)A) = exp(λA) exp(μA) for any reals λ and μ, so the function taking λ to exp(λA) is a homomorphism of groups from the reals to whatever group you're working with, say GL_n(R). In other words the exponential map turns elements of the Lie algebra into one-parameter groups in GL_n(R), where a one-parameter group means a homomorphism from the reals to the group. Well, so far we've only defined the exponential map for matrix groups. We can ask: what about exp for non-matrix groups?
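This series definition is easy to try numerically. Here is a minimal sketch in Python with numpy (the function name expm_series is my own; in practice you would use a library routine such as scipy.linalg.expm), which also checks the one-parameter-group property exp((λ + μ)A) = exp(λA) exp(μA) for a single matrix A:

```python
import numpy as np

def expm_series(A, terms=30):
    # Truncated power series I + A + A^2/2! + A^3/3! + ...
    # (a sketch; fine for small matrices, not a production routine)
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k          # term is now A^k / k!
        result = result + term
    return result

# lambda*A and mu*A always commute, so exp((lam+mu)A) = exp(lam A) exp(mu A)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
lam, mu = 2.0, 3.0
left = expm_series((lam + mu) * A)
right = expm_series(lam * A) @ expm_series(mu * A)
```

For diagonal matrices the series just exponentiates the diagonal entries, which gives a quick sanity check on the truncation.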
Well, first of all there are some non-matrix groups, in other words groups that aren't isomorphic to a closed subgroup of a matrix group. For example you can take the metaplectic group over the reals, which is a double cover of the special linear group over the reals; when we do the representation theory of the special linear group later on, it will follow that the metaplectic group can't be a closed subgroup of any finite-dimensional matrix group. However, any finite-dimensional Lie group is at least locally isomorphic to a matrix group, and this means that, at least for elements close to the identity, we can define an exponential map for such a group by just using the exponential map for the matrix group together with this local isomorphism. There's an alternative way of defining the exponential map for arbitrary groups which doesn't use the fact that they're locally isomorphic to matrix groups. What you do is you take a group G, you take the identity element, and you take an element of the Lie algebra, which is the tangent space at the identity. You can then extend this to the whole of G by left translation and get a vector field, and now you can integrate this vector field: from it you get a map from the reals to the group G, using a bit of differential geometry, and this gives a one-parameter group in G which is essentially the same as the exponential map we defined earlier. But we're going to keep things simple and just talk about matrix groups for the rest of this lecture. Well, I said that exp(A + B) equals exp(A) exp(B) if A and B commute, so next we can ask: what if A and B do not commute?
Well, let's take a look. We have exp(A) = 1 + A + A^2/2 + higher terms, and exp(B) = 1 + B + B^2/2 + higher terms we don't care about, and exp(A + B) = 1 + A + B + (A^2 + AB + BA + B^2)/2 + higher-order terms. Now if we multiply together the first two we get exp(A) exp(B) = 1 + A + B + A^2/2 + B^2/2 + AB + higher-order terms. Let's compare these two expressions, and you can see that they differ a bit, because in one of them we've got (AB + BA)/2 and in the other we've just got AB. So what we see is that exp(A) exp(B) is equal to exp of A + B plus a correction factor, and the difference between the two expressions is just (AB - BA)/2, so we get exp(A) exp(B) = exp(A + B + (AB - BA)/2 + higher terms). Well, you can ask what these higher terms are. There's actually a formula for them due to Baker; well, it's actually due to quite a lot of people: Baker, Campbell, Hausdorff and Dynkin all contributed quite a lot to it. I'm going to postpone the discussion of these higher terms to the next lecture. It turns out you can write all these higher-order terms in terms of the Lie bracket [A, B] = AB - BA. So let's see what the exponential map looks like in some simple cases. Let's take A to be a 2 by 2 matrix over the reals. We can ask what exp(A) is, and it's actually a little bit complicated to write down. First of all we notice that A^2 = (trace A)A - (det A)I, because A satisfies its characteristic equation; this is the Cayley-Hamilton theorem. In particular A^n is some linear combination of A and a constant (I guess I should say of A and the identity matrix).
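This second-order formula can be checked numerically: for small A and B, log(exp(A) exp(B)) should agree with A + B + (AB - BA)/2 up to third-order terms. Here is a sketch (expm_series and logm_series are my own ad hoc truncated power series, only valid for small matrices):

```python
import numpy as np

def expm_series(A, terms=30):
    # Truncated series I + A + A^2/2! + ...
    R, T = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        T = T @ A / k
        R = R + T
    return R

def logm_series(M, terms=30):
    # log(I + X) = X - X^2/2 + X^3/3 - ..., for small X only
    X = M - np.eye(len(M))
    R, P = np.zeros_like(X), np.eye(len(X))
    for k in range(1, terms):
        P = P @ X
        R = R + ((-1) ** (k + 1)) * P / k
    return R

A = 0.01 * np.array([[0.0, 1.0], [0.0, 0.0]])
B = 0.01 * np.array([[0.0, 0.0], [1.0, 0.0]])
C = logm_series(expm_series(A) @ expm_series(B))
bch2 = A + B + (A @ B - B @ A) / 2   # second-order approximation
err = np.linalg.norm(C - bch2)
```

The leftover error is of third order in the size of A and B (it comes from the bracket terms [A,[A,B]] and [B,[B,A]] that the next lecture will describe), so it is much smaller than the commutator correction itself.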
So the exponential of A is equal to something times A plus something times the identity matrix; it's a linear combination of these two. That's for 2 by 2 matrices. If we were doing n by n matrices then it would be a linear combination of the identity, A, A^2, and so on up to A^(n-1). Now let's try to figure out what these constants are. Suppose A is diagonal, with eigenvalues λ and μ. Then exp(A) is the diagonal matrix with entries e^λ and e^μ; this is very easy to work out because powers of diagonal matrices are trivial. So if we want to write this as xI + yA, where A is the diagonal matrix with entries λ and μ, we find e^λ = x + λy and e^μ = x + μy, from which it follows that y = (e^λ - e^μ)/(λ - μ) and x = (μe^λ - λe^μ)/(μ - λ). And now we see that we can figure out what λ and μ are, because λ + μ = trace(A) (which is a + d if we take our matrix A to be [[a, b], [c, d]]) and λμ = det(A) = ad - bc. Now we can use the same formulas for any diagonalizable matrix, because we can just choose a basis in which it is diagonal, and then this expression writing exp(A) in terms of the trace and determinant of A still holds. So the exponential of the matrix is going to be xI + yA, where x and y are given by these expressions, which can in turn be written in terms of the trace and determinant of A. That works for all diagonalizable matrices. Not all matrices are diagonalizable, of course, but the diagonalizable matrices are dense, so this formula actually holds by continuity for all matrices. So this gives a formula for the exponential of a 2 by 2 matrix, at least it would if I substituted everything in, but that's a little bit messy. Next we can ask: is the exponential map onto?
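Here is a sketch of this recipe in Python for the case of distinct real eigenvalues (the name expm_2x2 is my own; repeated or complex eigenvalues would need the continuity limit of the same formulas), with a sanity check against the power series:

```python
import numpy as np

def expm_2x2(A):
    # exp(A) = x*I + y*A for a 2x2 matrix, where lam and mu are the
    # eigenvalues, i.e. the roots of t^2 - trace(A) t + det(A) = 0.
    # Assumes distinct real eigenvalues (positive discriminant).
    tr, det = np.trace(A), np.linalg.det(A)
    disc = np.sqrt(tr * tr - 4 * det)
    lam, mu = (tr + disc) / 2, (tr - disc) / 2
    y = (np.exp(lam) - np.exp(mu)) / (lam - mu)
    x = (mu * np.exp(lam) - lam * np.exp(mu)) / (mu - lam)
    return x * np.eye(2) + y * A

A = np.array([[1.0, 2.0], [0.0, 3.0]])   # eigenvalues 1 and 3
closed = expm_2x2(A)

# sanity check against the power series 1 + A + A^2/2! + ...
series, term = np.eye(2), np.eye(2)
for k in range(1, 30):
    term = term @ A / k
    series = series + term
```

Since this A is upper triangular with eigenvalues 1 and 3, the diagonal of exp(A) should be e and e^3, which the closed form reproduces.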
So we've got an exponential map from the Lie algebra to the Lie group, and we can ask whether it's onto. Well, obviously it's not if the group is disconnected, because the image of the exponential map must be connected. So let's assume that G is connected. First of all we notice that exp is at least a local diffeomorphism near the identity, because it has a local inverse, given by log(1 + B) = B - B^2/2 + B^3/3 - .... Notice this does not converge for all matrices B; it at least converges when ||B|| < 1. You can show that this is the inverse of the exponential map in pretty much the same way as for the real exponential and logarithm. So this makes it very plausible that exp is onto. Here's a plausible argument, which I'd better warn you in advance is actually false: it turns out that exp isn't always onto, but it sort of looks as if it's going to be. So let's first say why it seems to be onto and then explain what's wrong with the argument. Let's look at G and take the identity element e. Since exp is at least a local diffeomorphism, its image will at least contain a neighborhood of e. Now, we pointed out that exp(A) exp(B) = exp(A + B + something), and we mentioned earlier that all the terms of this something can be written in terms of the Lie bracket. This seems to suggest that if we take some region in the image of exp, we can always extend it a bit by choosing a point just inside the boundary of this region and using this formula to extend the image of exp to something somewhat bigger. And it sort of looks as if you can just continue like this until you've covered the entire group. So this makes it look as if exp were onto, but as I said, this is actually wrong. The problem is that the formula does not converge for all A and B.
So this argument breaks down beyond a certain point, and in fact the result is false, so we can cross this all out. The exponential map is not onto, and this argument is just completely fallacious. Let's give an example where exp is not onto. For this we take the Lie algebra L to be sl_2(R), which means the 2 by 2 real matrices of trace zero, and we take the group G to be SL_2(R), which means the matrices of determinant one. (The convention is that you write a Lie group using capital letters and the Lie algebra using small letters, except I like using a capital L instead of a little g for the Lie algebra, because little g is used for other things.) Now let's try to think about what the image of L must be. Well, if we work over the complex numbers we can diagonalize most matrices. So let's take the eigenvalues of our trace-zero matrix A to be λ and μ; we know that λ + μ must be zero. If the matrix is diagonal, the eigenvalues of its exponential will be e^λ and e^μ. Now, since A is real, there are two possibilities for its eigenvalues: either λ and μ are both real, or they are both purely imaginary and complex conjugates of each other. In the first case the eigenvalues of the exponential are e^λ and e^(-λ), which are real, greater than zero, with product one. In the second case they are e^(iθ) and e^(-iθ), two complex numbers of absolute value one which are complex conjugates of each other. So if we draw the complex plane we see there are two possible pictures: either the eigenvalues are e^λ and e^(-λ) on the positive real axis, or they are a conjugate pair on the unit circle. In either case we see that the trace of exp(A) is going to be greater than or equal to -2.
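This trace bound is easy to test numerically: exponentiate lots of random trace-zero real 2 by 2 matrices and look at the traces. Here is a sketch (random sampling is of course not a proof, just a sanity check; expm_series is my own truncated series):

```python
import numpy as np

def expm_series(A, terms=40):
    # Truncated series I + A + A^2/2! + ...
    R, T = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        T = T @ A / k
        R = R + T
    return R

# Random trace-zero real matrices [[a, b], [c, -a]]: the trace of
# the exponential should never drop below -2.
rng = np.random.default_rng(0)
traces = []
for _ in range(2000):
    a, b, c = rng.uniform(-3, 3, size=3)
    traces.append(np.trace(expm_series(np.array([[a, b], [c, -a]]))))
```

The traces can get arbitrarily large (2 cosh λ in the real-eigenvalue case) but never go below -2 (2 cos θ in the imaginary case).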
Because in the first case the trace is e^λ + e^(-λ), which is at least 2, and in the second case the trace is 2 cos θ, which is between -2 and 2. So if we've got any element of SL_2(R) of trace less than -2 (for example the diagonal matrix with entries -1/2 and -2), this is not in the image of the exponential map. So the idea of the exponential map is that exp of a Lie algebra element is in the Lie group, and more generally, if we're given an element A of the Lie algebra, then the map taking λ to exp(λA) is a homomorphism from the reals to the group. We can ask if this also applies to infinite-dimensional groups. Now, usually I'm assuming that groups are finite-dimensional, and the reason for this is that infinite-dimensional groups have several extra complications, and I'm going to explain one of these now. Suppose we take G to be the group of diffeomorphisms of some manifold M. Then the Lie algebra of G should be more or less "the space of vector fields on M", and I'm putting this in inverted commas because actually pinning down the Lie algebra of an infinite-dimensional Lie group is a surprisingly tricky technical problem. Let's forget this for the moment and just see what happens if we work naively. What we should have is that a vector field on M gives you a one-parameter group of diffeomorphisms, and this is quite easy to describe geometrically: we've got a manifold with some vector field on it, and the corresponding diffeomorphism is obtained by taking each point on the manifold and following the vector field for a given amount of time. If we do that for every point, it seems plausible that we get a diffeomorphism. So let's look at some examples and see what actually happens. Let's take the manifold M to be the reals, just for simplicity. Then a vector field is just going to be of the form f(x) d/dx for some function f.
What we want to do is integrate this vector field to get a diffeomorphism of the real numbers. So we pick a point of the reals, and we have a function x(t) which is the image of a point x_0 if we follow the flow for time t. What we see is that dx/dt must be just f(x); that's essentially saying that x is following the flow of this vector field. We can solve this differential equation easily: it says dt = dx/f(x), so t is the integral from x_0 to x of dx/f(x). Let's try this for a few examples. If we put f(x) = 1, it's pretty obvious what's going on: we've got a constant vector field, and not very surprisingly this gives t = x - x_0, in other words x = x_0 + t. So if you start at any given point x_0 and move along this vector field for time t, you get to the point x_0 + t. That's just a translation, not very interesting. If we take f(x) = x it's kind of similar: we find t is the integral from x_0 to x of dx/x, which equals log(x/x_0), so x = e^t x_0. So we're just expanding the real line by a factor depending on t. Now let's try f(x) = x^2, so we've got the vector field x^2 d/dx. Here we find t is the integral from x_0 to x of dx/x^2, which is 1/x_0 - 1/x, and so x = x_0/(1 - x_0 t). And now we've suddenly got a problem, because this becomes infinite at t = 1/x_0. So we don't actually get a diffeomorphism for any nonzero value of t, because this vector field is growing so quickly that if we start at a given point and follow the flow, we hit the point at infinity after a finite amount of time.
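These three flows can be compared with a crude numerical integration. Here is a sketch (flow is a simple Euler integrator I wrote just to check the closed forms, not a robust ODE solver):

```python
import numpy as np

def flow(f, x0, t, steps=100000):
    # Follow dx/dt = f(x) from x0 for time t with Euler steps.
    x, h = x0, t / steps
    for _ in range(steps):
        x = x + h * f(x)
    return x

x0, t = 1.0, 0.5
shift = flow(lambda x: 1.0, x0, t)    # expect x0 + t
scale = flow(lambda x: x, x0, t)      # expect e^t * x0
blow = flow(lambda x: x * x, x0, t)   # expect x0 / (1 - x0*t),
                                      # finite only while t < 1/x0
```

For f(x) = x^2, pushing t toward 1/x_0 = 1 makes the numerical trajectory grow without bound, matching the blow-up in the closed form.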
So the first two examples give you one-parameter groups, but this last one does not. We see that the relation between vector fields and one-parameter groups is kind of tricky: some vector fields give you one-parameter groups, just like the exponential map, but some of them run into convergence problems. Actually, there's an even easier way to see that something goes wrong: instead of taking our manifold M to be the reals, take M to be the open unit interval (0, 1), and take the vector field to be just d/dx. So the picture is an open unit interval with a constant vector field on it. This ought to correspond to a one-parameter group given by translations, x = x_0 + t: after time t we move each point a distance t to the right. But this obviously fails, because if we take a point and move it too far to the right, we just fall off the end of the open unit interval. This suggests the problem is that the manifolds we're using are not compact; if you stick to compact manifolds then you can integrate vector fields more easily, basically because they can't become arbitrarily large. Okay, so next lecture we're going to discuss the Baker-Campbell-Hausdorff-Dynkin formula, which expresses exp(A) exp(B) in the form exp(A + B + higher-order terms).