This lecture is part of an online course on Lie groups, and will be about Lie algebras. The problem with a Lie group is that it's rather complicated. For example, if you look at the group GL_n over the reals, the n by n matrices with non-zero determinant, the determinant is really quite a complicated function, and it's quite difficult to visualize what this set of matrices looks like; it's a fairly complicated topological space with non-trivial homology groups. The idea is to replace the Lie group by something much simpler, namely a Lie algebra. A Lie algebra is easy to visualize, because it is just a vector space with some extra structure. It's a linear object, and we can use linear algebra to study it.

The informal picture is as follows. Suppose you've got a Lie group G, say a circle. We pick the identity element and look at the tangent space there: the Lie algebra is the tangent space of the Lie group at the identity. The informal idea is that the Lie algebra describes the elements of the group that are close to the identity: a neighborhood of the identity in the group can more or less be identified with a bit of the Lie algebra near the origin. So the Lie algebra tells you what the Lie group looks like near the identity element. The problem, of course, is that we can always take the tangent space of a Lie group, but by itself this doesn't tell us anything very interesting; it's just a vector space. We want to put some structure on the Lie algebra that captures most of the structure of the Lie group.

To do this, let's first recall the connection between first-order differential operators, vector fields, and infinitesimal automorphisms of a manifold. For simplicity, I'm just going to take the manifold to be ordinary Euclidean space R^n. First of all, what does a first-order differential operator look like? It's just a sum Σ_i f_i(x_1, ..., x_n) ∂/∂x_i. The order is, of course, the highest derivative that appears: if a second derivative such as ∂²/∂x_i∂x_j appeared, the operator would be second order, and so on. First-order differential operators are more or less the same as vector fields. If you've got some sort of manifold, a vector field on it assigns a little pointy arrow to each point. If the manifold is R^n, then a vector field gives you a first-order differential operator, because at each point you can take the derivative in the direction of the vector field; writing this out in coordinates, you see it's exactly a first-order differential operator. So vector fields are just a geometric way of picturing first-order differential operators. Finally, we can think of a vector field as an infinitesimal automorphism. If you've got a vector field on, say, R², we can think of it as an infinitesimally small automorphism of R²: we just push each point an infinitesimally small amount in the direction of the vector field, whatever "infinitesimally small amount" means. So, informally, a vector field is a bit like an automorphism that just moves everything an infinitely small distance.
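To make the correspondence between vector fields and first-order operators concrete, here is a minimal sketch in Python using sympy. It isn't from the lecture; the rotation field (−y, x) on R² is just an arbitrary illustrative choice.

```python
# A minimal sketch (my own illustration, assuming sympy) of a vector field
# acting as the first-order differential operator D = f1*d/dx + f2*d/dy.
# The rotation field (f1, f2) = (-y, x) is an arbitrary illustrative choice.
import sympy as sp

x, y = sp.symbols('x y')
f1, f2 = -y, x  # components of the vector field

def D(h):
    # Differentiate h in the direction of the vector field (f1, f2).
    return f1 * sp.diff(h, x) + f2 * sp.diff(h, y)

print(D(x**2 + y**2))  # -> 0: the rotation field is tangent to circles
```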
So what we want to do now is discuss how to put an algebraic structure on first-order differential operators, and this will later turn out to be essentially the Lie bracket. Suppose we've got two first-order differential operators, D = Σ_i f_i ∂/∂x_i and E = Σ_j g_j ∂/∂x_j, where each f_i and g_j is a function of x_1, ..., x_n which I can't be bothered to write out. We want to find some sort of algebraic operation we can do with them. Well, we can compose them:

DE = Σ_{i,j} f_i g_j ∂²/∂x_i∂x_j + Σ_{i,j} f_i (∂g_j/∂x_i) ∂/∂x_j,

where the second sum appears because we have to move the ∂/∂x_i past the g_j's. You see that the product of two first-order differential operators is in general not first order, because the first term is second order. However, if we compose them in the other order, we get something very similar:

ED = Σ_{i,j} f_i g_j ∂²/∂x_i∂x_j + Σ_{i,j} g_j (∂f_i/∂x_j) ∂/∂x_i.

What you notice is that the second-order parts of these are the same. So D and E do not quite commute, but they almost commute: if you just look at the highest-order parts they commute, and only some lower-order parts fail to. In particular, DE − ED is first order: it's one first-order operator minus another. So here we've got an algebraic operation on first-order differential operators. We can't multiply them, but we can take what's called a commutator, and we define the Lie bracket to be [D, E] = DE − ED. This will essentially be the operation we have on the Lie algebra of a Lie group: we can't multiply two elements of a Lie algebra, but we can take their Lie bracket.

I said that if you've got two first-order operators, their Lie bracket is a first-order operator. Here is a little exercise: if D and E have orders m and n, then the Lie bracket [D, E] has order at most m + n − 1. So roughly speaking, differential operators are trying to commute: they commute if you just look at their leading terms, and what causes them to fail to commute are terms of order less than the leading term. The statement above is just the special case where D and E both have order 1.
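As a sanity check on the claim that DE − ED is first order, here is a small sympy sketch of my own (the coefficient functions are arbitrary choices): the second-derivative terms cancel, and the commutator agrees with the explicit first-order operator Σ_j (D(g_j) − E(f_j)) ∂/∂x_j.

```python
# A sketch (assuming sympy) checking that DE - ED is first order: the
# second-derivative terms cancel, and the result matches the explicit
# first-order formula sum_j (D(g_j) - E(f_j)) d/dx_j.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
coords = [x1, x2]
f = [x1 * x2, x2**2]       # coefficients of D (arbitrary choices)
g = [sp.sin(x1), x1 + x2]  # coefficients of E (arbitrary choices)

def apply_op(coeffs, h):
    # Apply the first-order operator sum_i coeffs[i] * d/dx_i to h.
    return sum(c * sp.diff(h, v) for c, v in zip(coeffs, coords))

h = sp.Function('h')(x1, x2)  # a generic test function

commutator = sp.expand(apply_op(f, apply_op(g, h)) - apply_op(g, apply_op(f, h)))
bracket_coeffs = [apply_op(f, gj) - apply_op(g, fj) for fj, gj in zip(f, g)]
first_order = sp.expand(apply_op(bracket_coeffs, h))

print(sp.simplify(commutator - first_order))  # -> 0
```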
Next, we can ask what identities are satisfied by this bracket. It's obviously not commutative in general, because DE − ED is not equal to ED − DE for most D and E, and it's not associative: [[D, E], F] is not equal to [D, [E, F]] in general. So what identities does it satisfy? Well, there's one obvious one, [D, E] = −[E, D], which is completely trivial if you just look at the definition. And there's a more subtle one due to Jacobi (which I've no doubt mispronounced, but this is the standard pronunciation in English-speaking countries). The Jacobi identity says

[[a, b], c] + [[b, c], a] + [[c, a], b] = 0,

and it's easy to remember because the three terms are just the cyclic permutations of a, b and c. This is actually easy to prove: it's true for any associative ring. I'm saying associative because we do actually have a non-associative product here. If we define [a, b] = ab − ba in an associative ring, then you can check that this identity holds. And if you haven't already checked it, you should do so, because, as someone said, everybody should check the Jacobi identity for themselves once in their life (there's a small numerical sketch of this below). Are there any other identities? Well, you can find further obvious ones by taking combinations of these two, so the real question is: are there any identities which are not consequences of these two? The answer turns out to be no; we have essentially found all the identities satisfied by general Lie algebras. So a Lie algebra is a vector space over the reals with a bilinear map taking d and e to a bracket [d, e], satisfying conditions one and two. That's what it's going to be in this course. You may notice there's no particular reason, if you're an abstract algebraist, why this should be a vector space over the reals; you could perfectly well take a module over some commutative ring, but we will stick to the case of the reals, or sometimes the complex numbers.

So that's what a Lie algebra is. Now we have the following problem: find the Lie algebra of a Lie group. I'm first going to give a somewhat abstract way of defining the Lie algebra of a Lie group as motivation, and then I'm going to try to calculate what it is explicitly. Here's the abstract version. We take a Lie group, call it G, take the identity element, and look at the tangent space at the identity; this will end up being the Lie algebra. Now we look at vector fields on G; a vector field is got by taking a little tangent vector at each point. In general there are an awful lot of vector fields on G, at least if G has dimension greater than zero, but we're only going to look at the left invariant vector fields. Notice that G acts on itself by left translation, and we can ask for the vector fields that are fixed by this action. These are pretty easy to work out: suppose we pick a tangent vector at the identity; then at any point g of the group, the vector field has to be invariant under multiplication by g, so we just transport the tangent vector at the identity to every other point. In other words, a left invariant vector field is uniquely determined by its value at the identity, so the left invariant vector fields correspond exactly to tangent vectors at the identity, and they form a finite dimensional vector space, at least if our Lie group is finite dimensional, which it will be for the moment.

Now, vector fields correspond to first-order differential operators, and as we've just seen, the first-order differential operators on any manifold have this nice Lie bracket. The bracket is preserved by all automorphisms of the manifold, in particular by left translation, so the left invariant vector fields are closed under the Lie bracket. Since we can identify left invariant vector fields with tangent vectors at the identity, the Lie algebra of G gets a Lie bracket: given two tangent vectors at the identity, you first turn them into left invariant vector fields, take the Lie bracket of those, and then restrict back to the identity.
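Here is the promised quick numerical check, a sketch of my own assuming numpy, of the Jacobi identity for the bracket [a, b] = ab − ba on random matrices (the matrix size is arbitrary):

```python
# A quick numerical check (not from the lecture) of the Jacobi identity
# for the bracket [a, b] = ab - ba on random matrices.
import numpy as np

rng = np.random.default_rng(0)

def bracket(a, b):
    return a @ b - b @ a

a, b, c = (rng.standard_normal((4, 4)) for _ in range(3))

# [[a,b],c] + [[b,c],a] + [[c,a],b] should vanish.
jacobi = (bracket(bracket(a, b), c) + bracket(bracket(b, c), a)
          + bracket(bracket(c, a), b))
print(np.allclose(jacobi, 0))  # -> True, up to floating-point rounding
```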
So this gives us an abstract explanation for why the Lie algebra of a Lie group, its tangent space at the identity, has a Lie bracket operation. We can compute it in one trivial case. Suppose the group is just a vector space under addition. Then G is acting on itself by translation (left invariant and right invariant are the same in this particular case). What do the invariant vector fields look like? They're just Σ_i a_i ∂/∂x_i where the a_i are constants, and the Lie bracket of any two of these is zero. If the Lie bracket is always zero, we say the Lie algebra is abelian, or commutative. Abelian Lie algebras are not terribly exciting, because they're really just the same as vector spaces: you're just saying the Lie bracket is always zero. So that's a rather trivial case; what we'd like is a more interesting case where the Lie bracket isn't zero.

So let's try to work out the Lie algebra of the general linear group GL_n(R). Here the identity element is the matrix with 1's down the diagonal, and the tangent space at the identity can be identified with M_n(R), all n by n matrices over R, because GL_n(R) is an open subset of M_n(R), so the tangent space at any point can be canonically identified with M_n(R). Now suppose you've got a vector x in M_n(R). What does the corresponding left invariant vector field look like? In order to know what the vector field does, we need to know what the corresponding operator D does to any function f on GL_n(R), and we define

D f(a) = lim_{ε→0} [f(a(1 + εx)) − f(a)] / ε.

You notice this is more or less the usual rule for a derivative, except we're taking the derivative in the direction of the tangent vector x, transported to the point a in a way that commutes with left translation. Equivalently, D f(a) is the lowest-order non-zero term of f(a(1 + εx)) − f(a), which equals ε times something plus higher-order terms, and what we want is that something.

Now we can take two different tangent vectors. Suppose x and y correspond to differential operators D and E. We want to compute E D f at a point a. If we just plug in the definition twice, E D f(a) is the coefficient of εδ in

f(a(1 + δy)(1 + εx)) − f(a(1 + δy)) − f(a(1 + εx)) + f(a),

which is εδ times something plus higher-order terms. Here we've just plugged in the definition for the two vector fields, with 1 + εx giving one of them and 1 + δy the other, and we get this complicated expression. D E f is something similar with the roles of x and y interchanged, of course. If you look at (DE − ED) f, most of the terms cancel and we get

f(a(1 + εx)(1 + δy)) − f(a(1 + δy)(1 + εx)),

and this is going to be εδ times something plus higher order.
And this something is the value of (DE − ED)f at the matrix a in the general linear group. Since we are ignoring all higher-order terms, we can multiply a by anything that agrees with 1 up to the terms we're tracking without changing the answer. So if we replace a by a(1 − εx − δy + εδyx), then the first term becomes f(a) up to higher-order terms, and the whole expression becomes, ignoring higher-order terms, f(a(1 + εδ(xy − yx))) − f(a). And this is εδ times the differential operator corresponding to the vector xy − yx, applied to f at a; you can see the xy − yx appearing. So this shows that the Lie bracket of the differential operators corresponding to matrices x and y is the operator corresponding to xy − yx. In other words, we get the same bracket whether we consider x and y as elements of the associative ring of matrices with [x, y] = xy − yx, or as left invariant vector fields on the Lie group.

So now we can work out the Lie algebras of various Lie groups. For example, the Lie algebra of GL_n(R) is just the n by n matrices over the reals, with the Lie bracket given by [x, y] = xy − yx. If we've got a closed subgroup of the general linear group, its tangent space at the identity is just a subspace of M_n(R), and it's not very difficult to check that the Lie bracket you get is just the restriction of this bracket. This makes it very easy to work out the Lie algebra of any closed subgroup of GL_n(R).

So let's do a few examples. Take the special linear group SL_n(R), which consists of the matrices a such that det(a) = 1. I want to know the tangent space at the identity. It consists of the matrices x such that det(1 + εx) = 1 + O(ε²), where ε is some very small number. If we expand out this determinant, det(1 + εx) = 1 + ε tr(x) + ε² times something complicated. So we get the condition that tr(x) = 0: the Lie algebra of the special linear group is just the n by n matrices of trace 0, with the Lie bracket given by xy − yx of course.

And we can do the same thing for, say, the orthogonal group. O_n(R) is just the matrices a such that a aᵀ is the identity matrix. For the Lie algebra we just have to figure out the tangent space: it consists of the matrices x such that (1 + εx)(1 + εx)ᵀ = 1 + (things of order ε²); here we're approximating a group element near the identity by 1 plus ε times a tangent vector. If we expand this out, it's 1 + εx + εxᵀ + ε² times something we don't care about. If this is equal to 1 to order ε², we find the condition x + xᵀ = 0. So the Lie algebra of the orthogonal group consists of the n by n matrices x such that the sum of x and its transpose is 0, in other words the skew-symmetric matrices.
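Here is a small numerical sketch of these two examples, my own illustration assuming numpy and scipy (it quietly uses the matrix exponential, which the lecture only reaches next time): exponentiating a traceless matrix gives determinant 1, exponentiating a skew-symmetric matrix gives an orthogonal matrix, and each subspace is closed under the bracket.

```python
# A numerical illustration (a sketch, not from the lecture) of the two
# examples above, using the matrix exponential expm as a bridge from the
# Lie algebra to the Lie group.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 3

def bracket(x, y):
    return x @ y - y @ x

def traceless(m):
    return m - np.trace(m) / n * np.eye(n)  # project onto trace zero

def skew(m):
    return (m - m.T) / 2                    # project onto skew-symmetric

x, x2 = (traceless(rng.standard_normal((n, n))) for _ in range(2))
y, y2 = (skew(rng.standard_normal((n, n))) for _ in range(2))

print(np.isclose(np.linalg.det(expm(x)), 1.0))            # sl_n -> SL_n
print(np.allclose(expm(y) @ expm(y).T, np.eye(n)))        # o_n  -> O_n
print(np.isclose(np.trace(bracket(x, x2)), 0.0))          # [sl_n, sl_n] is traceless
print(np.allclose(skew(bracket(y, y2)), bracket(y, y2)))  # [o_n, o_n] is skew
```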
There's an alternative way of finding the Lie bracket. This approach is in some ways simpler, but it doesn't give quite such a good explanation of why the Jacobi identity holds. The idea is to define [x, y] to be the lowest-order term of the group commutator A⁻¹B⁻¹AB, where we think of A as approximately 1 + εx and B as approximately 1 + δy, with x and y tangent vectors and A and B elements of the group, which we take to be the general linear group over the reals. (It's a little easier if we just choose explicit coordinates, and I'm keeping the ε's and δ's distinct.) If we work out A⁻¹B⁻¹AB, it's approximately

(1 − εx)(1 − δy)(1 + εx)(1 + δy) = 1 + εδ(xy − yx) + higher-order terms,

where the higher-order terms are of order ε² times something, or δ² times something, and so on. So this gives a simpler calculation of the Lie bracket. However, the Jacobi identity isn't so obvious from this approach. If you want to get the Jacobi identity, use the Hall–Witt identity, which says

[[x, y⁻¹], z]^y · [[y, z⁻¹], x]^z · [[z, x⁻¹], y]^x = 1.

Let me explain the notation: for elements of a group, [x, y] is defined to be x⁻¹y⁻¹xy, and x^y is the conjugate y⁻¹xy; this is notation sometimes used by group theorists. You can check the identity by brute force; let me leave that as another exercise, because you really don't want to watch me checking it. As you can see, it is formally very similar to the Jacobi identity, and if you use the definition of the bracket from the previous sheet of paper and expand this identity, you can deduce the usual Jacobi identity for the Lie algebra.
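To see this commutator definition numerically, here is a sketch of my own (assuming numpy; the matrices and step sizes are arbitrary choices) checking that A⁻¹B⁻¹AB with A = 1 + εX and B = 1 + δY equals 1 + εδ(XY − YX) up to higher-order terms:

```python
# A numerical sketch (not from the lecture) that the group commutator
# A^{-1} B^{-1} A B, with A = I + eps*X and B = I + delta*Y, equals
# I + eps*delta*(XY - YX) up to higher-order terms.
import numpy as np

rng = np.random.default_rng(2)
n = 3
X, Y = rng.standard_normal((n, n)), rng.standard_normal((n, n))
eps, delta = 1e-4, 1e-4

A = np.eye(n) + eps * X
B = np.eye(n) + delta * Y

comm = np.linalg.inv(A) @ np.linalg.inv(B) @ A @ B
leading = (comm - np.eye(n)) / (eps * delta)

# The discrepancy from XY - YX shrinks like eps (or delta).
print(np.max(np.abs(leading - (X @ Y - Y @ X))))  # -> small, about 1e-4
```

So next lecture we will be discussing the relationship between the Lie algebra and the Lie group in more detail, using the exponential function.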