Okay, so I guess I should start by mentioning that for my last class I had to arrange an interim week. I'm not sure if you have to do this, you probably do, but I had to bring in half the class at a time to test, so the scheduling all has to be done appropriately. I'm glad we don't have any tests here. So today we're going to talk about finite dimensional algebras, and in particular about the theory of idempotents. This goes back many, many years, and the first place I learned about it is in a book by Curtis and Reiner, so let me just write that down. So A here is a finite dimensional algebra over a field k; that is, it's a ring which is also a finite dimensional vector space over k. This theory can be found in the book by Curtis and Reiner. Did you ever meet Charlie Curtis when you were in Oregon? No? He's kind of legendary there. I think he must be emeritus by now; he's definitely over 90. So anyway, Curtis was a member of the University of Oregon faculty for a very long time, Reiner was at Illinois, and they wrote this beautiful book on associative algebras. If you're interested in seeing all the details of this, I refer you to that book. There's a lot of theory and a lot of proofs, and we're not going to be able to go over all the proofs; we just need to highlight some of the important properties. All right, so what's an idempotent? An idempotent is an element e inside of A such that e^2 = e. So, for example, the identity is a trivial idempotent. Now there's something called the Peirce decomposition; I can never get the spelling right, whether the i comes before the e or the e before the i, so let me check what I have here. A lot of times we call this the Peirce decomposition. So let's say e in A is an idempotent. Then we can split things up: 1 - e is also an idempotent, and it's orthogonal to e in the sense that the product of the two idempotents is zero, e(1 - e) = 0. That's pretty obvious. So e and 1 - e are orthogonal idempotents. Whenever you have orthogonal idempotents like this whose sum equals 1, you can split A up: A = Ae ⊕ A(1 - e). Now, when you look at this decomposition, what do we know? Well, A is free as a module over itself, and these two pieces are direct summands of the free module A. So Ae and A(1 - e) are projective modules. Idempotents allow you to split the algebra into projective modules. Now I can go further, and this takes some proof, but you can do the following more general version: you can take 1 and split it into primitive orthogonal idempotents, 1 = e_1 + e_2 + ... + e_n, by repeating this process. Here primitive means that e_i cannot be written as e_i = f_1 + f_2 for nonzero orthogonal idempotents f_1 and f_2; you can't split the idempotent up any further. And again I want to impose that these idempotents are pairwise orthogonal. Once I do that, I can split the algebra up into projective indecomposable modules.
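To keep the notation straight, here is the decomposition just described, written out in full (a standard statement, recorded here for reference):

\[
1 = e_1 + e_2 + \cdots + e_n, \qquad e_i^2 = e_i, \qquad e_i e_j = 0 \ \text{for } i \neq j,
\]
\[
A \;=\; Ae_1 \oplus Ae_2 \oplus \cdots \oplus Ae_n \quad \text{as left } A\text{-modules},
\]

with each e_i primitive, so that each summand Ae_i is a projective indecomposable A-module.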
So the projective and decomposable models can be obtained by just taking this sort of item-potent decomposition and sort of looking at the factors. So why do we know that primitive item-potent exists? What you're going to do is you can just iterate this process kind of over and over again. Now, you may know about this theorem. Like if you take a semi-simple like algebra, and it's an isomorphic to a direct sum of matrix rings. So this algebra here may not be semi-simple. But when you take the algebra module and it's radical, it turns out to be semi-simple. And so this will be related to this item-potent decomposition. It's compatible with that. So this is in terms of looking at item-potence in general. But now I'm going to look at special types of item-potence, which are called central items. So now I'm going to do the same thing, except what I'm going to do is I'm going to look at the I's, the I's, which are all central. So they're in the center of the algebra. Primitive. Let them be orthogonal again. So C i, C j is zero. Now I can do the same thing. I can split A up in terms of this decomposition. But when you, when you look at A times C i, it's not just a left ideal. It's a two-sided ideal, too. Okay. In terms of A, C one. We're at some ACM. A times C j, this is a two-sided. So the idea is that when you split something up in terms of central primitive item-potence, you get a decomposition of A in terms of indecomposable to-sides. Indecomposable. To. These are precisely called, this decomposition is unique so sort of approximate. So the, so A times C j, C we call these strictly J, or blocks, so people who have seen this before People who have seen this before, if you talk to any like a finite group theorist, you know, works in module of finite group theory, probably has talked about blocks before. And this is precisely what they're talking about. Now, as I mentioned before, the blocks sort of make the algebra a little bit easier to study. So, let me kind of say something about this. So, here's a fact. So, right. And decomposable. Right. So, there's a crow Schmidt there are mainly like if you take a module, and it can always be broken up in terms of the composer. Let me just say this. So, let's say I'm decomposable. Then it turns out that if you take any of the central item codes on the applied to him, then only one of them is going to be not zero. It is not equal to zero. Right. So since the rest of them are equal to zero. And so the nice thing is now I can actually stick or say this module belongs to the block. So, you say, so for any in decomposable module, it's only going to belong to one. So I think any in decomposable select, sort of what, let's say, and the name of what they're finding. And it turns out that and is a direct some of the composable modules. So, and it's going to be anyone. And then what you can do is you can sort of put, you know, you can actually look at each of the components, and then look at the blocks that it belongs to. So the way you can kind of think about the blocks are, they're sort of like little receptacles for your algebra. So when you need the composable module, you just look at what block belongs to and you sort of throw it in that receptacle. And why is this useful. Is that external day. So if the extensions if there's any comology between the two modules, then they have to be the same block, and they can't be in two different blocks and have comology. 
So this is kind of a crash course in finite dimensional algebra theory. I don't know, are there any questions on this? Maybe you've seen parts of this before. A question from the class: is there a way to visualize these idempotents as projections of some kind? Yes, exactly, that's the right picture. If you hit a module with e, and then you hit it again with e, you stay where you were, because e^2 = e. So you should think of the image eM as the projection of M onto a piece of it; an idempotent is exactly a projection operator, an operator P with P^2 = P. All right. Now there's a different side of the story that I want to talk about. I've talked about idempotents; now I need to talk a little bit about representations and how they relate to the idempotents and the blocks. What I want to do is be more specific about the simple modules and the projective modules, and then do an example. So, since A is finite dimensional, any simple A-module is finite dimensional; let's label the simple modules L_1, L_2, ..., L_n. Another way to say this: these are the irreducible representations of the algebra A. Now, for every simple module there's a projective module that sits on top of it. For any L_j there is a projective indecomposable module P_j, and here is the connection with the idempotents, such that P_j projects onto L_j: if I take P_j modulo its radical, which I can compute by taking the radical of the algebra times the module, then P_j / rad(A)P_j is isomorphic to L_j, which is simple. Moreover, there exists a primitive idempotent e_j such that P_j = Ae_j, so I can actually realize this projective cover as a direct summand of A. So this is kind of a nice story: you can actually get your hands on the projective indecomposables, in some sense, although it's a big open question in representation theory what the structure of these projectives is in a lot of cases. These P_j are what the literature calls projective covers. In your representation theory course, did you do Cartan and Brauer theory, modular representation theory? You don't think so? Okay. If you do finite group representations, there's a whole beautiful theory of these projective covers. So these are called the projective covers, and another term you might see is projective indecomposable modules; sometimes people in the literature, especially group theorists, call them PIMs. All right. So for each simple module there's a projective cover, and there's a beautiful formula which generalizes the Wedderburn decomposition. It basically says: the dimension of A is the sum over j of the product of the dimensions of the projective cover and the simple module, dim A = sum_j (dim P_j)(dim L_j). So there's a really nice formula which relates the dimensions of the projective indecomposable modules, the dimensions of the simple modules, and the dimension of the algebra. And notice: if A is a semisimple algebra, the projective cover is actually equal to the simple module. In the semisimple case everything is completely reducible, which means the projectives are equal to the irreducibles.
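Written out, the dimension formula is the following (with L_1, ..., L_n the simple A-modules, P_j the projective cover of L_j, and k taken algebraically closed, say):

\[
\dim_k A \;=\; \sum_{j=1}^{n} (\dim_k P_j)(\dim_k L_j),
\]

and when A is semisimple, P_j \cong L_j, so this collapses to \dim_k A = \sum_j (\dim_k L_j)^2.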
And when you write this out in the semisimple case, the dimension of A is equal to the sum of the squares. You've probably seen this in a first representation theory course, over characteristic zero with characters, right? You get dim A = sum_j (dim L_j)^2. So this is like the Wedderburn decomposition, because you have a direct sum of matrix algebras, and those contribute the squares of the dimensions of the irreducibles. Okay, so this formula is a generalization of the Wedderburn decomposition. A question: is taking the projective cover the first step of a projective resolution? Yes, it is, you're exactly right. You take your module and cover it, then you look at the kernel, and you cover that. So once you find the projective cover of this module (there's a whole theory of projective covers), you just have to know what the kernel is, cover that with a projective, and continue the process. That's how you construct a minimal projective resolution. Another question: I know there's some theorem that says projective modules are equivalent to vector bundles over schemes, that a projective module is the same thing as a vector bundle; is there a way to visualize that here, in this context? Probably, yeah, but I'd have to look at it a little more; I can't think of it right off the top of my head. I'm sure there is. Now, what I'd like to do is show you that it's not so easy finding idempotents. I'm going to do the easiest possible example of trying to find idempotents, and I want to point out the differences depending on the characteristic of the field. Maybe you've done this in group representations; you can think of this as Maschke's theorem for this very special case. So let's do an example, the most basic example possible: the group algebra k[Z/2] of the group Z/2. It's only a two dimensional vector space, consisting of formal sums a_1 · 1 + a_2 · g, where Z/2 has just two elements, the identity and the generator g, with g^2 = 1. Everybody okay with that? All right. Now, suppose I have an idempotent e = a_1 · 1 + a_2 · g. Let me square it; you should work it out too, since I may make mistakes. I get e^2 = (a_1^2 + a_2^2) · 1 + 2 a_1 a_2 · g, using g^2 = 1. All right, I'm assuming e is an idempotent, so e^2 = e, which means this has to be equal to a_1 · 1 + a_2 · g. Let's analyze these equations: a_1 = a_1^2 + a_2^2 and a_2 = 2 a_1 a_2. First of all, if a_2 = 0, then it's clear what happens: e = a_1 · 1, and a_1 = a_1^2, so a_1 is either 0 or 1, and e is either 0 or 1. Not much is happening here; you just get the obvious idempotents, which I'll call the trivial idempotents. We already knew the trivial idempotents were around, and they're not very exciting. Now, if a_2 is not zero, then we can cancel it from the second equation, and it tells us that 2 a_1 = 1. So now, before I conclude something about a_1, what do I have to worry about? Well, let me ask you this.
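Since you're invited to check the computation, here is a quick verification of the idempotent equations, a minimal sketch in SymPy (not part of the lecture) working over the rationals, so in characteristic zero:

```python
# Solve the idempotent equations for e = a1*1 + a2*g in k[Z/2], g^2 = 1,
# over the rationals; in characteristic 2 the answer changes, as discussed below.
import sympy as sp

a1, a2 = sp.symbols('a1 a2')

# e^2 = (a1^2 + a2^2)*1 + (2*a1*a2)*g, so e^2 = e forces:
eqs = [sp.Eq(a1**2 + a2**2, a1),  # coefficient of 1
       sp.Eq(2*a1*a2, a2)]        # coefficient of g

print(sp.solve(eqs, [a1, a2]))
# Expect (0, 0), (1, 0), (1/2, 1/2), (1/2, -1/2): the two trivial
# idempotents and the two primitive idempotents (1 ± g)/2.
```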
When can I conclude that a_1 = 1/2? Right: when 2 is invertible, so this is where the characteristic becomes important. Okay, good. So assuming the characteristic of k is not 2, I have a_1 = 1/2, and I can substitute back in up here. What do I get? 1/2 = 1/4 + a_2^2, so a_2^2 = 1/4. All right, so now I have two solutions: a_2 = 1/2 or a_2 = -1/2. So I get two idempotents, and these are going to be primitive idempotents: e_1 = (1/2)(1 + g) and e_2 = (1/2)(1 - g). So things split up nicely in my algebra as long as the characteristic of k is not 2, that is, as long as the characteristic doesn't divide the order of the group. My algebra splits up as A = Ae_1 ⊕ Ae_2, and these are both one dimensional. Right, so these should be matrix rings, 1 × 1 blocks. And can anyone tell me what the corresponding irreducible representations are for Z/2? The trivial and the sign representation. e_1 cuts out the trivial representation, since g acts as 1 there, and e_2 the sign. So the algebra splits up as the trivial representation plus the sign representation. Now, what happens if the characteristic is 2? Let's back up to these equations. In characteristic 2 the right side of the second equation is zero, so a_2 has to be zero, and we already analyzed the case a_2 = 0: the only idempotents are the trivial ones, 0 and 1. So what that means is that the algebra itself, A, is indecomposable as an A-module. In fact, it's a projective indecomposable A-module (it's certainly projective, being free) of dimension 2. It turns out, and this doesn't take much work, that there's only one irreducible representation when the field has characteristic 2: just the trivial representation. Okay, so when the characteristic is 2, the only irreducible, or simple, module is the trivial one, k. Well, that doesn't leave very much. So if I have this module A, which is two dimensional, and the only irreducible representation is the trivial one, what does its composition series have to look like? It's two dimensional, right? How many composition factors? The trivial module is only one dimensional, so there are two composition factors, and I know the only possible composition factor is the trivial module. So I have the trivial module k sitting inside of A, and when I take the quotient, it's the trivial module again. So A has k sitting down at the bottom, in the socle, and then k as the next layer; similarly, that top copy is the head, A modulo its radical. And this A is not semisimple. So this is a very simple example in many ways, but it gives you the flavor of modular representation theory. Because unlike the characteristic zero case, where you have complete reducibility, in the case where the characteristic divides the order of the group you may not have a semisimple situation: modules are not necessarily completely reducible anymore. And here's a classic example. Let me say what the takeaway from today is. You can think of modular representation theory as, even just, taking projective indecomposable modules and finding out what the structure of the projectives is. We don't know the answer in many, many situations.
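As a picture, the structure of A = k[Z/2] in characteristic 2 can be recorded as follows (a standard computation, not carried out in the lecture: (g - 1)^2 = g^2 - 2g + 1 = 0 when char k = 2):

\[
\operatorname{rad} A = (g - 1)A \cong k, \qquad A/\operatorname{rad} A \cong k,
\]

so A is uniserial, with the trivial module k as its head and the trivial module k as its socle, and in particular A is not semisimple.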
But if you take the general linear group over a finite field, for example, and you take a prime which divides the order of that finite general linear group, then the structure of the projectives is not known in general. Okay, any questions on this? Let me mention one more thing about this; one thing I hope is that you'll actually see some of these principles come up in talks on representation theory. So let me talk about one more concept: the Cartan invariants. It's not hard to state. I gave you the notion of the projective cover of an irreducible representation of a finite dimensional algebra. So take the simple A-modules L_1, ..., L_n, pairwise non-isomorphic, and then take their projective covers P_1, ..., P_n. What you'd like to know is what the composition factors are inside each of these projective covers, and that's what the Cartan matrix records. When I write C = [c_ij], the entry c_ij is the number of times the simple module L_j appears as a composition factor in a composition series of P_i. That makes sense: P_i is a finite dimensional module, so it has a composition series, and you just count multiplicities to get the Cartan matrix. These entries are called the Cartan invariants. Now, this is not the Cartan matrix of root systems or anything like that; this is completely different, it just has the same name. All right. So over here, in our characteristic 2 example, the Cartan matrix involves just one entry: the number of times the trivial module appears in the projective cover A, which is 2. Any questions on this? We have a little bit of time to start the new topic, and some people here might know more about this than I do. So what I want to talk about next are finite dimensional Hopf algebras. I don't know what the best way to motivate this is, but a lot of times what you want to do is talk about finite dimensional Hopf algebras. One thing, and I'll mention this again later on, is that if you talk about finite group schemes, these turn out to correspond to finite dimensional cocommutative Hopf algebras. I should mention affine group schemes here. The reason we like to talk about group schemes is that they behave similarly to groups: I can talk about normal subgroup schemes, I can take quotients, and I can do things in an appropriate way. The language is cumbersome to work with, though; it's much easier working with actual groups in some sense, since everyone likes groups, and the disadvantage of group schemes is that they're more abstract. I'm not exactly sure about the history of this, but it goes back a long way. And while it's more abstract, in the same way it's often easier to work with. So what we'll do is translate back and forth once in a while, but sometimes it's actually better to work on this side rather than the other. So let me give you an example; I haven't defined anything yet, this is just an example. Let's say G is a reductive algebraic group; you can think of GL_n or SL_n as a group scheme. Now you have a nice map, which is called the Frobenius map, which goes from G to G. There's a way to define it in general, but let me just do it for the general linear group GL_n.
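In coordinates, for the example about to be written on the board, the Frobenius map is the following (the standard definition over a field k of characteristic p > 0):

\[
F : GL_n \longrightarrow GL_n, \qquad F\big((a_{ij})\big) = (a_{ij}^p),
\]

that is, raise each matrix entry to the p-th power. This is a homomorphism of group schemes precisely because we are in characteristic p, where (x + y)^p = x^p + y^p.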
So I'm going to do this for the general linear group. The reason I'm not writing GL_n(C), or GL_n of some fixed field, is because you insert the coefficients: you let the coefficients run over all commutative k-algebras. And the Frobenius map just raises each matrix entry to the p-th power. This turns out to be a map of group schemes. And, as was just pointed out, you need the field you're working with to have characteristic p greater than zero; you need that for it to actually be a map of groups. Okay, so when you do that, you can take the kernel, which I'm going to call G_1, and this is also an affine group scheme. So it's not an algebraic group per se, it's a group scheme, but we still have a coordinate algebra; we still have functions, and we can define the group scheme by its coordinate algebra. This is a good example to keep in mind. And this coordinate algebra is a finite dimensional algebra too: it turns out to be finite dimensional. What I can do is take its dual. When I take its dual, and let's call this A, this is a finite dimensional cocommutative Hopf algebra. And a theorem says that representations for G_1 are equivalent to representations for A. And now you know why, now you can see why, I wanted to talk about finite dimensional algebras: sometimes you want to look at this in terms of group schemes, sometimes you want to look at it in terms of finite dimensional algebras, so we want to pass back and forth between the two languages. All right, so that gives you a brief introduction. So let me just give you a flavor of what I mean by a group functor. When I was first learning this, it was hard to get my head wrapped around the idea, so here's the way I thought about it. So R is going to be a commutative k-algebra, and I'm always going to assume that from now on. The idea is that when you take the general linear group GL_n, you sort of know what that means: you take matrices which satisfy the condition that the determinant is not zero. Usually you plug in a field. But what I want to do is think about plugging in any commutative k-algebra. When I plug in any commutative k-algebra, I spit out a group. So this actually gives you a functor from commutative k-algebras to groups. I can do GL_n, I can do SL_n: you just plug any R in and you get a group. So I want to consider the assignment R goes to G(R) as a functor from commutative k-algebras to groups, and it's functorial just by plugging in coefficients; that's the easiest way to see it. In this case, you take the algebra R and you get the group GL_n(R). And since this is a functor, if I have a map between commutative k-algebras, I get a map between the corresponding groups: a map of k-algebras R to R' gives a group homomorphism GL_n(R) to GL_n(R'). This is a way to see the connection with affine group schemes; these are what are called affine group schemes. A question: are you going to do something where you have more structure, a ring structure or something, rather than k-algebras? You might, but right now we're just going to work over commutative k-algebras; you kind of have to set it up right to get this to work in some sense. Are people familiar with representable functors at all? Yes or no? Kind of? All right. All right, so maybe I was supposed to stop. Wait a minute, I think I was supposed to stop, right? So we'll stop now. But next time what I'll do is start in on some things about representable functors, and then I'll make the connection.
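As a formula, the group functor picture just described is the following (with k[G] denoting the coordinate algebra; the representability statement is the standard one, to be taken up next time):

\[
GL_n : \{\text{commutative } k\text{-algebras}\} \longrightarrow \{\text{groups}\}, \qquad R \longmapsto GL_n(R),
\]

and an affine group scheme G is such a functor that is representable by its coordinate algebra:

\[
G(R) \;\cong\; \operatorname{Hom}_{k\text{-alg}}(k[G],\, R).
\]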
I don't want to pack everything into the first semester, so we'll stop there; you'll see a lot more. A question as we finish: what will you do on representable functors?