Well, I'd like to start by thanking the organizers for giving me this challenge and opportunity. What I'd like to talk about today is some more recent methods, and perhaps more recent understanding of older methods, for calculating Galois groups. Because of limitations of time and of the speaker, I'm not going to be able to give you a survey of all the new methods. Looking around the room, I see people whose methods I won't be mentioning, and I hope they forgive me. The organization of the talk will be as follows. I want to describe some algorithms that certainly are algorithms for calculating Galois groups. And as is the case with many theoretical algorithms, there's nothing wrong with them, but when you try to use them, they don't work: not in real time. I'd like to talk then about what does work, what people actually use. And finally, since differential Galois theory is my field and it's been mentioned already, I'll take the opportunity to talk about calculating differential Galois groups. So let's start. Calculating Galois groups in theory. Everybody, yeah, that's legible? I guess one should really first ask: can we calculate Galois groups? Is there an algorithm? And there's a two-word answer: yes, but. So I want to spend some time talking about each of these words. As far as yes goes, yeah, there's an algorithm. It probably goes back further, but the source that I know is Kronecker's Grundzüge einer arithmetischen Theorie der algebraischen Grössen, and he presents the following algorithm. It's very easy to describe. Let's start with a polynomial f with rational coefficients, degree n, and I'm going to assume it has simple roots, that is, it has no roots in common with its derivative. So this is a polynomial in one variable of degree n. The thing that Kronecker suggests is that you form a new polynomial. It will be a polynomial in n plus 1 variables, and it's gotten in the following way.
You take the roots alpha 1 through alpha n of your original polynomial and form alpha 1 times x1 plus alpha 2 times x2 and so forth. And then you look at all those linear expressions for all permutations of the roots. So you look now at the polynomial R, a polynomial in y, whose roots are all of these, what I think of as generic primitive elements. It will be a polynomial of degree n factorial. And if you think about it a little bit, you didn't really need to know what the roots were. The coefficients of this polynomial are symmetric functions in the roots, and so they can be expressed in terms of the coefficients of my original f. Okay, so we started with a polynomial in one variable of degree n, and now we have a polynomial of degree n factorial in n plus 1 variables. The next step is to factor it over the rational numbers, and Kronecker gives an algorithm to factor this; I'll say a little bit more about that. Then you take one of the factors, let's say R1, and you look at the permutations of the x variables that leave it fixed. And that's your Galois group. I'm not going to try to justify it, but I do want to say a few words about this. To start with, from a practical point of view, it's a disaster. It's really a total catastrophe. You start with a polynomial, let's say, of degree 10, in one variable. This R has degree n factorial, which is about 3.6 million. So you have a very large polynomial. And then you're going to factor it, and here is Kronecker's idea for factorization. You have some polynomial R(y, x1, ..., xn). He's going to replace each variable xi with a new variable t raised to the power N to the i, where N is something bigger than the degree of R, so here N is bigger than n factorial. Do the same thing with y. You'll get a polynomial in one variable of, well, just unbelievably large degree. But it's an algorithm.
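As a toy illustration of Kronecker's resolvent (just the construction, not his full factoring scheme), here is a sketch in Python with sympy for the smallest interesting case, f = x^2 - 2; the variable names and the choice of example are my own:

```python
# Kronecker's resolvent R(y, x1, x2) for f = x^2 - 2: one linear factor
# for each permutation of the roots; the coefficients come out rational.
from itertools import permutations
from math import prod
import sympy as sp

y, x1, x2 = sp.symbols('y x1 x2')
roots = [sp.sqrt(2), -sp.sqrt(2)]            # the roots of f = x^2 - 2
xs = [x1, x2]

R = sp.expand(prod(
    y - sum(r * xv for r, xv in zip(perm, xs))
    for perm in permutations(roots)))

# The square roots cancel: R = y^2 - 2*(x1 - x2)^2 has rational coefficients
assert sp.expand(R - (y**2 - 2*x1**2 + 4*x1*x2 - 2*x2**2)) == 0

# R is irreducible over Q, and every permutation of the x's fixes it,
# so Kronecker's recipe returns the full Galois group S_2 for x^2 - 2.
swapped = R.subs({x1: x2, x2: x1}, simultaneous=True)
assert sp.expand(swapped - R) == 0
```

Already for degree 2 the resolvent has 2! linear factors; for degree 10 it would have 3,628,800 of them, which is the practical disaster being described.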
And, you know, Kronecker was quite careful about this, and it really does give an algorithm. But the point I want to make is that it has high complexity. I've given you a vague idea of what I mean by high complexity, but one of the themes in the 20th century, going into the 21st century, was a formalization of the notion of algorithm, and along with that, a formalization of the notion of complexity. So I'd like to spend some time thinking about the complexity of algorithms to calculate Galois groups. The measure of complexity I want to use is what's known as polynomial time, so let me just say a few words about this. The question I want to ask is: is there an algorithm to compute the Galois group with running time that's a polynomial in the size of f? I've made these words red, so I want to define them a little more. By size: the way we measure the size of an integer is not its magnitude but just its length, the amount of space it takes to write it down, the number of digits. I'm not going to worry about which base we're writing it in; just say base 10. So that's about the log of the magnitude of the integer. The size of a rational number, let's say, is the size of the numerator plus the size of the denominator. And for a polynomial, its size is n times the maximum size of the coefficients. So the size of something is the space it takes to write it down; that's the idea. And what do I mean by running time? Running time is going to be the number of operations. To be honest, I should count operations on digits, but if you want to think about general multiplications or additions, that's fine. So now the question is: is there an algorithm to compute the Galois group of an equation whose running time, measured this way, is a polynomial in the size of the input? Well, that's not a fair question, because you can have a polynomial of degree n with relatively small coefficients whose Galois group has size n factorial.
So your output isn't really polynomial in the input; just to write it down would take too much time. On the other hand, if you have a group of some size, let's say size m, it's not hard to see that there always exists a generating set of about log m generators. So if the group has size n factorial, there's a generating set of something like a constant times n log n generators. So the right question to ask is: is there an algorithm to compute generators of the Galois group whose running time is polynomial? This would be a modern way of trying to capture finding an algorithm that isn't too complex. And now that I've posed the question, I can tell you that I can't answer it. We don't know, and I think it's a very interesting problem in algebraic complexity theory. Now, Harold Edwards showed us this quote of Galois, where it's apparent that he understands that although he can tell you what's going on with solving a polynomial in terms of radicals, he can't give you a method that's practical. So he had some sense of the difference between a theoretical approach and a practical approach. In these modern terms, we could formalize this and ask the following question: is there an algorithm to decide whether a polynomial is solvable in radicals? Who knows; if you could do this without calculating the whole Galois group, that might be possible. And in fact, the answer is yes, and this was given by Landau and Miller in 1985. Towards the end of their paper, they very proudly give the following statement, which is why I didn't translate Galois' statement; perhaps there's a little bit of hubris there, but they deserve to be proud. So I'd like to talk a little bit about the Landau-Miller algorithm. What has happened since Galois that lets them do this? Let's talk a little about the ingredients of the Landau-Miller algorithm.
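The claim just made, that a group of order m always has a generating set of at most about log m elements, follows from Lagrange's theorem: each new generator at least doubles the subgroup generated so far. One can check it experimentally with a greedy sketch using sympy's permutation groups; the choice of S6 as the example is mine:

```python
# Greedy generating set for S_6: each accepted element at least doubles
# the order of the subgroup generated so far, so we end up with at most
# log2(720) ≈ 9.5 generators.
from math import log2
from sympy.combinatorics import PermutationGroup
from sympy.combinatorics.named_groups import SymmetricGroup

G = SymmetricGroup(6)                  # order 720
gens, H = [], None
for g in G.elements:
    if g.is_Identity:
        continue
    if H is None or not H.contains(g):
        gens.append(g)
        H = PermutationGroup(gens)

assert H.order() == 720                # we generated all of S_6
assert len(gens) <= log2(720)          # far fewer generators than elements
```

So even when the Galois group has size n factorial, a generating set of size about n log n is a reasonable output to ask for.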
The first and, for me, most important ingredient was the discovery by Lenstra, Lenstra, and Lovász of a way of factoring polynomials in polynomial time. This was really a remarkable achievement. This is for polynomials in one variable. Kronecker's suggestion for factoring a polynomial was: you evaluate it at integers, factor the integers, interpolate possible factors, and test whether these interpolated polynomials divide. Well, you have to try some exponential number of combinations. The Lenstra-Lenstra-Lovász algorithm doesn't approach it quite that way; it avoids this exponential number of choices that you have to test. But again, I don't want to go into the details. Using this algorithm, Susan Landau was able to show that you can construct splitting fields and compute the Galois group in time that's polynomial in the size of the input, the polynomial itself, and the size of the output. So this is already an improvement on Kronecker. Kronecker, no matter what size Galois group the polynomial has, had us calculate a polynomial of degree n factorial, even if the polynomial had trivial Galois group. But Landau says that to calculate the Galois group, if it's small, you don't have to do that; you don't have to go all the way to n factorial. And just quickly, let me show you where the Lenstra-Lenstra-Lovász algorithm comes in. Again, Harold Edwards described this situation. You take a polynomial; it's easy to symbolically adjoin one root to the rationals. You just take the polynomial ring and mod out by the ideal generated by the polynomial; I'm assuming the polynomial is irreducible for now. And calculations in this field, Q(alpha), can be done using the Euclidean algorithm. It's not hard; you don't have to know what alpha is. So that's the first step. But at the second step, you already have a problem. You want to construct the splitting field: what do you adjoin?
You have to adjoin another root, and in order to do the same thing, you have to know its minimal polynomial. To know its minimal polynomial, you need to factor the original polynomial over this extension field. Well, it's easy to make the Lenstra-Lenstra-Lovász algorithm work over finite extensions, or at least to reduce to the rational case. So at this point, you need a polynomial-time algorithm to factor. You factor, and then you adjoin another root. You factor, and you adjoin another root. And you do this until everything is factored. It's the Lenstra-Lenstra-Lovász algorithm that gets you to that point in time polynomial in the size of the group, since of course the degree of this big field is that size. And then it's not hard to find a primitive element and the minimal polynomial of a primitive element, and the Galois group is the set of permutations that take this primitive element to the other roots of its minimal polynomial. Okay, so this is Landau's algorithm. And in fact, at this point, we can decide if an irreducible equation has an abelian Galois group. Let me do this because the ideas come up again when I describe the Landau-Miller algorithm. So the first thing to observe: we have our polynomial f of x, and G is the Galois group of f. I want to note that if G is abelian, then, well, what does abelian mean? All subgroups will be normal, so all subfields of the splitting field will be normal. In particular, when I adjoin one root, I get a normal field, so I get all the roots. That's something to keep in mind. In particular, the splitting field will have degree n. So what is the algorithm? You take the first step on the way to the splitting field, and there are two possibilities. Is this the splitting field? Does f split? If the answer is no, the game is over: the group can't be abelian. So suppose you ask, does f split, and you get the answer yes.
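The root-by-root construction of the splitting field can be imitated in a few lines of sympy, which factors over algebraic extensions for us (a sketch of the idea, not Landau's actual algorithm; the example x^3 - 2 is my own choice):

```python
# Build up the splitting field of f = x^3 - 2 by factoring f over
# successively larger extensions of Q.
import sympy as sp

x = sp.symbols('x')
f = x**3 - 2
r = sp.cbrt(2)                              # adjoin one real root of f

# Step 1: factor f over Q(r); an irreducible quadratic factor remains,
# so Q(r) is not yet the splitting field.
_, factors = sp.factor_list(f, extension=r)
assert sorted(sp.degree(g, x) for g, _ in factors) == [1, 2]

# Step 2: adjoin a root of that quadratic, i.e. pass to Q(r, sqrt(-3)).
# Now f splits into linear factors.
_, factors = sp.factor_list(f, extension=[r, sp.sqrt(-3)])
assert sorted(sp.degree(g, x) for g, _ in factors) == [1, 1, 1]

# The splitting field has degree 3 * 2 = 6 over Q, so the Galois group
# has order 6: it is all of S_3.
```

Each "does f split yet?" test is exactly the factorization step where a polynomial-time factoring algorithm is needed.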
Well, then you have the splitting field, and now you have to test whether the group is abelian. (This doesn't work in the other direction.) You have the splitting field, and Landau says, well, you can construct the Galois group in this case. The size of the Galois group is going to be n, polynomial in the size of the polynomial, and once you have it written down, you can certainly check whether it's abelian just by multiplying in both directions. So at this point, we can check whether the group is abelian. How do we check whether the group is solvable? Well, we need some more information. The first step is to be able to look at a group and see if it's solvable, and Sims in the 70s gave an algorithm to do that. Again, it's polynomial in the size of the group and the size of the equation. So there were two ingredients here: one was that if G is abelian, it must be small; the other was that I was able to test whether it was abelian. Here we can test whether a group is solvable, and it turns out that in certain cases solvable groups have to be small as well. Pálfy in 1982 showed: suppose you have a solvable transitive subgroup of the symmetric group, and it's primitive. Primitive just means that, as a permutation group, it doesn't permute blocks of things around. Then its size is small: it's less than about n to the 3.25. And what Landau and Miller did was to show that if you want to decide whether an equation has solvable Galois group, you can construct auxiliary equations whose Galois groups are a priori primitive, and it's enough to test whether these auxiliary equations have solvable Galois groups. So how does one do this? You have an equation; you know its Galois group is primitive. You start constructing the splitting field. If at some point in your construction the degree of the splitting field is bigger than n to the 3.25, the game is over.
It can't be solvable, so you stop. Otherwise you'll get the splitting field; it'll be relatively small, and Landau's original algorithm, together with Sims's, will let you test whether the group is solvable. So there are some new tools in the game. One is this very good way of factoring things, and the other is that we now know more about groups. Okay, so that's the Landau-Miller algorithm, and it's a polynomial-time algorithm. I forgot to write down bounds on the degrees of the polynomials, but in any case, I don't think one wants to program this. So what does one do in practice? The answer is: many things. But I want to describe a couple of things that work, things that actually are used, for example, in the computer algebra system Magma. The first is mod-p techniques. Well, we've heard a lot about mod-p techniques, but I'm going to look at them from the point of view of algorithms. I'm going to start with a polynomial that has integer coefficients, and for simplicity, I'm going to assume that the leading coefficient is 1. And we all know this expression for the discriminant. The first fact to realize is that, except for a finite set of primes, if you look at the equation mod p and look at its Galois group, one can think of it as a subgroup of the Galois group over the rational numbers. The number theorists in the audience know this; it was part of what was spoken about yesterday. But I can't resist giving a proof of this fact, so I will. It's a very non-canonical way of thinking about things, but very much in the spirit of Kronecker. Remember that Kronecker said the Galois group was gotten by looking at this enormous polynomial, factoring it, taking one of the irreducible factors, and looking at the permutations that leave it invariant. Well, we could factor it mod p, and mod p it's going to be the same story: you take one of the irreducible factors.
Well, take this factor, call it R11, and look at the permutations of the x's that leave R11 invariant. If they leave R11 invariant, then since the factors have no roots in common, they leave R1 invariant. So a permutation mod p gives you a permutation in characteristic zero, and at least this identifies the Galois group mod p with a subgroup of the Galois group in characteristic zero. So now the question is: what kind of information does that give you? How much of the group in characteristic zero can you capture mod p? And boy, there's a lot of work done on this. The Chebotarev density theorem addresses this issue. But an earlier result, perhaps (at least for me) simpler to state, also gives you a sense of it, and this is an old theorem of Frobenius. The theorem of Frobenius says: take the degree and take a partition of the degree, that is, write it as a sum of integers. Your polynomial may factor mod p; look at the primes for which it factors into irreducible polynomials of exactly those degrees. Then the density of those primes, the ratio of the primes for which that is true over the set of all primes, is precisely the proportion of elements in your group which, written in terms of cyclic permutations, have cycles of these lengths. So the percentage of p's for which you get a certain kind of factorization mod p tells you the percentage of elements in the group with a certain cycle decomposition. This is before the Chebotarev density theorem, and the Chebotarev theorem, let me just say, gives you more information: a cycle type is a conjugacy class in the symmetric group, while the Chebotarev theorem tells you about conjugacy classes in the Galois group itself. So what does this say?
It says that if you have an element in your group with a certain cycle decomposition, eventually you will get a witness of that: if you go out far enough, there will be a prime such that when you reduce mod p, you get a factorization of that shape. So by reducing mod p for a lot of primes and factoring, you'll get a sense of what kind of elements are in your group. For instance, if there is an n-cycle in your group, this says that eventually you'll get to a prime where the polynomial is still irreducible. Or if the polynomial has odd degree and there's a 2-cycle in your group, eventually you'll find a prime where the factorization contains a factor of degree 2. So what are the advantages of thinking in this way? Well, first of all, it's easy to factor mod p; this is a result due to Berlekamp, who in 1967 gave a very nice algorithm to factor mod p really, really quickly. And this approach gives good probabilistic tests for determining whether your group is the full symmetric group or the alternating group, which more than likely it is. And it gives good evidence for other groups. So when we want to start calculating Galois groups, we start factoring for a few hundred primes, and at least one can get a sense of what group you possibly have, or at least exclude other groups. The disadvantage is that it's an asymptotic result. I said that if an element of your group has a certain cycle decomposition, eventually there will be a prime that witnesses that. We have estimates for how far out you have to go, but they aren't good enough to give you a fast algorithm. The other problem is that groups are not determined by the distribution of their cycle patterns. You can have two groups where the numbers of elements with a given cycle pattern are the same, but they're not the same group; this already happens in degree eight. But still, as a way to get your hands on things, it's good. So that's one technique.
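Here is what such a sampling test can look like in sympy, a minimal sketch of my own in which `factor_list(..., modulus=p)` plays the role of Berlekamp's algorithm; the example polynomial x^4 + x + 1 (whose Galois group is S4) is my choice:

```python
# Sample factorization degree patterns of f = x^4 + x + 1 modulo many
# primes.  By Frobenius, the patterns that occur (and their frequencies)
# mirror the cycle types of elements of Gal(f).
from collections import Counter
from sympy import symbols, factor_list, discriminant, primerange, degree

x = symbols('x')
f = x**4 + x + 1

disc = discriminant(f, x)                  # 229; skip primes dividing it
patterns = Counter()
for p in primerange(3, 300):
    if disc % p == 0:
        continue
    _, factors = factor_list(f, modulus=p)
    pattern = tuple(sorted(degree(g, x) for g, _ in factors))
    patterns[pattern] += 1

# Gal(f) = S_4 here, so among a few dozen primes we expect to witness
# 4-cycles (f stays irreducible mod p) and 3-cycles (pattern 1 + 3).
assert (4,) in patterns
assert (1, 3) in patterns
```

Seeing an n-cycle pattern already rules out many small groups; but as noted above, distinct degree-8 groups can share all their cycle-pattern frequencies, so this test alone never finishes the job.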
Another technique is invariant-theoretic techniques, and this certainly goes back to the ancients. Let me start with a well-known example, the cubic. How do you determine the Galois group of a cubic? Well, the first thing you ask is: can you factor it? If you can factor it and it has just one rational root, the Galois group is S2; if it has three rational roots, it's the identity. So you start by factoring it. Now assume it's irreducible; that says the Galois group acts transitively on the roots. From group theory (well, that's overstating it a little bit), we know that the transitive subgroups of the symmetric group S3 are S3 and A3. So you have to distinguish between these two groups. I'm now going to construct a polynomial: it's z squared minus the discriminant. We know that the alternating group leaves the square root of the discriminant fixed, but the symmetric group doesn't. So the Galois group is going to be the alternating group if and only if this new polynomial factors. You've perhaps seen this stated in a slightly different way, but what I want to underline is that we're reducing the calculation of Galois groups to factorization properties not just of the original polynomial, but of polynomials that we calculate in some other way, associated polynomials. And the question is: why does this work? Why should one even start thinking about these things? I'd like to address that question, and it depends on two facts. The first is that a group is determined by its permutation representations. This is very much in the spirit of Grothendieck and the Galois categories. But for me, in a very concrete way, it just means that if you have two groups, a group and a proper subgroup, you can always find a permutation representation such that the big group acts transitively and the little group doesn't. And in fact, it's easy.
You let the group act on itself by multiplication on one side or the other. The group acts transitively on itself, but under a proper subgroup the orbits are the cosets of that subgroup, so it's not going to act transitively. So in fact there's one representation that distinguishes the group from all its subgroups: the group is transitive, but none of the proper subgroups are. Of course, that's some enormous representation; one would like to find smaller ones. The second fact is that you don't have to look in strange places for your permutation representations: you find them in the Galois extension already. What I mean is, if I have a representation of the Galois group as a permutation group, then I can find elements in the Galois extension such that the Galois group acts on these elements by exactly that permutation action. So how does that help us? Remember I had A3 and S3. There was this permutation representation on two elements, u and v, such that A3 left these elements fixed and S3 permuted them. These were the square root of the discriminant and minus the square root of the discriminant, and sure enough they lie in my Galois extension. So what does one do? You think your Galois group is a certain group, and you want to distinguish it from possible subgroups. For each of, let's say, the maximal subgroups, you find a representation such that the group acts transitively but the subgroup doesn't, and I said one can realize that inside the Galois extension itself. You write down a polynomial whose roots are these elements; it will factor when the group doesn't act transitively, and it will be irreducible when the group does act transitively. And this works a lot of the time. You need to know a lot about your groups.
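For the cubic, the resolvent test described above reduces to one line of arithmetic: for an irreducible depressed cubic x^3 + px + q, the auxiliary polynomial z^2 - disc factors over Q exactly when the discriminant -4p^3 - 27q^2 is a perfect square. A sketch in plain Python (the example cubics are my own):

```python
# Distinguish A_3 from S_3 for an irreducible cubic x^3 + p*x + q:
# z^2 - disc factors over Q  iff  disc is a perfect square  iff  Gal = A_3.
from math import isqrt

def galois_group_of_cubic(p: int, q: int) -> str:
    """Assumes x^3 + p*x + q is irreducible over Q."""
    disc = -4 * p**3 - 27 * q**2
    if disc >= 0 and isqrt(disc) ** 2 == disc:
        return "A3"
    return "S3"

assert galois_group_of_cubic(-3, 1) == "A3"   # x^3 - 3x + 1, disc = 81 = 9^2
assert galois_group_of_cubic(0, -2) == "S3"   # x^3 - 2,      disc = -108
```

For higher degrees the same pattern applies, except that the resolvent is built from a group-specific invariant rather than the square root of the discriminant, and one factors the resolvent instead of just testing squareness.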
Transitive subgroups of the symmetric group, I think, are classified up to degree 35. I'm told that this idea works for polynomials up to degree 20-something, 23 or 24, and this is what can be done, certainly using Magma. There's a lot more to say about the usual Galois theory, but I'm going to stop here. I would like to say, though, that from a computational point of view these two ideas are what I get out of Grothendieck's theory of Galois categories; there are many other uses of it. Okay, differential Galois groups. Malgrange and other people spoke about this. I only want to talk about the linear theory, and I want to give a quick introduction to it as well, just to nail down some definitions. I'll first talk about what these things are and what good they are, and then, again, talk about some aspects of what we can do in theory and what we can do in practice. So here's the quick introduction to the Picard-Vessiot theory. Let's start with a linear differential equation. I'm going to simplify things and think of it as having coefficients that are rational functions over the complex numbers. In Galois theory, at least nowadays, what we do is form splitting fields and automorphism groups and so on. So I want to form a splitting field. One cannot do this constructively, but certainly in theory: you go to some nice regular point, and there exist n independent solutions there. You form a splitting field by adjoining these solutions to the base field of rational functions. You also want to adjoin their derivatives: the first derivatives, second derivatives, and so on. Once you get up to the (n-1)-st derivatives, you get the n-th derivatives for free, because these things satisfy the differential equation; higher derivatives come for free. The Picard-Vessiot group is going to be the group of automorphisms of this large extension K over little k, which I haven't defined.
So little k is the rational functions. And I don't want just field-theoretic automorphisms; I want those automorphisms which commute with the derivation. I want them to preserve derivatives. The first thing to notice is that because they preserve derivatives, this group of automorphisms will take the vector space of solutions to itself, and each automorphism will be a linear map on that vector space. So once you've chosen a basis, you can identify this group with a group of invertible matrices. And one can show that it's Zariski closed, meaning there's some set of polynomials in n-squared variables such that this group of matrices is exactly the set of matrices whose entries satisfy that collection of polynomials. For example: the determinant minus one is zero; that's a polynomial condition in n-squared variables. So this is a group of matrices, but it's not just any group of matrices. And finally, there's a Galois correspondence between certain subgroups and certain subfields. The certain subgroups are the subgroups that are also Zariski closed, defined by sets of polynomials, and they correspond, in the usual Galois correspondence, to the subfields which are closed under the derivation. So this is the Galois theory. What's the point? Why is one interested? Well, in the usual Galois theory, the size of the group measures the size of the extension: the order of the group is the linear dimension of the extension. Here the natural generalization of this is true: the size of the group, meaning its dimension as a variety, is equal to the transcendence degree of the big field over the small field. And this is very useful in number theory and other places. One is interested in showing that values of certain functions are transcendental, and the usual first step is to show that the functions themselves are transcendental.
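As a tiny illustration of "automorphisms commuting with the derivation" (my own toy example, not one from the talk): for y' = y over k = C(x), the Picard-Vessiot extension is k(e^x), and scaling the solution by a nonzero constant is a differential automorphism:

```python
# Toy Picard-Vessiot example for y' = y over k = C(x): K = k(exp(x)).
# The map sigma: exp(x) -> c*exp(x), with c a nonzero constant, acts
# linearly on the 1-dimensional solution space and commutes with d/dx.
import sympy as sp

x, c = sp.symbols('x c', nonzero=True)
t = sp.exp(x)                                  # a solution of y' = y

def sigma(f):
    return f.subs(t, c * t)                    # rescale the chosen solution

f = (x**2 + 3) * t + 5 * x                     # a sample element of K
assert sp.simplify(sigma(f).diff(x) - sigma(f.diff(x))) == 0
```

Here the Galois group is the multiplicative group of constants c, a Zariski-closed subgroup of GL_1, and its dimension (one) matches the transcendence degree of k(e^x) over k.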
And so, for example, if you look at the Bessel equation, which has a parameter lambda, and you put a restriction on lambda, one can show that the Galois group is the special linear group. (I have left out a derivative here; there should be a y-prime term.) The special linear group has dimension three, so of these four functions, any three are going to be algebraically independent. Another thing you can measure is solvability, just like in the usual Galois theory. I'm going to say an equation is solvable in terms of Liouvillian functions, and here is something that is actually named after somebody who introduced the notion: this really does go back to Liouville, although perhaps not quite in this algebraic form. Just as in the usual Galois theory, it's phrased in terms of towers of fields. You have a tower of fields where each one is gotten from the previous one by adjoining either something algebraic over the previous field, an integral of something from the previous field, or an exponential of an integral. And we say the equation is solvable precisely when it has a full set of solutions in such an extension. So here's a differential equation, and I've expressed the solutions as e to the integral of the square root of x and e to minus that integral, and I've exhibited a tower which shows that this indeed has Liouvillian solutions. And the Galois group measures this. The theorem about solvability in this context is that an equation is solvable in these terms if and only if, well, I'd like to say that the group itself is solvable; that's almost true. The right statement is that the group has a subgroup of finite index that's solvable. And in fact, for the equation I've written down, its group is solvable: it's the group of diagonal matrices of determinant one, union the anti-diagonal matrices of determinant minus one. Okay, so that's the introduction to the Galois theory. Now, what can we calculate?
And this actually has a long history. There's a wonderful book by Jeremy Gray on linear differential equations that outlines this history, and I just want to quickly talk about what was done in the 19th century and what's been done in the 20th. So the first thing is: one can decide if a linear differential equation has a basis of algebraic solutions. That's a very special case of Liouvillian solutions. Work on this was done by Schwarz, who made a list of the hypergeometric equations that have algebraic solutions. Pépin, subsequent to this (and I'll talk about him a little later), gave an algorithm which reproduced Schwarz's list, with some gaps. Fuchs thought about this; he published papers on very special equations having special groups. And Klein showed that any second-order equation that has only algebraic solutions comes from Schwarz's list by a change of the independent variable. In the 70s, Baldassarri and Dwork turned this into an algorithm, and it has been further worked on by many people here: Mark van Hoeij, Jacques-Arthur Weil, and others. For higher-order equations, this was looked at by Jordan. In fact, Jordan has a very famous theorem in group theory that says that a finite subgroup of the general linear group has an abelian normal subgroup whose index is bounded in terms of the dimension alone; I don't want to state his theorem exactly. And boy, you look at books on group theory and you see this theorem again and again. This theorem was published in a paper whose title was, in English, "A memoir on linear differential equations having algebraic integrals." There's no mention of group theory in the title, although I think what's mainly remembered from that paper is the group theory. So Jordan looked at that, and then it was looked at by Boulanger and Painlevé.
Boulanger was a student of Painlevé, although if you look at the names, it probably should be the other way around. They also considered this problem, and they were looking for algorithms. And they almost had algorithms; there was a missing fact that they couldn't deal with at the time, a fact that I think Boulanger calls the problem of Abel. Let me say what that is. You have an algebraic function y, and the question is: is e to the integral of y algebraic? They weren't able to solve this problem; they knew it was a problem. As far as I know, it was only solved in the 70s by Robert Risch in his work on integration in finite terms. He reduced it to the following: you want to bound the torsion of the Jacobian of a curve defined over some number field. And he showed one can do this effectively by reducing the curve at good primes, where the Jacobian reduces in a nice way; the torsion then embeds in a finite group which you can calculate, and that bounds the torsion. I don't want to get into it; there are people here who know this better than I do. But here is modern mathematics at work solving a very classical problem. So let me quickly go through the general problem of solvability in terms of Liouvillian functions. It was looked at by Liouville for n equals 2, who characterized what can happen, without group theory, and Pépin gave an algorithm. Liouville's work, by the way, is from the 1830s; all this other work is from the 1870s, 80s and 90s. Liouville was quite a bit ahead of his time in this, as in many other areas. This was turned into a modern algorithm by Kovacic in the 70s, although published much later. For higher order, the old guys looked at it also, and again they ran into the same kind of problem. Marotte worked out many cases, n equals 3, 4, 5, by knowing the groups. A general algorithm was given by myself and improved by Ulmer and other people. For n equals 2 and 3 it's been programmed; one can calculate things.
One can characterize when equations are solvable in terms of smaller order linear equations. And most recently, in the last year or two, Mark van Hoeij and his students can actually take an equation and tell you: can it be solved in terms of Bessel functions? Can it be solved in terms of Airy equations? And this has nice applications to understanding formulas in combinatorics. And finally, in 2002, Hrushovski gave an algorithm to compute the Galois group. And I would say that this is at the stage where Kronecker was: it's not an algorithm you would want to implement, but it certainly is a wonderful and interesting algorithm. So I will take two more minutes and tell you about what is done in practice. And let me say that what is done in practice nowadays is strongly influenced by the theory of Tannakian categories. So I want to tie this in with the talk that was given on Tannakian categories. So the ideas are based on this philosophy, and I say philosophy because I don't want to state any theorems. Once again, a linear group is determined by its representations, in the following sense: you start with the group and you look at all its representations. These are vector spaces on which the group acts. There are subspaces which are invariant. You can form tensor products, direct sums, various things. You have maps between these vector spaces that preserve the group action. Now forget the group and just look at this category of vector spaces with these designated maps and constructions. That category determines the group. If somehow you have two categories that look the same, and you knew they came from groups, then the groups they came from must be the same. Saying it another way, you can recover the group from its category of finite dimensional G-modules. And all the finite dimensional G-modules you can get from one: you have one faithful G-module, and by various linear algebra constructions you can get all of them.
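As a toy illustration of telling groups apart through representation data alone (my own example, not from the talk): for a finite matrix group G acting on V, character theory says the dimension of the invariants in V tensor V is the average of tr(g) squared over the group. Taking S3 and its subgroup A3, both acting by permutation matrices on C^3, this single number already distinguishes them.

```python
import itertools
import numpy as np

def perm_matrix(p):
    """Permutation matrix sending basis vector i to basis vector p[i]."""
    M = np.zeros((len(p), len(p)))
    for i, j in enumerate(p):
        M[j, i] = 1.0
    return M

def sign(p):
    """Sign of a permutation, by counting inversions."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def dim_invariants_tensor_square(mats):
    """dim of (V tensor V)^G = (1/|G|) * sum over g of tr(g)^2."""
    return sum(np.trace(M) ** 2 for M in mats) / len(mats)

S3 = [perm_matrix(p) for p in itertools.permutations(range(3))]
A3 = [perm_matrix(p) for p in itertools.permutations(range(3)) if sign(p) == 1]

print(dim_invariants_tensor_square(S3))  # 2.0
print(dim_invariants_tensor_square(A3))  # 3.0
```

The invariants grow when we pass to the subgroup, which is exactly the kind of representation-theoretic fingerprint that, transported to the differential equation side, lets one cut down the candidate Galois groups.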
The next important fact, and this is the key, is that a linear differential equation is an avatar of its representation theory. I looked up the definition of avatar, and what I mean is that it's the embodiment of the representation theory, but in some other form. So what do I mean by that? If I start with the differential equation, form this extension, and look at the solution space, then for any other G-module, for any other representation, I can take my solution space and do something algebraic to it to construct this. But what's more important is that I could start with my original equation and do something to it so that the new equation I get has this new representation as its solution space. So whatever I could have done with the representations, I can do with the equation. And these two results are the fundamental theorem of Tannakian categories in this setting. So what does this mean? I have one more slide and then I'll finish. For example, if I take my solution space and I see, oh yes, there's an invariant subspace here: well, that corresponds to my original equation having a factor, and the factor's solution space is the subspace. If I have two operators whose solution spaces have trivial intersection, I can form the direct sum; that corresponds to the solution space of the least common left multiple of my operators. And I can do things like form symmetric products: on the representation theory side I form a symmetric power, and on the equation side I can form an operator whose solution space is that symmetric product. Now, a group is determined by all these representations. That means I can distinguish it from its subgroups by looking at representations. And that means, in terms of the operator, I can distinguish the group from its subgroups by looking at properties of operators I construct. So here are some results, and this will be my final slide. A second order equation is solvable in terms of Liouvillian functions if and only if a certain operator I construct from it, the operator whose solution space is the sixth symmetric power of the solution space, factors.
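The symmetric power construction can be checked symbolically for a second order equation in normalized form y'' = r y. The symmetric square has solutions spanned by y1^2, y1*y2, y2^2, and a classical computation gives the third order operator z''' - 4 r z' - 2 r' z. The sympy sketch below (my own, not from the talk) verifies that a product of two solutions satisfies it.

```python
import sympy as sp

x = sp.symbols('x')
r = sp.Function('r')(x)
y1, y2 = sp.Function('y1')(x), sp.Function('y2')(x)

# The defining equation y'' = r*y, imposed on both basis solutions.
ode = {y1.diff(x, 2): r * y1, y2.diff(x, 2): r * y2}

def D(e):
    """Differentiate, then rewrite second derivatives using the ODE."""
    return e.diff(x).subs(ode)

z = y1 * y2            # a typical solution of the symmetric square
z1 = D(z)
z2 = D(z1)
z3 = D(z2)

# Candidate symmetric-square operator applied to z: z''' - 4*r*z' - 2*r'*z.
residual = sp.expand(z3 - 4 * r * z1 - 2 * r.diff(x) * z)
print(residual)        # 0
```

The same mechanical substitution scheme extends to higher symmetric powers, which is essentially how the order-7 operator in the solvability criterion is built in practice.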
That symmetric power is an operator of order 7, and the condition is that it factors, that you can write it as a composition. Another example, here's an example of a specific group: the central extension of the alternating group on six elements that lies in SL3. I have a third order equation; does this equation have this as its group? Well, I have to look at several things. The second symmetric power, the third symmetric power: are they irreducible? How does the fourth symmetric power factor? It's this Tannakian view of life that forces you to think in these terms. And then of course you need group theory and representation theory to work out special cases to make it work. So I've abused my privilege here and run over time, but I'm going to stop here. Thank you. Is there a question? The question is whether one can factor differential polynomials as efficiently as... Yeah, so there are good heuristic methods. But the complexity of a general method, as far as I know, was analyzed several years ago by Dima Grigoriev, and for the general method the upper bound on the complexity is doubly exponential. It's a real disaster. So again, in practice good heuristic methods will work, and in theory, regrettably, we're still kind of at the Kronecker stage of life. So the question is: is there something like a Schwarz list in general? People have thought about this for third order equations. It's rather complicated; it isn't as simple. And in general, I don't know, but I'm skeptical. For second order, the thing about the Schwarz list is that the hypergeometric equations are determined by local phenomena, that is, the local monodromy. And this isn't true in general. And as one goes to higher order equations, life gets worse and worse. So in my mind, for that reason, I have a feeling that things are much more complicated. Do you know any algorithms that may apply to function fields, like analyzing the Galois group of an extension of C(x), for example?
So the short answer is no, I'm not that familiar with that. Many of these techniques, certainly the representation theoretic techniques, work. And the key is that if you have an equation, the only thing you can do with it is factor it. I mean, you might think you can adjoin roots, you might think you can do that sort of thing, but you're deluding yourself; the only thing you can do is factor. And once you have good factorization algorithms, this idea of using the representation theory to calculate the group, using the category of representations and the relations to their invariant subspaces, will let you calculate the group. But I don't know of any explicit algorithms other than what I spoke of. There are some analytic descriptions of the differential Galois group; can they be used to make practical computations? Absolutely. And this is one of the things that I ignored. So there is a finite collection of objects that generates a dense subgroup. Several of these objects can be computed explicitly, and several numerically. And I guess, Michèle Loday-Richaud, you've worked on this with some of your students, and quite successfully. And also, even theoretically, for example, there is a calculation of an equation whose group is G2; I guess this was done in Claudine Mitschi's thesis, together with Anne Duval. So it's been done. I don't know of any algorithms, but one can approach it that way. Thank you. Thank you.