and welcome to my second lecture. My second lecture is devoted to linear algebra over Z, and it will mostly consist of applications of the lattice basis reduction algorithm that I dealt with last Tuesday. To illustrate this, let me start with what I call the kernel-image algorithm. That is a polynomial-time algorithm, and the input is a group homomorphism f from Z^n to Z^m. This input is specified, as you might expect, by saying what n is, which is a non-negative integer, and likewise m, and then the homomorphism itself is specified by a matrix, that is, by the images under f of the standard basis vectors of Z^n. So the length of the input is essentially the sum of the logarithms of the absolute values of the entries of that matrix, where the logarithm of 0 and 1 should be taken to be 1 for the occasion. The output is a basis for the kernel of f, whose basis elements will be specified as elements of Z^n, which is where the kernel of f lives, and also a basis for the image of f, which is a subgroup of Z^m, whose basis elements are again specified by their m coordinates. The algorithm runs as follows. What we do is make Z^n into a lattice. That means that we define an inner product on Z^n, and that inner product will be such that when we write down a reduced basis for this lattice, which you can obtain by the algorithm from last Tuesday, that basis gives you both a basis for the kernel and a basis for the image. This inner product will encode properties of f, and it will therefore be different from the usual inner product. The usual one, if I take an element x in Z^n with coordinates x_i, would be the sum of the x_i squared, but that is not the one that I will be using.
My new inner product is such that if I take an element x in Z^n, then its length squared is q(x) = |x|^2 + ω·|f(x)|^2: I start with the standard length squared, and then I add a very big number, in the notes called ω (it is called ω because it stands for infinity, although in reality, while it will be very large, it is not necessary to take it infinite), times the standard length squared of the image f(x), taken with the standard inner product on Z^m. You should notice that I am not really specifying the inner product of any two vectors here, but only the inner product of a vector with itself; of course, if you want the inner product of two different elements, you can get it from the lengths squared by polarization: the inner product of two elements x and y from Z^n is (q(x+y) − q(x) − q(y))/2. And this q is such that if x is to have a small q-length, then it is better that the "infinite" contribution be zero, that is, that x lie in the kernel. So the short vectors that you get out of basis reduction will give a basis for the kernel of f; that is what I am going to make precise in a moment. And the remaining vectors in the basis will then give rise to a basis for the image of f. This ω you can, if you consult the notes, take equal to an explicit function of the size of the matrix entries; I write down 2^(n−1) · n^(n+1) · B^(2n), if I am right, where B (B means "bound") is the maximum of the absolute values of the entries of the matrix that describes f. So ω is a number whose logarithm is polynomially bounded in terms of the length of the input, and for the purposes of what I have to say it is close enough to infinity.
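To make this concrete, here is a small numerical illustration in Python. The matrix M, the vectors, and the value of ω are all hypothetical stand-ins chosen for the example; the point is only how the form q and the polarization identity behave.

```python
# Hypothetical small example: f: Z^3 -> Z^2 is given by the matrix M below
# (entry M[i][j]; column j is f(e_j)), and omega is a modest stand-in for
# the "very large" constant of the lecture.
def q(x, M, omega):
    """q(x) = |x|^2 + omega * |f(x)|^2."""
    fx = [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]
    return sum(t * t for t in x) + omega * sum(t * t for t in fx)

def ip(x, y, M, omega):
    """Inner product recovered from q by polarization:
    <x, y> = (q(x + y) - q(x) - q(y)) / 2."""
    s = [a + b for a, b in zip(x, y)]
    return (q(s, M, omega) - q(x, M, omega) - q(y, M, omega)) // 2

M = [[1, 2, 3], [2, 4, 6]]   # rank 1, so the kernel of f has rank 2
omega = 10**6                # "close enough to infinity" for this example
# a kernel vector such as (-2, 1, 0) keeps q small (here q = 5), while any
# vector outside the kernel pays at least omega
```

A kernel vector of f stays q-short, while every vector outside the kernel has q-length at least ω, which is exactly the separation the reduced basis exploits.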
And then what you do is find a reduced basis b_1 through b_n for this lattice (Z^n, q). Then one proves something: if R is the rank of f, that is, the rank of the image of f as a free abelian group, then n − R will, by basic linear algebra, be the rank of the kernel, and it turns out that q(b_1) through q(b_{n−R}) are smaller than ω. You have to prove this; it is essentially an application of Cramer's rule combined with Hadamard's determinant inequality. And the others, q(b_{n−R+1}) through q(b_n), are at least ω. (I see that in the notes it is in fact better to take ω just a little larger than the value I wrote down.) Next one proves, and maybe I should make this a little more visible for you, that b_1 through b_{n−R} form a basis for the kernel of f: if q of these elements is less than ω, then the ω-contribution must be zero, so they are in the kernel; and they actually generate the kernel, it is a basis over the integers. And if you look at the others and apply f to them, then the images of the remaining basis vectors likewise form a basis for the image of f, again a basis over Z. So that is a typical application of lattice basis reduction, and it depends on properties of reduced bases in general, in this case in particular on the fact that the lengths of the basis vectors in a reduced basis are fairly good approximations of what are called the successive minima. Okay, if you would like to see more details of this, then I must refer you to the notes, where the proofs of all these statements are written out in detail. And you see also that it is a perfectly polynomial-time algorithm, since writing down q is not a very onerous computation.
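The output of the kernel-image algorithm can also be obtained, for small inputs where coefficient blow-up is not a concern, by naive integer column reduction (Euclid applied to the columns) instead of lattice basis reduction. The following is a self-contained sketch of that alternative, not the lecture's method:

```python
def kernel_image(M):
    """M is an m x n integer matrix whose j-th column is f(e_j) for a
    homomorphism f: Z^n -> Z^m.  Integer column reduction (Euclid on the
    columns, tracked in the unimodular matrix U) yields a Z-basis of
    ker f (columns of U over the zero columns of the reduced matrix)
    and a Z-basis of im f (the nonzero reduced columns)."""
    m, n = len(M), len(M[0])
    A = [row[:] for row in M]                       # receives column ops
    U = [[int(i == j) for j in range(n)] for i in range(n)]

    def colop(j, k, t):                             # column j += t * column k
        for i in range(m):
            A[i][j] += t * A[i][k]
        for i in range(n):
            U[i][j] += t * U[i][k]

    def swap(j, k):                                 # exchange columns j and k
        for i in range(m):
            A[i][j], A[i][k] = A[i][k], A[i][j]
        for i in range(n):
            U[i][j], U[i][k] = U[i][k], U[i][j]

    c = 0                                           # next pivot column
    for r in range(m):
        while True:
            nz = [j for j in range(c, n) if A[r][j]]
            if len(nz) <= 1:
                break                               # at most one entry left
            j, k = sorted(nz[:2], key=lambda t: abs(A[r][t]))
            colop(k, j, -(A[r][k] // A[r][j]))      # Euclid step on row r
        if nz:
            swap(c, nz[0])                          # move pivot into place
            c += 1
    kernel = [[U[i][j] for i in range(n)] for j in range(c, n)]
    image = [[A[i][j] for i in range(m)] for j in range(c)]
    return kernel, image
```

The point of the lattice-based method in the lecture is precisely that it keeps all intermediate entries polynomially small; this naive version makes no such guarantee.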
And once you have the q, the lattice basis reduction algorithm from last Tuesday runs in polynomial time. So this kernel-image algorithm is pretty much the basis, no pun intended, for everything else that I will have to say today. The linear algebra over Z will be a little more general than what I am saying here, in the sense that we are also going to consider abelian groups that are finitely generated but that are allowed to have elements of finite order; in particular, finite abelian groups will be of interest for us. You see that there is one thing the algorithm does not do for you: it does not compute the cokernel of f, the cokernel being Z^m modulo the image. The reason is that the cokernel is not necessarily a free abelian group again. But those cokernels are going to play an important role in the rest of the lecture, because the cokernels of these matrices are, up to isomorphism, essentially all finitely generated abelian groups, and that is actually the way we will represent them. If you have any finitely generated abelian group and the number of generators is m, then there will be a finite number of defining relations for it, and if you call that number n, then that is just another way of saying that your group is, up to isomorphism, the cokernel of a group homomorphism from Z^n to Z^m. Okay, but before I pass to finitely generated abelian groups, I want to give one further illustration of an application of the kernel-image algorithm. Maybe it is also of interest to know that you can actually avoid computing this ω, and just treat it as a formal symbol that is, so to speak, infinite and lives in some ordered two-dimensional vector space, but that is not something I want to elaborate upon. Okay, so here is another algorithm, and it concerns fibers of linear maps.
So given, given means that is the input, again a homomorphism f as before, in the same manner, so by a matrix, and also a target vector b in Z^m. We are interested in knowing the fiber f^(-1)(b) in Z^n, and in particular we are interested in knowing whether or not the fiber is non-empty, whether there is an element that maps to b, and if indeed it is non-empty, we would also like to know at least one element in there. Once we know one element, we know the whole thing, since we just have to add the kernel of f to that element, and I told you already how to compute the kernel of f. So, to be decided is whether this fiber is non-empty, and if so, to find an element in it. One way of doing this is to look at a slightly different map, from Z^n × Z, that is, Z^(n+1), to Z^m. I call this map g, and it sends a pair (x, z), where x sits in Z^n and little z is just an integer, to f(x) − z·b; b is a vector in Z^m, so I can multiply it by an integer and compare it with f(x). Now you see that this fiber is non-empty if and only if there is an element in the kernel of g for which z is one: we would like f(x) to be equal to b, so we would like f(x) to equal z·b with z equal to one. So what you do, or at least one thing you can do, is look at the kernel of g, which you can determine using the kernel-image algorithm; this kernel lives in Z^n × Z, and now I would like to know whether there is an element of it whose second coordinate is one. So let us just compute that second coordinate, the little z, and then you see that f^(-1)(b) is non-empty if and only if the map, let us call it h, which is the composition of the inclusion of the kernel with this second projection, if and only if the image of h is all of Z.
So you see that you apply the kernel-image algorithm twice: first you apply it in order to find a basis for this kernel, and next you apply it with this kernel in the role of Z^n and with Z in the role of Z^m, to determine a basis for the image of h. And if that basis vector is plus or minus one, well, that is the case if and only if h is onto, if and only if h is surjective, and that decides whether or not this fiber is non-empty. What you can also do, and let me just mention this by way of curiosity, since it is, I believe, actually an exercise in the notes and I do not want to give away too much of it, is do it in one stroke by putting the following lattice structure on this Z^(n+1): for a vector (x, z) you take the standard inner product for the x, then you take a number ω as before, in fact the same ω will do, which measures whether f(x) − z·b is zero, so ω times the length squared of f(x) − z·b, and then you take a second order of infinity, ω′, where ω′ is much bigger than ω, it is like the square of infinity, times z squared. And now what you do is compute a reduced basis of Z^(n+1) for this quadratic form, and then, if you want to have an element for which f(x) equals z·b with z = ±1, you see that this is the case if and only if any reduced basis of Z^(n+1) with this new q contains an element, in the notation of the previous proof it will be the first one after those spanning the kernel, whose length squared is at least ω′ but smaller than four times ω′.
So that means that this basis vector will have to be in the kernel of g, because f(x) − z·b being nonzero would make its q-value too large, and likewise |z| cannot be two or more, because that would also make it too large; as a consequence, you find that the z-coordinate of this vector will have to be plus or minus one. Somehow it looks as if what I said a moment ago was not completely right: ω′ multiplies z squared, so that if z is not 0 or ±1, the contribution is at least four times ω′. Okay, so if you do everything correctly, then you see that in one stroke you can solve linear equations over the integers, which is already pretty much what you would understand by linear algebra. So far we have been in the situation where the finitely generated abelian groups are free, and now we are going to pass to finitely generated abelian groups in general. So in the rest of this lecture we are going to discuss algorithms for finitely generated (that is what "fg" means) abelian groups, and the typical group we will call A. The first issue to be addressed is how we specify A. We want to compute with A, so we want to say what it means to give A as an input or as an output, or to say that we compute A: which numbers do you mean when you are talking about A in a numerical manner? And that should be done in such a way that the way of specifying A also comes with a way of representing the elements of A. The answer is what I told you already: each such A has a finite presentation, which is to say that it can be defined by a finite number of generators subject to a finite number of relations. And if you call the number of generators m and you map Z^m to A by sending the i-th standard basis vector to the i-th generator, then, because they are generators, this will be a surjective group homomorphism.
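For illustration, here is a self-contained sketch that decides solvability of f(x) = b over Z and produces one element of the fiber. It uses naive integer column reduction rather than the lattice trick just described (so it ignores coefficient growth), and the matrices in the example are hypothetical:

```python
def solve_fiber(M, b):
    """Given an m x n integer matrix M (describing f: Z^n -> Z^m) and a
    target b in Z^m, return some x in Z^n with M x = b, or None if the
    fiber f^{-1}(b) is empty.  Column-reduces M over Z, tracking the
    operations in U, then back-substitutes over Z."""
    m, n = len(M), len(M[0])
    A = [row[:] for row in M]                       # receives column ops
    U = [[int(i == j) for j in range(n)] for i in range(n)]

    def colop(j, k, t):                             # column j += t * column k
        for i in range(m):
            A[i][j] += t * A[i][k]
        for i in range(n):
            U[i][j] += t * U[i][k]

    def swap(j, k):
        for i in range(m):
            A[i][j], A[i][k] = A[i][k], A[i][j]
        for i in range(n):
            U[i][j], U[i][k] = U[i][k], U[i][j]

    c, pivots = 0, []                               # build column echelon form
    for r in range(m):
        while True:
            nz = [j for j in range(c, n) if A[r][j]]
            if len(nz) <= 1:
                break
            j, k = sorted(nz[:2], key=lambda t: abs(A[r][t]))
            colop(k, j, -(A[r][k] // A[r][j]))      # Euclid step on row r
        if nz:
            swap(c, nz[0])
            pivots.append(r)
            c += 1

    y, resid = [0] * n, list(b)                     # solve A y = b over Z
    for j, r in enumerate(pivots):
        if resid[r] % A[r][j] != 0:
            return None                             # divisibility fails
        y[j] = resid[r] // A[r][j]
        for i in range(m):
            resid[i] -= y[j] * A[i][j]
    if any(resid):
        return None                                 # b is not in the image
    return [sum(U[i][j] * y[j] for j in range(n)) for i in range(n)]
```

As noted in the lecture, adding arbitrary kernel elements to one returned solution sweeps out the whole fiber.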
And then, if you list the relations, say there are n of them, and you map Z^n to Z^m by sending the i-th standard basis vector to the coefficient vector of the i-th relation, then the image of Z^n will, if those are really defining relations, be exactly the set of Z-linear combinations of the defining relations, which is precisely the kernel of the map to A. So in other words, if you want to write down A, then what you have to write down is this data: you have to specify n, you have to specify m, and you have to specify an m-by-n matrix of integer coefficients. And that is A. And in principle, if you define the same group, or an isomorphic group, in a different manner, then a priori you have a different group, and you can of course ask whether there is an isomorphism between them; that is something that we will address. If you represent A in this manner, then you can also represent the elements of A, namely as linear combinations of the generators. So that means that you represent an element of A by means of an element of Z^m that maps to it. And of course it is certainly possible that a given element of A has many representations, and you may wonder whether you can recognize when two elements of Z^m define the same element of A. The answer is yes, because all you have to do is take the difference of the two elements and decide whether it is in the image of the relation map. So if I call that difference little b, then all I have to know is whether the fiber over b is non-empty: two elements of Z^m represent the same element of A if and only if their difference vector is in the image, that is, has a non-empty fiber over it. And by a remarkable coincidence, I just gave you a polynomial-time algorithm for deciding this.
Likewise, if we have some other group B, another finitely generated abelian group, which is presented to us in a similar way, with maybe a different map and different numbers of generators and relations, then we will also be interested in specifying homomorphisms from A to B. And it is very easy to see that such homomorphisms always come from homomorphisms between the free groups upstairs, which we already know how to write down, namely by means of a matrix. But again you have to be careful: not every arrow upstairs gives rise to an arrow downstairs. So if you have an arrow upstairs, what you want to decide is whether the image of Z^n under the composed arrow lands inside the image of f′. So you have to be able to decide inclusions between subgroups, and that is something you can do by applying the kernel-image algorithm in a way that I did not mention to you yet, but that you can surely find in the notes. So we also have a good way of talking not just about finitely generated abelian groups, but also about their elements and about their morphisms. And then it is very easy to see that you can compose morphisms: if you have a third group, you just have to multiply the matrices specifying the two individual morphisms. Okay, so that is the way you specify, or represent if you like, finitely generated abelian groups. And then there is a long list of operations that you will want to be able to perform in polynomial time; in this printing of the notes it is a list of eleven items. For example: compute the order of the group if it is finite; compute the order of an element; decide whether two group homomorphisms are equal. There is a long list, and I am going to treat only a representative selection of those, so that you see how you can compute with finitely generated abelian groups. And the general lesson is that everything that you can want in this situation is indeed possible in polynomial time, provided there is nothing unreasonable that you want. And what does it mean to be unreasonable?
Well, there are several unreasonable questions that you can ask. One is, for example, to ask for something that is too large to write down. Suppose, for example, that I have an endomorphism of a finitely generated abelian group, finite or infinite, and I want to take some high exterior power of it and compute the corresponding map. That is a very difficult thing to do in polynomial time, because when m gets a little larger, you will see that the number of bits that you need just to write down the answer is no longer polynomial, so you certainly cannot compute it in polynomial time. So you have to be a little modest in your requirements. And of course A may, for example, be free over some ring, and maybe you want to do such a thing in order to compute determinants. Computing determinants is really something that is not completely trivial; there is an exercise devoted to it, and it can happily be done: determinants of integer matrices, for example, can, when they are square, be computed in polynomial time. There is another issue that is considered unreasonable when you ask for it, and that is anything having to do with prime factorization. So for example, if A is a finite abelian group, then you can write it as a direct sum of cyclic groups of prime power order, where the p_i form a finite sequence of prime numbers and the exponents m_i are positive integers. And if, given a finite A, you ask for such an isomorphism, that is considered completely unreasonable, because, for example, when A is just a cyclic group itself, like Z/NZ, this comes down to factoring N into prime factors, and that is not something that people know how to do. So maybe what I am saying will not be true anymore a few years from now, when some member of my audience will have invented a polynomial-time factorization algorithm; that would really change the subject somewhat.
And there are related questions. For example, here is a question: is A, let us again take it finite, semisimple? An abelian group is called semisimple if every short exact sequence of this nature splits, so every subgroup has a complement that is also a subgroup. And that is equivalent to the exponent of A being squarefree, and there is no polynomial-time squarefreeness test for positive integers known to be available. Again, this is something that might change when someone finds such a test in polynomial time. So if you avoid these unreasonable questions, then anything that you can wish to do with finitely generated abelian groups, in particular the laundry list in the notes, can all be done in polynomial time. And in many cases, all you do is this: if you have a theorem that states that something or other exists, like the isomorphism that I wrote down, then there will typically be a proof of it in the textbooks, and usually those proofs are sufficiently constructive that you can turn them into an algorithm. In many cases, those algorithms already run in polynomial time, the only danger being coefficient blow-up, and the coefficient blow-up is actually most adequately taken care of by lattice basis reduction. So I want to illustrate this procedure by giving you an algorithm for one specific and very well-known theorem. But before I do this, let me tell you that for general finitely generated abelian groups one also has a kernel-image algorithm. That is exactly the same as before, except that now the groups are not required to be free over Z anymore. So everything I said about the kernel-image algorithm can also be done in full generality. It is a little dull, and therefore I do not want to spend my time on it; a lot of things are more interesting to talk about. And in this case there is actually also a cokernel algorithm, and that is actually so easy that I may as well mention it right away.
If I have here A, and I have a homomorphism from A to B, and I want to compute its cokernel, then I take the free groups that present A; the relations of A I do not even need. And this morphism is, as I told you, represented by a homomorphism upstairs. For B, on the other hand, we do need the relations; there we have the relation group. And now you see that if you mod out the free group over B by the image of the direct sum of those two maps (this is a direct sum sign, not a tensor product sign), then the quotient is equal to the cokernel of our map: modding out by the image of B's relation group alone gives you B itself, and if you also want to kill the image of A, then you kill the image of the other summand as well. So the cokernel is, in this case, the easiest of the three. And likewise there is the fiber algorithm that I mentioned: if I am given a group homomorphism from A to B and an element little b of capital B, then I can decide whether it is in the image, and if so, produce an element of A that maps to it. All that is pretty straightforward, and I will be using it several times in the sequel. So let me then discuss one particular algorithmic problem that is of interest in this context, for a given finitely generated abelian group. It is the well-known result that is often called the fundamental theorem on finitely generated abelian groups: you can write A as a direct sum of cyclic groups, where r, the number of infinite cyclic factors, is the rank, and then you still have a finite number of finite cyclic factors Z/N_iZ, say M of them, where M is a non-negative integer. These N_i are integers greater than one, and they divide each other: N_M divides N_{M−1}, and you keep going until N_1, and all of them are at least two. These requirements are quite different from the previous requirements that the orders be prime powers.
One reason that you put them in is that these numbers that I wrote down are then uniquely determined by the isomorphism class of A. So if you have an algorithm that, given A, produces an isomorphism of this sort, then you also have an isomorphism test: you can tell whether two given finitely generated abelian groups are isomorphic to each other. So let me try to spend the remaining amount of time sketching how you go about this, and my advice is to just take a proof of this theorem and make it algorithmic. At least one proof of this theorem consists of two statements. The first statement is this: if T in A is the torsion subgroup, that is, the subset of all elements of A of finite order (because A is abelian, that will be a subgroup), then first of all A/T is free, so that will be Z^r, and the exact sequence that you get out of this, mapping A onto A/T with kernel T, splits, which means that A is isomorphic to (A/T) ⊕ T. And that is pretty standard: the fact that A/T is free is easy to prove, and once it is free, the sequence must split by general properties of free groups. So this is something that you want to make algorithmic. But before I tell you how to do that, let me pass to the second part of the proof. So now A/T is my Z^r, and T is the piece that is left. Take now a finite group, replace A by T, and define the exponent of A: that is the lcm of the orders of all of its elements, so the least positive integer that kills, so to speak, every element of A. Then it is true that there is an element in the group whose order is equal to the exponent, and also the exact sequence that you get by dividing out the subgroup generated by that element splits. So if you believe that, then your capital A will be isomorphic to a cyclic group direct sum a smaller group.
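The invariant factors N_i of the fundamental theorem are exactly the diagonal of the Smith normal form of a relation matrix. The following naive pure-Python sketch diagonalizes by repeated Euclid steps and then enforces the divisibility chain; it is not the lecture's method, and it ignores the coefficient blow-up that the lattice-based techniques are designed to control:

```python
from math import gcd

def smith_diagonal(M):
    """Invariant factors d_1 | d_2 | ... of an integer matrix M (the
    diagonal of its Smith normal form), returned in increasing
    divisibility order, so that
    coker(M: Z^n -> Z^m) ≅ Z/d_1 ⊕ ... ⊕ Z/d_k ⊕ Z^(m - k);
    factors with d_i = 1 are trivial and may be dropped."""
    A = [row[:] for row in M]
    m, n = len(A), len(A[0])
    d = []
    for t in range(min(m, n)):
        pos = [(i, j) for i in range(t, m) for j in range(t, n) if A[i][j]]
        if not pos:
            break                                   # rest of the matrix is 0
        while True:
            i, j = min(pos, key=lambda p: abs(A[p[0]][p[1]]))
            A[t], A[i] = A[i], A[t]                 # move pivot to (t, t)
            for row in A:
                row[t], row[j] = row[j], row[t]
            p, clean = A[t][t], True
            for i in range(t + 1, m):               # clear column t
                q, r = divmod(A[i][t], p)
                clean = clean and r == 0
                if q:
                    for jj in range(t, n):
                        A[i][jj] -= q * A[t][jj]
            for j in range(t + 1, n):               # clear row t
                q, r = divmod(A[t][j], p)
                clean = clean and r == 0
                if q:
                    for ii in range(t, m):
                        A[ii][j] -= q * A[ii][t]
            if clean:                               # row and column cleared
                d.append(abs(p))
                break
            pos = [(i, j) for i in range(t, m) for j in range(t, n) if A[i][j]]
    # enforce the divisibility chain, using Z/a ⊕ Z/b ≅ Z/gcd ⊕ Z/lcm
    for i in range(len(d)):
        for j in range(i + 1, len(d)):
            g = gcd(d[i], d[j])
            d[i], d[j] = g, d[i] * d[j] // g
    return d
```

For example, the matrix diag(4, 6) yields [2, 12], reflecting Z/4 ⊕ Z/6 ≅ Z/2 ⊕ Z/12; note also that the exponent of the torsion part is simply the largest invariant factor.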
And if you can really find the splitting, meaning a group homomorphism back such that the composition is the identity of the cyclic factor, well, then you see that you already have the first direct summand. In this description, N_1 is the exponent of the finite piece, and if you have an element of order N_1, like this little a, then it generates a cyclic direct summand of order N_1. If you split it off, then you have a smaller group, at least if A is not the trivial group, and you can simply continue with this new group, which is a direct summand of A, and unravel the entire group. So it is more or less clear that if you can do everything that I wrote down here explicitly in polynomial time, then you can write down your polynomial-time algorithm for writing A in this manner. Well, let me not say too much about this in the five minutes that I still have, but let me say a few things. So first of all, I have this presentation, Z^m maps onto A, that is given. And here I have this map, let us call it alpha, of which A is the cokernel. I like to think of alpha as an inclusion, if you do not mind, since I can replace it by its image anyway. So let me just say that I have a subgroup H of Z^m, a free subgroup of which I know a basis, such that A is Z^m/H. And then it turns out that if you pass to the orthogonal complement, so let H† be the orthogonal complement of H with respect to the standard inner product on Z^m, that is, the set of x in Z^m whose inner product with every element of H is zero, then that is the same as the kernel of the map from Z^m (not from A, that was a typo, it should be Z^m) to the homomorphism group Hom(H, Z) that sends little x to taking the inner product with x. Now H is a free group, so Hom(H, Z) is a free group, and Z^m is also a free group, and we can compute kernels of homomorphisms between free groups. So that is the orthogonal complement of H, and you can compute it in polynomial time by what I told you.
And then it turns out that if A is Z^m/H, its torsion subgroup can also easily be described via the double orthogonal complement of H. So you can compute H†, and you can repeat the operation. Clearly H is contained in the orthogonal complement of its orthogonal complement, and if things were really proper linear algebra over a field, they would be equal. But in our case, since Z is not a field, the quotient is at least finite, and it is actually equal to the torsion subgroup. And out of this computation you not only get T, but, when you look a little at the images of those maps rather than the kernels, you also get a basis for A/T. So that means that everything I have told you in step one is perfectly doable in polynomial time. Okay. Well, for the second step there are several ways of dealing with it, but a very elegant one is to first study how you compute with homomorphism groups. If A and B are finitely generated abelian groups, then the group of homomorphisms from A to B is also finitely generated. And if you have good control over this homomorphism group, and the notes explain to you how you gain that control, then you will see that everything I say here can be phrased in terms of homomorphisms. First of all, if I take A equal to B, then this is a ring, the endomorphism ring of A, and the exponent of A is simply the characteristic of that endomorphism ring, which you can easily compute with the kernel-image algorithm. So that means that you get this exponent of A for free. The construction of the element little a I cannot talk to you about in minus six seconds, but it is something you use a so-called prime basis algorithm for. And then finally, splitting the sequence: well, that is a theorem, so it splits, and algorithmically that is again done by finding a homomorphism that maps to the identity homomorphism here.
So again, by working with those homomorphism groups and finding an element of the fiber over the identity element there, you can find that this sequence splits. So that is then the conclusion of this very brief sketch of the polynomial-time algorithm that gives rise to such a direct sum of cyclic groups. And tomorrow I hope to be able to reap the fruits of all this labor and tell you how you use all this material in order to solve the interesting problems that all these abstractions have been conceived for. Thank you for your attention. — All right, well, let's thank Henrik for the beautiful lecture. Are there questions? Everything in the chat was resolved; are there any further questions? There were a few questions asked while you were giving your lecture, but I think all of those have been answered, and I am not seeing any other questions at the moment. Thank you. All right, so if there are no further questions, let's thank Henrik again.