Okay, so today we are going to study an economic model of production, and in the process we shall also pick up some theoretical nuggets concerning positive matrices. Note that these are not the positive definite matrices we discussed earlier; by a positive matrix I mean a matrix in which every entry is strictly greater than 0. There is also a theory of non-negative matrices, and all of this belongs to a very rich domain known as Perron-Frobenius theory. We shall not have the time to delve deep and do justice to this entire topic; Perron-Frobenius theory can in itself constitute a long course. So we shall only touch upon certain aspects of positive matrices, and if you are interested I urge you to go through the analogous results that exist for non-negative matrices as well. Now, you might think that a positive matrix seems like an artificial construction: in which application does one ever find a positive matrix? But there are in fact a lot of applications. The example I have in mind is the so-called Leontief model. Suppose you have a certain number of industries, say n industries, each manufacturing a different kind of product. This is a very closely knit group of industries, in the sense that every industry needs the output of every other industry, including itself, to manufacture its own product. In other words, industry one cannot get its product made if any of the other industries shuts shop; every single unit that industry one manufactures relies on the output of every other industry, including its own. (You might ask where it got its very first product from; that is a question we will not get into.) The same holds for every industry.
Now this gives rise to an n-by-n matrix L of the following form. Every column of this matrix records exactly how many units of each industry's product are needed to manufacture one unit of the given industry's product:

    L = [ L11  L12  ...  L1n ]
        [ L21  L22  ...  L2n ]
        [ ...            ... ]
        [ Ln1  Ln2  ...  Lnn ]

Reading the first column: in order to manufacture one unit of its output, industry one requires L11 units of its own output, L21 units of industry two's output, and so on, down to Ln1 units of industry n's output. If you gather together that many units of the first industry's product, that many of the second, and so on until the last, and combine them, you can manufacture one unit of industry one's product. That is the data encapsulated by this matrix. By the same token, industry two requires L12 units of industry one's output, L22 units of its own output, and so on until Ln2 units of industry n's output, in order to manufacture one unit of its own product. So that is the basic description.

Now there is also the notion of an industry being profitable. When is an industry said to be profitable? Very loosely speaking, if you are manufacturing more than you are consuming, then you are profitable. So a loose description would be: if the sum of the entries of a column is less than unity, then that column corresponds to a profitable industry. Why do I say that? It means the industry gobbles up a total of less than one unit from all the industries, including itself, in order to manufacture one unit of its output.
So in the language of this matrix: if L_i, the ith column of L, has a column sum less than unity, then the ith industry is said to be profitable. Now, if this system has to run in a closed setting, what do you think is the equilibrium configuration for such a system, so that none of the industries shuts shop, they all keep running forever without ever having a crisis, while at the same time not being profligate, which is to say wasteful? What if all the column sums are equal to one? Intuitively you would think that is when they keep manufacturing till eternity, but we should be able to translate that into linear-algebraic parlance. So what are we looking for? How many units must each industry manufacture to have just about enough for itself as well as for the others? Suppose each industry's output is given by the vector (x1, x2, ..., xn). Then what exactly are we asking of x1? If industry two manufactures x2 units, how many units of industry one's product must it consume? L12 times x2. If industry n manufactures xn units, it consumes L1n times xn units of industry one's product. So if I sum these up, L11 x1 + L12 x2 + ... + L1n xn must equal x1. If I may use the term "break even", even though it has a different connotation in finance, the required condition is

    x1 = Σ_j L1j xj,  x2 = Σ_j L2j xj,  ...,  xn = Σ_j Lnj xj,

which translates to L x = x. Now just recall the description of a positive matrix that I gave at the beginning. Is L not going to be a positive matrix? Every entry has to be not just greater than or equal to 0 but strictly greater than 0, because we have said that everyone depends on everybody else. So none of these entries can be 0.
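As a concrete illustration of this equilibrium condition, here is a minimal numerical sketch. The three-industry consumption matrix below is a hypothetical example, not one from the lecture; its columns all sum to one, and we verify that it admits a strictly positive output vector satisfying L x = x:

```python
import numpy as np

# Hypothetical 3-industry consumption matrix (illustrative numbers only):
# entry L[i, j] is the number of units of industry i's product needed to
# manufacture one unit of industry j's product.  Every entry is strictly
# positive and every column sums to exactly 1: nobody is profitable,
# nobody is wasteful.
L = np.array([[0.4, 0.3, 0.2],
              [0.3, 0.5, 0.3],
              [0.3, 0.2, 0.5]])
assert np.allclose(L.sum(axis=0), 1.0)

# Equilibrium outputs solve L x = x, i.e. x is an eigenvector of L
# for the eigenvalue 1.
vals, vecs = np.linalg.eig(L)
k = np.argmin(np.abs(vals - 1.0))
x = np.real(vecs[:, k])
x = x / x.sum()                 # normalise so total output is 1 unit

print(np.allclose(L @ x, x))    # True: every industry just breaks even
print(np.all(x > 0))            # True: a genuinely positive output vector
```

That the eigenvalue 1 exists here follows from the columns summing to one; that the corresponding eigenvector can be taken strictly positive is exactly the Perron-Frobenius property developed in the rest of the lecture.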
So we are essentially faced with the task of solving this equation for a positive matrix, which piques our interest in what sort of eigenvalues such matrices can have, because apparently the condition can be met if the positive matrix L has an eigenvalue equal to 1. I have now introduced this model; we will revisit it towards the end of this lecture, but before that we have to develop a few more tools, particularly concerning positive matrices. I will not be able to prove every result; as I said, Perron-Frobenius theory, the theory of positive matrices, is quite detailed. I have it all jotted down, but maybe we will not have time for all of the proofs. I will nonetheless give you the main results, so that you can see how this problem can be solved, or rather when this problem is in fact solvable. So we will get back to the Leontief model towards the end of this lecture, but first a bit of groundwork needs to be done. In general, let me talk about norms of vectors and matrices; these need not be norms induced by inner products, they are general norms.
So if you have, for example, a vector x = (x1, ..., xn), then its p-norm is given by the sum of the pth powers of the absolute values, or moduli, of the individual components, and then the pth root thereof:

    ||x||_p = ( |x1|^p + |x2|^p + ... + |xn|^p )^(1/p).

This is a general norm. Interestingly, as you let p tend to infinity, what do you think this turns out to be? Any guesses? How do we prove it? It is not that difficult; this is, after all, a finite sum. Among |x1|, |x2|, ..., |xn| there is of course a maximum; maybe multiple entries are equal to that maximum, but at least one attains it. Let us call that maximum M and pull it out:

    ||x||_p = M ( Σ_i (|xi| / M)^p )^(1/p).

What do you know about each of the ratios |xi| / M? Each is at most unity, equal to unity exactly at those locations which correspond to the maximum value, and all the others, being smaller than unity in magnitude, go to 0 as p tends to infinity. What happens to the ones equal to unity? Unity raised to the pth power is still unity. Say k of them are equal to unity; then the bracket contributes k^(1/p), and as p tends to infinity that is k^0, which is again unity. So the limit is indeed equal to M. Which is to say: the infinity norm of a vector, an n-tuple of numbers, is just the largest absolute value among all of its entries. Whichever is largest in magnitude, you pick that one out.
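A quick numerical check of this limit; the vector below is arbitrary, chosen just for illustration:

```python
import numpy as np

# An arbitrary illustrative vector; its largest absolute entry is 7.
x = np.array([3.0, -7.0, 2.0, 5.0])

def p_norm(x, p):
    """(|x1|^p + ... + |xn|^p)^(1/p)."""
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

for p in [1, 2, 10, 100]:
    print(p, p_norm(x, p))      # the values decrease towards 7

# By p = 100 the p-norm is already within 1e-3 of max |x_i| = 7.
print(np.isclose(p_norm(x, 100), np.max(np.abs(x)), atol=1e-3))  # True
```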
Now, by the same token, suppose you have a matrix A that is m-by-n and you want to define its infinity norm. This is an induced norm, not something that you get from an inner product like the Frobenius norm. Every time you put something on the left, the things you use to define it on the right must be well defined, and here the right-hand side is well defined: Ax is an m-tuple and x is an n-tuple. So we define

    ||A||_∞ = max over x ≠ 0 of ||Ax||_∞ / ||x||_∞,

and instead of searching over all possible x, can we not just say this is the same as

    ||A||_∞ = max over ||x||_∞ = 1 of ||Ax||_∞,

so that we can get rid of the denominator? Now think about the entries of Ax. They look like Σ_j a1j xj, Σ_j a2j xj, ..., Σ_j amj xj, and you are basically looking for the maximum modulus among these m entries. One of them is going to correspond to the maximum; say it is | Σ_j akj xj | (there could be multiple such rows, but take one of them). How can you bound this? Can I not say

    | Σ_j akj xj | ≤ Σ_j |akj| |xj| ≤ Σ_j |akj|,

the first step being the triangle inequality with the absolute value operator, and the second using |xj| ≤ 1? And the last quantity is at most the maximum row sum of |A|, by which I mean the matrix of the absolute values of the entries; this is an abuse of notation that we shall use very frequently today.

Is this always going to be a strict inequality? No: I am allowed to choose the xj's, as long as no component exceeds unity in magnitude. So whenever the entry akj is positive, I choose xj = +1, and whenever it is negative, I choose xj = -1. With that choice the sum equals Σ_j |akj| exactly. So that is exactly what this norm is going to be. In the generic sense: for a vector you look for the maximum entry in absolute value; for a matrix you take each entry of a row, take its absolute value, sum them up, and the maximum that any row attains is the infinity norm. The inequality can be made into an equality, and this is your best possible choice: you are trying to maximize the sum, and you cannot make any of the xj's larger than one in magnitude. Even if some akj is very large and you want to give it a very high weight, the maximum weight you can give is one. Giving weight +1 to the positive entries and -1 to the negative entries is the best trick available, and therefore the maximum corresponds to the max row sum. So indeed, the infinity norm of a vector is its largest absolute entry, and the infinity norm of a matrix turns out to be the maximum of the absolute row sums.
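The max-row-sum formula, and the sign-pattern choice of x that attains it, can be checked numerically; the matrix here is an arbitrary illustration:

```python
import numpy as np

# An arbitrary illustrative matrix.
A = np.array([[ 1.0, -4.0,  2.0],
              [ 3.0,  0.5, -1.0]])

# Maximum absolute row sum: row 0 gives 1 + 4 + 2 = 7.
row_sum = np.max(np.sum(np.abs(A), axis=1))
print(np.isclose(row_sum, np.linalg.norm(A, np.inf)))      # True

# The maximising x: take the row k with the largest absolute row sum
# and set x_j = sign(a_kj), so every term contributes positively.
k = np.argmax(np.sum(np.abs(A), axis=1))
x = np.sign(A[k])
print(np.isclose(np.linalg.norm(A @ x, np.inf), row_sum))  # True
```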
Are these two important notions clear? This holds for any generic matrix; we have not even gone into positive matrices yet. You agree that this is the best choice of x we can make in order to maximize the ratio? All right, so now we are going to look specifically at positive matrices and see where all this is leading us. Before that, another general notion has to be described: the so-called spectral radius of a matrix. Yes, it is about eigenvalues, but it need not be an eigenvalue in itself. The spectral radius of A is defined as

    ρ(A) = max { |z| : z is an eigenvalue of A }.

Do you see why the spectral radius need not be an eigenvalue in itself? The spectral radius is always going to be a non-negative real number; it is a radius, as the term suggests. But the eigenvalue with the largest absolute value need not be a real number; it can be complex. The spectral radius itself, being an absolute value, can never be complex.

However, it turns out that for positive matrices, and this is why the theory of positive matrices is beautiful, we can do almost everything that we do with positive numbers. The arithmetic is that simple; the analysis is not, which is why it is going to take us a while to get there. You multiply both sides of an inequality by a positive number and the inequality remains invariant; the sign does not flip. Several other important analogies that hold for positive numbers also carry over to positive matrices, as you will see. It will turn out that for a positive matrix the spectral radius is exactly an eigenvalue, and I said *the* eigenvalue, which means that no other eigenvalue can have the same magnitude as the eigenvalue corresponding to the spectral radius. Which means: if you look at the complex plane and draw a circle of radius ρ(A), for a positive matrix you are going to have exactly one eigenvalue sitting on that circle, and there can be no other eigenvalue anywhere on it. Everything else is contained strictly inside, because by definition the spectral radius is the maximum. (I almost forgot to write the most important word, "max", in the definition above.) What is even more interesting, as we shall hopefully see if we have time, is that this eigenvalue has algebraic multiplicity exactly one, so it cannot even be repeated. So for a positive matrix there is a unique largest eigenvalue: it is real, it is non-repeating, and its geometric and algebraic multiplicities are both equal to one. These are all properties of positive matrices. We shall see how far we can get with some of those proofs and explanations, but even if we do not get there, I will at least ask you to remember this much, because this is what is going to be very useful for us in the Leontief model that we introduced earlier.
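These Perron-Frobenius properties are easy to observe numerically. The sketch below draws a random strictly positive matrix (the particular matrix is arbitrary; any strictly positive matrix exhibits the same behaviour) and checks that the eigenvalue of largest modulus is real, positive, and strictly dominant:

```python
import numpy as np

# A random strictly positive matrix (any such matrix will do).
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(4, 4))

eigvals = np.linalg.eigvals(A)
i = np.argmax(np.abs(eigvals))
perron = eigvals[i]                  # eigenvalue of largest modulus
rho = np.abs(perron)                 # the spectral radius
others = np.delete(eigvals, i)

print(np.isclose(perron.imag, 0.0))  # True: the Perron eigenvalue is real
print(perron.real > 0)               # True: and positive
print(np.all(np.abs(others) < rho))  # True: all others strictly inside the circle
```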
So once we have defined the spectral radius, let us establish some basic facts. I am going to use a shorthand notation: A > 0 means that A is a positive matrix (did I use this symbol for positive definiteness earlier? Here it simply means every entry of A is positive). First fact: suppose A > 0 and v is a non-negative vector not equal to 0. What can you say about Av? Just think a bit. I am saying Av has to be strictly positive; it cannot be 0 at any entry. Why? v has at least one non-zero entry, otherwise it would be the zero vector, and any non-zero entry it has must be positive, since v can have some zero entries but certainly no negative entry. Now hit v with A: all of the entries of Av must be positive. Even if v has only one non-zero entry, Av just picks out the corresponding column of A and scales it by a positive factor. So this is really very straightforward; I am not even writing out a proof here, but I hope you are convinced it is true. Provided A is positive and v is non-negative and non-zero, Av must be positive.

What about a second property? Suppose u1 > u2. Again I am dealing with them like numbers, but that is precisely the point: u1 and u2 are vectors, and when I say they have this relation, it means that every entry of u1 is greater than the corresponding entry of u2, so that u1 - u2 is a positive vector. That is essentially what is meant by u1 > u2. Now let A be a positive matrix. Then A u1 > A u2. To prove this I need only show that A acting on u1 - u2 is positive, but that is already done: if u1 - u2 is a positive vector, then A(u1 - u2) must also be a positive vector by the first fact, and therefore A u1 > A u2, where you compare the n-tuples entrywise; every comparison here is entrywise. So you see how these are exactly similar to the arithmetic we do with numbers, no different. You only have to imagine the operations a bit in your head and you will arrive at these conclusions; these early parts of the proofs do not require too much.
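These two facts are immediate to check numerically; the matrices and vectors below are arbitrary illustrations:

```python
import numpy as np

A = np.array([[0.2, 0.7],
              [0.5, 0.1]])        # a strictly positive matrix

v = np.array([0.0, 3.0])          # non-negative and non-zero (one entry is 0)
print(np.all(A @ v > 0))          # True: Av is strictly positive anyway

u1 = np.array([2.0, 5.0])
u2 = np.array([1.0, 4.0])         # u1 > u2 entrywise
print(np.all(A @ u1 > A @ u2))    # True: multiplying by A preserves the order
```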
Now, the one proof that we are going to see, and with which we shall conclude this module, is the following. Suppose v is a positive vector; I am using the shorthand notation and not being very technical here, because we do not have the time for a fully detailed write-up. Let A v > v, which is to say that whenever A acts on v it generates a new vector each of whose entries is greater than the corresponding entry of v. Here A is a square positive matrix, of course, like the one we have in the Leontief model; otherwise you cannot compare. Then the claim is that A^k v > v for every k ≥ 1. Is this obvious? Perhaps, but how do you prove it? The first time you apply A you are only comparing Av with v. Hit both sides with A: A(Av) > Av, since A acting on the positive vector Av - v is positive, and Av > v, so A^2 v > Av > v. If you want to see it in a slightly more elegant fashion, you can say that

    A^2 v - v = (A^2 v - A v) + (A v - v) > 0,

because this is the sum of two positive vectors. That is the base step of the induction, if you like. Assume it to be true for some exponent, A^ℓ v - v > 0, and hit it with A again: A(A^ℓ v - v) > 0, which means A^(ℓ+1) v - A v > 0, and adding A v - v > 0,

    A^(ℓ+1) v - v = (A^(ℓ+1) v - A v) + (A v - v) > 0,

again the sum of two positive vectors, which has to be a positive vector. Therefore A^(ℓ+1) v > v. Whichever way you like, stacking the inequalities up or doing a bit of induction, that is also true.
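The induction can be watched in action; below, an arbitrary strictly positive matrix satisfying A v > v is iterated a few times:

```python
import numpy as np

A = np.array([[0.6, 0.6],
              [0.7, 0.5]])        # strictly positive

v = np.array([1.0, 1.0])
assert np.all(A @ v > v)          # A v = [1.2, 1.2] > v entrywise

# Then A^k v > v for every k >= 1.
w = v.copy()
for k in range(1, 6):
    w = A @ w                     # now w = A^k v
    print(k, np.all(w > v))       # True at every step
```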
So these are some interesting properties, and I hope you can already begin to see the analogy between operating on a vector, particularly a positive vector, with a positive matrix, and what you do when you multiply a positive number by another positive number, insofar as inequalities are concerned. These are going to have very interesting and far-reaching consequences, as we shall see shortly in the next module.