I propose to leave these three problems in chapter eight for the homework, due let's say Friday this week, but if you've done it by Monday, that's fine. As for the final exam: I'll post it on the web tonight and email everybody, and it will be due on Wednesday, May 12, at 5 p.m. Originally it was supposed to be in class on Monday, a week from today, but since it's a take-home I figured you can have two more days. Try to have a look at it before Wednesday, so if there are any questions we can discuss them on Wednesday. I'll also have some extra office hours: this week on Tuesday, tomorrow, from 1:30 to 3, and the usual Wednesday 10 to 11, though that's not a very popular time. I'll also be here on Friday between, I guess, 10 and 12, so if you have questions feel free to stop by. I think that's all I had for the announcements. I handed back the solutions to the last homework, and I'll post solutions to homework number 10 on Friday, so please turn that in by Friday at noon so I can post solutions. Let's see, anything else? I was sad to see that the oil spill has become such a huge issue; it keeps coming. Those of us who had a good time in New Orleans for some time can appreciate how bad this is. So the plan for today and Wednesday is to talk a little bit about Markov chains, unless there are any questions. [Student questions.] Yes, it's a good idea to make a copy of your own homework when you submit it. I'll try to grade it by, say, Monday next week. No, there's no class; they've changed the policy. The final will be on chapters six, seven, and eight, so on chaotic behavior and probabilistic models.
Okay, so let's just talk about Markov chains. We have limited time, so I don't know how much we'll be able to do, but some of you who already took, or are taking, the stochastic modeling class have probably seen this extensively. The point is that I'd like to talk about processes that are random and have only a discrete number of states. That is somewhat limiting, in that you can only use this for certain situations, but it's a start on the broader topic of stochastic models. The setting is this: the underlying physical system takes only a finite number of states and can transition between states with some probability. Such systems can usually be modeled with so-called Markov chains. What's important to understand is that transitions between states can only take place at discrete moments of time, so this is a discrete stochastic model: the transitions take place at prescribed times, say every second, every hour, every day (they don't even have to be regularly spaced, just prescribed). It is not a continuous dependence of the state on time. Schematically: if the physical system can take one of three states, then at every moment of time the system is in state 1, state 2, or state 3, with the possibility of moving, at those discrete times, from, say, state 1 to state 2 with a certain probability. We introduce the notation p_ij for the probability that the system, being in state i, transitions (jumps) to state j. Likewise there is some probability of moving from state 2 to state 3, and so forth.
Of course you could also have p31, and the system could stay where it is: staying in state 1 has probability p11, and similarly p22 and p33. Now, if any of these probabilities is 0, you may just not draw that arrow, that edge. It's also possible for the system to move from state 2 back to state 1, from state 1 to state 3, and so forth, so in general this would be a complete graph; just keep in mind that some of these probabilities may be zero, and the example we're going to work has some zero probabilities. Certainly these numbers are between 0 and 1, being probabilities. And there's another very important property: if you're in any of the states, then at the next iteration of the system, the next time step, you have to go somewhere (possibly back to the same state). If you're in state 1, it's certain, with 100 percent probability, that you'll be in state 1, 2, or 3 at the next time step, and that amounts to saying that for each state i, the sum of p_ij over j equals 1. With three states: for each i = 1, 2, 3, the sum over j of p_ij is 1. To keep this self-contained, it's convenient to collect these numbers in a matrix, the transition matrix: capital P stands for the matrix with rows (p11, p12, p13), (p21, p22, p23), (p31, p32, p33) in this three-state case. What's important about this matrix is that the row sums, the sums along each row, are all equal to 1, and of course all entries are between 0 and 1 (some could be 0 or 1). Such a matrix P is called
a stochastic matrix if all entries are nonnegative and each row sums to 1. So here's the example to start with: the inventory problem, number 8.1. It has to do with an inventory of aquariums. At the end of each week an inventory is taken, and depending on how many were sold during that week, a certain ordering strategy for the next week is used. The policy is this: if there is nothing left, that is, everything was sold by the end of the week, then three aquariums are ordered for the next week; if there is at least one aquarium in stock, nothing is ordered. The policy is based on the observation that the store sells an average of only one aquarium per week. So even if there is only one in stock at the end of the week, it's likely that no more than one, on average, will be sold, so they don't order. Now why can this system be modeled using a Markov chain? The simplest approach is to think about the number of aquariums at the beginning of each week. At the beginning of each week you could have 1, 2, or 3. Can you have 0? Well, if you had 0 at the end of a week, the next day you order three, so you really never begin a week with 0; that's why 0 is not a state. These numbers represent the state: the inventory at the beginning of each week. You cannot have 0, because if you sold out, you ordered three; you always start the week with something. The
question, though, asks: what if during the next week the demand is actually higher than your inventory? What's the probability of that happening? (We're assuming the time interval between ordering and receiving is negligible; yes, we're ignoring that, the simplest possible model: as soon as we order, we have stock. It's next door, a garage sale, whatever.) All right, so there are going to be probabilities of things happening. For instance, what's the probability that you start one week with an inventory of 1 and the following week you have an inventory of 2? To get to the actual probability we need to go through one more step, which is the following. Let Xn be the number of aquariums at the beginning of week n, and let Dn be the demand for aquariums during week n. We're trying to figure out the probability of moving from one state to another. For example, the probability that you are in state 1 and the next week you are again in state 1 is obviously equal to the probability that the demand during that week is 0. (We'll talk a little more soon about this, what's called conditional probability.) Clearly, if you start with one aquarium and nothing is sold, the following week you still have one aquarium. So the first thing to decide is: what should the probability be that the demand is 0, or any given number? The demand for aquariums is a discrete random variable; it can take values 0, 1, 2, and so on. The demand could be huge, and
in theory it could be any nonnegative integer. So the question is: what is its probability distribution? The distribution is assumed to be Poisson (not exponential; this is discrete), because it models the number of arrivals, if you want, during any given week: there will be a very low probability that the demand is high, and a reasonably high probability that it is small. So our assumption is that P(Dn = k) = e^(-lambda) lambda^k / k!. This is Poisson; now what is the rate? Recall that the expected value of the Poisson distribution is lambda. In our case, the observation was that the store sells an average of one aquarium per week, so the assumption is that lambda = 1, in which case P(Dn = k) = e^(-1) / k!. Let's just write these down. The probability that the demand during week n is 0 is e^(-1), 1 over e, which is about 0.368. The probability of a demand of 1 is the same, e^(-1), about 0.368. The probability of a demand of 2 starts to drop: e^(-1)/2, half of the previous, about 0.184. And so forth; the probability of a demand of 3 keeps dropping, e^(-1)/6. Is this a fair assumption? Well, it depends on the market conditions; it is a working assumption. These probabilities will determine the transition probabilities, like the p11 we'd have to compute.
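As a quick numerical check (a sketch in Python rather than the MATLAB used in this course), the Poisson demand probabilities with rate lambda = 1 can be tabulated directly:

```python
import math

def poisson_pmf(k, lam=1.0):
    """P(D = k) for a Poisson random variable with rate lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# Weekly demand probabilities for lambda = 1 (one aquarium sold per week on average)
for k in range(4):
    print(k, round(poisson_pmf(k), 3))   # 0.368, 0.368, 0.184, 0.061
```

The first two values tie (e^(-1) in both cases), and the probabilities drop quickly after that, which is exactly the behavior described above.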
Now let's go back to the transition probabilities. p11 is the probability that X_{n+1} = 1 given that Xn = 1, which is the probability that Dn = 0, so 0.368. Let's compute the next one; the easy ones are like this: the probability that X_{n+1} = 2 given that Xn = 2 is characterized by the same event, no demand during that week, so it's the same 0.368. Unfortunately, p33 is not going to be that easy. p33 is the probability that at the beginning of week n+1 there are three, given that Xn = 3. Here there are two possibilities, two mutually exclusive (disjoint) events: either there was no demand, or everything was sold, all three sold during that week, and then you replenished back to three. (Remember, if you sell everything during a week, you always start the next week with three.) So how do you compute this? You could do it as a series, but it's easier to think about the complementary event: p33 = P(Dn = 0) + P(Dn >= 3) = 1 minus the probability that neither of those happens, which is 1 - P(Dn = 1) - P(Dn = 2). Now you can compute this: 1 - 0.368 - 0.184, which ends up being 0.448 to three decimals. Again, it wasn't clear at the beginning how to compute this, but now we can start to
draw a picture of this Markov chain. It basically has loops p11, p22, and p33, and of course the other entries need to be computed. For instance, what's the probability that you start with 1 and the next week you have 2? Zero: it's impossible to end up with 2 during any given week unless you started the previous week with 2. [Student question about the p33 trick.] Remember, the demand is a random variable taking a countable set of values, so it has a probability distribution: a probability of being 0, of being 1, and so forth. Each of these is the probability of an event; those events are mutually exclusive, and their union is the certain event, because the demand has to equal something each week. All we're using is that these probabilities sum to 1, and we're looking at the outcomes 0 and "3 or more" (I'll use red to indicate: this is 0, 1, 2; 0 is here). The probability of the event that this happens or this happens is 1 minus the sum of the other two probabilities; that's all, since these are mutually exclusive events and everything sums to 1. Again, this is just to avoid computing a series, the series from 3 to infinity of those exponentials. So, there's nothing from 1 to 2; you can draw a 0 there or just avoid drawing that edge. Now something that's actually not zero: from 1 to 3. The probability that X_{n+1} = 3 when you started with Xn = 1: how can you write that event in terms of D? It's D >= 1. Again, the demand could be very high; you can only sell what you have, but the demand could be very high, with
low probability. So this is the same trick again: p13 = 1 minus what? One minus the probability of what event? That Dn = 0. So p13 = 1 - 0.368 = 0.632. That's a pretty high probability of moving from 1 to 3. We could keep doing this, but let me just write down what the matrix is; that's the nice thing about putting everything in a matrix, you can see everything at once without having to draw all those arrows. The first row is 0.368, 0, 0.632. For the second row: we haven't computed p21, but it corresponds to the demand being exactly 1, so it's 0.368; p22 = 0.368; and we haven't really computed the probability that you start with 2 and end with 3, but again that translates into an event for the random variable D, namely Dn >= 2, which gives 0.264. The last row works the same way: p31 = P(Dn = 2) = 0.184, p32 = P(Dn = 1) = 0.368, and p33 = 0.448 as computed. So this is a 3-by-3 stochastic matrix. Now the question is: what do you do with this? What is this matrix, these transition probabilities, used for? I should talk a little about conditional probabilities, so let me make a parenthesis here, and so that we don't have to carry around these ugly numbers, let me simplify: consider a simpler three-state Markov chain, with the same three states 1, 2, and 3. Imagine the random variable X1, which is just the state at time 1, has the following distribution: it's equally likely that the state is 1, 2, or 3, so P(X1 = 1) = P(X1 = 2) = P(X1 = 3) = 1/3. And consider the transition matrix to be the following; let me write it down:
P = [1/3 1/3 1/3; 7/10 3/10 0; 1 0 0]. I guess I should have said what X0 is: I want X0 = 1, so we start at state 1 and move at time 1 with equal probability: 1/3 probability of being back at 1, 1/3 of being at 2, 1/3 of being at 3. So the question is: what will X2 be, the state after the second step? It's going to be 1, 2, or 3, each with a probability we need to compute. Let's take it one at a time: what's the probability that the state after the second iteration is actually 1? I gave the transition matrix, but it's not immediately clear, because there are several routes: you could stay in state 1 at both steps; or you could go to state 2 and then come back to state 1, and that return has probability 7/10; or you could go to state 3 and then come back to state 1. (Careful here: in the diagram I first drew, 3 to 1 looked like it had probability zero, but with the matrix just as I wrote it, the third row is (1, 0, 0): from state 3 you go back to state 1 with probability 1, and p32 = 0.) So X2 can be 1 as follows; here are all the possibilities. The first term is the probability that X2 = 1 given that X1 = 1, times
the probability that X1 = 1; plus (I'll explain this conditional probability in a second) the probability that X2 = 1 given that X1 = 2, times the probability that X1 = 2; plus the probability that X2 = 1 given that X1 = 3, times the probability that X1 = 3. Here we use the following notation: P(A | B), "A given B," is the conditional probability of the event A given B. In discrete probability, and actually in general, by definition this conditional probability is the probability that both events A and B happen when the sample space is restricted to B: P(A | B) = P(A and B) / P(B). Here's the picture in general: this is my sample space, this is the event B, and I have some event A that partially overlaps it. What are we doing? We're saying the new sample space is just B, because given that B happens, what is the probability that A will happen? At least when you're counting equally likely outcomes, you can see that the probability that A happens, knowing that B happens, is the number of outcomes in the intersection divided by the number of possible (favorable) outcomes in B: the ratio of P(A and B) to P(B). Now why do we actually form those products I wrote there? Imagine the event A is the event that X2 = 1, and B1 is the event that X1 = 1, B2 the event that X1 = 2, B3 the event that X1 = 3, at the previous time. These three events are mutually exclusive and exhaustive: their union is the whole sample space, and they don't intersect each other. So we have three
events that partition the sample space. Now at time 2, I'm looking at the event A that X2 = 1. What can I write? I can split the event A into three sub-events: if I have three colors, this is B1, this is B2, this is B3, and my event A sits partly in each, so A is the disjoint union of three sub-events. Being disjoint means that the probability of A is the sum of the probabilities of the sub-events. All that's left is to write these in terms of conditional probabilities: since the conditional probability is the ratio of the probability of the intersection to the probability of B1, for instance, the intersection satisfies P(A and B1) = P(A | B1) P(B1), and so forth. So P(A) = P(A | B1) P(B1) + P(A | B2) P(B2) + P(A | B3) P(B3). This is just a set-theoretic computation, if you want, except that it uses the concept of conditional probability; and it's exactly what we wrote before: the probability of A is split into three pieces, and we use conditional probabilities because those are just the transition probabilities. So let's compute P(X2 = 1). The probability that X2 = 1 given X1 = 1 is p11, times P(X1 = 1), which was conveniently chosen to be 1/3. Then P(X2 = 1 | X1 = 2) is p21, times P(X1 = 2), and p31 times P(X1 = 3). What was p11? 1/3, and P(X1 = 1) was 1/3; p21 was 7/10, times 1/3; and p31 was 1, times 1/3. So: (1/3)(1/3) + (7/10)(1/3) + (1)(1/3); over a common denominator of 90, that's 10/90 + 21/90 + 30/90, which is
61/90, since 10 + 21 + 30 = 61. Okay, so that's how you compute it. Then, what's the probability that X2 = 2? That's going to be p12 P(X1 = 1) + p22 P(X1 = 2) + p32 P(X1 = 3). It seems like something that becomes tedious, but fortunately there's a pattern here, which very soon we'll write down. Let me just do this one: (1/3)(1/3) + (3/10)(1/3) + (0)(1/3), which is 1/9 + 1/10, that is, 10/90 + 9/90 = 19/90. And finally, the probability that X2 = 3 (you see why it's easier to work with these numbers than with the ones from the inventory problem): looking at the matrix, it's p13 P(X1 = 1) + p23 P(X1 = 2) + p33 P(X1 = 3) = (1/3)(1/3) + (0)(1/3) + (0)(1/3) = 1/9, or 10/90. Does it add up to 1, as it should, 100 percent? 61/90 + 19/90 + 10/90 = 90/90 = 1. Good. So all this computation was just to figure out the probability distribution of the random variable X2: it equals 1 with probability 61/90, 2 with probability 19/90, and 3 with probability 10/90, which add up to 1. Then you'd go on to X3. But at this point you don't want to do this tedious computation again; instead, what you want to do is the following.
You notice the following: the probability that X_{n+1} = j (for j from 1 to 3) is the summation over all i of p_ij times the probability that Xn = i, which is p1j P(Xn = 1) + p2j P(Xn = 2) + p3j P(Xn = 3). That's the case of three states; if you have five states, it's a summation over all five states, and so forth. Here's the magic thing that happens: denoting by pi_n(i) the probability that Xn = i, you get a row vector (pi_n(1), pi_n(2), pi_n(3)), and you can write the update as a matrix multiplication: the row vector pi_n multiplied by the columns of P. So pi_{n+1} = pi_n P. Specifically, with three states, pi_{n+1} is a 1-by-3 row equal to the row pi_n at time n multiplied by the matrix: you multiply the current values, the probabilities that Xn is in each of the states, by the columns of the matrix. That's why, when we look at the computation above, we were always multiplying by the columns. So now if you want X3, you just take the probability distribution of X2, put it in a row, multiply by P, and you get a new row, which is the probability distribution of X3, and so forth. In general, pi_n is going to be the probability distribution vector, if you want, of Xn.
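The update rule pi_{n+1} = pi_n P can be sketched in a few lines (in Python with plain lists, rather than the MATLAB used in this course), using the simple three-state example above:

```python
# Transition matrix from the simple three-state example
P = [[1/3, 1/3, 1/3],
     [7/10, 3/10, 0.0],
     [1.0, 0.0, 0.0]]

def step(pi, P):
    """One step of pi_{n+1} = pi_n * P: row vector times the columns of P."""
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

pi1 = [1/3, 1/3, 1/3]                 # distribution of X1
pi2 = step(pi1, P)                    # distribution of X2
print([round(p * 90) for p in pi2])   # in ninetieths: [61, 19, 10]
```

This reproduces the hand computation: P(X2 = 1) = 61/90, P(X2 = 2) = 19/90, P(X2 = 3) = 10/90. Iterating `step` gives the distribution of X3, X4, and so on.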
Meaning that Xn is 1 with probability pi_n(1), 2 with probability pi_n(2), and so forth. OK. So finally we can go back and talk about what happens with our original inventory problem, but let me introduce one last thing, called the limiting probability. If we start with pi_0, then compute pi_1 = pi_0 P, pi_2 = pi_1 P = pi_0 P^2, and so on; in general pi_n = pi_0 P^n. So we get a sequence of distribution vectors. If pi_n converges to some vector pi*, then pi* is called a limiting probability for the Markov chain. And the most important property satisfied by pi* is this: remember that pi_{n+1} = pi_n P, and if pi_n goes to pi*, then pi_{n+1} also goes to pi*. So pi* satisfies the relation pi* P = pi*: pi* multiplied on the left of P gives back pi*. That means pi* is a steady-state distribution for the Markov chain. It turns out you can actually compute this fairly easily, because pi* is just an eigenvector corresponding to the eigenvalue 1: pi* is a left eigenvector of P for eigenvalue 1, or, if you don't like left eigenvectors, pi* transposed is a right eigenvector of P transpose. So in MATLAB, when you put in this matrix and ask for the eigenvalues and eigenvectors: if P is a stochastic matrix, you will always get an eigenvalue of 1, and the eigenvector corresponding to that eigenvalue (normalized so its entries sum to 1) is the steady-state distribution. Last word: how does this reflect back to the inventory problem?
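Here is a sketch of the limiting-probability computation (in Python rather than MATLAB, and filling in the off-diagonal inventory entries from the same Poisson reasoning used in the lecture: p21 = P(D = 1), p23 = P(D >= 2), p31 = P(D = 2), p32 = P(D = 1)). Rather than eigenvectors, this uses the discrete dynamical system itself: repeated multiplication by P, i.e. power iteration, converges to pi*:

```python
# Inventory-chain transition matrix, entries rounded to three decimals.
# Rows/columns are states 1, 2, 3 = aquariums at the start of the week.
P = [[0.368, 0.000, 0.632],
     [0.368, 0.368, 0.264],
     [0.184, 0.368, 0.448]]

def step(pi, P):
    """One step of the distribution update pi_{n+1} = pi_n * P."""
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

# Power iteration: apply the update until the distribution stops changing.
pi = [1.0, 0.0, 0.0]   # any starting distribution gives the same limit here
for _ in range(200):
    pi = step(pi, P)

print([round(p, 3) for p in pi])   # limiting distribution, about [0.285, 0.263, 0.452]
```

Both routes, the dynamical system here and the eigenvalue computation in MATLAB, give the same pi*.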
So if you do that with the matrix we computed: back to the inventory problem, pi* turns out to be the following: 0.285, 0.263, and 0.452. I'll talk about how to actually compute this in practice, but with MATLAB you can find these numbers by just putting in that matrix, capital P. The conclusion of this model was to figure out the probability that the demand exceeds the inventory: what is the probability that, in week n, for instance, the demand exceeds the inventory? Here's what you do: again, you split by conditioning. It's the conditional probability that the demand exceeds the inventory when the inventory is 1, times the probability that the inventory is 1, and so forth: P(Dn > Xn | Xn = 2) times P(Xn = 2), and the same for Xn = 3. And the question is, what numbers go in here? Well, what's the probability that the demand exceeds the inventory when Xn = 1? That's the probability that Dn > 1, and that we know how to compute: it's basically 1 - 0.368 - 0.368. But what is P(Xn = 1)? That's where one uses the limiting probabilities for those values: P(Xn = 1) is 0.285, P(Xn = 2) is 0.263, and P(Xn = 3) is 0.452. So when you plug in these numbers (and you have to trust me on this, or the book), you get something like 0.105. So that's 10 percent: a 10 percent chance that, following that policy, the demand will actually exceed the inventory. On Wednesday I'm going to show you, using MATLAB, how you can actually generate these limiting probabilities, either through the discrete dynamical system or through eigenvalues. Yep? [Question: going back up, when you were talking about pi* as a left eigenvector for P, what's the word right after P?] Corresponding to eigenvalue 1. OK, so we'll talk more about this on Wednesday. Thank you.
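To close the loop on that last computation, here is a sketch (Python, not from the lecture itself) that assembles the final answer: Poisson tail probabilities for each inventory level, weighted by the limiting distribution:

```python
import math

def pmf(k, lam=1.0):
    """Poisson probability P(D = k) with rate lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

def tail(i, lam=1.0):
    """P(D > i): probability that weekly demand exceeds i."""
    return 1.0 - sum(pmf(k, lam) for k in range(i + 1))

# Limiting probabilities of starting the week in state 1, 2, 3 (from the lecture)
pi_star = [0.285, 0.263, 0.452]

# P(demand exceeds inventory) = sum over states i of P(D > i) * pi*(i)
p_shortage = sum(tail(i + 1) * pi_star[i] for i in range(3))
print(round(p_shortage, 3))   # about 0.105, i.e. roughly a 10% chance
```

This matches the 0.105 quoted in the lecture.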