So change the values of r, and then change the initial condition. And I should say a word about the Runge-Kutta method that some of you implemented back in one of the homeworks using symbolic variables. I think I made a note at the time that it's not ideal, but let me be clearer: stay away from symbolic computations here. The whole point of Runge-Kutta is to work numerically. If you have trouble coding Runge-Kutta numerically, let me know; MATLAB implementations are available in many places, which is why I didn't hand you one, but I can give you an implementation that doesn't use symbolic capabilities if you need it. It won't be hugely wrong to use symbolic variables, but it really defeats the purpose. As you'll see, if you integrate to a final time of t = 100 with the given parameter values and initial condition, you'll have to pick a step size h that's relatively small, but not too small. Part of the problem, part (b), is deciding how small h should be. Don't start with something like 0.0000001, because that would take millions of iterations to reach the final time. Maybe start with h = 1, though h = 1 may actually be too big. You don't have to pin up a hundred plots for different values of h; just experiment and find the largest h for which the trajectory still looks like the trajectory of the continuous system. Because remember, if h is large enough, what's going to happen?
The trajectory won't look like the butterfly at all; it will look like some weird dynamics. That would be a legitimate discrete system, but here we're trying to approximate a continuous dynamical system. On the other hand, a very small h is unnecessary: the whole point of Runge-Kutta is that you don't need a very small h to get good accuracy. The ODE solver app, by the way, lets you choose among different methods: fourth-order Runge-Kutta, second-order Runge-Kutta, or other fancier schemes, and you'll see the computation run slower or faster depending on the choice. And you know roughly what you should get: the gallery includes the Lorenz system. I think it uses slightly different numbers, but the initial conditions are about right. Here's the solution, computed with ode45, which is based on fourth-order Runge-Kutta but is fancier than what you'll implement, because you'll use a fixed time step. You iterate just like Euler's method with a fixed step, except that instead of one update line you'll have four lines of code. And you saw how quickly it ran; let me do it again. It's quite fast.
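The fixed-step iteration described above, "Euler's method except with four lines instead of one," can be sketched as follows. This is a Python/NumPy stand-in for the MATLAB you'd write; the Lorenz parameter values are the standard ones (σ = 10, ρ = 28, β = 8/3), and the initial condition and step size are illustrative choices.

```python
import numpy as np

def lorenz(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Right-hand side of the Lorenz system with the classic parameter values.
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def rk4(f, x0, h, n_steps):
    # Classical fourth-order Runge-Kutta with a fixed step h.
    # The "four lines" are the stages k1..k4; the update is their weighted average.
    xs = np.empty((n_steps + 1, len(x0)))
    xs[0] = x0
    for i in range(n_steps):
        x = xs[i]
        k1 = f(x)
        k2 = f(x + 0.5 * h * k1)
        k3 = f(x + 0.5 * h * k2)
        k4 = f(x + h * k3)
        xs[i + 1] = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return xs

# h = 0.01 for 10000 steps reaches the final time t = 100 discussed in class.
traj = rk4(lorenz, np.array([1.0, 1.0, 1.0]), h=0.01, n_steps=10000)
```

Plotting column 0 against column 2 of `traj` (x versus z) gives the 2D phase portrait the assignment asks for.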
If you choose a slower method, say second order, it's harder to see on a short run, but if you change this to ρ = 28, the famous chaotic case, the time it takes is much longer than with fourth-order Runge-Kutta. We could time it, but if you switch to fourth order you'll see it's faster. Why? Because the solver can use a larger h. You could choose h really small and do fourth-order Runge-Kutta, but you don't have to; it's not necessary. The last part is probably the main reason I'm making you write your own implementation: pick two different initial conditions, close to each other, and compare the two trajectories, which is something you can't do in the solver app. Any questions on this first problem? By the way, I only ask you to plot x versus z, that is, x1 versus x3, just so you see something; it's not strictly necessary. We could plot in 3D, but a 2D phase plot is enough; the output could also be a time plot, and I believe I ask for the second one. All right. Problem number two: I'll go through this, and then talk a little about Markov chains. Problem two is connected to the cause-fire problem; in fact it's embarrassingly similar. All I'm asking is to redo the computation with three standard deviations from the mean instead of the two standard deviations that give the 95% confidence interval. You can use anything.
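The comparison of two nearby initial conditions can be sketched directly; this is the sensitivity to initial conditions that makes the ρ = 28 case famous. Python/NumPy sketch; the 10⁻⁸ offset and the integration time are arbitrary illustrative choices.

```python
import numpy as np

def step_rk4(f, x, h):
    # One fixed step of classical fourth-order Runge-Kutta.
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Lorenz right-hand side with the standard chaotic parameters.
f = lambda x: np.array([10.0 * (x[1] - x[0]),
                        x[0] * (28.0 - x[2]) - x[1],
                        x[0] * x[1] - (8.0 / 3.0) * x[2]])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])   # a nearby initial condition (tiny offset in x)
h = 0.01
for _ in range(2500):                 # integrate both trajectories to t = 25
    a, b = step_rk4(f, a, h), step_rk4(f, b, h)
sep = np.linalg.norm(a - b)           # the trajectories have visibly separated
```

Even though the two starting points differ by 10⁻⁸, the separation grows by many orders of magnitude, which is exactly what the plot comparison in the assignment should show.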
Problems three and four are probabilistic models in which the continuous random variables involved have exponential distributions, the same exponential distribution that appears in problem two; we also talked about it in the chapter. Problem three is a manufacturing setting with two different processes. One produces fuses with an expected life length of 100 hours, the other with an expected life length of 150 hours. One process has some cost per fuse; the other costs twice as much. The cost is not specified numerically; it's a known constant, a parameter of the problem. The only unknown is how long each fuse actually lasts: if it lasts less than 200 hours, there is an extra cost k. So all of these are constants, parameters in the problem. What you need to set up is the cost as a random variable for each process. For process one, the random variable takes two values: cost c if the fuse lasts more than 200 hours, with the probability coming from the exponential distribution, or cost c + k otherwise. Process two is the same with 2c in place of c. In part (b) you find the expectation of that random variable, the expected cost, and then you can compare the two processes. Part (c) introduces a design variable: for each additional 50 hours of life length you double the manufacturing cost, so you can trade cost for life length.
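The expected-cost setup for parts (a) and (b) can be sketched numerically. Python stand-in for the MATLAB you'd write; the values of c and k below are illustrative placeholders, since the problem leaves them as unspecified constants.

```python
import math

def expected_cost(c, k, mean_life, threshold=200.0):
    # Life length T ~ Exponential with mean `mean_life`, so P(T < t) = 1 - exp(-t / mean_life).
    # The cost random variable is c if the fuse lasts past `threshold`,
    # and c + k otherwise, so E[cost] = c + k * P(T < threshold).
    p_fail = 1.0 - math.exp(-threshold / mean_life)
    return c + k * p_fail

c, k = 1.0, 5.0                                   # illustrative placeholder values
cost1 = expected_cost(c, k, mean_life=100.0)      # process 1: mean life 100 h, cost c
cost2 = expected_cost(2 * c, k, mean_life=150.0)  # process 2: mean life 150 h, cost 2c
```

Comparing `cost1` and `cost2` for various c and k is the content of part (b); note that which process wins depends on the ratio of k to c.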
The question is: what is the optimal life length for the fuse that minimizes the expected cost? It's a one-variable optimization in part (c), I think. Any questions on this? Question: the exponential distribution comes with some rate λ. Yes, so you have to figure out the rate λ for each of the processes. No, it's not Poisson; Poisson would be for a discrete random variable, and you don't have one here. You could frame it as a Poisson process if you want, but just think of the random variable as having an exponential distribution. Problem number four is very similar to the diode-testing problem, except here you have a population whose individuals can be tested in groups. It has the same flavor as the diode problem, and we didn't even specify what the Xi should be as random variables. It's a study of the optimal number of persons in a group that minimizes the expected number of tests, and also of how the answer changes with p, p being the probability of a person being infected (or a diode being faulty). So it's the same kind of thing. You'll need some computation for this; I don't think you can do it by hand, especially the last part, with the 5,000 and the specific numerical values. Any questions on number four? Okay, a few words about number five. The most important thing to realize is that it's a simple Markov chain with some number of states, capital N. When you start looking at this problem, just start with N = 3 or 4 or 5.
But keep in mind that it should actually work for any number N; then answer part (a) and part (b). So again, my recommendation is to start with a low value for capital N, so you can draw a state diagram, a transition diagram: arrows giving the probability of going from one state to another. In part (b) you have to find the transition matrix. The important thing is to see the pattern: if you change N from five to six, how does that reflect in the transition matrix? If you can do it for a general N, that's wonderful. In all three parts you can start by assuming N is some small number, but the real question, which maybe I should have put there as a fourth part, is what happens as N goes to infinity, as N gets very large. The answer is that when N gets very large, the stable equilibrium vector, which gives the probability that you visit any given state, has a limit, so you can think about the limit as N goes to infinity. Maybe I should add that as an extra-credit problem, but anyway: the point is to handle as general an N as possible. For part (c) you should probably do it on a computer, running the simulation; I'll show you now how to do it in MATLAB. You will have to pick a p and an N, but the code itself should start with "this is p, this is N" and then work for general p and N.
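Building the transition matrix for general N, as part (b) asks, can be sketched like this. Python stand-in for MATLAB; the specific boundary behavior (the blocked move turning into staying put) is one reading of the problem's "insulated" endpoints and is an assumption here.

```python
import numpy as np

def transition_matrix(N, p=0.25):
    # Random walk on states 1..N: from an interior state, step left or right
    # with probability p each and stay put with probability 1 - 2p
    # (the staying probability differs from the moving probability, as in class).
    # At an endpoint the blocked move becomes staying, an assumption meant to
    # mimic the insulated boundary.
    P = np.zeros((N, N))
    for i in range(N):
        if i > 0:
            P[i, i - 1] = p
        if i < N - 1:
            P[i, i + 1] = p
        P[i, i] = 1.0 - P[i].sum()   # whatever is left over is the staying probability
    return P

P = transition_matrix(5)
```

Changing N from five to six just adds one more row and column with the same tridiagonal pattern, which is the structural observation part (b) is after.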
So you just change p and N. Okay, we'll talk in a moment about how to simulate this in MATLAB. Any questions on number five? I didn't go through the details of the transitions, but you can read them: moving left or right is equally likely except at the endpoints. And by the way, this has something to do with temperature distribution, with diffusion in a uniform rod that has prescribed values at the endpoints. For instance, if the temperature is set to zero at the boundary... I don't remember exactly whether zero is the right boundary condition. But suppose the initial temperature distribution is localized in the middle, like the diffusion we've talked about, with a high concentration of thermal energy somewhere inside. Then you have this probability distribution of moving to the neighboring states, equally likely left or right, but the probability of staying in a state is different from the probability of moving left or right. Question: so it's like a heat source located in the middle of the rod? Pretty much. You can start with the source anywhere in the rod; the point is that's the initial condition. And I believe the boundaries are insulated, so insulated boundary conditions, if you know what those are: there is no exchange of thermal energy with the environment at the endpoints. So guess what happens to the temperature in the long term? It levels off. You should find something similar with this problem.
But again, we're not saying anything about diffusion here; it's simply a Markov chain, with a finite number of states, although that number can keep increasing: the temperature distribution would be a continuous process, and what you do here is discretize space, so as N gets larger and larger you have more and more sites. Okay, let me talk a little about simulating Markov chains. The last problem I leave to you; as I said, you can read it. I don't think it's a particularly easy problem, but you can certainly set it up using the Pontryagin maximum principle, and you can try it and see how far you get. I'm not really expecting you to write down the optimal control explicitly. Even if you know the optimal control, the equation for v, the second component of the velocity, is not pretty to solve explicitly, because the right-hand side involves an exponential of h. But once you have the optimal control, with T being the control here, you have a system of three equations in three state variables, so you could run fourth-order Runge-Kutta on it, because you have the initial conditions: the initial height is taken to be one, the initial velocity is taken to be zero, and the initial mass is known as well. So you have the initial conditions and the system; the only thing you don't know is T, and T comes out of the maximum principle. That's the whole difficult part.
So if you can get T, that's wonderful. Any questions on this? Office hours Friday, 10 to 12. And by the way, most of you will turn in the homework on Friday; by then I'll have the other homework graded, and you have the solutions to set number nine. I'll try to grade set number ten as soon as possible. I did hand out the solutions to sets nine, eight, and seven; I think seven and eight were together. Anyway, I'll post them on the companion site and make sure they're there. I'll also post a Runge-Kutta implementation as part of the homework problem in chapter six. And again, I don't want to see symbolic Runge-Kutta; that would slow the computation down tremendously. It would take forever to do something like this symbolically. Okay. Now let me remind you of the main things about Markov chains that we've talked about; we haven't covered all the details. One thing I mentioned last time: collect the probabilities of going from one state to another, so p_ij is the probability of going from state i to state j. (Writing it this way is a bit loose unless you specify, but that's what it means.) If you put these in a matrix and you have three states, it's a 3-by-3 matrix; in general it's a square matrix whose size is the number of states. It has the additional property of being a stochastic matrix, meaning the row sums all equal one: the sum of p_ij over j, with i fixed, is the i-th row sum, and it equals one.
And of course all the p_ij are nonnegative. There's one key fact about stochastic matrices, a theorem: if P is a stochastic matrix that is in addition irreducible (I'll say a word about what that means), then 1 is an eigenvalue of P, a left eigenvalue if you want, though left or right doesn't matter for the eigenvalue itself, and there is a unique left eigenvector π* corresponding to λ = 1. Let me remind you what an eigenvalue is: if I have a matrix P, then λ is an eigenvalue when Px = λx for some nonzero column vector x, a right eigenvector. Setting λ = 1 means Px = x. But what we're interested in is left eigenvectors. Similarly, P transpose has an eigenvector, possibly a different one, say y, for the same eigenvalue λ = 1. Does anybody know why the eigenvalues of a matrix and of its transpose are the same? Because of how you find eigenvalues: you take the determinant of P − λI and set it equal to zero. The determinant of a transpose equals the determinant of the matrix itself, and transposing gives P transpose − λI, so you're solving the same characteristic polynomial. Hence if 1 is an eigenvalue of P, it's also an eigenvalue of P transpose. The theorem says this eigenvector, x or y, is unique.
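That determinant argument can be checked numerically. Python/NumPy sketch; the 3-by-3 stochastic matrix is the one used later in the lecture.

```python
import numpy as np

P = np.array([[1/3,  1/3, 1/3],
              [7/10, 3/10, 0.0],
              [1.0,  0.0,  0.0]])   # the stochastic matrix from class

# det(A) = det(A^T) applied to P - lambda*I means P and P^T share eigenvalues.
ev_P  = np.sort_complex(np.linalg.eigvals(P))
ev_Pt = np.sort_complex(np.linalg.eigvals(P.T))
# A row-stochastic matrix always has lambda = 1: the all-ones vector is a
# right eigenvector, since each row sums to one.
```

The two sorted spectra coincide, and 1 appears in both, exactly as the theorem promises.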
Yes, there's a unique left eigenvector and a unique right eigenvector. For us the left eigenvector is what matters, because if y was a column with P^T y = y, then transposing gives π* P = π* with π* = y^T. What this says is: consider the discrete dynamical system π_{n+1} = π_n P (I think we used k+1 before). Start with some initial state π_0, which is a row vector, and iterate; that's just π_0 multiplied on the right by powers of P. This system has a unique steady state, a unique equilibrium, which is π*. What is an equilibrium of this discrete dynamical system? A vector for which π_n equals π_{n+1} for all n. So there end up being two alternative ways to compute π*. One way, exactly, is to look for the left eigenvectors of P, that is, the eigenvectors of P transpose, corresponding to eigenvalue 1. The other way, which I haven't stated yet, is to treat π* as the steady state of this discrete dynamical system, and the important feature is that π* turns out to be the limit of π_n as n goes to infinity for any initial condition π_0. Again, this assumes the transition matrix is irreducible, an additional property that needs to be checked; if that's not the case, what I'm saying is not always true. Question: I can see from that line that π* P = π* makes sense as an equilibrium, a steady state.
But going back up, I don't see where the P transpose equation comes in. Oh, it's just this: if you transpose P^T y = y, you get y^T P = y^T, so π* is y^T. Now, if you want to do this in MATLAB, the most obvious way is to find the eigenvectors of P transpose with eig. Let me write down the matrix, the one we were toying with last time: rows [1/3 1/3 1/3], [7/10 3/10 0], [1 0 0]. Make sure that's the right one. And irreducible, basically: this matrix turns out to be irreducible because you can pick any pair of states, say states 1 and 2, and transition from 1 to 2 with positive probability; you can also transition from 3 to 2, maybe not in one step, in one iteration, but if you compute P squared you see that in two iterations you can transition from any state to any state. In general, if not P squared, then some power of P should have all nonzero entries. This matrix satisfies that property. Now all we have to do is start with an initial condition, any initial condition, say we know we're in state 1 before the first iteration: the probability of state 1 is one, of states 2 and 3 zero. Then we iterate, multiplying by P on the right: for i = 1 to 10, pi = pi*P. (And don't do this in the command line like I'm doing; it's better to write a little script.)
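The irreducibility check just described, that some power of P has all strictly positive entries, can be coded directly. Python/NumPy sketch of the MATLAB check.

```python
import numpy as np

P = np.array([[1/3,  1/3, 1/3],
              [7/10, 3/10, 0.0],
              [1.0,  0.0,  0.0]])

# If some power of P has every entry strictly positive, then every state can
# reach every other state in that many steps, so the chain is irreducible.
P2 = np.linalg.matrix_power(P, 2)
irreducible = bool(np.all(P2 > 0))   # True for this matrix: two steps suffice
```

For a matrix where P² still has zeros you would keep checking higher powers; for this one, two iterations already connect every pair of states.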
But you see that after 10 iterations it has already more or less stabilized. By that theory of stochastic matrices, we know this converges to π*; so this is going to be π*, and if you want it more accurately you run more iterations. Maybe I should put it in a script: P as above, and the initial state [1 0 0] (remember, it should be a row), and then for i = 1 to 20, pi = pi*P; I don't have to store the previous iterates, I just overwrite each time. I can display them all so you can see. With format long you see more decimals, though probably not all of them are accurate for π*. So that's the quick way; of course, it's knowing the properties of the matrix P that lets you get π* through these iterations. What's the alternative way? Use the eigenvalues of the transpose of P. Why the transpose? Well, first of all, if you just call eig on P you get the eigenvalues, and one of them is 1. But if you want the eigenvectors, you have to capture the second output: the matrix U whose columns (always columns, that's how the function works) are eigenvectors of the matrix. So how come it's not the same as what we got before? Because we didn't do P transpose. Taking eig of P transpose gives you this vector, a column, which should then be transposed... or maybe not. This should be π*, but it's obviously not π*. What was the π* we had there? (Never work in the command line like I just did.) Now you can see it.
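The iteration loop from the demo, written as a small script rather than in the command line. Python/NumPy stand-in for the MATLAB version; for this matrix the exact stationary vector works out to (21/38, 10/38, 7/38), which the iterates approach.

```python
import numpy as np

P = np.array([[1/3,  1/3, 1/3],
              [7/10, 3/10, 0.0],
              [1.0,  0.0,  0.0]])

pi = np.array([1.0, 0.0, 0.0])   # a row vector: in state 1 with probability 1
for _ in range(50):
    pi = pi @ P                  # one step of the chain: right-multiplication by P
# After enough iterations pi has stabilized at the stationary vector pi*.
```

The convergence rate is governed by the second-largest eigenvalue modulus, so 50 iterations here already agree with π* to more than ten decimal places.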
So π* should be the first column of U. By the way, it's not always the case that eigenvalue 1 comes first; it might be second or third, in which case you ask MATLAB to find where eigenvalue 1 sits and pick the corresponding column. Let me transpose that, and now let's see what it looks like. This is π from the iteration, and this is π from the eigenvector; let me clear the screen so you can see both. They're obviously not the same. So what additional property does this row, and the limiting probability vector, have to satisfy? The entries should sum to one, because they are the probabilities of eventually visiting state 1, state 2, state 3. The eigenvector is not adding up to one, so what you do (sorry for the notation) is divide by the sum, and then you see they are the same. With format long it looks ugly and it's hard to compare by eye, but you can compute the maximum difference between π from iteration and π from the eigenvector, and that's about 10^-6. That's just because we did 20 iterations; with 30 iterations it's 10^-9, so it's converging relatively fast. After 50 iterations it's at the steady state to within about 10^-14. Questions? Okay, so this is why it was useful to have that discussion of discrete dynamical systems. This is a very special one, a linear dynamical system.
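The eigenvector route, including locating eigenvalue 1 and normalizing so the entries sum to one, can be sketched as follows (Python/NumPy; `np.linalg.eig` plays the role of MATLAB's two-output `eig`).

```python
import numpy as np

P = np.array([[1/3,  1/3, 1/3],
              [7/10, 3/10, 0.0],
              [1.0,  0.0,  0.0]])

w, V = np.linalg.eig(P.T)            # columns of V are eigenvectors of P^T
j = int(np.argmin(np.abs(w - 1.0)))  # find where eigenvalue 1 sits (not always first!)
pi_star = np.real(V[:, j])           # the corresponding column, made real
pi_star = pi_star / pi_star.sum()    # normalize so the probabilities sum to one
```

After the normalization this agrees with the iterated vector; without it the eigensolver's arbitrary scaling makes the two look different, which is exactly the confusion from the demo.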
In a way you'd say it should be a piece of cake, but it has this very important connection with Markov chains. The steady state attracts every initial condition in the state space, if you want: it's stable, asymptotically stable, with a basin of attraction and everything. That's what's nice about it. As far as your exam is concerned, in part (c) I'd say you do this by simulation: iterate and check whether your iterations indeed converge to the left eigenvector of that matrix. Now, if you work by hand, it's kind of paradoxical that you can probably find a formula for the left eigenvector corresponding to eigenvalue 1 for general N more easily than for N = 5 or 6; but again, I'm not necessarily asking for the general N. The last thing I want to show is a very nice and important application, if you haven't seen it before: Google PageRank. The idea behind ranking pages on the web is based on Markov chains. This is part of a book by Cleve Moler; you can see the whole textbook, and one of the chapters, on linear equations, chapter two I believe, talks about solving linear systems and has this as an application. Oh yes, I should have said that you could also try to find π* by solving a linear system; let me say that.
Let me show you PageRank. If you've never seen it, it's a very nice application. It shows how a connectivity matrix of different websites is created; that's the matrix A in this case. Then it looks for the limiting probability: it assigns to each page the probability that you will eventually visit that page, and pages with high probability are ranked higher. And there's a third way to find these probabilities when you have a huge matrix: just calling eig in MATLAB is not necessarily the best way to go, because in the end you want only one eigenvector, corresponding to one eigenvalue, not all the eigenvalues. This is an implementation of that; the code is very simple. The last step normalizes so the sum is one, and here is where you solve with identity minus the matrix A; there are some reasons for using this formulation, and it's actually a fast way of finding the solution to that linear system. Which brings me to something you probably saw in the book: to find π* one can also solve a linear system, which would basically be the following. Say π* = (π1, π2, π3). Then π1 p11 + π2 p21 + π3 p31 = π1, plus two more equations of the same form, and the fourth equation is π1 + π2 + π3 = 1. The first three are just another way of writing πP = π on components, and the last one says the sum of the components is one.
Okay, so you can see this is the same as (I − P^T)π = 0 (with π now a column) together with the sum of the components of π equal to one. But the tricky part is how you solve the system of the first three equations: it's homogeneous and has infinitely many solutions, because the determinant of the matrix is zero, since P has eigenvalue 1. So the code there is an implementation of how to solve this system efficiently, quickly. And it comes with a very nice illustration of what this looks like: a web of just six websites, with different probabilities assigned to the links from one website to another, gives you a ranking, and you can do the same for a web with possibly millions and millions of websites. I don't have time to show you everything, but if you follow the syntax here you'll probably be able to use surfer; you have to download the software that comes with the book. Basically, if you give it a website, say uccs.edu, and ask to create a web of 100 sites, the first step creates what's called a connectivity matrix. You can see here a bookstore and other places: it goes live to that website and looks for the links, then goes to those websites and looks for their links, and builds the matrix, which I believe is called G. I'll let you do this. Has anybody seen this? Then you can spy on the matrix, and then run pagerank. Let's see, is it done? Almost.
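The linear-system route just described, replacing one of the redundant homogeneous equations with the normalization, can be sketched like this (Python/NumPy; the same 3-by-3 matrix from the earlier demo).

```python
import numpy as np

P = np.array([[1/3,  1/3, 1/3],
              [7/10, 3/10, 0.0],
              [1.0,  0.0,  0.0]])
n = P.shape[0]

# (I - P^T) pi = 0 is singular because 1 is an eigenvalue of P, so the
# homogeneous system has infinitely many solutions. Replace one redundant
# equation with the normalization pi_1 + pi_2 + pi_3 = 1 to pin down pi*.
A = np.eye(n) - P.T
A[-1, :] = 1.0                       # last equation becomes the sum constraint
b = np.zeros(n)
b[-1] = 1.0
pi_star = np.linalg.solve(A, b)      # now a uniquely solvable square system
```

This gives the same stationary vector as the iteration and the eigenvector routes, which is a good cross-check among the three methods.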
Okay, it looks like it's done. So spy(G) shows you the sparsity pattern of the connectivity matrix. And now pagerank(U, G), I believe. So this is all the connectivity. (Oh, is that mine? It's embarrassing, sorry; it couldn't have waited two more minutes for class to end.) So pagerank ranks all those websites, and you can see, though it's hard to read because I included so many sites, that uccs.edu is first in this ranking, then the portal, and so on, in decreasing order of rank: the most popular pages in that web. And it all comes from the connectivity matrix. You can try any of your favorite websites. I've tried my own website, and guess what? It's not the most popular in the web, because I link to the UCCS website, so the UCCS site ends up top ranked and mine is somewhere below. The strategy for ranking higher is to have a high-ranked page link to you, which is not easy: you can link out to all the famous websites, but that's not going to help you; the links have to be incoming. All right, thank you. And yes, the whole book is free, so you can at least download the chapter that talks about this.
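The whole PageRank demo can be reproduced in miniature. Python/NumPy sketch; the tiny four-page link graph is made up for illustration, and the damping factor 0.85 is the standard choice used in Moler's chapter.

```python
import numpy as np

# A tiny made-up web: G[i, j] = 1 means page j links to page i.
G = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

p = 0.85                              # damping factor (the standard choice)
n = G.shape[0]
out = G.sum(axis=0)                   # out-degree of each page (no dangling pages here)
A = p * G / out + (1 - p) / n         # column-stochastic "Google matrix"

x = np.full(n, 1.0 / n)               # start from the uniform distribution
for _ in range(200):
    x = A @ x                         # power iteration converges to the rank vector
rank = np.argsort(-x)                 # pages in decreasing order of rank
```

The page with the most high-ranked incoming links ends up first, which is exactly the "links have to be incoming" point from class.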