So, regarding the homework and the grading, an important announcement. For the grading we've decided that the exam is going to be 50% of the grade, and one problem from the homework, just one problem, is going to be the other 50%. The reason for that is that I think it gives you a chance to try two different things; the grade doesn't just depend on the exam. I'll tell you which problem during the tutorial, and you have the whole weekend to try and solve the problem yourself. If it doesn't work, then I'll sit with you and with the TAs, and I'll make sure that everybody here has solved that problem. Right, so I'm just giving you 50% of the grade right now. Yeah, so you should be happy with that. Okay, so don't get too excited.

Okay, so don't worry about the size of the writing here. This is mainly my notes, for me to at least go over what we discussed in the morning. We finished a lot of stuff in the morning, right? We first started with the idea that in traditional chemistry you have a chemical, synthesized with some rate F and degraded with some rate G. In stochastic chemical kinetics these F's and G's take on a different interpretation. Instead of saying the chemical X is created at rate F and degraded at rate G, you have to say in your mind: molecule X, which can take on discrete values, is created with probability F dt and is degraded with probability G dt in a time interval dt. So it's very important that the difference between F and G is correctly respected; it's not just F minus G that matters. We then took that description and realized that F and G therefore represent Poisson processes, and F and G are propensities for events in those Poisson processes. We wrote down this thing which is called the master equation. It's not named after somebody named Master; in the first paper where this kind of thing was used, this was the equation from which they derived everything else.
So it was called the master equation. We took that equation and then we did a Taylor expansion of it. We cheated a little bit: we Taylor expanded where the "small" number was one, right, F of i plus or minus one. These terms cancelled with the first terms in the expansion, we expanded to second order and ignored all the rest, and therefore we get this equation, which is called the Fokker-Planck equation. The Fokker-Planck equation looks like a diffusion equation because it is one, right? So we went from this variable, which is, remember, the number of cells that had exactly i copies of that molecule; that's how I motivated this whole equation. But you can also think of it as the probability that a cell has i copies of the molecule, or the probability that the system is in a state where there are i copies of the molecule. We then moved over to thinking about P as a continuous function of its variable, and some people were uncomfortable with differentiating by i because i is an integer, so I put it back as x. Don't worry, this i and this x are the same thing, discrete and continuous. So we now have partial P with respect to t, and on the right side we have two terms. The first term is a drift term, that's why it has a minus sign there, and the second term is a diffusion term. If you've ever derived the diffusion equation using Fick's laws of diffusion, you realize that the thing inside this partial derivative represents a flux, so all this term does is move the probability as a whole. This is x, this is t, so if you start off with some value of x, then this term is the deterministic part, right? It moves up by an amount (F − G)Δt in a time step Δt. That's this part; this part is the diffusion part. It's the part
that's random. As a conditional probability distribution, starting at x1 at time t1 and ending up at x2 at time t2, you will have some sort of distribution, like so, right? The diffusion part is the part that makes that happen. And the sigma over there will go as the square root of Δt. In fact, it has exactly the value: sigma squared is exactly (F + G)Δt. So this equation captures the idea that there's a deterministic piece and there's a stochastic piece, which is the new piece for chemical kinetics. We also tried to capture this kind of intuition using a different notation. People are comfortable writing down ordinary differential equations, right? So we can write down a form where we say dx/dt is some F minus G, like the traditional equation, but we have to add a noise term now. This noise term, remember, is the derivative of one of these curves, and it's the derivative of a jagged, jiggly curve; in fact, it's a sum of delta functions. It's not a very well-behaved quantity. Okay. What we did was integrate this equation for some amount of time Δt, and thereby got a difference equation for x, and you're able to use this difference equation to predict where x is going to be after some time Δt. It's given like so: the first piece, which works just like ordinary calculus, Δx = (F − G)Δt, plus a second piece, which captures the idea that you're going to introduce some noise. This is a Gaussian random variable with mean zero and variance one, but it needs to be multiplied by a pre-factor. This pre-factor is, just like over there, (F + G)Δt under the square root sign, because that's the variance, and this is the linear part.
Okay, so this is a recipe for simulating the process. It's a very simple recipe: start at some x, and then add some Δx depending on your Δt, some small Δt like 0.001. Some Δt times (F − G): you add that part to the original x, and then you add a random component, and that'll get you to where you are next. Then you do it again and again and again, and thereby you simulate a stochastic trajectory. And by doing this over sufficiently large numbers of Δt, you're going to have a fairly decent approximation for the final conditional distribution of where you end up, given where you started. If the system happens to reach an equilibrium, then after sufficient time that final distribution will be independent of where you started, and it's called the steady-state distribution of the system. Fine. Yes, this is Itô. So this recipe is actually independent of thinking about whether it's Itô or not; if you want to make sense of an equation like this, you have to say it's Itô, right? So what I've been trying to tell people is: this kind of equation, I don't mind if you ignore it, because it has to do with a lot of subtleties of how you integrate this stochastic noise term.
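The update rule just described, Δx = (F − G)Δt + sqrt((F + G)Δt) · N(0, 1), can be sketched in a few lines of code. This is a minimal illustration, not the lecturer's own program: I've assumed the birth-death example used throughout, with constant creation rate alpha and degradation rate gamma·x, and all function names are mine.

```python
import math
import random

def simulate_langevin(alpha, gamma, x0, dt, n_steps, rng):
    """Euler-style integration of dx/dt = F - G + noise,
    with F = alpha (creation) and G = gamma * x (degradation).
    The noise added per step has variance (F + G) * dt."""
    x = x0
    for _ in range(n_steps):
        f = alpha
        g = gamma * max(x, 0.0)  # a propensity cannot be negative
        drift = (f - g) * dt
        diffusion = math.sqrt((f + g) * dt) * rng.gauss(0.0, 1.0)
        x = x + drift + diffusion
    return x

rng = random.Random(0)
# After many steps the trajectory fluctuates around alpha / gamma = 50
samples = [simulate_langevin(5.0, 0.1, 0.0, 0.01, 5000, rng) for _ in range(200)]
mean_x = sum(samples) / len(samples)
```

Running many independent trajectories like this and histogramming the endpoints is one way to approximate the steady-state distribution mentioned above.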
This is just for the people who are interested. Fine. So now, how many ways have I shown you to think about this? I've shown you one way, the master equation, which is linear in its dynamical variables; the dynamical variables are the heights of all these bars. How many dynamical variables are there? Infinitely many, and they're all coupled, because P(i+1) and P(i−1) are coupled to the equation for P(i). But it's linear, so you can write the whole thing as a matrix, and formally you can just exponentiate the matrix to find the solution. So in some sense the master equation is a very well-behaved thing. This one is an approximation, and those of you who like solving PDEs can use all the tricks you want to solve this PDE. And like I said, this one is called a Langevin equation, and it can be solved with the use of this random number generator. Okay, so I'm going to teach you one more way to solve this system, which has to do with the following thing. Imagine I draw a random number, let's call it u, which is uniform in zero to one. So I just draw u, and then I make a transformation, and I say the variable theta is one over alpha times the natural log of one over u. Okay, so far I'm not explaining why I'm doing all this stuff.
So I've taken a uniform random number between zero and one. One over u will be a number that goes from one to infinity; log of one over u therefore will go from zero to infinity; and one over alpha times log of one over u will be the same thing, but compressed a little bit. Now it turns out that the distribution of theta, the probability density of getting theta, is exactly this: P(theta) dtheta is exactly alpha e^(−alpha·theta) dtheta. And the way to check this is the standard trick: you use the cumulative probability distribution to convert a uniform random number into a non-uniform random number. That's all I've done. We discussed this on the first day; it's a way to convert a simple variable u into a variable theta that has a different distribution. Are there any questions about this? Okay, so this is important, because this distribution is exactly the distribution of waiting times between successive events of a Poisson process. So this gives me a very visceral, very tangible way to draw a random variable of interest in the simulation of a stochastic process, and now I'm going to show you how to implement such a simulation. So what's going to happen? We're looking at the time axis. We start off at some time, and at that point in time there's some x(t = 0) amount of stuff in the system, and now we want to see when the next event happens. And how many kinds of events are there in the very simple equations that I wrote down here? There are only two types of events.
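The transformation theta = (1/alpha) ln(1/u) can be checked numerically. This is a small sketch of my own, not part of the lecture, comparing the sample mean of such draws against the expected value 1/alpha:

```python
import math
import random

def exponential_waiting_time(alpha, rng):
    """Inverse-transform sampling: map a uniform u in (0, 1) to an
    exponentially distributed waiting time with rate alpha."""
    u = rng.random()
    return (1.0 / alpha) * math.log(1.0 / u)

rng = random.Random(1)
alpha = 2.0
thetas = [exponential_waiting_time(alpha, rng) for _ in range(100_000)]
mean_theta = sum(thetas) / len(thetas)  # should be close to 1/alpha = 0.5
```

Histogramming `thetas` would show the density alpha·e^(−alpha·theta) claimed above.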
There's the creation of x and there's the removal of x. In principle you can do the same thing if you have a large number of chemical species; each one of them will then have a creation event or a degradation event that could possibly happen. Each one of those processes, each one of the things that could possibly happen, has some propensity, some probability per unit time: the F's and the G's. Okay, and those F's and G's are the things that go over here: it's (1/F) log(1/u), or (1/G) log(1/u), or one over F of the fifteenth reaction times log(1/u). And each of these u's are, obviously, independent random numbers. So what do I want to do? Assuming, for this very simple case, there are only two things that can happen, I'm going to draw two numbers. I'm going to draw one number from this distribution, and that tells me that the next creation event will probably happen at some time t-plus. And I'm going to draw another random number which says that some degradation event is going to happen at some time t-minus. If there were many chemical species, I would draw many different random numbers and label which chemical they're all talking about: chemical five, chemical fifteen, and so on. Remember what I told you: because this is chemistry, there's no way to exit the left side of this curve, right? If you already have zero molecules, the propensity to lose a molecule is zero. In other words, G is zero if x is zero, and if that's the case, then this time will be infinite. So if you already have no molecules, then the next time you're going to lose a molecule is infinitely far away; you never have to worry about it. So here we go, and in general, if there were many chemicals, then with different colors maybe I could have t-plus and t-minus of some other chemical: t-plus and t-minus of a blue chemical, t-plus and t-minus of a green chemical, and so on. That's the setup here. So how many times have I used the random number generator?
I've used it the number of chemicals times two, because each chemical could either be created or destroyed. Now here's the key: between this point and this point, what happens? Nothing, right? The system hasn't changed. And because the system hasn't changed, and it's a memoryless process, all these statistics are still perfectly valid. Nothing changed in the underlying rules, and therefore what I can do is immediately jump to the earliest of all these random times that I wrote down. In this case there are six possible times when something could have happened. I go to the earliest one, there it is, and I say, well, let me look at this and ask what happened. In this case chemical three was reduced by one, right? So then we have to say x3 goes to x3 − 1, and you just update the state of your system from wherever it was at t = 0 to wherever it is now. Okay, then what do you do? Well, you have drawn all these random numbers, but, I'm sorry, all bets are off now, as they say, because the number of x's has changed. So in principle the propensities of the other processes have changed, and you have to throw away all those other times.
You say "too bad" and regenerate the next set of t-pluses and t-minuses, if I can find these colored chalks again. And again, nothing is happening between here and here, so I go to the next closest one, which is right there, and then I say, in this case x2, which is green, goes to x2 − 1. Okay, and I keep going. So is this process clear? It's a very simple way to simulate stochastic processes. There is another way to do it, which is: in every little interval, you could draw a random number to see whether something happens or not, according to these rules. But then most of the time nothing is going to happen, because it's a rare event; if you've chosen your Δt sufficiently small, most of the time nothing is going to happen. So this little trick enables you to jump straight to the next event. Fine, a few subtle points here. Suppose I'm starting here and I could only have a single event: let's say x is zero so it can't get destroyed, and let's say there are no other x's in the system, no x2, x3, x4; there's only x1, and x1 is being created at a rate alpha. Then I draw a random number whose average is going to be like 1/alpha, and I go straight to it. Now here's a little question. Suppose there are two x's, x1 and x2, and let's assume for the moment that they're both being created independently at numerically the same rate alpha. So I draw a random number for the chance that x1 is created, and let's call that t1.
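The procedure just described, draw a candidate waiting time for every possible event, jump to the earliest one, update the state, and redraw, can be sketched for the two-event birth-death system. This is my own minimal illustration, assuming creation at constant rate alpha and degradation at rate gamma·x; the function names are mine:

```python
import math
import random

def first_event_step(x, alpha, gamma, rng):
    """Draw a candidate waiting time for each possible event and fire
    whichever comes first. Returns (elapsed_time, new_x)."""
    t_plus = (1.0 / alpha) * math.log(1.0 / rng.random())  # next creation
    g = gamma * x
    # With zero molecules the degradation propensity is zero, so the
    # next degradation is infinitely far away.
    t_minus = (1.0 / g) * math.log(1.0 / rng.random()) if g > 0 else math.inf
    if t_plus < t_minus:
        return t_plus, x + 1
    return t_minus, x - 1

rng = random.Random(2)
t, x, weighted = 0.0, 0, 0.0
while t < 500.0:
    dt, x_new = first_event_step(x, 5.0, 0.1, rng)
    weighted += x * dt  # time-weighted occupancy of the current state
    t += dt
    x = x_new
mean_x = weighted / t  # should settle near alpha / gamma = 50
```

Note that all candidate times are discarded and redrawn after every event, exactly as in the blackboard procedure; the memorylessness of the exponential makes that legitimate.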
So there are two species, x1 and x2, and for the moment the propensity for creating x1 is alpha and the propensity for creating x2 is alpha. So the time to the creation of the first one, t1, is some (1/alpha) log(1/u1), and the time for creating the other guy, t2, is some (1/alpha) log(1/u2). So we know that the average of this is going to be 1/alpha, because this has exactly the form of this distribution, whose mean value is 1/alpha. Alpha is a rate, so 1/alpha is a time; the units all work out. The mean value of t2 is also going to be 1/alpha. But which one of these are you going to pick? You're going to pick the minimum one; you're going to pick the first one that you hit. So, simple question: what is the expectation value of this? I draw two random numbers according to a distribution, and I pick their minimum. Now I want to know how big that time is going to be. It's going to be smaller than 1/alpha, right? Because the chance that at least one of them is sort of small is sort of high. So how small is it going to be? What is the expectation value if there are two events and you're waiting for the first one of them to happen? Of course, you can actually solve it: you write down the joint distribution of t1 and t2, over here t2 is less than t1, and over here t1 is less than t2, and from this you can project out the minimum. You know the joint probability distribution, some product of exponentials, and you're trying to work out the distribution of the minimum. You can go work that out; it will be some sort of two-dimensional integral with appropriate limits on the integral sign. But I want you guys to just think very, very intuitively: I draw two random numbers and I take the minimum. What's going to be the expectation value of the minimum? Just look at it.
Is everybody clear that the expectation value of one random number is going to be 1/alpha? Yeah, because it has this probability distribution, and the mean of this probability distribution is 1/alpha. So in this case I have two of them. Any guesses, at least any guesses? Okay, the answer is actually staggeringly simple: it's 1/(2·alpha). In fact, it's even more interesting than that. If I draw a bunch of random numbers, theta1 is (1/alpha1) log(1/u1), theta2 is (1/alpha2) log(1/u2), and so on and so forth, and I look at the minimum of theta1, theta2, theta3, ..., the expectation value of that is actually 1/(alpha1 + alpha2 + alpha3 + ...). In fact, it's even better than that: the full distribution of this variable, theta-bar, is simply (alpha1 + alpha2 + ...) e^(−(alpha1 + alpha2 + ...)·theta) dtheta. Okay, I'm saying something quite interesting here. Usually, if you look at the distribution of the minimum of two random variables drawn from their own distributions, the answer is some ridiculous thing to calculate. But if the individual variables are drawn from this exponential distribution, then the answer is very simple: the minimum itself is exponentially distributed. Can somebody give me an intuitive reason for this? If you think about it correctly, this is actually quite simple. Yes? Yes, in a way. Well, it's not because they're independent. Let me ask you a simpler question: suppose all these were uniform distributions, and I took the minimum of n uniform random variables. What distribution would that have? It's not a uniform distribution, right? So distributions in general don't behave themselves under this kind of transformation; it'll be some crazy thing.
Yeah, so the reason this works out so well is because it arose, remember, from a process. Remember how we calculated the distribution of waiting times: I said some event happened, and in the intervening period the chance that nothing happens is 1 − alpha·dt, and the chance that something happens is alpha·dt. That's how we derived this waiting time distribution. Now suppose I added other processes, which is essentially what this is: I'm adding many, many processes, and I'm taking the waiting time to the first one. If you didn't have color vision, all these would just look like white arrows, and what you have in the end is just a single Poisson process with a total propensity which is the sum of all the individual propensities. And therefore the waiting time for the first event is just like the original one, except that the total propensity is all added together. So this is quite interesting, right? Okay, are there any questions about this? So there is an algorithm called the Gillespie algorithm, which is an exact stochastic simulator, and which actually uses this idea. And I don't know why it's considered better than the thing I'm saying; the thing I'm saying is, for me, very straightforward: write down all the reactions that could happen, draw a random time when you think each of them is going to happen, and just take the minimum. Instead, the Gillespie algorithm uses fewer random numbers, because in 1976 random numbers were sort of expensive.
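The claim above, that the minimum of independent exponential waiting times is itself exponential with rate equal to the sum of the individual rates, is easy to check numerically. A quick sketch of my own:

```python
import math
import random

rng = random.Random(3)
rates = [1.0, 2.0, 3.0]  # alpha_1, alpha_2, alpha_3
n = 100_000

# Minimum over independent exponential draws, one per rate
mins = [
    min((1.0 / a) * math.log(1.0 / rng.random()) for a in rates)
    for _ in range(n)
]
mean_min = sum(mins) / n  # expect 1 / (1 + 2 + 3) = 1/6 ≈ 0.1667
```

A histogram of `mins` would likewise match the single exponential with rate alpha_1 + alpha_2 + alpha_3, not some complicated composite.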
They aren't anymore. So the Gillespie algorithm, all it does is calculate the waiting time for anything to happen, which is given by a sort of pseudo-reaction where you add up all the propensities of all the sub-reactions. And once you decide when anything is going to happen, after that you have to decide which thing happened, and which thing happens is chosen in proportion to its own rate, its own propensity. So if you're going to read about the Gillespie algorithm, you'll find that it's phrased in that way: first, find out when anything happens, and secondly, find out which thing happens. I don't find that to be a particularly interesting way to look at things. I prefer that you look at things my way, which is: you write down all the creation and destruction events that could happen, you write down all the rates or propensities associated with each of them, all the F's and G's, you write down all the random times when they could happen according to this distribution, and then you just pick the minimum. And once you pick the minimum, you know which row you're talking about, and that row corresponds to, say, molecule 15 being destroyed; you take x15 and you make it x15 − 1, and you start the whole process again. Are there any questions about this? Okay, so in your homework, the problem that is graded, you're going to have to implement this thing. It's called the Gillespie algorithm. It's very straightforward; it's in fact the most straightforward way. It's essentially a Markov chain update of the stochastic system. So let me add that here: Gillespie, (1/alpha) log(1/u), is the little thing you have to keep in mind, right?
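The two-step phrasing of the Gillespie direct method, first when anything happens (using the total propensity), then which thing happens (in proportion to each propensity), can be sketched as follows. This is a minimal version for the same birth-death system, with parameters and names of my own choosing, not the homework solution:

```python
import math
import random

def gillespie(alpha, gamma, x0, t_end, rng):
    """Direct-method Gillespie simulation of a birth-death process.
    Step 1: waiting time for *anything* drawn with the total propensity.
    Step 2: which reaction fires, chosen in proportion to its propensity."""
    t, x = 0.0, x0
    while True:
        propensities = [alpha, gamma * x]  # [creation, degradation]
        total = sum(propensities)
        t += (1.0 / total) * math.log(1.0 / rng.random())
        if t >= t_end:
            return x
        if rng.random() * total < propensities[0]:
            x += 1  # creation event
        else:
            x -= 1  # degradation event

rng = random.Random(4)
final_states = [gillespie(5.0, 0.1, 0, 100.0, rng) for _ in range(300)]
mean_x = sum(final_states) / len(final_states)  # expect ~ alpha/gamma = 50
```

Note this version draws two random numbers per event regardless of how many reactions there are, which is the economy the lecture attributes to Gillespie's formulation.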
If you remember this, that's how you get the next waiting time, and if you remember that, everything else is easy. Okay, fine. Let me erase this and keep going. So right now I've given you four completely different ways to interpret this thing. One is a linear cascade of equations. The second is a partial differential equation, which you can solve using standard methods. The third is a collection of random updates that looks like an ordinary differential equation integrated numerically with this little random component. And the fourth is the Gillespie algorithm. The best way to do it, in my opinion, is the fourth way. The Gillespie algorithm is exact: it gives you the exact answer, limited only by the precision of your uniform random number generator. It never fails. Okay, any questions? Now suppose I'm not interested in generating an entire stochastic trajectory, but suppose I'm interested, for whatever reason, in looking at the steady-state distribution of some system. Okay, suppose I am; I could be. Then you don't have to go through many of these steps; you might actually be able to derive the steady state directly. Let me show you how for a couple of cases. So, case number one: mRNA synthesis. Remember I told you yesterday, you have a gene, and the gene gets transcribed into this molecule which is called messenger RNA. The way this happens, you know, is reasonably complicated. There's a machine called RNA polymerase which binds to the upstream element of the gene, and once it's triggered, it moves across the gene, takes about a minute to do it, and once it's done, it threads out the RNA, which gets released into the environment.
Okay, so what I'm going to keep track of is the moment the RNA molecule is actually released, and I'm going to count that as mRNA goes to mRNA plus 1: m goes to m + 1. Okay, now what happens to the messenger RNA? The RNA floats around, but with some probability per unit time another protein, a ribonuclease, it doesn't matter which, one they usually draw as a little Pac-Man, comes and chews up the RNA, and that happens with some rate gamma. And this, let's see, is some rate alpha. So let's write down a few equations. First of all, the equation if the RNA was not being created at all but was only being degraded: I have m RNAs in the system, each one has a probability per unit time gamma of getting degraded, and therefore the whole thing looks just like radioactive decay. So you get, in this case, dm/dt = −gamma·m; that's this piece. This gamma is a rate constant that gets multiplied by how many mRNAs are already in the system. So in this case, this is G: G is gamma times m. And what is F? F is just alpha. Okay, very simple, looks almost too simple. And if you were to plot F and G as functions of m, this is what G looks like, this is what F looks like, and they intersect at the value m* = alpha/gamma. Any questions? Okay, so this is the standard chemical kinetic description of this whole process. This is the central process that determines how genes are expressed in every cell on the planet, in your body as well, and it's actually a fairly good description of how it happens. I'll expand on it later, but the point is, now, what do we expect from this?
We expect that if the creation rate is constant, then the system will reach a steady-state number of mRNA where the creation and degradation rates equalize. If you are very far to the right here, then the degradation rate is much higher, so you lose mRNA molecules; if you're very far to the left, then the synthesis rate is much higher, so you gain mRNA molecules. So this is actually a stable fixed point. At the end of this class we're going to look at a system that is multistable or bistable, so it'll have several stable fixed points with unstable separatrices separating them. So this is the equation, right? Okay, so now I want to try and solve the full stochastic description of the same thing, and I can do that; it's just this equation, right? So it's d/dt of P(m), the probability that the number is m, equals minus (alpha + gamma·m)·P(m), plus alpha·P(m − 1), plus gamma·(m + 1)·P(m + 1). Just stare at this equation for a second and see how it corresponds to the general case that we derived earlier. The general case has these two terms which take away from this bin, and these two terms which add to this bin. F plus G is alpha + gamma·m; F(m − 1) is alpha, because alpha is independent of m; and G(m + 1) is gamma·(m + 1), because it's coming from a higher mRNA number, multiplying P(m + 1). Questions? Okay, so how would I solve for the steady state of this? Any guesses? Okay, let's set the left-hand side to zero. Then what happens? You get a recursion. How do you solve the recursion? Well, the recursion involves three levels, right?
So of course, one thing you could do is set P(0), set P(1), and then solve for all the others. That's one way, but there might be an easier way. Okay, so let's work out the equation. What do you get? You get minus alpha·P(m), minus gamma·m·P(m), plus alpha·P(m − 1), plus gamma·(m + 1)·P(m + 1), all equal to zero. I'm just writing the whole thing out, and then I'm going to move things around. So now I want to move things around so that it looks like a reasonable recursion. Can anybody suggest a good way to do this? How should I move things around to get a reasonable recursion? Oh, thank you. We could even use machine learning, but I'm saying there's a much easier way: just look at it. Yes, exactly: there are m's and m-minus-ones. The reason this is not a very nice recursion is that it mixes three different levels; a usual recursion just mixes one level with the next level. So all we need to do is massage this into a form that connects one level to the next level. So let's just see: you get minus alpha·P(m) plus gamma·(m + 1)·P(m + 1) on this side, and that's equal to minus alpha·P(m − 1) plus gamma·m·P(m). Sorry if you can't see this; I'll write it again later. What it says is that this function h(m), which is gamma·(m + 1)·P(m + 1) minus alpha·P(m), has the property that h(m) = h(m − 1). You just have to solve it this way, right?
There's no insight to be gained here; but now you have a very nice recursion, because if h(m) = h(m − 1), then the whole thing must be equal to h(0). And if you realize what these terms are actually saying, remember: zero, one, two, three, and so on. This term is gamma times one times P(1), and this term is gamma times two times P(2); this term is just alpha times P(0), and this term is alpha times P(1). So what is this h? This h is just the net flux; it's the net flux moving between two neighboring levels, the difference of the two, maybe up to a minus sign. h negative means your alpha is winning; h is the net flux moving in the leftward direction. So what is the value of h here, at the boundary? Zero. Since you can't cross the left boundary, h must be zero when evaluated at the zero term: there's nothing coming in from the left, and there's certainly nothing going back. So that means all of these are equal to zero, and that's how we managed to convert a three-level recursion relation, via a two-level recursion relation, into a simple solution for the steady state. If this thing is equal to zero, then we know that alpha·P(m) = gamma·(m + 1)·P(m + 1). All right, so let's just work it out completely: alpha·P(0) = gamma·1·P(1); alpha·P(1) = gamma·2·P(2); alpha·P(2) = gamma·3·P(3); and so on. So I can start anywhere and work my way up or down the ladder. Let me go upwards, very explicitly: P(1) = (alpha/gamma)·P(0); P(2) = (alpha/gamma)²·(1/2)·P(0); P(3) = (alpha/gamma)³·(1/6)·P(0); and so on. I'm just reading it off.
So does that look familiar? That means, in general, P(m) = (alpha/gamma)^m · P(0) / m!. I'm just going through the math; this should look familiar because it's just the Poisson distribution. How do I find the value of P(0)? I know that the sum from m = 0 to infinity of P(m) must equal one, and therefore P(0) = e^(−alpha/gamma), because that sum is just the series expansion of the exponential. So at the end of the day, what we have in steady state is P(m) = (alpha/gamma)^m / m! · e^(−alpha/gamma). So this is interesting, actually very interesting, because it's unexpected: this distribution happens to have the same mathematical form as that distribution, but it's derived in a completely different way. It's derived through a series of, you know, recursion relations, zero steady-state flux, and all kinds of stuff, working the ladder up. Okay. So this puzzled me for years. Why is that? This Poisson distribution, remember how we derived it, or how I failed to derive it in the morning: that Poisson distribution is just the limit of the binomial distribution; it can be derived using high-school methods. Is this Poisson distribution the limit of some kind of binomial distribution? It's not; it comes from some other crazy thing, right? But whenever you see the same mathematical form appear in two different places, you might want to tease apart why that happened. So it turns out the reason this is happening is pretty fascinating, and I won't prove it to you now, but you can read a paper I've written about it. So remember, here the assumption is that the mRNA is being chewed up by this Pac-Man protein, and the chewing-up process is happening with a constant probability per unit time.
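The steady-state result P(m) = (alpha/gamma)^m e^(−alpha/gamma) / m! can be checked against a long stochastic simulation of the same birth-death process. A sketch of such a check, with parameter values chosen by me for illustration:

```python
import math
import random

def poisson_pmf(m, lam):
    """Analytic steady state: P(m) = lam^m e^(-lam) / m!."""
    return lam ** m * math.exp(-lam) / math.factorial(m)

# Time-averaged occupancy from one long Gillespie run
rng = random.Random(5)
alpha, gamma = 5.0, 0.5  # mean copy number alpha/gamma = 10
t, x = 0.0, 0
occupancy = {}
while t < 20_000.0:
    total = alpha + gamma * x
    dt = (1.0 / total) * math.log(1.0 / rng.random())
    occupancy[x] = occupancy.get(x, 0.0) + dt  # time spent in state x
    t += dt
    x = x + 1 if rng.random() * total < alpha else x - 1

p_emp = {m: w / t for m, w in occupancy.items()}
mean_emp = sum(m * p for m, p in p_emp.items())
# p_emp[m] should track poisson_pmf(m, 10.0) for each m
```

The fraction of time the trajectory spends at each copy number converges to the Poisson weights derived on the board.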
So look at the mRNA lifetime τ_mRNA: how long will an mRNA survive before it gets chewed up? For a single mRNA molecule the expectation value is 1/γ, and the lifetime distribution of mRNA is exponential with mean 1/γ. Now, this steady-state equation turns out to be exactly correct if you write its mean as α·⟨τ_mRNA⟩, where in this case ⟨τ_mRNA⟩ happens to be the very simple value 1/γ. Let me try to explain this to you, and then you can decide whether it's surprising or not. What is actually going on here? Suppose I started my system at some time in the past, and I'm observing the present, t = 0, and mRNAs have been created at various random times. Creation is a Poisson process, because it has a propensity of α, so the average waiting time between creation events is about 1/α, and we expect the number of creation events in a time interval t to be α·t, with Poisson variation around that. Now, what is the steady-state distribution actually measuring? It's measuring the number of creation events in a time interval whose length is the average mRNA lifetime. Why would that be? For the moment, suspend what I told you about how mRNA is killed. Suppose instead, and it could happen, in fact it probably does, that every mRNA, as soon as it's created, survives for exactly 10 minutes and is then degraded. Suppose that's the case. Is the system Markov anymore? It's not, because you have to know how long each mRNA has been alive.
You can't just count how many mRNAs there are. So this is a very different kind of stochastic process from the one I showed you, but humor me for a second: every mRNA, as soon as it's born, survives for the same fixed time. (And these days you can literally look at a cell in a microscope and watch these kinds of events happen.) So let me ask: if I'm observing at t = 0, which mRNAs do I care about? Only the ones still alive, say these three, and none of the others. The others might as well never have been created. So what is the window in which I could possibly catch an mRNA at t = 0? From minus the mRNA lifetime up to 0 (and in this case the lifetime is exact, not just an expectation). If an mRNA was created during a window of that size, I see it today. And I know exactly how many mRNAs are created in that window, because creation is a Poisson process: the answer is a Poisson distribution. A Poisson distribution on the time axis becomes a Poisson distribution on the copy-number axis. That's why you get this equation. Well, it turns out, and this is really fascinating, that even if the mRNA lifetime distribution is some arbitrarily complicated thing, as long as it has a finite mean, it doesn't matter how complicated it is.
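This fixed-lifetime thought experiment is easy to simulate: create molecules by a Poisson process of rate α over a long past window, let each survive exactly τ, and count the survivors at t = 0. A sketch (α = 4 and τ = 2.5 are illustrative, so the expected count is α·τ = 10):

```python
import random

def survivors_at_zero(alpha, lifetime, horizon, rng):
    """Creation is a Poisson process of rate alpha on [-horizon, 0];
    every molecule survives exactly `lifetime`.  Count molecules
    still alive at t = 0, i.e. those born inside [-lifetime, 0]."""
    t, count = -horizon, 0
    while True:
        t += rng.expovariate(alpha)        # next creation event
        if t >= 0:
            return count
        if t > -lifetime:                  # still alive at t = 0
            count += 1

rng = random.Random(0)
alpha, lifetime = 4.0, 2.5
samples = [survivors_at_zero(alpha, lifetime, 10.0, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# Poisson signature: mean and variance both close to alpha * lifetime = 10.
```

The degradation here is maximally non-Markovian (a deterministic lifetime), yet the copy-number statistics still come out Poisson.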
It can even be changing with time; as long as the lifetime has a finite mean, this equation is still valid. OK, so here is an interesting little piece of history about the gene-expression literature. When people first measured the distribution of mRNAs in single cells, in the early 2000s, they found that the distribution of mRNA really was a Poisson distribution, and they immediately said: this proves that mRNA synthesis and decay are themselves Poisson, Markovian processes. In fact that deduction is not valid, because even a highly non-Markovian, highly complicated mRNA decay process gives exactly the same steady-state distribution. So this is very interesting, and for the proof I'll leave you to read a paper of mine, written a few years ago, called "Universal Poisson statistics of mRNAs", in Biophysical Journal; maybe I'll email it to you so you can read it. Fine, any questions about this? OK, so let's keep going. So what's the mean of this distribution? The mean value of m is α/γ, and the variance of m is also α/γ, because we know it's a Poisson distribution. So indeed the mean value is α/γ, which is kind of neat. So let me step back a little bit and ask, under standard chemical kinetics... yes? [A question about the proof.] It's done with integration by parts; you can read the paper, I don't want to get into it now, it's beautiful. Catch me later in some free time and I'll derive it for you; it would take me too far aside. OK, so, one important matter. Suppose I measure the steady-state mRNA distribution in single cells. One of the things we know is that it's centered around m*, which is α/γ.
We also know it has the shape of a Poisson distribution. So one interesting thing to ask is: what is the relative spread? We know that √⟨δm²⟩ / ⟨m⟩ = 1/√(α/γ) = 1/√⟨m⟩, which goes to 0 as ⟨m⟩ goes to infinity. So there's a sense in which standard chemistry is recovered in the limit of large numbers, because all your distributions, in relative terms, collapse around their deterministic expected values. And this statement is true not just at steady state but, in a sense, at other time points too, as long as the initial history has been erased and those time points also involve large numbers. In other words, everything they taught you in chemistry with these kinds of equations is not wrong, mostly. It fails in certain pathological cases, but mostly these equations capture what's going on, because most of the distribution is not just centered on the deterministic value you would have expected but bunched very narrowly around it. So that's the first way in which the deterministic law emerges; call it the emergence of the deterministic law. But in the cases of interest in cell biology you're never in this large-number limit, so in what sense does the deterministic law emerge there? That's a reasonable question, and now I'm going to show you another way the same thing happens. For now, let's not go to steady state. Let's take this equation absolutely seriously and ask what happens to the expectation value of m as a function of time, if I ran this system again and again and again. If I start off with, say, m = 0 at t = 0 and I run it, I get something. I run it again, I get something else. I run it again.
I get something else. After doing this experiment many, many times, I take the expectation value at all times. The question is: what does that blue curve, ⟨m⟩ as a function of time, look like? Any ideas on how we can understand just that blue curve from this equation? Remember, this is an infinite stack of equations, and the blue curve is just one variable as a function of time. Right: just multiply it out. What we want is ⟨m⟩ = Σ m·P(m), summed from m = 0 to infinity. In fact, if I can abuse the notation a little (it's not really an abuse), it's also the sum from m = −∞ to ∞, because P(m) is zero for m = −1 and all negative m, so I haven't added anything. So I need to know what happens to that sum, and I'm going to multiply the master equation by m and sum. Let me erase this. Help me out here, because I'm very close to the board, but let's see how it goes. You get d/dt Σ m·P(m) (I'll drop the limits of the sum; they're −∞ to ∞) equals, doing all the pieces separately: minus α Σ m·P(m), minus γ Σ m²·P(m) (because there's already an m there and I'm multiplying by m again); then plus α Σ m·P(m−1) (I'm going to run out of space, so let me hope for the best; and that's P(m−1), not P(−1)); and the last piece, plus γ Σ m·(m+1)·P(m+1). So far so good; this is all completely correct and exact.
A hint: this kind of thing is going to be a question in your exam. Your exam is going to contain a very simple reduction of this problem: working out what happens when α = 0 and you just have γ. That's just radioactive decay, and you're going to have to do tricks like this to work out the answer. Very simple. So now, this is not very nice yet. This first sum I know: it's just the expectation value of m. That one I don't really know; well, it's the expectation value of m², but fine. And I don't know what to do with those last two sums. So what should I do? If you've seen this before, that's fine; if you haven't, I want somebody who has not seen this before to try and tell me what to do. You haven't seen it? Very good. The problem I'm trying to articulate is: when you write down these master equations, this is the variable whose behavior we are trying to calculate. It's written as a certain sum, a very simple sum, and I recognize that sum here, but I don't recognize it anywhere else. So the goal is: can we get a closed-form equation for just this quantity? How do I massage the right side of the equation so that it contains only terms that look like it? Change the dummy variables, right. So this term is in fact α Σ (m+1)·P(m): I just shifted the dummy index by 1, and since the limits run from −∞ to ∞, it doesn't matter. And that term is γ Σ m·(m−1)·P(m). OK, so now I'm in good shape. It turns out that what I have is: d/dt of the expectation value of m, which is this, equals: well, the −α Σ m·P(m) cancels against the m-part of α Σ (m+1)·P(m), and the −γ Σ m²·P(m) cancels against one of the pieces of γ Σ m·(m−1)·P(m), right?
Because m·(m−1) is m² − m, the γ Σ m²·P(m) term cancels, and the only thing you're left with is α Σ P(m) minus γ Σ m·P(m). And if you stare at that for a millisecond, you'll realize the first sum is equal to 1, because the distribution is normalized, and the second is ⟨m⟩. So the whole thing reads d⟨m⟩/dt = α − γ⟨m⟩. We went through a lot of circus to get this very simple result: over there I wrote dm/dt = α − γm, and here I have d⟨m⟩/dt = α − γ⟨m⟩, but I've taken you through a long arc. This is a sort of guarantee: if you're interested only in the expectation value, it's going to obey your original deterministic equation. But this doesn't always work. It only works because these terms are linear in m: α trivially, γm linearly. If all the terms of your equation are linear in the state, then the expectation value obeys the deterministic law. That's an interesting thing. Secondly, you can actually work out what happens to all the other moments. Now, there's no guarantee that the other moments have closed-form equations like this; in general, the equation for a higher moment requires knowledge of still higher moments. It's only in very special cases that you can write down closed-form equations for the moments of the distribution, even though you have the entire master equation sitting right in front of you. Any questions? I find this to be quite nice, because it's another sense in which the deterministic law emerges from the underlying stochastic chemical kinetic process. It's another reason your chemistry teachers didn't lie to you: they just didn't tell you they were taking an average. OK, fine, let's keep going; a couple more things to do before I wrap up. Can I erase this?
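The claim that ⟨m⟩ obeys d⟨m⟩/dt = α − γ⟨m⟩ can be checked by averaging many Gillespie runs and comparing with the deterministic solution ⟨m⟩(t) = (α/γ)(1 − e^{−γt}) for m(0) = 0. A sketch with illustrative parameters:

```python
import math
import random

def gillespie_mean(alpha, gamma, t_end, n_runs, rng):
    """Average copy number at t_end over n_runs Gillespie trajectories
    of birth (propensity alpha) and death (propensity gamma * m)."""
    total = 0
    for _ in range(n_runs):
        t, m = 0.0, 0
        while True:
            rate = alpha + gamma * m
            t += rng.expovariate(rate)
            if t >= t_end:
                break
            if rng.random() < alpha / rate:
                m += 1                      # birth event
            else:
                m -= 1                      # death event
        total += m
    return total / n_runs

alpha, gamma, t_end = 3.0, 1.0, 2.0
sim = gillespie_mean(alpha, gamma, t_end, 20000, random.Random(1))
det = (alpha / gamma) * (1.0 - math.exp(-gamma * t_end))  # deterministic law
```

The ensemble average tracks the deterministic curve at all times, not just at steady state, which is exactly what the moment calculation guarantees for linear propensities.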
You'll need to use this trick in your exam: you're going to do the same thing, except you'll multiply by m², do all these cancellations, and see what happens. I'm giving you the answer, guys. OK, so I'm now going to talk about a very important test case for this kind of modeling approach, because it has been applied to real genetic systems, in bacterial cells and human cells and various others, and it has stood the test of time. This is the idea of a bistable switch. The bistable switch, or flip-flop, or two-state system; you can call it many things, except this one is made out of genes. In order to understand it I'm going to develop the model a little bit. I'm going to derive the deterministic equation for you, and your graded homework problem is working out the stochastic consequences of the same system. So the bistable switch has the following property. If this axis is x, these curves are f(x) and g(x), and wherever f(x) = g(x) you have a deterministic steady state: creation balances degradation. Here, for example, there are three points where that happens. Now, in the zone all the way to the right, degradation is higher than synthesis, g above f, so you're going to move to the left. In the intermediate zone, synthesis is higher than degradation, so you move to the right. In this zone, degradation is higher than synthesis, so you move to the left, and maybe there's a tiny little zone here where you move to the right. So this is a stable point, this is a stable point, and this is an unstable point. How many of you have seen this kind of thing before? And how many of you have not? Enough people.
OK, so I'll spend a little time talking about it. There could be a parameter that controls the shape of this f curve, and depending on its value, a lot of things can happen to the picture. For example, I can draw a series of curves with the following property: initially the curve intersects g only once; then maybe it gets a little closer; then it's exactly tangential; then it crosses three times; then it becomes tangential again; and finally it crosses only once, on the other side. So f depends on some unknown parameter (κ looks too much like x, so call it φ), and as I increase φ the system goes from one stable fixed point; to one stable fixed point plus one critical point; to two stable fixed points with one unstable point between them; to one stable point plus one critical point again; to a single stable fixed point. Another way to draw this (remember, this axis is x) is to plot, as a function of φ, the values of x that are fixed points. Take φ values one through six. At values one and two there's only one, low, value of x. At value three it's interesting: that low value is still there, but a new one has just emerged. At value four there are three values. At value five there's one value down here and one up here, and at value six there's only one value, up high. Now, it looks a little odd, but if you stare at it, trust me, this is the curve you're going to get. It's just the solution of f − g = 0 as a function of φ; in general that's a nonlinear equation, and it can have multiple roots.
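Counting how many times f crosses g as the parameter varies can be sketched numerically. The Hill-type synthesis term below is only an illustrative choice (not the form derived later in the lecture), with the basal rate playing the role of the control parameter:

```python
def fixed_points(basal, beta, K, gamma, xs):
    """Return approximate roots of f(x) - g(x) on the grid xs, where
    f(x) = basal + beta * x^2 / (K^2 + x^2)   (illustrative Hill form)
    g(x) = gamma * x."""
    h = [basal + beta * x * x / (K * K + x * x) - gamma * x for x in xs]
    roots = []
    for i in range(len(xs) - 1):
        if h[i] == 0.0 or h[i] * h[i + 1] < 0.0:   # sign change => crossing
            roots.append(0.5 * (xs[i] + xs[i + 1]))
    return roots

xs = [0.01 * i for i in range(2001)]                # grid on [0, 20]
bistable = fixed_points(0.2, 10.0, 4.0, 1.0, xs)    # expect three crossings
monostable = fixed_points(4.0, 10.0, 4.0, 1.0, xs)  # expect one crossing
```

Scanning the basal rate between these two values traces out the saddle-node picture: two of the three crossings merge and annihilate.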
These are its multiple roots. The middle branch traces the path of the unstable fixed point, and the outer two trace the paths of the stable fixed points; this is called a saddle-node bifurcation. In the intermediate zone of φ the system is bistable; in the outer zones it's called monostable. Sometimes these are called the low and the high fixed points, low and high measuring the amount of x there is in the system. OK. Now I'm going to do one of those rotations again: I put φ on the horizontal axis and x on the vertical axis, for one of these curves. The deterministic expectation is: if you start off at some value of φ over here, you're all sitting up on the high branch, and as you move the value of φ down, you stay up there, because you have no reason to access the other fixed point; and then there's a catastrophe, boom, you drop down. Moving in the other direction, you stay on the low branch all the way to here, there's a catastrophe, and you jump up. This is called hysteresis. Another way to think about the same thing is a sort of energy-well representation, where you might want to write dx/dt as a gradient. And this is where I'm going to kill myself for using φ, so let me not use φ for the control parameter; somebody give me a letter quickly, I can't think of any. What letter have we not used? q?
Oh, we've used q; give me some other letter. The letter c? OK, fine, no problem: control parameter c. Very good. So you might want to write down an equation of this type: dx/dt = −dφ/dx. What is this? This is the behavior of a system in a potential well, but in a highly viscous medium: the behavior of a non-inertial system in a potential well. In other words, it's just a ball rolling down a hill in a highly viscous medium. It has no inertia, so it's not going to slosh around at the bottom of the hill. Now, the potential wells that correspond to the sequence of states above: in the first case the potential has a well on the low side and nothing on the high side; then a well on the low side with something starting to happen on the high side; then the low well flattens; now there are two wells; then the low well goes away; one well; and finally only the high well. The points where the wells have troughs are the stable states of the system, and the points where there are peaks are the unstable states: the unstable state corresponds to the peak between the wells. So I can even draw that green curve again: if I drew many, many of these potentials and traced them through, you would find the same sort of picture. The deterministic steady states can be thought of as the troughs of a potential, and the unstable fixed point that separates them as the peak. If a ball starts off on the peak, it's going to fall either to the right or to the left; but if it's sitting in one of the troughs, it has no reason to jump over to the other side. This is the deterministic case. It's called a two-state system, a bistable system, a double-well potential.
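The no-inertia picture is just overdamped gradient descent, dx/dt = −dφ/dx. A sketch with an illustrative double well φ(x) = (x² − 1)², which has minima at x = ±1 and a peak at x = 0 (this is not the gene circuit's actual potential):

```python
def relax(x0, dphi, dt=1e-3, steps=20000):
    """Forward-Euler integration of the overdamped dynamics
    dx/dt = -dphi/dx: a ball rolling downhill in a viscous medium."""
    x = x0
    for _ in range(steps):
        x -= dt * dphi(x)
    return x

dphi = lambda x: 4.0 * x * (x * x - 1.0)    # phi'(x) for phi = (x^2 - 1)^2

left = relax(-0.3, dphi)    # starts left of the peak, settles in the left trough
right = relax(+0.3, dphi)   # starts right of the peak, settles in the right trough
```

Two starting points on opposite sides of the peak end in different troughs, and deterministically nothing ever crosses back; that is the hysteresis.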
It has many, many names. OK, fine. And hysteresis happens because once you're stuck in a well, there's no force in the deterministic world that's going to push you over to the other side, unless you move an external control parameter. OK, so let's see how to treat this using the stochastic representation. In the same way that I solved the steady state of the master equation, I'm going to solve the steady state of the Fokker-Planck equation, for general f and g. So let's do it the usual way. Setting the time derivative to zero, I get

−d/dx [ (f − g) p ] + (1/2) d/dx [ d/dx ( (f + g) p ) ] = 0.

(I wish this board were many times larger; let me try.) I've just set the Fokker-Planck equation to zero, in the same way that I had set the master equation to zero; so far it's the same move. But I've deliberately kept one derivative outside the bracket, because I want you to interpret this as the derivative of a flux: as in Fick's laws of diffusion, the quantity inside is the flux J moving through the system. And we already knew from the master-equation argument that the net flux must be zero, because nothing exits or enters from the left side. So the flux itself must be zero, and you get the following:

(f − g) p(x) = (1/2) d/dx [ (f + g) p(x) ].

OK. Now what do you do? Painful, painful. By the way, these partials are now ordinary derivatives, because time has gone to infinity and everything is just a function of x.
You're in steady state. Now I'm going to define (f + g)·p as a new variable, q(x). The left-hand side then becomes [(f − g)/(f + g)]·q(x), so the equation reads

[(f − g)/(f + g)] q(x) = (1/2) dq/dx.

Have I missed any minus signs? I think not; let's hope for the best, and let's hope this q doesn't look like a φ. OK, how do you solve this thing? Rather trivially, because you get 2(f − g)/(f + g) on the left side, and on the right side d/dx of the natural log of q: moving q to the other side gives you a logarithmic derivative. Cool, any questions? Ah yes, what's the reason to set the flux to zero? Remember H was zero earlier: H was the difference of systems passing left and right, and you can't cross the left boundary at zero, you can't go below zero molecules. But that matches the flux from one to zero, because everything is in steady state. There are many kinds of steady states, but this is a zero-flux steady state. It's also like what you knew about detailed balance, where everything balances exactly; but here it's a zero-flux steady state not for the detailed-balance reason, simply because it's chemistry and you can't cross into negative numbers. OK, so this is easy. How do you solve it? You integrate it.
So you get q(x) = e raised to the integral of 2(f − g)/(f + g) dx, and p is just that divided by f + g. Let me write it down; in fact, this holds not just for the bistable switch but for any kind of f and g. In steady state,

p(x) = [1/(f + g)] · exp( ∫ 2 (f − g)/(f + g) dx ).

Now I want you to stare at this briefly. You might ask what the limits of the integral are. Think of it as an indefinite integral: the indefinite integral brings an overall constant out front, and that constant is used to normalize the distribution. You derive it by requiring that the integral of p equal 1. So don't worry about the constant; I'll just write it as an A out front. This is then a correctly normalized probability distribution, and it depends on certain interesting things. OK. Now, remember how I get the potential-well picture? If dx/dt = −dφ/dx, then φ must be equal to −∫(f − g) dx. That's how you get a potential: if φ is minus that integral, then −dφ/dx is f − g itself.
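The steady-state formula can be evaluated on a grid for any one-variable f and g. A sketch for the linear case f = α, g = γx (α = 20 and γ = 1 are illustrative; the exponent is accumulated by the trapezoid rule, and the overall constant is fixed by normalization, exactly as described above):

```python
import math

def fp_steady_state(f, g, xs):
    """p(x) ~ (1 / (f + g)) * exp( integral of 2 (f - g) / (f + g) dx ),
    with the indefinite integral accumulated by the trapezoid rule and
    the overall constant fixed by normalization on the grid."""
    dx = xs[1] - xs[0]
    p, integral = [], 0.0
    prev = 2.0 * (f(xs[0]) - g(xs[0])) / (f(xs[0]) + g(xs[0]))
    for x in xs:
        cur = 2.0 * (f(x) - g(x)) / (f(x) + g(x))
        integral += 0.5 * (prev + cur) * dx
        prev = cur
        p.append(math.exp(integral) / (f(x) + g(x)))
    z = sum(p) * dx
    return [v / z for v in p]

alpha, gamma = 20.0, 1.0
xs = [0.1 * i for i in range(1, 500)]            # grid on (0, 50)
p = fp_steady_state(lambda x: alpha, lambda x: gamma * x, xs)
dx = xs[1] - xs[0]
mean = sum(x * v for x, v in zip(xs, p)) * dx
var = sum((x - mean) ** 2 * v for x, v in zip(xs, p)) * dx
# Both should land near alpha / gamma = 20, the Poisson-like signature.
```

Swapping in a bistable f and g gives the double-peaked steady state of the switch, which is essentially the graded homework problem.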
So this is correct. And look: this exponent looks a bit like a potential. I can even write a minus sign inside and a minus sign outside, two minuses, and the whole thing looks like some e^{−φ(x)}. So it's starting to look a bit like the well-known, very familiar Boltzmann distribution. But where's the temperature? The temperature is evidently coming from this f + g term. Now, I'm not saying this is exact; it merely looks like a potential term together with this other piece. So let me try to make the analogy a bit more exact, and I'm going to do it for a very specific case, which I erased (this is why I hate erasing stuff): dx/dt = α − γx. For this simple case f(x) = α and g(x) = γx: f looks like a flat line, g looks like a rising line, and the system is probably going to spend all its time near their crossing, because that's the steady state x* = α/γ. So in that zone, let's look at what f − g and f + g are doing. Around that value x*, f equals α, because it's always equal to α, and g is also equal to α, because at the steady state they're equal. And if you don't move too far from that zone, f + g is going to be about 2α; that part of the integrand becomes a constant. So the exponent becomes (2/(2α)) times ∫(f − g) dx, which with the minus sign is (1/α) times −∫(f − g) dx. But we already called −∫(f − g) dx the potential φ, so the exponent is exactly −φ(x)/α, and the distribution goes as e^{−φ(x)/α}, times some constant over 2α. One more constant doesn't make a difference, so you might as well call the whole prefactor 1/Z. OK, so this is very pretty. What I've shown is that in a certain regime, in this case a regime small enough that f + g doesn't change much over the domain of integration, the distribution of mRNA numbers looks very much like a Boltzmann distribution sitting in a potential, and the potential is precisely −∫(f − g) dx, with α playing the role of the temperature. So now let's work out further implications and actually plug in f and g: −∫(f − g) dx = −∫(α − γx) dx. Let me draw the diagram: if that is f and that is g, then φ looks like a parabolic well. If I expand around x*, writing x = x* + δx, then f − g = α − γ(x* + δx) = α − γ(α/γ) − γδx = −γδx. So in units of δx, the potential is φ = (γ/2)δx², and the distribution looks like e^{−γδx²/(2α)}; with the α downstairs, I'm just going to rewrite α/γ as x*. The f + g went away because we're saying that around the point where most of the probability lies, f and g are both numerically equal to α. Yeah, sorry, let me start again; which bit was unclear?
OK, so I'm just looking at this term. By observation, f and g are both equal to α around this point, so f + g becomes 2α, and I take the 2α out. Then I have (2/(2α)) times −∫(f − g) dx. Now, the limits of this f − g integral haven't been specified, so I'm free to shift the variable of integration; in particular, I'm free to integrate around the center point, which I know is α/γ. So I define a new variable δx, which measures deviations from α/γ, and I'm free to integrate in δx because it's just a linear change of variable with unit slope. In those coordinates, the value of f − g is just −γδx: it's just saying that around this point, f − g is a straight line through zero with slope −γ. And when I integrate, I get the potential φ = −∫(f − g) dδx = γδx²/2. So when I plug everything in, the whole thing becomes e^{−δx²/(2·(α/γ))}, times some constant. The α came from f + g; the factors of two cancelled out there; a factor of two came back when we integrated, and that's why it's sitting over here. So look at this. What is this?
This is a Gaussian distribution. The Gaussian has a mean of α/γ and a variance of α/γ, and you know what distribution has that mean and that variance: the Poisson distribution. So this whole rigmarole comes back and says that the formula exactly reproduces the large-number limit of the Poisson distribution: a Gaussian with the correct mean and variance. So this is quite nice. Question: is this only valid around the peak? The probabilities are very low for all other values, so within the effective range of integration this is a very good approximation, and outside that range you don't even care. By the way, in general, if f and g are not linear, you can still argue that the stochastic distribution is tightly focused around some fixed point, linearize around it, and play the same trick; that's called the linear noise approximation. In this case the rates happen to be exactly linear. And you're right: if I had solved this exactly, I would have got something shaped more like a true Poisson distribution, with some skewness and some correction terms, because the Poisson is slightly skewed to the right. OK, fine. Good.
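The Gaussian-with-Poisson-moments claim can be checked by directly comparing the two densities at integer points (λ = 30 is an illustrative stand-in for α/γ):

```python
import math

lam = 30.0   # plays the role of alpha / gamma

def poisson(m):
    return lam ** m * math.exp(-lam) / math.factorial(m)

def gauss(m):
    # Gaussian with mean = variance = lam, evaluated at integer m
    return math.exp(-((m - lam) ** 2) / (2.0 * lam)) / math.sqrt(2.0 * math.pi * lam)

# Maximum pointwise gap over the bulk of the distribution
err = max(abs(poisson(m) - gauss(m)) for m in range(15, 46))
```

The residual gap is the skewness correction mentioned above: it shrinks as λ grows, since the Poisson's skew scales like 1/√λ.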
So what I want to say from all this is the following thing We took this master equation approach and from the master equation approach, we learned two things We learned that for the simple mRNA creation and destruction problem The solution is a Poisson distribution All right, we also learned that the expectation value of the mRNA number obeys The traditional deterministic equation We then Got more ambitious and we did the same thing for the Fokker-Planck equation because this allowed us to solve things Even when the coefficients were not linear in n We did that and we derived this very nice formula For the steady state distribution of an arbitrary chemical kinetics stochastic process In a single chemical variable This trick will no longer work once I have x and y and z Too bad, but for just x it works quite nicely Okay, and I derived it now to check my derivation I checked what it says about my favorite equation dx dt is alpha minus gamma x What do I get? I get exactly a Poisson distribution. I get the Gaussian limit of a Poisson distribution It's kind of cool, right? So this whole thing is reasonably accurate The homework problem that you you're given, okay, I'm going to derive a certain form for f and g You need to take that form you need to plug it into this equation and plot what the curve looks like And you need to normalize it Then you need to Run this kind of simulation Many many many times and see if you get the same histogram in steady state Right, then you need to run a Gillespie simulation many many many times and see if you get a Poisson and see if you get the correct steady state So your homework involves just using the many different descriptions. 
So your homework involves using the many different descriptions I've shown you, working out how closely they approximate each other, and you'll find that they're all pretty good. And one last piece of the homework, optional but very nice, is the following. Look at the situation when the system is in a double well. Deterministically it just sits here, but stochastically it's going to be bouncing around, and occasionally it can actually do this. So if you plot x as a function of time, it will sit around the low fixed point, then occasionally go up there, come back down, go up again, and so on. By doing one of these simulations you can actually work out the average time it takes for a heart cell to become a brain cell, because this is really how cells maintain their state of gene expression: it's an autocorrecting loop. So, to tie a ribbon on this and finish it off in the next 15 minutes, I'm going to show you how sigmoidal equations like this can arise in a real cell, and give you a picture of how this will work.
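That optional exercise can be sketched as follows. I have invented a bistable f(n) of the sigmoidal form derived at the end of this lecture, with parameters hand-picked (they are assumptions, not values from the lecture) so that the deterministic fixed points of f(n) = n sit at n = 4 (stable), 12 (unstable barrier), and 24 (stable); a single long Gillespie trajectory then visits both wells.

```python
import random

# Assumed bistable birth-death process: f(n) = a + b*n^2/(K2 + n^2), g(n) = n.
# Parameters hand-picked so that f(n) = n exactly at n = 4, 12 and 24.
a, b, K2 = 8.0 / 3.0, 112.0 / 3.0, 432.0

def f(n):
    """Synthesis propensity: basal rate plus cooperative activation."""
    return a + b * n * n / (K2 + n * n)

rng = random.Random(1)
t, n = 0.0, 4                 # start in the low well
lo, hi = n, n                 # track the extremes the trajectory visits
while t < 1000.0:
    birth, death = f(n), float(n)
    total = birth + death
    t += rng.expovariate(total)
    if rng.random() * total < birth:
        n += 1
    else:
        n -= 1
    lo, hi = min(lo, n), max(hi, n)
print(lo, hi)  # the trajectory visits both wells: lo near 4, hi near 24
```

With these deliberately shallow wells the hopping is fast; recording the times of the transitions gives the average switching time, and deepening the wells (scaling all the rates up) makes the residence times exponentially longer, which is the point of the exercise.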
So here's the general intuition; in fact I can even use this picture. You have a gene, and the product of the gene is some sort of output. Previously I was talking about that output as mRNA, but in this case I'm going to say that the output is directly the protein: the gene becomes RNA, the RNA becomes protein, and the intervening processes we are going to ignore. It's not really legitimate to ignore them, but just for the purposes of argument, humor me. This x has two binding sites, a and b, in the promoter region of the gene, and it can bind to both; well, let's see whether the binding is independent or not. So consider a system which is just the promoter, and I'm going to call the promoter D, for DNA. One molecule of x can bind to it to make DXa; that's if x occupies here. Another molecule of x can then bind to make DXaXb; that's if x occupies over here. Since these are stochastic processes, reversible equilibrium reactions, there will be some k-plus and k-minus for each step: k1+, k1-, k2+, k2-. And similarly it could go the other way, binding first to b and then to a, with rates k3+, k3-, k4+, k4-. You might have seen something like this before, you might not have, but this is back in the world of traditional chemistry. You can think of this system as a large number of molecules: lots of copies of D, lots of copies of x. A copy of x joins and makes DXa; another copy joins and makes DXaXb; a copy of x also leaves here, and similarly there; I'm not going to draw all of them. So how do you work this out? You need to make a few assumptions.
Let's assume that the concentration of x is so high that the bound fraction of x is very small; that is, the concentrations [DXa], [DXb], and [DXaXb] are all very much less than the total amount of x. Therefore I'm free to use the total amount of x as a constant in the system. Now, the way this development goes (this is the kind of thing you will see if you ever take a course on how gene expression is regulated; I'm just trying to compress all that to motivate the homework problem): we have the reaction D + x ⇌ DXa with rates k+ and k-, and at equilibrium the two rates balance, so [D][x] k+ = [DXa] k-. You've seen something like this before; not stochastic chemical kinetics but mass-action chemical kinetics, the traditional thing. You then write down versions of the same equation for all the other reactions. In general you find [DXa] = (k+/k-)[D][x], and I'm going to call k+/k- the equilibrium binding constant K1. Then you find [DXaXb] = K2 K1 [D] x². How did I get this? [DXaXb] is just (k2+/k2-)[DXa][x], and that ratio k2+/k2- is K2. Similarly you'll find [DXb] = K3 [D][x], and [DXaXb] = K4 K3 [D] x². The square brackets mean concentrations.
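To see mass action producing the equilibrium constant, here is a small forward-Euler relaxation of just the first reaction, D + x ⇌ DXa, with made-up rate constants; at steady state the ratio [DXa]/([D][x]) should come out equal to K1 = k+/k-.

```python
# Hypothetical rate constants and initial concentrations, arbitrary units.
k_plus, k_minus = 2.0, 0.5
D, X, DXa = 1.0, 10.0, 0.0   # x in excess, as assumed in the lecture
dt = 1e-4

# Forward-Euler integration of d[DXa]/dt = k+ [D][x] - k- [DXa].
for _ in range(200000):      # total time 20, long after equilibration
    rate = k_plus * D * X - k_minus * DXa
    D -= rate * dt
    X -= rate * dt
    DXa += rate * dt

print(DXa / (D * X), k_plus / k_minus)  # both should be 4.0 at equilibrium
```

The same balance, one equation per arrow, is what generates the four K relations above.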
That's how you do standard chemical kinetics. A few observations here. Since the same chemical species DXaXb has two different mathematical expressions, one being K1 K2 [D] x² and the other K3 K4 [D] x², K1 K2 must equal K3 K4. This turns out to be simply the implementation of detailed balance for this thermodynamic system. There's a whole other way to think about this: these are just different states of a thermodynamic system, there are free energies associated with each of them, and e to the minus the free energy is related to these K's somehow. If you've ever done Arrhenius theory, that's where all this comes up. So now we're going to take a certain limit. We know K1 K2 = K3 K4. Let's assume that K1 is very much less than K2, and K3 is very much less than K4. In other words, K1 and K3 are small, so the first binding reactions lie far to the left, while K2 and K4 are large, so the second binding reactions lie far to the right. If you work out what this means, you write down all the terms and keep track of one important constraint: D-total = [D] + [DXa] + [DXb] + [DXaXb]. Dividing through by D-total, these conditions mean that the singly bound terms [DXa] and [DXb] are very much less than the total. And it's easy to imagine why: the system starts in the empty state, and it's highly unlikely for the first molecule of x to bind.
It's highly unlikely, but as soon as that first molecule of x binds, it becomes highly likely that another molecule of x binds. This is the thing called cooperativity in protein binding: the binding of the first molecule has somehow changed the conformation, the internal state, of the second binding site, and made it much more likely for another copy of x to bind. Cooperativity is a very common feature of how these promoters work. All that to say that those intermediate terms can be neglected. So you find that D-total is just [D] + [DXaXb]. We already know the formula for [DXaXb], so D-total = [D] + K1 K2 x² [D] = [D](1 + K1 K2 x²). Therefore [D] = D-total / (1 + K1 K2 x²), and [DXaXb] = D-total K1 K2 x² / (1 + K1 K2 x²). Fine. So what do we have (how much time do I have, five minutes)? We have a formula, as a function of the free amount of x, for how much of the system is in each state. For low values of x the system is mainly in the unbound state, because x is in the bottom of the sigmoid, and for high values of x the system is basically in the fully bound state. Now I'm going to make one final step. If you're in state D, you create RNA with some rate nu-one, and if you're in state DXaXb, you create RNA with some rate nu-two. I'm also going to assume the volumes all get absorbed; look, these are concentrations, so there's a volume involved.
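The occupancy just derived is a sigmoidal (Hill-type, coefficient 2) function of x. A tiny numerical sketch, with an assumed value for the product K1 K2:

```python
K1K2 = 0.01  # assumed product of the two equilibrium constants

def fraction_bound(x):
    """[DXaXb]/D_total = K1*K2*x^2 / (1 + K1*K2*x^2)."""
    return K1K2 * x * x / (1.0 + K1K2 * x * x)

def fraction_free(x):
    """[D]/D_total = 1 / (1 + K1*K2*x^2); the two fractions sum to 1."""
    return 1.0 / (1.0 + K1K2 * x * x)

# Low x: mostly unbound. Half occupancy at x = 1/sqrt(K1*K2) = 10.
# High x: mostly bound.
print(fraction_bound(1), fraction_bound(10), fraction_bound(100))
```

The half-occupancy point 1/sqrt(K1 K2) plays the role of the threshold of the sigmoid.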
So I'm going to make a change of variables to the same variables we've been using all class, where I talk about molecule numbers in a fixed volume; it basically just amounts to changing the numerical values of K1 and K2. Therefore the rate of synthesis of whatever is being made here, which happens to be x, is nu-one plus nu-two K1 K2 x² over (1 + K1 K2 x²). You're synthesizing both from this state, which is the first term, and from that state, which is the second term, but the more x you have already, the more likely you are to be in the bound state. And then there's your standard decay. If you plot that, it looks precisely like this; this is f. This whole thing is just to say that in a real genetic system there are very many systems that behave exactly like this. In fact, cells are designed to behave like this: they set up the rates of synthesis and decay of some protein so as to create a double-well potential. If the wells are sufficiently deep, then once you stick a cell in a well it's going to stay there for a long time. However, in the limit of low molecule numbers there is noise, and you will occasionally transition from one well to another, which essentially amounts to a cell changing type. In our bodies those kinds of changes essentially don't happen, but in bacterial cells they do. You can probably keep a cell in a well for tens of cell lifetimes, but eventually it will flip out because of that noise. Tomorrow afternoon I'll show you some experimental data that supports this kind of picture. So there we go; I've told you a few things.
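That balance between sigmoidal synthesis and linear decay can be sketched numerically. The parameter values below are hypothetical, deliberately chosen so that f(x) = gamma x has exactly three crossings: stable states at x = 4 and 24 with the unstable barrier at x = 12 (the double well).

```python
# Assumed parameters: nu1, nu2, k = K1*K2 and gamma chosen so that
# f(x) = x has roots exactly at x = 4 (stable), 12 (unstable), 24 (stable).
nu1, nu2, k, gamma = 8.0 / 3.0, 112.0 / 3.0, 1.0 / 432.0, 1.0

def synthesis(x):
    """f(x): basal rate nu1 plus the cooperative, saturating activation term."""
    return nu1 + nu2 * k * x * x / (1.0 + k * x * x)

def decay(x):
    """g(x): first-order degradation."""
    return gamma * x

# Scan for fixed points, i.e. sign changes of f(x) - g(x).
fixed_points = []
xs = [i * 0.001 for i in range(1, 40001)]
prev = synthesis(xs[0]) - decay(xs[0])
for x in xs[1:]:
    cur = synthesis(x) - decay(x)
    if (prev > 0) != (cur > 0):
        fixed_points.append(round(x))
    prev = cur
print(fixed_points)  # the three crossings of synthesis and decay
```

Plugging this f and g into the steady-state formula from the morning gives the bimodal distribution, and simulating them stochastically gives the well-hopping trajectories discussed above.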
So, we're going to put 50 percent of the grade on one problem in the homework, which is problem four. Problem four involves taking this equation, plugging in the parameter values I've told you to plug in, plotting it as a curve and correctly normalizing it, then doing simulations using this and using this, to the best of your ability, and checking to what extent all these different ways of predicting the state of the system agree. That's the homework, and it counts for 50 percent. Then the exam has a few bits. The first bit is just using random number generators to generate other random numbers, like the first problem in the homework. Another question in the exam involves taking the master equation (just like I've done; I've erased it now) and calculating an equation for one of the moments: we calculated an equation for m, and I want you to calculate an equation for m squared. That's all; you just have to go through the same steps I showed you. Okay, fine. Thanks, I'll see you at the tutorial. Has everybody signed?