Okay. Thank you all for being here at this late point in the course. I know you've had a lot of material to digest so far. And here in the last few days, I'm hoping to introduce material that relates closely to some of the things that you've talked about already in this course. So we're going to talk about morphogens and pattern and growth and so on. And you've had a lot of exposure to those areas now. But my goal for today is to try to get you to think a little bit differently about some of the questions and some of the problems that arise in the context of those important developmental issues. So today's lecture is going to be a little bit different. It's not going to be so much focused on individual topics. We're going to get more to that tomorrow and the next day. But rather, I'm going to try to focus on a viewpoint that I hope to convince you is a very important viewpoint in approaching all kinds of biological systems, but in particular those biological systems that have to do with development. And that viewpoint comes from thinking about things in terms of control. So it's always good to start by making sure we know what our terms mean. And so we should begin by asking what I even mean when I say A controls B. And if you look in the literature, if you do a PubMed search and you look for papers that say A controls B, you will find thousands of them. But what they actually mean by that varies tremendously from one paper to another. So I've put down a sort of quick list of some of the things that A controls B often means when you look into the scientific literature and particularly into the biology literature. And sometimes that simply means that someone did a study in which they manipulated A and something happened to B. And so they say A controls B. Or sometimes they'll mean that A mediates B. Mediates is probably the softest word in all of biology. It really is a nice way of saying you have no idea what you're talking about.
Or A is necessary for B, or A is sufficient for B; that sounds much more rigorous. Or A is somehow an explanation for B. Sometimes people will say A regulates B. That has the sense that A does something more than just influence B. Everyone agrees with that. A coordinates B. That sounds even better. So the thing is, when you take all of these and sort of try to look at them in terms of Venn diagrams, you can say, well, the weakest one is just influence. A affects B. And you'll find lots of papers saying A controls B; that's all they really mean. And then what's the difference between A regulates B and A affects B? Well, some people will argue it has to do with quantitative effects. If small changes in A give small changes in B, then we can call it a regulator. And then we can call something a coordinator if it regulates, and it regulates relationships within B, so it kind of gets things to correlate with each other; that would be an example of coordination. So all of these are things that in the literature people call control. Except me. I will never use control to refer to any of those things, because I don't think control properly means any of that. To me, and to most of the community of engineers out there (although I'm not an engineer, I'm a biologist, but occasionally they allow me to come to their parties), A controls B means that A enables B to perform desired tasks and achieve desired goals reliably in the face of uncertainties and disturbances. This is completely different from any of that. There's a very, very different character to what we actually mean here. And one way to think about the difference between this and all of these things is to think about what I was on yesterday, which is the airplane. And if you think about all the mechanics of the airplane, the flaps and the engines and the wheels and the shape, all these things do all of these things.
The flaps regulate the lift versus the thrust, the engines regulate the speed, and there are systems that coordinate, and so on. But there is only one controller on this plane, and it's this person over here. Because he's the only one who's actually doing those things I had on the previous slide. He has goals in mind, and he's trying to steer the plane to follow those particular goals. So by that I mean that control is all about goals and purposes and strategies, and consequently control is engineering. And usually I say not physics. Although here I'll say not your usual physics, because I think technically physics includes everything. Anything about the world can properly be called physics. But this type of physics is about purposes and strategies. And that's something you don't see in the physical world outside of biology. When you're dealing with mountain ranges and planetary systems and weather and so on, you can't say what's the purpose of wind. It exists. Everything is about mechanisms and what follows from mechanisms, and that's some of what's beautiful about physics. But in biology, for reasons that primarily have to do with evolution and selection, biology is full of things that are there for a reason. You heard a lot about evolution in the last talk. So this distinction between thinking about things in terms of how you achieve goals, versus thinking of things in terms of how behaviors follow from the laws of physics, is an old argument among biologists and other scientists that has gone on for at least two centuries. And I just put up two quotes to sort of bookend this argument. This is the one that's very popular among physicists and molecular biologists. It comes from François Jacob back in '77. And he emphasized that it's not that helpful to think about biology in terms of engineering.
Because really the dominant force in biology is accidents or contingency and then the consequences of the physics, the limitations that follow from that and that this is a better way to look at biology. But in fact, a quarter century earlier, Paul Weiss, a very important developmental biologist in the middle part of the 20th century, he in fact argued quite the opposite. He said in particular, if you're interested in developmental biology, which he referred to as morphogenesis here, that in fact the performances of engineering were far more important than chemistry or physics. So you can see these are fighting words. And I don't want to get into fights with anybody. But one thing I will say is that in the present era, which some people would call the systems biology era, there's been a big move from here towards here. Philosophically many more people are beginning to ask questions in more of an engineering way. And of course, there is no right or wrong here. These are just views. But I think it's important in terms of operating in science today to be able to understand the merits behind both kinds of views. So let me give you an example, a very simple example of what I mean by thinking about the engineering in a very, very simple way and how that would really change the way you'd approach observations in a biological system. And I'll give you that example in terms of something you've heard about multiple times now. Several people have introduced the concepts of the morphogen gradient. You heard it from Frank Ulaker. You heard it from James Briscoe. The idea that you have graded expression of things and that imposes essentially positional values from which tissues can differentiate in particular ways. And we'll talk a lot about morphogen gradients tomorrow. But many people point to this very early paper by Lewis Wolpert back in 1969 as kind of the landmark paper in biology that got biologists to really pay attention to this idea of morphogen gradients. 
And there was an interesting comment in the very beginning of that paper that Wolpert made. He said it's been a great surprise and of considerable importance, at least he felt it was, that most embryonic fields seem to involve distances of less than 100 cells and often less than 50, right? Which I guess it is a surprise, right? Why don't you have morphogen gradients that pattern things that are this big? Instead they tend to pattern things that are on the order of 50 to 100 cells, rather short physical distances. And he simply put that out there in his paper. And no less than Francis Crick, in less than a year, decided he had the answer to this and dashed off a quick note which became a Nature paper. Back in the days when you could do some back-of-the-envelope calculations, if you were Francis Crick, and you would get a Nature paper out of it. And he said, oh, I understand the answer to this. It's simple. It has to do with the physics of diffusion, okay? And the basic argument behind that, and I assume most of you are familiar a little bit with this. What is this here? Fick's, okay, good, Fick's second law. So this is the basic physics of diffusion. And the idea is if you take a drop of something and you let it spread out over time and you follow the spread, you can see that the amount of time it takes to spread a certain distance goes with the square of the distance, right? That's a characteristic of free diffusion. And so consequently, to pattern twice as far takes four times as long to spread stuff out there; double the distance again, and now you're up to sixteen times, and so on. And so the idea, Crick argued, is that the reason why patterning happens at short range is it would take too long to spread a morphogen over a long distance. Now in order to actually show that in the paper, he couldn't simply appeal to the math behind this, which is just the spreading of an individual spot placed in space.
He used the prevalent model at the time for morphogen gradients, which is a source at one point and a sink at another, which at steady state makes a line. But if you solve that in pre-steady state, which was more cumbersome to do in those days than it is now, right, you can watch the evolution of that line, right? And then you can ask the rate at which the sort of 50 percent point, 50 percent of the steady state value moves out across the tissue or across the field that you're modeling here. And you can see indeed it goes with the square root of time, up until it levels off now when you get close to the steady state. So based on that and then doing some back of the envelope calculations involving what seemed to be reasonable diffusion coefficients, he came up with the idea that in order to pattern it more than say 50 or 100 microns would just take too long for most embryos. And that's why things were short. And that's to this day still cited as a reason. And it's cited as a great example of applying physics, thinking, and biology. The only problem with it is it's wrong. And it's wrong for two reasons. And one of the reasons is very understandable. And it's not really Crick's fault. In those days no one had seen a morphogen gradient. And so people drew them as a source-sink model, just a straight line like that. And indeed that model does evolve with the rule that time goes with the square of distance. But in fact, as we'll discuss a little bit tomorrow, morphogen gradients aren't shaped like that because they're not formed by simple diffusion in free space. And consequently they don't evolve according to the same time law. Crick didn't know that. But the second reason why the answer is from a theoretical standpoint wrong is that it was based on an assumption. And the assumption was the usual assumption that, you know, why put something into a system unless you have to, right? 
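Crick's source-sink calculation is easy to reproduce numerically. Below is a minimal sketch, with illustrative parameter values that are my own choices rather than Crick's actual numbers: an explicit finite-difference solution of Fick's second law on a 100-micron field, with the concentration clamped at a source on one end and a sink at the other, tracking how far the "50 percent of steady state" point has travelled as a function of time.

```python
import numpy as np

# Illustrative numbers only (not Crick's): a small molecule diffusing
# across a 100-micron field, source clamped at one end, sink at the other.
D = 1e-7                 # diffusion coefficient, cm^2/s
L = 100e-4               # field length, cm (100 microns)
N = 200                  # grid points
dx = L / (N - 1)
dt = 0.4 * dx**2 / (2 * D)   # stable explicit time step

c = np.zeros(N)
c[0] = 1.0                                # source boundary
steady = 1.0 - np.arange(N) / (N - 1)     # source-sink steady state: a line

front, times = [], []
t = 0.0
for step in range(40000):
    # Explicit update of Fick's second law: dc/dt = D * d2c/dx2
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[0], c[-1] = 1.0, 0.0
    t += dt
    if step % 400 == 0:
        # furthest grid point that has reached 50% of its steady-state value
        reached = np.where(c[1:-1] >= 0.5 * steady[1:-1])[0]
        if len(reached):
            front.append(dx * (reached[-1] + 1))
            times.append(t)

# At early times, log(front) vs log(time) has slope ~0.5: the 50% point
# moves as sqrt(t), so patterning twice the distance takes four times as long.
slope = np.polyfit(np.log(times[1:20]), np.log(front[1:20]), 1)[0]
print(f"front-position exponent ~ {slope:.2f} (diffusive spreading: 0.5)")
```

Fitting only the early samples matters: once the profile approaches the steady-state line, the front levels off, exactly as described above.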
So he assumes simply that morphogen turns on, is produced starting at a certain time, and is produced at a constant level. And that's how you get these curves where the morphogen relaxes to some steady state at a certain time that's related to the square of distance. The problem is, why make that assumption? Why not make this assumption? That for the first, you know, 0.04 time units on this scale here, this system generates morphogen at seven times the rate that ultimately it needs to in order to get to steady state. And then when it hits that time point, it stops and drops to the lower rate. Well, then you would evolve according to that curve, right? So is there now any real physical limit to how far out you can pattern? Not if you are willing to do that, right? So the assumption that the system was simple, right, was equivalent to the assumption that biology would not try to do something like that. Because that kind of feels like cheating, right? But that's the point. Biology is cheating all the time and will cheat at every opportunity it has to cheat. Okay, that's what evolution accomplishes. If there's a path to cheat, to get to something by changing the rules, changing the assumptions, not changing the physics, but just changing the underlying assumptions, if that's possible, evolution could potentially select for it. So in some ways, our problem is with this thing called Occam's razor that we use a lot in physics, right? The idea that you shouldn't build a system more complicated than is necessary to explain the data you have. That's one way to phrase Occam's razor, right? You have a pile of data, you want the simplest system that fits the pile of data, and that's the one most likely to be correct. And the problem is, in biology that's just plain wrong. And that's scary, right? Because it's like, well, then how do you know how the system is built? And we'll talk about that a little bit at the end.
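The seven-times trick can be made concrete with the simplest possible kinetics, production plus first-order decay of a single species, ignoring the spatial problem entirely. This is a sketch with arbitrary numbers, not the lecture's actual simulation: dP/dt = A(t) - cP, steady state P* = A/c.

```python
# Minimal sketch (arbitrary units): production-and-decay kinetics
#   dP/dt = A(t) - c*P,  steady state P* = A/c.
c = 1.0          # decay rate constant
A_final = 1.0    # production rate needed for the desired steady state P* = 1
boost = 7.0      # transiently produce at 7x the final rate, then switch down

def time_to_reach(frac, A_schedule, dt=1e-4, t_max=20.0):
    """Integrate dP/dt = A(t) - c*P and return when P first hits frac*P*."""
    P, t = 0.0, 0.0
    target = frac * A_final / c
    while t < t_max:
        P += dt * (A_schedule(P, t) - c * P)
        t += dt
        if P >= target:
            return t
    return t_max

# Strategy 1: constant production from the start.
t_constant = time_to_reach(0.95, lambda P, t: A_final)

# Strategy 2: "overproduce, then switch" -- produce at boost*A_final until
# P reaches the goal, then drop to the rate that maintains the steady state.
t_boosted = time_to_reach(0.95, lambda P, t: boost * A_final
                          if P < 0.95 * A_final / c else A_final)

print(f"constant rate: {t_constant:.2f}; boosted then switched: {t_boosted:.2f}")
```

With constant production, reaching 95 percent of the goal takes about three decay time constants; producing transiently at seven times the rate gets there more than an order of magnitude faster, which is the point of the argument.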
But the point is you can't use parsimony in terms of fewer working parts as a justification for thinking the system is built the way you think it is. So here, this is what you should remember. Keep calm and expect control. Okay. So what was going on here is I was suggesting that the biology knows what it wants to get to, which is this goal here. And so the biology is going to go there as fast as it can and then shut things down. By the way, this strategy of getting to goals faster is not unknown to engineers. It's very common. It's actually even known to medicine. There are many drugs that have long half-lives or are absorbed slowly, and so it takes a long time for them to get into your system. So what does the doctor tell you? Take two pills on the first day, right? If you've ever taken these azithromycin pills, right? You always take a double dose. They even have a name for it, a loading dose, right? This is a very common strategy. Cells in theory could use a loading dose. So I've argued that the way you get these things is that somehow evolution selects for them, right? But then we have two issues, which are: first, is it really possible to select for this? And second, is it really a good idea, right? So, you know, what would you think would be some of the problems of selecting for this strategy as a way of spreading a morphogen gradient very far? Any ideas? What would you think would be some of the negative factors that would come of having the embryo use a strategy like this? Sorry? It may not be possible, right? So it may not be possible to go seven times higher. So there may be physical limits. Sorry? So you could overload those receptors. Is that, yeah. So you could, although you could say, well, fine, we don't care what they say until a later point. But yes. Yeah. Okay. So that's an important point, right?
It's like, if you're slightly off, if there are errors in what you do, right, you could way overshoot your goal. And then at what speed would you relax back to your intended goal? That same slow speed. Okay. So you'd have to be very careful. And in fact, you probably wouldn't want to shoot all the way to the goal. Maybe you'd want to shoot part way there. And so, for example, when you take a loading dose of a drug, you don't take so much as to go directly to the goal, because you might easily overshoot. So there's this issue of errors, right? So when you think about control, you have to think about what it is you're controlling for, right? What you're trying to achieve. You may be trying to achieve speed. You may be trying to achieve insensitivity to errors. And so engineers like to sort of list all the possible objectives that you might control for. And they include things like stability. You may have systems that are unstable. A classic example is cellular proliferation. Cells like to proliferate. If you don't stop them, you know, we will all be masses of cancer, right? So you need to somehow stabilize. Speed, we just talked about: how quickly can you get to your goal? In terms of uncertainties, we like to divide them into parametric robustness and disturbance rejection, two related concepts. Parametric robustness concerns what happens if one of the parameters of the system were wrong. So for example, in that previous strategy where you have to go seven times as fast and then level off, what if you were missing one copy of the gene for the morphogen, right? You'd have a huge effect from that, right? Because then you'd only go three and a half times as fast, presumably. So your parameters are often things like the rate constants of things, the rate of production of things, the numbers of copies of genes, and so on. For disturbance rejection, we usually think about disturbances from outside the system. And in biology, those are numerous.
The temperature is variable, the amount of energy coming into the organism is variable, the sizes of things are variable, and so on. Nowadays, we also sometimes lump in the intrinsic stochasticity of systems: the fact that, you know, the laws of mass action really are approximations of the true stochastic nature of things, and that's almost like adding a constitutive disturbance into everything. It's not quite identical, but it's similar. Sometimes you may want to have structural robustness, because in biology it's often not only the values of parameters that vary; sometimes the links between things may change, right? You might suddenly have a mutation that allows this protein to bind to that protein, and now the structure of the system is different, and you may want to be able to deal with that. At the same time, you don't always simply want systems that always do exactly the same thing. You may want to have adaptable systems, systems that can change their output depending upon, say, inputs from the environment, right? You may want to make a bigger structure or put more energy into a certain activity in a different environment, so you may need adaptability. And then two terms that you'll see that both relate to control. One you'll see more often, homeostasis, and the other one is homeorhesis. How many people have ever heard the word homeorhesis? Yeah, it's usually close to zero. It's actually a useful word. So homeostasis, right, is the idea of preserving things at some type of equilibrium or steady state in the face of all these other disturbances, right? But that's not the only thing you control. Sometimes you actually want to control the approach to steady state, right? You want to control the trajectory. There may not even be a steady state anywhere. It may just be a system undergoing a trajectory, and you want the trajectory to be right. And so technically that's not homeostasis, that's homeorhesis.
A good example, again alluding to airplanes: homeostasis is a system that keeps the plane flying at 10,000 meters, right? Homeorhesis is a system that gets the plane from London to Frankfurt and controls that trajectory. And obviously you want both things to occur. Okay. So sometimes in discussions of control, you hear the term optimal control, okay? And so there's a whole theory of control. There's a lot of control theory out there, and I'm going to avoid almost all of it today, because it's a bit arcane, and a lot of it is so based on linear systems that the applicability to biology, except in a, you know, sort of very general way, is not as great as it should be. So if you're looking for lots of transfer functions and things like that, I'm not going to talk about that today, but I'll talk in more of a conceptual way about control. But one of the things that comes out of control theory is the idea that you can calculate the optimum control given an objective. So for example, if your objective is to get a morphogen gradient spread to any given distance in any given interval of time, the optimal control strategy is to express as fast as you can up to that point and then shut it down. Well, not shut it down entirely, but down to the level that's appropriate for the steady state you want. So in engineering, they call this bang-bang control. That really is the word they use, because it's like bang, and then you switch to another strategy, bang. And you can show for a lot of things that bang-bang control is the optimal control strategy. And we'll talk a little bit about that on Friday. I'll talk a bit about a bang-bang system. But as someone mentioned, the problem with optimal control is that it presumes you know what all the workings of the system are, and you're aware of any disturbances, and so you can make a fine-tuned system that simply achieves the right control strategy.
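For the production-and-decay example, the bang-bang schedule can even be written down in closed form. A sketch under assumed simple kinetics (numbers are made up, and this is my reconstruction, not a slide from the lecture): with dP/dt = u(t) - cP and production bounded by u_max, the fastest way to the steady state P* is to run at u_max and switch down at the moment P reaches P*.

```python
import math

# Bang-bang sketch for dP/dt = u(t) - c*P with u bounded by u_max
# (hypothetical numbers). To hit the target steady state P* = u_ss / c as
# fast as possible: run flat-out at u_max, and when P reaches P*, switch
# ("bang") down to u_ss, which holds P there exactly.
c, u_max, u_ss = 1.0, 7.0, 1.0
P_star = u_ss / c

# Running at u_max from P = 0 gives P(t) = (u_max/c) * (1 - exp(-c*t)),
# so P reaches P_star at the switching time:
t_switch = -math.log(1.0 - c * P_star / u_max) / c

# Verify by simulation that switching at t_switch parks P at P_star.
P, t, dt = 0.0, 0.0, 1e-4
while t < 5.0:
    u = u_max if t < t_switch else u_ss
    P += dt * (u - c * P)
    t += dt

print(f"switch at t = {t_switch:.3f}; P(5.0) = {P:.4f} (target {P_star})")
```

The switch happens at t = ln(7/6)/c, roughly 0.15 time constants, versus the several time constants a constant-rate system needs; after the switch, the lower rate exactly maintains the steady state.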
And the problem with fine-tuning a system is that biology operates in a very, very noisy world. And so you also want a system that can deal with the unexpected, right, and the unmodeled problems. And so for that you really need feedback control. And that's the kind of control that most people focus on in biology. And feedback control is of course control in which the outputs of a system themselves influence the control, right? So in effect people like to draw diagrams, you know, you have some kind of machinery. The machinery generates some kind of performance. The performance is not the same as a phenotype, right? You'll notice I'm not using the word phenotype. The phenotype is what you observe. The performance is what achieves some objective, some goal for the organism. And the performance is subject to perturbations, and maybe the machinery itself is subject to perturbations. So somehow you have to actually be measuring the performance and determining whether it's suboptimal. And so some measure of the difference between performance and optimal performance is used to generate a control signal, some type of information that flows back into the system and in a sense neutralizes the disturbances, okay? So that's the basic idea behind feedback control. And of course feedback control goes way back before biologists got into the game, and most of the control theory we have was worked out by electrical engineers, who realized early on the importance of control. So this is from a lovely review from Mustafa Khammash that came out this year on robustness and control in biology. But Mustafa's an engineer, and so he begins by talking about things like simple amplifiers. And this is an actual schematic for a common amplifier circuit that's used in a lot of machinery today. And so he points out that you have a model here where the output y depends upon the input here, u. The potential across here is giving you the output over here.
And without this feedback loop, you would have this very simple system where the steady state of y is just proportional to the input u here, and y turns over, or decays, at some rate c. So basically, without any control you get a system whose steady state depends upon all the internal parameters and all the inputs and outputs of the system. And the relationship between the steady state for y and, in this case, the input u, this proportionality constant, is what the engineers call the gain, or the steady-state gain, of the system. So an amplifier's job typically would be to have a high value of the gain, so that y is much bigger than u. In this case it's going to be negative feedback. You see this negative sign here: in a feedback amplifier, u has subtracted from it something that depends on y itself; that's the negative feedback. And because of these two resistors you get this ratio here, beta, which multiplies y. And so when you calculate the steady state for that, you see that y also depends upon u and beta, but if beta is very large compared with, in this case, 1 over a, you get a steady-state result that's essentially u over beta. In other words, the output of the system now becomes insensitive to two of its parameters, the same two parameters that the system was very, very sensitive to before. And so that's how you get a feedback amplifier that's insensitive to the design of all the circuitry in here. Of course, it is sensitive to these two resistors, that's what gives you beta, but it's very easy to build a reliable resistor and much harder to build a reliable amplifier. And so the net result is that you trade off one for the other and you get a system that's very, very robust. Now, that's how engineers do it with amplifiers and circuits and so on; it's very difficult to find a direct analogy between that and what goes on in biology.
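The amplifier algebra can be sketched numerically. I'm assuming the model implied by the description of the slide: open loop dy/dt = a*u - c*y (steady state y = (a/c)*u), and with negative feedback the input becomes u - beta*y, so y = a*u/(c + a*beta). All the numbers here are made up for illustration.

```python
# Assumed amplifier models (symbols a, c, beta as in the slide description;
# numbers are made up). Open loop: dy/dt = a*u - c*y, steady state (a/c)*u.
# Feedback: dy/dt = a*(u - beta*y) - c*y, steady state a*u / (c + a*beta).

u = 1.0
beta = 0.1

def open_loop(a, c):
    return a * u / c

def closed_loop(a, c):
    return a * u / (c + a * beta)

# Perturb the "messy" internal gain a by a factor of 2:
y1, y2 = open_loop(a=1000.0, c=1.0), open_loop(a=2000.0, c=1.0)
z1, z2 = closed_loop(a=1000.0, c=1.0), closed_loop(a=2000.0, c=1.0)

print(f"open loop:   doubling a scales the output by {y2 / y1:.3f}x")
print(f"closed loop: doubling a scales the output by {z2 / z1:.5f}x")
# Open loop: doubling a doubles y. Closed loop: y is approximately u/beta,
# and doubling a barely moves it; the sensitivity has been shifted onto beta.
```

The design choice is exactly the trade-off described below: the output is now pinned by beta (the reliable resistors) instead of a and c (the messy amplifier internals).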
But a reasonably good analogy is to think about feedback control in gene regulation. Okay, so we'll essentially generate the exact same scenario here, but forgetting about electronics and resistors and thinking about gene regulation. And here you can imagine that if you're neglecting feedback, the typical way you think about gene regulation is that the production of some protein (of course I'm conflating the mRNA and the translation as all one step, but you can make multiple equations if you like) depends on A, some production rate. Maybe that relates to the number of gene copies, how methylated, you know, the histones are, who knows what's under A, but that tells you about the production rate. And then everything turns over in biology, nothing lasts forever, so P has to turn over at some rate, and we'll have a rate constant for P destruction, which is C. Right, and so you can see right away that the system reaches a steady state; the steady state is A over C, right, which we'll call alpha. All right, and of course you can see that this is not a robust steady state: if the capacity to produce changes, if the rate of degradation changes, it changes P, right? So this is a very sensitive, fine-tuned system.
Now what happens if you put a negative feedback loop in here? In this case we could make A go down when P goes up. You could put in another kind of negative feedback loop, you could make C go up when P goes up, right? But we'll just consider this one: A goes down when P goes up. So now A is going to be modified by something that's a function of P, and in biology we very often make that function a Hill function. Not because Hill functions are the greatest thing in the world, but Hill functions are very useful insofar as, you know, they describe curves that saturate. That's very biological: things don't go on forever, right? There's always some saturability. They have some number that characterizes kind of where their operating range is, and they have a slope, okay? So you can have Hill functions with different slopes, and the slope is usually in the Hill coefficient, right? So in this case we have a declining Hill function, right? So that one starts here, it goes down to zero, and this is related to the slope and this is related to that. So it's just a very convenient way of encapsulating different forms of feedback functions. Most of the results that come out of studies of feedback in biology are rather insensitive to the actual form, but having some form at least enables you to get a few analytical results. So for example, in this case we see that now we have a steady state. We can again lump together A and C as alpha, and we can solve for that steady state, because the steady state appears on both sides here; at least when n is one, we can solve it to this. And you can see right away that as long as b, which is the strength of the negative feedback, is large, then P is roughly proportional to the square root of alpha, and inversely proportional to the square root of b. So that's an improvement, because previously P was just proportional to alpha; now we've made P less sensitive to alpha. Now, what if we change the steepness of this relationship, what some people call
the aggressiveness of the feedback? So for example, we make n equal to two. Well, now P is going to have this form. Well, that's getting a bit ugly, right? So is this better, is this worse? Well, one thing you could do is take the limit as b gets large, and you can actually find that P goes to about the cube root of alpha. Okay, so that's getting better, and you might imagine that maybe when we make n equal to three it might go to the fourth root of alpha. But the question is, how are we going to get those results when these things are so messy? And so we need a formalism for talking about robustness. Yes? Okay, so P is proportional to the square root of alpha, and so that means now alpha has to change by a factor of four in order for P to change by a factor of two, whereas before alpha only needed to change by a factor of two in order for P to change by a factor of two. In fact, that's the concept we need to formalize, and the way we formalize it, the way engineers typically formalize it, is through what they call a sensitivity coefficient. Sometimes people write that with an S, sometimes people write it with a sigma; don't worry about that. But the sensitivity coefficient is simply the relative derivative, or basically the slope of a log-log plot, okay? It's the fold change in one quantity that comes from a fold change in another. So for example, if P is proportional to alpha, then the sensitivity of P to alpha is one, okay? Because a one-fold change in alpha gives a one-fold change in P. If P varies as the square root of alpha, what's the sensitivity coefficient? One half, right. If P goes as the square of alpha, two. So it's similar to the notion of the order of an interaction, right? But in engineering, people talk about a sensitivity coefficient. So a sensitivity coefficient of one describes a linear relationship; below one, a sublinear, lower-order relationship. And very commonly, and we'll talk about this a little
more tomorrow, engineers, when they talk about building robust systems, talk about getting sensitivity coefficients down below around point three, which means that a two-fold change in alpha would cause only about a 23 percent change in P, okay? Sorry? So, isn't the rate of production itself going to be sensitive to lots of biological parameters; is that just put into the system? Right, right. Okay, so this is something I'm going to get to in a minute. But you'll notice here, yes, P is now sensitive to the square root of alpha; that's an improvement, right? But P is also inversely sensitive to the square root of b, whereas over here there was no b, right? So you paid a price for achieving robustness. You achieved robustness to alpha, but you paid the price that now you have another parameter that you could be sensitive to, another way in which the system could be perturbed. And you see that also in the engineering case of the amplifier, right? You can make the system more robust, but only by putting in these resistors here and the feedback loop, and they have values, right? And so now the system is very robust to the internal values here, but it's completely non-robust to the values inside the controller, okay? And that's typical. That's unavoidable. Control involves trade-offs: you build a system that's less sensitive to one thing, but you pay the price by making it more sensitive to something that wasn't there before. Remember, in the case of the amplifier this was not a problem, because resistors are more reliable parts than amplifiers. They keep their properties for a very, very long time, and you can measure them and make sure they're fine before you put them in; the amplifier is much messier. And so control involves moving sensitivities around and putting them where you want them, right? So if you ever read a paper that says how to, in fact I even went to a meeting once titled how
do we maximize robustness, and everybody who knew anything about robustness laughed at the title, because there's no such thing as maximizing robustness. You can only maximize robustness to certain things, at the price of generating fragility, the opposite of robustness, to other things. Exactly, yeah, exactly. So if you make B something you can rely on, then you don't have to worry about it. An example: suppose the perturbations that are causing uncertainty in your system come from stochastic fluctuations, because you have low copy numbers of something. You want to control a process that depends on a low-copy-number enzyme or transcription factor. You could build a negative feedback loop, and in that loop could be a high-copy-number controller B, which would be completely immune to those kinds of stochastic fluctuations. There would be other perturbations you'd have to worry about, but you wouldn't have to worry about those.

So sensitivity coefficients give us a way to formalize this idea of robustness: we're trying to reduce sensitivity coefficients. A sensitivity coefficient is defined as the derivative with respect to the parameter, times the parameter over the value of the thing you're taking the derivative of; that's what's here, the derivative of A with respect to b, times b over A. With that definition you can actually get an analytical expression for the sensitivity, and you can show that as this single parameter here goes up, the sensitivity to α goes down and approaches 0.5, square-root sensitivity, which is exactly what we were talking about. Now if you go to n = 2, you get this uglier form, and you can indeed calculate the sensitivity for that; it's really ugly, and no help. But here's how sensitivities can be a help: you can very often use sensitivity analysis to derive sensitivities without having to get an analytical solution for the
thing whose sensitivity you're trying to establish. Here's an example; I'll take you through it briefly. Remember, the sensitivity is related to the derivative of the steady state with respect to α. But say you can't solve for the steady state, for example because the Hill coefficient is given as a variable n whose value you don't know. There's no analytical solution in that case; you'd have to put in some value, and it gets very difficult when that value gets large. So let's take the general case, where the exponent is just n. We can take the derivative of the equation itself, both sides, with respect to α, and we get this form. To turn that derivative into a sensitivity, we simply multiply it by α over the steady-state value of p, and we can simplify; that looks a little simpler. Now the trick is to realize that we can use the original system to simplify further: for example, we can solve that system for b, stick that in there, and this simplifies even further, giving this result. It's an implicit result for the sensitivity, implicit because it contains the steady-state value of p, which a priori you don't know from the system. For biology, though, that's actually a good thing, because the values of the steady states are often known much better than the values of the internal parameters. So what is this telling us? That the sensitivity to α will vary as 1/(1 + n), as long as p is very much below α. Remember, α is related to the gain of the feedback, how strong it is, so p is always going to be less than α; α is what's driving p down. If it drives p down a lot, then this term essentially vanishes and you get just 1/(1 + n). And you can do the same trick for all three sensitivities in the system, the sensitivity to α, to β, and to n itself. You can get these nice implicit
expressions, and you can show that whenever the feedback is strong, both the α and β sensitivities vary as 1/(1 + n), which is what we expected from that pattern of one, one-half, one-third. But you can also now calculate the sensitivity to n itself, to how steep the feedback is, and one of the things you see here is, again, this notion of trade-offs. First of all, there's the trade-off that you picked up a sensitivity to b. And of course you can make both of these sensitivities very low by making n very high, but then what happens here? This term simply becomes log p, so now the system is very sensitive to p; if you want any output at all, you're going to have a substantial sensitivity here. So again you have that notion of trade-offs. And trade-offs are not unusual; you see them in engineering all the time. If you want to build a really fuel-efficient car, it's not going to be very crash-tolerant. If you want to build a really crash-tolerant car, like people like to buy in California, where the freeways move at 75 miles an hour, it's going to look like that, and it's not going to be very fuel-efficient. If you want to build an animal that retains heat really well, it's not going to fly, and if you want to build an animal that flies really well, it's going to shed heat like crazy. There are thousands of these trade-offs in biology, and they all arise out of the need for control.

So up till now I've talked about parametric robustness, how to be robust to parameters, using local sensitivity analysis, local meaning we vary one thing at a time. That's not the only kind of sensitivity analysis one can do; one could also look at the conjoint variation of all the variables, but I'm not going to get into that today. What about some of those other objectives? We started off talking about speed. What does feedback control do to speed?
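Since the argument above is all about the size of sensitivity coefficients, here is a quick numerical check. This is a minimal sketch with my own illustrative model and parameter values, not necessarily the exact system on the slides: a product p governed by dp/dt = alpha/(1 + (p/K)^n) - c*p, with the steady state found by bisection and the logarithmic sensitivity to alpha estimated by a finite difference. With strong feedback (steady-state p much larger than K), the sensitivity comes out close to 1/(1 + n).

```python
# Numerical check of the 1/(1+n) sensitivity result for a gene circuit
# with Hill-type negative feedback:
#     dp/dt = alpha / (1 + (p/K)**n) - c*p
# Model form and parameter names are illustrative assumptions.

def steady_state(alpha, K, n, c, lo=0.0, hi=None):
    """Solve alpha/(1+(p/K)**n) = c*p for p by bisection."""
    if hi is None:
        hi = alpha / c                      # open-loop value bounds the answer
    f = lambda p: alpha / (1.0 + (p / K) ** n) - c * p
    for _ in range(200):                    # f is monotone decreasing in p
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sensitivity(param, base, dx=1e-6):
    """Logarithmic sensitivity S = d(ln p*)/d(ln theta), by finite difference."""
    p0 = steady_state(**base)
    bumped = dict(base, **{param: base[param] * (1 + dx)})
    p1 = steady_state(**bumped)
    return (p1 - p0) / p0 / dx

base = dict(alpha=1000.0, K=1.0, n=2, c=1.0)   # strong feedback: p* >> K
S_alpha = sensitivity('alpha', base)
print(S_alpha)   # close to 1/(1+n) = 1/3 in the strong-feedback regime
```

Changing n in `base` moves the answer accordingly; with n = 1 it lands near 1/2, the square-root sensitivity mentioned earlier.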
So if you remember, again, this very simple system of a gene circuit without feedback, where the product is made at a constant rate and degraded with a constant probability: you can easily integrate this and find that the dynamic solution is just α times (1 − e^(−ct)). All the dynamics are captured in c; c is the relaxation rate constant for the system. Now what happens when you do it with feedback? We'll do the same thing with negative feedback on A, using our Hill function with b and n in there, and right away you can't solve that differential equation analytically. But you can of course plot it. This is what it looks like with n = 2 but varying b, and here with b = 8 and varying n. And what do you see? Negative feedback makes things faster: it speeds the approach of the system to its steady state. Now, this may seem counter-intuitive. How can being negative make something happen faster, when you're reducing the amount of production? The trick is that these curves are plotted not as absolute amounts but as the percentage of the way to steady state. Obviously, if you crank up the negative feedback you'll have a lower steady state, but you'll get there sooner. That makes sense: as you approach steady state and the feedback kicks in, you level off sooner. And so this is an extremely common reason for using negative feedback in engineering: to shorten the time constants associated with processes, to make things happen faster. The price you pay is that the amplification goes down; you don't get as much protein out as you would open loop, without feedback control, but you get there sooner, and usually adjusting the amount of something is easier than adjusting the time constant. That's one of the reasons why negative feedback
control is often used for that purpose. That's right, just on the dynamics. No, it does not: if you increase, not α per se, but A, the production rate, you'll affect the steady state but you won't affect the dynamics, so you can keep the two separate from each other.

Okay, now another control objective we talked about is rejecting disturbances. There are a lot of ways to look at disturbance rejection; this is one of the simplest, not always the best, but one engineers like: stick a sine wave into the system as an added input, and see how it comes out, how it changes. So here we're just going to put a little sine wave into this system and solve that equation, which should have a steady state of one because we've normalized it that way, and see how the sine wave changes the output. One of the things you can see is that, often, whatever produces parametric robustness also produces disturbance rejection, in the sense that as n goes up (n = 0 means no feedback) you get less and less perturbation due to the sine wave. It would be much the same if I put a random noise term here, or if I built the intrinsic stochasticity into the system. But my point in showing this is that disturbances have a temporal quality to them; as opposed to a parameter change, disturbances change in time. And you can decompose any disturbance into a series of sine waves, that's just a Fourier transformation, and the point is that the amount of attenuation due to feedback depends on the frequency of the disturbance. You can see that for very high-frequency sine waves, feedback has less of an effect than for lower-frequency ones. The reason is that even the system without feedback is filtering out the high-frequency disturbances, so
you don't really need the feedback so much for those. When you're interested in disturbance rejection, there's a formalism called a Bode plot (this is not the Bode plot for that system; it's for a different one we'll talk about on Friday) in which you plot the sensitivity to disturbances arriving at different frequencies. You can often see that there are many systems in which the ability to reject a disturbance, that is, to keep sensitivities below one, only holds at certain frequencies, and it may even flip around, so that the system actually amplifies disturbances at other frequencies. We'll talk about an example of that on Friday.

Okay, then I talked about this issue of adaptability, another objective you might try to achieve with control. This one is a little more subtle, but again think about the system with simple negative feedback. Without feedback, the steady state of the system is just α, that is, A over c, and the dynamics are this. So it's not possible to change the dynamics of this system without changing A, if you want to keep the steady state the same. Let's say evolution needed to make this happen faster: perhaps you evolved an embryo that had to develop in fewer days, so you needed to speed up this time constant. The problem is you'd have to make changes at two different genes, the gene that sets the time constant and the gene that sets the production level, and you'd have to match them to each other, which might be tricky. And there might be other consequences of changing A, the production rate; maybe A is involved in producing something else, and so on. So you end up with a coupling between parameters, in terms of which ones you have to adjust together. But in the feedback system you have an extra degree of freedom, provided by the feedback, that allows you, for example, to generate these two systems that have the same steady
state but completely different dynamics, with exactly the same A. The reason is that you have b to play with: you can adjust b as a way of avoiding having to adjust A. There's no free lunch here, you still have to have some things adjustable, but maybe b in this case is related to the environmental change that caused you to need to adjust c in the first place. The point is that you get to take things that are coupled, uncouple them, and move the coupling somewhere else, where it can relate to something potentially useful. That's a way in which feedback creates what some people call design degrees of freedom: the ability to change the design of something.

Okay, now let me talk about one limitation of the type of feedback we've been discussing, which engineers call proportional feedback. Really it means any system in which the feedback depends simply on the current value of the system. A better term would have been Markovian feedback: it doesn't depend on the past history of the system, doesn't depend on the future, it just depends on the current value, and that's what you feed back into the system. And there's an intrinsic problem with all feedback of that type. You can see it right here, in the sensitivities associated with this simple system: notice that the only way to drive those sensitivities to zero would be to have n be infinite, which is not really possible; you'd need a perfect switch of a Hill function. So this is a categorical problem with proportional feedback, and I like to illustrate it with a very personal story, which is what happens every morning when you get on the scale and you're not happy with the weight you see (some of us have that problem). We have a goal in mind, we want to be at this particular weight, and based on how far away we are from that
goal, we change our behavior: we eat less, we eat healthier food, and our weight comes down. But will we ever reach our goal? What's the experience in the real world? We never reach our goal, and the reason is that as we get closer and closer to the goal, we have less and less motivation to get there. We have a negative feedback controller in place, but it's proportional to the difference between the present and the desired, and consequently, as you get closer to the desired value, the feedback gets weaker and weaker. Just as in the previous example, at steady state there is no way proportional feedback can drive the error signal to zero, because if the error were zero there'd be no feedback left. So the goal is never reached. The question, then, is how you would modify this scheme so that you actually do reach the goal. I came up with a good example, and anybody who wants to try it, I encourage you to: keep a record, and for every day that you're not at your goal, and for every kilogram that you're away from your goal, you pay me ten dollars. You just keep paying me ten dollars per kilogram per day. What do you think is going to happen? Either you're going to go broke, or you're going to get to your goal. The difference is that the error signal now is not your motivation about what you see on the scale; the error signal is related to your bank account, which is dropping, and continuing to drop, and will always drop as long as you are not exactly at your goal. So what is the bank account doing? The bank account is integrating the error over time: each day you're not at your goal, it goes down and down and down. And this is what people call integral negative feedback control: instead of just using the present value of the error, you use the time history of the
values of the errors, that is, the integral of those values, as your feedback signal. And integral control is extremely important in biology. The classic example, or a classic example, since as you'll see there are many, is bacterial chemotaxis. Has anybody covered bacterial chemotaxis yet in this course? Usually you get it in courses that blend biology and physics, because it's a really great example of physics in biology. I won't be able to do it justice here, except to say that the way bacteria swim up a gradient, and they can do this reliably with just a one percent change in concentration over, I forget what the relevant distance is here, but it's a very shallow gradient, is based on these flagella, which whip around and can be in either of two states. When they rotate counterclockwise, they all bundle together and form a tail that swims the bacterium forward; when they rotate clockwise, they fly apart and basically make the bacterium stop and go nowhere. The first are called runs and the second tumbles. The way bacteria chemotax is that they move randomly, and if there has been an increase in the concentration of the thing they're sensing, tumbling is suppressed, so they keep moving; whereas if there hasn't been an increase, or there's been a decrease, tumbling increases, meaning they stop, randomly switch direction, and go somewhere else. So essentially they're taking a random walk, but one biased by these periodic evaluations: heading in the right direction, they run longer; heading in the wrong direction, they run shorter; and consequently the whole population swims up the gradient. Now, to do that, you must be able to detect the difference in
concentration between where you are now, or where you were a little while ago, and where you'll be a short time from now, which might be only a one percent difference. Nevertheless, the bacterium can do this over orders of magnitude of absolute concentration of the chemoattractant. The way it does so is with what's called an adaptive circuit. I think you've probably seen some adaptive circuits already, but an adaptive circuit is one that, in response to an input, gives a signal but then declines right back down to where it was before. This is an example of what happens if you take a whole flask of E. coli, dump in a one micromolar concentration of chemoattractant, and measure something we'll call system activity (we'll get to what that is in a moment): there's a transient change, and then it adapts back to baseline; take the attractant away, and there's a transient change in the other direction, again back to baseline. Three micromolar, five micromolar, seven micromolar on top of that, and you'll continue to get this behavior over many orders of magnitude. So the system has a circuit that senses, responds, and then adapts right back down to baseline, and in that sense it can detect small changes over a very wide range of magnitudes.

The way it works is a little complicated, but basically there's a receptor that recognizes the ligand. The receptor has some things stuck to it, but we'll call the whole assembly just E, for an enzymatically active receptor. Down here is the motor that actually puts things into the tumbling mode, the clockwise rotation. The active receptor gives you phosphorylation of this component, which drives tumbling, and when the ligand binds the receptor, it decreases the receptor's
activity, so it decreases the tumbling, and the cell keeps going in the direction it was already heading. Now, where the feedback comes in is that the activity of the receptor is also regulated by this methylase and this demethylase, which put a methyl group on, or take a methyl group off, the receptor; and in addition, the receptor, via phosphorylation, affects the activity of the demethylase. Only the methylated receptor is capable of doing any of these things. So if you redraw this as a very simple schematic diagram, you have the methylated receptor, which controls the output, but the methylated receptor also controls its own demethylation (there's your negative feedback loop), plus the methylation of the receptor, and the input changes the amount of activity associated with this receptor. If we put that in the form of equations, what Barkai and Leibler pointed out a number of years ago is that under two very special conditions, the equation for the rate of change of methylated receptor takes an interesting form. The two special conditions are the following. First, the rate of demethylation of the receptors (going from here to here) depends on the activity level of the receptors: not just on how many receptors there are, but on how many are active, which is a function of how much ligand, how much chemoattractant, is present. The chemoattractant enters through here, and in a lot of cases this is probably a big number, so the denominator is probably negligible; it's really just this term multiplying the rate of demethylation of the receptors. Second, the methylation of receptors operates at saturation: it's done by an enzyme, but there isn't enough of the enzyme, so methylation runs at a constant rate, independent of how much receptor you
have. What you get is a differential equation for the methylated receptor, the thing that actually controls how much running and tumbling you do, whose steady state, when you calculate it, is independent of the chemoattractant. The chemoattractant enters by affecting a, but a at steady state depends only on the parameters of the system. Consequently you get a system that always returns to the same steady state, it has only one, and yet is affected by a dynamically. In a sense, because of the saturation of the methylase and the activity-dependence of the demethylase, the rate of change of E depends only on a, and not on E itself; as long as a is not at its steady-state value, this can't be zero, so E will keep changing indefinitely. In other words, this is integral feedback, because the control signal will always be changing except when the system is at steady state.

Now, this contrasts with most biological negative feedback, because most things turn over at rates proportional to their own concentration. Notice that the methylated receptor doesn't appear at all in here, and that's the key for this type of integral negative feedback: you need things that change at rates independent of themselves. Most biological systems are instead like this, where things turn over and you get what's called leaky integration, like filling a bathtub with an open drain: it only fills to a certain level, and then it stops. For a short period of time even these systems, any biological system, will integrate a signal, but then they level off; they relax to a steady state. The trick for integral feedback control is to get a system that doesn't relax to a steady state as long as there's a continual error signal coming into it. So let me just mention a few of the known integral negative feedback systems in biology. There are a
lot of them, and the number keeps growing, because integral feedback is really the only control scheme we know of that achieves perfect homeostasis, perfect adaptation, or set-point control: you can adapt a system so that it achieves a set point. For example, the level of calcium in your blood is set by a hormone called parathyroid hormone. If the level of calcium in your blood changes, that affects the level of parathyroid hormone, which affects the level of vitamin D, and vitamin D affects the mobilization of calcium out of your bones. So if you're calcium-starved, for example, you'll start pulling calcium out of your bones to keep your blood calcium very close to the magic number of about 10 milligrams per hundred milliliters; most people hold their blood calcium within 5 or 10 percent of that level. Now, it's been observed that the level of parathyroid hormone is directly related to the difference between the calcium in your blood and some desired set point; exactly how that's implemented isn't really known. But the result is that vitamin D, which is the controller here, is produced at a rate that depends on parathyroid hormone, and its level also depends on vitamin D itself, because vitamin D turns over. Vitamin D, however, is thought to be very stable, turning over very slowly, so under most conditions this term is negligible. That means it isn't true integral feedback; it's called leaky integral feedback. But over short time scales it behaves like just the first term of this equation, which essentially says that vitamin D will go up as long as there is any difference between calcium and its set point. So that's an example of leaky integral negative feedback, where you get integral feedback essentially for free by working on short time scales.

Another example is something proposed by Boris Shraiman, who's here in
the audience, back in 2005: mechanical feedback on cell growth. You may have heard a little about this already, the idea that when cells are under compression they grow more slowly, and when they're stretched they grow more quickly; there's a lot of biological data suggesting this has to do with pressure-sensitive activation of transcription factors. As Boris pointed out, this is essentially an integral feedback system, if we assume the cells respond to pressure, because the pressure is integrating the difference between the local growth rate and the average growth rate. If a tissue is growing isotropically, there's no increase in pressure; but if a small piece of it grows faster than everything around it, pressure will increase there, and as long as that growth rate stays above average, pressure will keep increasing indefinitely. There's the integrator. But it's a leaky integrator, because pressure does eventually relax in biological systems: cells under pressure eventually just become smaller, they lose material, and they relieve the pressure. So there is a relaxation term; it's a leaky integrator.

Another example is something Frank Jülicher referred to last week, and that we'll talk about a little more tomorrow: morphogen gradient scaling. There's this observed phenomenon that as tissues grow, the morphogen gradients on them often scale to match the tissue, and there's a lovely feedback mechanism proposed by Naama Barkai a few years ago in which the morphogen represses an expander molecule at the far boundary of the gradient; the expander diffuses back into the gradient and expands it, producing more repression. There's the negative feedback loop: the expander expands the gradient, which then represses the expander. And as part of their model, they assumed that the turnover of the expander was essentially zero,
right, at least zero on the time scales over which this happens. Consequently, the only steady state the system can reach is the one in which the expander is completely repressed, and that gives you perfect scaling of these morphogen gradients.

Another example of integral feedback control is osmoregulation in yeast. This is a lovely system. If you put cells that have cell walls into very high salt, water starts coming out of the cells; if you put them into very low salt, water starts flowing in, just by free diffusion. Consequently the cell interior will either expand, building up a high turgor pressure, or shrink, and that's not good; the yeast wants to counteract it. It has receptors that sense the turgor pressure, and it uses them to activate a MAP kinase pathway that phosphorylates a transcription factor called Hog1. That transcription factor causes the cell to produce glycerol, lots and lots of it, which then acts as an osmolyte to suck water back into the cell. So it's a nice negative feedback system, but as stated this says nothing about whether it's an integral feedback system or not. This figure shows how you can visualize Hog1 going in and out of the nucleus, and what you see is that in response to a change it goes in but then adapts back to its original level, and that suggests it is an integral negative feedback system. Again, what you have here is that the production of active Hog1, the key controller, depends on the difference between desired and actual pressure, along with the turnover of Hog1, which by itself should make it not an integral controller; and the production of glycerol, which actually counteracts the pressure, depends on Hog1 along with glycerol's own turnover. But it's thought that in this system the pressure also simultaneously suppresses the turnover of the things that
make glycerol, and that would make this an integral-like system, because now this part becomes an integrator of the error. We'll see in a few more slides why we think that's the case.

So those are all examples in which integral control is achieved by using negative feedback but suppressing the turnover of the thing doing the feeding back, so that it integrates the error. Again, what's the big limitation of that? Time. If you want to compensate for a long-lasting disturbance, you'd have to suppress the turnover of something for a very long time, which is very difficult in some systems, and in growing systems impossible, because turnover includes things like dilution: you can't grow and not dilute. So there are serious issues with that approach. It's not the way bacterial chemotaxis works; chemotaxis works by using enzymes that operate at saturation, which is another way of making the feedback independent of the thing being measured. For example, take plants, which also do homeostasis of their nitrogen. You have these efflux transporters, and they depend on the level of nitrogen inside the cell: too much nitrogen activates the efflux transporters, and nitrogen goes out. That's a standard proportional negative feedback loop, if the efflux transporters turn over normally. But if they operate near saturation, so that the efflux transporter concentration is much higher than the Km, then this term cancels, and you get an expression in which the rate of change of the efflux transporters is insensitive to the state of the efflux transporters themselves; it's sensitive only to the nitrate in the cell. So again you get integral negative feedback: the state of this will keep changing as long as there's any error in that. Okay, so
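The saturated-transporter idea is easy to see in simulation. Here is a minimal sketch with made-up parameter values, an illustration of the principle rather than a model of a real plant cell: nitrate N is exported by a transporter T, T is produced in proportion to N, and T is degraded by a saturated enzyme, so its removal rate is nearly the constant V. Then dT/dt is approximately g*N − V, meaning T integrates the error N − V/g, and N adapts to the set point V/g whatever the influx is, up to a small leak of order Km/T.

```python
# Sketch of integral feedback via a saturated degradation step.
# All parameter values here are illustrative assumptions.

def simulate(influx, t_end=400.0, dt=0.01):
    """Forward-Euler integration; returns the final (N, T)."""
    g, k, V, Km = 1.0, 1.0, 2.0, 0.01      # Km << T: enzyme near saturation
    N, T = 1.0, 1.0
    steps = int(t_end / dt)
    for i in range(steps):
        d = influx(i * dt)
        dN = d - k * T * N                 # nitrate influx minus export
        dT = g * N - V * T / (Km + T)      # production minus saturated removal
        N += dt * dN
        T += dt * dT
    return N, T

# Steady state under a constant influx...
N1, _ = simulate(lambda t: 3.0)
# ...and under a step disturbance (influx doubles halfway through):
N2, _ = simulate(lambda t: 3.0 if t < 200.0 else 6.0)

print(N1, N2)   # both settle near the set point V/g = 2.0
```

If you replace the saturated degradation term with an ordinary first-order one (say delta*T), this reverts to proportional feedback, and the steady-state N then shifts when the influx changes; that contrast is exactly why running the enzyme at saturation matters.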
that's another way of getting there. The last mechanism I'll talk about is this one, because it's very cute; it's from a very recent paper by Mustafa Khammash. He pointed out another negative feedback mechanism that achieves perfect adaptation and integral control without technically being based on any of these. The idea is that bacteria have this factor called sigma 70, which binds RNA polymerase, and together they regulate a lot of genes; you want to control the level of this, or at least the amount of it that's activating genes, very tightly. It also drives production of something called anti-sigma 70, which binds sigma 70, and the two essentially annihilate each other. We don't mean annihilate in the antimatter sense of the word, but that they bind each other and either stay permanently bound or target each other for destruction. You can't come back from it; it's a one-way reaction, and the one-way reaction is the key to getting integral negative feedback. You can visualize it this way, with the parameters here; the internal workings of the system are really irrelevant. There's just some path from x1 to x2, which then comes back and makes something that annihilates the thing that turns on x1. And here's where your disturbances might come in: suppose you had a disturbance to sigma 70. How would you use feedback to make sure that the production of this, which mirrors the production of that, is held constant? We can write this out as very simple equations: each species turns over, this message makes that protein, and x1 leads to the production of x2. You write those as differential equations, four equations, one for each species, and you can easily solve them for the steady state. And one of the things you notice is that the steady state of
all those equations has one equation, the one for the output we're actually trying to control, that doesn't have the disturbance in it. It's completely independent: regardless of how much disturbance you have, the output will always go to a set point that depends only on the parameters. That's the sign that you have integral feedback control: the controlled species reaches a set point, while all the other variables don't; they depend on the parameters and the disturbance. Now, why does it reach a set point? One way to think about it is that what you're trying to control here is the difference between z1 and z2; that's the thing you're trying to drive to zero. You're trying to make z1 and z2 match, and that's what this annihilation does: when they don't match, the system just generates more of one until they do. If you subtract the two equations from each other, the annihilation terms cancel, and you get that z1 minus z2 varies as mu minus theta times x2. Put in anything close to the steady-state value of x2 and you see that goes to zero; so when x2 reaches its steady state, you can drive that difference to zero. But essentially that difference, whose rate of change doesn't depend on either of the z's themselves, will integrate the error over time, precisely because it doesn't depend on itself. OK, so coming close to the end here, let me talk a little bit about trade-offs and integral control. Feedback, in addition to giving rise to trade-offs where you gain robustness to one thing at the expense of robustness to another, also has dynamic trade-offs: you gain speed, but the price for speed is often instability, and integral feedback, which uses time-dependent information, is particularly prone to triggering those kinds of instabilities. As one example, John Doyle had a recent paper on glycolytic oscillations that talks about the inability to avoid these instabilities if
you want to have robustly efficient circuits. But here's a good example, from that antithetic integral feedback, of just how easily the feedback can be made unstable. We're just going to vary some of these parameters. You see that as I vary the disturbance mu, I still come back to the same steady state; that's showing you how good the system is at being an integral feedback controller, giving you set-point control. But now I'm going to change some of the internal parameters of the system, like eta and k, so that I come back to a similar steady state, and then I start playing with mu, and look what happens: the system can go into permanent oscillations. Now, the interesting thing about Khammash's paper on this particular system is that they point out that the noise in such a system does a fantastic job of eliminating the oscillations, because the oscillations depend upon a coupling that noise can easily destroy. So this system actually works better as a controller in a noisy setting than in a deterministic one. That's just an interesting feature of that controller, but the point to remember is that it's very easy to get instability when you use integral feedback. OK, let me end with a few comments that are a little more on the philosophical side, but I think they're very important when you think about how to incorporate ideas of control into the way you approach biological systems. This is the classic picture of the blind monks examining an elephant, which is exactly how I think of how we work in biology: we only feel a certain part of a system; we don't know what's really under there. Engineers call that the identifiability problem. What I've plotted here for you is a gene circuit without feedback and a gene circuit with feedback, with different sets of parameters, showing what they give, and no biologist collecting data would ever
be able to tell these two curves apart. They're so close to each other that if you applied Occam's razor to data like this, you would assume your system is built the simple way. But not only could the system be built the other way; if it is, its parameters could be wildly different from the parameters you'd infer for the simple architecture. The point is that when control is inside a system, it's very difficult to take data and figure out whether control is present, or anything about that control. One of the nice model systems in which good experiments were done, to show you one of the ways to interrogate control, is that yeast osmoregulation system. In this paper from van Oudenaarden's group, they looked not only at the ability of the system to do perfect adaptation, which is what you're seeing here for different levels of stimulus, but at robust perfect adaptation, meaning it will adapt even if you vary the internal parameters of the system: the exact workings don't matter as long as the feedback loop is in place. That's what's called robust perfect adaptation. The other trick they did, a cute thing that electrical engineers like to do but biologists almost never get to, is that instead of giving a disturbance that's constant or oscillating, you give a disturbance that's a ramp: it has a slope, and it keeps increasing over time. What you find is that while an integral feedback loop will cancel out a constant disturbance and bring you back to baseline, for a constantly increasing disturbance the integral feedback loop essentially takes one derivative of it and leaves you with a constant change in the steady state. In other words, with respect to a constantly increasing disturbance, the integral feedback system behaves the way a proportional feedback system behaves for a constant disturbance. So essentially each integrator is like taking one degree of derivative, and
in fact, if you wanted a system that could deal with a constantly increasing disturbance, you'd have to have two integrators in there: one integrator to bring it down, so it makes the change, and another to make the result stable. So this is one way to actually show that something is an integral feedback system. It's a tricky thing to do, because we can't always arrange for disturbances that go up linearly in time; salt concentration in the medium happens to be easy to do. Even more generally, though, when you look at a biological system, how do you even know there's any control in there at all? I put this picture up because, if you were looking down from Mars and you saw this man walking a tightrope between the Twin Towers, how would you know it was difficult? All you're seeing is a person doing it. You'd probably think, oh, human beings do that all the time, no big deal. How would you know that this is an extremely difficult task, and that there's a tremendous amount of control going on to make it possible? You'd have to make a perturbation. But if you didn't know what to perturb, you could take his watch off, you could change his hairstyle; many perturbations would have no influence here. You'd have to know what to perturb: you'd have to know that if you tipped him, if you gave him a push, you'd obviously find out. You'd need to know that there's something about motion, something about balance, that's difficult here. The reason I said "what if you were a Martian" is that, as a human being, you already have a set of presumptions about what's difficult and what's easy, and you can apply those. You have prior knowledge, and you have to use prior knowledge.
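The ramp experiment is easy to reproduce with a toy model. This is not the yeast osmoregulation circuit; it's a generic first-order plant with hypothetical gains, shown only to illustrate that one integrator turns a ramp disturbance into a constant offset, while a second integrator cancels the ramp entirely:

```python
def regulate(double_integrator, a=0.5, kp=2.0, ki=2.0, kii=1.0,
             dt=1e-3, t_end=60.0):
    """First-order plant dy/dt = -y + c + a*t, where a*t is a ramp
    disturbance and c is the control input; the set point is y = 0.
    One integrator (PI) or two (PI^2) act on the error e = -y.
    """
    y = I1 = I2 = 0.0
    t = 0.0
    while t < t_end:
        e = -y                          # error relative to set point 0
        c = kp * e + ki * I1
        if double_integrator:
            c += kii * I2               # second integral of the error
        y += (-y + c + a * t) * dt      # plant dynamics + ramp disturbance
        I1 += e * dt                    # integral of the error
        I2 += I1 * dt                   # double integral of the error
        t += dt
    return y
```

With one integrator the output settles at a constant offset of a/ki (0.25 with these made-up gains), which is exactly the "one degree of derivative" behavior: the integrator knocks the ramp down to a step. With two integrators the offset is driven back to zero. The gains were picked so both loops are stable; doubly-integrating loops are easy to destabilize, which echoes the speed-versus-stability trade-off discussed earlier.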
So this is kind of a plea: don't take a pile of data and assume the data can tell you what a system is, because you could take the data of this man walking and learn nothing about whether there's control. But if you have the prior knowledge that human beings have trouble doing this, then it's very easy to figure out that there must be a controller. The same thing holds in engineering. How does an engineer know this is a voltage regulator? If I show this to an electrical engineer, they'll say, oh yeah, it's a voltage regulator. Did they do a perturbation in their head? No. So how do they know? They know what a voltage regulator should look like: there's an architecture associated with being a voltage regulator, and that's what the engineer is using. Not doing experiments, not knowing that the voltage needs to be regulated, but simply saying, oh yeah, that's what a voltage regulator should look like. And this is a big push right now in systems biology, this design-principles movement: the idea that you could take the structures of systems, look for these things, and say, well, if I see something that looks like an integrator, maybe there's integral feedback control going on. In some ways that's a very good thing to do, but in other ways it's a very tricky thing to do, because you don't really know what something's for until you know what it's for. Remember, we talked about how a simple negative feedback loop can be thought of as something to speed things up, something to make things more parametrically robust, or something to give a system more degrees of freedom so that it can be adjustable. So even though we can find design elements, knowing exactly what the purpose of those design elements is will still require some type of prior knowledge. The last thing we can try to do, independent of knowing what all the perturbations are, is to look
inside systems, at the fluctuations that happen inside them. If you have a system that's performing very well in response to various perturbations, you'll see a very poor correlation between the perturbations and the performance, which is exactly why you won't notice that the performance is being controlled. If you look inside the system, though, you should find internal variables that mirror the perturbations, the ones providing the error signals, and they should be poorly correlated with the output. This is very counterintuitive: the sign that there's a controller in your system is that there's something in your system that has influence on the output but isn't correlated with it; rather, it's correlated with things going into the system that aren't affecting your output. Compared with the usual way people try to identify systems, this is exactly backwards: you'd normally look for things inside the system that are related to its behavior. But in a control system, you should be looking for things inside the system that are related to the behavior that's not happening, the behavior that maybe should be happening. So it requires a very keen sense of what the problems in a biological system may be. And I'll leave you with this: in biology, physics is great, but we also have to anticipate that there's going to be control wherever control can evolve, and we have to anticipate the effects that will have. I'll just jump to the end and leave you with that: remember, keep calm and expect control. OK, I'll take some questions.