So, welcome back. You already have a question, Roa? No? OK, so welcome back. It's a pleasure to have Silvina here to give this seminar, in spite of... well, she will tell us in spite of what. Silvina is a physicist from Argentina, and I think she is also a role model for female scientists; she will say something about this at the end. Today she will talk about her research on biological systems and information transmission in cell signaling. Thank you, Silvina.

OK, thank you, Matteo, for the invitation, and thank you all for being here. I know that you just had an exam, so it's not so easy; I tried to prepare the talk so we can go through it without pain. My name is Silvina Ponce Dawson, I am from Argentina, and I work at the University of Buenos Aires. I structured the talk in two parts. The first part is introductory, because I knew this was part of the course. The second part is more about the work that we've done with Alejandro Colman-Lerner and Alan Givré. Alan is a PhD student who is about to defend his thesis in a few months, and Alejandro is a biologist who works on systems biology at the University of Buenos Aires.

So what is this thing about information in cells? Basically, cells have to sense their environment all the time, and they have to react to changes in it. On many occasions those changes are changes in the concentration of certain substances; let us call them effectors, because they generate the response of the cell. So how do cells detect changes in the concentration of the effectors? They do it through binding: cells typically have on their membrane proteins that act as receptors, with binding sites to which these effector molecules can bind. Once bound, the effector molecules generate changes inside the cell. This is a scheme. So this is the receptor.
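To make "detection through binding" concrete, here is a small illustrative calculation (not on the slides): at equilibrium, the fraction of receptors occupied by an effector at concentration c is c/(c + K_d), where K_d is the dissociation constant of the binding site. The numbers below are assumptions chosen only to show the shape of the curve.

```python
def bound_fraction(conc, kd):
    """Equilibrium fraction of receptors occupied by an effector at
    concentration `conc`, with dissociation constant `kd` (illustrative)."""
    return conc / (conc + kd)

# Occupancy rises with concentration and is exactly 1/2 at conc == kd.
for c in (0.1, 1.0, 10.0):
    print(round(bound_fraction(c, kd=1.0), 3))  # 0.091, 0.5, 0.909
```

This saturating occupancy is the first step by which a concentration in the environment gets "read" by the cell.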
The receptor is a protein that traverses the membrane of the cell; it's called a transmembrane protein. Part of it is outside the cell and part of it is inside, and that's why it can relate what's happening on the outside to what's happening on the inside. Once something binds to the receptor, the receptor can change its conformation and induce changes inside the cell. That generates what is called a signaling cascade: further changes that induce changes, and so on. This is an example that I don't want to go through, but sometimes you increase the production of some substance, or you activate some enzymes, or in other cases you repress the activity of enzymes. On many occasions something changes inside the nucleus, in eukaryotic cells, which have nuclei and keep their genetic content there. Things change in the nucleus, and that induces changes in what is called gene expression, the production of proteins, and that's how the cell mounts its response.

One example is yeast cells, the system Alejandro works on experimentally. Yeast is what you make beer with, basically. When they have enough nutrients, those cells just divide and grow that way. But if you deprive them of nutrients, something happens: they start to differentiate into two types of yeast cells, so that they can mate and reproduce by mating. And how do they mate? They sense a substance that the opposite mating-type cell sends out to the environment. For example, MATalpha cells secrete a substance called a pheromone, the alpha-factor, which is detected by the MATa cells; and MATa cells secrete a-factor. Experimentally, what they do is take only MATa cells, put them in a dish, and then add the pheromone they react to.
And so you induce the mating response even though the opposite mating type is not in the system. Once they start to sense the pheromone, through binding to receptors, the cells start to change; they change their shape. For example, this is a picture where you see that the cell develops what is called a shmoo. Under physiological conditions the shmoo is directed: it grows in the direction of increasing pheromone concentration, because that is going to lead the cell to its mating partner. So pheromone induces changes in the expression of many, many genes, a large percentage of the total yeast genome, and the cells develop this shmoo. Then the cells come into contact, they mate, and they reproduce that way. As I told you, this is how it's studied experimentally: you can genetically modify the cells so that some of the proteins they express are fluorescent, and you can look at them in the microscope. That way you see which proteins are expressed in response to the appearance of pheromone.

Sorry, so at the beginning you have two cells, right? Two types?
Yes, there are two types, MATa and MATalpha.
Then they mate, and what is the end state?
Well, they mix their genomes.
So they become one?
Yes, they become one, because you can have diploid or haploid cells, and so they become one. And I guess they choose one of the two types.
But essentially this is not growth, no? From two cells you get one.
The others stay alive. I mean, if you produce one, that's true, yes, you produce one. The thing is that in most experiments, which is what I work with mostly, you don't have a partner. You just have the pheromone, and what you see is that they develop the shmoo and they stay.
OK, so I don't want you to look at all the details, but the pheromone attaches to a receptor, you have the whole signaling cascade, and that induces the production of this shmoo. It induces an asymmetric growth, because the shmoo points toward the region where the pheromone concentration seems to be larger. This is a chemotropic type of growth: the cells don't displace themselves, they just grow in that direction. So this is not just detecting that there is something in the environment that wasn't there; it's also determining where the concentration is larger. It's not exactly measuring concentration, but at least it's comparing concentrations. And how do cells do that? In this case, they would have to measure the gradient across their own length, for example. And what are the smallest differences that they can detect through binding? One way to approach this is through information theory, and this is something that Bill Bialek has done for many years. He has an interesting book called Biophysics; it has a subtitle that I didn't write there. Our work has drawn on Bialek's work, and what I'm going to tell you about information comes from his book, basically.

So let me say what information is. I guess most people here are from physics, but there are some biologists, so I wanted to introduce this quantitative idea of information. Suppose that we ask a question that can have N different answers. What is the level of uncertainty that we have before getting the response? Once we have the response, we no longer have any uncertainty, so we gain an amount of information equal to the uncertainty that we had before. A way to measure the uncertainty, which comes from statistical mechanics, is the entropy, which is related to the probabilities with which the different responses occur.
If I know from the start that one of the responses is going to be chosen with probability one, then I don't have any uncertainty. But if they are all equally probable, then I have the greatest uncertainty. So the uncertainty is related to how probable the different responses are. All of this quantitative theory of information was started by a seminal paper written by Shannon in the 1940s. He proved that the entropy is the only definition of information that abides by some simple rules that I'm not going to discuss here. The expression for the information is the entropy, H = -sum_n p_n log2(p_n). Here p_n is the probability of occurrence of answer n. When I do a sum over n of p_n times some function of the answer n, what I'm doing is computing the mean of that function, weighted by the probability with which the different responses appear. Here, the function is -log2(p_n), where log2 is the logarithm base 2: 2 raised to log2(p_n) gives back p_n. Since the probability is always less than or equal to 1, the log is less than or equal to 0, so with the minus sign the whole thing is greater than or equal to 0. Notice also that the smaller p_n is, the larger -log2(p_n) is. So what I am averaging is a function that is larger the smaller p_n is, and that measures the uncertainty, because the responses that are less likely to occur are the ones that give me the most uncertainty. OK, so that's the idea of the information. With this choice of log base 2, the information is measured in bits. And in one case it is particularly simple to understand.
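As a quick check of this formula (an illustrative sketch, not from the talk's slides), one can compute the entropy in bits directly; with the base-2 logarithm, four equally likely answers give exactly 2 bits and a certain answer gives 0:

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = sum_n p_n * (-log2 p_n), in bits.
    Terms with p_n == 0 are skipped (the limit p*log p -> 0)."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([0.25, 0.25, 0.25, 0.25]))  # four equally likely answers -> 2.0
print(entropy_bits([1.0]))                     # certain answer -> 0.0
print(entropy_bits([0.9, 0.1]))                # biased answers -> less than 1 bit
```

The uniform case is the one with the greatest uncertainty for a given number of answers; any bias lowers the entropy.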
Please interrupt me if you need some clarification. Suppose that lowercase n counts the possible responses, we have N responses in total, N is a power of 2, say N = 2^m, and all the answers are equally probable, so that each of them occurs with probability 1/N. Then the entropy is the sum from n = 1 to N of (1/N)(-log2(1/N)). Since 1/N = 2^(-m), taking log2 gives -m, and the minus sign makes it +m. So I have m, times 1/N, summed N times, which gives m. So I gain m bits of information when I read one of the responses, when I have certainty about the response.

A way to think of this is to take a small m, for example m = 2. Imagine the interval [0, 1] as a segment. I have 2^m = 4 options, all equally probable, so I divide the segment into 4 segments of the same length, 1/4. The four responses correspond to the four segments, and when I hear one of them I gain 2 bits of information. And 2 bits is also what I need to identify each of these segments: when I divide the segment in two, I call the left half 0 and the right half 1, and if I further divide each half in two I add another digit, 00, 01, 10, 11, which also identifies each of the four responses. So the entropy is not only the amount of information that we gain when we hear the answer; it is also related to how much space I need to write down the answer, to identify it. Anyway, now let's apply this to
something more biological. In biology, noise and variability are unavoidable, and usually you don't get the full answer: you ask something and you get one answer with a certain probability, another with another probability; it's more complicated. The idea here is that, for example, if a cell senses the environment, you might think of the environment as being in one of a certain number of possible states. Let's assume you can count the states, and call them W. The cell then observes some data, called D. What we want to quantify is how much information the cell gains about the state of the environment when it measures D, which could be the concentration of an effector. That's the idea of the information gain. Before, we had the entropy of the probability distribution of the possible answers, and when we heard one of those answers we simply gained as much information as the initial uncertainty. Now we gain partial information, somehow. So what we are saying is: we take the entropy over all possible states of the environment, and we subtract the entropy of what we still don't know about the state given that I measured something. This is a conditional probability. For example, in the case of four responses corresponding to four segments, maybe I can only measure with a coarser resolution: I measure and I learn that the first digit is a 0, so I can rule out these two segments, and the conditional probabilities of the remaining two are one half each, given that I know the first digit is 0. I have constrained the universe of possible answers; that's the idea of the conditional probability. That is what happens when you learn one particular answer; but then what you do is
you average over all possible outcomes of the measurement, each weighted by its own probability, and that's this expression here: the entropy of W, minus the average over D of the entropy of W given D. This, which is difficult to point at, is the probability of the world state W given that I measured D, and this is the probability that D occurs. This is copied from the book; this should have been a lowercase d, but anyway, you can work out the quantities. I don't want you to actually follow the calculation; I just wanted to write it down. So mutual information is something that is useful to quantify the transmission of information when you have noise, because, as I told you, maybe because of noise you cannot detect things at this level of resolution, only at a coarser one. And that's something useful in biology.

Anyway, an interesting example that Bill Bialek and collaborators applied these ideas to is the development of the fruit fly, Drosophila melanogaster, which starts with a fertilized egg, goes to an embryo, and then from something that doesn't seem very differentiated you get the full organism, the fly. Well, things are not so homogeneous. These drawings are schematic depictions where the different colors show substances, involved in the expression of genes, that are not distributed evenly in the embryo. They will be involved in deciding the fate of the cells in those regions, which part of the fly they will become. This is called positional information: how does the embryo know that this part will become the head, and not the part with the red thingy? Anyway, the way the process works is that the position in the embryo, which determines what is going to become the different parts of the full body of the organism, is represented by the concentration of substances that
regulate gene expression. So there are questions that Bill Bialek asked: what is the mutual information between the concentration of one of these regulators of gene expression and the position along the embryo? They did this with several of these regulators; in particular, in the book he discusses one called Hunchback, a protein that is a transcription factor. Well, this figure is from Bialek's book; we will see a more complicated picture of transcription later, but this is the basic idea. You have DNA, which has the information on what the gene is, and that is going to be used to make a messenger RNA (not plotted here), and the messenger RNA is going to be used to make proteins. This enzyme, the polymerase, reads the information along the gene and catalyzes the joining of the nucleotides that yield the messenger RNA. Whether the polymerase is going to work or not depends on whether certain transcription factors are bound near the site where the reading of the gene information starts. That's the way in which you sort of regulate gene expression.

Anyway, what they did was express this protein with a fluorescent label, so that they could look at it in the microscope, and they looked at many embryos. This is what they found. The plot is in terms of the position over the total length of the embryo; that way you can compare embryos of different lengths, and this uneven distribution also stays put while the embryo grows. So this axis is position over total embryo length, and this one is, basically, the fluorescence of Hunchback. And here is the explanation: there are some small dots that for me are impossible to see, but those are the individual measurements; I saw one once. What you see more clearly are the circles, which are the mean over 51 embryos, and then the standard deviation around the means. That's the plot, and this is the fluorescence image; it shows two proteins, and Hunchback is the
one depicted in red. So if you call the gene expression g and the position x, you can compute the distribution of gene expression for each position along the embryo, which is this conditional probability of the fluorescence given the position. Then you can compute the probability of g not conditioned; I'm not going to go through that, but then you can compute the mutual information. That's basically what I want to say. What they got was over two bits, which is different from what used to be thought before, because this looks very much like a binary distribution, with Hunchback high here in the first, anterior half and zero in the posterior half of the embryo; but there is a bit more information than one bit.

Anyway, in this example all the quantities are stationary. The process is actually quite dynamic, but there is a separation of time scales, so you can assume that what you are comparing here is pretty much stationary. That's not always the case with transcription factors. For example, let's go back to yeast, which is a typical model system because it's very cheap and you have a zillion mutants; I mean, it's easy to introduce mutations and to observe a zillion proteins. Anyway, what they did in this work was look for transcription factors that enter the nucleus in a pulsatile way, and they found many that showed pulses of nuclear localization. These are the images they took that from. So it's not just that transcription factors are either there or not there in a stationary way; they can also pulse, and many of them do in the case of yeast. Furthermore, in some other experiments, they showed (this is another example in yeast, in which Crz1 is a transcription factor that enters the nucleus in a pulsatile way in the presence of external calcium) that, in the experiment,
you vary the concentration of external calcium, you look at the burst frequency, and you see that it varies with the concentration. So in some way you are encoding the concentration of the external effector in the frequency of the pulses of nuclear localization of the transcription factor. This is another example, an experiment by a student we co-directed with Pablo Aguilar, Nawel Tarkovski, and we observed pulses of calcium in the presence of pheromone in yeast. So there are examples in which the strength of the stimulus, the concentration of the effector, is encoded in a frequency; and in other cases the stimulus strength, the concentration of what's in the environment, is encoded in increasing concentrations of something that varies in a more stationary way, in particular the transcription factor. So the question that motivated our work was: what is the advantage of having these different ways of encoding the environment, and when is one encoding mechanism better than the other?

Something interesting is that there are transcription factors, one transcription factor, that can either enter the nucleus and stay there for a while, or enter in a pulsatile way, depending on the type of external stimulus. This is an example, also in yeast, that I'm going to discuss in a little more detail. Here is what we call information multiplexing: you have one transcription factor, one regulator of gene expression, whose dynamics is different depending on the external effector, on the stress that you put on the cell, and the response it generates is going to be different, because the cell responds differently to the different stresses. So you use one transcription factor to modulate different genes depending on its dynamics. Well, this is the example in which they did the experiments with three types of stresses and they observed different types
of behavior of the same transcription factor. These are averages over many cells, these are some individual traces, and this is increasing stress: glucose limitation is more severe here than here. What you see is different behaviors; for example, for oxidative stress it is very different. What they saw was that, depending on whether they induced glucose limitation, osmotic stress, or oxidative stress, the transcription factor either had these pulses or not, or it had a prolonged elevation in the nucleus whose duration lasted longer depending on the intensity of the stress. So there are different ways of encoding the external world, this example says.

Anyway, let me discuss transcription a little more, to introduce what we did. Is it 20 minutes left, more or less? So, this is from another book, the short version of a very famous book, Molecular Biology of the Cell by Alberts et al. You have the information in DNA, which is a double helix, with these very tight bonds between the nucleotides of one strand of the helix and the other. And this is what is called the central dogma: a gene, some sequence, has the information to make the proteins. First you create a template, which is the RNA, and then from the RNA you translate into the protein. Then you have the polymerase, which I've already talked about: you have to pull apart the two strands to be able to read, and this is done by the polymerase. But in eukaryotes there is a question here: where does this reading start? It has to find something that is called the promoter, a sequence on the DNA where some accessory things will indicate to the polymerase that it can start to read. And this is not so easy in cells with a nucleus, because DNA is packed, associated with histone proteins, and you have to unwrap a bunch of stuff to be able to read. And then you
have some molecules that bind, called general transcription factors, that assemble on the promoter to allow the polymerase to read the genetic information. There are also regulatory sequences that are used to switch on and off those specific transcription factors we were talking about. And then you have to change all this packing; this stuff is very, very complicated. So you have all these transcription factors, effectors, et cetera, to regulate gene expression, and all this mess we modeled in a very simple way.

What we wanted to study was the differences between the three encoding strategies that had been found in those experiments in yeast: the transcription factor stayed elevated inside the nucleus for a while with a concentration of increasing amplitude for increasing strength of the external stimulus, or it stayed for a longer period of time for increasing strength of the stimulus, or it had these pulses inside the nucleus. We call that encoding by amplitude, duration, and frequency. Then we used a very simple model: we started with the transcription factor, assuming that it could have different amplitudes, different durations, or different interpulse frequencies, and it produced some mRNA, which is the blue curve here, through this simple transcription model that I'm going to discuss in more detail in a bit. We did not model the mapping from the external world onto the transcription factor; that's something we've done after this work. We treated three types of inputs: for what we call amplitude modulation, the inputs were single pulses of different amplitudes; in the case we call duration modulation, a single pulse of variable duration; and in what we call frequency modulation, a train of pulses. Then, well, we integrated the mRNA produced by the model. We tried different integrals, but the one that I'm going to discuss today is the integral over the whole time of
the simulation. Then, let me go back a little: we computed the mutual information between the integral of the mRNA and either the amplitude, the duration, or the frequency of the transcription factor, to see if there were differences. I'm going to skip a little how to make sense of this simple model of transcription, but let me give you a flavor of it. We took it from the literature, where they explained it in terms of fits to their experiments, but you can give it a mechanistic explanation. The idea is that you have your promoter in four possible states. The transitions between some of the states correspond to binding and unbinding of the transcription factor, and then there are transitions that are not related to the transcription factor, which switch the promoter on and off, basically. That's the way in which you can model all that mess: the specific regulators of that gene, the general factors that let the polymerase know it has to start reading there, the restructuring of the chromatin, et cetera.
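As a rough illustration of this kind of promoter model (a reduced two-state caricature; all rate constants below are assumptions for the sketch, not our fitted values), one can simulate a promoter that switches on and off, with an activation rate that saturates with the transcription factor concentration, and count the total mRNA produced:

```python
import random

def simulate_promoter(tf_conc, t_end=500.0, seed=1):
    """Gillespie simulation of a two-state promoter driven by a
    transcription factor (TF).  All rate constants are illustrative
    assumptions, not the fitted values discussed in the talk."""
    rng = random.Random(seed)
    k_on_max, K_d, n_hill = 0.5, 2.0, 2  # activation saturates with TF (Hill)
    k_off = 0.2                          # promoter switches off
    k_m, gamma_m = 1.0, 0.05             # mRNA production / degradation
    active, mrna, produced, t = 0, 0, 0, 0.0
    while t < t_end:
        # activation rate is a saturating (Hill) function of TF concentration
        hill = tf_conc**n_hill / (K_d**n_hill + tf_conc**n_hill)
        rates = [
            (1 - active) * k_on_max * hill,  # inactive -> active
            active * k_off,                  # active -> inactive
            active * k_m,                    # transcription (active state only)
            mrna * gamma_m,                  # mRNA degradation
        ]
        total = sum(rates)
        t += rng.expovariate(total)
        r, acc = rng.random() * total, 0.0
        for event, rate in enumerate(rates):
            acc += rate
            if r < acc:
                break
        if event == 0:
            active = 1
        elif event == 1:
            active = 0
        elif event == 2:
            mrna += 1
            produced += 1  # total mRNA ever made: the "integral" of the output
        else:
            mrna -= 1
    return produced

# A stronger TF input should, on average, produce more total mRNA.
print(simulate_promoter(0.5), simulate_promoter(8.0))
```

Runs like this, repeated over an ensemble of input values, are the kind of ingredient one needs before estimating the mutual information between the input and the integrated mRNA output.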
We model it in this way, and assuming that there is a separation of time scales, so that the binding and unbinding of the transcription factor acts on a faster time scale than the other transitions, which are related to the remodeling of chromatin or other things that might be slower, you can reduce this to a simple model in which the promoter is in only two states, and the transition rates are saturating functions of the transcription factor concentration. I'm not going to discuss how you can derive that, which comes from the separation of time scales, but the idea is that to make the transition you need two things: on one hand you need to bind the transcription factor, and on the other you need to activate the promoter through other mechanisms; that's what's here. The production of messenger RNA is modulated in the same way: you also need the transcription factor bound and the promoter active to be able to make the messenger RNA.

Anyway, we computed the mutual information, and we looked for the parameters that maximized it for each of these modulation types. Our expectation was that we would be able to find clearly separated values of the parameters, which are the ones here: the transition rates, the K_D, the exponent saying how many molecules of transcription factor are bound, and so on. That would have been like identifying distinct promoters, because we were thinking that maybe when the transcription factor enters in pulses it activates a certain type of promoter, characterized by these parameters, and when it enters as a single pulse it activates another type of promoter. But we didn't find much of a difference, actually. We found that the parameters that maximized the three ways of encoding the external world were more or less in the same ballpark of values, and the maximum information was not that different either; it was between 1.5 and
less than 2 bits. So then we said, well, how could multiplexing work if the optimal parameters are more or less the same for all the modulations?

Sorry, this looks like a large mutual information. You have a four-state system, right?
No, no... you mean because I plotted here... oh, you mean the states of the promoter? The promoter can be in four states, which we actually clamped to two.
What I mean is: you have four states of the promoter, and that would seem to limit your mutual information to at most two bits, and you get 1.37, which seems rather large.
You say that the number of states of the promoter will limit the information that we can transfer? I don't know the details, but I think that is definitely an ingredient; it's a very simple model, of course. And in the experiments they measure more or less this amount of information. OK, so maybe; I never thought of it in that way, I have to think about that.

Anyway, let me skip this slide, because I don't want to go on too long. So then we said, well, maybe the different modes have a different sensitivity to parameter changes. And there we did see differences: we changed the parameters one by one, for the different modulation modes, away from the values that gave the maximum information transmission. And then we said: if the sensitivity is different, why don't we allow the parameters to vary so that we stay within 90% of the maximum information transmitted for one of the modulation types, and then, within that set of parameter values, look for those that minimize transmission for another mode? How much can we turn down the information transmission for the other mode? Well, these are projections of those sets of parameters. What we did was compare duration with amplitude: maximizing duration while minimizing amplitude, then maximizing amplitude while minimizing duration,
and so on, six possibilities in total; it's like having six options of promoters. And what we found was this: the mutual information as a function of the six promoters that we picked, each promoter plotted with a different color. For each of those promoters we computed the mutual information when the input is encoded in the duration, which is a circle, or in the amplitude, for these two; in this case we compare the mutual information when you encode the input in the duration and in the frequency; here amplitude and duration; here amplitude and frequency. What you can see is that, for example, for duration, no matter which promoter you pick, you always get more or less the same information; while for frequency, when you minimize the transmission for frequency-modulated inputs you get a small transmission of information, and if you are at the promoters that maximize transmission by frequency you get a much larger one. So what we found was that duration was the least sensitive and frequency was very sensitive. That means you could have some multiplexing, in the sense that you could turn off the frequency-modulated inputs for some promoters: we found promoters that could be blind to frequency-modulated signals while transmitting information well for the other modulations, but we couldn't find the other option. Well, these are some examples of how much mRNA you produce, but I'm not going to discuss that. And this agrees with some experimental computations of mutual information done with this transcription factor in yeast, the one that can show the different types of behaviors. In those experiments they can change the amount of the transcription factor; they manipulate it experimentally, so they can produce a prolonged accumulation of different amplitudes, or pulses of different frequencies, and then they computed the mutual information. And what
they compared was two genes was good at frequency modulated signals and the other one that was good at duration amplitude modulated signals and what they saw was that the one that was good for a frequency when you look at the amplitude when you compute how much information it transmits for amplitude it's pretty good the amount of information it transmits and it also well is similar how much information it transmits for frequency but in the other case which is the one that is tuned to work for amplitude it doesn't transmit very well for frequency which is what we found in our theory so you have a promoter that is blind to frequency and transmits well amplitude but the other way round is not found and in these experiments they also did experiments with mutants that show that you can transmit better with a mutant so the cell is not optimizing its promoters basically it's what they say and our answer is maybe it's not optimizing the promoters because it's trying to make those that are good for amplitude to be as blind as possible to frequency anyway so let me go to a summary in the first half I hope you could follow the second half I don't know if you follow so what I showed was that mutual information subs to quantify the ways in which cells make representations of the world around them and also I showed that not only is under stationary conditions that you do that you can also use the dynamics to encode the external world and that motivated the question of our work in the examples I mostly focused on the regulation of transcription so what is the dynamics of the transcription factor and we discussed the case of the transcription factor that one transcription factor that could be modulated by could be stayed in a elevated way for a prolonged way or have these bursts encoded different stresses in different dynamics and and so one of the maybe this multiplexing is the reason to use these different encoding strategies so you save in transcription factors you have one 
transcription factor, and you can transmit many different signals using different dynamics. And, well, for a future talk: how does the mapping from the external world onto these different dynamics operate? Then, from our work: we used that simple model of transcription, we looked for the promoter parameters that maximize the mutual information, and we found that the main difference between the different input modulation types was the sensitivity to parameter changes. We could find promoters that are blind to frequency-modulated inputs, but not the reverse situation. So frequency encoding is more sensitive: it requires a finer tuning of the parameters that characterize the promoters that respond to frequency modulation, and those promoters also need slightly higher binding affinities, which is something related to other aspects of transcription. So maybe the fact that some wild-type promoters are not optimal, from the point of view of how much information they transmit, is because the cell evolved to a situation in which they are a little suboptimal but blind to another way of relaying information.

These are my final words: I am very glad to see many women here. Basically, it's important to challenge gender stereotypes and make everybody feel comfortable doing science. I mean, it's not a fight; it's a collaborative way of working. And, against gender violence, there is a very strong movement that started in Argentina but is now all over Latin America, called Ni Una Menos, "not one woman less." I say goodbye with a picture of my granddaughter, who was born yesterday here in Trieste. Thank you.

Questions?
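The mutual-information comparisons discussed in the talk can be illustrated with a small sketch. This is a minimal plug-in estimator on discretized paired samples, not the code used in the work; the toy input/output pair, the noise level, and the bin count are all invented for illustration:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in estimate of I(X;Y) in bits: discretize the paired samples
    into equal-width bins and sum p(x,y) * log2( p(x,y) / (p(x)p(y)) )."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)            # marginal of x
    py = pxy.sum(axis=0, keepdims=True)            # marginal of y
    nz = pxy > 0                                   # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 50_000)        # hypothetical input, e.g. pulse duration
y = x + rng.normal(0.0, 0.1, x.size)     # noisy readout, e.g. mRNA level
print(mutual_information(x, y))          # on the order of 1 bit for this noise
```

With less output noise the estimate rises toward the limit set by the discretization; with independent samples it drops to nearly zero, which is the kind of comparison made across the six promoters above.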
Yes, good morning, and thank you for your presentation. I don't know if I understood very well: you said that to encode the information there are these three modes, amplitude, frequency, and duration, and that it depends on the type of promoter you are going to use. So I would like to know: if I have a problem and I want to transmit information, how do I proceed? Do I check all three types of encoding, or is there some property that tells me that in this case I should use this particular encoding?

You mean from a theoretical point of view? Because this actually comes from experiments: with the cells, you see that when you subject them to different types of stimuli, different types of stresses, and you look at the transcription factor, it behaves differently. And this has been observed in mammalian cells and with other transcription factors too, so it is not a unique example from yeast, which is a very simple organism. If you want to say something about a particular system, you have to understand the experiments, I would say; then we can come and try to explain why you see what you see in the experiments. What we were wondering was what the advantages of one mode over the other were, whether there were some situations... Maybe by collecting information from all these observations you might eventually say, "I can expect that in this particular case the signaling will go through frequency," because frequency encoding usually happens, for example, when promoters have slightly higher binding affinity for the transcription factor. But I don't know; the theory is not so predictive. You always need to go back and forth with the modeling.

Thank you for the talk, and congratulations on the birth of your granddaughter. My question is: how did you incorporate noise into the model? I didn't understand how you took the noise and put it into your model.

Because I started to feel that you were getting a little tired and it was
a little late, I didn't go into all the details of the model. Some steps are stochastic. I forgot to mention that we add noise to the amplitude of the transcription factor, because in the model what we do is like manipulating the transcription factor directly: we are not manipulating the external world and expecting the model to say what happens to the transcription factor; we go directly to the transcription factor. So we add a little bit of noise to the amplitude of the nuclear concentration of the transcription factor. In the case of the frequency-modulated signals, what we fix is the mean interpulse time, and then we choose the time between two subsequent pulses from an exponential distribution with that mean. So that is one stochastic part, the noise that we have in our model; the rest is deterministic. Then the production of messenger RNA is one by one: we count the molecules and decide stochastically when a new molecule is created, so that step is stochastic too. So one part is deterministic and other parts are stochastic.

The noise that we add to the amplitude is important; that's the reason we have this form here. I went through it a little fast, but the transition between the two promoter states that we call inactive and active, this rate k1, depends on the concentration of the transcription factor raised to an exponent n, the cooperativity exponent, together with a constant called the dissociation constant. If you take n equal to 1: one transcription factor binds to a site, giving the bound complex; the binding has one rate constant and the unbinding another, and their ratio is the dissociation constant, which has units of concentration. It tells you where this function, which plotted against the concentration of the transcription factor is a Hill function, changes behavior: that crossover is Kd, the dissociation constant. When you look for the parameters that maximize the transmission, the dissociation constant has to be relatively large, because if it is too low, then noise alone lets you cross it very easily, and you don't want that. In the case of frequency you can have it smaller. The dissociation constant is the inverse of what is called the affinity, which is how strongly the factor binds to the binding site, which they call B here.

Hi, thank you for your great talk. I have a question on the k parameters. It's nice that you mentioned the work of Bialek and Gregor; if I recall, they developed it for Drosophila embryos and later for pseudo-embryos in mice, where there is this notion of k_on and k_off in the transcriptional bursting dynamics. In your case, could you give us an intuition for what the different parameters physically mean?

As I told you, you can get them from a somewhat more mechanistic description. Here, binding and unbinding of the transcription factor is super fast, so it is essentially in equilibrium, and this term is roughly the fraction of time spent in the first state, which I don't remember how I labeled; I think it was bound and unbound, with the binding here and the unbinding here. You assume that this binding is very fast compared to the other transitions, so the fraction of time spent here is more or less constant, and the term is related to the fraction of bound promoter, whether in the active or in the inactive state. Then k1 would be a slightly slower transition, the other kind of changes you need to produce to put the promoter in a configuration that allows the polymerase to start reading the
information. And then the other step, the one that produces mRNA, which also goes like this, is not a transformation of P1 into something else; it is the pace at which transcription occurs, and it comes from assuming that transcription can only proceed from the state that is both bound and active. That's why, when it's bound, you need this factor, and then this is the transcription rate from that state, basically.

Thank you.

You're welcome. Any other question? If not, we thank Silvina again, and now we are free.
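The stochastic ingredients of the model described in the answers above (noise on the transcription-factor amplitude, exponentially distributed interpulse times with a fixed mean, and molecule-by-molecule mRNA production gated by a Hill function) can be sketched as follows. This is a toy illustration under assumed parameter values, not the authors' implementation; all rates, amplitudes, and time scales are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

def hill(tf, kd=1.0, n=2):
    """Hill activation: fraction of active promoter as a function of the
    transcription-factor concentration tf, with cooperativity exponent n
    and dissociation constant kd (hypothetical values)."""
    return tf**n / (tf**n + kd**n)

def frequency_modulated_input(t_end=200.0, mean_gap=10.0, pulse_len=2.0,
                              amp=2.0, amp_noise=0.2, dt=0.01):
    """Frequency-modulated TF input: the mean interpulse time is fixed,
    each gap is drawn from an exponential distribution with that mean,
    and each pulse amplitude is jittered with Gaussian noise."""
    t = np.arange(0.0, t_end, dt)
    tf = np.zeros_like(t)
    now = rng.exponential(mean_gap)
    while now < t_end:
        a = max(0.0, amp + rng.normal(0.0, amp_noise))  # noisy amplitude
        tf[(t >= now) & (t < now + pulse_len)] = a
        now += pulse_len + rng.exponential(mean_gap)
    return t, tf

def transcribe(t, tf, rate=5.0):
    """mRNA produced molecule by molecule: in each small time step a new
    transcript appears with probability rate * hill(tf) * dt."""
    dt = t[1] - t[0]
    births = rng.random(t.size) < rate * hill(tf) * dt
    return int(births.sum())

t, tf = frequency_modulated_input()
print(transcribe(t, tf))  # total transcript count over the run
```

The deterministic part (the Hill function) and the stochastic parts (amplitude noise, exponential gaps, transcript-by-transcript production) are separated exactly as described in the answer; a full treatment would also make the promoter's active/inactive switching an explicit stochastic two-state process.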