Yes, I have a question for Marie. But before I ask it, I have to clear up some misunderstandings that may arise as I ask it. First of all, I am a strong proponent of using models in biology, and I feel close to what Nitin Baliga expressed about the use of models: models dedicated to a certain phenomenon, for a certain purpose, multi-scale as needed to express the biological phenomenon. The second misunderstanding I want to clear up: I very much admire the work done at the CRG, and I think it is very useful. Now I can come to my question. To caricature a bit, to make myself clear: what I understand is that you contributed a lot of information that was useful to better calibrate the Karr and Covert whole-cell model of Mycoplasma, and that you spent time discussing with those people, sending data, explaining the data, and so on. I would like to know whether you saved any time on your side by using their model to guide your own experiments.

That is the final purpose of the whole-cell model: we would like to have it as a tool for predicting, or for facilitating, the work in the lab. But we have not used it that way yet. We are currently putting more effort into building the right tool for doing that.

Shall we say, then, that their model would be perfect only if it were a one-to-one scale model, like the emperor of China who, because he was a powerful man, requested a one-to-one scale map of his own empire? In some sense you have the one-to-one model, because you are working on Mycoplasma itself; they do not have it, because they are working with a virtual copy of it, which is imperfect. Would such a whole-cell model be good only once it makes 100% correct predictions, which in my opinion will never happen, and be useless until then?

I do not think it cannot be used in its current state, but our goal is to steer it so that the simulations reproduce the applications we want to develop. Since we want to develop a defined medium for optimal growth, we need a model of the metabolism, which is not currently developed and implemented in the whole-cell model, that makes growth depend on the different components of the medium, so that we can see how changes in the medium affect growth. In the end, the improvements you have to make depend on the application you want the model for and on how you want to use it. And I am not saying that the current version cannot be used for some predictions: if I delete a gene, would I expect the cell to grow faster or slower? You can do that; you can test those modifications. But if you want to go deeper and study how to engineer the medium, and which modifications of the genome to make, we are still developing the model for those purposes.
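To make the medium-dependence point concrete, here is a minimal constraint-based sketch of the kind of calculation being described: predicted growth as a function of an uptake bound standing in for a medium component. The toy two-metabolite network and all numbers are invented for illustration; a real whole-cell or genome-scale model would have thousands of reactions.

```python
# A toy flux-balance calculation: how a change in the growth medium
# (modelled as an uptake bound) propagates to predicted growth rate.
# The network and all numbers are hypothetical illustration values.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (rows: metabolites M1, M2; columns: reactions).
# uptake1: -> M1, uptake2: -> M2, biomass: M1 + 2*M2 -> (growth)
S = np.array([
    [1.0, 0.0, -1.0],   # M1 balance
    [0.0, 1.0, -2.0],   # M2 balance
])

def predicted_growth(uptake2_max, uptake1_max=10.0):
    """Maximise biomass flux subject to steady state S v = 0 and
    medium-dependent uptake bounds (linprog minimises, so negate)."""
    c = np.array([0.0, 0.0, -1.0])          # maximise v_biomass
    bounds = [(0, uptake1_max), (0, uptake2_max), (0, None)]
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    return res.x[2]

# Scan the second medium component: growth is limited by whichever
# substrate runs out first (v_bio = min(u1, u2 / 2) in this toy network).
for u2 in [1.0, 5.0, 10.0, 40.0]:
    print(f"uptake2 bound = {u2:5.1f}  ->  growth = {predicted_growth(u2):.2f}")
```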
You have a comment here?

I just want to make a comment. I think this is not only true for biology. If someone claims that mathematical modeling and simulation make correct predictions, he misunderstands the whole discipline, because modeling only draws consequences from assumptions. Therefore I would never claim to make a prediction of that kind. Let me give you an example. We have studied a lot of combustion theory, where modeling and simulation are used heavily. But if you have a single model and you want to predict what happens outside it, that is not possible, because the knowledge is not there. The same is true for populations of microorganisms. But you should still use the model. I am very happy if I can improve an experiment, and experimental design is therefore a very important part, which can help not just biologists but chemists, whoever: you reduce the cost and the time needed to do experiments.

I agree. The basis of everything is the knowledge, and also the interconnection between people, between the modeling experts and the experimental experts. In that sense you have a great team and great collaborators, and hopefully one day we can merge it all and have a proper model of the system.

We have a question. Suppose we want to produce something, and you make the network dynamic because you optimize over its own structure. The mathematics that can do this is missing. We have looked at this a lot; it still needs to be solved from a mathematical point of view.

May I pick this one up? I think you are right. The second place where I have been told we may not have numerical methods is when you go across scales. So when you go from...

That is something different; we would have to discuss what you mean by that. But this one remains a network problem, and there is no really good system-reduction method for network optimization. It is a challenge for mathematics. OK.

Returning to your question on the bottom-up approach: I think the characterization of the different components of the system is valuable. As in the promoter studies, we can now identify which signals, or which combinations of signals at the level of structure or of sequence, we can use for expressing genes. Also, through transcriptome analysis under different conditions, we can identify possible regulators, who they are, and how they respond to different environmental conditions. That is information that can be applied to the design of genetic circuits. And finally, if we reach the point of implementing all this information in a whole-cell model, then we would expect that by studying different combinations of this information at different levels we could do an in-silico design of these circuits before testing them in the lab. That is what we would like, and hopefully one day we can do it.

A question about systems modeling. I was struck by the central example that was given, where you have two organisms and you are trying to get them to adapt to each other. My question is: how far away are we from being able to model systems that are evolving? In other words, not a static system: you set these organisms up to adapt to each other under given environmental conditions, and then you let the system go.

Modeling evolution is very difficult, because when there is genetic change it is difficult to predict the consequences of that change. What you can do, however, is this: in the context of the metabolic fluxes and the dependencies between the two systems, you can find the constraints on how that system can be set up, so that with the right kinds of inputs you get the optimal behavior. You can do that.
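The answer above is about constraints rather than predictions: you cannot say which mutation will come, but you can bound the conditions under which a two-organism design can work at all. Here is a minimal sketch in that spirit, a toy chemostat cross-feeding model in which one organism grows on the supplied substrate and secretes the by-product the other needs; every name and parameter is hypothetical.

```python
# A toy cross-feeding model: organism A grows on the supplied substrate
# and secretes a by-product that organism B needs. All parameters are
# invented; the point is that flux dependencies constrain when the pair
# can stably coexist in a chemostat-like setting.
import numpy as np
from scipy.integrate import solve_ivp

D = 0.1                 # dilution rate (1/h)
S_IN = 5.0              # substrate feed concentration
MU_A, K_A = 0.5, 1.0    # Monod parameters for A on substrate s
MU_B, K_B = 0.4, 0.5    # Monod parameters for B on by-product p
Y = 0.8                 # by-product secreted per unit of A's growth

def rhs(t, y):
    a, b, s, p = y
    ga = MU_A * s / (K_A + s)      # specific growth rate of A
    gb = MU_B * p / (K_B + p)      # specific growth rate of B
    return [a * (ga - D),
            b * (gb - D),
            D * (S_IN - s) - ga * a,
            Y * ga * a - gb * b - D * p]

sol = solve_ivp(rhs, (0, 500), [0.1, 0.1, S_IN, 0.0])
a, b, s, p = sol.y[:, -1]
print(f"steady state: A={a:.2f}, B={b:.2f}, substrate={s:.2f}, by-product={p:.2f}")
# Raising D above either organism's maximal growth rate washes it out:
# the model cannot say which mutation evolution will find, but it does
# bound the conditions under which the consortium can work at all.
```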
But if there is a mutation in a regulatory network that could improve the system, it is difficult, if not impossible, to predict what that mutation might be. You might reason from general principles. For example, if I project this onto the evolution of an organism that goes back and forth between nitrification and denitrification, I would say that if the regulatory networks around the denitrification enzymes are disrupted, that kind of fluctuating lifestyle is more likely to proceed or evolve, because regulation is no longer required for those genes. That is counterintuitive, but it is the kind of principle you can discover.

My concern is the same. The idea is very good and the proposal is good, but the problem is that we do not know how the mutations will ultimately affect the function of the different genes. In an evolution process you first need to know the impact of the mutations in order to know what the advantages will be, or what selection will act on. That is the main limitation on using the models to predict the best evolutionary outcome.

This comes back a little to the question we had before. A comment: we have to go from single cells to populations, and that upscaling is a terrible problem. You can imagine measuring all the processes and receptors and so on in a single cell; for the population you cannot treat this precisely, it is impossible, because the space is too huge. But we have populations in real life, and people make predictions about populations, and this is dangerous.

Sorry, experimentally it is the other way around: it is easier to measure populations and get data on populations, and much more difficult to get data on single cells.

But you asked about going from information at the molecular level in a cell, an average cell.

I agree that experimental data are easier to obtain at the population level: when you do omics, you are taking into consideration the whole population, all the cells in the sample, so what you measure is an average cell. That is true. We still do not know exactly what the impact of having different cells in the population is. And the conditions in the lab are not the same as the real conditions in an infection process, or whatever the setting is. So there is a lot of work to do.

But the multi-scale part, the scaling up, is really hard. Let me give an example: we tried to model chemotaxis for a population while taking into account what happens inside each cell, and that is very complicated. What we have is not yet satisfactory for the biology; it is only a first step.
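A small illustration of why the "average cell" can mislead: if each cell responds almost like a switch but cells differ in their thresholds, the population-level, omics-style readout looks smoothly graded even though no single cell behaves that way. All numbers below are purely illustrative.

```python
# Each cell in this toy population switches a gene on sharply (a steep
# Hill response), but cell-to-cell variation in threshold makes the
# population average look like a smooth, graded response.
import numpy as np

rng = np.random.default_rng(0)
N_CELLS = 10_000
thresholds = rng.lognormal(mean=0.0, sigma=0.5, size=N_CELLS)  # per-cell K

def single_cell_response(signal, K, n=8):
    """Steep Hill function: nearly all-or-none in an individual cell."""
    return signal**n / (K**n + signal**n)

for signal in [0.25, 0.5, 1.0, 2.0, 4.0]:
    pop = single_cell_response(signal, thresholds).mean()
    one = single_cell_response(signal, K=1.0)
    print(f"signal={signal:4.2f}  single cell={one:.2f}  population mean={pop:.2f}")
```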
Who has another question here?

This is the question of a mathematician. In all your talks you have shown, in certain parts, a regulatory network. My question is the following: do you think the organism is completely described by this huge graph with all these relationships? Because from a formal point of view, with a regulatory network like that, you can do anything; you can always add ten more nodes. These pictures, like the popular one of the Krebs cycle with a thousand entries, make nice calendars and nice posters, but what information is actually in there?

In the end, this is why we chose Mycoplasma: it is very simple, it has a very low number of genes, and you expect to be able to characterize the different elements and components properly and see the crosstalk between the different biological processes. But then you realize that there is a lot of complexity, because at the level of the regulatory network it is not only the transcription factors. The transcription factors could explain only about 10% of the effects we were observing when we studied the transcriptome under different conditions. We studied around 300 conditions in which we exposed the cells and observed changes in gene expression, but when we related these changes to the transcription factors, only that small fraction could be explained this way. So there are other elements. And when we studied the structure of the chromosome, we realized that the supercoiling of the DNA in some regions is also important. But then, what regulates those changes in supercoiling, and how is it modulated? There are several factors beyond the regulatory network, so the complexity is very high. The ideal case would be to study properly the connections between the different pathways and networks, to really understand the system. That is the goal we have, and we think that by using Mycoplasma, which is simple, we can at least establish the basis for later developing these models and making the simulations more accurate.

When people build mathematical models from these regulatory networks, then for every interaction you describe, for example a repression drawn as a single arrow, you can use very different types of model: Hill-type models, mass-action-law-based models. And for each choice you make, you get a completely different output. For the same network, for the same graph, if you choose one form for the repression interaction, say a steep Hill-type form, the output is completely different from another choice. So do you see this when you run your simulations, or when you compare your simulations with reality?

In the end you have to do what you said: compare with reality. You have to compare with experimental data and see which model fits the results best. You can run the simulations with different kinds of model, but in the end the one you consider correct is the one that fits your experimental results best, the one that in principle reproduces the physiology of the system.
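The questioner's point is easy to demonstrate: the same one-arrow graph, "X represses Y", behaves very differently under a steep Hill-type rate law than under the simple hyperbolic law you get from quasi-steady-state mass-action binding. The sketch below is illustrative only; the step input and all parameters are arbitrary.

```python
# The same one-arrow network, "X represses Y", simulated with two
# different rate laws. The graph is identical; the dynamics are not.
import numpy as np
from scipy.integrate import solve_ivp

BETA, GAMMA = 1.0, 0.2                     # max synthesis, degradation of Y
X_OF_T = lambda t: 0.1 + 0.9 * (t > 25)    # step increase in repressor X

def hill_model(t, y, K=0.5, n=4):
    # steep, cooperative repression
    x = X_OF_T(t)
    return [BETA * K**n / (K**n + x**n) - GAMMA * y[0]]

def hyperbolic_model(t, y, k=1.0):
    # quasi-steady-state mass-action binding gives a 1/(1 + x/k) form
    x = X_OF_T(t)
    return [BETA / (1.0 + x / k) - GAMMA * y[0]]

t_eval = np.linspace(0, 100, 5)
for name, f in [("Hill (n=4)      ", hill_model),
                ("hyperbolic (n=1)", hyperbolic_model)]:
    sol = solve_ivp(f, (0, 100), [0.0], t_eval=t_eval)
    print(name, np.round(sol.y[0], 2))
```

After the step in X, the Hill variant shuts Y almost completely off while the hyperbolic variant only halves it; fitting against data is what discriminates between the two, exactly as the answer says.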
This poses a kind of ontological problem. On one side you have the mathematical work; on the other you have the experiment, where you take your piece of reality and ask the mathematicians to do something with it. In the end, the result has some interpretation, so you must repeat experiments to see whether everything goes as in the model or not. So what is the role of the mathematician, the role of the model, in this approach?

In the end, I think it is about extracting new information from the modeling. Yes, you finally need to corroborate it, but it is easier to corroborate something for which you have evidence than something where you start from a blank page. When you are an experimentalist and you want to test a hypothesis, you may have a hundred hypotheses; the model gives you a way to discriminate among them and establish which ones are the priorities. That facilitates the work a lot. So I think in the end it is good to have this interplay between mathematicians and experimentalists.

A question, probably not only for you but for everyone, about models. All these network-based models, these database-based models: how robust are they to hidden information? They are built on what is known today. They work, but two years later we will have new methods and will add new information. How robust are today's models to this still-unknown information? How could it potentially change the results of the modeling? It will change them, certainly, but has anybody measured by how much?

I think there is always more information you can incorporate, but if you are trying to predict a certain phenomenon, you can look at how close you get to actual observations in new experiments and see what the error is. Within the scheme of what you know, if you get close to the observations, I think the model is doing well, although it may not be completely mechanistic.

Let me say what I mean. We have several functional models, for instance of DNA repair and the cell cycle; this is close to medical science, to cancer biology. We know some connections between the cell cycle and the DNA repair mechanisms, and we make predictions for drug discovery and so on. Two years later we may find new mechanisms, new interactions, that connect the two models. How would that potentially change things? When people measure robustness in statistical models, they perturb the data and check whether the results stay the same. Perhaps we should do the same here: add some hypothetical, not-yet-measured interactions between these functional modules and test whether the results of our modeling change dramatically.

Well, what I would like to note here is that these models are database-dependent: you always have the information on all the components of the model and on all the reactions. You can always add new reactions, once you know them. When you learn a function, or how elements act, or the interplay between different elements, you can include that in the databases and then see what the impact on the model is. I agree that you then have to test the robustness of the model, see the impact of these modifications, and probably you will end up doing new experiments to corroborate them. But here we enter the topic of standardization, which I think is one of the big topics in all areas of research. Do we have any standards in modeling? When we work with different kinds of models, are we all using the same languages, the same standard way of making later modifications to these models? We are trying to find a way to standardize this, and to keep the databases external to the model, so that you only change some parameters or add some features and the model can then simulate how those changes affect the outcome.
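One crude way to measure what the questioner asks for: take the current model, inject hypothetical interactions it does not yet contain, and quantify how far a prediction drifts. The sketch below does this on a toy linear network; the network, edge strengths, and output node are all invented.

```python
# A crude robustness probe: how much does a model's output move if
# interactions we don't yet know about are added? Here the "model" is
# a toy linear network x' = W x + u; we sprinkle in random extra edges
# and watch the steady state of an output node drift.
import numpy as np

rng = np.random.default_rng(1)
N = 8
W = -np.eye(N)                               # self-degradation everywhere
W[1, 0], W[2, 1], W[3, 2] = 0.8, 0.8, 0.8    # a known activation cascade
u = np.zeros(N); u[0] = 1.0                  # constant input into node 0

def output(Wmat):
    """Steady state solves W x + u = 0; report the end of the cascade."""
    return np.linalg.solve(Wmat, -u)[3]

baseline = output(W)
drifts = []
for _ in range(200):
    Wp = W.copy()
    i, j = rng.integers(0, N, size=2)        # an as-yet-unknown interaction
    Wp[i, j] += rng.normal(scale=0.3)        # random sign and strength
    drifts.append(abs(output(Wp) - baseline))
print(f"baseline output: {baseline:.3f}")
print(f"median drift from one unknown edge: {np.median(drifts):.3f}, "
      f"worst of 200 trials: {np.max(drifts):.3f}")
```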
And we go to the final question, because we are getting close to the end of the session.

The concept of an experiment in biology is not the same as in physics. I have done a lot of work on Drosophila, analyzing experimental data, and the Reinitz group in New York has done the following: they run the same experiment repeatedly, and the output is completely different each time. This is called phenotypic variability. We have analyzed these data. The genetic network is known very well; it is simply a small network with about 12 genes, with the proteins, the RNAs, and all types of interactions. And what we have seen is that for each experiment you do everything the same way, and the output is different. So when you use one experiment to establish one arrow in your network, you have a problem. In physics we generally say that when you prepare an experiment with exactly the same initial conditions, the same temperature, you get the same output. Here you have phenotypic variability. Then you begin to build models for this, and what happens is the following. In general you would say that if you have a system with some fixed number of parameters and you change the parameters, the output will be different. But what has been found is exactly this: in the parameter space of these networks there are many directions in which you can change the parameters and the output remains unchanged, even though those parameters are supposedly significant, very important. So I have this problem: when you do an experiment, how do you establish an exact relationship between input and output? Because you do not, in fact, know the system very well. In the engineering approach you have input, black box, output, and the engineer and the physicist think: if I change the input, I get something different at the output, and if it is not different, I throw that parameter away. But in biology it is not like that. As far as I know, the only group that keeps repeating the same experiment is the Reinitz group with Drosophila, because there everything is very well characterized: for one protein, Bicoid, there are about 1,000 measurements of its distribution, and the results are always different. So when you do an experiment, do you repeat it or not?

We keep repeating; it is indeed a real problem. We always do replicates, biological replicates and technical replicates, in order to have reproducible results. I agree that sometimes you have a lot of variability, but in the case of bacteria it is more reproducible. I think it is also linked to the complexity of the system: when you go to eukaryotic cells you have epigenetics and, moreover, a lot of modifications, post-translational modifications, that are not determined by the genome or by the condition you are applying.

One more consideration, and it is important, because you said it. When biologists say two words, I am always afraid. One is the word "optimization"; the other is the word "standard", having some kind of standard. Because in optimization, suppose you want to travel from, say, Paris to London: you cannot optimize fuel and optimize time at the same time; the two optimizations are completely different. In biology you have 10,000 genes, and the organism, depending on the environment, optimizes over many genes, say for three or five objectives at once. So you have a Pareto front; you do not have a unique solution, you have an enormous space of solutions. So when people say "optimization, optimization": optimization is well defined only for one objective. If you have two objective functions, you cannot optimize both at once. I have this problem with these approaches as well.
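The Pareto-front point can be made concrete in a few lines: sample candidate "strategies" with two competing objectives and keep the non-dominated ones. There is a whole front of optima, not one. The objectives (growth rate versus growth yield) and all numbers below are purely illustrative.

```python
# With two objectives there is no single optimum, only a Pareto front
# of non-dominated trade-offs. The random "strategies" are illustrative.
import numpy as np

rng = np.random.default_rng(2)
# Each strategy trades rate against yield along a noisy trade-off curve.
rate = rng.uniform(0, 1, 300)
yield_ = (1 - rate**2) * rng.uniform(0.5, 1.0, 300)
points = np.column_stack([rate, yield_])

def pareto_front(pts):
    """Keep points not dominated by any other (maximising both axes)."""
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts >= p, axis=1) &
                           np.any(pts > p, axis=1))
        if not dominated:
            keep.append(i)
    return pts[keep]

front = pareto_front(points)
print(f"{len(front)} of {len(points)} strategies are Pareto-optimal, e.g.:")
for r, y in front[np.argsort(front[:, 0])][:5]:
    print(f"  rate={r:.2f}  yield={y:.2f}")
```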
We have had a very lively discussion on the limits and challenges of models, and we ended up with the limits and challenges of repeating experiments in biology. I may add to this a bit later; on Monday, the question of standardization of experiments, from Paul Freeman, goes in the same direction. I would like to close the session and thank the audience for the questions and, of course, our speakers.