Today it's Michael; the floor is yours. All right, thank you. So hopefully you can see my screen. What I'd like to talk about today is a line of research that we've been pursuing in our group for the last couple of years. We're interested in mathematical modeling of biological systems, from the scale of individual proteins all the way up to populations. In all these cases, we're interested in particular in the stochastic nature of the dynamics: can we apply ideas from statistical physics, and, especially in the last couple of years, can we actually exert some measure of control over these types of stochastic processes? What I'll describe today is essentially two stories. One is about control on the larger scale, so entire populations of organisms. The other is control on the level of individual proteins, so biochemical networks. To motivate the first of these stories: we all know the phenomenon of antibiotic resistance. This is basically Darwinian evolution happening at the level of disease, where individual variants develop that are resistant to drugs; these can have terrible effects on patients, spread, and lead to so-called superbugs, the multidrug-resistant pathogens. Intriguingly, the same thing happens for cancer therapies, though it's a little less well known to the wider public. In fact, it's estimated that the majority of cancer deaths occur because certain targeted therapies eventually cease to work: you end up with variants that are resistant to that therapy. So this is a major problem in medicine, and over the last couple of years people have been trying to figure out ways of mitigating it. You can't turn it off completely.
You can't make Darwinian evolution cease, but perhaps there are ways of steering evolution in directions that make the disease more treatable. For instance, if you know there's a particular drug that works really, really well against a particular variant, and you can take a mixed population and steer it toward a population dominated by that variant, then you have a methodology of treatment. But that raises the question: how do you do the steering in the first place? We need to develop methods to fundamentally understand how to take a stochastic population, a stochastic system, control it, and arrive at a given destination in finite time. We can't do this quasi-statically, because it has to happen on the timescale of actual treatments. So suppose you have a population of genetic variants. I'm going to focus on genetic variants here, but you can imagine these ideas being equally applicable to things like epigenetic changes. How do you steer it? Well, there are several processes going on in this population. Let's focus on single-celled organisms. Cells divide at certain rates; those rates, the fitnesses, may depend on environmental conditions, so they may depend on the concentration of a given drug, and different variants may have different fitnesses under different drugs. Cells die. And when cells replicate, they can also mutate. So you have a population of these various types, and from the perspective of control, your control knobs are the fitnesses, which depend on some control parameters. We're going to focus on drug concentrations. This could be a single drug, which is the simplest case, or a cocktail of different drugs, but it could also be various other things: nutrients, environmental parameters.
Anything, basically, that can affect the fitness of your individual variants. So how do you describe such a system? Throughout this talk we're going to use ratios of these fitnesses: we choose one type as the reference type, call it the wild type, and define each variant's selection coefficient as the ratio of its fitness to the wild-type fitness, minus one. What you end up building is a mathematical model that is quite familiar, and that we've seen in many of the talks at this conference: a Fokker-Planck description of the system. If you have M types, you define an (M−1)-dimensional vector whose components are the fractions of each type in the population. Since everything has to sum to one, the last fraction is automatically known from normalization, so you don't need to track it. You end up with a multi-dimensional Fokker-Planck equation that has two major components. There's a velocity term, what we'd call the drift term in stat mech, which consists of two contributions. One comes from mutations: a mutation matrix describes the possible transitions between the different types, essentially like a transition matrix in a Markov system. The other part is the role of fitness. This is where the selection coefficients s_j come into play; each may depend on, for example, a time-varying drug concentration. What translates selection coefficients into velocity is a matrix G, given here, which is also proportional to the diffusion matrix and which captures the randomness of the evolutionary process.
If you have a finite population, if your total population size N is small, you're going to have a fairly large amount of randomness just from finite-population sampling from generation to generation. That's the underlying model. The question is: given a system where you know the types, the fitnesses, and how the fitnesses depend on drugs, you'll get some dynamics, your probability distributions will change in time, but can we now shepherd those distributions along a chosen trajectory? In particular, as I'll explain in a couple of slides, we're going to use a methodology known as counterdiabatic driving, where we try to shepherd the system along trajectories of equilibrium distributions. Imagine that associated with every single parameter value, say every drug concentration, there is a given equilibrium distribution of our genotypes. In this cartoon, for example, we have a three-type system, so we're living on a two-dimensional probability simplex, and the distributions are just distributions on that simplex. As we change the drug, going from this distribution on the right to this distribution on the left, we can characterize each by its mean genotype frequencies. So we have this trajectory in probability space, and the question is whether we can force the system to follow a particular trajectory. That trajectory might be chosen for medical purposes, but here it's a generic problem: can we force a system onto a trajectory? We're going to focus initially on trajectories that are sequences of equilibrium distributions, but I'll show later that we can generalize these methods to arbitrary distributions as well, not necessarily equilibrium ones.
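The drift part of this model is the familiar replicator-mutator dynamics, so the deterministic (infinite-N) limit can be sketched in a few lines. This is only a minimal illustration of the structure, not the full Fokker-Planck solver described in the talk, and all rates here are made-up numbers:

```python
import numpy as np

def replicator_mutator_step(x, f, M, dt):
    """One Euler step of the mean-field (infinite-N) drift.

    x : genotype frequencies (sums to 1)
    f : fitnesses, possibly drug-dependent
    M : mutation-rate matrix, M[i, j] = rate of j -> i mutations
    """
    fbar = x @ f                          # mean population fitness
    selection = x * (f - fbar)            # replicator (selection) term
    mutation = M @ x - M.sum(axis=0) * x  # mutational in-flow minus out-flow
    x_new = x + dt * (selection + mutation)
    return x_new / x_new.sum()            # re-normalize against Euler drift

# toy three-genotype example with uniform mutation rates
x = np.array([0.8, 0.15, 0.05])
f = np.array([1.0, 1.2, 0.9])
M = 1e-3 * (np.ones((3, 3)) - np.eye(3))
for _ in range(4000):
    x = replicator_mutator_step(x, f, M, dt=0.01)
# at long times the fittest genotype dominates, up to mutation-selection balance
```

The finite-N sampling noise discussed above would enter as the diffusion term on top of this drift; it's omitted here to keep the sketch short.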
What we're going to do is take advantage of the close parallels between the Fokker-Planck equation and the Schrödinger equation. There has been a series of really nice approaches developed in the context of quantum computing, in particular quantum adiabatic computation, where you're interested in controlling quantum states, and I'll describe how we can do this analogously for classical systems. In the quantum case, you often start in a simply prepared ground state and then manipulate your system, for example by changing local magnetic fields, in order to prepare some quite complicated final state: the ground state under your final configuration of the fields. We know from the quantum adiabatic theorem that if you make this change infinitesimally slowly, you end up at your desired target state. But at any finite speed, there's a chance you'll end up in an excited state, or in a superposition of excited and ground states. So there's a price to pay: you're not necessarily going to be at your desired state. In the early 2000s, researchers including Demirplak and Rice, and Berry, developed an approach called counterdiabatic driving, where you add an additional perturbation to your Hamiltonian in order to force the system to remain in the instantaneous ground state of the original Hamiltonian at all times, even when driving at finite speed. That's really nice from a computational perspective: you want to arrive at your answer in finite time. In our case, when we're manipulating these classical biological states, we also want to reach the desired end goal in finite time. So how does the analogy work? In the quantum case, the target structure was a sequence of ground states, one for each field configuration.
Here, we're going to target a sequence of equilibrium genotype distributions. And whereas in the quantum case the control protocol is often a very complicated, potentially non-local perturbation, here we hope our control protocol is actually implementable in the system: it's just going to be a modified drug concentration over time that allows us to hit this target trajectory. The general prescription for doing this is what we showed in this first paper. Now, let me give you a concrete example. This is a yeast system that was developed to study anti-malarial drug resistance. What they did was put a malarial DHFR gene into the yeast, and there are 16 possible variants of this gene, which you can enumerate as bit strings. There are four positions where the gene can mutate, so the wild-type gene, with no mutations, is just 0000, and if there's a mutation at a given position you put a one there. So you have 16 different mutants, from 0000 all the way through 1111. Under a particular anti-malarial drug called pyrimethamine, each of these mutants has a certain growth rate, which you can measure experimentally. Each of these curves shows, for one of the mutants, the growth rate — or, in our model terms, the fitness — as a function of drug concentration. And you can see the curves are quite complicated: variants that are fit under low drug are not necessarily fit under high drug. If you fix the concentration, your system goes into an equilibrium evolutionary state. So in this tesseract that I'm showing here, the circles are proportional to the population of each genotype, and this would be the equilibrium state under a very low drug concentration.
And you can imagine what happens if I start at a low drug concentration and suddenly ramp it up. If you do this numerically, choosing a drug dosage that goes from low to some high value, what you see in the system, shown as the solid curves here, is a change in the various genotype frequencies as a function of time. The dashed lines show the corresponding equilibrium frequencies at each instant. As is typical when you drive a system out of equilibrium, it lags behind the instantaneous equilibrium frequencies in the dashed lines. If I wait long enough it eventually reaches those instantaneous equilibrium values, but at intermediate times there's a discrepancy between the two. Here, at small times, you can see this transition happening, where the differences in color represent the differences between the actual and the equilibrium frequencies. So there's a period where things are not precisely following the track we wanted. The question is: can we get the system to follow that track, and can we do it for all 15 independent genotype frequencies? It turns out there's a closed-form solution, which we were quite surprised by, valid in the large-population, fast-mutation limit. You can express the perturbed selection coefficients in terms of your original ones plus a function of time, where these x-bars are the mean genotype frequencies of the equilibrium distributions under your original protocol. The question, in terms of implementation, is then finding the drug protocol that corresponds to these perturbed selection coefficients. And in this system, and in a couple of other similar systems we tried, it turns out you can do this quite well.
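To give a flavor of what such a closed form looks like, here is a deliberately simplified, mutation-free reading, which is not the published result but only illustrates its structure. For pure replicator dynamics with fitnesses f_j = f_0(1 + s_j), the frequency ratios obey d/dt ln(x_j/x_0) = f_0 s_j, so pinning the population to a target path x̄(t) requires selection coefficients s̃_j(t) = (1/f_0) d/dt ln(x̄_j/x̄_0); the actual closed form from the talk additionally carries the mutation terms. The function below is my own sketch under that assumption:

```python
import numpy as np

def cd_selection_coeffs(xbar, f0, dt):
    """Selection coefficients that pin mutation-free replicator dynamics
    to a prescribed frequency path xbar[t, j] (column 0 = wild type).

    Illustrative only: the published closed-form result also includes
    mutation contributions, dropped here for brevity.
    """
    log_ratio = np.log(xbar / xbar[:, :1])     # ln(x_j / x_0) at each time
    dlog = np.gradient(log_ratio, dt, axis=0)  # time derivative along path
    return dlog / f0                           # s_j(t); s_0 is 0 by definition

# target: smoothly hand the population from genotype 0 to genotype 1
t = np.linspace(0.0, 10.0, 501)
x1 = 0.05 + 0.9 / (1.0 + np.exp(-(t - 5.0)))   # sigmoidal sweep of type 1
xbar = np.column_stack([1.0 - x1, x1])
s = cd_selection_coeffs(xbar, f0=1.0, dt=t[1] - t[0])
```

Integrating the two-type replicator equation forward with these coefficients reproduces the prescribed sigmoidal path, which is the "pinning to the target trajectory" idea in miniature.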
For example, the drug protocol for this particular system now looks like this red curve here, with a spike in the middle. You cut it off at a particular maximum dosage, because there are physiological limits to the dosages you can give. These are all numerical simulations, but we want to keep them as realistic as possible. And you can see that with this modified drug protocol, you follow your instantaneous equilibrium curves quite well. I've shown only the top four genotypes, but this actually holds for all 15. So you can implement these things, at least at a numerical level, in these types of evolutionary systems. What's also really nice, since everything so far is theoretical, is that this particular yeast system has actually been developed in an experimental context. We have a collaborator, Kerry Geiler-Samerotte at Arizona State, who is working to directly implement these same kinds of protocols, so we can see whether the numerically predicted modified dosages actually work as well in the experimental context. That's ongoing work as part of this collaboration. Everything I've described so far has focused on evolution, but these ideas are fairly general: any stochastic biological system is potentially amenable to these types of counterdiabatic control ideas. So the second story I'm going to tell you focuses on the other end of the scale: biochemical reaction networks. Here, rather than fitnesses, what we're changing are concentrations of chemical species, and those concentrations influence the transition rates in some kind of reaction network. The particular system, which I'll explain a little later, is essentially a system of chaperones binding to proteins.
But in general, our control knobs are now these external chemical concentrations, and our control target is: can we shepherd the system through a sequence of probability distributions over system states? Now, here we're going to broaden our scope a little. In the evolutionary case we focused primarily on shepherding through equilibrium distributions; what if we now allow non-equilibrium trajectories as well? And what if, in some cases, we're not interested in controlling every single state of the system, only a particular subset? We can then distinguish local control, where we're only controlling some of the states, from global control. And we can also ask whether these trajectories necessarily have to be instantaneous stationary distributions. It turns out that for this biochemical network system, in the discrete Markov approach, you can find solutions to all of these individual problems. But let's start with the traditional problem, which is global control where your targets are instantaneous stationary distributions. I'm not going to go into the complete technical details here, but effectively you start with a Markov model for your system: you have transition matrices describing your rates, which depend on some external parameters λ, and you have an underlying master equation that describes the system. Then you have a target probability trajectory, a vector ρ(t): this is how you want the system to behave over a particular time duration. For the traditional counterdiabatic problem, this target probability trajectory is a right eigenvector of your transition matrix at each instant, because it's a stationary trajectory.
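That stationarity condition, the target being the right (null) eigenvector of the rate matrix, can be computed directly. A minimal sketch with a made-up three-state rate matrix, using the convention that columns sum to zero:

```python
import numpy as np

def stationary_distribution(W):
    """Stationary distribution of a continuous-time rate matrix W, where
    W[i, j] (i != j) is the j -> i rate and each column sums to zero.
    Found as the right eigenvector with eigenvalue (closest to) zero."""
    vals, vecs = np.linalg.eig(W)
    k = np.argmin(np.abs(vals))       # pick the null eigenvalue
    p = np.real(vecs[:, k])
    return p / p.sum()                # normalize (also fixes overall sign)

# hypothetical 3-state network; the rates are made-up numbers
W = np.array([[-3.0,  1.0,  0.5],
              [ 2.0, -1.5,  1.0],
              [ 1.0,  0.5, -1.5]])
p_ss = stationary_distribution(W)     # satisfies W @ p_ss = 0
```

When the rates depend on time-varying parameters λ(t), evaluating this at each instant gives exactly the instantaneous stationary trajectory used as the control target above.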
What you're essentially looking for is a modified transition matrix for which this ρ is now a solution of the master equation. In effect, you're inverting the master equation: normally you start with Ω and try to find p; here we have ρ and we're trying to find an Ω-tilde that does this driving. It turns out that using some of the standard tools that have been applied to Markovian systems, graph theory ideas, you can come up with a general graphical algorithm to completely enumerate all possible solutions to this inverse problem. And what's really interesting, at least to us — maybe in hindsight it's obvious, but we were a little surprised when we first saw it — is that overall you can have infinitely many such solutions, many of them physically realizable; in particular, for graphs with loops, you generically have infinitely many solutions. Let me give you an example in a very simple, well-characterized system. This is a three-state biochemical network of a repressor and a corepressor binding to a gene and essentially turning it off. It comes from an experimental study where we know all the individual transition rates and how they depend on concentrations. There are three concentrations in the system: that of the repressor, the corepressor, and the complex they form when they bind together. You can set up a basic Markovian description where, say, you change these concentrations over time; you have some associated equilibrium probability trajectory that you now want as the target of the system. If you just directly apply the original protocol and directly solve the master equation, you get the characteristic lag: you don't follow the equilibrium trajectory. But now you can add a perturbation to the concentrations to put it exactly on target.
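The loop degeneracy can be seen in miniature on a three-state cycle: the edge currents that realize a desired probability velocity dρ/dt are fixed only up to a uniform circulating current, so the inverse problem already has a one-parameter family of solutions before currents are even converted back into rates. The parameterization below is my own toy sketch, not the graphical algorithm from the paper:

```python
import numpy as np

def cycle_currents(drho, c=0.0):
    """Edge currents (J12, J23, J31) on the loop 1->2->3->1 that realize a
    desired probability velocity drho (components must sum to zero).

    Any value of the circulating current c works, so the inverse problem
    is underdetermined on a loop: a one-parameter family of solutions.
    """
    assert abs(drho.sum()) < 1e-12
    J31 = c
    J12 = c - drho[0]                # d(rho_1)/dt = J31 - J12
    J23 = c - drho[0] - drho[1]      # d(rho_2)/dt = J12 - J23
    return np.array([J12, J23, J31])

drho = np.array([-0.2, 0.15, 0.05])  # push probability out of state 1
for c in (0.0, 2.5, -1.3):           # three members of the infinite family
    J12, J23, J31 = cycle_currents(drho, c)
    # every choice of c reproduces the same d(rho)/dt
    assert np.allclose([J31 - J12, J12 - J23, J23 - J31], drho)
```

Turning a chosen current set into perturbed rates (e.g. J12 = ω̃21 ρ1 − ω̃12 ρ2) introduces still more freedom, which is where the physically realizable concentration profiles come from.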
This is one solution, and this, for example, is another: both of these concentration profiles give you exactly the same probability behavior, and in fact there's an infinite family of such profiles that can be enumerated using our approach. You can then begin to ask interesting questions, like: among all these concentration profiles that hit the same target sequence, which one minimizes entropy production at every single time step? This is entropy production as a function of time; you can find the trajectory that optimizes it, and so this particular set of concentration profiles, for example, is optimal in that sense. You can imagine other types of objective functions to optimize over this entire family of possible control solutions, and we're actively looking at what the characteristics of these optimal solutions are. But so far, everything here involves equilibrium targets. What if we wanted to make this completely general? The most general version you can imagine would be to say: I don't necessarily want things to be in instantaneous equilibrium at all time steps, and I'm only interested in a subset of the individual states. Say there are N_T states I want to control; I specify the target functions ρ_1 through ρ_{N_T} for those states, and the other probabilities can be arbitrary — we'll call those π. And let's say that in this problem only a subset of the edges in the network can actually be controlled, because maybe I only have a finite number of chemical species that influence those edges.
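The quantity being minimized here is the standard master-equation entropy production: a sum over edges of the net probability flux times the log-ratio of forward to backward fluxes. A minimal sketch, with made-up rates, showing that it vanishes at detailed balance and turns on when a current is driven:

```python
import numpy as np

def entropy_production_rate(W, p):
    """Instantaneous entropy production (in units of k_B) for a master
    equation with rate matrix W, where W[i, j] is the j -> i rate, and
    state probabilities p. Diagonal entries of W are ignored."""
    sigma = 0.0
    n = len(p)
    for i in range(n):
        for j in range(i + 1, n):
            if W[i, j] > 0 and W[j, i] > 0:
                fwd, rev = W[i, j] * p[j], W[j, i] * p[i]
                sigma += (fwd - rev) * np.log(fwd / rev)
    return sigma

# detailed balance (symmetric rates, uniform p): zero entropy production
p_eq = np.ones(3) / 3.0
W_sym = np.array([[0.0, 1.0, 2.0],
                  [1.0, 0.0, 0.5],
                  [2.0, 0.5, 0.0]])
sigma_eq = entropy_production_rate(W_sym, p_eq)

# breaking the symmetry drives a net flux and costs entropy
W_drv = W_sym.copy()
W_drv[1, 0] = 4.0
sigma_drv = entropy_production_rate(W_drv, p_eq)
```

Evaluating this along each member of the family of control solutions, and picking the profile with the smallest time-integrated value, is the kind of optimization described above.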
So if we look at the network — in this case it's a probability current network — the dark red edges are the ones that can be controlled, and the light red ones are those you have no influence over. You have this finite subset of controllable edges, and the dark blue states are the ones whose probabilities you want to follow particular target functions. For a given problem like this, it turns out there is a solution; it's somewhat more complicated than the standard counterdiabatic solution, but what's nice is that there's a fairly simple result that tells you whether a solution is possible at all. We call this the local control criterion, and to evaluate it you construct target subgraphs: for every state you want to control, you look at the subgraph of other states that are connected to it via controllable edges, and you enumerate those subgraphs for your given network. The criterion states that local control is possible if every such target subgraph contains at least one non-target state. One consequence of this is that if the number of edges available for control is smaller than the number of states you're trying to control, control is impossible. This is a nice criterion because, even without doing any calculations, you can immediately read off whether local control is possible. In this other example, for instance, it would not be possible, because the target subgraph of this target state has no non-target states in it.
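The criterion as stated can be checked mechanically. Below is one plausible reading, in which a state's target subgraph is taken to be everything reachable from it through controllable edges; the function and variable names are mine, not from the paper:

```python
def local_control_possible(edges, controllable, targets):
    """Check the local-control criterion: for every target state, build the
    subgraph of states reachable from it through controllable edges only;
    control is possible iff each such subgraph contains a non-target state.

    edges        : iterable of (u, v) undirected network edges
    controllable : subset of edges external parameters can modify
    targets      : set of states whose probabilities we want to prescribe
    """
    adj = {}
    for u, v in edges:
        if (u, v) in controllable or (v, u) in controllable:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    for t in targets:
        seen, stack = {t}, [t]        # flood-fill along controllable edges
        while stack:
            for nb in adj.get(stack.pop(), ()):
                if nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        if not (seen - set(targets)): # no non-target state in the subgraph
            return False
    return True

# four-state loop as a toy network
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

# one controllable edge suffices for one target state...
assert local_control_possible(edges, {(0, 1)}, {0})
# ...but cannot pin down two target states at once
assert not local_control_possible(edges, {(0, 1)}, {0, 1})
```

The second assertion mirrors the consequence noted above: fewer controllable edges than target states makes local control impossible.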
Okay, so let me try to illustrate this with a particular biological example, to show that in some sense it can help us rationalize some of the things nature may be doing: not just ways in which experimentalists can go and control biological systems, but ways in which nature regulates itself. This is the example of chaperone-assisted protein folding. Proteins tend to misfold with some small probability, and this can be a major problem for cells if the misfolded proteins end up aggregating. This tendency is exacerbated at higher and higher temperatures, so the cell needs a way to fix the problem. One way of doing this is through chaperone proteins. What these proteins do is bind to misfolded proteins and catalyze their unfolding, perhaps into an intermediate state, perhaps into the native state, but essentially they give the protein another chance to fold correctly. What's interesting is that a natural non-equilibrium stationary state develops here, because this catalysis typically involves some energy usage, for example ATP hydrolysis — oftentimes several ATP to complete a single cycle. So you can study the non-equilibrium stationary distributions that arise from these types of chaperone networks. The other interesting aspect is that under optimal growth conditions, there are typically just enough chaperones to deal with the normal level of misfolded proteins. It's like building a hospital with just enough capacity for the typical patient load.
But if the cell suddenly enters a new condition, for example a high-temperature environment, a heat shock, you have many more misfolded proteins, many more wounded patients, and you need to very quickly, dynamically build up your hospital: up-regulate your concentration of chaperones. And that's what happens experimentally. These are examples from two different organisms, yeast and E. coli, in the minutes after the start of the heat shock. Here, for example, yeast is subjected to a 39-degree heat shock. As a proxy for the number of chaperones, we have the expression of genes associated with chaperone proteins, and you can see in some cases almost a two-fold increase in the expression of those chaperone genes after the heat shock. You also see the typical behavior of a very rapid increase at the beginning that then levels off, which we'll return to a little later. For E. coli you have a similar story: this black curve here is the expression of a chaperone gene. Additionally, E. coli also shows a transient peak in ATP concentration, which again I'll return to later; it's not a universal property of chaperone systems, but E. coli does show this transient increase in ATP. So can we take these experimental observations and rationalize them in the language of this local control theory? If we build a simple model of what's going on — this is a two-loop model for these chaperones — then in the simplest case there is only a single edge in this network that you can actually control: the edge going from the misfolded state to the chaperone-bound state, because that rate depends on the chaperone concentration. So with such an edge, what can you do with this type of system?
You write it as a four-state Markov model, and it turns out that with this one controllable edge, you can take the probability of a misfolded protein as your target state. Say you want to reduce that probability very quickly: at the beginning of heat shock it may be very large, and you want to drive it down to something very small along a sharp, for example sigmoidal, downturn. With that single controllable edge you can actually do this; you can fulfill local control of that target state. But you can't precisely control anything else: if I wanted to control the native-state probability with that single edge, I couldn't. So with these types of control knobs, you can get rid of misfolded proteins, but you can't necessarily make the native-state probability rise arbitrarily quickly. What about the E. coli case? Say you have control of both the chaperone concentration, which you can up-regulate, and the ATP concentration. That actually gives you two controllable edges, and now you can have two target states: you can control both the misfolded probability and the probability of the misfolded protein being bound to the chaperone. Why would you want to do that? Well, in the previous case, because you didn't control this second probability, you get a transient accumulation of chaperones on the misfolded proteins. But say you want to reuse the chaperones as quickly as possible, so you want to get rid of that transient accumulation. With the second control knob, you can actually get rid of that hump: you can control both states one and two directly, which basically allows you to recycle these chaperones as quickly as possible in this process.
Now, this is not a direct quantitative one-to-one mapping to these networks, because the underlying networks are a bit more complicated than this simple example, but it does illustrate that some of these ideas could, at least in a qualitative way, help us understand how nature actually regulates itself. So again, like I said, these ideas are fairly general, and we're working on ways to extend them to things like developmental biology and ecology. But where I wanted to leave off is the challenge, going forward, of translating these control ideas from the quantum context into the classical one. Everything I've described so far really depends on detailed information: about fitnesses in the evolutionary case, or about the underlying Markovian networks in the biochemical case. What can we do in the absence of detailed fitness information? Oftentimes it's very laborious to obtain: the experiments to set up and measure all of these individual growth-rate-versus-concentration curves are a fairly time-consuming series of measurements, and that's with just four possible mutation sites. It becomes combinatorially really hard to do this for much more complicated systems. One potential solution would be, instead of trying to control the entire distribution of genotypes or system states, to focus on the means of those distributions. In the quantum context, this is something like macrostate versus microstate control: you have less control over the system, but it is perhaps easier to implement. The other thing we're looking at is using techniques from machine learning, for example reinforcement learning, to dynamically learn underlying fitness models that will enable this type of control when we have imperfect information. So we might start with a system and give it small perturbations, small changes in drug,
and from the response of the system to those drug changes, learn its fitness landscape, and then try to control the system from that point. With that, I thank my collaborators on this project, and I'm happy to take questions. Okay, thank you very, very much, Michael. If there's anybody else with questions, could you please raise your hand. In the meantime, we have two in the chat. The very first one was from Megan Engel. Megan, if you're there, do you want to unmute and videoize yourself, whatever the word would be, and ask your question? Sure, yeah, it was just a little thing. You may have mentioned it. Thanks for the talk, by the way, this was so cool. So you might have mentioned it, but what was the motivation, medically or biologically, for trying to maintain those yeast genotypes — I think it was yeast, the one with the 16 different mutants — in their equilibrium distribution? Because it seemed like at the very end they got there anyway, whether or not they lagged. So, like, why? Yeah, why? So there are two motivations. One is, yeah, you're right: if the target is ultimately an equilibrium state, then it's a question of getting to that target quicker. That's the first motivation: equilibrating the system much more quickly, so you reach the final distribution you're interested in sooner. But why maintain equilibrium in the middle? Maybe there are easier solutions where you go off the equilibrium manifold at intermediate times. That's actually a totally valid question. One of the advantages of mimicking equilibrium is that if there's any interruption in your protocol — treatment is interrupted in some way — and you are at an intermediate point in an equilibrium distribution, you're guaranteed to stay there.
So in some sense it's robust to interruptions: you can return to the system later on and continue manipulating it. But the general question is something we're interested in. There are whole sets of techniques in quantum control, fast-forward techniques, where you don't necessarily maintain equilibrium in the middle, and those in principle should also be doable for the evolutionary case. We can definitely do them with our approach for the biochemical networks; there we don't necessarily have to stay in equilibrium at all.

So, John, I think you've got a related question?

Yeah, I was gonna say, you've at least partially or maybe mostly answered the question, but it was that you wouldn't necessarily have to specify in advance what the trajectory is in between. Like, if you wanted to get from some initial state to some final state, you could say, well, I just want to get there in some time. Maybe I put some constraints on where the trajectories are allowed to go in general, but I don't try to specify them, and that can give you more freedom. And, you know, you were talking about optimal protocols, so you could presumably do even better than your optimum, I guess.

Yeah, and that's one of the interesting questions we have about these systems: the relationship of these types of approaches to more traditional control theory approaches. Generically, how far are these counterdiabatic solutions from optimal solutions for a particular objective function? That's something we're also looking into in terms of follow-up papers. These are all interesting questions. At some level, we started with the lowest-hanging fruit, because when the trajectory is specified and stays in this instantaneous equilibrium, you can actually get really nice closed-form solutions for the control protocol, even in these very high-dimensional cases.
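To make the closed-form point concrete, here is a minimal sketch, purely illustrative and not the speaker's actual model, for the simplest possible case: a two-genotype mutation-selection system. If the target frequency trajectory p*(t) is specified, the selection coefficient (the control knob, set via drug concentration) that keeps the population exactly on that trajectory can be solved for in closed form. All numbers (mutation rate, duration, the sigmoidal target) are made up for illustration.

```python
import numpy as np

def cd_selection(p_star, dp_star_dt, mu):
    """Selection coefficient s(t) that makes the two-genotype
    mutation-selection dynamics
        dp/dt = s * p * (1 - p) + mu * (1 - 2p)
    track a prescribed target frequency p*(t) exactly; solving the
    ODE for s gives the control in closed form."""
    return (dp_star_dt - mu * (1.0 - 2.0 * p_star)) / (p_star * (1.0 - p_star))

# Hypothetical target: sweep the resistant-genotype frequency from
# about 10% to about 90% over total time T along a smooth sigmoid.
mu, T, n = 1e-3, 10.0, 20000
t = np.linspace(0.0, T, n)
p_star = 0.1 + 0.8 / (1.0 + np.exp(-2.0 * (t - T / 2)))
dp_star = np.gradient(p_star, t)

# Integrate the controlled dynamics with Euler steps and check tracking.
p, dt = p_star[0], t[1] - t[0]
for i in range(n - 1):
    s = cd_selection(p_star[i], dp_star[i], mu)
    p += dt * (s * p * (1.0 - p) + mu * (1.0 - 2.0 * p))

print(f"final tracking error: {abs(p - p_star[-1]):.2e}")
```

The same inversion works whenever the controllable rates enter the dynamics linearly; in higher dimensions it becomes a linear solve at each instant rather than a scalar division.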
And so that kind of excited us as a starting point.

Yeah, thanks. So we have three raised hands, and the sequence, I believe, started with you first, then Jonathan, and then Gabriel.

Okay, thanks for the talk, Mike. Let's see if my camera operates. So just a small question: is there any particular cost to trying to control things too fast? Do you have some trade-offs? Maybe you have to dissipate more if you try to change things fast, or if you go beyond some internal time scales you can't really do it at all.

Yes, there are both trade-offs and limits. We've seen these in individual systems. If you look at individual biochemical networks, generically speaking there's going to be a trade-off in, for example, entropy production: the faster you try to do things, the more entropy production is involved, and the further you'll be forced away from detailed balance. We don't yet have a completely generic formulation of this. What we would love to do is something like a thermodynamic speed limit formulation for these evolutionary systems. There are already several approaches that people have used for classical stochastic systems, and we think many of those can translate to the evolutionary case, but we have yet to fully formulate that. But you're totally right, there will be trade-offs. For example, one of the most interesting ones medically would be drug doses. Oftentimes you look at what they call the area under the curve: you're basically integrating the drug concentration over the duration of the protocol. If you want to go faster, presumably you're going to need larger overall drug dosages. And there's going to be a fundamental limit there, because you're not allowed to, for example, give delta-function drug dosages to patients; that's generally frowned upon.
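As a toy illustration of this dose trade-off (every parameter here is hypothetical, and the saturating dose-response is my own simplifying assumption, not a model from the talk): if the selection pressure saturates with drug concentration, then achieving the same integrated selection in a shorter time requires disproportionately more total drug, i.e. a larger area under the curve, and below a minimum duration the target is unreachable at any finite dose.

```python
import numpy as np

# Toy dose-response: selection s against the sensitive type saturates
# with concentration c as s(c) = s_max * c / (c + K).
# s_max, K, and S_required are illustrative numbers, not measured values.
s_max, K = 1.0, 0.5
S_required = 4.0   # integrated selection needed to reach the target

def auc_constant_dose(T):
    """Area under the concentration curve (total dose) for a
    constant-dose protocol of duration T that delivers the same
    integrated selection S_required."""
    s_avg = S_required / T           # average selection needed
    if s_avg >= s_max:
        return np.inf                # unreachable: s(c) saturates below s_avg
    c = K * s_avg / (s_max - s_avg)  # invert s(c) = s_max * c / (c + K)
    return c * T

for T in (40.0, 10.0, 6.0, 4.5):
    print(f"T = {T:5.1f}  total dose (AUC) = {auc_constant_dose(T):7.2f}")
```

Shorter protocols push the required concentration into the saturated regime of the dose-response curve, so the total dose blows up even though the biological effect delivered is the same.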
And there will also be limits, as you said, fundamentally, because certain rates may not be controllable and may basically be the rate-limiting steps in your system. All of these things, in terms of a general formulation, I think are really interesting questions.

Thank you. Jonathan, you're up now.

Hi, Michael, thank you so much for the great talk. Near the end, you were saying one of the big drawbacks is the need to know a lot of the actual rates of the underlying model you want to deal with, and sometimes that's just very hard to get. Are there any thoughts on doing some sort of entropic inference, like maximum entropy methods, where you have only some information about the model, and you use maximum entropy to fill in the information you don't have, to the best of your ability?

Yeah, we haven't specifically looked at that approach, but it certainly sounds like a possible solution. We haven't figured out, for these specific problems, which exact approach is best; there are so many approaches people have developed, from purely black-box machine learning through something a little more informed, like these maximum entropy approaches. We haven't yet figured out which of these is the most practically useful or successful, but it's something we're working on.

Thank you very much. Okay, and then to finish off today's questions, you're up, Gabriel.

Yes, thank you for this nice talk. You mentioned at the end that you're planning to use machine learning techniques for learning. Could you give more details about that?

Yeah, so you can imagine this almost like playing a game, right?
So let's say (and at this level this is all of course still pure theory) let's imagine some blue-sky future where you are trying to individually tailor a therapy to a patient. A patient walks in your door, and even if you know the average behavior of a population with respect to that particular disease, each individual patient has his or her own environment and will have his or her own underlying fitness landscape. Now, you're not going to go and measure the growth rates of various disease variants in the patient's environment; that's impossible, right? All you can do is essentially give the patient a pill and then, maybe through biopsies or similar measurements over time, collect individual data points measuring the response of the patient to that pill. So a biopsy may give you essentially a distribution of possible types. You give the pill, let's say a small drug concentration, and a week or two later you again measure the distribution of types and see how it's changed. From that single time step, you try to infer the best model. And that might be, as in the previous question, through some kind of parsimony in terms of entropy maximization. Or you can imagine just handing this to a machine learning algorithm, like a reinforcement learning algorithm, the way machines can learn how to play Go or chess without even knowing the rules beforehand. Essentially, the machine is learning the rules, which here are the fitness landscape, as it goes along. So from a single time step, maybe it builds a very bad model, and then you perturb the system again: you increase the dosage, or you switch drugs. Now you have two measured time points, and so gradually you build a better and better model; the machine is able to infer more and more about the landscape.
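A minimal sketch of what one such inference step could look like, under the strong simplifying assumption of pure exponential growth with no mutation (the four variants and all fitness values below are hypothetical): the log-frequency changes between two measured distributions determine the fitness differences directly, while the overall fitness offset stays unidentifiable.

```python
import numpy as np

def propagate(p0, f, t):
    """Genotype frequencies after time t under pure exponential growth
    (no mutation): p_i(t) is proportional to p_i(0) * exp(f_i * t)."""
    w = p0 * np.exp(f * t)
    return w / w.sum()

# "Unknown" fitnesses of 4 hypothetical variants under the current dose.
f_true = np.array([0.0, 0.3, -0.2, 0.5])
p0 = np.array([0.4, 0.3, 0.2, 0.1])   # biopsy 1: initial distribution
dt = 2.0
p1 = propagate(p0, f_true, dt)        # biopsy 2: one "time step" later

# One inference step: log-frequency changes give fitness *differences*;
# the common offset cancels, so pin variant 0 at fitness 0.
f_hat = np.log(p1 / p0) / dt
f_hat -= f_hat[0]

print(np.round(f_hat, 6))  # recovers f_true up to the common offset
```

With mutation, noise, and dose-dependent fitnesses, this single division becomes a genuine learning problem over repeated perturbations, which is where the reinforcement learning framing comes in.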
And then hopefully that happens fast enough to get a good control algorithm going and reach the target before bad things happen, for example, the patient dies. So that would be, in some sense, the rough picture of what this would look like. But it's an interesting problem, because it's not like teaching a self-driving car, where you're allowed to run a thousand different trials and sometimes the car goes off the cliff, right? This is one-shot learning, where you have to succeed for this particular patient. You don't have an ensemble of a thousand patients to work with where you can kill off half of them. So it's an interesting kind of learning problem.

Okay, well, thank you, everybody. If more questions about any of today's talks occur to anybody, you can get the speakers' email addresses. Agreeing with Tsar, and seconding his sentiment. And I'll see everybody tomorrow then, I guess. Bye.