Nice to meet you, Ken. Nice to meet you. Okay, am I meeting more people? All right, I think we can start, no? Yeah, go on. Okay, so welcome everybody to this morning session of the school. Today we have two excellent speakers on two very different topics. As I said on the first day, I highly recommend that you ask questions during the talk, especially in this situation where we cannot all be at ICTP or SISSA for this occasion, and I especially encourage students to ask questions. We will start now with the first talk, by Ken Sekimoto, and I leave the stage to the speaker. Okay, so can I start? Sure. Okay, so hello everybody, and thank you for attending my talk. I especially thank Edgar Roldán, who gave a kind of brief introduction to this subject during his talk on the first day. And then I start. Okay, so I will talk about progressive quenching and the hidden martingale. This is a fruit mainly of the collaboration with Charles Moslonka, who is here in the photo; the other two people were also students trained in an earlier project. Okay, the basic notion of progressive quenching is the following: we suddenly fix some part of the degrees of freedom of the system, and by this fixing we update either the boundary condition or the external field acting on the part of the system that remains active. The main interest is the statistical characteristics of the state evolution. In the context of stochastic thermodynamics, this is a dynamic partition between the system and the external system, because the border between them is moving. In the context of information thermodynamics, this is a partial measurement of the system state, because it fixes part of the state, and then a feedback field is applied to the system through the boundary condition or the external field. Several years ago, an experiment by Damien Fandenbroek and colleagues was done on the zone cooling of a polymer melt.
So from the hot melt of the polymer, they extrude a strand of polymer, and after the extrusion it is quenched. They study the statistics of the surface profile after the quenching and compare it with the equilibrium profile. So for me, this is a kind of progressive quenching. Now, our theoretical model is much simpler: basically I use a globally coupled spin model. We put each spin on a node of the complete network, so each spin is connected to all the other N0 - 1 spins, and they are coupled with a ferromagnetic coupling. Here the denominator N0 assures the extensivity of the system. The update rule is that suddenly I fix one spin, and then we re-equilibrate the unfixed spins. Once a spin is fixed, it remains fixed. So by iterating this fixation and re-equilibration step by step, we finally reach the state where all the spins have been fixed. Since the equilibration time of a finite system is finite, this process corresponds roughly to a quasi-static but inhomogeneous quench. Our model may be caricatured as the evolution of public opinion before a referendum or an election. The voters are connected through social networks. Some of the voters in the community make up their minds very early, and they diffuse their opinion; those who have not yet made up their minds are under the influence of those opinions, and they also communicate with each other. Of course, our model is very far from reality, but the common interest is how important the initial stages are. Our protocol of fixation can be represented as a Markov transition network. Here t, the number of frozen spins, plays the role of time, and the observable is M, the frozen total magnetization, which is the sum of the frozen spins.
So in the plane of t and M, the nodes, which correspond to the states of the whole system, are joined by two directed arrows, corresponding to fixing either a spin up or a spin down at the next step. Because we re-equilibrate before the fixation, the probability of those events is given by this formula, where small m is the equilibrium mean spin of the unfrozen part. Using this probability, the expectation of the next fixed spin is equal to the mean equilibrium spin at the present time. So in this model we can find two stochastic processes as functions of time, where time is the number of frozen spins. One is the apparent stochastic process, capital M, the frozen magnetization, or sum of the frozen spins. The other stochastic process is the equilibrium mean unfrozen spin. If the former, capital M, is the apparent process, the second is a hidden process: if we make an analogy with stochastic differential equations for the apparent process capital M, the hidden process is literally hidden in the drift term. On this Markov transition network, we first show two extreme cases. Hereafter we absorb the temperature into the coupling constant to make it dimensionless. In the plane of the number of fixed spins versus the total fixed magnetization, if the coupling between spins is zero, the trajectory of the state evolution, or Markov process, is purely an unbiased random walk. On the other hand, if the coupling is strong enough, typically larger than the critical value of the mean-field ferromagnetic transition, which is one, the trajectory is almost determined by the initial, or first, step, and then it moves almost at 45 degrees, along the diagonal. This is the result for 256 spins. Because these two cases are trivial, we are more interested in the critical case, where the coupling constant is critical.
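To make the protocol concrete, here is a minimal sketch (Python; parameter names are my own) of one trajectory of progressive quenching on the globally coupled model. Because the energy of the complete network depends only on the total magnetization, the equilibrium mean of an unfrozen spin can be computed by exact enumeration, and the next spin is fixed up with probability (1 + m)/2, as in the formula above.

```python
import math
import random

def mean_unfrozen_spin(M, n, N0, J):
    """Equilibrium mean of one unfrozen spin, given frozen magnetization M.

    On the complete network the energy depends only on the total
    magnetization (E = -J/(2*N0) * M_total**2, temperature absorbed into J),
    so the n unfrozen spins can be enumerated by their number k of up spins.
    """
    num = den = 0.0
    for k in range(n + 1):
        mu = M + 2 * k - n                        # total magnetization
        w = math.comb(n, k) * math.exp(J * mu * mu / (2 * N0))
        num += w * (2 * k - n) / n                # mean unfrozen spin in this sector
        den += w
    return num / den

def progressive_quench(N0=16, J=1.0, seed=0):
    """One trajectory: fix one spin per step, re-equilibrating the rest."""
    rng = random.Random(seed)
    M = 0                                         # frozen magnetization
    for t in range(N0):                           # t = number of frozen spins
        m = mean_unfrozen_spin(M, N0 - t, N0, J)
        M += 1 if rng.random() < (1 + m) / 2 else -1
    return M
```

At J = 0 this reduces to an unbiased random walk, while for J well above the critical value the first fixed spin drags all later ones along the diagonal, reproducing the two extreme cases described above.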
For a finite system of N0 spins, the critical coupling is determined by extrapolation of the Curie law, and it empirically follows this formula. With this critical coupling constant, the evolution of the probability distribution of the frozen magnetization with time is somehow strange. We start from a unimodal distribution for a small number of fixed spins, but it becomes flat and then, as the number of frozen spins increases (I change the scale of the horizontal axis here), it becomes bimodal, or double-peaked. We may wonder if this is some sign of a first-order transition or of spontaneous symmetry breaking, but in fact it cannot be, because the effective coupling during this process among the spins yet to be fixed, not the frozen ones, is less than the critical value; in other words, the temperature is hotter than the critical temperature. So we cannot expect any first-order transition or two-phase domains. In fact, the origin of the bimodal probability distribution function is a persistent memory of the early stage. If I start by definitely fixing the first spin up, then the evolution of the probability distribution shows an asymmetry all the way to the final stage. If I rescale the horizontal axis by the number of spins fixed at that moment, we find that the distribution becomes sharper and sharper, with asymmetry. This means that the persistent memory of the early stages is the real origin of the bimodal peaks. Knowing that there is a persistent memory in this system, we try to characterize it with a perturbation test. The evolution of the probability distribution is described by the transfer matrix of the Markov process: starting from an initial probability distribution function p0, we multiply by the transfer matrix step by step. And if I want to apply a perturbation just upon fixing the t0-th spin, I modify that one transfer matrix by adding a field.
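Since the state is characterized by (t, M) alone, the distribution P_t(M) can be evolved exactly with the transfer matrix just described. Below is a sketch in Python; the exp(+h)/exp(-h) tilt of the fixing probabilities is my assumed form for the perturbing field, not necessarily the one used in the talk.

```python
import math

def mean_unfrozen_spin(M, n, N0, J):
    # Equilibrium mean of one unfrozen spin given frozen magnetization M
    # (complete-network Ising model, temperature absorbed into J).
    num = den = 0.0
    for k in range(n + 1):
        mu = M + 2 * k - n
        w = math.comb(n, k) * math.exp(J * mu * mu / (2 * N0))
        num += w * (2 * k - n) / n
        den += w
    return num / den

def evolve(N0, J, t0=None, h=0.0):
    """Push P(M) through the Markov transition network for N0 fixing steps.

    If t0 is given, the fixing probabilities at that single step are tilted
    by exp(+h) and exp(-h) (an assumed form of the perturbing field), after
    which the system evolves freely, as in the memory test in the talk.
    """
    p = {0: 1.0}                                  # delta distribution at M = 0
    for t in range(N0):
        q = {}
        for M, prob in p.items():
            m = mean_unfrozen_spin(M, N0 - t, N0, J)
            up, dn = (1 + m) / 2, (1 - m) / 2
            if t == t0:                           # one-step perturbation
                up, dn = up * math.exp(h), dn * math.exp(-h)
                s = up + dn
                up, dn = up / s, dn / s
            q[M + 1] = q.get(M + 1, 0.0) + prob * up
            q[M - 1] = q.get(M - 1, 0.0) + prob * dn
        p = q
    return p
```

Comparing evolve(N0, J) with evolve(N0, J, t0, h) for small h gives a finite-difference version of the linear response discussed next.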
And then I continue to evolve without the external field and observe the final distribution. If we are interested in the linear response, instead of applying a finite field, I apply an infinitesimal field at this stage and then measure how the distribution is modified. This is the result of the linear perturbation. In the left column, I just copied the evolution without perturbation. So starting from zero fixed spins, once we have fixed, for example, 16 spins, I apply the external perturbation; then the linear response shows a sinusoidal shape in the distribution function. However, if we add the perturbation right in the middle of the process, at 128 of the 256 spins, then the response at the final state has a double sinusoidal peak pattern. So the effect of the perturbation is strongly memorized in the final state. In fact, this response of the final probability distribution function is a kind of derivative of the probability distribution function with respect to the frozen magnetization at the time of the perturbation, but the horizontal axis is magnified enormously in the final distribution because of the memory and the evolution. In order to see what happens in individual realizations, we traced several sample trajectories for a system at the critical coupling with 512 spins in total. Each trajectory starts more or less randomly, but once it accumulates frozen spins, the trajectory becomes more ballistic, showing that the memory is kept in the individual history, and that it carries the memory of the state in the early stages. On top of this, if we superpose, by the color code, the contours of the mean unfrozen spin, the trajectories look like they follow these contours. From this observation, we found that the hidden process, the mean equilibrium unfrozen spin, is a martingale process.
I recall Edgar's lecture on the first day: if I don't misunderstand, a process XT is a martingale with respect to another process YT if the expectation of the future value of X, given the history of YT up to now, is equal to the actual value XT. So it is supposed that the value of XT is predictable precisely from the history of YT up to now. In mathematics, martingales have been used for a long time, but in physics it is only recently that the consequences and importance of martingales have been recognized, as in the papers by Chétrite and Gupta and by Edgar's group. We also happened to encounter a martingale process, in the work that I talk about now. In the context of our progressive quenching, it is the mean unfrozen equilibrium spin that is a martingale, with respect to the history of the frozen spins. To be more precise, there is a finite-size effect in our case, but this correction to the martingale property is of the order of the inverse square of the system size; this square is very important. Now, the key point in our case is that this martingale process is in the hidden part of the process, as compared with the apparent, or surface, process of the total frozen magnetization. The consequence of this hidden property of the martingale for the apparent process is summarized in the second formula, which was first found by Charles Moslonka, my colleague. By mathematical manipulation of the definitions of these quantities, we find what we call the hidden martingale formula, summarized in this red box. Intuitively, the hidden martingale conserves the average increment rate of the apparent process. I don't go into the details of the demonstration, but the main idea is this. Now I present three consequences, very briefly. The first one is a direct application: to predict the final stage, given the probability distribution of the early stage.
So suppose I have access to the distribution function of the frozen spins at the stage where T0 spins are fixed. Our hidden martingale formula, which is equivalent to this red box, gives the expectation value conditioned on the frozen magnetization at stage T0. So the final expectation value is obtained without calculating the transfer matrix for the steps after T0, just by using the hidden martingale formula. And this figure shows the response of the final expectation of the total frozen magnetization as a function of the stage, or fraction of the process, at which the perturbation is applied. For example, if I add the perturbation when 20% of the spins have already been fixed, the effect on the final expectation of the total frozen magnetization is about 5%. We see that the later the stage at which I apply the perturbation, the smaller the influence on the final state. This is somewhat counterintuitive from the viewpoint of the memory effect, but here the early perturbation has the stronger influence on the final result. The second application is also direct, but it concerns inference. Suppose we know that the process of progressive quenching was started before some stage T0, but we don't know the value of the frozen magnetization at that early stage. This value is fixed, but we don't know it. Instead, we only have the expectation value of the final total frozen magnetization. The question is: given this final value, how can I find the frozen magnetization at the early stage? If we use the hidden martingale formula in the red box, the solution is immediate. In this formula, the mean equilibrium spin of the unfrozen part at the early stage is a function of the unknown frozen magnetization. So we have an equation where the left-hand side is known from observation, and the right-hand side is a function of the unknown variable.
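My reading of the red-box formula, for this first application, is that the conditional expectation of the final magnetization is E[M_final | M_t0] = M_t0 + (N0 - t0) * m_t0: the present mean increment rate simply persists. A sketch (Python, with exact enumeration for the equilibrium mean; names are mine) comparing this prediction against brute-force evolution of the conditional distribution:

```python
import math

def mean_unfrozen_spin(M, n, N0, J):
    # Equilibrium mean of one unfrozen spin given frozen magnetization M.
    num = den = 0.0
    for k in range(n + 1):
        mu = M + 2 * k - n
        w = math.comb(n, k) * math.exp(J * mu * mu / (2 * N0))
        num += w * (2 * k - n) / n
        den += w
    return num / den

def predict_final(M0, t0, N0, J):
    # Hidden-martingale prediction: the present mean increment rate persists.
    return M0 + (N0 - t0) * mean_unfrozen_spin(M0, N0 - t0, N0, J)

def exact_final(M0, t0, N0, J):
    # Brute force: evolve the conditional distribution from (t0, M0) to the end.
    p = {M0: 1.0}
    for t in range(t0, N0):
        q = {}
        for M, prob in p.items():
            m = mean_unfrozen_spin(M, N0 - t, N0, J)
            q[M + 1] = q.get(M + 1, 0.0) + prob * (1 + m) / 2
            q[M - 1] = q.get(M - 1, 0.0) + prob * (1 - m) / 2
        p = q
    return sum(M * prob for M, prob in p.items())
```

In this exact-enumeration sketch the two agree to numerical precision, consistent with the martingale property of the mean unfrozen spin.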
By solving this implicit equation, we can find the answer. So we can infer it without calculating or simulating the evolution between T0 and the final state. These are the direct applications of the hidden martingale formula, but in this model of progressive quenching we can go a little further: we can even obtain an approximate prediction of the final distribution function from the early-stage probability distribution function. As I have shown many times, the hidden martingale formula is written in this form, saying that the mean increment rate of the apparent process of total magnetization keeps the increment rate of the initial stage. Now, borrowing the idea of the geometrical-optics approximation, we can remove the expectation, just by allowing for some fluctuations, or stochastic errors, which evolve in the sense of the weak central limit theorem and are only of order the inverse square root of the system size. If this is a good approximation, we can use this formula, without the expectation on the left-hand side, to carry the probability from the early-stage distribution, which we know, to the final stage: each probability weight is carried to the final value predicted by the geometrical-optics approximation, and smoothing these points predicts the final distribution. So this is the result, and essentially the final slide of my presentation. In the left column, the top two figures show the early-stage distribution, without perturbation on the left and with perturbation on the right, which is shifted a bit toward the right; they have only 16 points because we have fixed only 16 spins. Now, if we use the geometrical-optics approximation, this predicts the final stage, where all 256 spins have been fixed. The predicted distribution, although it has only 16 points, reproduces pretty well the full numerical calculation of the final magnetization distribution.
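The inference step, solving the implicit equation above, can likewise be sketched: given the observed final expectation, scan the admissible values of the frozen magnetization at stage T0 and pick the one whose hidden-martingale prediction matches (Python; names and the scan-based solver are mine).

```python
import math

def mean_unfrozen_spin(M, n, N0, J):
    # Equilibrium mean of one unfrozen spin given frozen magnetization M.
    num = den = 0.0
    for k in range(n + 1):
        mu = M + 2 * k - n
        w = math.comb(n, k) * math.exp(J * mu * mu / (2 * N0))
        num += w * (2 * k - n) / n
        den += w
    return num / den

def predict_final(M0, t0, N0, J):
    # Hidden-martingale prediction of the final expectation, given (t0, M0).
    return M0 + (N0 - t0) * mean_unfrozen_spin(M0, N0 - t0, N0, J)

def infer_early_magnetization(final_mean, t0, N0, J):
    """Invert final_mean = M0 + (N0 - t0) * m(M0) by scanning the
    admissible values of M0 (|M0| <= t0, same parity as t0)."""
    return min(range(-t0, t0 + 1, 2),
               key=lambda M0: abs(predict_final(M0, t0, N0, J) - final_mean))
```

Because the prediction is monotone in M0, the scan (or any root finder) recovers the early-stage magnetization without simulating the evolution from T0 to the final state.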
Either without or with perturbation, the position of the peak is quite well reproduced, and the presence of the double peak emerges from the single-peaked distribution through the geometrical-optics approximation. So this is all the content of my talk. As a summary, we introduced progressive quenching as a hidden martingale process, and for this hidden martingale process we derived the hidden martingale formula, which captures the long-term stochastic memory within some approximations. I acknowledge first our colleagues Charles Moslonka, Bruno Ventejou, and Mihail Etienne, and we also thank fruitful discussions with our colleagues Brittin Mungang, Izaak Neri, Luca Peliti, Édgar Roldán, and Anton Zadrin. And thank you to the audience. Thanks a lot, Ken. I send a clap with a big hand on behalf of the audience. So I leave the room for questions: if anyone has a question, please write it in the chat or just unmute your microphone and ask. Okay, while the questions come, I have a couple of questions on my side. Yes. One is about the stochastic processes you mentioned, the one of the two related to the stochastic differential equation. Ah, okay, the first one. Yes, maybe here. Exactly. Yes. So here you say that you have a process with a drift, and the drift does not depend on big M; it depends on small m. Yes. And small m has memory of the past, no? Because small m is equilibrating with what is left in the system. The unfrozen part, you mean? Exactly. Yes. So this is like an equation where, if big M is the position of a particle, this would be a particle with a drift, and the drift has memory of the past. In our case, yes. So it is somehow a model of an active particle, no? Yes, it completely breaks detailed balance. So in essence, you could apply all this formalism to active matter as well. Maybe. Good idea.
To make predictions on, I don't know, the velocities and the distributions of active particles. Yeah. Oh, okay. I remember I've mentioned this already elsewhere, but I had forgotten; thank you for reminding me. It's interesting. But one important point is that in our model, this is the mean equilibrium spin, which is divided by the number of unfrozen spins. So the mean is the average per spin: it is an intensive quantity, which does not fluctuate much. On the other hand, if I apply this directly to one single Janus particle, its rotational Brownian motion is too violent, which would invalidate our geometrical-optics approximation. So the hidden martingale formula is still completely valid, but the geometrical-optics approximation may not be so good. I thought about this, and that is my conclusion for the moment. All right. Okay, there are questions from the audience. I leave the stage to Ali first. Thanks a lot, Ken, for your talk. I'm a complete outsider, ignorant of this field, but it would be very interesting to hear from you some perspective on the kinds of problems in biology where stochastic thermodynamics plays a role and where these types of models are useful, just for myself and also for people in the audience. So your question is: what is the relevance to biology? Yeah, just to get a sense of what types of biological processes these models are relevant to, and what types of biological questions one can answer with them. Okay. Please don't take this as a joke, but at the same time, I'm not an expert at all, so it's just my imagination. Sure, please. So a developmental process, or an evolutionary process, fixes the actual state progressively, and this fixation, either in the form of DNA or in the form of a social or legal system, et cetera, influences the future evolution.
For example, in the development of a child, the brain makes connections in its neural network, and they build up their character and everything; all of this is a progressive quenching. Of course, the analogy is very rough and vague; I completely agree with you and with everybody. But the idea of progressively changing the fixed part and the flexible part is new, and I just tried one case to make a profound study. I don't know if this answers part of your question. No, no, it partly answers my question. But I guess there must be some interesting implications and applications of this in many biological systems. I think so. Yeah. Thank you very much for commenting. Thanks. All right, there are other questions. Here is one from the chat. Suman Dutta is asking the following; he has two questions. The first is: for J0 = 2, the trajectory is almost a straight line; could the dynamics be looked upon as a driven random walk? Yes, essentially a biased random walk. Okay, so that's clear. And the second question: given some trajectory of a stochastic process, like a walk, a stock price, et cetera, is it possible to extract the memory kernel? The inverse problem. I think this is what people do using hidden Markov models or, more recently, deep learning, with hidden-layer processes. Once they find a good model of the hidden layer, then, looking back at only the surface process, the hidden layer makes clear the role of the memory kernel. This is my naive understanding. Do you agree, Édgar? I agree. There are several papers on this topic, papers which show algorithms to extract memory kernels if you assume a generalized Langevin dynamics with a memory kernel instead of instantaneous friction. There are several interesting papers; I can send links to them. Yes, thanks. It all depends on the model. Okay. There's another question.
It says the following: with the results showing that progressive quenching can show memory of early perturbations, does that reduce the effectiveness of simulated annealing for doing initial equilibration in MD simulations? Could you just repeat the question? Okay, I'm reading it because it's very long. With the results showing that progressive quenching can show memory of early perturbations... That's okay up to there. Does that reduce the effectiveness of simulated annealing for doing initial equilibration in MD simulations? Okay, the effectiveness of doing especially the initial equilibration. I'm not at all an expert on simulation, especially annealing, but generally I agree, because once you are trapped in a local basin of attraction, it is much harder than in the early high-temperature stage to get out of it. But maybe the question poses a deeper point on which I'm not confident. Okay. Maybe somebody has... No, I was just asking because that model looks a bit like simulated annealing, which is a technique where you start the system at a very high temperature and then slowly lower the temperature until you reach the temperature at which you actually want to run the simulation. Usually this is used to make sure that you have a good initial state. But with this kind of result, it might be possible for someone to argue that the state you get from simulated annealing could still have a lot of memory of whatever you put in during the annealing, instead of being the equilibrium state. I don't know if that makes sense. Okay, can I say something? Of course. So our model of progressive quenching has no annealing: once I fix, it is fixed forever. In this sense, it is quite close to a greedy algorithm, a technical term I learned from a referee report on our paper. So it is a greedy algorithm: it sometimes reaches a good solution, but often it fails.
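As a toy illustration of this greedy point (my own construction, not from the talk): a one-pass spin fixing that never changes its mind, compared against the exact ground state of a small random-coupling spin system.

```python
import itertools
import random

def random_couplings(n, seed=1):
    # Random symmetric couplings J_ij on the complete graph (a toy problem).
    rng = random.Random(seed)
    return {(i, j): rng.uniform(-1, 1) for i in range(n) for j in range(i + 1, n)}

def energy(spins, J):
    return -sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())

def greedy_fix(n, J):
    # Fix spins one at a time, each minimizing the energy against the spins
    # already fixed; no spin is ever revisited (a "greedy" quench).
    spins = {}
    for i in range(n):
        def local_energy(s):
            return -sum(J[(j, i)] * s * spins[j] for j in spins)
        spins[i] = min((+1, -1), key=local_energy)
    return [spins[i] for i in range(n)]

def exact_ground_state(n, J):
    # Exhaustive search over all 2**n configurations (fine for small n).
    return min(itertools.product((-1, 1), repeat=n), key=lambda s: energy(s, J))

n = 10
J = random_couplings(n)
E_greedy = energy(greedy_fix(n, J), J)
E_min = energy(exact_ground_state(n, J), J)
```

The greedy energy can only match or exceed the true minimum; for frustrated couplings it typically stays above it, which is why an annealing step with some "change of mind" helps.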
So in order to improve this algorithm into an annealing process, we would have to allow some change of mind, some change of memory, afterwards. Does this answer the question? Yes, thank you. Okay, any other questions? Maybe out of curiosity, let me ask another one. Could you try to explain to the audience how these results translate into the language of the referendum image that you gave? So I suppose that, through social media, public opinion polls are taken often, frequently enough. Every time somebody makes up their mind, it is immediately distributed through the community, so it is not only the final vote that is made public to the others. Nowadays many people use different social media, vaguely taking the statistics of the opinions, and my hypothesis is that at some point each person makes their decision. These two hypotheses, communication and decision, are shared with progressive quenching, and with them one could infer the distribution of the vote at a future time. Okay. But one thing which is always surprising is that even if an intermediate poll shows a 52% advantage over the adversary, they say it is almost fixed, but sometimes this is betrayed, as with Trump's election. Yes. But still, I'm afraid of the strong influence of the first initiative, the initial bias, which iteratively biases the future outcome. I'm not an expert; I should not say more. Okay, thank you. I don't know if there are more questions; I believe there have already been quite a lot, so we close this session then. Thank you again. Thank you very much. So there will be a break now; I think we restart at 10:15. So you are free to have a coffee, this is our coffee break, and you can still keep asking questions in the chat. We come back in 15 minutes. Thank you. Right, shall we come back? So hi everyone again, and welcome to the new people joining.
We will continue with the morning session of the school, with Lorenzo Rovigatti. I remind all of you of some of the rules of the game: feel free to ask questions during the talk, and also to send questions in the chat, because this makes the session much more lively. Of course, there will also be around 15 minutes for discussion at the end of the talk. So I leave the stage to Lorenzo. Thank you, Edgar. So can you see my screen? Yes, perfect. Okay. As Edgar said, please feel free to interrupt me. I'm Lorenzo Rovigatti; I work in the physics department of Sapienza University of Rome. You can see the title of my talk; I like titles that are probably less informative but more direct, so today I'll talk about phase separation in a cool protein system. Very informal language; the talk will be quite informal. So this is my sort of take on biology, maybe too grand an expression, and I apologize to all of you who actually are biologists or biophysicists or biochemists; I'm not one by trade, so I hope to say things that are not too wrong. What matters for this talk is that everything in biology is a matter of organization, from the macroscopic scale down to the microscopic scale. If we start with a body, say one meter in size, it is made of organs, and then you have a sort of fractal structure, if you want: these organs are made of cells, and if you look within the cell, you start seeing the same pattern again. There are compartments inside the cell that act essentially as organs. Then, at the smallest scales, you have the workhorses of the body: proteins and nucleic acids. Of course, there are also smaller molecules, but let's stop here. And these things are organized.
These compartments are there to organize, essentially, the proteins and the nucleic acids. They are called organelles, and there are many of them; I'll be honest, I probably don't know the role of the majority of them in the cell. This is a very hard, very well-studied field of biology. The general thing about them is that they perform specialized tasks, and there is an interior and an exterior: they are well separated by a membrane. That's crucial. Since they have this membrane, they maintain an identity over some long time scale. Of course, they are not completely separated from the rest: they communicate with the cell, with the environment, and with other organelles through molecular signaling, and of course they exchange information. This was all well established; most of it is still under study, but it is a picture that has been there for quite some time. And that was the case until roughly ten years ago, when it was discovered, in embryos of C. elegans, that some organelles are actually liquid. They flow, they are viscous, they can join, they can in some sense evaporate. So there are liquid droplets inside some cells. Here the keyword is droplet, and, well, we have been hearing about droplets for other reasons in these last months, but in some communities droplets are associated with phase separation. And this is not something I usually like to do, showing the number of papers on some subject published per year, but this is a quite sharp increase: this is the number of documents containing the expression "protein liquid-liquid phase separation" either in the abstract or in the title, and you see that there is a huge jump after 2010.
So after the appearance of that paper, people realized that phase separation could play a huge role in the cell because of these organelles. Now, what kind of phase separation are we talking about? A warning, just to be explicit: the terminology really depends on your community. Essentially, we have a phase separation where some components demix: at the end, there is a part of the system where these components are highly diluted, and a part where they are denser. That's what I would call a gas-like part and a liquid-like part. Of course, it's not really a gas, since there is a solvent and there are other solutes, but if you look only at the things that are phase separating, you see a gas-like phase and a liquid-like phase. In the biology world, this is called liquid-liquid phase separation, whereas where I come from, which is the colloidal world, it is called gas-liquid phase separation; for colloid people, liquid-liquid phase separation is actually a completely different thing, a phase separation between two liquids, between two dense phases. So one has to be quite careful when looking something up on Google or Scopus. Here we are talking about what biologists call liquid-liquid phase separation, but in some later slides I'll probably say gas-liquid phase separation; in this context they are the same thing. These droplets have been named membrane-less organelles, because they don't have a membrane: they are directly in contact with the cytoplasm, if they are not in the nucleus, or in any case with the rest of the cell. They are also known as biomolecular condensates; the two terms are essentially interchangeable.
And here you have an in-vitro example. If you have this MBP-FUS complex, two proteins joined together, nothing happens: the system is essentially homogeneous. If you cleave the bond between the two, you see these droplets coming out, just because the FUS proteins start to aggregate. This is how liquid-liquid phase-separating proteins look: you really see droplets. And here you see a scheme showing some of these membrane-less organelles. There are many of them, and in some cases there is some fight over whether something is really a membrane-less organelle, over the definition, but there is really a lot of new stuff to study. They are liquid-like, that's the main point, or mostly liquid-like: they can flow, they are viscous, they can join, they can form, and so on. It was discovered that they are made essentially of multivalent proteins or RNA, where multivalent means that the building blocks can bind to other copies of the same macromolecule, or to other macromolecules, forming up to some number of bonds. And they serve essentially as micro-reactors: they concentrate specific species, either to sequester some molecules or to enhance some reaction in the cell. The mechanisms at the base of their formation are also linked to diseases: when you hear about abnormal protein aggregation, it is usually the same mechanism, and when this mechanism goes awry you end up with something that might kill the cells, as happens, for example, in Alzheimer's disease. So these things are important for diseases, but also for the homeostasis of the cell. Now, phase separation: why is phase separation used by nature, by evolution, if you want?
Because phase separation occurs when there is a subtle interplay between entropy and enthalpy in the system. This means it is very sensitive to changes: small changes can either suppress or enhance phase separation, and this can be used, of course, to react to changes in the environment. This is why phase separation was, so to speak, chosen. I'll show you one qualitative example. Here is a snapshot of a simulation with five types of particles. I'm showing three colors because some of the species interact strongly with each other, so I'm coloring those with the same color. Essentially, you have to set 15 different mutual interactions between the species. You see that under these conditions, which of course I've chosen carefully, the system is homogeneous. Then, if I change a single interaction, the way two of these species interact, then boom: this is the same system, I just let it run, and you see these compartments appear. So you have phase separation from a quite small change. And that's the point: if I changed the interaction back to what it was before, the system would re-homogenize. You can do on-the-fly compartmentalization. In cells, it is being discovered that liquid-liquid phase separation is used in many ways. It is of course connected to diseases, but, for example, it is also involved in transcription. It can be of different nature: it can act as a way of creating scaffolds that recruit clients, small molecules that are then maybe involved in some other reaction. And since it is so sensitive to changes, it can also be affected by, for example, changes in the local mechanical properties of the cell.
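The count of 15 interactions in the five-species example is just the number of unordered species pairs, self-pairs included. A minimal sketch of setting up such a symmetric interaction matrix (names and values are my own illustration, not from the talk), where flipping a single entry is the "one small change" that can switch phase separation on or off:

```python
import itertools

# Five species: one interaction strength per unordered pair, like-like included.
n_species = 5
pairs = list(itertools.combinations_with_replacement(range(n_species), 2))
print(len(pairs))  # 15 independent pair interactions

# Symmetric interaction matrix; changing one entry (and its mirror)
# is the single change discussed in the talk.
eps = [[0.0] * n_species for _ in range(n_species)]

def set_interaction(matrix, i, j, value):
    matrix[i][j] = matrix[j][i] = value

set_interaction(eps, 1, 3, -1.0)  # e.g. make species 1 and 3 attractive
```

The point is just bookkeeping: n(n+1)/2 grows quickly, which is why even a five-component toy mixture already has a rich parameter space.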
It has also been emphasized that this is a tool for controlling the appearance and disappearance of these things. As for studying them: of course, the cell is full of other stuff, not only components that phase separate. There are thousands, if not tens of thousands, of different macromolecules, and the point is that these macromolecules have been optimized to interact with each other by evolutionary pressure. So you cannot really take out the proteins you are interested in, look at how they behave in vitro, and hope they will behave the same in vivo. That's true to some extent, but it's not clear that it always works: it has been shown that in-vivo and in-vitro experiments don't always give the same results. So it's really not easy to understand what's going on in a cell with all these things moving around. So, our idea. And here I'm taking too much credit: it was an idea of Emmanuel Levy, who works at the Weizmann Institute and whose group did the experiments. The idea is simple, as good ideas often are, in principle: we kindly ask living organisms to express, in a stochastic way, some amount of artificial proteins that we have designed to phase separate in a certain way. Of course, the convincing part is the hard bit if you're an experimentalist, but I'm not, so for me that's just a word. These artificial proteins are expressed in a yeast strain, so you'll see some pictures of yeast cells. Since they are artificial, they are extraneous to the cell: the mutual interactions between these artificial proteins and the other, natural proteins have not been optimized by nature. They bind through a lock-and-key mechanism.
So only a protein of a certain type can bind to a protein of another type. And of course they are multivalent, because we know that this is important. The final goal is to build something that can be used to better understand liquid-liquid phase separation, but in vivo: that's the main point. And what I have done is to accompany this experimental toolkit with a numerical toolkit. These are the building blocks. There are two species of proteins: what I call a dimer, which is this one, and a tetramer, which is this bluish-greenish-yellowish thing. There is an antigen-antibody part that takes care of the lock-and-key mechanism; then there is a sort of body, which is long and stiff for the dimer and small for the tetramer; and then there are fluorescent proteins of different colors, so that you can spot the different proteins under a microscope. This design was chosen carefully so that the red thing cannot easily bind twice to the same tetramer, which helps both with the modeling and with understanding what's going on: a dimer can bind a given tetramer only once. The other important point is that the interaction strength between dimers and tetramers can be tuned with point mutations, so we can express proteins that interact with different strengths and understand the role of the affinity, which is what the attraction strength between the two components is called. So let's first do a sort of control. We let yeast cells express these proteins, but with a variant such that no bond is possible: there is no lock-and-key mechanism here. You have two channels, one for dimers and one for tetramers, and then the overlap channel. And you see that in this strain the fluorescence is spread throughout the cells: nothing is happening here.
The proteins can move around, and they do: they diffuse throughout the cell. But as soon as we express proteins that can bind to each other, you see that in some of the cells they localize. And this localization is a co-localization: where you find a lot of green, of tetramers, you also find a lot of dimers, which means they are interacting. This is how a biomolecular condensate looks in a cell: it's just a dot, a punctum if you want. And this is a membrane-less organelle, a synthetic one. It plays no specific role in the cell; it's there just because we want it to be there, because we want to understand how it behaves. Now, we can take wonderful pictures with this, and Emmanuel and his group have done it, but we can also go a bit beyond and really understand what's going on from the thermodynamic point of view. What we can do is map the negative of the phase diagram. The phase diagram tells us where the system is unstable, where there is phase separation; what we do here is map the negative. We take a picture like this one, upper left, with a lot of cells, thousands of cells; I think each phase diagram was done from a few thousand cells. Then we disregard in our analysis the cells that have condensates in them, and take only the ones with a homogeneous fluorescence throughout the cell: those would be the red, orange, yellowish, and green ones here. These don't phase separate. And things don't phase separate either when the concentration is too low, or when there is an imbalanced stoichiometry: if you have too many of one component with respect to the other, you won't be able to form enough bonds, and you need bonds, you need connectivity, to support phase separation.
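The stoichiometry argument can be made quantitative with simple bookkeeping. A toy sketch (the function name and the fraction-of-bonded-sites measure are my own illustration), assuming every bond pairs exactly one dimer patch with one tetramer patch under the lock-and-key, single-bond-per-patch rules:

```python
def max_bond_fraction(n_dimers, n_tetramers):
    """Fraction of all patches that can be engaged in dimer-tetramer bonds,
    assuming each bond uses one dimer patch (valence 2) and one tetramer
    patch (valence 4). Illustrative proxy for network connectivity."""
    dimer_sites = 2 * n_dimers
    tetramer_sites = 4 * n_tetramers
    bonds = min(dimer_sites, tetramer_sites)  # the minority sites saturate
    return 2 * bonds / (dimer_sites + tetramer_sites)

print(max_bond_fraction(20, 10))  # perfect stoichiometry (2 dimers/tetramer): 1.0
print(max_bond_fraction(50, 10))  # dimer excess: many patches stay unbonded
```

Away from the 2:1 line, a growing share of patches has no available partner, which is the connectivity loss that suppresses phase separation at imbalanced stoichiometry.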
So if we take all these cells, put a cross at the point corresponding to each cell, and measure how much protein there is in each cell by mapping the strength of the fluorescence to a concentration, we can map the lower part of the phase diagram. We cannot go to the high-concentration part, because cells are not going to express that many proteins, but we can map the lower part of the phase diagram by taking a negative photo, if you want. So this was the experimental part. As for the numerical part, which is what I helped with, we want to model the system in a simple way. It is a very complicated system: there is water, there are a lot of solutes, there are proteins, electrostatic interactions; that's too complicated for me. I'm a simple man, I like simple things. So we keep only what we think are the main ingredients. First, of course, we need multivalency; that's called limited valence in the colloidal field, but it's the same thing, and we have learned it's very important. Then we have the lock-and-key mechanism, so we need bond specificity: this particle can only bind to that particle. And of course we need a single bond per site: since it's a lock-and-key mechanism, a site can be involved in only one bond. This is a perfect job for what is called a patchy particle. Patchy particles have been used over the last, let's say, 15 years to model a lot of different things, from polymer blends to DNA nanostars, to some types of colloids, to spherical nanobrushes. In its simplest version, the model boils down to a sphere decorated with sticky spots. It's a really straightforward model, but notwithstanding its simplicity it can give rise to very interesting effects. Here is, for example, an effect that's very important also in the protein world.
This is a quite famous effect: the phase diagram of patchy particles with different valence. Look, for example, at the red, green, and blue curves. This axis is density and this is temperature, so this is the phase diagram: inside the curves, the system phase separates. And you see that as you decrease the number of patches, the maximum number of bonds, the phase-separated region shrinks and moves to lower densities. This is important: in the biology world, if you think of membrane-less organelles acting as scaffolds, you need the liquid density to be not too high, because you need space inside to accommodate clients. So you need something that phase separates but is not very dense, and this explains why you need limited-valence particles, limited-valence proteins: you need to make a few bonds with neighbors, but not too many. Otherwise you end up with a very dense phase that essentially makes it impossible to do anything with the condensate itself. The other thing that has been done with patchy particles is to understand the interplay between self-assembly and phase separation, which is important because, as I told you, phase separation is in some cases a rather fragile phenomenon. Here you see the same sort of phase diagrams, temperature versus density, but if you change the way patches are arranged, or add different patches, or mix big and small patches, you end up with very different shapes. You might even have particles with two critical points, or with a closed-loop coexistence region, and this for a one-component system. This is quite technical, but it shows that these simple models can really be used to look at complex behavior.
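The valence effect has a classic mean-field counterpart: in Flory-Stockmayer theory, a system of f-valent units forms a spanning network once the bond probability exceeds 1/(f-1), so lower valence demands more bonding (lower temperature) before connectivity, and hence phase separation, can set in. A sketch under that mean-field assumption (not the actual analysis used for the phase diagrams shown):

```python
def percolation_threshold(f):
    """Flory-Stockmayer mean-field estimate of the bond probability
    above which f-valent units form a spanning network."""
    if f < 2:
        raise ValueError("need valence >= 2 to form a network")
    return 1.0 / (f - 1)

# Fewer patches -> higher bond probability needed for a network.
for f in (3, 4, 5):
    print(f, percolation_threshold(f))
```

This is only a qualitative guide (it ignores loops and the binary dimer-tetramer composition), but it captures why reducing valence shrinks the region where a percolating, phase-separating network exists.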
Now, the point is that we need this single-bond-per-patch condition, and if you enforce it in the simplest way, by using small patches, these small patches are going to hinder equilibration: to swap a bond you first need to break one, and at large affinity that can cost you a lot of computational time. Of course, you could in principle use large patches, but then you end up with several bonds per patch, which we don't want. So we use a sort of trick, which I name after Francesco Sciortino, who invented it, let's say. The point is the following. Look at this configuration: if you have a bond between two patches, you have energy minus epsilon. Now, as soon as another patch comes in, which is possible with these large patches, you have two bonds, so minus two epsilon, and this would be energetically favorable. But if you add a three-body term to the interaction, controlled by a parameter lambda, the energy becomes a bit more complicated: minus two epsilon plus lambda epsilon. Then, if one patch goes away, you are back to the single-bond energy. So everything is controlled by lambda. If lambda is larger than one, it is not energetically favorable to have triplets: you recover the single-bond-per-patch condition, and any other incoming patch is repelled. However, if you set lambda equal to one, all three configurations give the same energy, which means this swap process happens without any energetic penalty: it can happen essentially for free. An interesting bit is that this still enforces a single bond per patch, since the two-bond state has the same energy but the single-bond states are entropically favored. Essentially, for lambda equal to one, some nice things can happen. So here is an example: I'm running a simulation of the system.
And you see, this is the potential energy, and these are the extent of the thermal fluctuations. You would expect the system to be completely arrested, because particles that are bound cannot unbind. But if we look at the mean squared displacement, shown here for different concentrations, you see that it always becomes diffusive. So these things diffuse even though they never break any bonds: they diffuse through this bond-exchange mechanism. This is interesting by itself, but it's also interesting because it means we can equilibrate this sort of system a lot faster. In this example I'm showing equilibration curves, and you see that equilibration is something like a hundred times faster with this model than with the other one. This is what I call a free lunch, because it's not something we put in: we are interested in other properties, and we get it for free, which is very nice. And we can run large simulations, since we equilibrate this fast. So now, almost at the end of my talk, I'll show you what I've done for our specific system. The system I simulate is a binary mixture of particles that can form up to two and up to four bonds, and we change density, composition, and temperature, where temperature is essentially the reciprocal of the affinity. We also found a way of understanding whether we are in or out of equilibrium, and we do so by using sedimentation simulations. This is a trick: in experiments they don't do this; we do it because, as I'll show you later, in this way we can easily get the phase diagram, the phase behavior of the system. We also do some regular constant-volume simulations to compare with the dynamical data measured in experiments. And if I have time, I'll show you a few things we can do to go beyond what has been done in experiments. So, sedimentation.
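The mean-squared-displacement diagnostic mentioned above can be computed directly from a trajectory. A minimal numpy sketch (assuming unwrapped coordinates; the trajectory here is a synthetic random walk, not simulation output), which should come out diffusive:

```python
import numpy as np

def mean_squared_displacement(traj):
    """MSD from a trajectory of shape (n_frames, n_particles, dim),
    averaged over particles and time origins, for each lag time."""
    n_frames = traj.shape[0]
    lags = np.arange(1, n_frames)
    msd = np.empty(len(lags))
    for k, lag in enumerate(lags):
        disp = traj[lag:] - traj[:-lag]          # displacements at this lag
        msd[k] = np.mean(np.sum(disp ** 2, axis=-1))
    return lags, msd

# Sanity check on free diffusion: a 3D random walk with unit-variance
# steps should give MSD ~ 3 * lag.
rng = np.random.default_rng(0)
steps = rng.normal(0.0, 1.0, size=(200, 50, 3))
traj = np.cumsum(steps, axis=0)
lags, msd = mean_squared_displacement(traj)
```

For an arrested system the MSD plateaus at the cage size; the signature of the swap dynamics is that the same curve eventually turns linear in lag even though no bond is ever broken.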
It's an interesting numerical technique, because with a single simulation you can extract the whole equation of state. What we see here is a crossover as you go from a low-density gas, if you want, to a high-density liquid, and this crossover is the phase separation. Of course it's a finite system, so it's a smooth crossover, but if you fit it with a suitable functional form you can extract the coexisting densities, and from those you can map the phase diagram. These are the experimental phase diagrams. As I told you, this is the low-density part, the negative of the low-density part of the phase diagram. The main observations are the following. First of all, everything is roughly symmetric with respect to the perfect-stoichiometry line, two dimers per tetramer, which is the red one; the phase diagrams are sort of symmetric around it, which makes sense. But then, as the affinity grows, the phase diagram enlarges and enlarges, and then something happens here: you get a sort of re-entrance of the lower branch. We thought this might be an out-of-equilibrium effect, so we ran some simulations to check, and we can do that with this sedimentation technique. If we run at really large affinity with lambda equal to one, we are able to sample in equilibrium even though the affinities are so large; but if we use lambda equal to ten, we still have the single-bond-per-patch condition, yet we fall out of equilibrium. So what we do is compare the phase diagrams computed in simulations with lambda equal to one, which is equilibrium, and lambda equal to ten, the non-equilibrium condition. This is the same phase diagram, but we can map the whole thing, not like in experiments: we can go up to large densities. And you can see we get the same qualitative effect as in experiments.
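The way a single sedimentation profile yields the whole equation of state can be sketched with hydrostatic balance: the pressure at height z is the weight of the column above it, P(z) = mg ∫ from z to the top of ρ dz', so pairing P(z) with ρ(z) traces out P(ρ). A toy numeric version (my own illustration, not the authors' analysis code), validated on a barometric ideal-gas profile where P = kT ρ:

```python
import numpy as np

def equation_of_state(z, rho, mg=1.0):
    """From a sedimentation density profile rho(z), z increasing upward,
    recover pressure vs density via hydrostatic balance:
    P(z) = mg * integral from z to the top of rho dz'."""
    dz = z[1] - z[0]
    # cumulative integral from the top of the column down (rectangle rule)
    pressure = mg * dz * np.cumsum(rho[::-1])[::-1]
    return rho, pressure

# Ideal-gas check: barometric profile rho ~ exp(-mg z / kT)
z = np.linspace(0.0, 20.0, 4000)
kT = 1.0
rho_profile = np.exp(-z / kT)
rho_out, P = equation_of_state(z, rho_profile, mg=1.0)
# For an ideal gas, P / rho should be ~ kT = 1 at every height.
```

In the interacting system the same construction bends where the gas-liquid crossover sits, and fitting that bend gives the coexisting densities.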
So you have this re-entrance of the lower branch, whereas the upper branch stays more or less still. Here the shading shows the uncertainty: of course there is some noise, these are hard simulations to run, but you see that the effect is quite clear. Another thing you can do in experiments is bleach the condensates: you take the cells that have condensates, you bleach them, and you look at how much time it takes for the fluorescence to recover. These are the curves; the averages are shown in red. You see that there is a large spread in the experiments, and as you go up in affinity the recovery gets slower and slower, since of course the bond strength is larger; at the largest affinity you essentially don't recover, so you can really say we are out of equilibrium there. Then we can do the same thing in simulations, again for three affinities. The simulations show that the spread is due to condensates having different concentrations and overall densities, and we see the same effect: at the largest affinity, the one that gave the out-of-equilibrium phase diagram before, we see no recovery, as in experiments; and the spread is smaller, also as in experiments. So, to conclude, I want to give you an idea of what we are doing next with this model. Sorry, Lorenzo, you should try to finish in five minutes, in order to have time for questions. Yeah, I have two slides more, thank you. So, a quick overview of the model extensions. We want to understand what happens if we change the model a bit, in ways that might reflect possible experimental modifications of the system. One is to add, for example, a nonspecific attraction between our, let's say, proteins; and this is something that's probably there.
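A minimal way to quantify recovery curves like these is the half-recovery time. The helper below is a hypothetical sketch (not the analysis used in the talk), tested on a synthetic exponential recovery whose half time is analytically tau times ln 2:

```python
import numpy as np

def frap_half_time(t, intensity):
    """Half-recovery time of a FRAP trace: first time the fluorescence
    passes half of its final plateau. Assumes intensity starts near 0
    right after bleaching; returns inf if recovery never reaches half."""
    plateau = np.mean(intensity[-len(intensity) // 10:])  # late-time average
    above = np.nonzero(intensity >= 0.5 * plateau)[0]
    return t[above[0]] if above.size else np.inf

# Synthetic exponential recovery with time constant tau = 2.0;
# analytically the half time is tau * ln(2) ~ 1.386.
t = np.linspace(0.0, 20.0, 2001)
tau = 2.0
trace = 1.0 - np.exp(-t / tau)
half = frap_half_time(t, trace)
```

A fully arrested condensate gives a trace that never approaches the plateau, so the half time diverges, which is the "no recovery" signature at the largest affinity.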
So proteins on average feel a very weak mutual attraction, and this term might capture that. What happens here, in this other representation of the phase diagram, which works a bit better for looking at parameter changes, with the relative concentration of one component on one axis and the overall density on the other, is that as we increase the strength of the nonspecific attraction, the region of instability, the phase-separated region, enlarges. Then we can look at what happens with defective proteins; these data are a bit noisy because they are preliminary. If, instead of forming four bonds, a fraction of the particles can form only up to three bonds, you see, going from the black curve, which is the perfect system, to the red one, a shift upwards and leftwards: the phase diagram shrinks a bit as we increase the fraction of defective particles. We also look at what happens if we change the geometry of the patches. For the dimers, the particles with two patches at the poles, changing the angle between the patches would correspond to changing either the geometry or the flexibility of the real protein, and you see a similar effect: a shrinkage of the phase diagram. So these are all things that either can happen in the experimental system or can be added to it, and we can look at what happens in simulations. To conclude: this is the main reference; the work was published a couple of months back. I hope I showed you that we can engineer synthetic biomolecular condensates, or membrane-less organelles, tune their properties in vivo, measure their phase behavior in experiments, and understand what's going on by using simple coarse-grained simulations.
We also developed some theory, which I haven't shown you, to quickly draw phase diagrams of the same systems. And this toolkit is being used to check some specific hypotheses about biomolecular condensates, one of which is also in the paper, and to test new methods. With this I conclude; this is the list of people who worked on the project, and thank you for your attention. Thank you very much, Lorenzo. I leave the stage for questions. Please don't be shy: send any questions in the chat, or don't hesitate to just unmute your microphone. Right, so I will start with one question about the parameter lambda. I see that the dynamics depends strongly on this parameter, and my question is the following: can one estimate this parameter from experimental data? So, this is the way I see it. This parameter is really just something we put in our model to be able to simulate under sort of extreme conditions, where we wouldn't be able to equilibrate otherwise. In this specific protein system it's not something I would expect to extract from experiments; we use it to check what happens when we fall out of equilibrium. We have the incredible opportunity of comparing what happens if we speed up or slow down the dynamics by changing this parameter. So this lambda is something we introduced so that we are able to essentially draw something like this. Of course, in experiments we know we are out of equilibrium, but we don't know how to get the true equilibrium behavior for those same conditions, because if you do the experiment, then you realize you're out of equilibrium. What we are able to do here is check what would happen if we could look at the system in equilibrium, even under the experimental conditions. That's why lambda is there, okay?
I did prepare a slide, because I sort of expected a question like this, and the important bit is that lambda does not affect the real equilibrium behavior. We have run some simulations at a smaller affinity, where we can equilibrate both systems, with lambda equal to one and lambda equal to ten, and it turns out that this bond-swapping lambda does not affect the phase diagram. So it's really a dynamical knob we can use to compare what's happening in experiments with the equilibrium behavior. Thank you. Any other questions? Okay, I have another one about the relationship to experiments: what is your perspective on applying your models to explain, for example, granule phase separation? Yes, this is something we are actually trying to do now; in fact, we were quite disappointed that someone published, a couple of months back, a paper we wanted to publish. People are looking into this and doing it exactly the way we do: using these multivalent particles, patchy particles, to look at the behavior. What we can do is try to understand what happens at a qualitative level: what is the role of, for example, the valency on the viscosity, or on the equilibrium properties, or on the porosity of what you get, and how shape influences all of this. So you can look at qualitative trends to understand what's going on. Of course, you always need feedback from experiments to be sure you are not doing something very far from reality. Okay, thank you. There is one question from the audience now, the following: in what sense is lambda greater than one non-equilibrium? Would the simulation reach a steady state? Thanks for the question. Here I mean non-equilibrium in the most general sense: what I mean is that the system essentially becomes non-ergodic.
If you look at any observable, you see aging: all your averages change over time. Then at a certain point you get stuck somewhere, in a state that is not an equilibrium one. So you do reach a sort of steady state, which is this limit here, and that gives this sort of behavior. And I can say that we are out of equilibrium exactly because I know what the equilibrium behavior should be; I know that sounds sort of tautological, but since I know what the behavior in equilibrium should be, as soon as I see that we deviate from it, I can say we are not in equilibrium. It's a very general non-equilibrium statement; I don't know if that answers the question. Just by observation, I see that I cannot equilibrate: I cannot reach the right number of bonds, the right energy, and so on. Okay, so any other questions from the audience? Yeah, five minutes more, in case you have any doubts. For students, I open the room: please feel free to ask anything. There are also details about the simulation techniques, if you want some of those; this is quite different from many of the other tools, where people have used sort of atomistic molecular dynamics, and here it's quite a different thing. There's a question now. It says: I'm curious whether the swap is limited to pair interactions, or is it valid for any form of interaction? Say it again, sorry, the first part. The question is whether the swap is limited to pair interactions, or is it valid for any form of interaction. So, for the swap, you essentially need to make sure that the state through which you are swapping, which would be this middle one here, has essentially the same energy as the other ones.
This particular swapping mechanism is of course built for pair interactions, but if you can make sure that these three configurations have roughly the same energy in your specific system, then you end up with a system that swaps bonds at constant potential energy, which is what works. The idea, I think, is quite general. All right, any other question from the audience? You can also unmute your microphone. Okay, another one: is it also valid for polydisperse systems? What are we talking about, the swap? Sorry. I guess so, yes; it's quite general. Here I showed you, without going into details, a five-component mixture, so you can essentially use the swap trick on any model. Here the species all have the same radius, but they are all different: same radius, but different patches, different patch arrangements and so on. But you could make smaller or larger particles, you could make ellipsoidal particles; essentially, yes, for these specific pair interactions, you can arrange them the way you want. So also for polydisperse systems, yes. All right, thanks a lot. It seems there are no more questions, so I will close the session. Thank you again, Lorenzo, for the great talk. Thank you. We'll return in the afternoon with more talks. Thank you. Thanks a lot.