Hello, Ralf. Ralf, you're muted, but it's okay. No, it should work. Yes. Since I use so many of these different systems, I never know what I need to do to share my screen. We have a couple of minutes, so would you like to try it? Yeah, maybe. So I have a button here; what will happen if I click it? Just for the benefit of the other Ralf, the first speaker who just joined us: Ralf Everaers is going to try his screen sharing. We are recording now. I don't know what I need to do to share my screen. If you don't mind, we can try at the break; I have too many windows open. You have to tell me if this works. Okay. Yes, we see your desktop, so the presentation screen, as it should be. Actually, we see the presenter screen. We have one minute, Ralf. Is this the way it should be or not? I don't think so. Okay, then I should just stop doing anything. Stop share. Okay, thanks, Ralf. We have a quarter-of-an-hour break after Professor Eichhorn's talk, and we can figure things out then. Too many Ralfs in one session. Right. Okay, so now I will mute everybody and unmute Professor Eichhorn. So, Ralf, are you set to go with the presentation? Yes, right.

So welcome, everybody, to this afternoon session of the second school. My name is Cristian Micheletti; I am based at SISSA in Trieste. We have two talks in the afternoon session. The first will be given by Professor Ralf Eichhorn from Nordita in Stockholm, and the second by Professor Ralf Everaers, who is based at the École Normale Supérieure in Lyon. So, the first talk: Professor Eichhorn will start momentarily, and he will talk about Brownian dynamics simulations with hard-body interactions: exact numerical treatment. So, Ralf.

Yeah, thank you very much. I would also like to thank the organizers for the invitation. I'm happy to join this meeting; of course, I would have been even happier to attend in person, but that's how it is. So I start sharing my screen now. Okay. You can see the presentation, right? Yes. Okay.

So the title is "Brownian dynamics simulations with hard-body interactions: exact numerical treatment", and this is a talk basically about these two papers I published together with Hans Behringer, who was in Mainz at that time but is now in Hamburg. In principle it is a surprisingly simple problem, which nevertheless cost us a lot of headache. If you go through the literature, there are many attempts to solve it, but all these attempts are not really correct; I will explain that in a little more detail.

So this is about Brownian dynamics simulations of Brownian motion: a particle, or interacting particles, in an aqueous suspension. There are external forces; these can come from electric fields, from fluid flows, or whatever. And, important here, the particles are moving in a structured environment, so there can be obstacles, confinements, chambers; there are no limitations to your fantasy. What I'm talking about here, this kind of simulation, is not a molecular dynamics simulation in the strict sense: I am not treating the solvent explicitly. The solvent here is modeled by viscous friction and thermal fluctuations, so these are simulations on the level of so-called Langevin equations; I will show the equation in a second. These kinds of simulations, with a structured environment and external forces, have many applications, in particular in microfluidics, where, for instance, I did a lot of simulations trying to model experiments; this is one of the structures I simulated a while back. You can also apply them to biomolecules in the cell, to self-assembly, to polymers.
Always, that is, when you model these objects more or less as hard beads, or maybe as objects with a somewhat more structured shape than just spherical. I will focus on spherical objects in my talk. And it is about the simulation of such a system: how do you do the simulation when there are these interactions with hard walls, for instance these obstacles here, a particle interacting with these obstacles while driven by this field?

The model is, as I said, a Langevin equation, which I've written down here. r-dot, the first time derivative, is equal to the forces on the particle (this is just for one particle, for now); then there is the diffusion term, the thermal fluctuations, which models the thermal noise coming from the environment; and this is the friction coefficient. The fact that this is a first-order time derivative in this force balance tells you that we have already neglected inertia effects, the second-order time derivatives: we are in the overdamped limit. This is a very common idealization in this modeling approach, and it is very well fulfilled: there are orders of magnitude between inertia effects and friction effects on these micrometer or nanometer scales in aqueous solution. So this is one of the idealizations we use.

The second one is that interactions between bodies, these extremely short-ranged and strongly repulsive contact forces, are usually modeled as hard-body interactions: there are no forces when the objects don't see each other, and when they touch, the force is more or less infinite at the contact point. For that reason these forces are singular and cannot be included in this force term here. These are external forces, or maybe other kinds of interaction forces between the particles, but not these contact forces, because they are not regular enough to be treated in this kind of differential equation. And this causes a problem when you want to do a simulation.
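In standard notation (my transcription of the equation described above, with γ the friction coefficient, F the regular forces, and ξ a Gaussian white noise), the overdamped Langevin equation reads:

```latex
\gamma\,\dot{\mathbf r}(t) \;=\; \mathbf F(\mathbf r,t) \;+\; \sqrt{2 k_B T \gamma}\;\boldsymbol{\xi}(t),
\qquad
\langle \xi_i(t)\,\xi_j(t')\rangle \;=\; \delta_{ij}\,\delta(t-t'),
```

with the diffusion coefficient given by the Einstein relation, D = k_B T / γ. The hard-wall contact forces are not part of F, which is the whole point of the talk.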
The standard approach is to use the Euler algorithm, the simplest discretization you can think of, and we will stick to it for this talk. You calculate the displacement at a certain time from the forces acting at that time. Here I abbreviate it as a velocity, because force divided by friction coefficient is like a velocity: it is the deterministic velocity coming from the external forces. And here is the discretized term from the thermal fluctuations. This G_t here is a d-dimensional vector (d the spatial dimension) of Gaussian, normal-distributed variables; they model the thermal fluctuations. Then you simply update: the new position is the old one plus this displacement. That is all fine and works very well as long as you don't see any walls.

This here is a wall; you don't see it yet. But what are you going to do if you take such a step and end up in an end position which is not a valid configuration: inside an obstacle, inside a wall, or overlapping with other particles? If you want to treat this in a numerical method, you need an algorithm which tells you what to do. The first part of this algorithm is that you have to detect these unphysical configurations, these "collisions". But keep in mind, they are not really collisions like elastic collisions, because this is an overdamped, diffusive motion: a particle is diffusing here, touching the hard wall at some point, and diffusing away. It is not an elastic collision that happens here. Anyway, I call it a collision because it is such a common term; I put it in quotation marks. So this is one part of the algorithm: you have to detect the unphysical configurations. This is mainly a geometrical problem; it depends on your structure, and I'm not going to talk about it. I'm going to talk about the second part, the rule you have to apply to generate a physically valid configuration from that situation.
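As a minimal sketch of the free Euler step described above (the function and variable names are mine, not from the speaker's code):

```python
import math
import random

def euler_step(r, force, gamma, D, dt, rng=random):
    """One Euler step of the overdamped Langevin equation (no walls yet).

    r      -- current position (list of floats)
    force  -- callable mapping a position to the external force vector
    gamma  -- friction coefficient; D = kT/gamma is the diffusion coefficient
    The displacement is (F/gamma)*dt plus Gaussian noise of variance 2*D*dt
    per coordinate, i.e. exactly the update rule on the slide.
    """
    v = [f / gamma for f in force(r)]  # deterministic drift velocity F/gamma
    return [ri + vi * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
            for ri, vi in zip(r, v)]
```

For D = 0 the step reduces to the purely deterministic drift r + (F/γ)dt, which is a quick sanity check.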
So what are you going to do, when you encounter such a situation, to turn it into something consistent with the equation of motion, the acting force fields, and this hard boundary here? Of course, this is a very common problem when you do simulations, so there are many, many articles on it, trying out different things. And all these methods have in common that they are somehow heuristic: they come up with some rule, apply it, and try out whether it works or not. There are two very prominent examples which you can also use when there are external forces. When the deterministic force is zero, things are a little simpler, and there are even more heuristic methods; but I'm going to focus on the case where you have a deterministic external force.

The simplest one you can think of is the rejection scheme: you simply discard the unphysical configuration and try a new one, by generating a new Gaussian random number for the thermal noise and checking whether it leads to a configuration which is actually okay. Then there is the question of what to do with the time during that step: do you advance it or not? Here you already see there might be a problem. It is just a simple thing you can think of; there is one reference here where they suggest it, but there are many more.

The other one is a more sophisticated method, an event-driven scheme. Here the idea is that you propagate the particle only for a fraction of the time step, up to the collision point where it actually hits the hard wall, and then you use the rejection scheme for the remaining time step. So you somehow simulate one collision with the wall. But if you know a little bit about Brownian motion, it is quite clear that this also cannot be fully correct, because in Brownian motion with white noise a particle can cross a given line infinitely often in any time interval, and this is just one collision.
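To make the heuristic concrete, here is a 1D toy implementation of the rejection scheme for a hard wall at q = 0 (my sketch, shown only to illustrate the rule; as discussed in the talk it does not reproduce the exact near-wall statistics):

```python
import math
import random

def rejection_step(q, v, D, dt, rng=random, max_tries=100000):
    """Rejection scheme near a hard wall at q = 0 (1D toy sketch).

    Trial displacements that would land at q < 0 are discarded and the
    Gaussian noise is redrawn.  For strong drift into the wall almost
    every trial is rejected, which is how such simulations get 'stuck'.
    """
    sigma = math.sqrt(2.0 * D * dt)
    for _ in range(max_tries):
        q_trial = q + v * dt + sigma * rng.gauss(0.0, 1.0)
        if q_trial >= 0.0:          # valid configuration: accept it
            return q_trial
    raise RuntimeError("all trial steps rejected (drift into wall too strong)")
```

Whether the clock advances during rejected trials is exactly the ambiguity raised in the talk; this sketch advances time only on accepted steps.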
It is also just an idea: okay, let's try whether it works. It actually works surprisingly well, as I will show a little later. But all these methods lack a thorough justification; they are rules of thumb, you could say. You try them out and you see whether they work or not.

So what if you really want to do it correctly? Is there a way to do it correctly, or at least with a justification? This is something which took Hans and me a while to realize, although in the end it is very simple. Think about what the Euler algorithm actually does. It creates a new position by using this Gaussian random number and advancing from the old to the new position. That corresponds to a probability for this displacement which is a Gaussian, simply because this G_t here is Gaussian: the Gaussian describes the diffusive spreading of the displacement dr, and its center drifts along with this velocity, which is constant during the time step. This is the discretization: even though the force depends on space, it is taken constant for that time step. So the Euler algorithm actually generates the new position from the transition probability: the probability to be at position r at time t + dt, given that I started out from position r at time t. In other words, the Euler algorithm chooses a new position from the transition probability.

Once you've realized that, you can simply say: okay, all we have to do is generate the new position, when there is this wall, from the transition probability which actually takes the presence of the wall into account. That sounds simple, but in general we do not know it. If we knew it, we wouldn't have to do the simulation, right? We do the simulation because we cannot solve the equations of motion; we cannot write down the transition probability.
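Explicitly, the free Euler step samples the new position from the Gaussian propagator (standard notation, my transcription; v = F/γ is evaluated at the old position and held constant over the step):

```latex
P(\mathbf r,\,t+\mathrm{d}t \mid \mathbf r_t,\,t)
 \;=\; \frac{1}{(4\pi D\,\mathrm{d}t)^{d/2}}
   \exp\!\left[-\,\frac{\bigl(\mathbf r-\mathbf r_t-\mathbf v(\mathbf r_t)\,\mathrm{d}t\bigr)^{2}}{4D\,\mathrm{d}t}\right].
```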
So it seems almost hopeless, and this is the reason people come up with rejection schemes and so on. But since you are discretizing anyway, with the force constant during a step, then if your discretization is small enough you always have the situation that locally you see a flat wall, just as you locally see a constant force. So the question becomes: what is the transition probability close to a flat wall? Close to a flat wall you can decompose the motion into a perpendicular and a parallel component, which are uncorrelated; statistically, one can show that. The collision only happens in the perpendicular component; the parallel component is completely unaffected. (Imagine the displacement had only the parallel component: then you would not even see the wall.) So the collision happens in the perpendicular component, and that is a one-dimensional problem. We only need to know the transition probability for a one-dimensional motion with a reflecting boundary.

And this is actually known; it was calculated by Smoluchowski already in 1916. It is driven diffusion on a half line with a constant force, which enters here as the velocity. This is the diffusion equation; D is the diffusion coefficient. I call the one-dimensional coordinate q now, and there is a deterministic velocity v_q. The wall is represented by a reflecting boundary at the origin: the probability current through the origin is zero. The initial position is some q_0 at time t = 0. Smoluchowski calculated the full solution, which looks like this; it consists of three parts, and here I've written down the three parts. The details are not so important for the moment. What is important is that p_1, the first part, is exactly the free-diffusion part, as if the reflecting boundary were not there; and then you have two parts which are somehow corrections to that.
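The slide's formulas are not preserved in this transcript, but the standard form of the Smoluchowski solution for driven diffusion on the half line q ≥ 0 with a reflecting boundary at the origin is (my reconstruction; the notation may differ slightly from the slides):

```latex
\begin{aligned}
p(q,t\mid q_0) &= p_1 + p_2 + p_3, \qquad q,\,q_0 \ge 0,\\[4pt]
p_1 &= \frac{1}{\sqrt{4\pi D t}}\,
       \exp\!\left[-\frac{(q-q_0-v_q t)^2}{4Dt}\right],\\
p_2 &= \frac{1}{\sqrt{4\pi D t}}\;e^{-v_q q_0/D}\,
       \exp\!\left[-\frac{(q+q_0-v_q t)^2}{4Dt}\right],\\
p_3 &= -\,\frac{v_q}{2D}\;e^{\,v_q q/D}\,
       \operatorname{erfc}\!\left(\frac{q+q_0+v_q t}{\sqrt{4Dt}}\right).
\end{aligned}
```

One can check that this satisfies the zero-flux condition at q = 0, and that p_3 is negative for v_q > 0 (force pointing away from the wall), which is the negative sign the speaker remarks on.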
But keep in mind that this solution is only valid for positive q. So in total it is normalized over positive q, while p_1, being free diffusion, is normalized over the whole real line; the three together are normalized over positive q. So we know the exact transition probability, for any time, when there is a reflecting boundary at the origin, and this we can use.

I will show you a few pictures of this solution; the solution is always the black line, and here are the parameters. Negative v_q means that the deterministic force is pointing towards the origin, towards the reflecting wall. The dashed and dotted lines are an attempt to simulate this motion in 1D, close to a reflecting wall at the origin, using the rejection scheme: any time we find the position hitting the origin, or beyond the origin, we just discard the step and try a new one. The distribution that comes out of the rejection scheme for dt = 0.05, with the total evolution time fixed, is the red dotted line. If I decrease dt, the integration time step of the rejection scheme, by a factor of 10, I get the blue line; by a further factor of 10, so a factor of 100 in total, this dash-dotted line. You can see that it never really gets the distribution right.

And here is the same situation when you advance time while rejecting the steps. There you see a kind of delta peak at the initial position: this is where you have rejected, you just stay there, and time keeps advancing. This inset shows an enlargement of the event-driven scheme. The event-driven scheme for dt = 0.05 is very, very good: on the large scale it is indistinguishable from the black curve. But if you zoom in, you can see a small deviation. And I should add that for these parameters the collision probability, the probability to hit the origin, is actually only 8%, and still the rejection scheme is quite bad.
So you may have a problem if you use this blindly. There is another example here, which I skip because time is running.

So I come back to the solution. As I said, I have the free diffusion here and some corrections. You see also that here, for instance, there is a negative sign; that is not a typo, that is correct. For positive v_q, force pointing away from the wall, this contribution is actually negative. So you also cannot fix things by simply adding stuff to a first attempt, because there is a negative contribution here; it is more complicated.

So the idea is: because we have the free-diffusion part plus some correction, perform standard integration as long as the displacements do not lead to unphysical configurations. As long as the particle doesn't see the wall, just go on and do your standard integration. If you hit the wall, if there is a collision, replace the component along the collision axis, the one involved in the collision, this one-dimensional direction q, by a new displacement drawn from this distribution here: p_2 plus p_3, normalized. The W here is the integral from 0 to infinity of p_2 plus p_3, written that way just by using normalization; it is like the collision probability, if you like. So in case you encounter a collision, correct your dq displacement by using a random number drawn from this distribution.

Okay, so why does that work? There is a tiny little calculation. The collision axis, the one-dimensional direction in which the collision happens, towards the wall, is this half line from the Smoluchowski solution, the coordinate q, with the collision point at the origin. The random number created by my algorithm on this collision axis is the Gaussian q, if it is positive; this is the Heaviside theta function. So if this Gaussian q is positive, I keep it: I didn't hit the wall, so I don't change anything.
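The bookkeeping behind "why does that work" can be written compactly (my paraphrase of the calculation on the slide): the algorithm keeps a Gaussian draw q_1 ~ p_1 when it is positive, and otherwise redraws from the normalized collision distribution p_c = (p_2 + p_3)/W, so the density it actually generates is

```latex
p_{\text{alg}}(q)
 \;=\; \Theta(q)\,p_1(q) \;+\; \Pr(q_1<0)\,p_c(q)
 \;=\; p_1(q) \;+\; W\,\frac{p_2(q)+p_3(q)}{W}
 \;=\; p_1(q)+p_2(q)+p_3(q), \qquad q \ge 0,
```

where Pr(q_1 < 0) = ∫_{-∞}^{0} p_1 dq = W follows from p_1 being normalized on the whole line while p_1 + p_2 + p_3 is normalized on the half line.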
If it is negative, meaning I went through the origin and my end position is inside the wall, I instead take a q drawn from this collision distribution p_c, which is p_2 plus p_3 divided by W. Then I can calculate the distribution I actually get from this algorithm, just by the usual transformation of probabilities under a change of variables: there is the Gaussian for q_1, there is the collision distribution for q_2, and the delta function enforces the constraint that I keep q_1 when it is positive and take q_2 when q_1 is negative. I can do the calculation, and in the end it comes out: yes, this is exactly p_1(q) + p_2(q) + p_3(q). So this algorithm correctly reproduces the Smoluchowski solution.

That means that if the force is constant, and the Smoluchowski solution is for a constant force, I can do an exact integration step close to a wall with an arbitrarily large time step, because I have the exact transition probability; this is what I am going to exploit. The collision direction is this dr-perpendicular. So I go on with my integration using the standard Euler scheme; maybe I have to do a correction, which is why I now write dr_t-star here: this is the final step I really accept after the algorithm has been applied. If there is no collision, I can just use the free Euler step. If there is a collision, I have to do a correction, with this q minus q_0, with this dq taken from the Smoluchowski p_2-plus-p_3 distribution. The n here is the normal vector at the wall. So from the displacement I calculated before, I subtract the normal component and I add the new dq; this is basically what happens. I take out the normal component, because it is not correct, and I replace it by a correct one, and this is done in that equation.
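As a hedged sketch of this correction step (all names and interfaces are mine): the wall-normal component of the trial displacement is replaced by a fresh perpendicular displacement dq_new drawn from the normalized p_2 + p_3 distribution, which in practice is sampled by numerically inverting its cumulative, as the speaker explains later. A generic bisection inverter is enough for the sketch:

```python
def corrected_step(dr, n, dq_new):
    """Hard-wall correction of a trial Euler displacement dr.

    n is the unit normal of the locally flat wall; dq_new is the new
    perpendicular displacement drawn from the normalized p2+p3 density.
    The parallel components of dr are kept; the normal one is replaced.
    """
    dn = sum(di * ni for di, ni in zip(dr, n))          # normal component
    return [di - dn * ni + dq_new * ni for di, ni in zip(dr, n)]

def invert_cdf(cdf, u, lo, hi, tol=1e-12):
    """Invert a monotone CDF by bisection: find q in [lo, hi] with cdf(q) = u."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cdf(mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Feeding invert_cdf a uniformly distributed random number u yields a sample distributed according to the given CDF (inverse-transform sampling).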
This is not such a complicated equation, and the algorithm is in the end quite simple. What is important, and what makes this possible, is that dr-perpendicular and dr-parallel are uncorrelated, because I change the statistics of the perpendicular component only. If it were correlated with the parallel component, I would have to do something there too, to keep the correlations; but one can show that these components are uncorrelated, so everything is fine. That is also the guiding idea if I want to generalize: I look for coordinates which are statistically uncorrelated.

In the same way one can treat collisions between two particles; the final formulas are these here. This case is interesting: imagine a collision between two particles which are not of the same size, so they feel different friction. It is not clear from the very beginning how to separate them so that everything is correct statistically. Do I move the larger particle a little less and the smaller one a little more, or what do I have to do? This formula tells you: it is related to the ratio of the friction coefficients, and this e vector is the collision direction. The formula can be derived by transforming the whole motion to a center-of-mass coordinate and a relative coordinate; you then show that these coordinates are uncorrelated, and in the relative motion you correct the direction connected to the collision axis of the two particles. So the idea is the same, and the final results, after some calculation, are these formulas. I don't want to say more; you can look it up in the papers. I just wanted to give you the idea of how to do it.

There are many other examples, but I like this one, because you might think: okay, with the rejection scheme or the event-driven scheme I just make the time step small enough, and then the distribution may not be fully correct, but I don't care.
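The transcript does not preserve the slide's two-particle formulas, but the decomposition the speaker describes can be sketched as follows (my reconstruction under standard assumptions, using the Einstein relation γ_i D_i = k_B T): the friction-weighted center and the relative coordinate,

```latex
\mathbf R \;=\; \frac{\gamma_1 \mathbf r_1 + \gamma_2 \mathbf r_2}{\gamma_1+\gamma_2},
\qquad
\Delta\mathbf r \;=\; \mathbf r_1 - \mathbf r_2,
\qquad
\langle \mathrm{d}R_i\,\mathrm{d}\Delta r_i\rangle \;\propto\; \gamma_1 D_1 - \gamma_2 D_2 \;=\; 0,
```

are statistically uncorrelated. The wall-type correction is then applied to Δr along the collision direction e, and the result is distributed back to r_1 and r_2 with weights set by the friction ratio, which is how the asymmetry between a large and a small particle enters.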
I don't need the fully correct distribution very close to the wall. The thing is, sometimes this does not work at all. For instance, and this is how I actually stumbled into this problem: imagine a particle which is driven by the deterministic force slightly into the wall, and you are interested in the escape of the particle from a certain region. It moves this way and diffuses at the same time, and at some point it will cross that gray line; that is my goal, that is where I want it to go, and I want to measure the time it takes: the mean first-passage time for the particle from here until it crosses that line.

(Let me tell you that you have five minutes left. Okay, thank you.)

If you do that simulation, then the mean first-passage time is this line here, as a function of the deterministic velocity, so of the driving force in the end; the force points into the wall and has a component parallel to the wall, and the velocity is F divided by the friction coefficient. This is calculated for a particle radius of one micrometer in aqueous solution at room temperature, and this is the integration time step. Here you see the mean first-passage time calculated by our algorithm, and also by the event-driven scheme; you cannot see a difference. So the event-driven scheme is fine for this problem. But the rejection scheme, the blue dotted line, gets it completely wrong for large forces, and the reason is that the particle basically gets stuck at the wall: the force is so large that almost all the attempts to find a new displacement end up in the wall, and somehow you get stuck. Your whole simulation stops, and the times, say if you want to simulate association times of molecules, come out completely wrong. Another thing, shown in the inset, is the computational time, in arbitrary units.
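A mean first-passage time measurement of the kind described above can be sketched generically; any of the discussed step rules (rejection, event-driven, exact) can be plugged in as `step`. This is a toy illustration with invented parameters, not the paper's setup:

```python
def mean_first_passage_time(step, q0, q_target, dt, n_runs=100):
    """Average the time until trajectories started at q0 first reach q_target.

    step(q) -> new q advances the coordinate by one time step of length dt.
    """
    total = 0.0
    for _ in range(n_runs):
        q, t = q0, 0.0
        while q < q_target:
            q = step(q)
            t += dt
        total += t
    return total / n_runs
```

With a purely deterministic drift v the estimate reduces to (q_target - q0)/v, which makes a convenient sanity check.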
The black line is our algorithm, which makes use of the exact solution, so it does not waste time, in a way. This dotted line again is the rejection scheme, whose cost also goes up because it has to throw away a lot of steps. And the dashed line is the event-driven scheme, which gives the correct value for the mean first-passage time, but whose computational time goes up very much for large forces, because in the end it also has to discard and repeat a lot of steps. There are two curves because there are two slightly different variants of the event-driven scheme, which I don't want to go into now. But you see that, from a computational-efficiency perspective, the method that is correct by construction also gives you an advantage.

Okay, a few words about practice. You need detection of collisions, as I already said; you need that for any integration scheme, I have not talked about it, and it is a geometrical problem that may actually be quite difficult, but it is something you have to solve independently. The integration time step in our method is determined by the variations of F and by the curvature of structures or particles. Normally you are used to choosing a time step small enough that, over a typical displacement, the deterministic forces are virtually constant; then your integration scheme is fine. Now, since you are also simulating structures, the structures additionally have to appear flat on that scale, like straight flat walls. That is a second constraint on your integration time step dt, but it is like the usual discretization: you just have to choose it small enough for the system at hand.

Another point is how to generate a random number according to this collision distribution, which involves error functions and so on. The method, I think, we found in Numerical Recipes: you calculate the cumulative distribution, and the inverse of the cumulative, applied to a uniformly distributed random number, gives you a random number distributed according to your target distribution. The thing is that this function cannot be inverted analytically, so you have to do it numerically, for instance with some scheme from the GNU Scientific Library. This slows down the simulation a little, but as I showed before, it is still faster than the other methods.

Okay, that is basically my summary. What I presented is part of these two papers, and, as you see, it is already almost ten years ago that we did this. We have ever since been thinking about extensions or generalizations, but it turned out to be very difficult. For instance, what happens for non-spherical particles? There you also have rotational degrees of freedom, and once the particles collide, you have to distribute the correction along the collision axis over the rotation and the translation in a correct way; that is not so easy to figure out, so we don't know yet. Then, what happens in corners, or if you have wedges in your structure? There you can say you have an infinite curvature, so the algorithm in that form also does not work. And what happens if you have really many particles, so that there are not only two-particle collisions but three- or four-particle collisions within one time step dt? Of course you could decrease the time step dt, make it smaller and smaller so that this happens rarely, but then you lose the advantage of the algorithm, because you again have to go to very small time steps. So it would be good to have an extension which takes into account, say, three-body or even four-body collisions, but honestly I don't think that is possible, at least not at the level at which we did it here. Okay, thank you very much for your attention.

Thank you, thank you, Ralf. So you can join me in thanking the speaker. Thank you very much for being perfectly on time, and especially for
this very clear and didactical talk; I think it is perfectly suited to the school. So we can take questions in the chat; I will read them out for you. Just to get started, let me ask a question. It is the first time that I hear about this topic, and it is really fascinating. I was wondering, regarding the more challenging problem of an anisotropic object, where you have to deal also with rotations and so on: I understand the challenge, but how about existing algorithms? Is there, let's say, an event-driven scheme that does this job of distributing the rotation and so on, or would the one you are working on be the first algorithm ever to do that?

No, it would not be the first algorithm ever. You can actually use the event-driven scheme that I discussed here; let me go back. The event-driven scheme, just the idea: you propagate until the point where the particle collides, and this you can do no matter how your particles look, how they rotate and how they move; then you have to think about what to do with the remaining time step. The question is also, of course, does that lead to the correct distribution? I don't know; at least to a distribution which is close to the correct one, let's say. But I suspect yes, because, and this is something I did not discuss, we can actually show that this event-driven scheme correctly captures the p_2 contribution of the Smoluchowski solution. The p_2 contribution can be associated with exactly one collision during the time step, and this is correctly captured by the event-driven scheme. So I suspect that this would be a good starting point. But it is also numerically expensive, because to find the collision point you have to do something like a root search, more or less exactly like what I have to do numerically to find the random number q: you cannot find that collision point analytically.

So we have some questions starting to appear in the chat box. The first one is from Samandata, who thanks you for the talk and then asks: is it possible to extend the analysis to fluids in channels or other types of confinement? If yes, what modifications does one need?

There are basically no modifications; you can do it directly, in the simple situation where you do not take into account hydrodynamic interactions between particles, or between particles and the wall. If your system is such that you can model the fluid flow in the channel by, say, a velocity field which drives the particle, then it is exactly the setting here, nothing unusual. The channel walls you can represent by some geometry, but then, as I said a few times, the task is also to detect the unphysical configurations: a part of your simulation algorithm has to tell you when the particles are outside the channel. For applying the algorithm I suggest, you also have to know the distance from the wall and the normal at the collision point. That is what you have to know to use the algorithm, but you can apply it directly, exactly as I described it, if you don't want to include hydrodynamic interactions.

While we wait for questions, let me ask: is there a publicly available code or implementation of your algorithm? Unfortunately not, but anybody who is interested, please drop me an email and I can send you the code. I have it in C, and when you use a library which can handle vectors and so on, it is a very small, very simple code in the end. Okay, so it is quite transferable from code to code. Very well.

So, are there questions from the students? Can I ask a question? Yeah, please go ahead. Well, thanks for the great talk. I have a question, which is the following: can you extend the algorithm to the case of space-dependent diffusion, even in a simple example? Yes, I think so. Then your discretization also has to be small enough that the diffusion coefficient doesn't change significantly over a typical displacement, but otherwise it is fine. And then, of course, you have to take care of the multiplicative noise: you have to know which interpretation you are using, Itô or Stratonovich, in the overdamped case.

Another, unrelated question: you can reproduce the mean first-passage time, but is the algorithm able to reproduce the full distribution of the first-passage time? The full distribution of the first-passage time? Yes, yes. I think we have a picture in the publication; let me see if I can find it quickly. I think we looked at the full distribution. We have lots of examples here... no, it is not in here, but we tried it. So the answer is yes, the algorithm can reproduce the first-passage time distribution. Thanks.

So, other questions? There is a further question from Samandata, and meanwhile I invite others to ask questions if there are any. The second question is: if we add some activity, by any means, like shape asymmetry or orientational fluctuations, etc., could that work in the same way? Because adding activity would be just like adding one more noise term. And Samandata would be interested in receiving the code, so please contact the speaker for that. So, Ralf? Yes: if you can represent the activity by a force term which enters here like an external force, as for an active Brownian particle, where you would have rotational diffusion and a force which depends on this orientation, then you can add it here, and it works exactly the same way, yes.

Okay, questions from others? I think the chat box is complete, so thank you, Ralf, thank you for this very clear talk; join me in thanking the speaker again. No pressure... thank you. Now, of course, participants may contact you for follow-ups about the code or other details; I hope you are available. We take a 15-minute break before the talk of Professor Ralf Everaers, who has already joined us. Thank you very much again. So now I am asking the ICTP technical support
if we should try out the setup of Professor Everaers. Okay, so first of all we would like to check the microphone, but I think it was working before. I have also seen that he already shared his presentation, but somehow it was in presenter mode, we didn't see the full-screen one. Let me just try again. Looks okay? Okay, because now I'm sharing a second screen, and this is where the PowerPoint goes; I didn't know how to do that at first. So Ralf, I will alert you when you have 5 minutes left to the end of your talk. The other Ralf, are you still there? Who? He was there a minute ago. Yes, he's still there. I'm here.

Now, a stupid question: this whole collision problem is infinitely simpler for ballistic motion, because the particle doesn't cross the boundary several times, so in that case I think you can just reflect its position with respect to the boundary, and that's much easier to handle. Exactly. And so I've just been playing, for my lecture, with multi-particle collision dynamics, and in that case the streaming step would be ballistic, so I handle the boundary conditions after the ballistic streaming step, which is not so difficult, and then the diffusive behavior would emerge from the collision step, where the collisions would always, by definition, take place in the good part of the system, because I've put the particles back where they belong. I have four weeks' experience with this; when I started programming it, it intrigued me as an algorithm which is simple, and maybe you can avoid all sorts of problems. Yes, but this algorithm simulates the solvent, these are the solvent particles; it's a way to simulate the solvent, and then you have to embed something in it. So when I want to simulate the Brownian motion of one particle, I solve one Langevin equation; if you want to simulate the Brownian motion of one particle, you solve I don't know how many equations. I agree. All
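The reflection trick mentioned here can be sketched in a few lines (an illustration under my own conventions, not code from either speaker): for straight-line motion past a single flat wall, mirroring the end position at the wall and reversing the velocity handles the boundary exactly, because a ballistic particle crosses a given wall at most once per streaming step.

```python
import numpy as np

def stream_and_reflect(x, v, dt, x_wall=1.0):
    """One ballistic streaming step with specular reflection at a hard wall
    at x = x_wall (positions assumed to start at x < x_wall)."""
    x_new = x + v * dt
    crossed = x_new > x_wall
    x_new = np.where(crossed, 2.0 * x_wall - x_new, x_new)  # mirror at the wall
    v_new = np.where(crossed, -v, v)                        # reverse velocity
    return x_new, v_new

x, v = stream_and_reflect(np.array([0.9]), np.array([2.0]), dt=0.1)
# the particle would reach 1.1, i.e. 0.1 beyond the wall, and ends up at 0.9
```

For diffusive (Brownian) steps this simple mirroring is not exact, precisely because the trajectory can cross and re-cross the wall within one step, which is the point being made in the discussion.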
these methods, of course, if you need hydrodynamic interactions, are much better than a Langevin approach, I think, because they are a mesoscopic way of simulating the solvent. You can do DPD, you can do multi-particle collision dynamics, you can do lattice Boltzmann; I think these are the three most common ones where you include the solvent in your simulations on a mesoscopic scale. Yes, it depends on the question, it depends on what you want to know about your system; for a single particle you would...

Well, it's a coffee-break one. So, Daniel, I'm almost in front of you. I really didn't need it, but now it's really going. I'm still here. Hi, how is the situation in Lyon? Well, who knows. I'm at home because I was feeling shaky, and we seem to be beating all sorts of records in terms of how many people get infected. Also Lyon; Lyon is worse than Paris. Oh wow. Okay, the worst is Marseille, and now Lyon, so who knows. Because Italy is still cool, no? Italy is still okay; schools have reopened since last week, everything is okay. I don't know why, but I'm a bit surprised by Spain; I mean, they should have had a scare. Yeah, it's very heterogeneous across Europe, very strange. Okay, I see most people have come back, so I think we can start on time in one minute, and I'm asking for confirmation that the recording has started. Good.

Okay, so we are all set, I think we can start. So welcome back to the second and last session of this afternoon. It is a pleasure to introduce a long-time friend and colleague who is based in Lyon, and he will give a talk about understanding the large-scale structure of interphase nuclei at different stages of embryonic development in Drosophila. The floor is yours. Thank you for inviting me, and I guess I have to apologize for that horrible title, so I will try to make up for it by explaining it word for word. Well, you know the little fruit fly, one of the model organisms of biology on which a lot of research is done. Then the title said something about different stages of embryonic development: the life of a fruit fly
starts as a fertilized egg, which is a single cell, and then the cell starts to divide and to differentiate, and there are different stages and cycles in this process, depending on whether you look at how the embryo looks in the microscope or whether you count cell divisions. This is an extremely well-regulated process, and so whatever we want to understand, we want to understand it at different stages of this process; we will see that again in the second half of the talk. So that refers to the different stages of the cell cycle: a cell grows, then it duplicates its genome, it divides, and then everything starts all over again, and interphase is the stage of the cell cycle in between divisions. You could say this is the most boring part, but it's also the part where the cell spends most of its time, and so we look at interphase. Then we look at nuclei: the cell nucleus of eukaryotic cells, in higher organisms, is one of the organelles of the cell; it is membrane-bound and it contains the genome, the chromosomes, of the cell. Then there is the large-scale structure of interphase nuclei. What one knows about this structure: the two dominant effects are, first, the existence of chromosome territories. That's a non-trivial thing, and it's an experimental observation. What chromosomes do during interphase is they decondense; a priori you can't see them anymore, you can't distinguish them anymore, but if you find ingenious ways of painting them, then you can really see that these chromosomes do not really mix, which is not expected from the simplest models one might use to think about these problems. The second is an organization into compartments. You could say it's orthogonal to what I just said about chromosomes: it says something about eu- and heterochromatin, which I will explain in one sentence on the next slide. So there are different parts of chromosomes, but each chromosome has both
types of chromatin, and the euchromatin and the heterochromatin each try to aggregate and to phase-separate from each other, and in this kind of organization you would typically find one of these two types at the membrane and the other one in the center of the nucleus. Okay, now comes the last word, "understanding". So what do we mean by understanding? What I mean is that I want to use tools from polymer physics to understand this organization, because if you take a closer look at the system, then what you see, to quite some extent, and very appropriate for a school in Italy, is a bowl of spaghetti: very long polymers, the DNA or chromatin, and that has profound effects on the way things are organized. Now, instead of walking you through the whole hierarchy of organization that starts at the double helix, and then there are histones, the chromatin fiber, and so on and so on, maybe what I should do is just put down a few length scales so that we know what we're talking about. The smallest scale is something like the diameter of the DNA double helix; here you see the picture of Watson and Crick with their famous model, and that is of the order of nanometers. For the other picture I just showed you, what is the diameter of a nucleus? It depends on what kind of cell we're talking about, but it's in the range of microns. Now, if you ask a different question, if you take the DNA of one chromosome and you just stretch it out and ask what is the length of this double helix end to end, then this is in fact in the centimeter range. So these things are gigantic; these are macromolecules, and they have a length which is macroscopic. If you take all the DNA in one nucleus, so for humans we have 46 chromosomes, then you end up somewhere in the meter range. And now if you do something which is totally silly, you just say, well, let me take all
the cells in our body and multiply this meter by the number of cells in our body, which gives the total contour length of DNA in our body, what you end up with is something which is astronomical. Take that as a motivation for why we should deal with long macromolecules and their properties. So now, the lessons from polymer physics that I will try to illustrate are the following. There's one thing which is very important for the dynamics: topological constraints. That sounds like a fancy word, but it just means that if you look at that bowl of spaghetti, then a priori these microscopic spaghetti cannot do anything that macroscopic spaghetti cannot do, and macroscopic spaghetti cannot cross through each other. This is called a topological constraint because it's the thing which is at the heart of, for example, knot theory. The second lesson from polymer physics is that things can become very slow, and, because these molecules are very long, even very weak interactions between the different spaghetti, if you like, can nevertheless cause the emergence of large-scale spatial structure. So that's polymer physics. And polymer physicists would say one more thing, which is of course what physicists love, and I love very much: if we want to understand these things, then to a very large extent they are universal. They are the same for, okay, I keep saying spaghetti, different kinds of macromolecules, which are nevertheless microscopic, of course. So we keep saying, well, we don't really need to worry about all the microscopic details of the system. And then if we talk to a biologist, the biologist will say, oh my god, the details are everything. So there is a tension between these two views that somehow we need to resolve, and since this is a school about simulations, I guess I'm trying to make the point that simulations are a very good tool for doing that. So what I want to do
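The "totally silly" multiplication can be made explicit with rough numbers (the human cell count is an assumed order-of-magnitude figure, not from the talk):

```python
# Order-of-magnitude estimate of the total contour length of DNA in a human body.
dna_per_cell_m = 2.0       # all DNA in one nucleus: metre range (from the talk)
cells_in_body = 3e13       # assumed rough human cell count
au_m = 1.496e11            # one astronomical unit in metres

total_dna_m = dna_per_cell_m * cells_in_body
print(total_dna_m / au_m)  # a few hundred astronomical units: literally astronomical
```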
is I'll talk first a little bit about the physics aspects, and then the second part is actually trying to look at Drosophila at different stages of development. So the first physics question concerns this importance of topological constraints that I postulated for the structure and the dynamics of interphase chromosomes. Essentially it comes down to the question: can polymer physics tell us anything about, if we blow everything up to a macroscopic scale, how to arrange 46 spaghetti of length 100 meters in a box of one cubic meter? That is how I'm thinking about the human nucleus. Well, if I want to model this as a polymer problem, what I do immediately is throw away, as I announced, all the details; I just need a description of a chain molecule, and essentially you need to specify two things: how long such a chain molecule is, the contour length, and how flexible it is. Typically one uses the persistence length or the Kuhn length, which says at which scale such a molecule starts to bend under the influence of thermal fluctuations. So when we model chromatin, at least in the first part, for different organisms, we don't care so much about the details; the difference is just the length of these chains. That's the structure of these spaghetti, if you like. One important point is the dynamics: how can we relate our results to real experiments? What you see in this plot, the blue points, are data points from an experiment where they fluorescently label a locus and follow its motion, and the red points come out of this kind of simulation, and by just matching the time scales you get an idea of where you are in this description. The point to make is that this is a very coarse-grained description. You already had talks about RNA and things like that; those are atomistic models, and they are simulated
at the femtosecond scale. Now these are coarse-grained models, so the units are much bigger, but it also means that the characteristic time scale is much bigger, and that's the reason why we can simulate time scales which are also much bigger than anything you could do with a microscopic model. Now, in technical terms, how do we do that? The model that we typically use is a very simple one: I need to model this chain molecule, and I model it as a bead-spring polymer. These chains are composed of beads, which typically have Lennard-Jones interactions or purely repulsive versions of that, and the beads are connected by springs, which gives you, if you like, a linear chain molecule, but you could implement any kind of connectivity. Then we can tune the stiffness of these chains; as I just told you, that is one important aspect when one wants to map to an experimental system. And then again, this business about topology: the energy barrier for two chains to cross, for this kind of model, is of the order of 70 kT, so it doesn't really happen in the simulation unless you force the system. What we use is a variant of the Kremer-Grest model, and Kremer and Grest are famous for this kind of simulation and this kind of picture, again related to these topological constraints, where the theorists were saying: okay, the consequence is that a chain, because it is surrounded by all sorts of other chains which block its sideways motion, can only really move along its coarse-grained contour. It was this kind of model and this kind of simulation where these things were seen for the first time, and that was a big step forward for polymer physics. So we use these kinds of models. Together with Angelo, and maybe you've already seen this animation, what we did is we wanted to simulate the decondensation of a chromosome. So we imagine we start at the point in time when the cell is
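The two ingredients just described, purely repulsive Lennard-Jones beads plus finitely extensible springs, can be written down directly. A sketch in reduced Lennard-Jones units; the parameter values are the commonly quoted Kremer-Grest ones, assumed here rather than taken from the talk:

```python
import numpy as np

eps, sigma = 1.0, 1.0
r_cut = 2.0 ** (1.0 / 6.0) * sigma   # WCA cutoff: repulsive part of LJ only

def u_wca(r):
    """Purely repulsive (WCA) pair potential acting between all beads."""
    if r >= r_cut:
        return 0.0
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6) + eps  # shifted to zero at the cutoff

def u_fene(r, k=30.0, r0=1.5):
    """FENE spring between bonded neighbours; it diverges at r = r0, so
    bonds cannot stretch far enough to let two chains pass through each
    other -- this is what enforces the topological constraints."""
    return -0.5 * k * r0 * r0 * np.log(1.0 - (r / r0) ** 2)
```

With these standard parameters the combination yields the effective chain-crossing barrier of order 70 kT mentioned in the talk.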
dividing, when the chromosomes are distinct and condensed, and then in the next step they can decondense. We just said, well, okay, we're going to use these very long chains and we're going to let them decondense in a finite volume. You see just one chain and you see a box, but there are other chains, and there are periodic boundary conditions, so that fixes the density. The initial configuration is very strongly folded, and then, whoops, we let this thing run, if I click on the right thing, and we let it run and we let it run, for a very long time; after the time mapping that corresponds to something like three days in real time. When you do that and you look at what you get at the end of this process, you see a snapshot of the last configuration, and what you see is in fact the emergence of territories, like what is seen biologically. And we can even be a bit more quantitative by measuring something like the typical spatial distance between two sites as a function of their genomic distance. What you see then is that the lines come out of the simulations and the data points come out of experiment, and the funny thing is that there are no adjustable parameters in this comparison. That doesn't mean it's correct, but it's not bad. So what does that mean, what can we do with this? Well, it's encouraging if you want to do modeling of a biological system, because it means we can make a comparison between what we get from a relatively simple polymer model and real experimental data for a biological system.
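The quantity compared to experiment here, mean spatial distance as a function of genomic distance, is straightforward to compute from simulated bead coordinates (a sketch with assumed array conventions, illustrated on a toy random-walk chain rather than real simulation data):

```python
import numpy as np

def mean_spatial_distance(coords, s_max=None):
    """Mean 3D distance between beads i and i+s, averaged over i, for each
    genomic separation s = 1 .. s_max. coords has shape (n_beads, 3)."""
    n = len(coords)
    s_max = s_max or n - 1
    out = np.empty(s_max)
    for s in range(1, s_max + 1):
        out[s - 1] = np.linalg.norm(coords[s:] - coords[:-s], axis=1).mean()
    return out

# For an ideal chain this grows like s**(1/2); territorial, crumpled
# chromosomes show a flatter, roughly s**(1/3), growth at large s.
rng = np.random.default_rng(1)
walk = np.cumsum(rng.normal(size=(1000, 3)), axis=0)  # toy random-walk chain
r_of_s = mean_spatial_distance(walk, s_max=100)
```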
Now the second question is how do we understand what we see, and can we build a theory? This means that this time we're going to compare the simulation to theories, or to simulations of even simpler polymer models, and maybe these two slides are the most important slides of the talk. This is a school, so I'm not so much talking about how you do things, but maybe this gives you an idea of why you want to do them, because it puts you in a unique position, right at the center of these problems: you can really, quantitatively, address whether models are correct and whether theories are correct. What one learns in terms of theory is that in fact you don't really have to worry so much about the motion of the chain ends, because what these chromosomes show is territorial behavior, which is something polymer physicists know from ring polymers, non-concatenated ring polymers; they show the same kind of behavior. It is not obvious why that should be so, because chromosomes are linear, and linear chains usually interpenetrate; they mix, they entangle, there's nothing that stops them, and it's only the ring polymers that don't do that. So it's a bit strange, you could say: why do the chromosomes remain unentangled? If I say "remain", I mean we could say they are not entangled with each other at the beginning: during cell division they are separate from each other, then we let them decondense, and we made the assumption that they are also internally unknotted. Then what you can do, if you map this problem onto what people know from polymer physics, is estimate the time it takes to really equilibrate the system, to build all the random knots that should be in there. If you do that for human chromosomes you find it's something like five hundred years; that's too long. And for Drosophila, which will occupy us a little later, it's still five years. So you don't need to worry about
thermal equilibrium for these biological systems in that regard anyway. Okay, so the second part is about epigenetic-state-dependent effects; well, those are again nasty words. So far I treated my chromosomes, my chromatin fiber, this spaghetti model, as if everything were the same, and that's maybe overdoing it a little bit, because in fact the properties of this chromatin fiber are position dependent. Obviously there's the sequence, and then there are things which come under the heading of epigenetics: the DNA can be methylated or not, and, very importantly, it can be associated with all sorts of proteins; proteins make up roughly 50% of the chromatin fiber, and this is highly regulated. What people found is that this is not random at all; in fact, what you find is a polymer of characteristic types of chromatin fiber, and maybe the only one that you should keep in mind is the red one, because I'm going to use the same colors later on: red means the active parts of the chromatin fiber, where we find actively transcribed genes. So we are not really dealing with a homopolymer, and that is not really so shocking for polymer physicists, because polymer physicists have been looking at what are called block copolymers for a long time: polymers composed of different constituents, which are very interesting and very exciting because they can actually fold into larger-scale structures. And the same here: one should maybe use a block copolymer model, where beads of a certain color represent parts of the chromatin fiber of a certain type, of the kind that I just described to you.
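A block-copolymer version of the bead-spring model only changes the pair interactions: the attraction strength depends on the epigenetic type of the two beads. A minimal sketch; the type labels and the single attraction value are illustrative assumptions, not the fitted parameter from the talk:

```python
import numpy as np

# 0 = active/euchromatin ("red"), 1 = heterochromatin ("green") -- assumed labels
chain_types = np.array([0, 0, 1, 1, 1, 0, 1, 1])

# Pairwise attraction strengths (in kT, assumed value); here only
# heterochromatin-heterochromatin contacts attract.
eps_attr = np.array([[0.0, 0.0],
                     [0.0, 0.3]])

def pair_attraction(i, j):
    """Attraction strength between beads i and j of the copolymer chain."""
    return eps_attr[chain_types[i], chain_types[j]]
```

In a full simulation this type-dependent attraction would be added on top of the bead-spring potentials; the single entry of `eps_attr` plays the role of the one adjustable parameter mentioned later in the talk.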
Okay, so now we arrive at Drosophila. I already showed you this little sketch of the different cycles of development, and I will not dive into that too deeply, because this is really not my specialty, but let's at least characterize what the genome of Drosophila is. Drosophila has four different chromosomes, so not 23 pairs like us; you can feel very superior, but I don't think that makes much of a difference. The fourth one is extremely small, but that doesn't matter. Okay, so what we could do now is adapt the kind of modeling that I've shown you so far to this specific case, and we don't want to do that just as a theoretical exercise; this was part of a collaboration with an experimental group. What Yuki Ogiyama and Giacomo Cavalli did was Hi-C experiments. What that technique allows you to do is to measure contact probabilities genome-wide, so you get a contact probability between any site and any other site, and the resulting plots are of the kind that you see down there. I will discuss them a little more, but there's a color code: red means there's lots of contact, and lots of contact on the diagonal means there are lots of contacts between genomic sites which are close to each other along the chain. Complementary to that, we did simulations which are essentially the kind of simulations that I showed you, that I did a hundred years ago; the difference is that now they may have this additional coloring, which you see in the upper left corner, which corresponds to these different epigenetic states, and we know from independent experiments where to find which state. Okay, so what do the experimentalists find when they look at their data? This is data from very early stages of development. These maps are symmetric, so what I will always do is show you just one half of
this experimental data, and then the simulations are going to be in the other half of the matrix. As I told you, this technique allows you to measure contact probabilities genome-wide. The little boxes you see here refer to the different chromosomes: this is the left and the right arm of chromosome 2, the left and the right arm of chromosome 3, the very small chromosome 4, and the X chromosome. Okay, so what does one see? The first thing, as I already said, is that most contacts are located along the diagonal. This is not surprising at all; it just means that genomic sites are close to each other in space because they are close to each other along a one-dimensional chain. How could they not be? Now, the second feature is what you find here; this is a correlation between the left and the right arm of chromosome 2, and you see this pattern, an X-like pattern. How can you understand that? In fact, if you look at drawings from like a hundred years ago of what people saw in the microscope, then you can see that these contacts just correspond to contacts between sites on the two arms at equal distance from this middle thing here, the centromere, where during cell division the machinery docks and pulls the chromosomes toward the two daughter cells. So you see this V-like or U-like folding, and these are the characteristic Rabl territories for Drosophila. I don't know if you can see this so well at this resolution, but here again you can see an X; this is a correlation between the two arms of chromosome 2 and the two arms of chromosome 3. What it means, again in this drawing, is that these are contacts between arms on different chromosomes, and this correlation shows you that the arrangement is not randomly oriented in the cell.
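On the simulation side, a Hi-C-like map is essentially a distance threshold applied to bead coordinates and then averaged over many configurations. A single-configuration sketch (the cutoff value is an arbitrary assumption):

```python
import numpy as np

def contact_map(coords, cutoff=2.0):
    """Binary contact matrix of one configuration: entry (i, j) is 1 if beads
    i and j are closer than `cutoff`. Averaging such matrices over many
    independent configurations gives a contact-probability map comparable,
    half-matrix against half-matrix, with experimental Hi-C data."""
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return (dist < cutoff).astype(float)

m = contact_map(np.array([[0.0, 0.0, 0.0],
                          [1.0, 0.0, 0.0],
                          [5.0, 0.0, 0.0]]))
```

The all-pairs distance computation here is O(n²) in memory, fine for a sketch; production code would bin genomic positions and use neighbour lists.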
So now comes the simulation; let's see if this thing can run. We put in the right number of chains, the right lengths and so on; the colors now refer to chromosome identity, and then we just let this run exactly as before. And we don't do this just once, we do it many, many times, and then we average over the results. What you get is again a contact map, something we can measure in the same way as the experimentalists, and then we can compare the two. What you see is that this looks fairly reasonable; it's not perfect, but it looks fairly reasonable. Now we can advance, we can look at later stages of this embryonic development, and what happens there is just that the duration of interphase is a bit longer, so we let the simulations run a bit longer, and there's a bit more intermingling than before, and this still looks reasonably reasonable. And even later, suddenly it looks totally different. Admittedly it's a bit of cheating, because this is data for late embryos, so they are not at this stage anymore; they are almost ready to be born, or whatever insects do. These maps are pretty different: if I go back one, you had these Rabl territories, these X shapes, very characteristic; here you don't see much. You hardly see any characteristic contacts between different chromosomes; most contacts are within arms of chromosomes. If we want to reproduce that, and you can see that we do with this kind of simulation, what we need to do is assume that these chains are in a different initial state, one which is much more isotropic, before they start to decondense. (I remind you, you have a few minutes left. Yes, that's perfect, I'm good.) So now, what the experimentalists do is they don't care so much about this large scale; what they do is zoom into these maps, and the stunning thing is really the resolution that they have with this kind of technique, so they can zoom in and zoom in and zoom in; it's quite
remarkable. Okay, so now we do the same thing, we zoom in, and we do this first in the early stages of development. So you zoom in, this is one arm, I've forgotten which one, from this overall map, and then we zoom in even further, and you essentially don't see much happening. Again, in the later cycles, up to cycle 13, not much is happening. And then suddenly something happens: you zoom in at cycle 14, and what you see is structure. Structure means that it depends on the identity of the genomic sites whether they have more or fewer than average contacts, so you find these so-called TADs, local zones where there are many more interactions than before. The interesting thing is that this happens at a point in development which is called the maternal-to-zygotic transition, which is in fact when these developing embryos start to express their own genes, and something happens that generates structure in the genome. In fact, if you look at data for the late embryo, there's not even a big difference compared to cycle 14, so that's something the biologists find extremely interesting. And then, I'm always nasty, so I'm saying: okay, on the left-hand side you see the observed data; if you just average along the diagonals, that's what they call "expected", and the dynamic range of the contact probability is something like 4000. If you divide observed by expected, you see patterns, but the dynamic range of these patterns is much, much smaller, it's 16. This averaged behavior is what we reproduced, this is what I've shown you before; so now we have to do a bit more work to also reproduce these patterns. As I already told you, what we're going to do is use a block copolymer model, so now the coloring is in terms of epigenetic identity: you could say the green beads are heterochromatin and the red beads are actively transcribed genes. Okay, so we put this in. Here I show you in fact very simple simulations that just
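The observed-over-expected normalization just mentioned divides each matrix entry by the average of its diagonal, so the strong dependence on genomic distance drops out and the much weaker position-specific patterns (TADs, compartments) become visible. A sketch:

```python
import numpy as np

def observed_over_expected(m):
    """Divide a symmetric contact matrix by its per-diagonal mean, i.e. the
    'expected' contact frequency at each genomic distance."""
    n = m.shape[0]
    expected = np.empty_like(m, dtype=float)
    for s in range(n):
        diag_mean = np.diagonal(m, offset=s).mean()
        idx = np.arange(n - s)
        expected[idx, idx + s] = diag_mean
        expected[idx + s, idx] = diag_mean
    return m / np.where(expected == 0.0, 1.0, expected)

# A map whose entries depend only on genomic distance normalizes to all ones.
flat = observed_over_expected(np.array([[3.0, 2.0, 1.0],
                                        [2.0, 3.0, 2.0],
                                        [1.0, 2.0, 3.0]]))
```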
distinguish active and non-active domains, and you can't even tell the difference from what I showed you before: on larger scales there's hardly any effect whatsoever. But we can zoom in, into this arm, and again it's always the comparison between the prediction from the simulation and the experimental data, and we can zoom in, and that doesn't look so bad, and we can zoom in a bit more, and that looks fairly reasonable. And this is not just an accident: we can zoom in at another position and it still looks fairly reasonable, even though it's not perfect, given that this was a model with only a single adjustable parameter. So what can we learn from this? First of all, that this is an interesting polymer physics problem. Secondly, that simulations are a very powerful tool to connect physics and biology, to connect models with experimental data from different systems. For biology, you get a quantitative relation between simple polymer models and massive experimental data, and from the physics point of view there is this relation between crumpled polymers and randomly branching polymers, which Angelo will maybe be talking about. All the simulations in the first part were Angelo's, and in the second part they were done by Pascal, and over the years I've enjoyed collaborations with many very interesting colleagues. With that, I thank you for your attention, and I'm happy to answer your questions.

Thank you, thank you very much, Ralf. Please join me in thanking the speaker for the very nice talk, thank you for being on time and for having given such a didactic walkthrough of both the phenomenology and the physical modeling of chromosomes and genomes based on polymer theory. The session is open for questions. So Ralf, maybe just to get started, while people write their questions: would you like to comment a little on the progress there has been, experimentally and maybe theoretically, in detecting more
reliably the inter-chromosome interactions, which were very challenging and very noisy at the beginning, but for which we are now starting to get more reliable data from Hi-C maps? Okay, maybe I should explain the problem first; let me find one of the slides where I had this technique. Okay, so don't look at the details of the technique, but the way it works is: you have the cell nucleus, you throw in something that essentially glues together DNA which is close in space, then you cut up these bits and pieces, you ligate them, and then all of this relies on sequencing. So if you sequence the end result of such a contact, and you analyze the sequence, and you know the sequence of the entire genome, then what you see is: oh, I have a bit of DNA which does not fit anywhere along the chains, because I've glued together two different bits which come from different parts of the genome. That allows you to identify where these contacts are, and that's the origin of these maps. But then, as you know, in our cells we always have two copies of any gene: one copy that we have inherited from our mother and a second copy that we have inherited from our father. Of course, the two copies are not totally identical, but the differences in terms of gene sequence are fairly small, so if you do this experiment, typically you cannot distinguish, for example in these maps that I show you here, let's take this one: if this contact is between the left and the right arm of chromosome 2, whether this is a contact within the same chromosome, an intra-chromosomal contact, or whether this is a contact between the left arm of the maternal copy of chromosome 2 and the right arm of the paternal copy. And of course, if you want to understand the structure of the genome, that's an important distinction to make. For the simulators it's easy, we know who is who, so we can distinguish these contacts, but for the experimentalists it means that
they have to work much harder on the sequencing in order to identify who is who, and that is something that is beginning to emerge; it gives, of course, additional information about how this works. For example, stupid question: are the two copies close to each other in the cell? Thanks a lot, Ralf. So there is a question from Katharie Azizi: can you comment on how temperature is considered in the model, and how much is it? So, temperature in the model, and probably real temperature too, I believe. When we build the model, I told you that there is this length scale which says at which point a chain starts to bend under the influence of thermal fluctuations, and the mapping is such that you just get that length scale right, so you don't have to worry about temperature as such. In the same way, temperature would tell you how fast things diffuse, but since we use this mapping to real time, we don't have to worry about temperature there either. You could ask an additional question, which is about biology: for us, temperature is always fairly constant within a narrow range, but for other organisms that is not true. You could ask whether it matters if you are at 270-something or at 290-something Kelvin, which I always think is an interesting question for biochemistry; for these structural problems I don't think it matters so much. Okay, thank you; it's not a gigantic effect. We have a second question from Isabel Luís Grothaus, who is asking about the spirit of the simulations: whether it's more in the MD spirit or, I believe, the Monte Carlo spirit; she's literally asking how it actually works, whether these are computed similarly to MD or are coarse-grained; I think she wants to learn more about the stochastic dynamics. Okay, so technically this is molecular dynamics, but it is not molecular dynamics with an atomistic force field. If you interpret it in terms of a force field, you could say we use an interaction potential for point particles which is very
popular for argon (the Lennard-Jones potential), and then we connect these beads with springs. So it does not really have much of a microscopic interpretation on the scale of this model, and the justification for doing such things is this point that I brought up somewhere, I don't remember where, I think a little while back: this business of universality. What polymer physicists know is that you can do experiments with different kinds of polymers, and if you look at very different kinds of polymers at different temperatures, and if you look at them using the right units, then in fact they all behave in the same universal way. That is true for properties which depend on the large-scale structure of these systems, and that gives you a lot of flexibility in how you want to model them. If universality means that things that are characteristic of polymeric systems do not depend on the microscopic details, it means that in formulating models for polymers you can take anything which is computationally convenient. It's the same for doing calculations: if you want to do calculations with polymers, the simplest theories are random walks on a lattice, and if you solve that model you end up with the Gaussian distribution functions that Professor Eichhorn was talking about. Only they mean something totally different: they tell you what the typical conformation of such a molecule is. But it turns out the mathematics is the same as the mathematics of diffusion.

So from a technical point of view, what we do is MD simulations, but these are coarse-grained models. They are not trying to capture what is going on on the scale of individual atoms, because then we would need I don't know how many orders of magnitude more powerful computers to do the same thing, and in the end they would probably behave in exactly the same way. There is a bit of understanding in making that decision about what matters and what does not matter. So this is a top-down approach in terms of modeling: you start with identifying the coarse
features, the essential features, of the model, and then you add detail to it, instead of saying, well, I try to represent my system on the level of atoms, then I run MD and I see what I get.

Very well. Okay, so Isabelle Louise thanks you for the answer. Other questions? So, Ralf, maybe would you like to comment a little bit on what you think the challenges are, either the next ones that should be tackled or those that would be dreamlike to be able to tackle in coming years?

Yeah, let me add one thing, which is to say: this slide I have after the conclusions, which is here. I mean, there is more than what I've told you. The cell nucleus is an active system, and there are many things going on that we have simply ignored. Now, there are two ways of thinking about it: one way is to say that means everything we've done is wrong, and the other way is to say it's interesting that something which is so much more complicated turns out to behave in the same way as a much simpler system. The interesting, or very interesting, mechanism which is characteristic of these polymers, these chromatin fibers in the nucleus, is what has now been established as loop extrusion. And it's related to the next stage in the story: I went up to just decondensing, but at some point you have to replicate, and then you have to condense. One thing you have to do on the way is: you replicate these long polymer chains, and after you've replicated them they are totally entangled with each other, so you have to separate them from each other. One good mechanism for doing that is: you sit somewhere and then you pull out the length in between. There is a lot of this activity going on in these systems, and the question is always: (a) can one model it, and (b), as always, if somebody shows you results which are so much more impressive than the ones that I've shown, and they've used a thousand parameters, then I'm less impressed. So there is something of an Occam's razor approach here. But the physics of these active systems
is very interesting. And the other aspect that I mentioned: these effective interactions between different kinds of chromatin fiber. On the scales where we model them they are relatively weak, but if one goes to smaller scales they become much stronger, and in the way biology functions that also has effects. So there is much more to be understood. I would still say it's reasonable to have this top-down approach: to start from the coarse features and then to see how much more you can add, to get back what you had before and get something in addition which you didn't have. In my talk I did one step: I put in a bit of identity along the fiber, but much more along these lines needs to be done. And all of this is implicated in gene regulation, which is a big question: all of our cells contain the same genome, but our cells are doing very different things, and how does that work? That's a fairly non-trivial question; structure contributes to gene regulation.

Thank you, and thank you also for giving us a peek into the challenges and the physicist's perspective on how to tackle them in a convincing way: to use models to identify the prominent underpinning physical mechanisms without getting lost in details and overfitting. So I think we are at the end of the session. I really wish to thank Professor Everaers and also Professor Eichhorn, whom we heard before. I'm really grateful to them, as we all are, for having taken time out of their busy schedules to give us a real flavor of fundamental topics that should be in the basic knowledge of everyone, and that are very fundamental also for this second school on numerical methods applied to physical systems. So thank you very much again to both of you, thank you.

Thank you, all right. With pleasure. So I think we can stop the recording now. Great, so really many thanks to both of you, it was an excellent afternoon, very stimulating; I think for both of you the audience was very lively. Okay, very good. I'm happy to hear that, great, thank you,
because I was scratching my head about what I should do. Yeah, I mean, giving these talks remotely is always like talking in front of a mirror in some way, but here you are getting lots of feedback; you can see in the chat box people thanking you.
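[As an illustration of the point made in the discussion above, that in a simulation one "knows who is who" and can therefore separate intra- from inter-chromosomal contacts directly, the following is a minimal sketch of computing contact maps from simulated bead coordinates. It is not code from the talk; the function names and the contact cutoff are hypothetical choices.]

```python
import numpy as np

def contact_maps(coords_a, coords_b, cutoff=1.5):
    """Boolean contact maps for two chains given as (N, 3) arrays of
    bead coordinates: contacts within chain a, within chain b, and
    between the two chains. A 'contact' is any pair of beads closer
    than `cutoff` (an arbitrary illustrative threshold)."""
    def pairwise(x, y):
        # (N, M) matrix of Euclidean distances between bead sets
        return np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)

    intra_a = pairwise(coords_a, coords_a) < cutoff
    intra_b = pairwise(coords_b, coords_b) < cutoff
    inter = pairwise(coords_a, coords_b) < cutoff
    # In a Hi-C experiment that does not resolve maternal and paternal
    # copies, these three maps would be superimposed; in a simulation
    # they are trivially kept separate.
    return intra_a, intra_b, inter

# Toy example: two straight 5-bead chains, offset by 0.5 in z,
# standing in for the two copies of a chromosome.
chain_a = np.array([[i, 0.0, 0.0] for i in range(5)])
chain_b = chain_a + np.array([0.0, 0.0, 0.5])
ia, ib, inter = contact_maps(chain_a, chain_b, cutoff=1.2)
```

With this toy geometry, neighbouring beads along each chain are in contact, and each bead is also in contact with the nearby beads of the other copy, which is exactly the ambiguity the speaker describes for unphased experimental maps.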