Let me know when you want to start. Ready? Yeah. OK, so good evening, everyone, or good afternoon, depending on where you are. Today we have Pawel Romanczuk from the Institute for Theoretical Biology at Humboldt University in Berlin. He's going to talk to us about collective information processing in collective biological systems. So whenever you want, Pawel, you can take the lead. Thank you very much, by the way, for accepting the invitation. You're welcome. Thank you for inviting me to give this talk. So welcome, everyone, first of all. As said, my name is Pawel Romanczuk, I'm located in Berlin, and I'm head of an independent Emmy Noether research group. The title of the group is actually Collective Information Processing, and this will also be the main content of my talk. My background is actually physics, but over the years I moved more and more into biology, and one reason for this was these beautiful examples of collective behavior that we see in the natural world. Starting from the smallest scales, if you think of bacteria coordinating their motion already at the micrometer scale, and moving up the scales to the larger swarms that we actually have in the natural world: locust swarms can consist of millions of individuals, span tens of kilometers in size, and travel over continental length scales. And right now, some of you have probably heard that over the past months we have had quite a problem with locusts in eastern Africa, which is something that probably doesn't enter the news that much compared to other current developments, but it's actually a huge problem for large parts of the world. What fascinates me in particular is, first of all, if you come from physics, you might be interested in the complexity of these systems and their self-organizing nature.
And you can ask whether there are any universal principles or laws that govern the macroscopic behavior of these systems despite the biological differences. So what could be the common features of these self-organizing systems? However, coming from the biological side, what I find particularly interesting, or maybe even more interesting, about these systems is that they don't just come together by simple minimization of free energy, like, for example, fluid droplets, but they actually evolved over long timescales to be collective, because being collective serves a function, a biological function, which benefits the individuals within the group. There are a lot of theories in biology about why being in a group might be beneficial to individuals and why it might be evolutionarily advantageous to be in a group. I like to summarize a lot of these benefits as collective information processing. In particular, think of predator avoidance: for a bird in a flock, a single bird doesn't have to detect a predator itself. It may instead pay attention to the social information provided by others, and if they try to escape, that might be a good hint that this focal bird should escape as well. But what I'm particularly interested in in my research is this interplay between the self-organizing nature of these systems, basically through simple local interactions, and this biological function. How do these two interplay, how do they constrain each other, and how might they sometimes even synergize to really work together? Before I jump to the two topics that I want to talk about in more detail, let me give a brief overview of what we're doing in general. There are plenty of models of collective behavior, of collective movement, in the literature. Maybe some of you have heard of the famous Vicsek model or the three-zone models.
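As a point of reference for readers of the transcript, here is a minimal sketch of a Vicsek-style update (my own illustration, not the speaker's code): each agent adopts the average heading of its neighbors within a radius, plus angular noise, on a periodic domain.

```python
import numpy as np

def vicsek_step(pos, theta, L=10.0, r=1.0, v0=0.3, eta=0.2, dt=1.0, rng=None):
    """One update of the 2D Vicsek model with periodic boundaries.
    pos: (N,2) positions, theta: (N,) headings, eta: noise amplitude."""
    rng = rng or np.random.default_rng()
    n = len(theta)
    new_theta = np.empty(n)
    for i in range(n):
        d = (pos - pos[i] + L / 2) % L - L / 2       # minimum-image displacement
        nbr = np.linalg.norm(d, axis=1) < r          # neighbors, including self
        # circular mean of neighbor headings, plus uniform angular noise
        new_theta[i] = np.arctan2(np.sin(theta[nbr]).mean(),
                                  np.cos(theta[nbr]).mean()) \
                       + eta * rng.uniform(-np.pi, np.pi)
    vel = v0 * np.column_stack((np.cos(new_theta), np.sin(new_theta)))
    return (pos + vel * dt) % L, new_theta
```

With `eta=0`, two nearby agents converge onto their common average heading in a single step, which is the essence of the alignment interaction the speaker refers to.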
These models are typically referred to as social force models, and they typically assume that you have something like binary, force-like interactions between agents, which could be repulsion, attraction, or alignment, so that you coordinate your directions of motion. However, what these simple models typically neglect is the actual perception at the level of individuals. This is something that bugged me for a long time, and one main activity of our lab is to develop new models of collective movement that take the potential limitations of perception, and maybe even cognition, into account, but still with a bottom-up approach, really taking the physics approach but thinking about how we can bring perception into a physical way of thinking. One very recent paper that I'm pretty excited about in this direction is with Renaud Bastien, which just appeared at the beginning of this year, where we take a bunch of self-propelled agents that move, for the beginning, in a two-dimensional plane, and they interact only through vision. So basically they can only see what is projected onto the retina. In the simplest case, depicted in this graph, you can imagine that they have the simplest possible vision, basically black and white. If there's something, it's black; if there's nothing, it's white; and there is no object recognition. So you cannot even distinguish whether it's one individual close by, or two or three individuals overlapping, because of occlusions. The question is: is this enough to generate, for example, coordinated movement? In this paper we actually started from the bottom up, writing down a mathematical framework for how this vision-based model could work and how agents can adjust their movement based only on this minimal visual input, the visual field depicted below in a simple sketch.
And just to give you some examples from this paper: here is one example of how such a bunch of agents, I think 40 agents here, effectively just paying attention to this immediate visual input, can coordinate and eventually start to move in a very coherent, coordinated fashion. You should remember that each agent just pays attention to the basically black-and-white input on its retina, to what it perceives. There's no concept of distance, no concept of other objects or other agents. It just sees this black-and-white swirling pattern on its retina and tries to respond to that. And eventually, if all agents do the same, they can effectively coordinate their motion. Here we actually looked for the very simplest possible rules that allow for such visual flocking. Another parameter regime gives you different dynamics, like this kind of disordered swarm dynamics that might remind you of mosquito swarms, and it still remains cohesive. Maybe at first glance it's not as impressive as the coordinated movement, but when you pay attention, and I hope it's visible in this movie already, there are basically no collisions despite the very chaotic nature of the swarming motion. The agents, even with this very limited visual perception, are almost perfect at avoiding collisions. And this is something that, if you work with social force models, is not trivial at all if you really want to avoid collisions, so that agents really don't come too close. Here it seems to work, at least for this pure visual flocking or swarming dynamics.
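To make the visual-only input concrete, here is a toy sketch of the kind of binary retina described above, together with a naive left-right steering response. The actual Bastien-Romanczuk model uses a more principled response function derived from the visual field and its edges, so all function names, the bin count, and the steering rule here are my own illustrative assumptions.

```python
import numpy as np

def visual_field(pos, heading, others, radius=0.5, n_bins=360):
    """Binary retina: 1 wherever any other agent's disc subtends the bin angle.
    No distances, no identities: overlapping agents simply merge into black."""
    angles = np.linspace(-np.pi, np.pi, n_bins, endpoint=False)
    field = np.zeros(n_bins)
    for q in others:
        d = q - pos
        dist = np.linalg.norm(d)
        if dist < 1e-9:
            continue
        # bearing of the other agent relative to our heading, wrapped to (-pi, pi]
        center = (np.arctan2(d[1], d[0]) - heading + np.pi) % (2 * np.pi) - np.pi
        half = np.arcsin(min(1.0, radius / dist))  # angular half-width of its disc
        diff = (angles - center + np.pi) % (2 * np.pi) - np.pi
        field[np.abs(diff) <= half] = 1.0
    return angles, field

def turn_rate(angles, field, gain=1.0):
    """Toy response: steer toward the side of the retina with more black."""
    return gain * np.sum(np.sin(angles) * field) * (2 * np.pi / len(angles))
```

A neighbor sitting to the agent's left produces a positive turn rate (a turn to the left), which already gives a crude attraction purely from the retinal projection.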
So a huge part of our research goes into trying to develop new models of collective behavior that take into account individual perception. But another big field of research that I'm going to talk about today a little more is behavioral contagion and collective decision-making, in particular in fish; as I said, I will talk more about this in the second part of my talk. Then there's also, a bit less prominent in recent years but still a huge interest of mine, understanding new phases of biological active matter, what types of macroscopic patterns and macroscopic dynamics you can obtain. And last but not least, I'm also interested in coupling these complex collective spatiotemporal models with evolutionary dynamics, to really see what the consequences of potential spatial structure or spatiotemporal dynamics are for the evolutionary outcomes in such evolutionary games. All right, but after this brief overview let me jump to the first topic that I want to talk about, which is flocking in complex environments, and in particular something I refer to as a coordination versus environmental responsiveness trade-off. This work was done together with a visiting grad student, Parisa Rahmani, in collaboration with Fernando Peruani, who is at the University of Nice, and was just published in April in PLOS Computational Biology. So typically, when you think of collective movement models, most of them happen, or are assumed to happen, in bare or empty environments. Oh, I think, sorry. So I think, I don't know if it was me, or your connection was lost for a while. Oh, I think it was my connection. I apologize. No, no, it's fine. It cut out after you introduced the visiting grad student, Parisa Rahmani. Okay, so yeah, it was done together with Parisa Rahmani and Fernando Peruani from the University of Nice, and this work just got published a few months ago.
One motivation for this study was that when you think of modeling collective behavior, or even experiments on collective behavior, a lot of this is done in simple, bare, empty environments. Think, for example, of fish in an empty tank, where you see how they swim together and coordinate their motion. In simulations you often even assume periodic boundary conditions, just to get rid of any boundaries completely and look at the bulk behavior in an empty environment. But this is not the situation that most animals trying to coordinate motion actually encounter in nature, in a real ecological setting. And this was our motivation. We wanted to see how agents can coordinate and collectively process information in a complex environment where, on the one hand, they want to pay attention to social cues and coordinate, for example, with other fish, but on the other hand there might be environmental cues which also provide important information, for example signaling that there might be a threat or a danger, so that the agents should avoid areas around certain structures in the environment. So this is our basic motivation, and we developed a simple, generic model to really see how agents can do that, in particular when we assume that they have a finite attention capacity, so they cannot pay attention to all features, whether environmental or social, at the same time. The essential model setup looks like this. We have agents, let's think of fish, but it could be basically any agents, in a complex environment, and each agent can only pay attention to its K nearest objects, irrespective of whether an object is another agent or a structure or cue from the environment. All agents in our case try to coordinate their motion with the neighbors that they detect, in order to move together with them and flock together.
They also want to avoid the avoidance zones around point-like distraction or danger sites that we distribute in the environment, but they can only detect a danger site if it is within their K nearest objects. So again, this is the limited attention capacity that we implemented. In addition, there are a few informed agents that actually know the right direction of motion, the right direction, for example, to go to food or to migrate, and this will throughout the talk always be along the x axis, to the right. The question now is: given this setup, what is the role of this limited attention in how well the agents can coordinate, avoid the danger zones, and also find the right direction of motion provided by the few informed individuals? In order to address this question, the first and obvious thing is to look at the interaction networks. How do the agents actually interact with each other, depending on this attention capacity, on how many objects they can pay attention to? These are the social interaction networks for the agents. The agents are shown in black or in red. Black are agents interacting socially, and red are agents which are within a danger zone but also close enough to the center of the danger zone that they can actually detect the distraction site and respond to it. So these are the agents actually responding to the sites, and I should mention that once you respond to a danger site, it overrides all social interactions, so you're not coordinating with your neighbors anymore; you're just trying to get directly away from the center of this distraction site. This is basically what is shown here for two different attention capacities. At very low attention you get fragmented networks with low degree, basically these disconnected clusters, which are connected within themselves but with no global connection between the different clusters.
On the right-hand side, with high attention capacity, on the other hand, what you basically see is a single giant component in the interaction network, and there are of course some isolated individuals which respond directly to the distraction sites, and this is why they're not linked to the interaction network. I should mention that this is in fact a directed network; here for simplicity we just show an undirected version, but basically a link means that there's a connection from one agent to the other, not necessarily a bidirectional connection. Another thing you should notice is that this is of course dynamical, so this is just a snapshot, but in reality, and we will see examples of this, these networks dynamically evolve over time. So the question is now: what would you predict, where can the system better coordinate and better collectively process information? From a simple network perspective you would argue, well, in the better-connected network on the right-hand side, the information provided by the few informed agents can diffuse much more effectively to the entire population; therefore you would expect that for K=12 this is the network that should coordinate better, or be better at collective information processing. And in fact this is what you also see for empty environments. If there are no distraction sites, so a completely bare, empty environment, then, as expected, with increasing attention capacity, with increasing parameter K, for all ratios of informed individuals, from very low ratios up to a fully informed system, you always get better accuracy of following the few informed agents moving to the right in the stationary state. So the larger the K, the better connected the interaction network is on average, and the higher the collective accuracy of the system.
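The interaction rule described above can be sketched as a single deterministic torque on one agent's heading: attend only to the K nearest objects, let a detected danger site inside its avoidance zone override all social interaction, and give informed agents an extra bias toward +x. The function name, parameter values, and the sine-torque form are my own illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

def heading_update(i, pos, phi, sites, K, informed, danger_radius=1.0,
                   align=1.0, bias=0.3):
    """Toy torque on agent i's heading phi[i], following the setup in the talk.
    pos: (N,2) agent positions; sites: (M,2) distraction-site positions;
    informed: per-agent flags for knowledge of the +x target direction."""
    n = len(pos)
    objects = [('agent', j, np.linalg.norm(pos[j] - pos[i]))
               for j in range(n) if j != i]
    objects += [('site', k, np.linalg.norm(s - pos[i]))
                for k, s in enumerate(sites)]
    objects.sort(key=lambda o: o[2])
    attended = objects[:K]               # finite attention: K nearest objects only

    # a detected site inside its danger zone overrides everything else
    for kind, k, dist in attended:
        if kind == 'site' and dist < danger_radius:
            away = np.arctan2(pos[i][1] - sites[k][1], pos[i][0] - sites[k][0])
            return np.sin(away - phi[i])     # turn directly away from the site

    nbrs = [k for kind, k, _ in attended if kind == 'agent']
    torque = align * np.mean(np.sin(phi[nbrs] - phi[i])) if nbrs else 0.0
    if informed[i]:
        torque += bias * np.sin(-phi[i])     # weak pull toward +x (angle 0)
    return torque
```

Note how the trade-off is built in structurally: every site that enters the attended set displaces a potential social neighbor, and vice versa, which is exactly the finite-attention mechanism the results below hinge on.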
But surprisingly, if we now move to a situation with a complex or heterogeneous environment with a high density of distraction sites, the situation completely reverses. Here a better-connected network actually always leads to a lower collective accuracy. And surprisingly, if you have a look at this graph, even a fully informed system with K=24, this is the yellow line, is only as accurate as a flocking system at K=1, the minimum attention capacity, with only a few percent of informed individuals. And the question is: why is that? I think it's much easier to develop an intuition for this by looking at what actually happens in the simulations. Here is a simulation movie of a high-attention case, K=24, where each agent can pay attention to 24 other objects. This is a high-density scenario of randomly distributed distraction sites, with the danger zones depicted as these bright bluish areas around the point-like distraction sites. And then we have naive and informed individuals. The informed individuals are always the open symbols and the naive individuals are the closed symbols. The black ones are interacting socially, whereas the red ones are responding to the distraction sites. In the examples that I'm going to show there are always 10% informed individuals, so only 10% of the system knows that one should go along the x axis to the right. And if you look at this high attention capacity, you see that if you start from random initial conditions, the system very quickly starts to converge, and you see the development of local flocks or swarms that move together in a coordinated fashion. They are very good at responding to the environmental cues, trying to find the free paths within the environment, but they basically move randomly, and there's no global coordination, no global order of the agents moving to the right.
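A natural way to quantify this global order toward the informed direction, and the way the speaker defines accuracy later in the Q&A, is the mean scalar product of the agents' unit velocities with the unit vector along x. This sketch is my own illustration of that definition:

```python
import numpy as np

def collective_accuracy(vel, target=(1.0, 0.0)):
    """Mean projection of normalized velocities onto the informed direction.
    +1: everyone moves with the informed direction; 0: no net order along it;
    negative values: net motion against it (unlike a 0..1 order parameter)."""
    v = np.asarray(vel, dtype=float)
    vhat = v / np.linalg.norm(v, axis=1, keepdims=True)
    return float(np.mean(vhat @ np.asarray(target)))
```

Because each velocity is normalized before projecting, the measure depends only on headings, not speeds, and fluctuates around zero for the randomly wandering flocks described above.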
So now if you look at the collective accuracy, how well they are actually able to follow the informed direction of motion, this is shown in the curve on screen: the accuracy is basically oscillating around zero. They're basically moving randomly through the free paths in the environment. This is why the accuracy is so low at high attention capacity: the agents are very much responding to the environmental cues. On the other hand, if we go to the low-attention case, K=1, which is the smallest attention capacity, the picture changes dramatically. Here we also start from random initial conditions, and you see that locally agents again start to coordinate and move together. However, they are not forming these large clusters, but rather moving in small clusters of three, four agents maximum. But eventually, if you wait a long time, you see that more and more agents start to move towards the right, the right direction of motion. And what you actually see is that the accuracy goes almost to 0.9, 0.8 in this particular example. So the whole system globally is very good at following the information provided by the few informed agents, and in the end they basically all move almost perfectly along the x axis to the right. However, as you can see, the agents seem not to care much about the danger sites and the corresponding environmental structure. And the reason for this is essentially the low attention capacity: the attention of individual agents gets almost completely saturated by social cues because of the self-organized nature of the system. If you start to move with someone else, you actually end up much closer to them; you form these local dense clusters, and therefore, for a typical agent interacting socially, its attention capacity at this low K is already saturated by the social cues, and it becomes blind to the environmental features, to the distraction sites.
Basically the system self-isolates from the environment, but it's very good at coordinating at this low K. This effect gets even clearer if you look at structured environments. Here we have a single circular path; the right direction of motion provided by the informed individuals is still to the right. For the high attention capacity, the agents are again pretty good at coordinating their motion but also at finding the free path, which is circular here. If we wait long enough, eventually most of the agents end up moving together along this free path in the environment, and only a few, by accident, enter this dangerous region filled with distraction sites and danger zones. So at high attention capacity they are basically able to follow the structure of the environment completely, but they ignore the information provided by the informed individuals, which is to move to the right. For K=1, again, this looks very different. The agents just don't seem to care about the environment at all and more or less work their way through this high-density field of distraction sites, but they are able to follow the information provided by the informed agents. Essentially, because of these dynamical networks, even though you always have only small clusters, these clusters break up and fuse together, and therefore on a long timescale the information provided by the informed agents is able to percolate through the whole system, and you see very high collective accuracy. Now you can actually quantify this and look at the collective accuracy versus attention capacity, and this basically quantifies what we saw before in the simulations. You get the maximum accuracy at low attention capacity, typically attention capacities of one or two, and if you increase the attention capacity further, the collective accuracy goes down. The different curves here are for different densities of obstacles.
The blue one, the top one, is for the lowest density of obstacles, and if you increase the density of obstacles, this maximum becomes even more pronounced. But this high collective accuracy at low attention capacity comes at a price: you basically become unresponsive to the environment. This is shown here on the right-hand side, a measure of distraction-site avoidance, and here we normalize it by a control group of solitary agents which do not interact socially and only respond to the environment; this is the black line. So essentially, at low attention capacity, the collective system is actually much worse at responding to the environment than solitary agents. Being collective does not provide any benefits in this situation, but rather only costs, if it's really important for you to pay attention to the environment. Only above a critical attention capacity, in this example around 10, do you become more responsive to the environment than solitary agents. Only there do you see benefits of being collective with respect to the response to local environmental cues. And the question is, you know, we believe that this coordination-responsiveness trade-off is maybe a general, potentially a general feature of collective information processing systems, where you basically use the same, let's say, cognitive machinery to do two different tasks. You can either coordinate with others or you can respond to the environment; you cannot do both at the same time if you use the same sensory machinery, if you use the same brain to... So let's see, yeah. So Pawel, we lost you, sorry. Yes. We lost you again. Okay, I'm sorry for that. It seems to be an internet issue, although I'm using an Ethernet cable. When did you lose me? We lost you about one minute ago, when you were explaining this accuracy trade-off. Okay, right.
So did I already explain the right-hand side, or only the left-hand side? No, no, the right-hand side was already explained, the normalization and everything; it was a very short break. Okay. So we are basically wondering whether this is a fundamental feature of collective information processing systems, and whether this coordination-responsiveness trade-off could even be as fundamental to these systems as the well-established speed-accuracy trade-off. Okay, so this is the first part of my talk, and before I move on to the next one, maybe there are questions on this first part. If not, I'm happy to move forward. So, is there any question? I have a very quick question. When you were showing these composite plots with the video and the accuracy and the avoidance as a function of time, I think the accuracy was at some point getting negative; it was fluctuating around zero but it was becoming negative. Yes, that is because, as the accuracy, we calculate the scalar product of the individual velocities of all agents with the right direction of motion, which is just a unit vector along x. And if the average velocity of the agents sometimes points in the negative x direction, then your accuracy can become negative. This is different from, for example, an order parameter, which can only go from zero to one; here, since there is actually a selected direction of motion, it can also become negative. But this is just a stochastic effect, and it typically fluctuates around zero. So, are there any more questions before we move on? Yes, I have a question. I was going to ask: how is the information actually implemented in the simulation, the information flow? Yeah.
So the information flow is implemented in the following way. We have these different agents, they all obey stochastic differential equations with different force terms, and the informed agents, which are, as I said, a ratio of 0.1, so 10%, in these examples, have an additional term which tries to align their velocity with the x axis. So they bias their motion slightly, not very strongly but slightly, towards the x direction. This is like an additional force that they feel. And the only way that other agents notice or pay attention to this information is that they try to align with their neighbors, some of whom are informed and tend to move in the x direction, and therefore, through the social interaction, they pick up this bias to move along the x direction. So this is all basically social interaction, basically copying of movement directions. And how is the danger, for example, the danger area modeled? For the danger area, we tried to be as simple as possible. We assume point-like distraction sites, and when the distraction site, which is at the center of these blue zones, is within your K nearest objects, then you detect it. The danger zone around it has a certain size, which was set to unity just for simplicity. And if an agent detected the distraction site and was within this danger zone, then it moved directly away, so it ignored all other influences and just experienced a repulsive force away from the distraction site. So once you detected the environmental cue, you ignored all other forces. Actually... Let's see if we recover him. ...a certain chance to detect it. Yeah, sorry, Pawel, after you said that once you detect the danger zone, you stop paying attention to all the other forces, we lost you again. Okay, so yeah, then you just move away. But we actually did some additional variations of the model, just to test the robustness of our results.
For example, we increased the saliency of the distraction sites by saying that if an agent is within a danger zone, so within a distance of one of a distraction site, and it doesn't detect it because it's not within its K nearest objects, it still has a finite probability of detecting it, because it might just be very important for the agents to pay attention to the environment. And as long as this probability is not one, where they always perfectly detect the environmental cues, you always get this coordination-responsiveness trade-off. So this is really robust with respect to the response to the distraction sites, as long as there's some trade-off in the perception of social and environmental cues. Okay, thanks. All right. So I think we had one more question before, I don't know. Yeah, I was about to ask a question, but it was answered in this, so thank you. Okay. All right, yeah. Maybe I should mention that if there are any questions which go more in depth, I'm actually happy to reply to them by email, or maybe in another private chat if someone is interested. There's of course much more in the paper than I can discuss here within the talk. But if you have some urgent questions, you can always contact me after the talk at any point. Okay, but then let's move to the second part, which is rather changing the topic from this very theoretical collective information processing in terms of movement coordination to collective escape responses and behavioral contagion in animal groups. In particular, what I'm going to talk about is work that we published at the end of last year together with Iain Couzin and his students, in particular Matt Sosna, who did the experiments, on individual versus collective encoding of risk. And the basic dynamics of the biological system that we're interested in are so-called startle cascades in fish schools.
So here's a fish school, and sometimes individuals just get scared and perform this very fast so-called C-start, or startle, where they basically suddenly jump to the side. This startle response has been studied in neuroscience, in fish, for a long time, and it's a very interesting behavior which is known to be governed by a single neuronal structure. But what is really interesting about this behavior is that it can also spread socially through a group. So basically, if one fish spontaneously startles, maybe because it got scared by a light reflection, then other fish may copy this behavior and also startle, and then you see this cascade spreading through the system. And it was in this Rosenthal paper, the first one, that the group of Iain Couzin for the first time showed these social contagion dynamics in startle cascades. The nice thing about the startle cascades is that they are very fast accelerations, and you can quantify this nicely. It's more like neurons firing, one or zero, and in the tracking data you can more or less clearly say: oh, this individual was startling, and this individual startled a few milliseconds after that. Using this information, this clear quantification of who was the first, spontaneous startler and who was the first responder, you can actually build interaction networks, which is in general very challenging for animal groups. And this is an example of such an interaction network from the paper by Sara Brin Rosenthal and Colin Twomey, where they overlaid on top of the free-swimming schools the interaction networks that they deduced by analyzing many such startles. So each link is actually the probability of an individual startling, given that the corresponding other individual that it's connected to startles at a given time.
And you can see it's a very complex network which depends on the local structure, but it's also very much dependent on distance. You can summarize it in general as: the closer you are, the more likely you are to startle if your neighbor startles. If you're interested in the details of how to determine these interaction networks, I definitely refer you to this nice paper by Iain Couzin's group. What Matt Sosna, who was a grad student in Iain Couzin's lab at the time when I was a postdoc there, did was move a step further. He wanted to ask what happens if you change the context the school, the fish, are actually swimming in. So what happens if you change, for example, the risk perception? How does the system respond to this change in risk? And the great thing about fish is that there is actually a chemical agent, so-called Schreckstoff or alarm substance, which can be obtained from the scales of fish and put in the water, and which signals danger to other fish, because maybe another fish was injured by a predator. So if they notice the Schreckstoff in the water, they respond very, very strongly; they get much more afraid and perceive this as a strong subjective increase in risk. And what Matt did was a controlled experiment where he added the Schreckstoff after a certain time and looked at how the structure of the school responded, in terms, for example, of nearest-neighbor distance, but also at what happened to the spontaneous startle cascades that we observed in these videos. And what can be seen is that after you add the Schreckstoff, in general the collective response increases: you get cascades of a larger size, as shown in this plot, where you see the cascade size distribution. Before the alarm, when the fish are relaxed, you see only very few and mostly very small cascades, with one or two fish responding most of the time.
But if you add the Schreckstoff, you see much larger cascades, often with almost half of the fish school responding. The total school size here was, I think, 14 individuals. So there's a strongly significant effect, and the question is: what is the mechanism that governs this collective increase in perceived risk? When you think of how an animal can adapt to changes in perceived risk, a solitary individual has only one possibility: it can increase its individual sensitivity, lower its threshold, pay more attention to the environment, and just be more responsive on its own. In a group, however, there is another possibility: adjusting the collective sensitivity by modifying the network structure. For example, the fish could move closer together, and then a spontaneous startle cascade could, given that the links become stronger, propagate much farther through the system. The question now is where the fish system is actually located along these dimensions. This experimental setup allows us to really disentangle the two dimensions, so to ask where this natural fish system sits: do the fish only modify their own sensitivity, do they change the group structure, or is it a combination of both effects? What you immediately notice from analyzing the tracking data and watching the movies of the experiment is that the network structure changes a lot. This is the fish school before Schreckstoff: it's pretty relaxed, the fish are spread out through the tank and swim in a more or less coordinated fashion, though on a global scale still more or less randomly, and the nearest-neighbor distance is pretty large given what the fish could do in principle. If you add Schreckstoff, the situation changes dramatically: the fish move much closer together.
So they basically form a much more cohesive shoal and stay much closer together, and the nearest-neighbor distance drops massively within a few seconds after they smell the Schreckstoff in the water. We know now that the structure changes a lot, but how can we actually look into the brain of the fish, so to say, and ask whether, in addition, they also change their internal threshold because they get so scared? To address this question, we combined the experimental data with a mathematical model. Here we used a generalized contagion model, inspired by the Dodds and Watts paper, where a generalized contagion model was proposed for social, human systems. We took this model and adjusted it for continuous-time dynamics with probability rates, instead of the discrete time they used in the original 2004 paper. Essentially we have a network with a focal agent, and the red agents are the active agents: through their escape behavior they send activation signals, with a certain rate represented by the strength of the links, to the focal agent. The focal agent receives these activation signals over time and integrates them within a moving window, so that at some point the cumulative dose may hit its own threshold. If it does, the agent switches from a susceptible state to an infected, or active, state and also responds with the escape response.
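The dose-integration dynamics just described can be sketched in a few lines. This is a rough illustration under assumptions of my own (a simple fixed-step discretization, unit doses, and made-up parameter names), not the actual model code:

```python
import numpy as np

def simulate_cascade(W, theta, seed, dt=0.01, window=0.5,
                     t_active=0.2, t_max=10.0, rng=None):
    """Discrete-time approximation of the dose-based contagion model.

    W[i, j] : rate at which an active agent j sends activation signals to i
    theta[i]: dose threshold of agent i
    seed    : index of the spontaneous first startler
    States: 0 susceptible, 1 active, 2 refractory (absorbing, so each run
    produces exactly one cascade). Returns the cascade size.
    """
    rng = np.random.default_rng(rng)
    n = len(theta)
    state = np.zeros(n, dtype=int)
    state[seed] = 1
    activate_time = np.full(n, np.inf)
    activate_time[seed] = 0.0
    doses = [[] for _ in range(n)]  # timestamps of received unit doses
    t = 0.0
    while t < t_max and (state == 1).any():
        active = np.flatnonzero(state == 1)
        for i in np.flatnonzero(state == 0):
            # each active neighbour j delivers a unit dose w.p. W[i, j] * dt
            hits = rng.random(active.size) < W[i, active] * dt
            doses[i].extend([t] * int(hits.sum()))
            # integrate only doses inside the moving window
            doses[i] = [s for s in doses[i] if s > t - window]
            if len(doses[i]) >= theta[i]:
                state[i] = 1
                activate_time[i] = t
        # active agents drop into the absorbing refractory state
        state[(state == 1) & (t - activate_time >= t_active)] = 2
        t += dt
    return int(np.isfinite(activate_time).sum())
```

Running this repeatedly on a fixed network yields a cascade-size distribution that can be compared against the experimental one.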
So it's basically a pure network model that we map the system onto. A while after the agent becomes active, it can go to a refractory state, and for simplicity we assume this refractory state is absorbing, just to be sure that we have only one cascade and can clearly analyze the size distribution of these cascades. Given that cascades in the real system are rather rare, this is also a good approximation of the real biological situation. Now we have this model, and we can parametrize it with the experimental data. We can take the networks obtained from the tracking data for the before and after conditions, so we account for the change in network structure due to the change in perceived risk. And now we can ask what happens if we change the average individual threshold: we can eliminate all the other parameters, and we end up with this individual threshold as the single free parameter of the model. We can then simulate the contagion process on the experimentally obtained networks for before and after. This is one example of a real fish network, and if we have a high average threshold, you see what you would expect: after a spontaneous startle, only a small cascade appears. If we have a low average threshold, then the situation... sorry, somehow my presentation crashed. I will try to open it again. Okay. Maybe there's time for questions in between. Is there any question? I have a quick one, but I don't know if you can answer it in two seconds. In this contagion process that you are showing on a network, the network is stationary. But the links are going to wiggle, because I guess these fish networks are dynamic, no? Yes. It's a very good question, and I can answer it maybe not in two, but in five seconds. Okay.
So startle cascades happen very, very quickly, within less than a second, so you can assume a time-scale separation. Even if these networks are dynamic on a longer time scale, you can assume that at the onset of a startle cascade the network is essentially fixed. This is the fundamental assumption we make: what determines the size of the cascade is the network at the onset of the cascade. It's not strictly true for very large cascades, but it's a reasonable approximation which seems to work pretty well in terms of predictions. Okay. Thank you. And let me share the talk again, I hope it works. Okay. Yeah, it's perfect. So now for a low average threshold, when each individual is much more responsive, you again see what you expect: instead of localized cascades, the whole system fires up. You get a global cascade, and basically every individual in the network responds. Now, given that we know the experimental cascade-size distribution, we can do maximum-likelihood fitting and see which threshold explains the cascades on the before networks and which threshold explains the cascades on the after networks. In a way, we know how the networks change from the experimental data; we don't know what changes in the brain, but through this combination with the model we can freely change the threshold parameter and see which values for the before and after conditions best match the observed experimental cascades. And to our surprise, what came out is that the individual thresholds don't have to change at all. Effectively, as shown here at the top, the average dose threshold predicted by the maximum likelihood is the same for the before and after conditions. And these are two different experimental exposures, so it seems to be really robust.
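The maximum-likelihood scan over thresholds can be sketched as follows. Everything here is an illustrative assumption on my part: the empirical-histogram likelihood with add-one smoothing, the function names, and the toy cascade simulator used in the example are not from the paper.

```python
import numpy as np

def log_likelihood(observed_sizes, simulated_sizes, n_max):
    """Log-likelihood of observed cascade sizes under the empirical
    distribution of simulated sizes, with add-one smoothing so that
    sizes never seen in simulation do not give -inf."""
    counts = np.bincount(simulated_sizes, minlength=n_max + 1) + 1.0
    probs = counts / counts.sum()
    return float(np.sum(np.log(probs[observed_sizes])))

def fit_threshold(observed_sizes, simulate, thetas, n_max, n_sim=2000, rng=0):
    """Scan candidate thresholds, simulate cascades for each, and return
    the threshold maximizing the likelihood of the observed sizes.
    `simulate(theta, rng)` must return one simulated cascade size."""
    rng = np.random.default_rng(rng)
    return max(thetas, key=lambda th: log_likelihood(
        observed_sizes,
        np.array([simulate(th, rng) for _ in range(n_sim)]),
        n_max))
```

Running the same scan on the before and on the after networks, with only the threshold free, is what lets one ask whether the threshold itself has to change to explain the data.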
And you can actually see which effects contribute to this. The baseline is the first bar at the bottom. If you only change the thresholds and keep the networks constant, you cannot explain the increased collective response, the increased cascade size. If you change the structure of the network alone, you get basically the effect observed in the experiments. And if you do both modulations, changing the responsiveness slightly and the spatial positioning, you get some synergy of the two, but it's much smaller than the effect of adjusting the network structure itself. So, given our two-dimensional diagram, what seems to be the case for these fish, which are golden shiners, is that at least in this experiment they only modify the network structure and do not really modify the internal thresholds. And the question is: this is true for these fish, but is it true for other natural collectives, or do other natural collectives do this differently? What about artificial collectives, for example swarm-robotic applications? Can we exploit these two dimensions in a different way? And one important question is, of course, what about real-world behavior? Because again, this happened in the lab, and our artificial lab condition may be different from what the fish experience in open water. This brings me to the very last part, the last few slides, which are more of an outlook on ongoing research where we look at these behavioral contagion processes in the wild. In particular, we look at so-called giant escape cascades in sulphur mollies, a species of fish that lives under very special conditions in central Mexico.
And they live in Los Azufres, a special baño, a special ecological setting in a sulfidic river system containing high concentrations of hydrogen sulfide in the water, which is typically toxic to most or all living beings. But this fish has evolved a specific capability to survive in this toxic, sulfidic water. The reason they want to live in this environment is that they can feed on the sulfur bacteria that thrive there, so they have plenty of food. That's why they adapted to live in this environment, and they can live at very, very high densities. However, they pay a price for that. Because of the sulfide in the water, there is basically no oxygen in the water column, since it immediately reacts with the hydrogen sulfide. Therefore, to breathe, these fish have to stay at the surface most of the time and do so-called surface respiration. These are some snapshots from the field site where we do our field research. The bottom-left picture is taken from underwater, looking upwards, and all the fish you see here, nicely aligned in the stream, are actually at the surface. And this is another close-up. So this is a really natural setting, and the fish are very small, but being at the surface makes them very vulnerable to bird predators. There are no underwater predators, because no other fish can survive in this environment, but of course being at the surface makes these fish a prime target for bird predators. And this is just a bunch of the predators that hunt this fish. What is interesting is that these fish seem to have developed a very peculiar collective response to these bird attacks, which typically happen like this: a kingfisher, this black-and-white bird at the top, flies in, dives down, picks one fish up, and moves off.
And you can simulate this by shooting projectiles into the water, and the response you observe very much resembles the response to a bird attack. If you do this, and this is now a perspective-corrected video of such a projectile shot into the water, you see the collective response of the fish. Here, when we start the movie, you see the impact and the little ripple waves spreading out from it. But then you see these additional waves, surface perturbations, spreading through the system. This is actually a startle response of thousands of fish in this open field system. The length scale here is 10 meters, so you see tens of thousands, or even hundreds of thousands, of fish responding in this collective manner. It seems to boil a little immediately after the impact, then it calms down, but then again a big wave triggers, and a second big wave. When I saw this, I immediately thought: that looks like an excitable medium, a stochastically driven excitable medium. It very much resembles things you might see in neural cultures or in certain noise-driven chemical reactions. So I was very excited by this. How are these waves produced? Again, this is a startle response, but slightly different from the one in the lab. It's a collective diving response: essentially, when the fish respond like this, they collectively dive down, and what you see in the surface videos is the splash they generate by suddenly diving down with the startle response. We can use this type of surface video to quantify the macroscopic activity, and this is just one example where we track the developing wave fronts and in this way obtain statistics on the stochastic behavior of these waves. And we actually have evidence... So let's see if we recover him again.
So we know we can... Sorry, Pavel, sorry to interrupt you. We lost you when you were saying that you can characterize the wave fronts of the stochastically driven waves. Yes, okay. I can maybe restart the video again. Sorry, now my computer is not responding. So yeah, we can quantify this. Based on this analysis, we have evidence that this is actually an excitable, noise-driven system. We can quantify the size and the speed of these waves, and we can show, for example, that the largest waves span up to 30 square meters in area, which means up to 100,000 fish participating in a single wave. We can also quantify how long this enhanced waving activity lasts before the system returns to the resting state, and there seems to be something like a collective memory in the system lasting several minutes: the enhanced activity after a bird attack, or a fake bird attack, persists for up to two or three minutes. But we can do more in this system. We can even do close-ups, and I apologize for the low quality, but this is a real field recording. On the left-hand side is a real recording of such a wave, in close-up, and if you look very closely you can actually see individual fish in there. If I let it run, you can see in slow motion a wave spreading through the system as a surface perturbation. What we can do now is take these videos and use, for example, convolutional neural networks to obtain the positions of the fish just before the wave spreads through. So we can get the positions and the networks in a similar way as we did in the lab for the startle cascades.
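As a very rough illustration of how macroscopic wave activity could be read out of perspective-corrected surface footage, here is a frame-differencing sketch. This is my own simplification for illustration; the actual field pipeline (wave-front tracking, CNN-based fish detection) is considerably more involved.

```python
import numpy as np

def wave_areas(frames, thresh=0.1, px_area=1e-4):
    """Estimate the per-frame active area (in m^2) of a spreading surface
    wave from a stack of grayscale frames, via simple frame differencing.

    frames : (t, h, w) array of perspective-corrected frames
    thresh : intensity change above which a pixel counts as perturbed
    px_area: real-world area per pixel, from the perspective correction
    """
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    active = diffs > thresh                 # pixels perturbed between frames
    return active.sum(axis=(1, 2)) * px_area  # peak ~ maximal wave extent
```

Summing the perturbed pixels per frame gives a time series of wave extent, from which quantities like maximal area and front speed can be derived.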
And here is such a network, reconstructed from real positions, and we can run our complex contagion model on these real networks to ask, for example, how the speed of the wave, the size of the wave, or the probability that the wave propagates through certain regions depends on the density of the fish, and compare this to the actual empirical observations. This is something we are currently working on. It's work in progress, but we are very confident that we may get some really interesting results on this behavioral contagion in a real, wild system. And this brings me to the summary of my talk. Basically, I gave you two parts. Sorry, somehow my slide doesn't change, it's stuck on the video; I hope you enjoy it. So the summary would be: I showed you two examples of our recent work. The first part was about the coordination-responsiveness trade-off in collective information processing, and especially the role of limited attention capacity and how it may change, in an unexpected way, the collective information-processing capabilities. The second part was about behavioral contagion in fish, in particular this special type of escape cascade. What we could show in the lab is that the collective response to increased risk seems to happen almost exclusively, or at least in our experiment exclusively, through adaptation of the collective structure, the network structure, and not through adaptation of individual sensitivity. And the question is how general this really is for other biological systems.
And you can imagine other biological systems where agents cannot easily change their relative positions, so for such a system to collectively adapt to a changing environment, it would make much more sense to change the behavior of the individual nodes instead of changing the network. We are really happy and looking forward to seeing, or learning of, other examples in nature where this could be the case and where we can test this hypothesis. And last but not least, I gave you a bit of an outlook on what we are currently doing in our field research, where we look at these large-scale escape waves, which we can now study in the wild. I hope we will have a first publication on this rather soon, so if you're interested, stay tuned. And with this, thank you very much... well, I should thank all my colleagues. So while we wait for Pablo to come back: we're now going to open it up for questions; you can either ask directly or write in the chat. So Pablo, we lost you when you were acknowledging the collaborators. Okay, so not much was missed there. I think I also acknowledged the funding agencies, so you see their logos. Thank you very much, Pablo, that was an amazing talk. Thanks a lot. Thank you. So, time for questions. Yeah, can I ask something, please? Of course. Sure. So, I seem to understand from the first part of the talk that when you have limited attention capacity, then clusters are more likely to form, and then you have this kind of directional swimming. Is this something more generic in statistical mechanics? I mean, when you have nearest-neighbor or short-range interactions, are you more likely to see clusters, or is it just something you observe in your system? Have you seen it before anywhere else? So, I think it's pretty generic for this type of flocking model with nearest-neighbor interaction.
Especially if you have this low number of nearest-neighbor interactions. Surprisingly, not many people look at this from a statistical-physics point of view. We actually have a follow-up paper on this project and are currently finalizing a preprint, which we hope to publish within a few weeks, where we look at the same dynamics not from a collective information-processing point of view but from an active-matter, statistical-physics point of view. Surprisingly, people haven't looked much at the role of K for the spatial structure that evolves on larger scales. So the clustering seems to be a generic feature of this K-nearest-neighbor interaction. If you have another topological interaction (topological meaning it doesn't depend on distance, only on who your ranked nearest neighbors are), such as a Voronoi interaction where you interact with the first shell of neighbors, that one is typically spatially balanced, so you have neighbors in all directions, and there you don't see such strong clustering. One reason is that with Voronoi you always interact with about six nearest neighbors, which in 2D is typically the first shell. There has been other work by the group of, I think, Cavagna, where they looked at spatially balanced nearest-neighbor interactions, and for that type of model, which is much closer to Voronoi, you would also see less clustering. But as long as you interact with only very few agents, or alternatively, for a metric interaction, with a very short interaction range, you will see strong clustering. For example, the Vicsek model classically has a metric interaction: you have a certain interaction range and interact with all neighbors within that range. The smaller your interaction range, at small noise, the stronger the clustering you get, essentially.
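The K-nearest-neighbor variant of the alignment dynamics being discussed can be sketched as follows. This is a generic Vicsek-type update with topological interactions, written from the standard model definitions; parameter names and the exact noise form are my choices, not the specific model of the talk.

```python
import numpy as np

def knn_flock_step(pos, angles, k=1, speed=0.05, noise=0.1, box=10.0, rng=None):
    """One update of a Vicsek-type model where each agent aligns with its
    k nearest neighbours (topological, not metric, interaction).
    Small k tends to produce strong spatial clustering."""
    rng = np.random.default_rng(rng)
    n = len(pos)
    # pairwise displacement with periodic boundaries (minimum image)
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)
    dist = np.hypot(d[..., 0], d[..., 1])
    np.fill_diagonal(dist, np.inf)
    nbrs = np.argsort(dist, axis=1)[:, :k]        # k nearest neighbours
    # circular mean of own heading plus neighbours' headings, plus noise
    ang = np.concatenate([angles[:, None], angles[nbrs]], axis=1)
    mean = np.arctan2(np.sin(ang).mean(1), np.cos(ang).mean(1))
    new_angles = mean + noise * rng.uniform(-np.pi, np.pi, n)
    new_pos = (pos + speed * np.c_[np.cos(new_angles), np.sin(new_angles)]) % box
    return new_pos, new_angles
```

Iterating this step with k = 1 versus, say, k = 6 and measuring local density fluctuations is the kind of comparison that exposes the clustering effect being described.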
So yeah, I think it's a generic feature; people have looked at it a little in the context of the Vicsek model, but not that much in the context of topological models like the K-nearest-neighbor interaction we used here. Okay, thanks. So, more questions? I do have a question. In the summary, and maybe I missed it when you were explaining, you mention some tragedy of the commons, a tragedy of common information in an evolutionary sense. What did you really mean by that? Yes, very well spotted. This is actually a mistake, because I had a couple of extra slides on something that was not part of the publication. Essentially, when you think of this collective accuracy or collective obstacle avoidance, it's a kind of group fitness, or you can build a kind of group fitness from it. If you say, for example, that coordinating, moving in the right direction, is beneficial for the agents, while being too close to the danger zones is costly, you can build from this a fitness function, and that would be a group fitness function. And there you can vary the parameters, the costs and benefits, and you see two distinct maxima: you can either be good at coordinating or good at avoiding obstacles. But this is still a group-selection, group-fitness setting. From a biology point of view, the question would always be whether it really holds if you have evolutionary adaptation at the level of individual agents. This is what we did in a follow-up project with a master's student of mine, who let each agent evolve its own attention capacity and then looked at which attention capacity actually evolves. We tested different fitness functions, but we chose one where the optimal attention capacity at the group level would be four.
So something in between, but definitely not one or two, which is what is typically optimal for coordination. That would be the group optimum. If you do the individual-level evolutionary adaptation, what you realize is that this is not the evolutionarily stable strategy; the evolutionarily stable strategy is again a minimal attention capacity of K equals one. So there's actually a tragedy of common information. I call it a tragedy of common information, but it's basically the same as the tragedy of the commons. As long as there are some individuals providing information about environmental cues because they have a higher attention capacity, it always pays for individuals to evolutionarily decrease their own attention capacity. So the free-riders always benefit, and eventually the evolutionarily stable strategy is one where everyone is a free-rider and everyone has only the minimal attention capacity. This is what we observe in the system. However, this is not part of the paper and it's not published yet. I see, thank you. More questions for Pavel? Yeah, if there are questions which someone maybe doesn't want to ask in this setting, as I said, I'm very happy to take them by email, or also via online chat if someone wants to talk more. Just feel free to contact me. Thank you. So if there are no more questions, thank you again, Pavel. We all really enjoyed your talk. Thanks a lot.