Okay, I think we're good to start. All right, so good morning, everybody. Welcome to this third and final day of the Statistical Physics of Complex Systems Conference. My name is Sebastian Gold and I'm going to be the chair of this morning's session. It's a particular pleasure for our first talk to have David Saad join us from Aston University. David, can you hear us? Yeah. Wonderful. We really appreciate that you joined us today despite some difficulties; we're really glad to have you. And I think you're going to talk about internet routing. Yeah. So if you can share your screen. Okay, let me just share. Yes, that looks good. All right, so you have 25 minutes and then we'll have about five minutes for questions. Okay. Brilliant. Thank you.

So first of all, I would like to thank the organizers for giving me the opportunity to tell you about our recent research on multi-wavelength internet routing. This work has been done in collaboration with Yi-Zhi Xu, who is a postdoc working with me, and with Ho Fai Po and Chi Ho Yeung from the Education University of Hong Kong.

A brief outline of my talk: I'm going to introduce very broadly the routing problem; I'll talk about three different variants of it, called node-disjoint, edge-disjoint and wavelength-switching routing; I'll explain the specific problems of optical communication, which carries something like 90% of internet traffic; and I'll introduce a scalable message-passing method for routing in all three scenarios. I'll then show some results, summarize, and point to future work. All of this work is on arXiv, as you can see at the bottom of the slide.

So what is the problem with routing? The routing assignment problem is NP-hard because it involves interacting non-local variables. The non-local variables are the routes themselves, and they interact in many different places, unlike many other hard computational problems such as graph colouring, vertex cover, K-SAT and so on, where the interaction is very localized; here the interaction is inherently non-local. Secondly, if you want to achieve any non-trivial objective, there is also non-linear interaction between the routes. For instance, if you want to avoid congestion, you want to spread the routes as much as possible over the network, and this means repulsion between the different routes. If, on the other hand, you want to concentrate traffic on specific routes, you want different routes to attract each other, consolidating the traffic so that you can perhaps switch off some edges that are not of interest. There has been some work in this area, and I should point to the spanning-tree and Steiner-tree problems that have been investigated mainly by the Turin group.

Clearly the internet is working, so people are using algorithms, but most of these algorithms are heuristic. They use routing tables that monitor the amount of traffic on each of the edges and run what are essentially weighted shortest-path algorithms. In wireless routing you also have geographic routing, where you consider the positions of the different transmitters; other heuristics measure queue lengths and try to minimize them greedily. The problem with all of these methods is that they are insensitive to the other path choices: each routing decision treats all of the other routes as mere background.
So you may inadvertently route many different communications through the same edges, because when you do the routing calculation there is nothing there yet; then you overburden specific edges, which leads to congestion, or, if the traffic is very sparse, to low-occupancy routes where you use a huge infrastructure for no need. There is also a principled algorithm in use, integer linear programming, but it is not scalable: people look at 30-node, maybe 40-node networks, but that's about it.

But you might argue: so what is the problem? The internet works. Yes, it does, but (I don't know why I'm not moving to the next slide, okay, yeah) here is the problem. This graph represents the growth of the internet over the years, and the horizontal lines mark the percentage of the overall world energy production that goes into running the internet. While in the 90s it was 0.1%, in 2000 it was about 1%, around 2015 it was close to 2%, and now we are close to 5%. If we continue at the same rate, by 2040 or so we would be devoting all of our energy to running the internet. So obviously one has to run the internet more efficiently and do whatever one can to improve it, and we are part of a consortium with Cambridge and University College London that aims at doing exactly that.

Let me briefly introduce the communication model. We have a graph comprising N nodes, indexed by Roman letters, and M communication requests, each of which has to go from a source to a fixed destination. We denote σ_j^ν = 1 if communication ν passes through node j, and 0 otherwise; we could do exactly the same thing on edges, but in this particular case I chose nodes. The traffic on node j is simply the number of communications that pass through it. What we can then do is optimise globally, minimizing a cost function of the flow, either on nodes or on edges. We can choose an arbitrary function, it really doesn't matter which one; for simplicity we chose a polynomial with power γ (a symbolic summary follows below). γ = 1 simply means shortest path: you are just counting the number of hops from source to destination. γ > 1 gives a convex function that heavily penalises high concentration on specific edges, so a large γ means you are trying to spread the load. γ between 0 and 1 is exactly the other way around: it is a concave function, and in this case you aggregate traffic in order to reduce the number of active edges.

Now, we've had many papers over the years on routing (there is a partial list on the right), but what I'm going to talk about today is a very specific type of routing: node-disjoint, edge-disjoint, or wavelength-switching, which adds another constraint. All of these problems have been addressed using a method called message passing, which I'm not going to elaborate on too much. It is based on passing conditional probabilities between neighbouring nodes, messages that inform your neighbours about your state, until these messages more or less converge to their asymptotic values; then you can infer the values of the variables and hence the routes themselves.
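In symbols, a hedged sketch of the objective described above (the notation Λ, H is an assumption of this write-up; the slide itself is not reproduced in the transcript):

```latex
% Traffic through node j, and the global cost to be minimized:
\Lambda_j \;=\; \sum_{\nu=1}^{M} \sigma_j^{\nu},
\qquad
\mathcal{H} \;=\; \sum_{j=1}^{N} \Lambda_j^{\gamma} .
% \gamma = 1: shortest paths (hop count);
% \gamma > 1: convex cost, routes repel (congestion control);
% 0 < \gamma < 1: concave cost, routes aggregate (traffic consolidation).
```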
As an example, let me describe node-disjoint routing. In node-disjoint routing you have M random communications, and what you insist on is that the routes do not cross: two different routes may not pass through the same node. You can describe the system in terms of receivers, senders and other nodes: each receiver has resource minus one, so it has to absorb a communication; each sender has plus one; and all of the others have no resource of their own, they neither produce nor absorb a message, so the route simply passes through them. We can denote the flow on edge (i, j) as plus one from i to j, minus one from j to i, and zero otherwise. In this particular case we have a single wavelength, say, and we insist on non-crossing. The aim is to minimize some global objective, a function of the flow, subject to non-crossing, and there are two constraints. One is that locally the residual resource on each node, for each message, is zero; this is in a way trivial, because if you are an ordinary node and a message comes in, it also goes out, and no other message passes through you, so the remaining resource is zero, while if you are a source or receiver you either send out a message or absorb one. The other is that the currents are integers, with I_ij = −I_ji.

This is work we did with Caterina De Bacco some years ago; it is actually very fitting that Caterina is the recipient of the Early Career Research Award, so this talk is exactly in the right place. Everything in this case is described in terms of probabilities of the flows of the different messages on each of the edges, and because of the constraint the flow on a node can only be zero or one, so there is no possibility of accommodating many flows through the same node. Usually message passing is phrased in terms of probabilities, but at the zero-temperature level we can map it onto a minimization problem over localized energies. As I mentioned, we iteratively update variables that are something like logarithms of conditional probabilities: we update recursively the function of the flow plus this log conditional probability, something like the localized energy of the descendants of the node, over the edges connected to it, under the constraint. In principle, each of the messages at a node j can either come into it, leave it, or not pass through it at all; these are the three cases described here. So in general we would have to consider 3^M states, since each message has three possibilities, which is quite problematic. Luckily, for a single wavelength, if one message passes through a node then no other can, so the space of 3^M possibilities is reduced to 2M + 1: one state where no message goes through the node, plus, for each of the M messages, the two directions it can take, from i to j or from j to i. This is the reason the calculation simplifies. However, for multi-wavelength routing this becomes really problematic, because you have to consider all possible wavelengths, and the computation becomes exponential in the number of wavelengths and messages.
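A minimal sketch of this counting argument (my own toy enumeration, not the authors' code): listing the joint message states at a node and keeping only those compatible with the node-disjoint constraint reproduces the 2M + 1 scaling.

```python
from itertools import product

def feasible_states(M):
    # Per message, a node sees one of three states: the message enters (-1),
    # leaves (+1), or does not touch the node (0). Node-disjoint routing
    # allows at most one message through a node, so only 2*M + 1 of the
    # 3**M joint states survive: all-zero, plus two directions per message.
    return [s for s in product((-1, 0, 1), repeat=M)
            if sum(x != 0 for x in s) <= 1]

for M in range(1, 7):
    assert len(feasible_states(M)) == 2 * M + 1
print("2M + 1 scaling confirmed for M = 1..6")
```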
I should also point out that in the first of these two papers, the one with Caterina as first author, the problem is node-disjoint; later on, Caterina and the Turin group followed with another paper on edge-disjoint problems.

So how is this related to optical communication? In real fibres, in what is called dense wavelength division multiplexing, you have about 156 different wavelengths, and the communication requests are typically of order N squared: you want to keep an open channel from every node to every node. Computationally this becomes really difficult, because as a consequence we have exponential growth in the number of messages, and several large-scale variables to consider that cannot all be reduced to depend on a single entity.

Just to recap node-disjoint, edge-disjoint and wavelength switching: suppose we have two wavelengths, represented here in different colours, and two messages, from j to k and from n to m. In the node-disjoint case we have to use two different wavelengths so that the two routes do not both cross the same node i. In the edge-disjoint case we can accommodate the two communications from j to k and from n to m on one wavelength, because they cross the same node but not the same edge. In wavelength switching you can additionally switch your wavelength at i, from red to blue and from blue to red, which offers greater flexibility.

So how do we actually do this? One can consider the setup as a set of parallel planes, each with the same topology in terms of the individual nodes, and we eventually assign each communication to one of these planes; but initially we pass messages from each of the communications to all of the nodes. Message ν starts from node i, for instance, so it is connected to node i, and we pass messages both from the communication itself to the nodes and from the nodes to one another. I'm not going to explain the equations (you'll be relieved), I just wanted to show you how they look. P_a^{ij} is the probability, for messages from node i along edge (i, j), of which wavelength is carrying which communication; and this is a very specific and easy case, because one possibility, the picture on the right, is that none of the edges connected to i carries any message, and the other, the left-hand figure, is that a message is delivered from node n to node i but bounces back to node n, so it doesn't actually go through the edge (i, j). Obviously the general case is more complicated, but if you are interested in the details you can go to the paper.

The other thing I wanted to point out is what one might call the edge-disjoint trick. In the edge-disjoint case we can consider, for instance, the communications between m and n and between k and n, and map the whole local problem onto a maximum-weight matching problem, which can be solved efficiently; this is a trick that Caterina and the Turin group came up with for the edge-disjoint case, and we use it as well (a toy illustration follows at the end of this passage).

The bottom line is that we managed to reduce the computational complexity of this routing task significantly. Remember that M, the number of message requests, and Q, the number of wavelengths, can both be very large. Specifically, if you look at the case where Q is of the same order as the number of vertices and the number of requests is of order N squared, the message passing is reduced to order M times Q per node, arguably the lowest order you can expect, because every node should consider all messages and all wavelengths. Experimentally you can see that this is the case on random regular graphs, as shown on the right.
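As the promised toy illustration of the matching trick: the local assignment at a node can be handed to an off-the-shelf maximum-weight matching routine. The node names and weights below are invented placeholders, and this is a sketch of the idea, not the authors' implementation.

```python
import networkx as nx

# Hypothetical local subproblem at a node: pair each incoming message with
# at most one outgoing edge so that no edge carries two messages
# (edge-disjointness). Weights would come from the message-passing fields;
# here they are placeholders.
w = {("msg_a", "edge_1"): 2.0, ("msg_a", "edge_2"): 1.0,
     ("msg_b", "edge_2"): 3.0, ("msg_b", "edge_3"): 0.5}

G = nx.Graph()
for (msg, edge), weight in w.items():
    G.add_edge(msg, edge, weight=weight)

# Maximum-weight matching: each message is paired with at most one edge.
matching = nx.max_weight_matching(G, maxcardinality=True)
print(matching)   # e.g. pairs msg_a with edge_1 and msg_b with edge_2
```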
What are the measures of interest? The first is latency: how many hops, or weighted hops, it takes to get from source to destination. Then the required number of wavelengths for a given number of requests, the number of wavelengths used per edge, and robustness. In general we examined this both on synthetic networks, such as random regular graphs and scale-free networks, and on real topologies: the US internet backbone network, called CONUS 60, and the UK BT network with 22 nodes; these are the main internet backbones in the two countries.

Just to make sure we are doing something sensible, we compared the performance of message passing for edge-disjoint, node-disjoint and wavelength-switching routing against linear programming, which can be done for small networks. You can see, for instance for edge-disjoint message passing on these two networks, that we get exactly the same numbers for the overall length of edges used and for the minimal number of wavelengths needed to accommodate the messages. So the table is essentially a sanity check: message passing works as well as the non-scalable linear programming.

We also looked at the capacity, the number of message requests we can transmit with a given number of wavelengths, and we obtained these results for node-disjoint, wavelength-switching and edge-disjoint routing. In general, the more ordered the network, like the random regular graph in blue, the more you can get out of it; on CONUS and BT, which are relatively densely connected but heterogeneous, the gain is lower.

This is a very interesting result: how much we gain in the minimal number of wavelengths needed when moving from a greedy algorithm to message passing, and likewise with wavelength switching. Why is this important? There was, and still is, a big debate in the optical communication community about whether it is worthwhile investing in wavelength-switching transceivers at the nodes, because they are much more expensive, and the question is how much one gains by doing so. At least these figures show that we get more or less the same performance whether we use wavelength switching or ordinary edge-disjoint and node-disjoint message passing. The same holds for edge-disjoint routing; all of this was done for the CONUS network of 60 nodes and 79 edges.

The one thing that at first glance might look disappointing is the overall average length of the communications. It is not really disappointing, because for simplicity we did not assign weights to the edges, so on all of these networks the route length is more or less of order log N; you cannot gain much unless you introduce weights on the edges, which can be done easily, but we have not done it, just for brevity. Sorry, this one was for node-disjoint and this one for edge-disjoint.

Just to clarify the computational complexity: there is polynomial growth in time (this is actual time on a computer) as we increase M and as we increase Q, for node-disjoint and edge-disjoint routing, and this is in the regime of large M, where M, the number of communication requests, is of order N squared.
Moving on, we can consider a more complex objective function. For instance, we may want to reduce congestion by spreading the load over edges, or to get rid of redundant edges by taking γ between 0 and 1. What you see here is the distribution of load per edge, for node-disjoint routing and, on the right, edge-disjoint routing, under the different objective regimes. Let's start with the linear case, γ = 1, the orange curve. The green curve, γ = 2, shows a higher peak at 2, meaning two different wavelengths occupied on these edges, and a decay as x increases. For γ = 0.5, in blue, you can see that the number of zero-traffic edges increases, and there is also an increase at the far right. The inset shows the increase, or reduction, in the most heavily loaded edges as we introduce the non-linear objective. This is for a random regular graph, and this for the CONUS network.

So all is nice and well, but there are many practical considerations to take into account when you introduce such an algorithm on a real network. The first is weighted edge costs: each edge has its own signal-to-noise ratio, inherent load and so on; this is quite easy to do, and we have included it in the work. Then legacy and restricted edges: not all fibres can accommodate all wavelengths, so we have to embed that in the real calculation; no problem, we have done that as well. Then dynamical allocation of new requests: in reality you cannot expect to reorganize all existing requests to make the best use of the resource, so each time you have, say, 100 new requests, you optimize only over those new requests given the situation as it stands; we are currently experimenting with our partners in the TRANSNET consortium, who are optical communication engineers, on how to do that in the best possible way. We also want to reduce the number of wavelengths used, so as to power down some of the equipment, and to address design problems: where is the right place to put new edges?

To summarize very briefly, because I know I'm running out of time: we employed methods of statistical physics to develop a multi-wavelength routing algorithm and to explore the properties of the algorithm and of the networks themselves; we derived a scalable optimization algorithm, as I hope I have demonstrated; and, very interestingly, we provide a relatively principled tool for comparing different routing regimes and different topologies, for instance in the debate about wavelength switching versus node-disjoint optimization. What next? We are currently looking at using these methods in real life, and there are other closely related applications: routing in wireless communication, 5G and 6G, where phones act as relays for messages, and VLSI design, where one wants the best possible routing across multiple layers to get the best performance. Thank you very much.

Thank you very much, David, that was a very interesting talk, a very good start to the day. Are there any questions here from the audience, or online in the chat? Maybe I can start with one question.
It's good to see some message-passing equations, but applying these message-passing algorithms in the real world is famously hard; if you are not in a Bayes-optimal setting, that brings some challenges. So when you started applying them, how did you do that?

Okay, that's a very good question. The challenges usually come in a regime where long-range correlations emerge, at high load, close to replica symmetry breaking and so on, and most optical communication networks run way below this limit. Even relatively densely connected networks still have a lot of spare capacity, so we never go beyond replica symmetry, which is the assumption of tree-like localized interactions and weak correlations between non-neighbouring nodes. That's how we manage to get away with what we're doing.

Oh, that's very cool, okay, thank you. Any other questions? I don't see any right now, so let's thank David again for his talk and his answers, and we'll move on to the second talk of today's session. Good morning; this is Richard Mini from Rome University, and he's going to talk to us about the statistical physics of financial networks.

Okay, thank you for the introduction; in the meanwhile, I'd like to thank the organizers for this invitation. All right, so this talk is, as you see, about the statistical physics of financial networks, and what I want to do today is give you a brief overview of this recent research field, with a focus on some recent results. In order to avoid getting lost, let me go straight to the key points, which are not visible, so let me read them. The first key point is that networks are very powerful tools to model interactions and interconnections within the financial world. In this respect, statistical physics is useful for two things: first, it allows you to obtain the topology of these interactions when this information is not known, which is the typical situation; second, it allows you to model the dynamics of propagation of financial shocks over this network structure. In the end, all of this together allows you to run stress tests on financial networks, tests that are actually used by regulators as of today.

There have already been many talks on networks, so I don't have to explain what a network is; let me just focus on the specific features of financial networks, in terms of what the links and the nodes are. There are many kinds of links, depending on the specific instrument exchanged between two financial institutions (banks, for brevity). The simplest interaction is a loan: one bank lends money to another bank; money, bonds, whatever. But you can also have more complex interactions, such as the particular situation in which bank A buys insurance from bank B against the default of bank C (a credit default swap); you see, this is a three-body interaction. Or you can have indirect interactions arising, for instance, from the correlation of portfolios. So, many kinds of links; in this talk we will focus on the simplest situation of direct loans.

Besides that, a particular feature of financial networks is that the nodes have an internal structure: the nodes are financial institutions, and financial institutions have what is called a balance sheet. The balance sheet is the table displayed here, with two columns. The column of assets contains the things that have positive value, like loans to other banks (the interbank loans), but also other things: bonds, stocks, whatever has positive economic value.
Then you have the column of liabilities, which are the debts towards the other banks, the customers, and the rest of the financial system. The difference between total assets and total liabilities is called equity, the net worth of the bank; this must be positive, otherwise the bank has failed. This internal structure, as you will see, is what generates the network, in a sense that will hopefully be clear in the next slide.

The point is that financial networks arise because of the very reason finance exists, which is leverage. Leverage is a simple concept that everybody here knows, but let me wrap it up with this nice example. Imagine we want to buy a house and we have 40,000 euros. The house costs 200,000 euros, so we need to borrow money: we borrow 160,000 euros (this is our debt) and we buy the house. Now we hold an asset worth 200,000 euros, and we have generated leverage: from our initial capital of 40k we now control an asset worth 200k, a leverage of 5. This mechanism is what amplifies gains in the financial system: imagine that the value of the house increases by 20,000 euros; this net gain of 20,000 euros is 50% of our initial capital, a huge gain. That is the good side of the coin; the bad side is that if prices go down, we have multiplied the loss in the same way. So leverage generates this amplification effect in the financial system. But the very fact that to have leverage you need to borrow money means that you generate a connection within the system: I am borrowing money from some other bank, and this allows shocks to propagate between me and that bank, because if at some point I fail, the bank that lent me money loses the value of this asset. I will explain this in the following, but this leverage is the key aspect underlying all models of financial contagion in this context.

Now, a few very brief historical facts on financial networks. The literature started around 2000. Before that, the traditional economics literature modelled the financial system as an aggregate entity in which interactions were neglected. Then came some papers, including a very famous one from a director of the Bank of England, and other early papers, which started to realize that the interactions are important. And then we had the global financial crisis, which was the proof that instability at the local level could trigger a crisis at the systemic level. The lesson we learned from the crisis was that it is mandatory to take these interactions into account when modelling crisis events, and network-based stress tests are now used in various financial institutions.

Let me describe the plot that is up now, a very interesting plot from a paper written about ten years ago by economists from the Bank of England. Using a very simple network model, they plot the probability of a systemic event as a function of the average connectivity of the network. This was the first evidence of something non-trivial in the system.
As you see, this curve starts from zero (with no connections there is no possibility of contagion), then it increases, and at some point it decreases again. The decrease is explained by the fact that if you diversify as much as possible, you are basically minimizing the risk of contagion, because you have many exposures, each of low value. The point is that when you actually measure financial networks, you are most of the time in the intermediate region, so taking these connections into account is important.

Going back to what I want to do, there are two steps: reconstruct the network, and then model shock propagation on it; at the end I will show some examples of practical applications of this framework. Why do we need to reconstruct the network? Because the public information that banks give you is the aggregate balance sheet: you know the total exposure of a bank towards all the other banks in the system, but you do not know the amount of each specific exposure. In other words, you know the total strength of each node, but not the specific weights of the links. To run contagion models you need the network, so the first thing to do is to try to reconstruct it.

Statistical mechanics is perfect for this task, because we have some information about the system that we want to preserve, the node-specific aggregate exposures, and we want to make the least biased guess on the structure of the network. So we can use the statistical mechanics of networks, in which the degrees of freedom are the links rather than the states of the nodes; apart from this, you do the classical ensemble construction, for instance a canonical ensemble. You just need to choose appropriate constraints for your system: we don't have positions and velocities here, so we need a meaningful assumption about what the constraints are, and in our case it is simply the data that we have; we are basically forced to build the ensemble this way. The fact that the constraints are node-specific means that we build an ensemble with local constraints, and this is an important point.

There are several ways to do this construction: depending on the specific constraints you impose, you get different models. It turns out that central bankers like the first and simplest of these models, for obvious reasons, and I want to explain it here very briefly. We do a two-step inference. First we try to guess the binary topology of the network, using a configuration model in which we constrain the degrees; but since the degrees are not known, we use a fitness ansatz, this one, for which the Lagrange multipliers associated with the degrees are taken proportional to the exposures, the variables we do know. We can check that this assumption is reasonable in some interesting cases for which we do know the data. The final outcome of this construction is a connection probability for each pair of nodes, which is just a function of the exposures of the banks, which we know. From this we assign weights according to what is a modified version of the gravity model of economics: there is no distance here, just the product of the two masses. As I said, this is a very simple and effective technique for generating an interbank network, and it was liked very much by practitioners in the field.
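A minimal sketch of this two-step recipe, with invented numbers; the calibration of the constant z and the exact weight normalisation are assumptions of this illustration, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Publicly known data: each bank's aggregate interbank exposure (toy values).
x = np.array([5.0, 1.0, 0.5, 2.0, 8.0])
N = len(x)

# Step 1 (topology): fitness ansatz. The Lagrange multiplier of each node's
# degree is taken proportional to its known exposure, giving link
# probabilities p_ij = z*x_i*x_j / (1 + z*x_i*x_j). The constant z would
# normally be calibrated to a target link density; placeholder value here.
z = 0.1
xx = np.outer(x, x)
P = z * xx / (1.0 + z * xx)
np.fill_diagonal(P, 0.0)     # no self-loans

# Step 2 (weights): gravity-style rule, product of the two "masses" with no
# distance term; dividing by p_ij keeps expected weights at the gravity
# values even though only some links are realized.
A = rng.random((N, N)) < P
W = np.where(A, xx / (x.sum() * np.maximum(P, 1e-12)), 0.0)
print(np.round(W, 2))
```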
You can do more interesting things. For instance, in this model we consider the links of the network as the particles, and the weights of the links as the energy, or the spin, or whatever generalized coordinate you want to attach to the particles. Thanks to the fact that you can map a weighted network into a lattice gas in an appropriate space (this space is called the triangular graph of the complete graph; it sounds involved but it is in fact simple), you take your original network, each link of the network corresponds to a node of the new space, and the weight of the link corresponds to the energy of the particle there. Once you have mapped your system into this space, you can do a traditional ensemble construction, and if you impose local constraints, like the degree and the strength of each node, you end up with a nice correspondence between local temperatures and local chemical potentials on the one hand and the Lagrange multipliers associated with the nodes on the other. I'm not going into the details; this is the slide where I show that this way of modelling a network works very well on systems like the World Trade Web and financial systems like e-MID, and starts to fail for networks like the C. elegans neural network, which derive from generation patterns presumably more complex than those behind financial networks.

In the last five minutes I want to show you how, once you have the network, you can implement the dynamics of shock diffusion. This plot is an illustration of the process, and I think it will be enough for now. Imagine you have this network of three banks. The banks are connected because, for instance, this asset of bank one corresponds to a liability of bank two: bank one lent money to bank two. Now imagine a shock on the first bank: the value of some external asset goes down, for instance the price of a security it owns. This shock on the assets is a shock on the equity, because the bank is now worth less than before. The problem is that this reduced equity becomes a reduced asset for bank three, the bank that was lending money to bank one. Why? Because if bank three wanted to sell its loan towards bank one, this loan would now be worth less, and this effectively reduces the equity of bank three.
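To make the mechanics concrete, here is a minimal sketch of this iterative write-down process with invented numbers; it is a cartoon of the mechanism just described, not the exact specification used in the talk.

```python
import numpy as np

# Toy interbank network: L[i, j] is the amount bank i has lent to bank j,
# i.e. an asset of bank i whose value depends on bank j's health.
# All numbers are illustrative placeholders.
L = np.array([[ 0.0, 30.0,  0.0],
              [ 0.0,  0.0, 20.0],
              [40.0,  0.0,  0.0]])
E = np.array([50.0, 60.0, 45.0])   # equities of the three banks

h  = np.array([0.3, 0.0, 0.0])     # relative equity loss; initial shock on bank 0
dh = h.copy()                      # shock that still has to propagate

# Linearised contagion: each round, bank i writes down its claims on bank j
# in proportion to j's newly arrived relative equity loss.
for _ in range(20):
    dh = np.minimum(1.0 - h, L @ dh / E)   # cap losses at full default (h = 1)
    h += dh
print(np.round(h, 3))   # total relative equity losses after the cascade
```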
This process, as you can imagine, is iterative, and it can be written in simple terms: this is the famous (at least in our field) DebtRank algorithm, which is general enough to include other kinds of shock-propagation channels. Here it is formulated for credit shocks, but there are other kinds, like funding shocks, and it is possible to extend the framework to account for them.

These are just a couple of examples of how DebtRank has been used by policy makers. This slide is from a presentation, I think at the European Systemic Risk Board of the central bank. In this plot each point is a bank: on the horizontal axis you have the total size of the bank in terms of total assets, on the vertical axis the DebtRank, so the higher, the riskier. You see that the relation is not linear; the risk is not simply proportional to size. The interactions amplify effects, especially in the region of the biggest banks, the too-big-to-fail ones, where you have a very large variation: big banks that are safe and big banks that are very risky. So this is a quantitative tool to measure how risky a bank is. We also did a similar exercise with CC&G, which is what is called a central counterparty: an institution that guarantees the terms of the loans between two financial institutions. Central counterparties were created basically after the crisis to try to tame systemic risk, but on the other hand they take risk upon themselves, so they need a way to assess the level of risk in the system.

Let me conclude, in the last three minutes, with a kind of question which arises from the fact that we often send papers to economics journals, sometimes talking about equilibrium. It has happened many times that there was a big misunderstanding with referees, because what we call equilibrium is totally different from what equilibrium means to economists. For us, in statistical physics, a system is at equilibrium if it is well described by a statistical mechanical ensemble subject to some constraints; in general we can speak of an equilibrium induced by a given set of constraints. The economic equilibrium instead derives from matching demand and supply, so the two are completely different things. But there can be a connection between the two concepts, and it derives from the fact that, in this context of financial networks, supply and demand are related to balance-sheet variables: the interbank assets are the supply, the amount of loans that banks can extend to others, and the demand is represented by the liabilities. So if we take these variables as the ones responsible for economic equilibrium, and if we observe a network which is at economic equilibrium, then we can use statistical mechanics, the ensemble construction derived from these constraints, to generate all possible configurations compatible with that same equilibrium. In economics this is called Walrasian equilibrium: an economy in which agents care only about their final allocation and not about the specific market configuration. In this sense, the statistical mechanics way of building networks allows you to build the networks compatible with economic equilibrium, and this is the first message.
Now we can ask: is the systemic risk that we measure on the real system compatible with what would be obtained at economic equilibrium? To answer this question we compare the real data with the statistical mechanics ensemble for this network, which is what we do in this plot. Very briefly: on the vertical axis you have systemic risk, on the horizontal axis a point for each year, and the comparison is between the full lines, the real data, and the shaded regions, the variation across the ensemble derived from the degree and strength constraints. You see that in some years there is compatibility, which would suggest that the network was at economic equilibrium at that time; this is the case, for instance, in 2007, the year before the crisis. In some years there is not, most notably 2009 and 2012. What happened in those years? 2009 was the depth of the crisis, when banks were really afraid of ending up like Lehman Brothers, so they were reducing their individual risk, and this caused an overall reduction of risk. The same happened in 2013 in the opposite direction: 2013 is when the European Central Bank started to basically lend money for free, so banks were more willing to take risks. These are two opposite situations: in 2009 risk is lower in the real data than in the ensemble, and in 2012 risk is higher in the real data than in the ensemble, that is, than at economic equilibrium.

So let me go to the conclusions. I hope I showed you, very briefly, that network modelling together with statistical physics allows you to build quantitative tools to assess systemic risk. There are several limitations. The first is data, as is clear from the fact that you need to reconstruct the network; then the assumption of static behaviour of the banks; and the fact that we only consider simple interactions. This connects me to yesterday: in reality you have many kinds of interactions, you have multiple networks, and you have more than pairwise interactions (you can have simplicial complexes in financial networks too), and this of course is promising ground for future research. So this is all; let me advertise some recent reviews and, more importantly, thank all the collaborators who contributed to the field over these years and were very enjoyable to work with. Thank you for your attention.

Thank you very much, another very nice talk. Do we have any questions from the audience? Edgar?

Hi, thanks for the great talk. I have a question about the canonical ensemble. You said the canonical ensemble model works very well and economists like it a lot; I'd like to hear, from an intuitive perspective, why you think it works so well, and also, what is the meaning of temperature in this context?

Yeah, okay, very nice questions. For the first one: let's say we are somewhat bound by the constraints we have to impose, because we can only use the information that we have to build the ensemble. There are, in principle, different ways to start from the matrix marginals and then build the network; this one is simply the one we found reproduces the network best. Concerning the temperature: you cannot make a direct parallel with the physical one; it is related to the Lagrange multipliers used to impose the constraints. In our ensemble it is the average weight of the network, that's it, and likewise the chemical potential is basically the density.
Okay, thank you. I have a small question. When you discussed the differences in what people mean by equilibrium in economics and in physics: another way to say in physics that there is equilibrium is to say that there are no net flows in the system, right? And isn't that very close to this idea that demand and supply match? Can't you make a connection through that route?

So you are asking, if I understood correctly, whether the connection between the two notions of equilibrium is the absence of net flows in the system? Yeah, basically. At equilibrium, flows would be responsible for shifting the equilibrium point, I think; I'm not an economist, but it should be something like that. The economic equilibrium is a more complex situation, because there are additional variables, like the price: the point where supply and demand match is the point that determines the price of an asset, and I don't have prices in this picture. So I think it is more complex than this, but I cannot say much more. Okay, okay, thank you. I don't see any other immediate questions, so let's thank the speaker again.

Wonderful. So for the last talk of this morning session we have Ada Altieri. Ada is an expert in glasses, and in her talk she's going to discuss the connections between glassy phases and ecosystems. Thank you, Ada.

Okay, so first of all I would like to thank all the organizers for this invitation; it's a great pleasure for me to be here in Trieste. Today I would like to discuss one of my current projects, which is essentially based on theoretical models of ecosystems in ecology. This talk is based on two papers: the first one was published last June, and the second one, a collaboration with Giulio Biroli, is going to appear in SciPost Physics in a few weeks.

The field of theoretical ecology has gathered momentum in recent years, on the one hand enriched by an explosion of experimental results and increasingly sophisticated techniques, and on the other hand fostered by crucial research questions that still need an answer: how does diversity affect the evolution of the interacting species? Can we detect cooperative patterns in real ecosystems, and are these patterns ordered or disordered? How does an ecosystem respond to an external perturbation, and is it possible to quantify these responses in a similar way as we do for physical systems at equilibrium? And do real ecosystems display equilibrium dynamics or more complex, even chaotic, dynamics?

The theoretical frameworks used in the past mostly focused on ecosystems formed by a small number of species, and mostly employed the theory of dynamical systems. However, when the number of interacting components, which can be species, firms in a financial market or a complex economic network, or neurons in a neural network, is extremely large, we can take advantage of statistical mechanics tools, the statistical mechanics of disordered systems, to provide an effective field description and to characterize emergent mechanisms and collective behaviours in terms of phase transitions.
For the sake of compactness I reported just a few works here, and I want to mention in particular the work in which the author proposed a reinterpretation of the well-known MacArthur model, a resource-competition model, in high dimension, that is, with an infinite number of species and an infinite number of resources. Together with Silvio Franz, we rephrased this model as a constraint satisfaction problem, or, for those familiar with computer science and machine learning, as a perceptron model in its non-convex phase, to highlight that constraint satisfiability can provide a mechanism, complementary to self-organized criticality or self-organized instability, for explaining collective behaviours in large ecosystems.

Anyway, today I would like to touch upon another question. The model I will discuss has recently been acknowledged to be a good platform, a good reference model, capturing many features of various community models, including notably cascade predation models, plant-pollinator models and resource-consumer models; the MacArthur model that I mentioned a few seconds ago can essentially be recovered by tuning its parameters. The starting point is the dynamical equation for the relative species abundances N_i, where i is an index running from 1 to S, and S is the total number of species in the pool. This Langevin equation contains, first, a self-regulation term, written as the gradient of a one-species potential, which in the case of the Lotka-Volterra model is quadratic in the species abundances: if a species i experiences overcrowding in the environment, it is pushed back towards its carrying capacity K_i. Then we have the interaction term, which I will detail in a while. As a source of stochasticity we have the noise η_i, a white Gaussian noise with zero mean whose amplitude T is directly proportional to N_i; for this reason it is called demographic noise, accounting for deaths, births and other unpredictable events, and, being white, it is delta-correlated in time and across species. We also introduce a weak immigration parameter λ, to ensure that all species remain present; at this first level of analysis it is species-independent, so λ_i = λ for every i.

The main assumptions needed to get an exactly solvable model are the following. We assume a well-mixed community, neglecting any spatial dependence. We treat the noise within the Itô formalism, which allows us to preserve the densities at finite times. And on top of that, to model the disorder, we take the interaction couplings α_ij to be random variables, extracted from a distribution parameterized by its first two moments, μ/S and σ²/S. With these hypotheses we are essentially following Robert May, who pioneered the study of stability and complexity in terms of random matrix theory techniques.

The simplest possible scenario one can think about is that of random symmetric interactions, α_ij = α_ji. In this particular case, the Langevin dynamics I showed before admits an invariant probability distribution governed by a Hamiltonian which takes this form:
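In formulas, a hedged reconstruction from the spoken description; signs, prefactors and conventions are assumptions of this write-up rather than quantities read off the slides.

```latex
% Random Lotka-Volterra dynamics with demographic noise and immigration:
\dot{N}_i \;=\; N_i\Bigl(K_i - N_i - \sum_{j(\neq i)}\alpha_{ij}N_j\Bigr)
          \;+\; \sqrt{2\,T\,N_i}\;\eta_i(t) \;+\; \lambda ,
\qquad
\langle\eta_i(t)\,\eta_j(t')\rangle \;=\; \delta_{ij}\,\delta(t-t') .

% For symmetric couplings the invariant measure is P \propto e^{-H/T}, with
H \;=\; \sum_i\Bigl(\tfrac{1}{2}N_i^{2} - K_i N_i\Bigr)
   \;+\; \sum_{i<j}\alpha_{ij}N_i N_j
   \;+\; T\sum_i \ln N_i
   \;-\; \sum_i \ln\theta\bigl(N_i-\lambda\bigr) .
```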
The quadratic term comes into play thanks to the quadratic one-species potential; then we have the interaction part; and then there are two further terms that appear when we perform the mapping between the dynamical equation I showed before and the static approach, by passing through the Fokker-Planck formalism. For people familiar with disordered systems, this can remind you of a spin-glass Hamiltonian, in particular a soft-spin version of, let's say, the Sherrington-Kirkpatrick model. To be more precise, the first extra term is the demographic one, which, as I said, is obtained by carrying out the mapping between dynamics and statics through the Fokker-Planck approach and is proportional to T ln N_i. Then, to model the immigration parameter, we add the logarithm of a Heaviside function: when N_i is greater than λ this term does not matter, because the log of one is zero and we don't care about it; but when N_i is smaller than λ the Heaviside function is zero, the log of zero is minus infinity, and with the negative sign this provides a reflecting wall at the boundary. So this term, properly introduced, is a formal mathematical trick to prevent species from reaching very small values, from getting extremely close to the extinction boundary.

This model was originally studied without demographic noise and without immigration by Guy Bunin in 2017, and by Bunin and collaborators using the replica method one year later. They obtained the phase diagram of the model, which takes this shape: σ is the heterogeneity parameter, I remind you, and μ is the mean interaction. What they observed is that when the interactions are small or highly homogeneous, the system is in a single-equilibrium phase; increasing the heterogeneity parameter σ, the system enters a multiple-equilibria phase. However, they did not go into the detailed features of the two phases, so we can ask: what are the typical features of these equilibria? Are they stable or unstable? And how many equilibria are there: exponentially many in the system size, or sub-exponentially many?

To better probe the features of the two phases and answer these questions, we reintroduced the demographic noise into the model and properly studied the interplay between this intrinsic source of stochasticity and the immigration parameter. One possible strategy is to resort to the replica method, a well-known technique for disordered systems. I don't want to enter into the mathematical details; let me just say that the working strategy is the computation of the free energy in terms of the log of the replicated partition function, eventually taking the analytical continuation n → 0, where n is the replica index. Since we are dealing with disordered quantities, the α_ij being random variables, we need to introduce two order parameters: Q_ab, called the overlap, which measures the degree of similarity between two configurations in replicas a and b; and H_a, which in analogy with a physical system would correspond to a sort of average magnetization, and here corresponds to the average abundance. Thanks to these two order parameters we can capture the emergent ruggedness of the landscape and study the properties of the equilibria in the different phases.
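In symbols, the two order parameters just introduced; the normalisation is an assumption of this write-up, not read off the slide.

```latex
Q_{ab} \;=\; \frac{1}{S}\sum_{i=1}^{S} N_i^{a}\,N_i^{b},
\qquad
H_{a} \;=\; \frac{1}{S}\sum_{i=1}^{S} N_i^{a}.
```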
In particular, we can use improved ansätze: the replica-symmetric ansatz, or the one-step replica-symmetry-breaking ansatz, which also gives us information on the size and depth of the basins of attraction. One of the main goals is to characterize the emergent properties and collective behaviours in terms of disordered phase transitions, and in particular in terms of glassy phases.

Starting from the phase diagram that, as I said, was obtained without demographic noise and without immigration, we can now add another dimension to the problem, corresponding to the amplitude of the demographic noise. What happens in this case? What I found is that there is no sensitive dependence on the average interaction parameter, so we can forget about μ for a while, fix it, and focus on this part of the phase diagram, reproducing a two-dimensional phase diagram for T as a function of σ. This is the resulting phase diagram, where I plot the amplitude of the demographic noise T against the heterogeneity σ. When the demographic noise is sufficiently high compared to the random interactions, the system is in a single-equilibrium phase, and the landscape is purely convex. Decreasing the amplitude of the demographic noise T, or increasing the heterogeneity parameter σ, the system enters a multiple-equilibria phase which, as you can see, is characterized by the emergence of a two-level structure in the distribution of the equilibria. Going deeper and deeper in T, or increasing σ further, the system crosses another instability line, in orange, and enters an even more complex phase, called the Gardner phase; I will explain shortly why this is important, especially for glassy systems.

To obtain this phase diagram I essentially looked at stability properties: I computed the matrix of second derivatives of the free energy with respect to the order parameters of the model, with respect to the overlaps Q_ab and Q_cd, and analyzed it on a suitable subspace, called the replicon in the replica jargon. This is the final expression for the replicon eigenvalue in the replica-symmetric phase, which gives information on the stability or instability of each phase: when the replicon touches zero, it gives the signature of an emergent criticality, and we are approaching the blue line. So we get quantitative results that allow us to reconstruct the blue line, the stability line of the single-equilibrium phase, and we can extend the computation to get the orange line as well, the stability line of the multiple-equilibria phase.

Another important question is how many equilibria there are in the multiple-equilibria phase. To probe the features of this intermediate phase between the blue and orange lines, I also computed what is called the complexity, or configurational entropy: the thermodynamic limit, S going to infinity, of the log of the number of equilibria at a given free-energy density, averaged, obviously, over the quenched disorder.
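Schematically, in standard disordered-systems notation (assumed here):

```latex
\Sigma(f) \;=\; \lim_{S\to\infty}\frac{1}{S}\;\overline{\ln\mathcal{N}_{\mathrm{eq}}(f)}\,,
\qquad\text{so that}\qquad
\mathcal{N}_{\mathrm{eq}}(f) \;\sim\; e^{\,S\,\Sigma(f)} .
```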
I plotted the curves of this complexity at different values of T in the intermediate phase, and I found that the complexity is strictly positive there, which, looking at the formula, means that the number of equilibria is actually exponential in the system size, in the number of species. This is a first quantitative outcome in this direction, and it is also extremely timely if we think of the stability-landscape and ecological-resilience concepts that have been proposed over the years to explain the apparent emergence of multiple attractors, multiple steady states, in the dynamics of real ecosystems.

More interestingly, at very low demographic noise or high heterogeneity we unveiled another phase, which I call the Gardner phase in analogy with glassy systems and spin glasses, meaning that each of the locally stable equilibria becomes marginally stable and gets shattered according to a hierarchical, fractal structure. Why is this so important for glassy systems and spin glasses? You can think of the single-equilibrium phase as a sort of liquid phase, in which the configuration space is completely ergodic: if you look at the dynamics, at the mean-square displacement, in the single-equilibrium phase it shows a ballistic regime followed by diffusive dynamics at long times. Then you cross the blue line, the instability line of the ergodic single-equilibrium phase, and the system enters a normal glass phase, meaning that the long-time dynamics is no longer diffusive but shows a plateau, whose height is proportional to the amplitude of vibration of a particle inside its cage, if you think of the analogy with glassy particle systems. Going deeper and deeper in T, the system crosses the other, orange line and enters the Gardner phase which, as I said, is characterized by a hierarchical, fractal organization of the equilibria. In this case the dynamics shows not a single plateau but a series of plateaus, each corresponding to a different time scale, to a different effective temperature, if you think of the analogy with out-of-equilibrium dynamical systems. This is particularly interesting because it was not anticipated, not predicted, in this context: as I said, with Silvio Franz we had studied another model, the MacArthur model in its high-dimensional version, and there we did not find any evidence, any signature, of such a replica-symmetry-breaking effect and of such a Gardner phase.

All the results presented so far concern a thermodynamic analysis, a static approach. What can we claim in terms of the dynamics? We also studied dynamical correlation functions: we look at the dynamical correlator of the abundances at time t and at time t′, averaged over all sources of randomness, that is, over species, over samples, and over the initial conditions and random interactions. What we highlighted is that in the single-equilibrium phase, for t greater than a waiting time t_w, of order 50, which is enough in this particular case, the dynamical correlator satisfies a time-translation-invariance property: it does not depend separately on the two times t and t′ but only on the difference t − t′. As you can see, if you plot the dynamical correlators for different t′, all the curves collapse onto the same value, the value predicted theoretically within the replica method: they all collapse to this q₀ value, which describes the size of the basin of attraction, basically.
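Concretely, the two-time correlator being described (normalisation assumed):

```latex
C(t,t') \;=\; \frac{1}{S}\sum_{i=1}^{S}\,\overline{\langle N_i(t)\,N_i(t')\rangle}\,;
\qquad
\text{single-equilibrium phase:}\;\;
C(t,t') = C(t-t') \;\xrightarrow[\;t-t'\to\infty\;]{}\; q_0 .
```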
So what happens in the multiple-equilibria phase? In the multiple-equilibria phase the situation gets more difficult, in the sense that you explore the basins of attraction of the most numerous, marginally stable equilibria, exactly as happens in mean-field spin glasses, and exactly as was pointed out by Cugliandolo and Kurchan almost thirty years ago in the p-spin model, a well-known disordered model in which they stressed the aging of the dynamics: the older the system, the longer it takes to decorrelate. Now, the model that I have presented so far, the Lotka-Volterra model with random interactions, has a tricky part: it is a purely competitive model, so we cannot describe cooperative interactions in this way. At this level we can wonder what happens if we also plug cooperative interactions into the system, so in the last, let's say, five or six minutes I want to describe these other models and how to model cooperative interactions in large ecosystems. We can assume as starting point the same dynamical equation that I showed before, but we slightly change the one-species potential V(N_i), assuming for it a cubic shape in the species abundance N_i. As you can see, there are then three fixed points; one is the carrying capacity, exactly as in the Lotka-Volterra model, but there is a new fixed point at m, which is called the Allee threshold. With this choice of the potential we now manage to model the so-called Allee effect, which describes a positive correlation, a positive feedback mechanism, between individuals: a positive correlation between population density and mean individual fitness, as you can see from this plot. The Allee effect is called strong, and is plotted in red, when there exists an Allee threshold, a positive value of this parameter m, below which the population goes extinct; it is called weak, shown in orange in this plot, when no Allee threshold exists but, thanks to the interactions, there is still an increase in the per-capita growth rate when the population increases. So you can think of it as a sort of small-population effect. It was asserted for the first time by the famous zoologist Warder Clyde Allee almost one century ago, who first observed that not only competition but also undercrowding can contribute to limiting population growth, and that this can be generated by many different factors, including mate limitation, genetic effects, social dysfunction and so on. Allee observed, for instance, a positive effect due to the aggregation of land isopods, which on the other hand, when isolated, tend to desiccation. There has been a lot of empirical evidence in many different populations, reptiles, mammals, aquatic populations, and more recently this effect has been gaining a lot of attention in synthetic biology, in microbial communities, and in basic epidemiological models: basic epidemiological models in the presence of a strong Allee effect can give rise to surprisingly rich dynamics, ranging from self-sustained oscillations to chaotic behaviors and also catastrophic collapses of the endemic equilibrium. So what we did is to introduce this cubic potential, which nonetheless slightly complicates the analysis because, at variance with the Lotka-Volterra model, our model is now associated with three fixed points.
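For concreteness, here is a standard textbook form of the single-species dynamics with a strong Allee effect; this is an assumed parametrization consistent with the description above, not necessarily the exact potential used in the talk:

\[
\frac{dN_i}{dt} \;=\; r\,N_i\left(\frac{N_i}{m}-1\right)\left(1-\frac{N_i}{K_i}\right) \;+\; \text{(interactions, noise)},
\]
with fixed points N_i = 0 (stable, extinction), N_i = m (unstable, the Allee threshold) and N_i = K_i (stable, the carrying capacity); the weak Allee effect corresponds to m less than or equal to zero, for which the intermediate threshold disappears.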
In the first plot I essentially put the fixed points of the two models. In the Lotka-Volterra model we have just two fixed points, one at zero; with the Allee effect, the curve in orange, the fixed point at zero, which was unstable, becomes a stable fixed point. So now the two stable fixed points are the one at zero, which corresponds to extinction, and the one corresponding to the carrying capacity K_i, but there is an intermediate fixed point, corresponding to this parameter m, the Allee threshold, which is unstable. If you rephrase this picture in terms of a potential for the mean abundance, you can think of the three different situations in this way: the Lotka-Volterra potential is basically framed within a parabolic, quadratic potential in the species abundances, whereas the strong Allee effect and the weak Allee effect can basically be described by a double-well or a single-well model. The strong Allee effect, which as I said is associated with a positive threshold, corresponds to a double-well potential, whereas the weak Allee effect, which corresponds to a negative or zero value of this parameter, is described by a single well. If you think of an analogy with spin glasses, it is like having an Ising model in which you introduce a generalized external field: thanks to this field you are basically tilting the positions of the two minima, tilting the potential, and you can switch from one situation to the other by passing through a spinodal transition. Given these premises, what can we claim, what can we state, in terms of the phase diagram and the responses of this system? Well, we did exactly the same study as for the Lotka-Volterra model, plotting the amplitude of the demographic noise as a function of the heterogeneity parameter, but with respect to the Lotka-Volterra model we found a completely different situation. Now there is no evidence of an amorphous, Gardner-like phase: we can detect just two phases, in blue for the strong Allee effect and in orange for the weak one, namely a single-equilibrium phase, corresponding to an ergodic configuration space, and the multiple-equilibria phase at very low demographic noise or high heterogeneity. The other important difference is in terms of the functional response: at variance with the Lotka-Volterra model, the replicon eigenvalue, the smallest eigenvalue, which is associated with the stability or instability of each phase, does now depend on the species abundances evaluated at the fixed point. The multiple-equilibria phase is marginal: we know from the replica method that this phase is marginally stable, in the sense that the Hessian of the free energy has a zero mode, which is this replicon eigenvalue, and as I said this replicon eigenvalue now depends on the fixed point of the dynamics, N*. So if we rephrase this condition, this expression for the replicon mode, as a condition on the probability distribution evaluated at this point N*, a condition on the probability distribution of this local curvature, then, depending on whether this quantity is greater than the denominator of the expression, we can get an extremely general condition for stability or marginal stability. Repeating: I essentially gather together the two terms of the denominator and call them an effective local curvature, and depending on whether this condition is greater than zero or equal to zero we get a prediction on the necessary condition for stability or marginal stability in the model.
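Schematically, and in notation of my own choosing, the criterion that the talk develops next can be summarized as a scaling condition on the distribution of effective local curvatures kappa at the fixed point N*:

\[
P(\kappa) \;\underset{\kappa\to 0^+}{\sim}\; \kappa^{\alpha}:\qquad \alpha > 1 \;\Rightarrow\; \text{stable}, \qquad \alpha = 1 \;\Rightarrow\; \text{marginal stability (logarithmic corrections appear)} .
\]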
We used two different approaches, one based on the dynamical cavity method and another based on a Gaussian model of the coupled potential, and we looked at the lowest typical eigenvalue of this effective potential, in such a way that the system can sustain, can maintain, stability also when multi-species excitations are taken into account. If we assume that the probability of the local curvature scales with the local curvature to an exponent alpha, we discovered that these local curvatures should essentially have a power-law behavior with an exponent greater than or equal to one. This means that when alpha is greater than one the system is stable even when multi-species excitations are taken into account, and when alpha is equal to one there is a marginal-stability condition, meaning that logarithmic corrections to the power-law behavior start to emerge. This provides somehow a generalization of an argument that was proposed in a different field, in spin-glass models, by Anderson, Palmer and co-workers, using a sort of stability criterion with respect to the number of spin flips that you can apply to the model while maintaining the stability of the system. The condition that we found here, on the probability distribution of this effective potential, is more powerful and more general, because it also allows us to take into account continuous degrees of freedom, at variance with Ising-like spin models. So, to conclude, I presented today two different models. One is the Lotka-Volterra model with random interactions, in which we detect different multiple-equilibria phases, both from a thermodynamic analysis and from a dynamical approach based on dynamical mean-field theory, and also a Gardner phase which, as I said, is extremely interesting in light of glassy systems because it is an amorphous phase associated to a hierarchical and fractal structure in the organization of the system. An important point that I did not have time to mention is that these Lotka-Volterra models can be generalized to replicator dynamics, to random replicator models; if you are interested, there are more details in the paper. In the second part of the talk I tried also to model intraspecific cooperative effects, which can be particularly beneficial for biological and ecological communities, and from which I detect completely different universality classes. For the future, it will be interesting to extend these results to asymmetric interactions and to try to introduce a notion of space, which can give rise to surprisingly rich dynamics, such as traveling waves and cooperative patterns, in this kind of system. Thank you very much for your attention. Thank you, Ada. Are there questions from the audience? Eric? Thank you for a very content-rich talk; there was a lot of material. I have two questions, if I am allowed. The first one: there seems to be a flavor here of multi-player games, of finding equilibria in very high-dimensional games, which I know is a wide field in economics and so on. So my first question is: do you see any relation to this?
Yes. As I said, this first model, the Lotka-Volterra model, can be generalized to these random replicator models, which have been shown by different authors to also be glassy, and there are people in the audience, like Matteo Marsili, working on similar, connected models both in economics and, let's say, in game theory. So there are works in this direction, and the only tricky part is that, for the moment, people are considering fully connected models with just a deterministic dynamics; the next step is to try to also model a sort of stochasticity in these models. But for sure there is a direct connection with game theory, and there are a lot of studies in this direction that try to point it out and to stress the relationship. In the paper that I mentioned, the first paper, the PRL, we also worked out a direct mapping between the two models, the Lotka-Volterra model and the replicator dynamics that is studied a lot in these other contexts. Okay, may I ask one more question? It is a more general one. I am not that familiar with these particular models, but there are related models in population genetics, which I know better, and there you can have phases where you effectively have a high-dimensional dynamics which satisfies detailed balance, and other phases where it does not; in population genetics the mutations have to be of a very particular type for the dynamics to satisfy detailed balance, but generally they do not, and I guess it is the same thing with migration in your model; I am not sure, but I think so. So my question is more general, about game theory and about the possibility of having dynamics which does not satisfy detailed balance: would this change anything in counting the number of equilibria, once you go outside the regime where equilibrium statistical mechanics applies as a technique? Okay, to answer this question I want to come back to this part of the model. This dynamical equation assumes, let's say, a demographic noise that scales in this way: it is a multiplicative noise that satisfies detailed balance. Now, you can try to model the noise in a different way, such that its covariance does not scale as N_i but as the second power of N_i; this allows us to violate detailed balance and also to obtain more interesting behaviors, like power-law behaviors. In this particular case, when we model the demographic noise in a different way, still a multiplicative noise but with a different dependence on N_i, with the second power of N_i, we should expect power-law behavior in the abundance distribution. This is particularly interesting because there are several examples in planktonic communities or microbial populations in which several authors have observed a sharp separation, let's say, between many rare species and a few abundant ones, and the rare species seem to show a sort of power-law behavior with an exponent that has only small variations; and we did do the computation.
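Schematically, and in notation of my own choosing rather than the speaker's, the two noise scalings contrasted in this answer are:

\[
\langle \eta_i(t)\,\eta_j(t')\rangle \;\propto\; T\,N_i\,\delta_{ij}\,\delta(t-t') \quad \text{(demographic noise, compatible with detailed balance)},
\]
\[
\langle \eta_i(t)\,\eta_j(t')\rangle \;\propto\; T\,N_i^{\,2}\,\delta_{ij}\,\delta(t-t') \quad \text{(stronger multiplicative noise: detailed balance violated, power-law abundance tails)} .
\]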
We have time for maybe one last question. Marco, maybe, while I go over there, can you have a look whether there are any questions in the chat from the online audience? In principle no, I don't see any. Alright, so, thanks for the excellent talk. Concerning this slide: at the beginning you said there is an assumption on the alphas, that there is reciprocity? Okay, yes: to obtain the exact predictions, the exact thermodynamic analysis, I assume that the alpha_ij are symmetric, alpha_ij = alpha_ji. In this particular case we can do all the computations within the replica method, because we can associate this dynamical process with a static process: we have an equilibrium-like probability distribution. Otherwise, if the alphas are not symmetric, we can use approaches like the dynamical cavity method or dynamical mean-field theory, which have been studied in this case. One can then wonder what the predictions are in terms of the phase diagram. For the single-equilibrium phase and the multiple-equilibria phase we should expect exactly the same picture, because there the equilibria are stable, but when you introduce asymmetric interactions this Gardner phase should be completely wiped out, because of the marginal-stability property that characterizes the equilibria there; this was studied also in related contexts. Okay, I am sure there are more questions, but I think we can move the rest of the discussion to the coffee break. Let's first thank Ada again, and all the speakers of this morning's session, and I propose, since I know some of you have connections to catch, that we reconvene at 11 o'clock for the last session of talks. Okay, welcome to the last session of the conference. I remind you that this is the last opportunity to discuss questions, also for the online audience. We will start with two contributed talks which, I remind you, are 15 minutes, 15 plus 5. I leave the stage to Gonzalo. Thank you; I would like to first thank the organizers for this wonderful opportunity to participate in person in this conference and for allowing me to present a talk. My name is Gonzalo Manzano, and I am going to talk about the thermodynamics of gambling demons, which is essentially a new version of the Maxwell demon in which feedback control is not allowed anymore, but the demon is allowed to stop the dynamics at convenient stochastic times. This work has been done in collaboration with people at ICTP, Rosario Fazio and Edgar Roldan, and with people in the Pekola group at Aalto University in Helsinki: Diego Subero, Olivier Maillet and Jukka Pekola. This is the outline of the talk: I will start with some introduction and motivation for our work; then I will introduce the concept of the gambling demon and some technical results that we obtained; then I will discuss an experimental test of the results in a single-electron box; and finally I will give you a few points about how this theory can also be extended to the quantum case, along quantum trajectories. Okay, let me start with a brief reminder about the Maxwell demon, which I also introduced yesterday. The Maxwell demon is a thought experiment introduced by Maxwell in 1867, after studying the kinetic theory of gases. He imagined a gas split into two chambers, A and B, at different temperatures, communicating only through a small trapdoor in the middle, which can be open or closed. Maxwell imagined a small intelligent being, later called the demon, which, by looking at the positions and velocities of the particles and implementing some control of the trapdoor, is able to sort the molecules, so that it may challenge the second law of thermodynamics: in particular, when the demon sees some fast particles in the cold chamber, it may open the trapdoor and let them pass to the other side, and the same for the slow particles in the hot chamber, so that after some time the cold chamber gets colder and the hot chamber gets hotter, in clear contradiction with the second law of thermodynamics.
Another related situation was introduced by Szilard in his 1929 paper; Szilard actually did his PhD thesis on the Maxwell demon. This is also a thought experiment, the first model of an information engine, in which he imagined a single-particle gas in a chamber that is operated in a cycle with four steps. In the first step a wall is introduced in the middle; in the next step a measurement is performed to see whether the particle is on the left or on the right, the outcome being known by this small intelligent being, the demon; and then an isothermal, reversible expansion is implemented in the last steps towards the opposite side. In this cycle a small amount of work, equal to kT log 2, is obtained from the heat coming from the external reservoir, so again in contradiction with the second law of thermodynamics. Although Szilard already pointed out that in practice the measurement step would entail some thermodynamic cost, the full exorcism of the demon was not available until information theory was fully developed by Claude Shannon and others. Following this informational exorcism, the violations of the second law by Maxwell demons are only apparent violations once one takes into account the cost of information processing: according to the argument put forward by Charles Bennett, using Landauer's erasure principle one can see that the erasure of the information stored in the demon's memory entails a minimum work cost of exactly kT log 2, since it is a logically irreversible operation. So there is no free lunch for the Maxwell demon. The further development of these ideas has led to a more comprehensive framework for the thermodynamics of feedback control, where the information acquired or stored appears explicitly in the non-equilibrium inequalities, and this framework invites us to think about information also as a resource, as is nicely captured in the concept of information reservoirs, implemented for instance in this nice paper by Mandal, in which you can perform thermodynamic tasks that would otherwise be impossible by using a low-entropy memory in which you can store information. I will also recall the talks yesterday by Neri and Alexia, and on Wednesday by Sara, which all provided different examples of this framework of information and feedback control. Much of the theoretical development was also triggered by very nice experiments that implemented for the first time the thought experiments of Maxwell and Szilard in different platforms, for instance colloidal particles in optical traps or electronic devices, but also quantum systems, like nuclear spins using spectroscopy techniques, or circuit-QED systems. Let me just remark that the Maxwell demon has these two basic ingredients: first, the demon gathers information about the microscopic state of the system, and then the demon uses this information to apply some feedback control over the system, which is just controlling the trapdoor, or the piston in the case of Szilard. The question that we asked in our work is essentially: what happens if the second ingredient is not present anymore? We consider what happens if the demon is not allowed to perform feedback control but is only allowed to stop the dynamics, to perform this minimal action of stopping the dynamics at some convenient time. This is somehow similar to what happens with players in a casino, who can decide either to continue the game or to quit it, but are not able to change the rules of the game: for instance, they are not able to shake the roulette if they do not like the result.
So this is essentially the configuration that we consider. We have a system in contact with a thermal environment at inverse temperature beta, and a fixed protocol lambda is implemented on the system, which entails some work. Some degree of freedom of the system is monitored, so we can see trajectories of the system, and the demon can eventually stop the process using what we call a gambling strategy, which means that when a certain condition is verified for the first time the demon stops the dynamics; otherwise the process continues until the end of the protocol, which is this time tau. So we have this fixed interval of time, and the demon can stop either before or exactly at the end of the interval. The time at which the process stops is called the stopping time; it becomes a random variable between zero and the final time tau, denoted with this calligraphic T. This is analogous to a gambler at a slot machine: the demon bets some work, which we represent with these silver coins, to drive the system out of equilibrium, with the hope of recovering more work, or more free energy, than the demon invested. What happens if the demon plays this game for the fixed amount of time tau, until the end of the protocol? Sometimes the demon wins, sometimes the demon loses, but on average it is bound to lose, because of the second law of thermodynamics. The question here is what happens when we do not wait until the final time, but the demon is allowed to stop the trajectories at especially convenient times, based on the information at hand; we ask whether this inequality can be inverted, and of course those are averages over trajectories of different lengths. We answer this question in the positive, and indeed derive universal bounds that quantify how much the inequality can be inverted, and also stronger fluctuation relations at the stopping times. To do that, we use the standard framework of stochastic thermodynamics with one extra ingredient, which is the use of martingale theory. So what is a martingale process? A martingale is an especially useful kind of stochastic process, defined by this condition; martingales were introduced by Paul Levy in 1934. What this condition means is essentially that the best guess for the average of the process M at time tau, conditioned on observations of the process up to an earlier time t, is just the last value of the process at the time the observation was made. This is a very interesting property which, in particular, implies Doob's optional stopping theorem, which we will use to obtain our results, and which essentially states that the average of the process over trajectories, evaluated at the stopping times, equals the average of the process at the initial time.
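In symbols (my notation), the martingale property and Doob's optional stopping theorem used here read:

\[
\mathbb{E}\!\left[\,M(t)\,\middle|\,\{M(s)\}_{s\le t'}\right] \;=\; M(t') \quad \text{for } t' \le t,
\qquad\qquad
\mathbb{E}\!\left[\,M(\mathcal{T})\,\right] \;=\; \mathbb{E}\!\left[\,M(0)\,\right]
\]
for a (bounded) stopping time calligraphic T.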
Martingale processes have been studied a lot in the mathematics of finance, where they are used to model equilibrium markets, in which you cannot profit from fluctuations in supply and demand, but they have also recently been shown to be useful in stochastic thermodynamics. In particular, it has been pointed out that the exponential of minus the entropy production is a martingale in non-equilibrium stationary processes, which is this equality here, and I have to recall that this is stronger than the usual fluctuation theorem for entropy production: indeed, you can derive the fluctuation theorem by just taking t equal to zero, so that on the left-hand side we have the normal average and on the right-hand side we get the exponential of minus the initial entropy production, which is zero, so we get one. This has been developed in these references here, and also in others that I did not have space to mention. So this is true for non-equilibrium stationary processes, but it is no longer true for systems that are generically driven out of equilibrium. In order to address that kind of situation, what we had to do was find another martingale process, which is related to the entropy production but also has an extra term, and this is what allows us to obtain this stopping-time work fluctuation relation, in which, as you can see, the exponential contains a first term, in blue, which is the entropy production (in our setup it is just the work minus the non-equilibrium free-energy change during the process) and this term delta, which I will comment on in a moment. Using this stopping-time fluctuation theorem we can also derive a second-law-like inequality, which provides a bound on the entropy production at the stopping times and whose right-hand side, remarkably, can be negative when delta is positive, so that you get something very reminiscent of the non-equilibrium inequalities with information in the case of feedback control. In this case, however, the delta term has nothing to do with the mutual information: instead, it is a stochastic version of the Kullback-Leibler divergence between the probability density of the process in the forward dynamics and the one in the time-reversed process, evaluated at the stopping times. We call it the stochastic distinguishability under time reversal, and to better see its meaning you can look at this picture. Down here you have the initial density of the process, from which the initial states of the trajectories are sampled; then, as the fixed driving protocol is applied over time, this probability density evolves into some rho of x at time tau. In the backward process the initial state is sampled from this final distribution, after just changing the sign of the variables that are odd under time reversal, and if you apply the time-reversed protocol you will not get the same probability density but a different one: this stochastic distinguishability essentially measures the difference between these two probability densities at the stopping times.
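Written out schematically, with beta = 1/kT and with the signs as I read them from the published gambling-demon results (so treat this as a paraphrase rather than the slide's exact formula), the stopping-time fluctuation relation and the associated second-law-like inequality are:

\[
\left\langle e^{-\beta\left(W(\mathcal{T})-\Delta F(\mathcal{T})\right)\;-\;\delta(\mathcal{T})}\right\rangle \;=\; 1
\qquad\Longrightarrow\qquad
\left\langle W(\mathcal{T})-\Delta F(\mathcal{T})\right\rangle \;\ge\; -\,kT\,\langle \delta(\mathcal{T})\rangle ,
\]
where delta(T) is the stochastic distinguishability under time reversal described above.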
We tested our results in a single-electron box, implemented by the Pekola group in Helsinki. Here you can see an electron micrograph of the system: it is essentially a copper island, colored here in red, in contact with a thermal reservoir given by these two terminals, which are aluminium superconducting leads maintained at a temperature of 0.67 kelvin. Then we have a gate voltage, with which we can implement the driving protocol, and a detector that tells us whether there are zero or one excess electrons on the island. The idea is that this is the Hamiltonian of the system, an effective two-level system in which you can have either zero or one excess electron on the island. What we do is digitize the current measured by the detector, which is capacitively coupled to the island, so that we can tell whether the state is zero or one over time, and from that we can measure the work and the heat during the processes in which we vary the external gate voltage: we move the offset charge of the Hamiltonian following a linear ramp. What we essentially implement are strategies based on work thresholds: we stop a trajectory if we have invested more work than a given threshold W_th, and if not, we wait until the end of the linear ramp. These are the experimental results: you can see the second-law-like inequality on the left and the fluctuation theorem on the right; the dots are the experimental data, the curves are the numerical simulations, and the shaded areas are the experimental errors. You can see that the experimental data follow the simulations quite well despite the errors, and that you can really obtain a negative average of the work minus the free-energy change for small thresholds; on the x-axis there is the magnitude of the threshold that we use to stop the trajectories, and you can see that you can really gamble effectively in this situation. In the right part of the right figure you can see that, in order to recover the fluctuation theorem, you really need to include this delta term inside the exponential; otherwise the fluctuation theorem is no longer fulfilled. These are some histograms that we also obtained for the stopping times and for the work probability distribution, at different values of the threshold. For small thresholds, a lot of trajectories are stopped at the beginning, in the middle and at the end of the process, and in that case the work probability distribution has a lot of negative values; in particular the average is negative, so you can extract work in this situation, which would otherwise be impossible. As you start to increase the threshold, all the probability distributions shift to the right, so eventually the average becomes higher, and for very large thresholds almost no trajectory is stopped before the end, because the threshold is very high, and you then recover the usual probability distribution that you expect for the work, with its average above the change in free energy.
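As a toy illustration of such a work-threshold gambling strategy, here is a minimal sketch: a driven two-level system with Glauber-like jump rates under a linear ramp, where a trajectory is stopped the first time the invested work exceeds a threshold. This is not the experimental protocol, and every parameter value is an illustrative assumption:

```python
import numpy as np

# Minimal sketch of a gambling strategy on a driven two-level system.
# Assumptions: energy gap ramped linearly, jump rates obeying detailed
# balance, work accumulated while the ramp moves the occupied level.
rng = np.random.default_rng(0)
kT, tau, dt, n_traj, w_th = 1.0, 1.0, 1e-3, 5000, 0.5  # illustrative values

def gap(t):
    return -2.0 + 4.0 * t / tau  # linearly ramped energy of the excited state

stopped_work = []
for _ in range(n_traj):
    state, w = 0, 0.0  # start in the ground state with zero invested work
    for step in range(int(tau / dt)):
        t = step * dt
        if state == 1:
            w += gap(t + dt) - gap(t)  # work done on the system by the ramp
        # jump probabilities with Boltzmann ratio p_up/p_dn = exp(-gap/kT)
        p_up = dt / (1.0 + np.exp(gap(t) / kT))
        p_dn = dt / (1.0 + np.exp(-gap(t) / kT))
        if state == 0 and rng.random() < p_up:
            state = 1
        elif state == 1 and rng.random() < p_dn:
            state = 0
        if w > w_th:  # gambling demon: stop once invested work exceeds W_th
            break
    stopped_work.append(w)

print("mean work at stopping times:", np.mean(stopped_work))
```

Scanning w_th in such a toy model reproduces the qualitative trend described above: small thresholds stop many trajectories early and push the average work down, while large thresholds recover the fixed-time statistics.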
Okay, so, how much time do I have? Okay, well, maybe another day I can also tell you about the quantum case, but let me just go to the main conclusions. We introduced this gambling demon, which stops a non-equilibrium thermodynamic process at a stochastic time, according to some stopping strategy that uses information about the system; it does not require feedback control, and because of that its applicability should be easier in more situations. We derived classical and quantum universal fluctuation relations at stopping times, and we tested the classical ones in the single-electron-box experiment. The results here are for a specific situation, with one thermal bath and driving, but in principle they can be extended to more general situations, with several baths and so on. We would like to explore what happens if you apply these kinds of ideas to small heat engines, and also whether one can derive optimal strategies for stopping the dynamics, and it would also be very nice to make this situation autonomous, like an autonomous gambling demon, which is something we would like to work on in the future. So, essentially, that's all; thank you very much, and you can look at this reference here. Let me also take the opportunity to point out some recent related work, also based on martingale theory, that we put on the arXiv yesterday. Right, there is time for a quick question. Thank you, Gonzalo, for the nice talk. Since your gambling demon is responsible for the fact that the length of the trajectory is not fixed, I wonder whether one could see this as a sort of fugacity or chemical potential in a grand-canonical ensemble, and if so, whether all the things you have done with martingale theory could have been done by looking at particular observables and studying them in the grand-canonical ensemble, instead of using martingales. Yeah, I think they can be applied also in the grand-canonical ensemble, because the results are based essentially on the entropy production, which can also be defined in that case; but instead of just having the work there, you have different reservoirs, so for instance you will also get extra terms from the heat flows and so on. You can nevertheless have a stopping-time fluctuation theorem like this one for the entropy production of the process. Sorry, perhaps what I was asking was not that clear: could you see your gambling demon as related to a sort of fugacity or chemical potential? Not to my knowledge. Okay, thank you. Okay, let's thank the speaker again. My name is Benjamin Walter, I am a postdoc here at SISSA, and just before getting started with extreme events of non-Markovian processes I would like to take the chance to say thank you to the organizers for bringing us together in real life despite all these obstacles. Today I am going to talk about extreme events in non-Markovian processes; all of what I am going to talk about today is contained in this preprint and in previous work that came out in PRR in January, and this is mostly material that I developed together with my PhD supervisor at Imperial College London and with Guillaume Salbreux, who is now a professor at the University of Geneva. Okay, before getting into the details, I would just like to explain what I mean when I say extreme events; it is not exactly a trademarked expression, so for that I have a little sketch. In red you can see a one-dimensional continuous stochastic process, and there are a couple of things you can ask about this process concerning its extreme behavior, the records that it breaks. As I just learned with the gambling demons, you have the question of the first-passage time: for this, you define a barrier at a height x1 and ask for the first time tau(x0, x1) at which your process, started at x0, reaches this critical threshold. Another question you can ask is about the running maximum: this is the bright green line on top, and the running maximum at time t is just the highest value that the process has attained up to that time; equivalently you can define the running minimum, and the difference between the two goes under various names, for example the span, the range or the volume. As you can see here, these are all questions that have been addressed pretty much since we have had stochastic dynamics; these are questions that throughout the last century have continually fueled the development of new tools in stochastic dynamics, and actually yesterday evening I found out that Schrödinger, as you can see, published on these problems while stationed with the Austrian army in Gorizia, which is thirty kilometers from here. So what do all of these have in common? Together they really help you understand the kind of records that are broken by a system, and what we are interested in is the distribution of these extreme events: explicitly not only the averages but all moments, the full distribution.
So I would like to show you a nice little trick, and this is the object called the visit probability q(x,t). I draw exactly the same trajectory, but now I have added a blue shaded area: the blue shaded area is the set of all points that the stochastic process has seen up to time t, the area visited by the process. This area is of course itself a random variable, strongly coupled to the trajectory, but it has a couple of nice features. Suppose you know q(x,t); then you can, for example, take the derivative with respect to time, and what you do then is weigh the area that has been visited at t + dt but not at t, which is exactly the area first visited at t. The same you can do by taking the derivative with respect to space: there you weigh the area that has been visited at x but not at x + dx, and this is minus the density of the running-maximum distribution. Finally, you can integrate out space, and this gives you the mean span, the mean volume explored at a certain time t. All I want to do with this slide is convince you that once you know q(x,t) you really know a lot about the extreme events of the stochastic process; by the way, for those of you who are familiar with it, the visit probability q(x,t) is the complement of the survival probability. So let us have a look at a Markovian example, simple Brownian motion, just to get a feeling for this. Simple Brownian motion, defined here through its Langevin equation, has a visit probability that you can give in closed form, one minus the error function, and you can convince yourself that with the operations I mentioned earlier you get all the correct distributions of the first-passage time, the running maximum and the mean volume explored. Just to get a feeling for what q(x,t) measures: at t = 0 you essentially have a delta function; well, it is not a delta function, it is one at the origin and zero everywhere else; then, as time progresses, the process blooms open, and for t going to infinity what you get is the constant one. Okay, so this is the case of simple Brownian motion, which has probably been understood since 1915 or even earlier.
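In formulas (my transcription of the operations just listed, with D the diffusion constant of the Brownian example):

\[
\partial_t\, q(x,t) = \text{first-passage density at } x,\qquad
-\,\partial_x\, q(x,t) = \text{density of the running maximum},\qquad
\int dx\; q(x,t) = \text{mean span},
\]
and, for simple Brownian motion started at the origin,
\[
q(x,t) \;=\; 1-\operatorname{erf}\!\left(\frac{|x|}{\sqrt{4Dt}}\right).
\]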
Of course, what we now want to understand is what happens if we go beyond this Markovian regime. There are many ways to do this; what we did is induce non-Markovianity by adding correlated noise. So this is the same process as before, but now I have this green process y(t), and I just say that y(t) is a stationary process with some kind of correlation function; I want y(t) to be of order one, purely for cosmetic purposes, so that I can express the strength of non-Markovianity with a small dimensionless parameter g. So g is now my coupling parameter, and I expect it to be small. What does it mean? It means that the random walker experiences kicks at different times s and t, but now these kicks are correlated, so there is memory in the system. I can then repeat the experiment I did before, and I can see that the black line, which is now a simulation of the curve I showed you earlier, acquires some corrections. These are small values of g squared; I chose an exponential correlator here, for various reasons, but the formalism I am going to present does not depend on the particular choice of correlator. Anyway, the point of the slide is that if you switch on memory this curve shifts, but it shifts only a little, and this motivates us to assume that a kind of perturbative expansion is possible. What do I mean by this? I mean that a little bit of memory is almost the same as no memory plus a little correction: the visit probability is the Markovian result, plus g squared times a correction, plus g to the fourth times a correction, and so on and so on. Now, first of all, I abandon all hope of getting a closed-form result for q(x,t), but what I can at least attempt is to compute, for example, the first correction, q2. So what is the result we are after, and what is the intuition behind it? If you take this discrepancy here and rescale it by one over g squared, what you get is a collapse: this green line is the collapsed correction, and we did this for two other values of beta, where beta is the autocorrelation time of the driving noise. What you can see in this plot is that the non-Markovian corrections are non-trivial, in the sense that they are below zero for short distances and above zero for long distances, so we cannot really expect there to be an effective Markovian description; but there is some hope that we can capture this leading behavior.
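Here is a small numerical sketch of this setup, with assumed parameter values and an Ornstein-Uhlenbeck choice for the exponentially correlated driving; g = 0 recovers the Markovian baseline from the error-function formula above:

```python
import numpy as np
from math import erf, sqrt

def visit_probability_mc(x, t, D=1.0, g=0.0, beta=1.0, dt=1e-3,
                         n_traj=2000, seed=0):
    """Fraction of trajectories whose running maximum reached x by time t.
    The walker feels white noise plus an optional Ornstein-Uhlenbeck drive
    y(t) of strength g (stationary, unit variance, correlation time 1/beta)."""
    rng = np.random.default_rng(seed)
    n_steps = int(t / dt)
    hits = 0
    for _ in range(n_traj):
        pos, y = 0.0, rng.normal()  # start y in its stationary distribution
        for _ in range(n_steps):
            y += -beta * y * dt + sqrt(2.0 * beta * dt) * rng.normal()
            pos += sqrt(2.0 * D * dt) * rng.normal() + g * y * dt
            if pos >= x:
                hits += 1
                break
    return hits / n_traj

x, t = 1.0, 1.0
print("Monte Carlo, g=0   :", visit_probability_mc(x, t))
print("Markovian formula  :", 1.0 - erf(x / sqrt(4.0 * 1.0 * t)))
print("Monte Carlo, g=0.5 :", visit_probability_mc(x, t, g=0.5))
```

The time discretization systematically misses some crossings, so the g = 0 estimate sits slightly below the closed-form value; the shift of the g > 0 estimate relative to g = 0 is the quantity that the perturbative expansion organizes in powers of g squared.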
Okay, and this is exactly what we did, and we did it using field theory. I do not want to speak too much about the field theory; I just want to give you a short overview of the method. This is a rather technical slide, but the idea is, as I mentioned at the beginning, that we have two statistical objects: the random trajectory, in red, and the visited area, in blue. We use standard field-theoretic tools to describe the trajectory, here in red, with the field phi, and you can see that this part is essentially governed by the associated Fokker-Planck equation; I should say, as a footnote, that we did this for generic potentials and generic driving noises. The second statistical object is the trace, and the trace is an extremely boring object, in the sense that once a site has been visited it stays visited forever: this is the Fokker-Planck equation of an immortal particle. Then, of course, you have to understand how exactly the red trajectory produces blue area; this is somewhat more complicated and is contained in this interaction, and you can look at these references to understand where these terms come from. Finally, the fourth contribution is the non-Markovian action, the orange part below, and you can see here that the random walker at some time s2 and at some time s1 is no longer independent: the two times communicate through the correlator C2 of the driving noise. On the right-hand side, for those of you who are familiar with field theory, you can see some diagrams, and what these diagrams represent are exactly these contributions to the action; I am happy to discuss all of this afterwards or in person, and the details are in the manuscript. Okay, so what is the main outcome of this? The main outcome is that you can use this to ask for the visit probability in field-theoretic language: without again going into much detail, the question you are asking your system is, given that I start a red trajectory at x0, what is the likelihood that I find the point (x1, t) within the blue area? These are creation fields, so you create a random walker, and then, in the quantum-mechanical analogy, you destroy a trace at (x1, t), and this average gives you exactly the probability of having visited (x1, t). Now, if you remember how this path integral works, there are a couple of things you have to do: you basically expand all the non-Gaussian terms and evaluate them in a Gaussian average, and if you expand the non-Markovian action, what you get is just the expansion of an exponential. You can then identify the g-to-the-zero term as exactly the object of the Markovian theory, and all higher orders just naturally pop out of the field theory. Glossing over all the details, what I am saying here is that field theory is in itself a tool for understanding deviations from Gaussian averages, and in this case it gives you a recipe for generating these q_n's to arbitrary order. I restricted myself to g squared, but there is no reason to do so; you could go to g to the eighteenth. So the main result is basically a recipe, an algorithm: what comes out is q_n, the full distribution corrected to n-th order, and what you need as input is a full understanding of the Markovian theory, that is, the transition probability and the return probability of the Markovian process at g equal to zero, plus the statistics of the driving noise. The nice thing about this is that if you want to understand the correction only to order g squared, you only need to know the second moment of your driving noise; generally, if you want the first n corrections, you need the first n moments of the driving noise. So the more you want to know about the corrections, the more you need to know about the statistics, but if you are just interested in the leading correction, the correlator is enough. Overall, this gives us a systematic, controllable recipe that can generate arbitrary correction terms. What I did not talk about is that you do get quite complicated expressions, but the diagrams explain very precisely what is actually happening at the microscopic level, and I also did not talk about the extensions: we can treat, for example, a linear potential, we can handle the Ornstein-Uhlenbeck process, and so on. Okay, let us go back to our initial problem. This is exactly the same plot I showed at the beginning, but now the solid lines are the field-theoretic prediction to order g squared, the leading-order predictions, and as you can see they work extremely well in this regime, and this is only order g squared; if you want to access higher values of g, you have to put in some more work and go to higher orders, but what this shows is that you can predict the leading order. And, as a reminder, I am not talking about moments of the first-passage time, I am talking about the full distribution: if you take the respective derivatives, you get the full distributions, and there is no approximation in, let us say, beta, the autocorrelation time, nor any kind of diffusion limit; this is the full story. You can also ask the question the other way around: what I showed you was for a fixed value of time as a function of space, but you can also fix a distance x1 and look at the time t. This is the same experiment: here I plot the Brownian result, the complement, the Markovian result, and here you can see some discrepancies when I switch on the driving noise; you can rescale them, and here again you can see that they coincide nicely. Of course there are some discrepancies left; part of that is just numerical noise, but part of that is the higher-order corrections, g to the fourth. Okay, so maybe let me summarize at this point.
What we did is look at a particular class of non-Markovian processes that are relevant to active matter or, apparently, to gambling demons, where you place a particle into a potential with thermal white noise, and on top of this thermal environment you now add a little bit of memory in the form of a self-correlated driving noise. So y(t), again, has to be stationary, but it does not have to be Gaussian; for example it can be telegraph noise, if you like. What comes out of this procedure is a recipe where you put in the Markovian transition probabilities and the moments of y(t), and out comes the n-th order correction to the visit probability and, as a consequence, to a variety of other extreme-value observables. Because it is a field theory, we are not constrained in the things we want to perturb around, so we could, for example, also look at interactions: what is the first-passage time of n interacting Brownian walkers subject to self-correlated noise? All of this is possible; it is a modular theory, so you can just plug in the interactions you like. And, as a more general summary, field theory is a very apt tool for thinking about memory in stochastic processes. Okay, with that I would like to close, and I would like to thank you for your attention, as well as my co-authors; and again, if you have any curiosity about the technicalities, please have a look at this preprint. I am very happy to take your questions. Okay, I have a question. Very interesting. The field theory you mentioned is then a one-plus-one-dimensional field theory? Because in principle you could also have, well, what the particle physicists do is four-dimensional, or eleven-dimensional, but yours is one-plus-one-dimensional? Yes; in fact there is this other paper where we looked at d-dimensional problems, this is the Bodeo et al paper, so you can of course also study the area that is visited in a d-dimensional volume. And my second question: you mentioned the standard diagrammatic expansions that you have in field theory; what do they look like, what is their interpretation, let's say, in describing the degree of non-Markovianity? Yes, so maybe just very briefly, and then we can discuss it later in more detail. About the Markovian theory: the red line here is basically the propagator of the particle, and this we call a trace, the process in which a particle deposits a trace because it has visited a new site. But then you are over-counting, and you have to subtract the returns, when the particle comes to this point for the second time, because that does not count; but then you are under-counting, so you have to add the probability of having had three visits, and then you over-count again, and so on. So what you do is sum all of these diagrams, and what this says is that the probability to be at some point for the first time is the probability to be there, minus the probability to have been there twice, plus the probability to have been there thrice, and so on; then you can use these Dyson summation tools, and this gives you the Markovian result, which has been known since the fifties. But your question was how the memory enters into this: the memory vertex I drew here means that the kick at s2 and the kick at s1 are correlated by the correlation kernel, so you now have the task of combining this Markovian expansion, which you can do exactly, with these loop decorations, which you cannot do exactly; but this can happen in four fundamentally different ways, and so you have these four terms.
Then it is a bit of a technical exercise, but once you have understood how these work, you can resum them into essentially the main result, which I did not mention, which is basically this formula that comes out when you collect all these four different forms: you have to perturbatively correct your transition probability in order to get the new q function, and here is the formula for how to do this; you can see that you need the two-point function of the driving noise. So yes, there are some diagrammatics behind it; I hope that is okay. There is time for a very brief question. Thanks for the very interesting talk. You did not comment on the properties that the memory should have, or should not have, in order for this to work well at first order; of course g squared should be a small parameter, but what about long-range memories? Yes, thank you for the great question; I actually have the right slide open already. Essentially, what C2, the Fourier transform of the correlator of your noise, has to satisfy is that it keeps this integral finite, so there is basically a trade-off between the decay properties of the transition probability of the Markovian process and C2. I have not yet found an elegant way to describe the exact properties, but the clearest answer I can give is that C2 has to ensure that this integral remains finite, and this is indeed the case if, for example, C2 is integrable in Fourier space; then you are definitely fine, which is the case for the parameters used above. Yeah, thank you. Right, so let's thank Benjamin. Hello, the next speaker is Kunihiko Kaneko, and I remind you that the last speakers will be online and that it will be 25 plus 5 for this talk. Okay, do you hear us? Yes. Okay, yeah, thank you very much for the invitation, and it is quite unfortunate that I cannot come on site; I visited three years ago, but since then it has been quite difficult. Okay, so today I talk about a topic that is maybe rather familiar for most people here, I guess: universal biology in adaptation and evolution. Universal biology, maybe that is a kind of new word, but my standpoint on universal biology is as follows. A living system generally consists of diverse components: for example, in a cell there are many, many different molecular species, an enormous number of different kinds of molecules, but still the cell can maintain itself and continue to reproduce itself. So maybe here we can posit some kind of guiding principle, a kind of micro-macro consistency: at the microscopic level there is a huge number of different components, so a very high-dimensional space, while at the macroscopic level there is, for example, the cell, a unit that sustains and reproduces itself as a whole. With this constraint of robustness at the macroscopic level, there may be some kind of low-dimensional description of the huge, high-dimensional microscopic space. That is quite similar to the spirit of thermodynamics versus statistical physics, but we are not in an equilibrium system, so we need something new, and that is what we aim at with universal biology. The point here is the steady-growth state, robustness at the macroscopic level, and the same idea can be applied from cell to organism, or from organism to ecosystem. With this viewpoint we are working on cell reproduction, multicellular organisms and evolution, but today I mainly focus on the consistency between the molecular level and the cell level, which leads to a dimensional reduction of high-dimensional phenotypic dynamics for adaptation and evolution.
From this we obtain a response theory, and we can also have a fluctuation theory, through a fluctuation-response relationship that is extended to evolution, but I am not sure I will have enough time to talk about this. Anyway, let me start from the basic setup of this study. Consider, for example, a cell; in a cell there is a huge number of components: in bacteria there are about 4,000 or 5,000 messenger RNAs, or corresponding protein species. That is the phenotypic space, very high-dimensional. The genotype is the rule that determines the gene-expression dynamics or the chemical-reaction dynamics; it is governed by the DNA sequence, so it gives the rule for the dynamics, and this genotype changes through evolution, the phenotypes change accordingly, and there is some kind of selection. That is the basic, standard picture. I will talk a little about experiments, basically on the bacterium E. coli, using transcriptome analysis, which measures messenger RNA concentrations, and I will discuss a model of high-dimensional chemical-reaction dynamics and some theory. Let us start from a rather trivial law. Consider a cell consisting of many components; for the cell to grow, by the cell-division time almost all components have to be roughly doubled, otherwise the cell cannot sustain its original state: it is a kind of stationary growth state. For example, consider species i = 1 to M, where M is 4,000 or so, and let N_i, the abundance of each component, grow exponentially with growth rate mu_i; to keep this kind of steady growth, all the mu_i have to be equal to the common growth rate. But mu_i is generally a function of all the other components, and still it has to satisfy mu_1 = mu_2 = mu_3 = ... = mu_M, so there are M minus 1 conditions. If you try to change the cell while keeping a steady-growth state, without differentiating into other cell types, keeping the original state through the growth-and-division process, there are M minus 1 conditions, so within this M-dimensional space the change that keeps steady growth should be just one-dimensional: there is a strong constraint. We can write out this rather trivial condition. Let x_i be the concentration of each component, the abundance divided by the volume V, where the volume grows exponentially with rate mu, which means each component concentration is diluted at this rate mu. That is the equation for the chemical composition, where f_i includes all the complicated reactions and degradations, so f_i can be a complicated function of all the x_j; the steady state means that this right-hand side is equal to zero. For convenience it is often useful to use log x instead of x, the log concentration: dividing f_i by x_i, we can rewrite the equation in that form, and then the steady-state condition is just given by hat-f_1 = mu, hat-f_2 = mu, and so on, with mu identical across all components. Of course mu can change with the environmental condition E, where E is, for example, a stress strength; by changing E this equation still has to be satisfied, but mu can change.
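In equations (my transcription of the steady-growth condition just described):

\[
\frac{dx_i}{dt} \;=\; f_i(\mathbf{x}) \;-\; \mu\, x_i,
\qquad
\frac{d\ln x_i}{dt} \;=\; \hat f_i(\mathbf{x}) - \mu, \quad \hat f_i \equiv \frac{f_i}{x_i},
\]
so that the steady-growth state requires
\[
\hat f_1(\mathbf{x}^*) = \hat f_2(\mathbf{x}^*) = \cdots = \hat f_M(\mathbf{x}^*) = \mu,
\]
that is, M minus 1 conditions, generically leaving a one-dimensional manifold of steady-growth states.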
Now consider starting from the original state and applying some stress: originally the stress is zero, and then some E, some stress, is added. Assuming the change is rather small, we can linearize everything, as is typical in physics or mathematics. We can write things down using the Jacobi matrix of the f_i, together with the change of f_i with the stress strength E, which is a kind of susceptibility; delta x_i is the change in the log concentration and delta mu is the change in the growth rate, and in this linear regime we can solve the equation, with L the inverse of the Jacobi matrix J. Now consider two stress strengths E and E' and compare them. Here we assume the linearization, which means the changes lie on this straight line; then, if we take E and E', this part is common between E and E', and from this we can get the result: the change in x_i under E and under E' differ by a factor that is independent of the component, so the changes across all components under the two stress strengths should be proportional. This is a kind of trivial consequence, but we can check it experimentally.
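Schematically, in my notation, with gamma_i the susceptibility of f_i to the stress strength epsilon and under the straight-line (linearization) assumption just stated, the derivation reads:

\[
\sum_j J_{ij}\,\delta X_j \;+\; \gamma_i\,\varepsilon \;=\; \delta\mu,
\qquad \delta X_j \equiv \delta \ln x_j,
\qquad \delta \mathbf{X} \;=\; L\left(\delta\mu\,\mathbf{1} - \varepsilon\,\boldsymbol{\gamma}\right), \quad L = J^{-1},
\]
from which, for two strengths epsilon and epsilon-prime of the same stress type,
\[
\frac{\delta \ln x_i(\varepsilon)}{\delta \ln x_i(\varepsilon')} \;\approx\; \frac{\delta\mu(\varepsilon)}{\delta\mu(\varepsilon')} \qquad \text{for all } i .
\]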
So we check it by simulation, taking a toy cell model with high-dimensional chemical-reaction dynamics. I will not go into the details of this model, which has been discussed in several papers, but the basic idea is that you have many different chemical components, with reactions in which X_i is converted to X_k catalysed by X_j; for example, here X5 goes to X2 catalysed by X1. There are many such reactions, and through them the catalysts themselves are produced and interconverted. Then we need some nutrients, so we assume there are several nutrient species: X0 goes to some X1 catalysed by something, X0 prime is converted catalysed by X3, and so on. We put all this in, and the reaction network is assumed to be determined by the genes, because which enzyme is produced is determined by the genes; that gives the rule for the dynamics. If the reactions work well, then by taking up nutrients the cell can grow, and we assume that when the molecule number exceeds some threshold the cell divides into two. We repeat this process, and at each division we make a slight mutation, meaning the reaction network is slightly altered: add some reaction path, delete some path, and so on. Initially we assume there are 10 resource chemicals, all supplied at identical concentrations, and we run the simulation with a stochastic reaction-network model, so some noise is included. Then we divide and mutate, divide and mutate, and after some generations the growth rate of the cells increases, since we select the faster-growing cells; that is rather natural.
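The model details were skipped in the talk, so what follows is only a minimal sketch in the same spirit: a random catalytic network playing the role of the genome, growth-rate selection, and a resource-composition stress. All sizes, the mass-action rate law, the mean-field integration and the greedy selection here are my simplifications, not the model actually simulated.

```python
import numpy as np

rng = np.random.default_rng(1)

M = 60           # chemical species (the talk uses ~1000; shrunk to run fast)
NUTRIENTS = 10   # resource species supplied by the medium
PATHS = 200      # catalytic reaction paths encoded by the "genome"

def random_genome():
    # A genome is the reaction network: rows of (substrate, product, catalyst).
    return np.column_stack([
        rng.integers(0, M, PATHS),          # substrate (may be a nutrient)
        rng.integers(NUTRIENTS, M, PATHS),  # product (internal species only)
        rng.integers(NUTRIENTS, M, PATHS),  # catalyst (internal species only)
    ])

def steady_state(genome, env, t_max=50.0, dt=0.01):
    """Integrate dx_i/dt = reaction fluxes + nutrient uptake - mu*x_i.

    mu is fixed so that sum(x) stays ~1, i.e. growth dilutes everything.
    Returns (concentrations, growth rate) after relaxation.
    """
    x = np.full(M, 1.0 / M)
    mu = 0.0
    for _ in range(int(t_max / dt)):
        flux = x[genome[:, 0]] * x[genome[:, 2]]   # mass-action, catalysed
        dx = np.zeros(M)
        np.add.at(dx, genome[:, 1], flux)          # synthesis of products
        np.add.at(dx, genome[:, 0], -flux)         # consumption of substrates
        dx[:NUTRIENTS] += env - x[:NUTRIENTS]      # uptake from the medium
        mu = dx.sum()                              # growth rate = net influx
        x = np.clip(x + dt * (dx - mu * x), 1e-12, None)
    return x, mu

def mutate(genome):
    # One random rewiring per division: replace a reaction path.
    g = genome.copy()
    g[rng.integers(PATHS)] = [rng.integers(0, M),
                              rng.integers(NUTRIENTS, M),
                              rng.integers(NUTRIENTS, M)]
    return g

env = np.full(NUTRIENTS, 1.0 / NUTRIENTS)  # all 10 resources equal, as in the talk
genome = random_genome()
for generation in range(50):               # greedy selection of faster growers
    candidate = mutate(genome)
    if steady_state(candidate, env)[1] > steady_state(genome, env)[1]:
        genome = candidate

# "Stress" = skewing the resource composition; two strengths of the same stress.
x0, _ = steady_state(genome, env)
responses = []
for eps in (0.2, 0.4):
    stressed = env.copy()
    stressed[0] *= 1.0 - eps
    stressed[1] *= 1.0 + eps
    xs, _ = steady_state(genome, stressed)
    responses.append(np.log(xs) - np.log(x0))  # delta ln x_i for this stress
# Plotting responses[0] against responses[1] is the analogue of the
# proportionality test described in the talk (expected only after evolution).
```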
What we can do next is ask whether this kind of global linear law holds in the evolved state, in comparison to the initial state. To check, we apply a stress, and the stress here is a change of the resource-chemical concentrations: initially, as I said, all 10 resource chemicals have the same concentration, and in a stressed environment one of them goes almost to zero while another is increased, or something like that. Then we can check whether the changes delta x_i under the environmental change behave as in the experiment, or as in the theory. Here is the result: we plot delta x_i under one stress condition against delta x_i under a different stress condition, quite similarly to the experiment, across many different chemical components. I forgot to say that in this simulation we have 1,000 chemical species, so there are 1,000 points here. After generation 150, that is, after evolution has occurred, delta x_i(E) and delta x_i(E prime) follow roughly a single proportionality coefficient, whereas in the initial state, the initial random network with low growth, there is no such structure; as evolution progresses, through generation 10 to generation 150, the linear relationship appears and the correlation increases through the evolution. So the initial question, whether this global proportionality is a result of evolution, we can verify.

Still, since we have 1,000 components it is very difficult to look at this 1,000-dimensional space directly, so we use principal component analysis. From the 1,000-dimensional space we take how the concentrations change across many different environmental stresses and project onto PC1, PC2 and PC3; in this case the contribution up to PC3 is rather high. After evolution, we take the evolved cell and apply one stress after another, so that each point is the result of a different environmental condition, and the points follow roughly a one-dimensional curve; in the initial random network they are just scattered. So a low-dimensional constraint on these changes appears after evolution. And we also find that if we apply a mutation to the evolved state, that change also follows the same curve.
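The projection step itself is standard principal component analysis; a small sketch, assuming one has collected the matrix of responses (the function name and array shapes here are mine):

```python
import numpy as np

def principal_modes(X, k=3):
    """PCA via SVD. X has shape (n_conditions, n_species): each row is the
    vector of log-concentration changes under one environmental condition."""
    Xc = X - X.mean(axis=0)                 # centre each species
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s**2 / max(len(X) - 1, 1)
    scores = Xc @ Vt[:k].T                  # coordinates along PC1..PCk
    return scores, var / var.sum()          # projections, explained-variance ratios

# After evolution the scores should fall on a roughly one-dimensional curve
# (first explained-variance ratio dominant); for a random network, scattered.
```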
From these experimental and simulation results we can propose a theoretical hypothesis. We have this very high-dimensional phenotype space, but good states are rare, and there is a lot of noise in the system. For the system to be robust, a perturbation should bring it back to the original state rather fast; there is strong stability, a strong attraction to the original good state. But this state also has to evolve; it is itself a result of evolution, and along the evolutionary path it should change rather easily, otherwise evolution would be difficult. So along most directions a perturbation relaxes back to the original state, and only along one direction can the state change easily; that is consistent with this picture. Assuming this kind of structure we can rewrite the previous theory, but now one eigenvalue is dominant: only one mode is very slow and all the other modes are fast. Keeping only that one mode, a simple calculation recovers the previous experimental and theoretical result. This separation of one eigenmode can also be seen directly, by measuring the inverse eigenvalues across the generations of the evolution: only for one mode does one over lambda increase, so one eigenvalue comes close to zero and is quite well separated from all the others, consistent with the theory.

But recall that the phenotypic changes due to environmental variation and due to mutation follow the same curve: across environmental change and across evolutionary change the system moves along the same curve. That means that instead of the previous argument with delta x_i(E) and delta x_i(E prime), we can consider delta x_i(E) and delta x_i(G), where G is a genetic change, a mutational, evolutionary change; these should also be proportional across the many components, with the slope again given by the growth-rate changes. We can check this directly in evolution experiments, and there is a nice experiment by Chikara Furusawa: he put E. coli under an environmental stress and let mutations accumulate over the generations, so that the growth rate recovers. We can measure the transcriptome, the concentration changes, at each generation, and then measure the slope of the change under the environmental stress against the change under mutation, the evolutionary change. Again we find a common slope; there are many scattered points, but again there are about 4,000 points and most of them lie along the line. We can also check whether this slope agrees with the growth-rate change, because we can measure the growth-rate change separately, from the original state under the environmental stress and after the evolution. With the growth-rate change giving the predicted slope, there is no fitting parameter, and the theory says the points should follow this line; over the generations (this point is one generation, this another) it works rather well. We can also check the same thing in the numerical model: apply a stress, then let mutational, evolutionary change happen, and compare the slope against the growth-rate change; it agrees rather well.

Okay, maybe I have to skip some of this, but the message is: phenotypic change lives in a high dimension, but the adaptive changes are drastically restricted to a low-dimensional, essentially one-dimensional, space, so phenotypic evolution is in some sense very much restricted to a low-dimensional space. In fact, in this experiment, looking at PC1, PC2 and PC3 over all the evolutionary courses: he repeated the experiment many times, and across the different runs the mutations occur at different sites, yet the phenotypic changes follow the same curve, the same low-dimensional constraint. So phenotypic evolution is rather deterministic, even though the genetic changes can be rather random, stochastic. That is also a consequence of this low-dimensional reduction.

Maybe I don't have time to go into the fluctuations, but we can discuss the same thing there. Each component's concentration changes due to noise, so each concentration has a variance; and there is also a concentration variance due to mutation, the evolutionary change. We define V_g(i) as the concentration variance of component i due to genetic change, and V_ip(i) as the isogenic variance, due to noise. The theory says that across all components these should be proportional; for a given mutation rate they follow a single line. That means that if one component is highly variable under noise, it is also variable under mutation, so it can evolve easily; in some sense we can predict phenotypic evolution, and this is consistent with the experiment shown here.
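As I read them, the two relations just stated are (with G denoting a genetic change, V_g(i) the variance of component i across genotypes, and V_ip(i) the isogenic variance due to noise):

```latex
\frac{\delta\ln x_i(E)}{\delta\ln x_i(G)} \simeq \frac{\delta\mu(E)}{\delta\mu(G)}
\quad\text{for all components } i,
\qquad\qquad
V_g(i) \propto V_{ip}(i)
\quad\text{across components, at fixed mutation rate.}
```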
Okay, I'm sorry, I don't have much time. Of course we need further confirmation by experiments of this low-dimensional-reduction theory and the global proportionality. Theoretically we have checked it in the catalytic reaction network model, as I said, but it also holds in a gene-regulatory network model, and we have also done a kind of spin-glass model. There we consider a phenotype given by a spin configuration S_i, determined by a spin Hamiltonian whose couplings J_ij play the role of the genotype, and we run evolution to increase a fitness; the fitness is given, for example, by some target spins, by aligning those target spins. Again we can consider the evolutionary process and ask how the target spins change when we apply an external field or when we apply a mutation, and again, across all spins, the change by evolution and the change by the external field are proportional. Actually this is true only in a finite-temperature regime, given by the replica-symmetric phase; if replica symmetry breaking occurs, this low-dimensional reduction is violated. So in this case we can discuss the low-dimensional reduction in terms of spin-glass theory. We have also checked protein data, published quite recently, on protein dynamics and protein evolution, which again are very highly correlated, but I don't have time to discuss that, so I will skip it.

So, today I discussed how a low-dimensional structure, a low-dimensional reduction, is formed from the high-dimensional phenotypic space as a result of evolutionary robustness, and how from it we can derive a universal law relating adaptation and evolution. That is the result. One more thing: I am moving to the Niels Bohr Institute from next year, where we will have a new universal biology group, and we will probably hire one or two postdocs, so if you are interested, please contact me. Okay, thank you very much.

Okay, thank you, thank you. We are a bit out of time, so sorry, just one short question. (Sorry, my English ability has decreased within this one year.) Yes, there's a question here: have you or your group made any attempt to predict the future of the pandemic using these ideas, the next mutation? Well, one maybe interesting point is this protein-evolution data: in some sense, if a protein structure is highly changeable by noise, if it is flexible, then the change of the protein structure by mutation at each site is also larger, so the evolution speed is higher; the state can fluctuate more, so it evolves more. That is quite similar to the fluctuation-response relationship in statistical physics: what fluctuates more evolves more. But of course we cannot say anything from this about COVID specifically; that is a general statement about evolution. Sorry for that.

Okay, so the last speaker of the conference is Celia Anteneodo, and we hope she's online. Oh yes, I'm here, waiting to share the screen. Can you hear me? Okay, so hello everybody, good morning; here in Rio de Janeiro it is still 7 in the morning. I want to thank the organizers for the opportunity to talk here. I will present some recent results obtained in collaboration with David Kessler and Eli Barkai from Bar-Ilan University, and also Lucianno Defaveri, who is currently a postdoc at Bar-Ilan. I will start with this visual abstract, to transmit the main ideas. What we consider are Brownian particles in a thermal environment, subject to a potential of the kind plotted here: there is a region where the potential is binding locally, but at long distances it is flat. What we observe for this kind of system, when we look at the time evolution of some quantity (in this case the mean-square displacement, but it could be a thermodynamic quantity too), is that there is a time interval showing a plateau: the quantity remains on average constant,
and the length of this interval may be very long, depending on the temperature relative to the depth of the well. So this is what we see under certain conditions, and the idea is to define the conditions under which such states, which we call quasi-equilibrium states, appear, and also to predict the values they take. If we take the usual partition function, it is of course divergent, due to the flatness of the potential; however, I will show that we can still use this partition function after a proper regularization. These are the central ideas that I will now transmit in more detail.

Concerning non-confining potentials, potentials that are flat at long distance, they are of course very common in nature, from Coulombic or gravitational to intermolecular interactions. Here we are thinking more of some kind of surface or membrane in an environment where particles can diffuse and feel attracted by the surface potential, which decays with distance as in these plots. What we consider is Langevin dynamics, in the simplest case overdamped dynamics (this is not necessary, but it is what I will show), and the one-dimensional case; this is also valid in more dimensions. The force we consider comes from a potential with this shape; for simplicity we take a symmetric shape, with a certain characteristic length, and with an exponent that controls the decay to zero. This is what I will focus on in the presentation, but we have also looked at what happens with other, less simple kinds of potentials.

Actually, we treated the problem with the Fokker-Planck approach corresponding to the Langevin equation I have already shown. In this equation we can rescale some variables, so that the Fokker-Planck equation reduces to a form in which the main control parameter is a scaled temperature, the temperature relative to the depth of the well; besides that, there is the parameter that controls the decay at long distances. Most plots I will show refer to this scaled temperature, in case I forget to say so. When we solve the Fokker-Planck equation numerically and compute quantities, for instance the mean-square displacement: at high scaled temperature the potential is of course irrelevant and we see free-particle behaviour, but when we decrease the temperature we see the emergence of this kind of flat region. At later times the particles escape the well and we again see free diffusion, but the intermediate state is what we call quasi-equilibrium in this problem, and it is what I will try to describe.

If we look for a stationary solution, we of course obtain a Boltzmann factor, but as I said before it is non-normalizable; the point is to regularize this quantity so that we can calculate averages through a recipe. Just to understand what happens in this problem: if we consider an initial concentration around the origin, then at short times this packet expands, feeling the bottom of the well, and the particles diffuse in the bottom of the well; at long times, when there is a probability for the particles to escape, we will again see free diffusion. What we are interested in is the intermediate times, between these two extreme situations.
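A minimal Langevin sketch of this phenomenology (the concrete well shape U(x) = -U0/(1 + (x/x0)^4), the overdamped units with the damping coefficient and k_B set to one, and all parameter values are my illustrative choices, not the ones used in the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

def force(x, U0=1.0, x0=1.0):
    # F = -dU/dx for U(x) = -U0 / (1 + (x/x0)^4): a symmetric well, flat far away
    s = (x / x0) ** 4
    return -4.0 * U0 * x**3 / (x0**4 * (1.0 + s) ** 2)

def msd_curve(T, n=2000, dt=1e-3, t_max=100.0):
    """Overdamped Langevin ensemble started at the bottom of the well;
    returns (time, <x^2>) pairs."""
    x = np.zeros(n)
    steps = int(t_max / dt)
    every = max(1, steps // 200)
    out = []
    for step in range(steps):
        x += force(x) * dt + np.sqrt(2.0 * T * dt) * rng.standard_normal(n)
        if step % every == 0:
            out.append((step * dt, float(np.mean(x**2))))
    return np.array(out)

# At high scaled temperature T/U0 the curve is essentially free diffusion, 2*T*t;
# at low T/U0 a long plateau (the quasi-equilibrium) appears before escape.
curve = msd_curve(T=0.25)
```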
At these intermediate times, what we see is that the central part of the probability density function remains almost constant (this is after some transient during which the particles diffuse and occupy the well); most of the probability is concentrated in this region, and we see these tails. Here, in more detail, is the cutoff of the distribution, divided by the Boltzmann factor: the tails diffuse, and at very long times this becomes divergent, but at short times the cutoff is effective, so that we can integrate this probability.

I will try to summarize how to perform the regularization procedure for these intermediate timescales; I will leave aside the initial and the long times, for which free diffusion is observed, and describe this central region at intermediate timescales. For positions around the central part of the well we have something proportional to the Boltzmann factor, and we can approximate the tails by a complementary error function, since there we have diffusion, not starting from a delta but from an almost uniform initial distribution. We can match the solutions for short and long distances simply by taking their product: the error-function factor tends to one at short distances and the Boltzmann factor is approximately one at long distances, so the product matches both behaviours, near the well and far away. We have also solved the problem through an eigenfunction expansion, but I will refer to this form, which is more intuitive.

To find the normalization constant, the procedure is basically to split the integrals and rearrange terms. When we integrate, we split into regions around some crossover point, call it l, between the two regimes; we could determine it, but we will see the result is independent of it, and we only assume that such a point exists. Since each factor is almost one in one of the regions, we obtain a sum of two integrals; then we can complete the integration from 0 to infinity, transferring the small correction to the first integral, and split it into parts, so that we decompose the whole area into regions with well-defined characteristics. One term is negligible; another term we can integrate exactly, and it scales as the square root of t, so it can be very large at long times; and the first term is time-independent and dominant at intermediate timescales. By comparing these last two terms we can extract the timescale that defines the exit from the plateau. For times such that the time-independent term is much larger than the time-dependent one, we can approximate the normalization constant by a function that resembles a partition function regularized by the subtracted term, similar to what is done in the virial expansion. So this gives us the normalization factor.

Now we can do the same for the other integral, the one in the numerator: we can again rearrange, and we obtain a similar regularization, but now involving the exponent k of the decay of the potential. This form is valid for a rapid decay; this other one, which I did not comment on, is valid for a decay that goes as one over x. So the regularization actually depends on how fast the potential decays, and also on the observable. What we are really doing is introducing corrections that avoid the divergences, and the result is very good compared to the numerical solution.
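Put as formulas, the recipe just outlined would read roughly as follows (my reconstruction; the precise subtraction terms depend on the decay of the potential and on the observable, exactly as stressed above):

```latex
\begin{aligned}
\mathcal{Z}_{\mathrm{reg}} &= \int_{-\infty}^{\infty}
  \Bigl(e^{-U(x)/k_B T} - 1\Bigr)\,dx
  \qquad \text{(finite, since } U \to 0 \text{ at large } |x|\text{)},\\
\langle \mathcal{O} \rangle_{\mathrm{QE}} &\simeq
  \frac{1}{\mathcal{Z}_{\mathrm{reg}}}
  \int \mathcal{O}(x)\,\Bigl(e^{-U(x)/k_B T} - \text{subtractions}\Bigr)\,dx .
\end{aligned}
```

The discarded time-dependent piece of the normalization grows like the square root of D t, so on this reading the plateau should persist while that piece is much smaller than the regularized partition function, that is, up to an exit time of order its square divided by D.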
In these plots the lines are the prediction (this point is just the inflection point), compared with the exact numerical solution. As I said, the general procedure to regularize the partition function depends on the potential and on the observable; in general, what we have to do is subtract not only one but several terms of the expansion of the exponential. As a concrete example, here I present the case where the decay of the potential is one over x, and I calculate again the mean-square displacement. You see that for the normalization factor I need, in this case, to subtract one additional term, and for the average of x squared I need to subtract more terms, to compensate the weight of the tails for this kind of observable; I will not go into the details. In these plots, this is the same case as before, and here is the one-over-x case, where you see the prediction works very well.

Actually, the approximation one is making with the regularization can be seen by writing an approximate potential of this form, compared here in the graph: the solid line is the true potential, the dashed line is the approximation we are making, and here is the harmonic approximation for comparison. You see that in the central part it works well as an approximation, even in a case that is rather a limiting one, in the sense that this is expected to work better for lower relative temperatures. Here again is the comparison: these are our approximation, and the dotted lines are the harmonic one; you see there is a discrepancy, so we managed to obtain an approximation more precise than the simple harmonic one. It is also interesting that we recovered several relations of equilibrium thermodynamics for this kind of state, and we also compared other observables, again obtaining good predictions.

I wanted to comment on another perspective, another way to see the problem that can give insight: consider our original potential, but now placed between walls at plus and minus L. In this case the evolution leads to a true equilibrium situation that we can integrate, so we can calculate the probability and obtain the averages as a function of L. You see that we obtain plots that resemble the plots we obtained as a function of time, and again the approximation is better for lower values of the relative temperature. Here is a comparison of the results from the time evolution and the results using this approach: obviously the prediction gives the same values; it is as if, by putting the wall far from the origin, we were letting the particles diffuse farther away. In this case, to obtain the limit of L to infinity, we can again split the integrals and determine the contribution of each term; I will not enter into that detail now. What I wanted to emphasize is that, without knowing the time evolution, the prediction of the values can be performed in the same way: the expressions, the regularization, are the same. What we can use to characterize the states is the inflection point on a log scale. In the case of the bounded domain, where we can calculate the true equilibrium state, we can obtain the inflection point analytically, and we can then know when these states exist or not. Here is the calculation: these are the functions whose zeros provide the inflection point, and you see that if the temperature is too high, there is no inflection point.
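The wall-at-plus-and-minus-L construction is easy to reproduce numerically; a sketch with the same assumed well shape as before (plain Riemann sums; the threshold behaviour of the inflection point can be read off the resulting curve):

```python
import numpy as np

def x2_equilibrium(T, L, U0=1.0, x0=1.0, n=200001):
    """True-equilibrium <x^2> for the well U(x) = -U0/(1+(x/x0)^4)
    confined between reflecting walls at +/- L."""
    x = np.linspace(-L, L, n)
    w = np.exp(U0 / (T * (1.0 + (x / x0) ** 4)))  # Boltzmann weight e^{-U/T}
    return np.sum(x**2 * w) / np.sum(w)

# Scan L: an inflection point of log<x^2> versus log L exists only below a
# threshold scaled temperature, mirroring the existence condition in the talk.
Ls = np.logspace(0, 3, 60)
curve = np.log([x2_equilibrium(T=0.25, L=L) for L in Ls])
```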
So we can determine the minimal requirement for observing this kind of state, and we can define the quasi-equilibrium values accordingly: we have the average as a function of L, we estimate the inflection point, and the average calculated at that point provides the quasi-equilibrium value. Here is a comparison of the inflection point obtained from the L perspective and from the time perspective; the circles mark the end point, where there are no more inflection points. They do not coincide exactly, but in the region where they are expected to give good results, that is, when the relative temperature is sufficiently low, they agree with each other and with the exact calculation, the numerical integration of the equation; the harmonic approximation is shown for comparison. I just wanted to comment that we also performed calculations using eigenfunction expansions for this intermediate region where the quasi-equilibrium appears. I will not show the calculation, but recall that when I explained how we apply the regularization procedure I used the error-function approximation; through the eigenfunction expansion we arrive at expressions that justify that choice. There is actually a shift in the error function, but this matching of the two regions works very well: one solution describes the central region, the other the tails, the crossover is very thin, and the two match almost perfectly.

So, I have given back the extra time the previous speaker used. I just want to remark that we have found this kind of quasi-stationary state, which we call quasi-equilibrium, actually non-normalizable quasi-equilibrium, and we found a procedure that allows us to regularize the partition function, so that we have a recipe to calculate averages in the non-normalizable quasi-equilibrium states. Well, this is essentially what I wanted to transmit to you.

Thank you very much for the great talk; it's time for questions. The audience is maybe a bit tired at the end of this conference, but I have a short question about the method. The method is based on an integral, you break the integral into parts, and you have this error-function approximation. You showed for which types of potentials this technique works; could you say something about the types of potentials for which it is not suitable, or for which you would have to replace the error function by another type of function, so that the whole regularization changes?

Look, the error-function approximation is not actually necessary to find the regularization. When we split the integral we get this error function, but any other effective cutoff would work, because what we keep is just the first term, which is time-independent. The cutoff is important for determining the crossover point, but that point does not appear in the final result; we only assume that it exists, and if it exists we can determine the time-independent integral. What the time-dependent term gives us is, for instance, the characteristic time of exit from the plateau. This argument is heuristic, but we obtained the exact solution through the eigenfunction expansion.

Okay, thank you. Another question? Another thing that may happen is that these quasi-equilibrium states do not exist, depending on the potential: for instance, for a logarithmic potential we will not have this kind of state. I see, thanks. If there are no more questions, I thank all the speakers of the session, and I will briefly call Sebastian to the stage, so we conclude the conference with some closing remarks.
Okay, so we want to thank all of you for being here. It has been a big challenge: this has been the first ever hybrid conference in Trieste for many of us, I think for all of us, and it has been really a lot of work, because we had to run an online conference and an in-person conference at the same time, with lots of changes at the last minute. So if any of you is going to organize one of these, keep in mind that things change until the last minute. This has been possible thanks to many people, both from SISSA and from ICTP, and we want to thank all of them. First of all, on the scientific side, the commission: Matteo Marsili, Andrea Gambassi and Stefano Ruffo. Also the video team, which kindly arranged all of this for us; this room was not ready 10 days ago, and all the protocols are new, so you can imagine. Next, the SISSA people, Sarah and Emanuele, who took care of all the bookings, the hotel bookings and all the arrangements for you to be here; I think they also deserve a big clap. And last but not least, the ICTP team, everyone who made all these Zoom sessions possible and did all the technical work; a big clap for them as well. Finally, the APS, which was extremely motivated to come here: we wondered for a long time whether this was going to happen, and it did, thanks to the momentum and the motivation of mainly Christian, I must say, and also the commission from the APS; so thanks a lot to you too.

And certainly, in addition to the people already mentioned, I want to thank Edgar and Sebastian, who did a marvellous job, really a lot of work, and were always available to answer emails; so thank you too. So this is it: lunch is waiting, and we are a bit late today. Have a safe journey, have a safe trip back home.