Welcome back, everyone. This is our third session of the Applied Active Inference Symposium with Professor Karl Friston, hosted by the Active Inference Lab, and it's June 21st, 2021. We're here representing the Dot Tools organizational unit of the Lab, the third organizational unit in the lab. The goal of Dot Tools is to enable effective tool and instrument use for all Active Inference Lab processes. So that's using the digital tools and affordances that we have better, as well as exploring and designing affordances for our niche, modifying our niche, resulting in effective action as well as innovations in tool development. As with the other groups, we've been meeting weekly in Tools and having a lot of awesome insights related to where Active Inference might come into play, and that's what we're excited to talk to you about. Some of the core insights from the work in this unit relate to learning by doing, and the recognition that modern systems are cyber-physical, that everything is intercalated with the digital. We've also found it really refreshing, kind of like a two-stroke engine, to be sidestepping or complementing or augmenting some of these philosophical discussions with technical clarifications, and there are two ways in which we've seen that play out. On the left here is a quote from you in a 2019 Dropbox blog post, where you wrote that technology is the natural extension of Active Inference beyond the single person, which of course brings technology far from being something artificial into the realm of extended and embedded cognition in our niche. And on the right side is a slide from a very recent talk by Bert de Vries on Beyond Deep Learning: natural AI systems, speaking to several applications of Active Inference in hardware and software, for example gesture recognition, robotic navigation, and audiometry for hearing aids. And one effort that we're starting up now is a NetHack challenge.
It's a video game played in text characters, and we're assembling a team, with multiple interested participants already, to get an Active Inference agent onto the playing field, so to speak, and have people maybe update their generative model when they see that it doesn't have to be a three-billion-parameter neural network trained for six months on a GPU. What if it's enough to just be curious and to want to succeed? Those are the kinds of things that motivate us in Dot Tools. So we can start right off the bat by asking: how can we use Active Inference to structure the process of innovation and tool development? And how can Active Inference concepts help us design for complex agents that are interacting in complex niches? For example, thinking about niche modification, extension of affordances, reduction of uncertainty, or structuring of communications. Okay, great questions. So the use of Active Inference to structure the process of innovation and tool development is, in itself, an entertaining notion, in the sense that you are a realization of Active Inference, and you're mindful of that. The emphasis on curiosity as the imperative that drives most of our behavior is exactly the imperative that, as a scientist, drives me and most of the people I know, and I would imagine it also drives your initiative and your laboratory. So all the questions you are asking are really: how do I make the next move in order to resolve uncertainty about my particular model of how, say, artificial intelligence or human communication is going to evolve? In that light, I think there are two levels to the answer. The first one is just to celebrate and acknowledge that you are engaging in the scientific process as formulated by Active Inference, that you are on a never-ending journey of trying to satisfy curiosity. And that speaks to one of your themes in the previous slide, about learning by doing.
The only way you're going to resolve or sate that curiosity is to go out there and see what happens, and that is exactly the right thing to do. A more practical answer, though, speaks to the tool development. One of the fundaments of Active Inference is the appreciation that if you want to maximize the likelihood of your world model, the generative model that entails the way you exchange with and interrogate and ping a world, being the right model of the world that is articulated out there, in the sense of extended cognition, for example in terms of the software tools or the educational tools that you're making available, then all of this is still subject to the imperative to minimize complexity. So in maximizing the likelihood that these tools will be out there, you're in a sense saying that this model, this way of narrating the way the world works, provides an accurate description that is as simple as possible. You cannot escape the complexity; I'm speaking like Jürgen Schmidhuber now, which is a good thing in this instance. So that means you've got to find the simplest tools, and it's interesting that you highlighted Bert de Vries' contribution, because, again, just practically thinking: what's the game here? The game here is to write down, or find, the best hypothesis, the best explanation for my lived world, where 'my' could be Active Inference Lab and the lived world is everything that you have to engage with in terms of educational or commercial or academic partners. So you've got to explore the model space in order to find the right generative model of the way that your system or your organization works. The first steps in writing down the generative model are basically to define its structure in terms of the hidden factors and latent factors and their interactions, and all that good stuff. But it has to be done in the simplest way possible.
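The accuracy-versus-complexity trade-off described here is literally the decomposition of variational free energy, F = KL[q(s) || p(s)] − E_q[ln p(o|s)]. A minimal numerical sketch, with made-up probabilities for a two-state toy problem (not from any model in the talk):

```python
import numpy as np

def free_energy(q, prior, lik):
    """Variational free energy for a discrete latent state:
    F = complexity - accuracy
      = KL[q(s) || p(s)] - E_q[ln p(o|s)]."""
    complexity = np.sum(q * np.log(q / prior))
    accuracy = np.sum(q * np.log(lik))
    return complexity - accuracy

prior = np.array([0.5, 0.5])               # p(s), toy prior
lik = np.array([0.9, 0.2])                 # p(o | s) for the observed o
post = lik * prior / np.sum(lik * prior)   # exact Bayesian posterior

F_post = free_energy(post, prior, lik)     # equals -ln p(o) at the optimum
F_flat = free_energy(np.array([0.5, 0.5]), prior, lik)
print(F_post, F_flat)  # the posterior achieves the lower free energy
```

At the exact posterior, F bottoms out at the negative log evidence, −ln p(o); any other belief (here, the flat one) pays either an accuracy or a complexity cost.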
So what's the simplest way of writing down a generative model? Well, it's to write down a Bayesian graphical model. What does that mean practically for the actual coding and the software schemes and implementations that you would either offer to people or prepackage in terms of user interfaces? It's going to be message-passing on those graphs. And here I'm trying to get back to Bert de Vries' ForneyLab, a Forney-style factor graph formulation. To my mind, that's the simplest, most generic bit of computer science you would come across in the service of finding the right software tools to build absolutely everything, because absolutely everything can be written down as a generative model. If there's a generative model, there's a Bayesian dependency graph. If there's a Bayesian dependency graph, you know there's a factor graph. If there's a factor graph, then you know there's a message-passing scheme. And what is that message-passing scheme? It's just a variational free energy minimizing message-passing scheme. So I would imagine that as tool development increases, there will be a move towards a common language that will look very much like Bert's Forney-style message-passing. And within that you've got very limited choices, which is a good thing, because that again speaks to this minimization of complexity: coarse-graining the world, and your world, at the coarsest level that will sustain an accurate, precise account of what you want to achieve. So I'm thinking here that the tools just have to come in two flavors. They have to deal with continuous state-space generative models, the kind you need to interface with robotics. But the other flavor will be discrete state-space models, with discrete latent states, that you'll need to do, say, computational linguistics, or modelling the climate in various states.
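The chain described here, generative model → dependency graph → factor graph → message passing, can be made concrete in a few lines. A minimal sketch: sum-product (belief propagation) filtering on a two-state hidden Markov chain, where the likelihood matrix A and transition matrix B are illustrative numbers only:

```python
import numpy as np

# Factors of a tiny hidden Markov chain: filtering the hidden state is
# just left-to-right sum-product message passing on its factor graph.
A = np.array([[0.9, 0.1],   # p(o | s): rows = observations, cols = states
              [0.1, 0.9]])
B = np.array([[0.8, 0.3],   # p(s_t | s_{t-1}): column-stochastic transitions
              [0.2, 0.7]])
prior = np.array([0.5, 0.5])

def filter_beliefs(obs):
    belief = prior
    beliefs = []
    for o in obs:
        belief = A[o] * (B @ belief)    # transition message x likelihood message
        belief = belief / belief.sum()  # normalize to a posterior
        beliefs.append(belief)
    return np.array(beliefs)

print(filter_beliefs([0, 0, 1]))  # posterior over the hidden state at each step
```

Each loop iteration is one round of message passing: the prediction message `B @ belief` meets the likelihood message `A[o]` at the state node.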
And we know all the message-passing schemes that would be entailed by a commitment to one of those two kinds of models: generalized Bayesian filtering for the continuous state-space, where by generalized I include generalized coordinates of motion, which generalize things like Kalman filtering. On the discrete state-space side, you're talking about either belief propagation or variational message-passing. So when you think about it, in providing tools of a software kind or a simulation kind, happily there aren't many choices you have to worry about. All you need to do is make sure your tools accommodate both generalized Bayesian filtering and belief propagation and/or variational message-passing, and then you're using off-the-shelf technology. Which brings us back to: well, what's the real problem then? The real problem is writing down the generative model. What sort of problems, and how would you unpack those problems, in terms of innovation and tool development? Well, it's solving the model selection problem. Sometimes, when describing the space of problems that are faced, say, with generalized AI or AGI, you can unpack them at different spatio-temporal scales into the inference problem, the learning problem, and the selection problem, by which I mean using Bayesian model selection to get the right structure. Do I use six or twelve layers in my deep neural network? Do I use a convolutional model or a transformer? These are basically problems that are solved if you have a mechanics that can score the structure, enabling you to select the right form.
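The continuous-state-space flavor mentioned above reduces, in its simplest non-generalized form, to classical Kalman filtering. A self-contained toy sketch (random-walk latent state, made-up noise levels, not any model from the talk):

```python
import numpy as np

def kalman_1d(obs, a=1.0, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Minimal 1-D Kalman filter for x_t = a*x_{t-1} + N(0, q),
    y_t = x_t + N(0, r). Returns the filtered state estimates."""
    x, p, est = x0, p0, []
    for y in obs:
        x, p = a * x, a * a * p + q            # predict
        k = p / (p + r)                        # Kalman gain
        x, p = x + k * (y - x), (1 - k) * p    # update with the observation
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0, 0.1, 50))      # latent random walk
noisy = truth + rng.normal(0, 0.5, 50)         # noisy observations
est = kalman_1d(noisy)
# filtered estimates track the latent state better than the raw observations
print(np.mean((est - truth) ** 2), np.mean((noisy - truth) ** 2))
```

Generalized Bayesian filtering extends this same predict-update logic to generalized coordinates of motion (position, velocity, acceleration, and so on), but the Kalman case shows the shape of the computation.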
So that, I think, is going to be a focus of innovation, if it isn't already, certainly in the near future, in the sense that the inference and learning problems are solved problems: you can just go to Bert and get your favorite message-passing scheme, or you can keep it at the level of educational or academic message-passing and use the MATLAB schemes that we generate here in London for toy problems. What is not, I think, a solved problem, and will require an innovative solution, is the structural learning problem, or the selection problem: exploring, in a principled way, not just the right hypothesis, but the space of generative models you might want to bring to the table. That has many, many different issues, and the things that come to mind are, of course, that you could do it in a bottom-up way by trying out new hypotheses. Where do you get those from? You get them from experts in the field, because, effectively, they are bootstrapping themselves on the basis of our prior beliefs, our knowledge about how something works. Or you can do it in a top-down way by having over-parameterized but very expressive models, with very weak, imprecise parameterizations, and then use Bayesian model reduction to solve the selection problem. These are ways that people are thinking at the moment, but this thinking is innovative, because I don't think there are any clear answers. How would you use Active Inference to solve the structural learning problem, when in a sense it's already being used, in the sense of Bayesian model selection as natural selection, but you really want to speed that up and make it work within your commercial or academic lifetimes? I would imagine the exact same principles would be brought to bear there. That almost answers the next question: how can Active Inference concepts help design complex agents interacting in complex niches? You just have to build these things, as proof of principle and hypothesis testing.
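Bayesian model selection of the kind described here means scoring candidate structures by their model evidence, which automatically penalizes complexity. A toy sketch with an analytically tractable case, coin-flip data under Beta priors (numbers are illustrative only):

```python
from math import lgamma

def log_evidence(heads, tails, a, b):
    """Log marginal likelihood (model evidence) of coin-flip data under a
    Beta(a, b) prior on the bias: ln p(D|m) = ln B(a+h, b+t) - ln B(a, b)."""
    lbeta = lambda x, y: lgamma(x) + lgamma(y) - lgamma(x + y)
    return lbeta(a + heads, b + tails) - lbeta(a, b)

# Two hypotheses about a coin: a precise "roughly fair" model and a
# maximally flexible (hence more complex) flat model.
fair = lambda h, t: log_evidence(h, t, 50, 50)
flat = lambda h, t: log_evidence(h, t, 1, 1)

print(fair(5, 5) > flat(5, 5))   # balanced data: the simpler model wins
print(flat(8, 2) > fair(8, 2))   # biased data: flexibility earns its keep
```

Bayesian model reduction, the top-down strategy mentioned above, works the same way: you fit one expressive model once and then score reduced (more constrained) priors against its evidence, rather than refitting every candidate.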
The nice thing is, you know all the machinery and the tools that would be requisite in building these things, right from the variational message passing, using say ForneyLab, through to now having the right fitness function when it comes to using, say, a genetic algorithm to explore a structure space. And what is that fitness function? It's the evidence lower bound, or the variational free energy. So you've got all the maths in place. This is a question, I think, of simulating these things and providing proof of principle. How you would translate that into the real world, I don't know at this stage. I think a challenging first step would be to actually use robotics, in silico or in hardware, or possibly, with a lot of excitement at the moment, using soft robotics: you actually design your niche and see what happens, and then turn your attention to niche construction, where you now acknowledge that the niche itself is also succumbing to the principles. Not active inference in and of itself, in the sense that niches don't plan, but certainly the vanilla, FEP-style free energy minimizing approach. I hadn't actually thought about that before, but that's an interesting asymmetry when it comes to simulating multi-agent interactions in the context of niche construction. Often it is the case that the niche is just the other agents in an ensemble, but if you now actually include the environment as part of the niche that is playing host to all the denizens, the ensemble of active inference agents, there is this distinction between the ability to plan the consequences of action, which entails optimization of expected free energy, versus simply reflexively minimizing surprise, minimizing free energy as an evidence bound. To put that more simply, more intuitively: you're dealing either with generative models that support planning, or not.
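To circle back to the fitness-function point above: using an evidence-style score as the fitness in a genetic algorithm over model structures can be sketched in a few lines. Here the score is a BIC-like stand-in for the evidence lower bound (fit minus a complexity penalty), the data and candidate features are invented for illustration, and the GA searches over which features a linear model includes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data generated from y = 2*x1 - 3*x3 + noise; six candidate features.
X = rng.normal(size=(80, 6))
y = 2 * X[:, 1] - 3 * X[:, 3] + rng.normal(0, 0.5, 80)

def fitness(mask):
    """Negative-free-energy proxy: accuracy minus a BIC-style complexity
    penalty on the number of included features."""
    k = int(mask.sum())
    if k == 0:
        resid = y
    else:
        Xi = X[:, mask.astype(bool)]
        beta, *_ = np.linalg.lstsq(Xi, y, rcond=None)
        resid = y - Xi @ beta
    n = len(y)
    return -0.5 * n * np.log(np.mean(resid ** 2)) - 0.5 * k * np.log(n)

# A tiny genetic algorithm over structures (feature-inclusion masks).
pop = rng.integers(0, 2, size=(20, 6))
for gen in range(30):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]         # select the fittest half
    kids = parents[rng.integers(0, 10, 20)].copy()  # reproduce
    flip = rng.random(kids.shape) < 0.1             # mutate
    kids[flip] = 1 - kids[flip]
    pop = kids

best = pop[np.argmax([fitness(m) for m in pop])]
print(best)  # the fittest structure includes the true features 1 and 3
```

The point is only that the evidence bound gives natural selection something principled to optimize; a real application would score candidate generative models rather than feature masks.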
So there's nothing, I think, fundamentally different between these approaches; it's just that if you've got a generative model that is a model of the paths into the future, consequent upon how you act upon the world, that's a much richer, deeper generative model than the kinds of generative models that would be applicable to a thermostat or an environment. The environment I have in mind here is a warehouse in which you've got a sentient robot going around trying to get the right things. The robot can plan; the environment, the niche, can't. It'll still conform to the principles, the free energy principle: there will still be particles and things that are conserved, and they will still fall and behave in a predictable way. There may even be a thermostat controlling the temperature, but none of these things are planning. So there's an interesting asymmetry that gets into the game when you're talking about complex agents interacting in complex niches. Part of that complexity has to be a specification of whether the complexity entails planning or not, and that creates different problem spaces, certainly in the context of multi-agent simulations. So that's how I would carve up the problem spaces, implicitly problem spaces that will only be explored by doing, and by doing I just mean actually realizing these processes physically, in the kinds of situations that you think are going to be useful for the future. Thanks for the answer, and it's really fascinating to think about using simulation so that selection can happen within the generation of, for example, a startup, rather than between generations. Because of course we can let organisms, or startups, proliferate and then let pruning occur at the generational scale, or there could be ways to design so that selection occurs within a generation, more like learning and development than intergenerational selection. So, awesome points there. This could be a broad
question, but we're curious: what areas of applied Active Inference do you think might be exciting, promising, or important? My personal, usual response to this comes in two flavors. The first is from the point of view of a theoretical biologist and a psychiatrist. If you can understand how a normal sentient artifact or person behaves, then that creates a space in which you can think about false inference and false learning, or at least inference that is suboptimal from the point of view of minimizing surprise or free energy. That's a fancy way of saying: understanding the computational basis of psychopathology. There's a whole literature on using Active Inference as, if you like, a normative framework within which to provide an ontology of false inference, of failures, of aberrant active inference. And why would you want to do that? Well, if it can all be reduced to belief updating and message passing, we actually have quite a comprehensive understanding of neuronal message passing and all its physiology: the roles of various neurotransmitters, microcircuits, and the neuroanatomy that underwrites that kind of neuronal message passing. Implicitly, we then also have a fairly fine-grained understanding of the role of neurotransmitters and the consequences of pharmacological interventions in the context of experience-dependent learning and inference of the kind we've been talking about. So from a translational perspective, literally translating the formalism on offer from Active Inference into the clinical domain, that would be one motivation for developing this theoretical framework. The other one is more in the line of technology and artificial general intelligence. There the question is: if I now want to build sentient artifacts, and not only build them but build brothers and sisters, so that they are complex and interact and learn to love each other in a complex environment that could include me, you know,
then you've got a clear offer from Active Inference as to the design principles you might want to use to actually build these artifacts. And then there are interesting questions about what kind of artifact you want to build. We've already discussed the difference between a thermostat and a sentient robot going around collecting your next home delivery, and that there are different kinds of generative models. So now you ask the question: okay, what are the exciting and promising kinds of artifacts, as defined by their generative models, that one might expect to see in the future? Then we get into the world of generative models that support planning, so we're talking about deep generative models that have a temporal depth. What are the next stages you might be looking at? Well, there's also a sort of hierarchical depth, which would at some point include, first of all, the capacity to deploy precision. And why is that important? Well, as soon as you treat the deployment of precision as a process of inference, you now have a normative theory for this kind of mental action, or covert action. One example of this: I don't know the technology in detail, but I can be fairly assured I know what it's trying to do, thinking about transformer networks and the way that attentional selection operates in this context. What you're saying is that you can actually optimize the attentional selection as an inference process, using active inference or an evidence lower bound, where you're now predicting what things to attend to, which particular weights to switch on and which weights to switch off. At that point you can understand that as mental action. So when transformers or variational autoencoders start to optimize their estimates of the posterior precision at lower layers, say in an autoencoder, it's now acquired the capacity for mental action,
and it will now pay attention to various representations, and possibly even various data sources. That's not magical; we do that every day in MDP-style active inference, and use it to explain a lot of the attentional mechanisms implemented in the brain. If you can migrate that technology into deep learning, you will have taken one baby step towards true sentience, which is mental action. The next step would be: okay, so how can I now minimize the complexity of my generative model, where the generative model now actually includes this meta-inference, in the sense that I'm providing predictions about my inference, because I'm controlling the precision of hierarchically subordinate message passing? At that point you start to think: perhaps one way of simplifying the complexity part of the inference would be to carve up different states of attentional deployment, in exactly the same way we were talking about carving up people into Biden versus Trump voters: a simple, stable, complexity-minimizing carving up. Which suddenly suggests that you can now equip an artifact with states of mind, so that it can be in, say, four states of mind: it can be happy, it can be sad, it can be confident, it can be unsure. And it will have to infer, given all the evidence at hand, including the message passing low in the hierarchy, what state of mind it is in. And if you now include, among the sensory evidence, the voltage on its batteries, some measurement of its interoception, you now have something that's getting very, very close to, say, Ryan Smith's notion of emotions. So now part of the generative model is inferring: what state of mind am I in, as the best explanation for all these interoceptive, embodied sensations? Not just the proprioceptive state of my actuators, but also: are they getting a bit sticky, is there some wear and
tear, are my batteries charged? All of these things come together as evidence, in conjunction with all the usual visual, radar, acoustic inputs, to supply evidence for a posterior belief: I'm in this state of mind, I'm anxious, my battery is running out. This immediately creates different prior preferences, cost functions if you like, that would be applied to your policies, because you've got a deep generative model that reaches into the future. So now you've got an artifact that not only has the capacity for mental action, it's now got the capacity to be in different emotional states. The next step is to say: hang on, so there are these different states; can I now equip it with a minimal selfhood? Can the hypothesis that I am actually an artifact provide empirical priors that reduce the complexity of my message passing at subordinate levels, the levels that are inferring the state of mind I am in, which in turn optimizes the posteriors over the precisions of various likelihood mappings, or the preferences over policies? At this point you're starting to get artifacts that could have minimal self-awareness. The next stage, I think, is only ever going to be useful when you consider dynamic interactions again, because the only rationale for having self-awareness is to disambiguate self from other. Which means that there must be some confusion, some uncertainty, at hand in order to justify the resolution of uncertainty, to justify that complexity of the model. Which means that you have to be interacting with, exchanging with, things that are sufficiently like you to license the inclusion in your generative model of a self versus other, of 'you are like me' or 'not like me'. So we actually come full circle, back to what we were talking about before in terms of inferring who I am talking to. This, I think, says something quite fundamental, structurally, about this inference problem:
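The two ingredients just described, inferring a discrete "state of mind" from interoceptive evidence and letting that state set the precision applied to policy selection, can be sketched together. Every number below is hypothetical, invented purely to illustrate the shape of the computation:

```python
import numpy as np

# Hypothetical setup: four "states of mind" and made-up likelihoods for
# two interoceptive readings (battery level, actuator drag).
states = ["happy", "sad", "confident", "unsure"]
prior = np.full(4, 0.25)
p_low_battery = np.array([0.1, 0.6, 0.1, 0.5])  # p(low battery | state)
p_drag = np.array([0.2, 0.5, 0.1, 0.4])         # p(sticky actuators | state)

def infer_state(low_battery, drag):
    """Posterior over states of mind given embodied, interoceptive evidence."""
    lik = (p_low_battery if low_battery else 1 - p_low_battery) \
        * (p_drag if drag else 1 - p_drag)
    post = lik * prior
    return post / post.sum()

post = infer_state(low_battery=True, drag=True)

# The inferred state then sets the precision (inverse temperature) applied
# to policy selection: confident states act decisively, unsure ones hedge.
state_precision = np.array([2.0, 0.5, 4.0, 0.25])  # hypothetical values
precision = state_precision @ post
print(dict(zip(states, post.round(3))), round(float(precision), 3))
```

With a low battery and sticky actuators, the posterior concentrates on the anxious states, and the effective precision on policies drops accordingly, which is the "I'm anxious, my battery is running out" inference in miniature.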
are you a creature like me or not? Are you a pet, are you a plant? Just being able to carve up this world in a way that is self-referential necessarily entails a minimal selfhood in the inferences of these agents. That speaks to the importance of getting the necessary evidence from the environment, evidence that would, if you like, license that degree of complexity. And the only kinds of environments that can license that degree of complexity are ones where the environment, the eco-niche, actually comprises other agents like me, that make it, if you like, worthwhile for me to infer: oh, it's me, not you, doing that. So I would imagine that the most promising applications of Active Inference in constructing sentient artifacts, pets, carers, things that you can converse with, will be to grow them, certainly with themselves, but more importantly with you there, so that they learn by doing with you there: they're curious about you and you're curious about them, and at that point they're real. One could argue that's the only scenario in which you're going to have any empathetic interaction with these artifacts. I'm sure there are other applications, in terms of climate change or commerce or whatever, but in terms of imagining what you could produce, what you could sell, I would imagine a mindful robot that is genuinely curious about you, because that will teach you something about itself. Thanks for that answer. The idea of tools for attention, and design and engineering for regimes of attention, to use an Active Inference term, is really essential. And what you were talking about there made me think of the phone: before the internet, when there weren't other devices of similar kind, there was no need to communicate out, and what we've seen is that as there are more and more devices of similar or interoperable kinds, new levels of organization have
to emerge. And then I thought about the anxiety a person might feel when their phone is running low on battery: right now that sensor reading is getting emotionally off-loaded to the human. So we could have that anxiety on the device, and have a more relaxing relationship with our phone. And then, as you pointed out, those would be the incipient steps of selfhood, or perhaps what we could even call a cell phone, if I'm allowed one pun per symposium. The next question is: what kinds of tools have been most helpful in your work and research, which includes many areas, such as SPM and DCM, that a lot of people just learning about Active Inference might not be very familiar with? And what kinds of tools don't exist yet but might be helpful for Active Inference work? So, the mathematical tools. I'm often asked this question by students: do I need to be able to do maths to contribute to this field, and if so, what kind of maths? I won't tell you what my answer is, but what I have found useful is certainly mathematics, though not necessarily very high-end; it's almost Wikipedia-level mathematics. In particular, dynamical systems theory, information theory, and linear algebra are probably all you need to do everything, really. Indeed, you could read most of quantum electrodynamics as basically linear algebra with a bit of probability theory underneath it. So that has been the mainstay: if there is one tool, it would be the tool and the language of maths, and relatively simple maths. The second thing is the learning-by-doing, 'see one, do one, teach one' ethos, which I think applies very pragmatically in this context. It's actually very useful if you can get students to build their own little simulated artifacts, and even more useful when they can code it up themselves, which means you need access to a high-level, at least fourth-
generation programming language that a student can get fluent with, should they want to, not only to use the existing tools but to try and write them down themselves, without having to spend years training as a computer scientist. So I have found MATLAB very useful in that respect. Not because it's terribly efficient, although I have to say some of the matrix operators, and under the hood the tensor operators, are actually much more efficient than people give them credit for, because it actually came from X-ray crystallography. What's really useful about it is that it uses the same syntax you would find in a book on linear algebra, which didactically, educationally, is really quite important when it comes to writing and reading the code. So we have deliberately stuck with MATLAB, not because it's computationally efficient, or because it's open source (I don't think it is), but simply because it's configured in a way that people reading standard texts, 101 texts on linear algebra and the like, would be able to see how the maths transcribes into a computer language. So that's been a really useful tool. Looking ahead, I imagine one is going to need open access and possibly more. I don't know which way it could go; I'm just thinking, first of all, about people like Bert and ForneyLab, in terms of very generic, very high-end specifications of message passing in computer science. Maybe that's the level at which you want people to actually compose their generative models and their artifacts: they don't even need to know about linear algebra, and even less information theory. What they need to know is the language of relations, the object relations, and how to specify different classes of exponential probability distributions: is it categorical, is it continuous, is it always positive, or can it be positive and negative? And
that may be quite sufficient to write down a factor graph or a generative model, and then everything else is off the shelf and will write itself. So that would certainly be helpful in the future. Moving on to what kinds of tools don't exist at the moment: I'm thinking of, and I've never used it, but I would imagine, Bert's ForneyLab facilities offered as an application or user interface that allowed you to compose generative models: compose a generative model, compose a generative process (the actual world that's going to be modeled), and then just click run and see what happens. That would be really useful, I think. Having said that, the other side to future-scoping here is, I repeat, leveraging more specialized or other fields, and amortizing certain parts of the inference: learning to infer, or indeed inferring to learn, or learning to plan, or learning to infer how you plan; starting to see what parts of the inference process are so conserved that they could actually be amortized and learned. And certainly it looks as though that's what the brain has done. For example, there are people who think that the cerebellum has basically learned how the motor cortex does its online KL control, or Kalman filtering, and therefore lends a fluency and a computational efficiency to the message passing. In its absence, it doesn't mean you can't do something; it just means you can't do it as fluently and as gracefully and as quickly as you could with a cerebellum. Indeed, when you have a cerebellar lesion, all that really happens is you become a bit clumsy and slow. So those kinds of tools, a quick and cheerful integration, or importing, of various amortization and deep learning technology into a Forney-style message-passing scheme
that could support any kind of generative model, I think would be really useful. Awesome, thank you. Approaching this nexus from another angle: what kinds of tools and platforms could inform transdisciplinary, highly contextual, and engaged teams that are working with these approaches? In ActInf Lab we hope to be working with others to develop the Active Inference curriculum and body of knowledge more broadly, but when teams are actually using these kinds of approaches, what kinds of platforms might exist to enable their work? Okay. I have a strong suspicion that you know the answer to this, so I'm trying to guess at the answer that you know is the right answer, and I'm not doing very well here. I think we've already talked, certainly implicitly, in the way you presented the ambitions, and implicit in some of the questions, about all the answers being there: whether that's trying to engage through education, whether it's trying to engage through insight, using, say, embodied-experience illustrations of the basic principles, whether it's supplying games or graphical user interfaces to facilitate designing, enacting, and playing with generative models and active inference. I think these are all obvious and laudable ways of leveraging what Active Inference has to offer. Participatory; I mean, the learning-by-doing thing, and the 'see one, teach one, do one', keeps coming back to mind, and of course it completely licenses the participatory aspect. But what kind of participation did you have in mind? Are you talking about hackathons, are you talking about playing games with active inference computers that start to hate you or love you? What level of participation were you thinking of? Yes, Stephen, do you want to give a quick thought on a few
kinds of participation, or what that means to you?

Yeah, one area that is quite interesting is psychodrama: they use action methods, like action sociometry or spatial activities, to look at how people relate to their experience in a dynamic, physical way. So I have been looking at ways that spatial participatory approaches can unpack people's relationships to different niches, or different workplaces, or different types of embodied experience — and then that could be made visible and put into active-inference-type geometries.

I see — okay, right, well, there's a great example. Two things I have come across before. The first is architectural design, and the importance of not just pragmatic affordances (can I walk up there, can I sit there?) but also epistemic affordances: if I look over there, what would I learn about the space around me if I go around that corner? So there is embryonic interest, in my world, from the architectural sciences — and from architecture in and of itself — that could in principle be motivated this way. It is an odd discipline, because it is half like art and half like science, but some of its ideas are very much aligned with Gibsonian notions of affordance, and also with the dual-aspect affordances brought by expected free energy under active inference. So it is not just "am I the kind of creature that can sit on this particular chair?" but also "what will I learn if I do so?" — and so things become epistemically attractive to engage with. The other domain is entertainment and music — in particular, the joy of synchronization and mutual predictability: minimizing free energy through mutual prediction when singing or dancing together, or indeed interacting with a slightly greater asymmetry as a member of an audience watching a band, for example.
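The "dual-aspect affordances brought by expected free energy" mentioned here can be made concrete with a few lines of code. The following is a minimal illustrative sketch, not code from the talk: a one-step expected free energy for a discrete-state model, decomposed as risk (pragmatic) plus ambiguity (epistemic), with an invented two-state example in which "looking" discloses the hidden state and "staying" does not. Function and variable names are my own.

```python
import numpy as np

def expected_free_energy(qs, A, log_C):
    """One-step expected free energy G = risk + ambiguity.

    qs    : predicted hidden-state distribution under a policy, shape (n_states,)
    A     : likelihood matrix P(o | s), shape (n_obs, n_states)
    log_C : log prior preferences over outcomes, shape (n_obs,)
    """
    qo = A @ qs                                    # predicted outcomes Q(o)
    risk = np.sum(qo * (np.log(qo + 1e-16) - log_C))   # KL[Q(o) || C]  (pragmatic)
    H = -np.sum(A * np.log(A + 1e-16), axis=0)     # outcome entropy per state
    ambiguity = H @ qs                             # E_Q(s)[ H[P(o|s)] ] (epistemic)
    return risk + ambiguity

# Two states, two outcomes; the agent is maximally uncertain about the state
qs = np.array([0.5, 0.5])
log_C = np.log(np.array([0.5, 0.5]))               # flat preferences over outcomes

A_look = np.eye(2)                                 # looking discloses the state
A_stay = np.full((2, 2), 0.5)                      # staying put is uninformative

G_look = expected_free_energy(qs, A_look, log_C)
G_stay = expected_free_energy(qs, A_stay, log_C)
```

With flat preferences, the risk terms are identical, so the difference between the two actions is purely epistemic: G_look is approximately 0 while G_stay is approximately ln 2, so the ambiguity-resolving action ("what will I learn if I do so?") is the one an active inference agent selects.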
So one of the key things that comes out of that kind of research is ways of measuring the implicit generalized synchrony that you get from having this information geometry I was talking about before — which rests upon there being a synchronization manifold between the inside and the outside. But if the outside is another inside, from another person's point of view, what you now have is a shared synchronization manifold: a mathematical image or space in which to actually talk about mutual inference, mutual active inference, engagement, and communication — singing together, for example, or diachronically exchanging messages. That does actually translate, mathematically, into movement of belief updating on a synchronization manifold, and it has real-world correlates: you can measure it using kinematic measurements (putting LEDs on people who are dancing together, for example), or measuring heart rate variability or galvanic skin responses, or doing eye tracking, or indeed EEG. So there is quite a lot of work — in things like hyperscanning, and in ethology and dance disciplines, in the arts and the life sciences — that uses these techniques to quantify the degree of generalized synchrony. What would be nice is to actually model that synchrony, or understand it, in terms of movement on the synchronization manifold — that is, the mutual belief updating. And one thing which comes out of that, in discussion if nothing further, is the reciprocal, circular causality that is necessary to maintain that generalized synchrony. The particular synchronization manifold we are talking about, from the point of view of active inference, is of course mediated across a Markov blanket — the active and sensory states. But in general, you need to have reciprocal coupling in order to
get synchronization — directed coupling doesn't work. And if that's true, it means that engaging as an audience, for example participating as a spectator, will only really work in terms of establishing the generalized synchrony you are chasing — and you are chasing it because, as soon as you have generalized synchrony, you have predictability for free, for all, and that's a good thing, because it minimizes free energy: the more predictable you can make the world, the better it is from the point of view of free energy — but you can only do that if, as a member of the audience, or a witness, you can actually actively intervene. That brings to mind something I have discussed with Maxwell: if you wanted to promote virtual concerts online, for example during the pandemic — what you don't have online, which is what glues things together (things like mosh pits at carnivals and festivals), is the audience participation: the applause, the roars, the lighter waving (or the light waving). So how would you get that back into a virtual experience? That would be absolutely essential, I think, to actually engage people; otherwise you will just be looking at a pop concert on television. So it is more than just revealing the underlying correlates of that generalized synchrony in terms of the EEG traces of the dancers, or doing some sensory mapping from their motion to auditory input — making the sensory evidence that supports the mutual inference more precise and more available just by having it displayed, say by putting motion into sound or sound into motion, or taking electroencephalographic measures of the performers or the audience and visualizing them (and that has been done, by people like Paul Verschure in Barcelona). More than
that: to actually enable the audience to change what the performers are doing — or perhaps what other members of the audience are doing — you have to empower them to close that circular causality, to get that dynamical coupling in place, so that you get the right kind of generalized synchrony. So that dynamical-systems perspective on synchronization as free energy minimization certainly speaks to a particular kind of participation and engagement that does indeed rest upon action-oriented approaches — but crucially, it is the action of the audience on the performers, not the performers' action on the audience, that usually needs more attention. I don't know if that was the kind of thing you were thinking about?

Yeah, that's a really useful answer, actually. We were thinking about that, and about some participatory immersive-theater-type events, and other participation in collective meaning-making — that's the type of thing we are looking at. It reminds me of the live-stream affordance, which is relatively novel but allows people to be asking questions: it enables not just efficient production of material in a one-shot approach, but allows for feedback. And I can't help but add that it is that affordance for participation — for example, "speak now or forever hold your peace" — that expands the wedding into the community: because there is the opportunity for feedback, it is not just a breakaway clique; it is something that remains integrated through the affordance for participation. So I'll turn to the last question for this section: how might future modeling involve large-scale patterns in social datasets, working backwards to infer their hidden causes — for example, in the case of pandemic modeling, governance, economics, or other situations?
Well, this is a very practical and very prescient question, because of course a lot of people are asking themselves that now — specifically with respect to pandemic models. And the people who are exercised, and have the interventional clout, when it comes to COVID are generally also the people who are invested in climate change problems. So there is a lot of noise out there at the moment about how we can harness the data-assimilation and modeling advances made during COVID-19 and keep the momentum up to tackle climate change — and not just the climate, but the economic, financial, and informational structures that are deeply interwoven with climate change. My answer is going to be somewhat deflationary. I have had this kind of conversation before — again with Maxwell, and John Clippinger, and related friends — and I am due to have another conversation with them, certainly about the open world or open economy, in the near future. There is a temptation to take all the high church of the free energy principle and active inference and epistemic foraging — all of that good stuff we were just talking about — and say: "now let's make it work in terms of understanding, say, the pandemic". And you don't need to do that. All you need to do is apply the good scientific principles that things like active inference appeal to, to the problem at hand. And it all comes back to the generative model. All you are saying here — "how might future modeling involve large-scale patterns in social data to infer their hidden causes" — is a statement that we need the right generative models to make proper sense of the big data at hand. And in saying "the right generative models", we need the equipment to invert those models, in the sense of inferring the parameters and interactions, using the simple tools we just talked about — whether that is lifting them from a repository or continuing to use MATLAB. But the bigger problem is the one we
talked about, which is model selection — the structural learning problem. And this goes beyond just "how many layers do I have in my deep network?" Much more important, I think, is the factorization: knowing how many conditionally independent factors I need, in order to minimize complexity and get the right granularity — the right way of carving up the latent causes behind all the data that is available to me. I think pandemic modeling is a beautiful example of this. The factors that determine whether I infect you can certainly be written down in terms of virology — ACE2 receptors, reproduction numbers, transmission strengths and transmission risks, the spike proteins — but that's only half the story. The other half of the story is: how likely are you to be at work or at home when I'm at work? Are you likely to be wearing a face mask? Are we going to be one or two meters apart? All these behavioral aspects start to become really important factors. And even beyond that, when it comes to making sense of the model, the likelihood part — the part that actually generates the data — becomes extremely difficult to optimize when you start to think about what kind of data is at hand. Take, for example, just the notification rates of new cases of SARS coronavirus per day. You might think, "oh, that's really great data", but it is really difficult data to handle, because the different kinds of tests not only have different false-positive and false-negative rates, but the different ways in which they are deployed really confound things in terms of selection bias. Are you testing people who are symptomatic? What is the probability of being tested if you are symptomatic, or if you are not? Are you doing survey testing? Are you doing the same amount of testing this week as you were doing last week? All of these — what would be, from an epidemiological or behavioral science perspective, really
uninteresting factors — suddenly become the most important factors in making sense of those data. But you only know that when you start to do the model comparison, the structural learning — when you actually commit to writing down the generative models. That is certainly what I have learned over the past year, now going on a year and a half. So the future of modeling: first of all, the obvious part is basically writing down the right kind of dynamical state-space models that account for the data; but the real future is dealing with the problems of structural learning and model selection — for any data, but in particular for some of the big data at hand in terms of pandemics, or trafficking on the web, or climate change. So it is a really exciting opportunity. Why would people want to do it? Well, once you have got the most evidenced — i.e., the minimum-free-energy — model at hand, and you have got posteriors over all the model parameters and all the right interactions, then you can do all sorts of things in terms of reducing people's uncertainty about the future — because you quantify the uncertainty, and can explain to them what was once uncertain and what is not. That has enormous implications for medical health and well-being, and possibly even feeds back into finance: you always hear that the biggest determinant of the markets is market confidence — it is all about uncertainty. So if you can do uncertainty quantification in a principled way using this kind of modeling, you have done a big thing already. But then you come to monitoring putative interventions: you have now got a direct handle — a posterior estimate — on the latent states you actually want to make decisions about. So it is not the notification rate, the number of new cases reported in California today; it is the number of new people who have become infected today. And that is a very difficult thing to infer, given all of these complicated aspects
of the generative model. And then, of course, once you have established the validity of this model — its construct and predictive validity — you can intervene on it. You can say: what would happen if I changed this? What would happen if I changed that? What would happen now, and what would happen in the future? So you are suddenly in a world of counterfactual modeling, where you can start to ask some very powerful questions, and also share the products of your inference with everybody who matters. You can now start to think about supplementing the weather forecast with an epidemic forecast: "the virus in your area — tomorrow we expect…". And you could also do that for the markets. These kinds of things, I think, are going to be more important when the current generation gets to your generation, I guess, and starts to wrestle more with climate change. They are going to want to know not just whether it is going to rain tomorrow; they are going to want to know — at the level not just of the weather but of the climate — what the indicators are, because those indicators really contextualize and inform their generative models about their place in the world at that global scale. But to provide that kind of forecasting — that meteorology beyond the weather — you are going to need these state-space models properly optimized, in a first-principles way, in relation to their marginal likelihood or their evidence bounds. And just read "governance" here too, because governance is just policy decision-making based upon counterfactual outcomes. That is always underwritten by these Bayesian beliefs — but you can't get the Bayesian beliefs unless you have a model that includes the consequences of action in the future, which would be your interventions, whether political, financial, or otherwise.

Thank you so much again for joining
this symposium — it was really a special moment for the lab, and we look forward to continued interaction. Much appreciated, and we will see you all soon. Thanks to everyone who is watching, and we hope you participate in ActInfLab. Thanks, everyone — bye!
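The selection-bias point in the pandemic answer above — that reported case counts confound the latent cause (new infections) with testing effort and false-positive rates, so the hidden states have to be recovered by inverting an observation model — can be sketched in a few lines. Everything here is invented for illustration (the Gaussian epidemic curve, testing volumes, and rates are toy numbers, and this is not the dynamic causal model actually used for COVID-19); it only shows why reading notification data at face value misleads, and how an explicit likelihood model helps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent cause: true daily new infections (a smooth rise and fall)
days = np.arange(60)
true_infections = 1000.0 * np.exp(-0.5 * ((days - 30) / 10.0) ** 2)

# Testing effort doubles halfway through -- a policy change, not an epidemic one
tests = np.where(days < 30, 5_000, 10_000)
ascertainment = tests / 20_000.0   # fraction of infections actually detected
fpr = 0.001                        # false-positive rate per test

# Observation model: reported counts mix true infections with testing artefacts
reported = rng.poisson(true_infections * ascertainment + tests * fpr)

# Naive reading: take the notification counts at face value
naive = reported.astype(float)

# Inverting the observation model recovers the latent infections (up to noise)
corrected = (reported - tests * fpr) / ascertainment
```

The reported series jumps when testing doubles even though the underlying epidemic is smooth; the bias-corrected estimate tracks the true infection curve far more closely than the raw counts do, which is the "direct handle on the latent states" discussed above.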