All right, hello and welcome everyone. It's December 7th, 2022. This is Active Inference Live Stream number 31.1. We are here with Brett Kagan, and we're going to be hearing a presentation and having a discussion on the free energy principle and active inference in synthetic biological intelligence. We'll have a presentation followed by a discussion, so feel free to submit any questions if you would like. Brett, thank you for joining. Really looking forward to this presentation and discussion. Over to you. Thanks so much, Daniel. Thank you, everyone, for joining us today. Hopefully we've got some interesting stuff. So what I want to talk to you about is the work that we've been doing in what we call synthetic biological intelligence, and particularly the use we've made of principles from the free energy principle and active inference frameworks. So if we jump to the first question, we can ask ourselves, because we work with neural systems: what is unique about neural systems? One of the obvious answers is that they display this unique ability to collate information and apply it in adaptive behavior across multiple contexts. And that could be a fitting definition for intelligence, if you like. But to actually be able to test this in real cells in real time, what you have to do is be able to record information from the cells and provide information back to the cells in real time. So how did we go about setting this up? How did we get these cells? We did it in two ways. Either we took what's called a human induced pluripotent stem cell, that's the hiPSC there that you can see. You can develop these from any donor's blood, tissue, or skin, and basically make a pluripotent stem cell line. And then you can use a number of different methods to turn that into neurons with quite a degree of specificity.
We used quite a broad one in most of our work, which is called dual SMAD inhibition, and which follows natural ontogenetic development to create these predominantly cortical cultures in a dish. But you can use other, more direct methods, such as what's called NGN2 direct differentiation, which gives rise to a more excitatory culture. And then we also took primary cortical neural cultures from mice and grew them as a comparison, because we wanted to make sure that we were using, at least in some sense, bona fide cortical neurons, and the best way to do that is to take them from animals. Fortunately, as you'll see, we have been able to move away from that in our current practice, so we're completely animal-testing free at the moment. And then what we did was plate these onto what's called a high-density multi-electrode array. Essentially, and I'll show you a bit more of this in a second, this is a CMOS chip, the type of chip that you might get in a digital camera. But it can also be used for this purpose, because what it can do is sense electrical signals, even very small ones, and also stimulate them. And so, just as some evidence that we actually did do the work we said we did, here are some examples of cultures that we've grown, and they show that we're able to use techniques to capture key markers of aspects of these cells. So, for example, this blue over here, hopefully it's coming up all right on your screens: this shows something called DAPI, which marks the nuclei of all cells. You can then look at something called NeuN here; NeuN marks neurons. So all the green dots, and you might be able to see it a bit better in this picture, all the green dots show that this is actually a neuron. And then, of course, one of the things that we know neurons have is that they send out axons.
And that's what this thing called β-III tubulin marks in the red. And finally, we want to know: do they have dendrites? You can see here in the purple, this is marking dendrites; it also marks axons a little, but it marks dendrites as well. And then, more than that, we want to know not just whether they are neurons, because most neurons will have those traits, but whether they are cortical neurons. BRN1 is actually a marker for cortical neurons, so you can see here that not only are they neurons, but most of those neurons are specifically cortical. There are, of course, going to be different cell types in there. For example, GFAP here marks supporting cell types such as glia, and you can see we have a number of glia. You can also quite obviously see that these glia, here in the bigger picture, have quite a different morphology from the neurons. So it's nice that we have this mixed population, because it means we're able to test something that has some degree of comparable characteristics to what you might see in cortical areas in an animal, or in our own cortical areas. Of course, it's orders of magnitude simpler than that, but it is reassuring for the function, as a model, to know that there are some similarities. And importantly, for anyone who has looked at creating cells out of stem cells, one of the big problems is that not all of those cells might be fully differentiated. If you do have cells that aren't differentiated and are still pluripotent when you put them in a dish, they can turn into anything, and most of the time what they turn into is something invasive and potentially cancerous, depending on what it is. Fortunately, Ki-67 marks dividing cells, so it would mark any cell that wasn't fully differentiated. In this case we had none of them, and that was common; generally we had none to very, very few.
And because of that, we're actually able to keep these cultures growing for extended periods of time. This is a cryo-scanning electron microscopy image, so these are real biological neurons. They're a little cracked, because when you do the cryo aspect of the scanning and freeze them for the necessary preparation, they do tend to crack a bit, but you can still mostly see that there are a lot of intact neurons and a lot of connections. Behind that you can actually see all the electrodes, and you can also see the density and the complexity of this. And this is actually an area of the chip that is less dense and interconnected than some of the other areas, because if you look at the ones that are highly dense and interconnected, you can't see the chip behind them; we wanted to show the fact that there's a chip behind this. And I think one of the key things to take away from this is simply the degree of complexity you get here compared to something like an artificial neural network. A lot of the time people look at models and they're quite nice and neat and symmetrical, and that's just not what happens in biology. As we'll discuss later, there are probably some interesting features that arise out of this degree and extent of random, or pseudo-random, connectivity. And so what we could then do with this technology is map things out. This is an example of the chip surface, and the firing rate in hertz can be seen here as a color scale. We can see the activity of different cell types over time, and different cell types turn on, or begin to become active, at different times. But importantly, what we can do is get a fairly even distribution of active cells over the surface area, fairly consistently. Again, anyone who works in this area knows there's a fair bit of variation inherently, and that's something we're working on now for the future. But we are able to get this degree of activity over time.
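As a rough illustration of the firing-rate maps just described, the idea can be sketched in a few lines of Python. Everything here is an assumption for illustration: a real high-density chip exposes thousands of electrodes through vendor software rather than a plain dictionary of spike times, and the grid layout below is hypothetical.

```python
def firing_rate_map(spike_times, n_rows, n_cols, duration_s):
    """Mean firing rate (Hz) per electrode, arranged on the MEA grid.

    spike_times: dict mapping electrode index -> list of spike times (s).
    Electrode indices are assumed row-major over the grid (an
    illustrative layout, not the actual chip's addressing scheme).
    """
    rates = [0.0] * (n_rows * n_cols)
    for electrode, times in spike_times.items():
        rates[electrode] = len(times) / duration_s
    # Reshape the flat list into rows so it can be drawn as a heat map.
    return [rates[r * n_cols:(r + 1) * n_cols] for r in range(n_rows)]

# Toy example: a 2x2 grid recorded for 10 seconds.
grid = firing_rate_map({0: [0.1, 0.5, 1.2], 3: [2.0]}, 2, 2, 10.0)
```

Applying a color scale to `grid` then gives the kind of activity heat map shown on the slide.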
And that gives us some useful cell cultures to test. Of course, how do we go about testing them? That's the big question. We know that as living organisms, you, me, mice, cats, whatever, we need closed-loop feedback to operate. What does that mean? Essentially, it means that we're able to receive information about the outcomes of our actions and modify our actions as a result. So if I reach over for a cell phone, I immediately get sensory feedback in a number of modalities as I do that action, and as I pick it up, I'm aware of the success of it. If I fail, I'm aware of that too. This little mouse with a VR headset on is from the Attinger study, which is one of my favorite studies done in recent times, where they actually attached virtual reality to mice. They found that if they disconnected the mouse's movement, as it was developing, from the visual information it got, it led to quite catastrophic functional developmental impairments. It shouldn't be so surprising, but it really does show us that this link with our environment is incredibly important for us as organisms to develop. So if you're looking at cells in a dish and you want to try to elicit intelligence, you can immediately say: well, you need to have some sort of linkage, some embodiment, which is achieved through a closed-loop system. One of the better ways to understand this is to contrast it with an open-feedback system, because that's what people generally do. So, just to emphasize: closed loop means that we stimulate some sort of sensation, which can be done with electrical stimulation, and we then provide sensory feedback, again through electrical stimulation, on how that response alters the world. And it's actually been demonstrated before in a few studies, for example in this paper from Bakkum and colleagues, that closed-loop stimulation does result in increased plasticity.
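The closed-loop idea itself can be sketched abstractly: read the culture's motor output, let that action change an environment, and feed the outcome back as stimulation. This is only a schematic sketch under stated assumptions; `ToyWorld`, `read_activity`, and `stimulate` are hypothetical stand-ins for the real recording and stimulation hardware.

```python
class ToyWorld:
    """A trivial 1-D environment: each action nudges a 'paddle' position."""
    def __init__(self):
        self.paddle = 0

    def apply(self, action):
        self.paddle += action
        return self.paddle

def closed_loop(read_activity, stimulate, world, n_steps):
    """Run n_steps of the sense-act-feedback cycle."""
    outcomes = []
    for _ in range(n_steps):
        action = read_activity()       # record motor-region output from the cells
        outcome = world.apply(action)  # the action alters the environment...
        stimulate(outcome)             # ...and the result is fed back as stimulation
        outcomes.append(outcome)
    return outcomes
```

An open-loop setup would simply drop the `stimulate(outcome)` line: the system still acts, but never receives information about the consequences of its actions.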
But we wanted to take it a little further and see if we could elicit a very clear goal-directed behavior. So we set this up; this is a little schematic of the dish. What we did was almost arbitrarily define, and I shouldn't say arbitrarily, because there was a lot of planning and strategy behind it, but what I mean is that within a given framework, here we designated a sensory region where we put information into electrodes using a combination of rate and place coding. And then we read activity out in this counterbalanced region here: activity in these two regions would make the paddle move up, and activity in these two regions would make the paddle move down. The reason I say it's arbitrary is that we were able to change this configuration in a number of ways and, for the most part, although there were some differences, still get a degree of statistically significant performance. That just shows, and again this shouldn't be a surprise to anybody, that neurons are incredibly plastic and able to adapt and change how they respond. And I think that's the real interesting thing, because starting out here, there is no real reason why neurons should behave in any particular way unless they're doing it in accordance with some fundamental imperative. And this is just an example of the visualization. If you jump to our website, we'll show that later, there's a site called SpikeStream, and you can go and play around with this yourself; it's available. So this is an example of the multi-electrode array. Each little dot is a potential electrode; the blue ones are the ones we're routing at the moment. Each little bar is activity. You can see we put activity in at the top, and then we read it out at the bottom, and it moves this paddle, and over time, as I'll show you, it gets better.
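A minimal sketch of how such a counterbalanced readout and a place-and-rate sensory code might look, with made-up region names, electrode counts, and a 40 Hz stimulation ceiling; the actual coding scheme and parameters used in the experiments are richer than this.

```python
def decode_paddle_move(region_counts):
    """Counterbalanced motor readout: two regions vote 'up', two vote 'down'.

    region_counts: spike counts per motor sub-region (hypothetical labels).
    """
    up = region_counts["up1"] + region_counts["up2"]
    down = region_counts["down1"] + region_counts["down2"]
    if up > down:
        return +1   # paddle moves up
    if down > up:
        return -1   # paddle moves down
    return 0        # tie: paddle stays put

def encode_ball_position(y, n_sensory_rows, max_rate_hz=40.0):
    """Toy place-and-rate code for the sensory region.

    Place: which row of sensory electrodes is stimulated tracks the ball's
    vertical position y in [0, 1). Rate: stimulation frequency scales with
    y (purely illustrative; the 40 Hz ceiling is an assumption).
    """
    row = min(int(y * n_sensory_rows), n_sensory_rows - 1)
    return row, max_rate_hz * y
```

The point of the counterbalanced design is that no single region can drive the paddle alone; the culture has to organize a difference in activity between opposing regions.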
Feel free to go to SpikeStream and have a look at that. But right now what I want to discuss is: yes, it's all well and good that a system may be able to organize its behavior in a changing environment. This is actually an example of a very successful culture, so these are actual neurons driving this paddle. It's one of our most successful cultures, but by no means a weird exception; we do see this sort of better performance fairly often. Again, there is variation, of course. One of the ways we wanted to investigate this initially was through the free energy principle, which I'm sure many of you are familiar with. So I'm not going to go into it in a great amount of detail, but I will provide a higher-level summary of it here. The key thing we can boil it down to, in a very oversimplified sense, is that the free energy principle suggests there is an innate imperative for a system to match the predictions it makes to incoming sensation, and this can be phrased in Bayesian terms. There are a lot of implications one can take from that. And I love this picture that Karl Friston developed as part of his Nature Reviews Neuroscience article; if you haven't read it, it's a great article that covers a lot of good stuff. It simply shows the linkages between the different levels of the system. And one of the strong arguments for the principle is that being able to predict and act in your environment is theoretically necessary. The other implication, of course, is that there needs to be a statistical boundary between the internal and the external, which is called a Markov blanket, and we're going to touch on that a bit more later.
Diving into the notion of active inference via the free energy principle: formally, it's been stated as the process of minimizing variational free energy through action and perception. Essentially, what it's saying is that a living organism will actively generate an internal model of the external world and then align that model with the world. Broadly speaking, there are two ways to do that: perception or action. So, to use my cell phone as an example again: if I reach over and pick it up, what I've done is create a model in my head, based on the visual information I've received, of where the cell phone is; I've reached over, I've picked it up, very good, my perception matched my reality. Let's say, though, that I didn't, and I dropped it as I tried to pick it up. Well, what could I do? Either I could get better at predicting the world, say, by predicting that when I reach, I will always drop my cell phone, or by getting better at locating where it is in the world; or I could get better at the actual reaching action. Really, when you boil it down, these are the only two ways a system can get better at matching its predictions to its environment: better prediction, or better control of the environment. So for our cell cultures, what we wanted to do was simply remove one of these options. It's a very simple implication from a very nuanced theory, but following it, we said: what if the world became truly random? This is something that, and you can't emphasize this too much, we as people can't really experience, something truly random, but in a dish, with random noise, you can at least approximate it. And so that's what we did. And so we had these two options.
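For reference, the standard textbook decomposition of variational free energy (this is the general formulation, not something specific to these experiments) makes the two routes explicit. With hidden states $s$, observations $o$, and an internal recognition density $q(s)$:

```latex
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  \;=\; \underbrace{D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right]}_{\text{perception: improve the model}}
  \;\; \underbrace{-\,\ln p(o)}_{\text{action: make observations unsurprising}}
```

Perception reduces the first term by bringing $q(s)$ toward the true posterior; action reduces the second, the surprise $-\ln p(o)$, by sampling observations the model already expects. These are exactly the two options just described.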
So what we then said is: if this is true, what the cells will do is change their actions to accord with the internal model of the world that they derive from the sensory stimulation we feed them. So under the model represented in this picture, what we propose is that the neural cells, or the system, will try to minimize the difference between the internal and the external world so as to minimize the free energy of the system, i.e., minimize their surprise. And we predicted, whether 'aversive' is the right word or not, we would certainly say that unpredictable, random stimulation should be something the cells try to avoid, while on the flip side, if we gave them a predictable stimulus when they did something we wanted, that might be 'reinforcing'. I put these words in quotes because there's a whole language question, an unanswerable question, around what it really is: is it truly aversive? Is it driving? Is it whatever? But for the sake of communication we have to pick words, so hopefully you understand the context in which they're used. What we propose, then, is that the cells should modify their activity to avoid adopting a state, in this case a pattern of activity, that would lead to an increase in surprise. So we set up some experimental tests and compared a number of different conditions. First, a control group: this was simply media in a dish that had the stimulations and feedback applied to it. Media in a dish, without any cells, should not be able to learn. We had an in silico model that would drive the paddle with random noise. And essentially, the media-only control group, I should say, can be thought of as testing whether we're able to get activity simply through our stimulation.
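The 'surprise' the cultures should avoid is later quantified as information entropy. As a sketch of that quantity, here is plain Shannon entropy over a discretized activity sequence; the binning scheme (which symbol stands for which activity state) is an assumption for illustration, not the actual analysis pipeline.

```python
from collections import Counter
from math import log2

def shannon_entropy(symbols):
    """Shannon entropy (bits) of an observed sequence of activity states.

    symbols: any discretization of culture activity, e.g. which motor
    region was most active in each time bin (an illustrative choice).
    """
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * log2(c / total) for c in counts.values())
```

A culture locked into one pattern scores 0 bits; one bouncing uniformly among states scores the maximum, which is the sense in which random feedback should drive a culture's entropy up.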
The in silico condition asked whether we could get performance simply through something in the system itself, making sure the system isn't biased in any obvious way. Then we had a rest control group, where there were cells in a dish but no stimulation. This asks: left alone, will the spontaneous activity of neurons, because neurons always have activity, regardless of whether you're stimulating them or not, lead to better performance? Again, there's no reason it should if our system is set up properly; if there were a bias, it would, and that's what we're testing here. And then, finally, we had both mouse and human cells that were given the information about where the ball was, were given feedback, and were able to play the game. What you can see, and this graph is a little hard to interpret if you don't stare at heat maps all day, but if you look at, say, just the top right, which is the number of hits occurring in any given minute, minute by minute on the x-axis, is that it doesn't really change much over time for the control group, and the in silico and resting controls are much the same. If you compare that to the mouse and human cultures, you can see quite a clear difference here. And if you want more asterisks for significance, because who doesn't love asterisks, at least all scientists love asterisks, or other symbols of significance, what you can see is that over time these control groups, the media control, the in silico control, the rest control, don't show any difference over time; a bit of variation, but ultimately no real significant difference over time.
In contrast, the mouse and the human cells both showed learning over time, and by the second time point they outperformed all the other conditions. Interestingly, the human cells even ended up slightly outperforming the mouse cells, and although I would be careful not to over-interpret this, it was interesting to note that we've run this experiment a number of times and this is a very consistent difference. So it could be an interesting way to start investigating differences between species in the future, if one wanted to set up a study more rigorously to actually test this. And these results were replicated across measures; it wasn't like we found one measure that suddenly looked like performance and ignored the rest. We looked at a number of them. Some of the other interesting ones are things like the percentage of long rallies, where the culture hit the ball more than three times in a row, so four-plus in a row, and you can see here, again, that this changes significantly over time for the mouse and human cells, and not for the others. Likewise the aces, which, as in tennis, is where they missed the ball without hitting it even once: this decreases statistically significantly over time for the mouse and human cortical cells, but not for the others. So that's interesting, but we would say: it's interesting, yet it shouldn't be surprising, because we know that neural cells are adaptive, and the fact that you can see it in a dish is cool. But what are the implications? What can this actually teach us about how these cells are behaving? One of the things people have looked at in detail previously is functional plasticity. So, of course, we also assessed functional plasticity, and saw that there was a massive difference between the resting-state functional plasticity and the gameplay functional plasticity. But what does that actually mean? What does that actually look like, going beyond just the fact that there's
plasticity occurring in the culture. What you can do is look at the correlation in activity between the cells in the motor regions I was showing you before and the sensory regions. It's not too surprising that, whether or not they're playing the game, you have quite a high degree of correlation; the correlations at rest are slightly stronger than the correlations during gameplay. But what is interesting is that if you take the average correlation, not just the overall correlation, but broken down per culture and session, you actually get a far higher average correlation between the sensory and motor regions when they're playing the game than when they're at rest. This makes sense: if you need to coordinate your action with incoming stimulation, you need a degree of correlation between where you receive that stimulation and where your activity output is going. Conversely, you can basically think of this as two buttons, even though they're spread over four regions, and what you would actually need is a negative, or reduced, correlation between these regions. If you look at the percentage of exclusive motor-region activity in this graph, which is where you have activity in either the up or the down region, and bear in mind again these are counterbalanced, this result is actually pretty interesting: what you see is that during gameplay, versus rest, you actually have a higher degree of exclusive motor-region activity, which would of course be necessary for discrete control in one direction vs.
the other. This was quite an exciting result, because it was really supportive of the idea that there is something quite dynamic and interesting going on in the way these cultures are reorganizing their activity. Likewise, this shows the degree to which this correlation decreases over time, with two linear regressions: you get two very different linear regressions over the first five minutes and the last fifteen. Based on some early work, and again not super arbitrarily, but based on some early work, I should say, we decided to split the games up into an initial period and a stabilized period. So that was quite supportive of the idea that what we're seeing here is really tapping into some of the key adaptive properties of the neural cultures. To take it further, let's focus a bit more on the free energy implications. We can of course reframe surprise; it has a link with information entropy, and, for those of you familiar with this, it can be captured as the KL divergence between two distributions of what might be occurring. So we wanted to calculate the information entropy of the actual cultures, and it's important to note that this is internal to the cultures, both during gameplay and at rest. If we didn't normalize it, there was quite a lot of variation in any given culture, but it becomes much clearer once you normalize it to baselines. And you can see here, firstly, that gameplay normally has much lower information entropy than rest; secondly, when you give them random feedback, their information entropy increases massively. That suggests that the cells, if they are indeed operating under the free energy principle, should be predisposed to avoid this, and that's actually what we saw. But what we were interested in was to ask: well, what if we tried different types of feedback
like this? It's all well and good, but how do we know that this type of feedback is actually what's leading to these results? It could be anything. So we set out and introduced a few other control conditions. This is our regular one, which I was showing you before; if it was a bit unclear, hopefully this clears it up: when they played the game and missed the ball, they got random, unpredictable sensory feedback; when they hit the ball, they got predictable feedback. We then added something called the silent condition: when they missed the ball, they basically had all their feedback removed, and when they hit the ball they didn't get any additional feedback either; the game just continued. Now, it is important to note that when the game restarts after a miss, the direction of the ball is random, so there is still some randomness in this game, but presumably less than in the normal stimulation condition. This can be contrasted with the open-loop, or no-feedback, condition. Here the game is played, and information about where the ball is is presented to the cells, or to the system, cells or controls, whatever the case is, but if they miss, nothing changes, and if they hit, nothing changes. So it's really asking the question: will a culture just play the game for the sake of playing the game? There's no reason it should, but maybe it does, so we wanted to test that. And in the final condition we included, of course, a rest condition. If you're more graphically minded, this can give you an example. Here you can see the ball going up; it gets hit, and then you get a predictable stimulation here; the ball continues and gets hit again; then it misses and gets random feedback, and the game continues. In contrast, you can see that this random feedback at the same time point is removed for the silent
condition; likewise, there's no predictable stimulation across the culture. And in the no-feedback condition, the ball is just completely predictable from the very first serve; there is no unpredictability in it at all. So, looking at the results, and obviously if you're interested you can go look at the paper, because we dive into this in similar detail as before, in brief, what I'll show you is that here we've compared everything to rest, because it helps control the variability a little and makes things look a little neater. Essentially, in the stimulus condition they outperform rest, and they outperform their starting position over time; it replicates what we saw in the first study. In the silent condition, there is a very slight increase; it doesn't reach statistical significance, but they do end up statistically slightly better than the no-feedback condition, which shows pretty much no learning compared to rest at all. So that matches up pretty well with what we'd expect under the free energy principle: a condition that gets a small amount of randomness shows a small amount of learning, if you want to put it that way, but not a lot; something that gets a lot of random information as a result of an action we deem undesirable shows a greater tendency to avoid that action; and something that has no incentive to behave in a particular way doesn't behave in a particular way. But what was really interesting and exciting for us, although I'll be honest, it made us stop for a few minutes to actually think about why it happened, was that when you look at this normalized information entropy again, we got a replication: when you gave them random feedback, the internal information entropy goes up; but with the silent condition, the information entropy also went up, in fact even more so. And we went: why would this be the case? What we realized quite quickly was: well, neurons left alone do fire spontaneously, and when you take a
neuron from a stimulated condition to an unstimulated condition, they do tend to re-engage with random spontaneous activity quite vigorously, straight away. And yet it didn't show the same degree of learning. Now, I will say this is one finding that absolutely needs to be followed up more in the future, and we are doing that, but it does have an interesting implication in terms of Markov blankets. I flagged this before, and this is why: I think this is one of the really interesting results. To go into more detail, a Markov blanket is a statistical boundary that can distinguish an internal state from an external state, and this can be at any level: an individual neuron, a cluster of neurons, a nucleus, a distinct functional region of the brain, the whole brain, or the body in relation to the rest of the world, to whatever is outside it. And so the fact that, as I was saying, the silent condition shows even more randomness, or information entropy, than when we're actually providing external random information into the system is very, very consistent with what we know about the stochasticity of spontaneous activity. But what it also suggests is that cells respond differently to randomness inside a system versus outside a system. That was a really exciting result, to find almost physical evidence that perhaps this Markov blanket really exists. Of course it makes sense that it exists; theoretically there are a lot of reasons. Even just the simplest notion that we can separate our internal thoughts from someone's external voice suggests that we need some sort of barrier to distinguish these types of structured information, one internal, one external. But seeing it was very, very interesting. And so we also wanted, of course, to extend
beyond this, and not just stop here, but look at what other interesting things might be happening. One of the other interesting findings, which we've recently completed, and there's a preprint of this up now, concerns this idea of neural criticality. So what is criticality, for those who aren't very familiar with it? In brief, it is a state where a population's activity is poised between two other states. Physically, you could think of the point where water becomes ice: there's a transition point just in between. To bring it back to the brain, you can think of a brain whose firing could be either highly coordinated, rhythmic, oscillatory, much as when we sleep or in catatonia, or completely disorganized, as in some cases of epilepsy. And then there's a middle ground; obviously it's a spectrum, so there's quite a lot of middle ground, but the closer you get to this critical transition point, this is called criticality. Theoretically, it's been proposed (excuse me, that's my timer telling me to take a break) to maximize information transmission and capacity. And what's also really interesting is that it's been linked to a number of different cognitive behaviors, although the field is certainly a bit unclear about what that link is. It's been linked, for example, to responses to drugs, working memory, attention; one of my particular favorites is scores of fluid intelligence. But these studies have some concerns with how they investigate it, and ultimately one of the biggest concerns is that, at least in humans, we have lots of compensatory mechanisms, and you have to use fMRI or similar, mostly fMRI, measures to actually investigate it. So there are a lot of barriers to actually concluding this. So we set up a study to actually investigate this, and
essentially, again not to go into too much detail about the nuances, we looked at three key metrics. There's been a bit of criticism in the field where people focus predominantly on power laws, and power laws can arise from noise alone, so we wanted to avoid that shortfall and look at three different measures. This is the setup we had: we compared the cultures either playing the game or at rest, which I described to you before, and then we looked at the DCC (deviation from criticality coefficient), the branching ratio, and the shape collapse error, and we wanted to see how we could group them. (Sorry, my slides have frozen; let me try to jump to the next one. Alrighty, hopefully that's back for everyone now.) Okay, so what we saw here, really surprisingly, was a very stark difference in criticality across a number of measures. You can see that the difference between rest and active gameplay is noticeable just by eye, and of course it's statistically significant. Diving into more detail, we could look at the different measures; this is just a quick overview of what they look like. For the branching ratio, for example, critical systems maintain the number of threads of activity over time; in a subcritical system that activity will die off over time, while in a supercritical system it will continue to expand up to a point. And then there's the shape collapse error, which looks at how well you can collapse different avalanche shapes across a given spectrum. Looking across all of these, we get quite strong statistical differences showing quite clearly that when you have the system embodied in an environment, it shows this measure of criticality, which is very interesting. One of the interesting things here is that it shows this criticality is very fundamental, and that information input seems to drive it
very strongly. To us, this does seem like a challenge to the suggestion that criticality is in itself inherently a marker of some sort of higher-order system. But I think the most interesting thing is simply that we can identify this, which opens up the possibility of further investigating it with more nuanced tests explicitly designed to probe criticality. What was also nice to see, as some suggestion that it does have a role in information processing to some degree, is this: looking at the correlations, there were some statistically significant correlations between the different measures of criticality and gameplay performance, showing that cultures with higher degrees of criticality across all three markers also had better gameplay performance in the hit-miss ratio, hitting the ball more often than they missed it. That was of course interesting to see, and it does suggest that, although we think that because this arises in such a simple system with such simple input it's unlikely to represent, or at least in itself be, a criterion for intelligence per se, there certainly is some link between processing information and criticality. If you want a more nuanced discussion of this, there's a preprint up now while the paper goes through peer review, and we discuss it in more detail there. Finally, just to really hammer home the point that this is a very fundamental property of these cultures, and so drastically different when they're playing the game versus at rest: when we classified the cultures as either playing the game or at rest based on criticality metrics and performance, we were able to do this with a 98% success rate if you included performance, and even if you ignored performance and just looked at criticality with
a random forest, we were able to do this with a 92% success rate in classifying whether they were playing the game or at rest. That obviously suggests this is very fundamental: once there's information input, these cells drastically reorganize their activity, and that's really interesting. Of course, one of the questions we got asked a lot when we put out the original work was, ah, but who cares? We literally had one of the peer reviewers write in one of their points that this can't actually do anything better than reinforcement learning, so how can you conclude anything about biology, which was a surprising comment to get. But we said fair enough: we think there's plenty you can learn about biology whether or not it does better or worse than reinforcement learning, but it's a fair question. Performance-wise, can it do anything interesting compared to reinforcement learning?
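As an aside for readers unfamiliar with the branching ratio mentioned in the criticality discussion above: it measures, roughly, how many units of activity each active unit triggers in the next time bin, with values below 1 dying out (subcritical), above 1 blowing up (supercritical), and near 1 marking criticality. A toy sketch on synthetic spike counts, using a deliberately naive estimator for illustration only (the actual analyses use more robust methods):

```python
import math
import random

def poisson(lam, rng):
    # Knuth's method; fine for the small per-unit rates used here
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_counts(sigma, n_bins=300, seed_events=100, seed=1):
    # Toy branching process: each active unit recruits Poisson(sigma)
    # units in the next time bin; reseed when an avalanche dies out
    rng = random.Random(seed)
    counts, n = [], seed_events
    for _ in range(n_bins):
        counts.append(n)
        n = sum(poisson(sigma, rng) for _ in range(n)) if n > 0 else seed_events
    return counts

def branching_ratio(counts):
    # Naive estimator: mean ratio of activity in bin t+1 to bin t
    ratios = [b / a for a, b in zip(counts, counts[1:]) if a > 0]
    return sum(ratios) / len(ratios)

sub = branching_ratio(simulate_counts(0.8))              # subcritical, dies out
crit = branching_ratio(simulate_counts(1.0))             # near critical, sustained
sup = branching_ratio(simulate_counts(1.15, n_bins=40))  # supercritical, grows
```

The estimates land near the true recruitment rates (about 0.8, 1.0 and 1.15), which is the intuition behind using the branching ratio as one of several criticality markers.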
We know that if you run reinforcement learning for long enough you will get superhuman performance at the game of Pong; it will beat humans, that's fine, and the same goes for protein folding, chess, Go. That's all well and good, but does that mean there is no room for a biological system here? So what we wanted to do was compare for sample efficiency, because that is of course something that reinforcement learning struggles with greatly. We compared three different deep reinforcement learning algorithms, DQN, A2C and PPO, with three different types of information input, against our biological cultures, to see how they performed. Showing you here some results where we took the image vector, processed it with a CNN, and fed that into the deep RL algorithms: what we were able to see was that our mouse and human cells, which are here in blue and orange, significantly outperformed and out-learned them. DQN was pretty much always the lowest performer, which is fine, we know it's very sample-inefficient, but even more sample-efficient algorithms like A2C and PPO also failed to show the same degree of performance. Even though they started higher, for whatever reason, due to some quirk of the algorithms we could never quite figure out, they never showed the same degree of learning, and this was mimicked across multiple comparisons. But what was brought up to us was the curse of dimensionality: we were feeding a 40 by 40 image vector to these deep reinforcement learning algorithms, and of course it's going to take them longer to converge because they have to deal with more information. And we went, you know what, that's a fair point. So we ran it even simpler: we said, all right, we'll feed them a simple four-element vector of where the paddle is and where the ball is, and that will be
much simpler and closer to the underlying game state. So we fed that in, and what we saw was that they actually performed worse with less information. Of course, the mouse and human results stayed the same, because we can't alter the information the actual mouse and human cortical cells receive; this was a post-hoc comparison. But yeah, the algorithms did even worse across multiple metrics, and we found the cells again outperforming them, despite the cells receiving, if anything, less information than the tools were getting, because the cells really only received the ball position. We're not yet able to encode proprioception, because the DishBrain system is very simple. And although we would argue this sparsity of information should truly make it harder, not easier (certainly in our initial pilot testing we found that the more information you gave, the better the cells could do), we again said, fair enough, let's change the information we put to the reinforcement learning algorithms, give them the simplest possible information, matching as closely as we could what the cells are getting, and see how they perform. And again they performed even worse, suggesting that in fact the extra information had probably been helpful to them. So I think what we could show here is that, yes, there is certainly something interesting going on in neural cells and the way they're able to process information. It comes back to that quote I love so much: if the human brain, or in this case even a simple collection of neurons in a dish, were so simple that we could understand it, we might be so simple that we couldn't. We like to think that the tools we're building, such as this, are going to help us simplify it enough that we'll be able to unpick and start to understand the nuances behind this. But one of the nice things we were able to show is that even without fully understanding all the nuances of what's going on inside a dish like this, we can see that
it does have some distinct traits in terms of intelligence, with promising advantages for information processing above and beyond what's going on in reinforcement learning, and we really think this hammers home that it's going to be a powerful tool for investigating these features in more detail. So, as a summed-up conclusion: what we have with this system, essentially, is real biological neural cultures that can exhibit a very rudimentary form of natural intelligence through their inherent adaptive traits. These adaptive traits are frankly quite amazing in the extent, variety and diversity of what they've shown, and even with the data we've generated we're only just scratching the surface of their potential. Simply by having this closed loop of electrophysiological stimulation and recording via these multi-electrode arrays, we could embed them into, or embody them in, a simulated world, depending on which term you prefer, and they could get better at playing it, moving the paddle to perform the game with quite a consistent degree of coordination and with a faster learning rate than multiple reinforcement learning algorithms. And of course, as I said, this is a lot of work and we're only just scratching the surface. So one of the things we're excited to do, because we're actually not an academic lab, we're a small startup based in Melbourne, Australia, is to open this platform up to everybody.
So if you have a question, or you think everything I've just told you is completely wrong and that neurons work in a totally different way, that's fine, reach out. Next year we will be opening up for alpha testing, and the most exciting projects will be supported with early access. If you want to investigate the properties of these neurons and you have basic Python or Java coding skills, you can access them virtually, code them an environment, see how they respond, test it, and go away and write a paper showing how you think it should all work, based on actual data. This is because we acknowledge this is a whole field of research, and we want to make these tools available to as many people as possible; we're going to make it as easy for you as possible, and it's going to be fairly affordable. So that's the end of the talk. Of course, it's not just me working on this; there's a whole team and a number of collaborators. These are just the collaborators who touch on this work; there are many other very valuable people we work with as we expand into other areas. And actually, we're fortunate that Professor Adeel Razi, who's one of our closest collaborators, has been able to join us today for the discussion part, so we're really happy to take questions. All right, wow. Well, thank you very much for that excellent presentation; it gives us a lot to jump into. So perhaps, Adeel, feel free to say hello, introduce yourself, and add any general remarks, and then we can jump into some more questions. Yeah, thank you, Daniel. Great talk, Brett, and as always it was a tour de force through so many projects; the lab is really doing amazing work. So I'm based at Monash University.
We're not that far from where Cortical Labs is; they're in the city and we're a bit suburban, at the main Monash University campus, where we have the Computational Systems Neuroscience Lab. Our lab uses brain imaging and computational models, especially dynamic causal modeling and active inference, as frameworks to look into the brain and try to understand how it works; for example, the mechanisms by which the brain implements coordination. For this we use, as I said, the mathematical framework of the free energy principle and also dynamic causal modeling. So that's basically what we are doing, and I'm happy to contribute to this discussion. It's Brett who should be talking more, and I would gladly just sit here and listen to the wonderful interaction we're already having. Awesome. Well, my first question, just for context, was: how did active inference come together with this line of research? Were you studying active inference and then looking for an embodied system? Or was it that you were studying this very interesting system and looking for some type of framework to help you model it? Absolutely, it's a great question. It goes back historically. Andy Kitchen and Hon, who are among the founders of the company, were interested and looking for ways to explore this. And actually, Adeel, you ended up having a conversation with them, I believe; they were basically looking for a method of how we could talk to the cells, and they came across active inference. Or did you introduce them to active inference? I would say that Andy and Hon, at the time we're talking about, late 2018 and early 2019, were already interested in the free energy principle.
But then I moved to Melbourne from London in mid-2018, and I ended up giving a talk at the Alfred, where Cortical Labs was being incubated at the time, and I was discussing these ideas. So it was an interaction which just happened, and we thought, yeah, okay, there's a lot of synergy here. We workshopped some of the early ideas at that time, and Hon and Andy were really thinking about what could be the first thing we should do that would really showcase the power of DishBrain. Of course, the exact name came later on; I think that's probably Brett's. No, no, actually, DishBrain as a name definitely goes to Andy as well. Okay. I would have given it some sort of esoteric name that everyone would have gone, what the hell is that? So, no, it was a good call by Andy. Yeah. But of course, the whole active inference framework does have a lot of really useful features that make it quite an ideal thing to test: the whole action-perception cycle, the fact that there are implications you can simply take and test for biological plausibility. That made it a very useful framework for us to adopt. Having said that, though, and Adeel can attest to this, it was a bit of what I think Anil Seth calls, and maybe other people call it that too, an adversarial collaboration in some aspects, because we had no stake in proving or disproving it; we were simply testing it. And so we had a number of discussions as we went through: what if we see X, Y, and Z? We wanted to challenge it, not just support it. And I think that was one of the things that made it a really exciting collaboration, in that there was a high degree of skepticism amongst the team about pretty much everything we did.
And I think that's what came together to make it quite a nice paper, because, as everyone in science should be (but let's be realistic, seldom are), we were trying to be our own worst critics, trying to figure out what else could be explaining the results. And there are interesting future directions we can explore, but certainly the results as they stand did find support that this is a key characteristic of how cells respond. Awesome. So one preemptive question about learning, and then we'll drive back towards active inference. What kinds of mechanisms do the neurons use to engage game mode and also to learn? Are they changing their topology in terms of their connectivity? Are they changing what happens at synapses? Are there changes happening inside cells? What happens when game mode turns on, and how does performance actually improve throughout the course of the experiment? Look, that's a great question, and the reality is we don't have the full answer yet. Most likely it's a combination of the latter two you mentioned: changes happening inside cells or at synapses; it's not necessarily too useful to try to break those apart. Larger-scale morphological changes are less likely, simply due to the timescale we see, and the fact that, in this study at least, we typically didn't see learning across days (although there will be some future work showing some other things as we've advanced our work). So it's unlikely there are massive changes in connectivity; it's more likely functional changes in plasticity and how they respond to information. And it could be a number of things: it's probably some sort of relationship and balance between Hebbian and homeostatic plasticity, long-term potentiation and long-term depression.
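To illustrate the balance just mentioned: pure Hebbian potentiation is unstable (weights grow without bound), and adding a homeostatic constraint that keeps total synaptic strength fixed turns it into a competition between inputs. A toy sketch of that textbook idea only, not of the actual mechanism in these cultures (all numbers are made up):

```python
import random

random.seed(0)
n_inputs = 10
weights = [1.0 / n_inputs] * n_inputs   # start with uniform synaptic weights
eta = 0.05                               # Hebbian learning rate

for step in range(1000):
    x = [random.random() for _ in range(n_inputs)]
    x[3] += 0.5                          # input 3 carries consistently stronger drive
    y = sum(w * xi for w, xi in zip(weights, x))   # linear "firing rate"
    # Hebbian potentiation: strengthen synapses whose input coincided with output
    weights = [w + eta * y * xi for w, xi in zip(weights, x)]
    # Homeostatic scaling: renormalize so total synaptic strength stays constant
    total = sum(weights)
    weights = [w / total for w in weights]

# The reliably co-active input wins the competition for synaptic strength
strongest = weights.index(max(weights))
```

Without the renormalization step every weight grows indefinitely; with it, strengthening one synapse necessarily weakens the others, which is the kind of interplay between potentiation and homeostasis being described.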
And then there are so many mechanisms you can break down to look at how that works. One interesting candidate could be the phosphorylation of cofilin, which interestingly takes about five minutes to translocate from inside to outside synapses and is known to alter long-term potentiation, which would roughly accord with that timescale. But this is something we need to investigate more in the future. Adeel might have some ideas to discuss at a higher level than this, less reductive, more computational. Yeah, more computational. So we are really interested in learning mechanisms, especially which learning mechanisms we can mimic in artificial neural networks. Now that we have this biological chip which can learn, can we actually understand how it's doing this, and can we, in the short term, have better artificial neural networks? Of course, the other reaction this team has is to say we don't actually need the artificial neural networks anymore, we'll just have the biological chips, right? So that's exciting. We have been looking at some of the more unresolved problems in deep learning and machine learning, for example lifelong learning and continual learning. We know that artificial neural networks break down in tasks where you have to do multiple tasks in series: they can achieve very good, human-like performance on a single task, but then you train them on a second one and they forget what they learned on the first, unlike humans, who can learn to run, swim and speak without forgetting what they learned before. So that's one thing. Another thing is credit assignment, one of the bigger problems.
How do you actually solve the credit assignment problem? That is again a long-standing, unsolved problem in deep learning: which choices we make result in rewards in the future. Many times there's a delay between behavior and reward, so encoding which behaviors, which elements and choices, resulted in a long-term gain is another problem where we really don't know right now how we're going to solve it. These are very big problems which remain unsolved, and I think it just shows you the power of the DishBrain system; there's a real opportunity here for us to actually tackle those problems as well, as Brett was saying. And I think it comes down to using the right tool for the right job. Reinforcement learning and CNNs can do amazing things if you've got the data to train them, the rest of the setup you need, and you want to use them for the right purpose. Are they very good at operating in dynamic real time with limited data and fuzzy information? So far, no. And so maybe there will be a case where we can take some of the findings about how neurons work and apply them in an algorithm itself. I know Adeel and his team are doing some amazing work building up active inference models, and I think those results are really going to excite a lot of people; I won't say anything more than flagging some of the work you're doing, Adeel, but I think it's going to be really exciting for people to see what's happening. But likewise, I do think there will be things where biology maintains superiority.
And the nice thing about biology, of course, when you talk about generalized intelligence, is that we have no proof of principle that generalized intelligence can arise out of silicon hardware or quantum-based hardware. We do have proof of principle that it can arise out of biological wetware: our brains, and the brains of flies, cats, rats, whatever, show general intelligence to differing degrees. So to us it's not a question of can biology show generalized intelligence, but how do you get there? And that's a less speculative question; not necessarily an easier question to answer, but it does give you that ground truth, the thing-in-itself sort of answer. Wow, so much there. In silico, in-principle systems can display open-ended symbolic learning, going back to the Turing machine, but for the kind of open-ended enacted intelligence we might be interested in, in settings like you described, which are real-time, fuzzy, and with small data (and we could also add with a high risk or survival threat), we have empirical evidence across the surface of the planet that biological systems can make that happen, and we have zero empirical examples of silicon chips making that kind of functionality arise. So that's a very interesting point. That brings us to this question about the similarities and differences between artificial neural networks and the biological structure, and also between the reinforcement learning paradigm and the active inference paradigm, in which, rather than maximizing a reward function, we are minimizing an expected free energy or a variational free energy by way of bounding surprise. In describing the experimental setup, the reward, or rather the consequence of succeeding or failing, was you adjusting the regularity of the input, and in a reinforcement learning paradigm that's a little bit of an unconventional manipulation.
One might expect, oh, we'll add sugar, or we'll add a dopamine agonist or antagonist to the dish, to signal the positive or negative valence directly, kind of putting our finger on the scale of the reward function; but instead, you were able to train and elicit learning by changing the patterns of regularity of the feedback. So how are active inference and reinforcement learning similar and different in terms of the way we can use patterns of regularity, not just the valence of interventions, in learning? I'll give an answer and then pass over to Adeel, who can dive a lot more into the computational side than I can. What I would say, though, is that there are a few things. One of them, of course, is: can you even adopt a reinforcement learning paradigm in a dish? It's not like there was a choice between the two, because we don't have access to a privileged reward network inside the dish to be able to say good cells, bad cells. You could, in theory, squirt dopamine in, but you can't do it continually, and you can't do it at a spatial or temporal resolution that's accurate enough; it diffuses slowly across the media. We can't do it. So for us, we needed to find something more fundamental, and I think that's an important point. It's also interesting to note that there are studies showing that changes you can achieve through the application of small molecules, such as dopamine or other neurotransmitters, can also be mimicked with electrical information: the right stimulation can increase long-term potentiation in much the same way that, say, dopamine would. So dopamine seems likely to be a tool (this is speculative, of course) that shortcuts the way these systems work, in line with some broader imperative such as free energy minimization, more so than being something unique and special.
It is unique and special in some regard, in the way that it works, but it must be serving a more fundamental function. Yeah, and I think that's probably a good way to lead into the differences between, let's say, the active inference framework and the reinforcement learning framework, which Adeel would be best placed to discuss. Yeah, great question, Daniel. First, let's start with the differences between the in silico models, artificial neural networks, and biological neural networks. First of all, the artificial neural network is a caricature of an actual biological neuron: it just mimics the fact that it takes an input and gives an output through a nonlinear function, and once you have many of them stacked up into layers they can do amazing things. So artificial neural networks use just a cartoon of what biological neurons do; a biological neuron is a lot more than that, an extremely complex biological thing. So imagine that we could reuse all of that power, in terms of information processing, and build systems from it: how powerful they would be. As we know, the huge large language models, LLMs, and all those foundation models we have been seeing, with billions of parameters, require gigawatts of power to train; basically a whole city could be powered by the energy it takes to train one, whereas a human brain uses just a small fraction of that energy, time and power to do much more than they can. They still can't do some causal learning and causal reasoning problems, even very basic ones, while biological neurons do it very easily; as we are doing right now, having a back-and-forth discussion that even the biggest, most sophisticated neural network would not be able to sustain for a few minutes. So that's what we are looking at.
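The "caricature" described here really is only a few lines: a weighted sum passed through a nonlinearity, stacked into layers. As a concrete sketch, here are hand-picked weights implementing XOR, a function a single such unit cannot compute, to show what stacking buys (the weights are illustrative choices, not from any trained model):

```python
import math

def neuron(inputs, weights, bias):
    # The caricature: a weighted sum of inputs through a nonlinearity
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid activation

def xor(x1, x2):
    # Two stacked layers: hidden units compute OR and NAND, the output ANDs them
    h_or = neuron([x1, x2], [6.0, 6.0], -3.0)
    h_nand = neuron([x1, x2], [-6.0, -6.0], 9.0)
    return neuron([h_or, h_nand], [8.0, 8.0], -12.0)
```

That a structure this simple, replicated at scale, does "amazing things" is exactly the contrast being drawn with the complexity packed into a single biological neuron.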
That's the enormous opportunity we have: to actually harness this power and do amazing things. Now, coming back to active inference and reinforcement learning: this has been asked many times, and there are papers Karl has written, and I think Noor Sajid has a paper, where they look at which is biologically plausible, and active inference is the biologically plausible one. The main difference is that an active inference system is optimizing a single quantity, what we call the variational free energy, and with that single optimization function you are doing perception, planning and decision making, all with the same quantity. Let me put it this way: the way things have happened over the past few decades, it has always been about how to optimize a function. Once you have, let's say, a value function or a reward in reinforcement learning, it's all about how best you can optimize that quantity. What we are asking instead is what we are optimizing, not how we optimize it. What is that quantity? And that's the variational free energy, and it gives you all of that in a single objective, compared to a reinforcement learning system that would require multiple objective functions to optimize; I don't think that's biologically tenable. That's why active inference is a unifying framework, and it will probably rattle and offend a few when we say that active inference subsumes all those computational frameworks, like reinforcement learning and dynamic programming and all their derivatives, because it is basically what brains are doing, and they're doing it better than the frameworks we have been using so far.
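To make the "single quantity" concrete: for a discrete generative model p(o, s) = p(o|s)p(s), the variational free energy of a belief q(s) given an observation o can be written F = KL[q(s) || p(s)] − E_q[ln p(o|s)], which upper-bounds surprise, −ln p(o), and is minimized exactly at the Bayesian posterior. A minimal sketch with a made-up two-state model (the numbers are purely for illustration):

```python
import math

# Toy generative model: hidden state s in {0, 1}, observation o in {0, 1}
prior = [0.5, 0.5]                 # p(s)
likelihood = [[0.9, 0.1],          # p(o | s = 0)
              [0.2, 0.8]]          # p(o | s = 1)

def free_energy(q, o):
    # F = KL(q || prior) - E_q[ln p(o|s)]; always >= surprise, -ln p(o)
    kl = sum(qi * math.log(qi / pi) for qi, pi in zip(q, prior) if qi > 0)
    accuracy = sum(qi * math.log(likelihood[s][o])
                   for s, qi in enumerate(q) if qi > 0)
    return kl - accuracy

def exact_posterior(o):
    joint = [prior[s] * likelihood[s][o] for s in (0, 1)]
    evidence = sum(joint)
    return [j / evidence for j in joint], -math.log(evidence)

o = 1
post, surprise = exact_posterior(o)
# "Perception" as optimization: sweep candidate beliefs, keep the F-minimizer
candidates = [[q0 / 1000, 1 - q0 / 1000] for q0 in range(1, 1000)]
best = min(candidates, key=lambda q: free_energy(q, o))
```

The brute-force minimizer lands on the exact posterior, illustrating the point that a single objective covers perception (and, with expected free energy over policies, planning as well).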
So I think there are clear differences. Having said that, what I would really like to see in future, and we'll probably have a dig at it, is how you can mathematically and unambiguously provide a proof of where and why active inference is superior. There's still a gap there: while we can develop in silico active inference agents and show that they do better than reinforcement learning agents, and there's a whole body of work that one of my PhD students, Ashwin Paul, is doing, with papers coming where we compare active inference agents with state-of-the-art reinforcement learning and deep learning networks and show, of course, that deep active inference does better, the theoretical proofs are still missing. Similarly, what are the similarities and differences with other frameworks? For example, I would be very interested to know how it differs from Todorov's work, and again Ashwin is looking at that. Todorov's 2009 PNAS paper on efficient computation of optimal actions is far-reaching: they show, in both discrete and continuous settings, that their algorithms can beat dynamic programming and associated learning-based algorithms. How is active inference different from Tishby's information bottleneck? What are the links to Solomonoff's universal priors? It's really early days. What are the similarities and differences of active inference with all those big ideas which have been there in cybernetics and other literature for many, many years, for decades? So it's a good question.
We don't have all the answers, but, yeah, it could be a very exciting next few years looking at all those big questions. I just wanted to echo one point you made and then ask a question from the chat. In reinforcement learning, what you should optimize is almost treated as an obvious question: you want to win the game, you want more points; you want to stay alive, then you optimize survival. So that's your reward function, and what to optimize is kind of swept under the rug. Then there's this big question of how to optimize it, and that's what has led to the enormous diversification of approaches and heuristics for how to optimize, different gradient descent methods and so on, in reinforcement learning. In contrast, in active inference, what to optimize is actually known: it's the variational or the expected free energy, as exactly specified by the generative model. And we know how to optimize it, with a variety of locally plausible, biologically inspired update rules, for example message passing and belief propagation. So that's a very subtle but also paradigmatic difference: the free energy functional, the structure of what we're going to optimize as our unified imperative for perception, cognition and action, has methods and software packages that will step by step convergently optimize it. The onus is actually to specify the generative model in a useful way, rather than treating what should be optimized as a bygone discussion and then diving into all these questions about implementation, which are the parts that actually differ the most across systems. So I think that's very insightful. I wanted to ask a question from the chat, from Michael: what would be the difference if you changed the electrode array to stimulate differently? For example, instead of just growing the neurons on top, what if the neurons synapsed with the electrodes?
So what kinds of experimental setups are possible, which ones are interesting, how do you know, and what do you choose to do? I think the first thing to say about that, good question, is that as the technology develops, we'll be able to ask these questions in better and more interesting ways. I've made this comparison before, but I'll make it again: the work that we've done here, we're excited by it, but at the same time it's probably equivalent to the early transistors made by Shockley. It's janky, it's kind of ugly, it does a job, but not without limitations. So what we're really excited about is the community hopefully coming together and working on this for five, 10, 15, 20 years, maybe longer; 70 years of continuous development have led from the transistor being a large, ugly thing to thousands of them within every device that I can see. So that's one thing I'll just say. In terms of integration, I will point out that the neurons do, in as much as they probably ever can, integrate with the electrodes as-is. As you saw from the picture I showed at the start, they are deeply integrated into the electrode array. We could not remove them in any meaningful way, let the neurons survive, and keep that structure together; we'd have to completely dissociate their structure, and most neurons won't survive that for very long anyway. So they are currently integrated. I mean, synapses work slightly differently; there's a whole collection of traits there. There is, of course, electrical activity as a shared language between neurons and the hardware, and that's what makes it possible to stimulate them with electricity: it can mimic the change in ions across the membrane to trigger action potentials, which is how they then communicate electrically. 
But the action potentials, of course, trigger the release of chemical signals and so forth, so there's a lot going on there. And I think it's about building the bridge. It's not necessarily about, I don't know quite the right way to phrase this, completely recapturing everything that's unique to biology in the hardware, or everything unique to the hardware in biology. It's about finding the useful bridge and synthesis between them to elicit the best of both worlds. And again, it comes down to the right tool for the right job, but in this case it's a different circumstance: what's the right tool to process the information? Is it a biological part? Is it the hardware part? So when we communicate with them, what we need to figure out, and again, this is an open question, I'm not sure where it will go, is what's the best way given the technology we have right now. It's not trying to replicate hardware-based synapses to do chemical communication, although maybe in the future it will be. But yeah, it's a very interesting question. The base of the answer is that we don't really know where it could go or what it would do. As I mentioned in the talk, these are incredibly plastic neurons. We can change the configuration, we can change the type of neurons, we can move from 2D to 3D, and this is work we have ongoing now. And I think what we're going to see is just an explosion of results as we start to make inroads in this area. Saying that the neurons might not survive separation from their cyber-physical digital substrate reminds me of the reality for some to all of us today: if we were also stripped from our digital substrate, many critical systems would not function either. And I think that speaks to some of the recent work of Friston et al. 
at VERSES, with the idea of ecosystems of shared intelligence, where we're thinking about distributed cognitive processes that are mediated by interfaces. Some emitted dataset is received by another cognitive entity on the other side of the Markov blanket. Maybe it's person to person, maybe it's computer to computer, or something mixed, or some augmented system. But there's a really tractable and elegant way to talk about those heterogeneous systems. So even if in the local context one algorithm or another outperforms the rest, there's going to be immense value in using active inference and the FEP just to talk about how to compose these systems. And so that's a very important insight. 100%, yeah, that's a really good point, Daniel, very valuable. And I fully agree with you: it comes down to how we phrase and think about these systems, and that's a challenge. Even discussing what these systems are and what they're showing is a challenge right now, and we need to work on building up that language together so we can have those useful discussions and figure out how to leverage this distributed network that we're all part of. I think it's inevitable, as you bring up, that that's where we are, and so we're only going to go deeper in that direction. Awesome. Just a few questions on information as we head towards the end. It was a very striking finding about the changes in information dynamics when gameplay was activated versus rest, as well as during learning. And that's kind of analogized by, let's say, a person in an fMRI: their mind is wandering, they're engaging the default mode network, then some task comes on and the information dynamics sharpen, and we associate that with mechanisms of attention. So how did you measure or quantify the information, or what do those informational measures mean in the context of neural firing patterns? Yeah, that's a big question. 
And again, it's something we need to do more work diving into, to pick apart what it actually means. At this point, what we were limited to, well, that's not entirely true, we're not limited to it, but what we did look at, just for the sake of time and complexity, were individual spikes in a given location. You can arrange that spatially and temporally, and we've dived into this more in our recent work with organoids as well, trying to pick apart the functional subunits of computation or intelligence or whatever word you want to use. But essentially it's the pattern of activity you're seeing across time and space for a spike in a given location versus the other ones, and there's a number of ways you can look at that. I think the really exciting stuff that's coming out is around trying to break that apart. People started with Hodgkin-Huxley models, et cetera; they looked at neural spiking and asked, how does a single neuron spike? What makes a single neuron spike? They came up with some rules that are mostly true, but that we now know are still partly wrong, because many neurons follow those rules and many do not, and some are circumstance-dependent. Then people extended that and looked at how a given system changed its activity over time. And then of course you can expand that out and look at how pairs of neurons, and then trios of neurons, behave, and grow that up to a whole population. So what we've tried to do with the work I've shared with you today is look at that at a few different levels. The exact answer to "what is information" depends on the level, but the simplest answer is that it's the spikes we can see going on, in this case, and their relationships to other spikes. I don't know, Adeel might have more to expand on that. 
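[Editorial aside: one minimal, generic way to quantify "the relationship of spikes in one location to spikes in another" is the mutual information between two binned spike trains. The sketch below is an illustrative plug-in estimator on synthetic data, not the analysis pipeline used in this work; the channel names, bin counts, and noise level are made up for the example.]

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in estimate of mutual information (bits) between two
    binary spike-indicator sequences x and y (one 0/1 value per time bin)."""
    x = np.asarray(x, dtype=int)
    y = np.asarray(y, dtype=int)
    # Joint distribution of (x, y) bin states, estimated by counting.
    joint = np.zeros((2, 2))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    joint /= len(x)
    px = joint.sum(axis=1)  # marginal over x states
    py = joint.sum(axis=0)  # marginal over y states
    mi = 0.0
    for i in range(2):
        for j in range(2):
            if joint[i, j] > 0:
                mi += joint[i, j] * np.log2(joint[i, j] / (px[i] * py[j]))
    return mi

# Toy example: channel b echoes channel a, with ~10% of bins flipped.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=2000)
flip = rng.random(2000) < 0.1
b = np.where(flip, 1 - a, a)

print(mutual_information(a, b))                              # strongly dependent
print(mutual_information(a, rng.integers(0, 2, size=2000)))  # independent: near zero
```

[With small state spaces like this, the plug-in estimator is slightly biased upward, which is one reason real analyses use bias-corrected or multivariate measures, but the basic idea of relating activity across channels is the same.]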
No, I think you have, actually. At some point, as you mentioned, we also want to look at those functional subunits, considering them as neural populations using mean field models, and hopefully a few functional subunits will emerge. And then can we look at the connectivity patterns using something like neural mass models, which have been used quite a lot, for example Jansen-Rit-type models, but with a bit more dynamics in them? It would be very interesting to see how patterns emerge when they start to learn, how the populations form patterns and connections, and how these change over time as more and more learning happens. We're all talking about lots of unknowns and lots of very interesting directions one can take, which makes it all exciting. And I think my thinking, at least, has changed over the course of looking at this. When I started out, I kind of thought, well, an individual neuron is the smallest functional unit, theoretically, right? So that's probably what's going on, and then they'll connect up. And then I realized, well, yes, a single neuron is a functional unit, but so are two neurons, and three neurons, and the collective, with exponential growth as you go out to a whole population. And at the end of the day, when we looked at the levels we were looking at, for example criticality, in our preprint we broke it down into areas, sensory versus motor, and we found that they all behave more or less the same. So does that mean that on the dish we have one functional unit, or 800,000, or everything in between? And that extends to the organoid results that we're not quite ready to share yet. 
But I think what we can start to tentatively conclude from this work is that the answer is indeed everything from one unit up to the combination of the whole. And I think that's what makes biological systems interesting, and this is why I emphasized it in my talk: look at the amount of connectivity, chaotic connectivity to some extent, occurring within these systems. That gives you something we can't yet model with neuromorphic hardware or with algorithms, at least not with the same traits, I think I can say without being too controversial. And I think that makes these systems interesting as well. Awesome, just one remark on that before a closing discussion. The Hodgkin-Huxley models were built on large, single, isolated neurons. And a theme I heard you state multiple times was the right tool, or the right model, in the right context. So when studying sodium and potassium flows across the membrane of a single isolated neuron, that is absolutely an appropriate model, and something that provided immense insights. And then, as you pointed out, especially if you're interested in properties of systems and connected neurons, it's a "yes, and": there are sodium flows, but you can imagine contexts where learning happens and the informational pattern or the entropy of a given neuron changes; or it might not change, but the pairwise relationship changes; or the pairwise relationship might not change, but some higher-order relationship does. So that doesn't replace mechanistic or smaller-subunit-based models; rather, we can broaden the discussion to thinking even about the environment, of the dish, or of the peripersonal space that these cognitive systems are embedded in, and talk about distributed cognitive functions without privileging the ecological or the atomic levels of explanation. Yeah, absolutely. 
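[Editorial aside: for reference, the classic Hodgkin-Huxley membrane equation for the single isolated neuron mentioned here, in its standard textbook form, is:]

```latex
% Membrane potential V driven by sodium, potassium and leak currents,
% plus an external injected current I_ext:
C_m \frac{dV}{dt} = -\,\bar{g}_{\mathrm{Na}}\, m^{3} h\,(V - E_{\mathrm{Na}})
                    \; - \; \bar{g}_{\mathrm{K}}\, n^{4}\,(V - E_{\mathrm{K}})
                    \; - \; \bar{g}_{L}\,(V - E_{L})
                    \; + \; I_{\mathrm{ext}}

% Each gating variable x relaxes with voltage-dependent rates:
\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\,x, \qquad x \in \{m, h, n\}
```

[This is exactly the single-compartment, single-neuron level of description being contrasted with pairwise and population-level informational measures.]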
Thanks for clarifying that, in case I sort of implied otherwise, yeah. Awesome. No, just to highlight: despite the ring on the cover of the active inference textbook, there isn't a single model to rule them all. And I think it's really by getting into the particulars of experiments, by speaking with researchers, and by talking about this kind of cutting-edge research that we see how methodological and explanatory pluralism actually plays out, and that's just very exciting. So I guess just a closing question, and then we can have any other remarks. Michael asks in the chat: what should someone study or do to get more involved with this research? I'm an undergraduate student interested in pursuing this field for my career and I'd love to work with Cortical Labs. So I think it really depends on the area you want to go down. We're always looking for good people to work with, and as I said before, this is going to be a whole industry and a whole field of research, we hope and believe anyway. So I guess you should pick what interests you and what you're passionate about. There's so much going on, right? You could study electrical engineering, you could study physics, software engineering, neuroscience, stem cell biology, anything in between. These are multidisciplinary, highly collaborative outputs. I showed the team before: we have people from all of those fields and others, and it's only the combination of these areas that gives rise to the chance to actually make something that's a gestalt, right? It's greater than the sum of its parts. And honestly, I think that's one of the reasons why we've had some success coming from a startup, industry-focused setting: there's not that many of us and we don't have that much money. 
But what we do have is an amazing team of people to work with, and amazing collaborators, like Adeel, and we can bring it all together in one spot. That's hard to do, by its nature, in academia. By its nature, academia tends to silo. There are exceptions to that rule, of course; very highly collaborative works do come out of academia, but, having worked on the academic side, those collaborations are always at a distance. They're not all under one roof with one goal and one purpose. So that's the thing, in the end. Oh no, just, yeah, please, any general comments, feel free to respond to that, and also any general thoughts you want to add. I was going to say, I cannot recommend Cortical Labs enough. They are a highly distributed team with expertise from cell biology to software engineering, to real-time interfacing, to computational models and machine learning. So whatever your expertise and whatever you want to do, there would be something there, and really cutting-edge. And I think this whole project shows the whole breadth of things we can do, from neurons to sentient behavior. And I'm patting my own back for saying "sentient" today, which hadn't been said yet. I should say Brett is also leading a project on that side, which we didn't get a chance to talk about, on the nomenclature and everything that comes with the semantics of sentience and consciousness and agency, and the whole ethics side of things. So there are big, big opportunities, open questions and challenges, and I think you... Actually, Adeel, you make a good point. I should use this opportunity, if I may, Daniel. Yeah, we are engaging in a paper, so if you are interested in the words that we use, whether you think we use them correctly or incorrectly, we would invite anyone. 
We've made these offers before, but it's a great place to say it again: reach out. We are setting up, and we will have a formal invitation somewhat soon. I can't say exactly when because there's so much going on at the moment, but fairly soon we'll have an invitation. Come join us, let's work together and figure out what language we want to use as a community, because then we can move towards actually building some stuff instead of being caught up with what a given word means. And that's going to be necessary, so yeah, please reach out to us. Yeah, I'll just note one project at the Institute, which is the Active Inference Ontology, where we've been developing definitions, and also translations across languages, of core and supplemental terms. We use that to annotate papers, and we make natural-language descriptions of mathematical formalizations. And we totally agree that ontology-based systems engineering for active inference cyber-physical systems is how we're going to treat this with respect and also bring in the discussions around ethics, philosophy and so on. So this isn't a linguistics debate that excludes other fields; this is laying down the dance floor so that we can actually have a discussion that's inclusive and meaningful going forward, which is what's so important. Yeah, that's awesome. I've got to look that up; I didn't know you had one of those. I'm going to look it up later. I'll send it to you in an email. Well, thank you so much. What an awesome line of research, and a December surprise, or treat, for us. Super exciting, and please always feel welcome to reach out and join again any time. Awesome. Thank you so much, Daniel, for having us, and everyone watching. All right. Bye. Thank you. Thanks. Bye. Bye.