Hello and welcome everyone, this is ActInf Livestream #36.2, it's January 26, 2022. Welcome to Active Inference Lab. We are a participatory online lab that is communicating, learning, and practicing applied active inference. You can find us at the links here on this page. This is a recorded and archived livestream, so please provide us with feedback so we can improve our work. All backgrounds and perspectives are welcome, and we'll be following good video etiquette for livestreams. Check out activeinference.org if you want to learn more about what Active Inference Lab is up to, or if you want to participate in any of the activities. Livestreams sit in the communications organizational unit, but a ton of other stuff is happening too. Today in 36.2, our third discussion on 36, we're going to continue to learn and discuss this paper by Sims and Pezzulo 2021, "Modeling Ourselves: What the Free Energy Principle Reveals About Our Implicit Notions of Representation." We're going to have a nice jumping-off point, as we do, with some questions written down and prepared. Other things will spontaneously arise, and if anyone watching live wants to write a question in the live chat, we can address that too. We'll start with an introduction. We'll go around and introduce ourselves; people can say anything they want, and they can also add what got them excited about the paper in general or about this .2 discussion specifically. So I'm Daniel, I'm a researcher in California, and I'm excited to see how this representation discussion influences our own self-representation, at the individual and the lab scale. Moving forward, I'll pass it to Blue. Hi, I'm Blue, I'm an independent research consultant in New Mexico, and I am excited by this paper because in our previous discussions we were, as usual, left with more questions than answers.
And also, I find myself swayed by the different arguments, the representational, structural, content-driven, and non-representational ones. I just find them very interesting, and I want to know, at the end of this, whether I'll be swayed more or less in certain directions, or maybe just be more open to other interpretations of the FEP. And I will pass it to Danielle. Hello, I'm Danielle, I'm a cognitive scientist at Google with a background in language evolution and language development. I'm really interested in thinking about how humans evolved the ability to model other minds, so I'm interested in how this discussion can contribute to that, and generally interested in how really good frameworks like the FEP can encompass other frameworks and revise them. Awesome. Dean? Hi, I'm Dean. I'm up here in Calgary, in Canada, and with a little bit of a recency effect: I started watching the last season of Ozark, so I'm trying to plug this paper in through "What would Marty Byrde do?" That's my little twist on modeling. Back to you, Daniel. Who is Marty Byrde, and what would he do? Right. Exactly. Great question. Well, that's all we prepared for today, so we'll go into the discussion. But we do have some things written down. Is there anything off the bat that anybody wants to ask or jump into, like a figure or a quotation to begin with? Or we can look at some of the things we had written down. Okay. Let's start with what we've written down, and then, of course, at any point we can branch off. Okay. I think a big theme that we'll probably return to again and again is this idea of moving back and forth between non-representational forms and representational forms. In the paper, they introduce four different facets of representation, and then show how each of those facets can be approached in a representational or non-representational way.
And although the image here shows some solid black lines, the discussion was: is there movement across these areas, or blurriness? So maybe, Dean? What is there to say or start to explore about representations emerging, almost precipitating or crystallizing out of something non-representational, and then the reverse process, something that is more representational submerging back into something that's less? Well, one of the things the authors did talk quite a bit about in terms of the functional aspect of this was the vicarious nature. And they also talked quite a bit about whether somebody who's moved into that radical enactivism space still has to be able to differentiate and include some of the temporal aspects of what might pop up and become something that we can stabilize in model or representation form. But then it has to also be able to disappear, right? It also has to not necessarily consume everything in terms of our attentional field. So I'm not really sure exactly how that works itself through, but I would think it's difficult to say that, even if we're in a flow state, there isn't some moment when we do reflect in that mirror. So that's kind of where I'm at. I don't have any answers per se, but I don't want to get stuck in the idea that it's just an on-off switch. I think there's a bit of a moment of dimming and then re-lighting. So that makes me think about certain paths that our thought takes over these eight cells. Like when we're on the representational side, when we're dealing with the representation, how do we take that tetrahedron and look at the four different sides? And how do we stay within a column, looking at just the organizational aspects of a system, and then move north and south on that? Do these kinds of knowledge or cognitive transitions happen all the time? Do they have other names that might be more familiar than some of this philosophical language? Like, what is it?
Are there any times in our days where we're dealing with something representationally and we shift to a non-representational version, within one of these categories or across categories? So I was just thinking of that, actually: what does an organizational representation look like? Versus, say, a map, a structural representation, a content-related representation, or a functional representation. What do these representations look like, and how are they different? I was just thinking about that. Let's go to the definition of the organizational representation, just to remember: the fact that the FEP requires internal states, states that encode the recognition model and that are statistically separated from external reality. So the organizational facet is about how variables inside a system are separated from variables outside the system. The pro-representational take is that there's an evidentiary boundary, the Markov blanket, and there's something on the inside that is doing something like a representation of what's outside. So that's the pro-representational take. And the non-representational take is where we get into the ecological and the enactivist persuasion. That's where the authors say that some have approached this Markov blanket formalism, even though we're talking about it in the organizational, not the structural, facet, in a manner that suggests a non-representational view. They argue that just partitioning two coupled systems, like agent and environment, with a Markov blanket doesn't imply that the behavior of the agent is explained by anything like an internal model that captures the structure of the outside world. So you just used organizational and structural in this big hobnob, but my question was really: what does an organizational representation look like?
And so I'm hearing your answer, that in organizational representation there are clearly defined boundaries, or partitions, between internal and external states. So when I have a representation in my mind that's organizational, is it only your own boundary, or are the boundaries of each individual agent organizationally represented in your mind, or is it just the boundary between you and the outside world? And how might that differ from structural and content-related? What do these different representations look like? That's what I'm trying to see: would they be maps, and how would the maps be different? The structural does come up several times in the organizational definition. So we see structure coming up, the structure of the system in how it's organized, but what is the structural facet specifically? Okay: having representational vehicles that are structurally similar to the states of affairs in the world that they stand for. The pro-representational structural side is kind of like the good regulator theorem of cybernetics. So the organizational side is highlighting that there is some sort of informational encapsulation, we might even say, like in our previous discussions, some organization of variables such that there's an inside and an outside that are separated. And the structural side is saying: if there are three things that are connected in the outside world, then not just that there are variables that are organizationally separated in the internal states, but that the structure of those isolated variables is going to do something like recapitulate, or have a structural resemblance to, the generative process. That's like the niche, that's what's outside. The generative model, that's what the agent has on board.
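The organizational/structural distinction being worked out here can be sketched in a few lines of toy code. This is purely illustrative, not from the paper: the variable indexing, the adjacency-matrix encoding, and both function names are invented for the sketch. The organizational check only asks whether internal and external variables are statistically separated (they interact only via blanket states); the structural check additionally asks whether the internal model's connectivity mirrors the connectivity of the generative process.

```python
import numpy as np

def organizationally_separated(coupling, internal, external):
    """Organizational facet (sketch): no direct edges between internal
    and external variables -- they may only interact via the blanket."""
    return all(coupling[i, e] == 0 and coupling[e, i] == 0
               for i in internal for e in external)

def structurally_similar(model_graph, process_graph):
    """Structural facet (crude sketch): does the internal model's
    connectivity recapitulate the generative process's connectivity?"""
    return np.array_equal(model_graph, process_graph)

# Toy system: variables 0,1 internal; 2 blanket; 3,4 external.
coupling = np.zeros((5, 5), dtype=int)
coupling[0, 2] = coupling[2, 0] = 1   # internal <-> blanket
coupling[3, 2] = coupling[2, 3] = 1   # external <-> blanket
coupling[3, 4] = coupling[4, 3] = 1   # external <-> external

internal, external = [0, 1], [3, 4]
print(organizationally_separated(coupling, internal, external))  # True

# The process couples external states 3 and 4; a structurally
# representational agent's internal model mirrors that edge.
process_graph = np.array([[0, 1], [1, 0]])
model_graph   = np.array([[0, 1], [1, 0]])
print(structurally_similar(model_graph, process_graph))          # True
```

The point of the sketch is that the first check can pass while the second fails: separation of generative model from generative process does not by itself imply resemblance between them.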
And so if it's to be useful for control purposes, then it may have to have some sort of structural resemblance to the actual connectivity of the generative process. So the generative model and the generative process being separated is what's captured on the organizational side; them having similar structures is what the structural side is about. The structural non-representationalists are suggesting, and I think this is A Tale of Two Densities by Ramstead et al., or Modeling Ourselves, the authors suggest that generative models do not meet the requirements of structural representations, because what would serve as the exploitable structural representation, the posterior beliefs in the agent, is enacted. And so it doesn't necessarily have to recapitulate the form of the generative process; maybe one could be skilled in the performance of driving without having variables connected in the way that a car's pieces are connected. It comes back to a theme that active inference presents to us, perhaps more as a question than an answer, which is: how much do we have to know in order to act? Do we have to have an internal model that is very similar to the process, or uncorrelated with it, or even anti-correlated? And can we have a framework, as researchers, that helps us compare the structure and organization of generative processes and generative models of those processes? Yes, go for it, Dean. So I'll put this out there, and then people can push back or say it doesn't make any sense. From what I read in the paper, because I had to go back over and look at it again, if I were to try to draw a simple comparison between organization and structure: organization you could see as being either in or out. You could be inside a cell or outside of a cell; in our minds we will draw those partitions and then we'll decide what's in or out.
Structural, I think, takes on more of the entailing question, something that Sir Wal brought up in the chat, which is: once you've decided whether something's in or not, is there something else that can now be seen as superordinate to that, from the perspective of what's wrapped around the thing that's in, or what can sit upon, theoretically, the thing that you've now decided is foundational to whatever you're looking at, and what can be set aside as not being important to what you're looking at? So again, pull that apart and say that it's wrong, but that's what I tried to read into what the writers were saying, so it's just my interpretation. Thanks. Blue? So I just wondered, in this organizational representation, if you draw a distinction between, I mean, clearly there's one between what's internal and external to yourself, but are you also trying to draw distinctions between what's internal and external to your computer, or your refrigerator, or your best friend, or your child? Are you making a representation of everything that's internal and external to every other thing, or is it only with respect to your own self versus environment? I would think it's when you choose to do that. I don't know that it's implied that you do it all the time, but if you choose to do that, I think that's something that's pretty easy to tell the difference around. So again, I'm not sure they were saying that we're always constantly doing this, but I think what they were trying to do is show the difference between knowing when something's in or out versus knowing when something's first or second from a structural standpoint. I don't want to put words in the authors' mouths, but I did hear that in the words they were using to try to parse those two things and give each their due, as opposed to saying they're just the same thing overlapping. Let's try one little thought loop.
So let's start with Blue's question of what an organizational representation looks like. We'll start in that very top left cell, then shift the focus towards the structural components, then challenge the structural representation but stay within the structural domain, and then see if that can result in us making it back to a non-representational organizational framing. So what's something that is a classic, vanilla organizational representation? Well, something that is well separated in terms of variables inside and outside the system. We're not worried about the structure of the variables inside or outside, just that they are separated. So computers seem like pretty clear cases, where within a program you could have variables that by design are separated through an intermediate variable, or in hardware, two computer systems could be totally separated except for an interface like a USB port or something. But of course this paper is about that strange loop when it's a cognizing agent, an adaptive active inference agent, doing the representing, not necessarily a mere active agent. So should we use a computational example or some sort of enactive human example? You've got a good example right here on the page. So how would we shift across the axis, from the organizational to the structural? Because I'm assuming we're going clockwise, to representational structural. That's not a hard example to make. Yes. So in the human case, this would be like two humans in a conversation. Mm-hmm. In a conversation. Yes. Not outside of a conversation. Perfect. Outside of the conversation, yes. When two humans are in conversation, there is separation in terms of their sensorimotor systems. So we have checked the box for organizational, representational, starting in the top left cell. All right. Now the question would be: is there structural representation?
So that would be like, maybe if one person thought of a sentence and then said it, and then the other person thought of that sentence too, would that qualify as a structural representation, because there's a structural resemblance in the models of the two conversants? Yeah. We could also find that if both of us were talking at the same time and suddenly stopped turn-taking, the structure of that model would now fundamentally change. It's still representational, and two people arguing over top of one another isn't necessarily something comprehensible to a third party, never mind to the two people inside the conversation, right? Blue? So when I think about conversation, something comes to mind from recently. You really need to have the order of the language specified. My son is reading now, he's five, and he's taken the book Hop on Pop, the Dr. Seuss book, and he'll turn it upside down and he'll say, "dod no dough," and we'll read it that way. And then he wants to read every sentence backwards, from the end to the beginning, which, I don't care, as long as he's reading, because he's getting practice reading. But the context is completely different, right, when you're reading each word in the opposite order? So when you're talking about a structural representation of a conversation, I think there needs to be a literal representation, or at least a coarse-grained representation, of the order in which things are said, in order to glean some kind of meaning. But then are we bleeding into the content-related? That's why I really feel like, what do these different maps look like? Trying to elucidate what the representations are is valuable, because when you have organization, okay, inside, outside, like, this is the sentence, or he said this and she said that, or whatever.
And then you try to put structure on it, and that gives it content or context that they can then, you know, and so on and so forth. Dean? No, I don't have much to add to that, other than this loop that we've just selected. We said there was going to be a certain amount of randomness in the last livestream, and we've chosen this now, so let's walk through it. I don't see it being difficult to follow that path and find examples. So we have a superordinate right now; structurally, we want to carry out this directionality from the representational side of things to the non-representational side. And Blue, if your kid wants to turn the book upside down, that's actually a feature, right? Because they're not already stuck in this only-one-way-of-doing-it thing. So yeah. Let's complete the red loop, and then the green and the orange will come into play. Okay. So we started with two people in conversation. That's organizational separation of cognition. This brings us to the structural facet, which is where there's a structural resemblance in the representations of the conversants. So maybe it is the case that they're both thinking about a similar topology of variables. So there's some structural representation. But now let's stay within the structural column and go from B to C. Okay, so the two people are in a conversation. It's kind of like a dance. So what if those two people's representations are so different that they're actually not structurally resembling? Maybe one person is more experienced in the discipline and the other is less experienced. Like, one person's super good at fixing motors; the other person just has no idea. So the representation, let's say, would be like, well, they're both thinking about the motor.
But then when we start to see a cognitive asymmetry, all of a sudden we fall out of structural representationalism between these two systems, because their cognitive models don't necessarily have similar structure. Yes, Danielle. Oh wait, unmute, then go for it. So I might be missing something here, but I thought that we can only start to talk about the structural dimension here once, in this example, the two people in the conversation are representing the same thing. And then the question is, how much, or in what way, does each of their representations have to look similar to one another? But part of it has to be that they are representing the same thing. Is that a necessary component? If they're not representing the same thing, then we're having a different conversation. Well, that may come to the functional, or that might come to the content-related, like the novice and the expert. The representations are both about the engine. But they're so structurally different in their representation of the engine that, although the content, the aboutness, of the representation is similar, so it still qualifies as a representation in that sense, there's a structural non-resemblance such that, by that criterion, in that situation we've fallen out of the representational cell B into something non-representational, potentially. Blue, what do you think? So I think two people in a conversation can be trying to represent the same thing. Like, yes, they both need to be trying to have the same representation; they're trying to converge on a conversation, a meaningful conversation, presumably. But they don't always converge, and that's where you get misunderstandings. Or, you know, we both think that we think the same; we both agree that we have the same structural representation, but perhaps they're different, and then the communication breaks down. Okay.
So the structures are so different that we're recognizing it as a continuum, from total overlap of structural resemblance to total non-overlap. But it would be important to specify that null hypothesis. So we've kind of scooted into C. How do we stay in that space, of the novice and the expert who have different cognitive representations, and then use that to challenge even organizational representationalism? Can I... Blue then? Oh, Dean first and then Blue. Go ahead, Dean. All right. I just want to go back to Daniel's point. If we're structuring a conversation, normally we turn-take. But if we're playing in a band, we're all playing at once, and then we stop playing. So I think we can start with structure too. It's just deciding how we want to signal, right? So I don't think there's necessarily an order where we have to start with organization first; we can actually start with structure first as well. Yeah. And I think a conversation is a particularly challenging example because we're representing two things: what the other person is trying to say, and then presumably what the conversation is about, which is something else in the world that we could be representing. Exactly. So I think we're representing a degree of information sharing. And I think this is going to lump into autopoiesis, which I've been really wanting to get into. But when we have, like, I'm my own person and I'm communicating with Daniel from, you know, 50,000 miles away, we both feel like we're getting on the same page. When we actually do get on the same page, do we form some kind of separate cognitive unit? At what degree of model overlap is the degree of information sharing so high that we are then worthy of our own Markov blanket, where we're sure that we understand that we are the same unit? Right.
And I think this idea of self-assembly and information sharing and model overlap, I think there's an important path to traverse down this way. Because if you don't have full overlap of the model, or do you need full overlap, to form a self-assembling system, to form a higher level? It's just an interesting thought. That's kind of where the C-to-D transition takes us, which would be like: the drummer is waiting for this and has a model resembling this, and then the singer has a totally different structure. And then when we think about the whole cognitive system, it doesn't deny what we recognized in A, that there still is the sensorimotor separation. But we've reached that point by those people, or roles, being part of a larger, non-separable, extended cognitive system. So almost by recognizing the interaction. Here we focused on the separation in A, and that's what granted us the representational cell. But we've returned to seeing that partial informational encapsulation within some type of broader structure, where none of the entities have the band-level representation. So hopefully other people can think of other A-B-C-D examples. But when we were at B, we kind of started branching off into talking about the content-related things, the aboutness. So here it's like: the two people do have a structural resemblance in their internal models, and it's about the same thing. But then what if one person sees the aboutness in such a different way that, again, we fall out of the content-related representation? And then how would we move from there back to C, where all of a sudden the cognitive models, in a sense, don't have structural resemblance, because they're about different things now? Danielle.
So would this be like the case of two things that are so analogous that they share a lot of structural similarities, in the vein of Dedre Gentner's work, where you can actually draw lines between the individual aspects of the concepts, but they could be about totally different things? Two people talking about these things can form some sort of structural similarity in their representations, and it's just an entry point into being able to think about all the other things. But the aboutness really has nothing to do with it. Dean, you might want to weigh in on that. Yeah, I don't know if this answers that, Danielle, but from my space, I'm thinking of the difference between, say, a distracted driver who's looking down at their phone between the seats, and somebody who's got that information in a heads-up display on their windshield. So the same content twice, but structurally presented differently: one as a distracted state and one as a contiguous state. I don't know if that answers your point, but I think we know the difference in terms of where the content is structured and how we're trying to contextualize it. So again, staying with this sort of educational setting: what is it important to see a transfer, or a new emergence, of? Is it that the structural model of the learner is moving into more resemblance with the teacher's? Is it that the aboutness of the learner is moving towards the same aboutness as the teacher's? Or, let's just remind ourselves what the functional facet is: supporting vicarious use before, or in the absence of, external events. So will the teaching conversation be a functional representation when, vicariously, without the teacher, the student can carry out what the teacher could do? Which might be having aboutness or not; it might be with a structural cognitive resemblance or not; it might involve total informational isolation, putting it squarely in A, or it might involve some type of challenging of that, in D.
The reason why there's not going to be a precise answer, I think, is that, as the paper lays out, even for the same scenario people do have different perspectives. So it's not like we're trying to take a scenario and then classify it into, well, that one's an A, B, E, G, or an A, C, E, H; that is probably not one of the outcomes. But these are all perspectives that we could take on a given scenario, and they do reveal interesting things about the systems of study. Anything else to add on this sort of looping around the cells, or should we push on? Push. Okay, Blue, what were you thinking here? I was just taking notes. So maybe we should talk about content-related and functional. I was taking notes on what these are, and I was trying to draw a picture. So I will draw my picture if you will talk about, maybe, functional representation: what does a functional representation look like? Or content-related, what's inside of those? And then I will draw my picture and you can show it. Or I'll just draw while you're talking. Sure. So the content-related facet is whether the generative models need to explicitly model the ways external states produce sensations (environmental models), or the ways actions produce sensations (sensorimotor models). Is it really important that internal states resemble external states, or is it sufficient that they afford accurate action control? So let's just say there's a spotlight shining photons onto the eye. The content of the cognitive representation would be more in the representational camp if the cognitive model were truly, explicitly modeling the ways in which the photons hit the retina, maybe. But this "or" is a bit challenging, because the ways actions produce sensations, that could be a model of, well, if I blink, it becomes darker, and when I open my eyes, it's lighter. But that's a totally disjoint question from understanding how a spotlight works.
But those two scenarios, the explicit cognitive modeling of the spotlight (the environmental model) and the sensorimotor model of blinking and it becoming darker, those are both content-related. And, Dean? Well, I don't know if this helps or not, but one of the things that I used to bring up around this idea was the puzzle concept. So you've got a box and it says a thousand-piece puzzle. When you open up the box, is that representational, or is it not representational until you've figured out whether all thousand pieces have been duly assembled? So from a content perspective, do you have what you need in order to tell the difference between something which is representative and that which is not? Again, I don't know if that's helpful in terms of trying to see a difference between that and, say, structure, like where it's placed relative to the context or not, but that was the way I tried to surface it. One evolution, or handshake, between the environmental models and the sensorimotor models, which hinges on that "or," is something they do unpack a little bit. One approach consists in starting from the sensorimotor models, so that would be: when I blink, it becomes darker; when my eyes are open, it's lighter; but progressively extending them to incorporate extra variables that describe external causes of sensation. So if there's a light on one side but not on the other side, then it can be the case that the blinking story is true no matter which way you're turned, but that could be enriched, or augmented, with a model whose parameters describe external causes of sensation, like there's a light over there and not over there. And actually, that little loop we just took was from the sensorimotor model in B, moving to a representational model of the content, like there's a light over there. At least that was the B-to-E move, but then maybe that could do something else.
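The enrichment move described above, starting from a bare sensorimotor contingency and progressively adding variables for external causes, can be sketched in toy code. Everything here is illustrative and invented for the sketch (the function names, the "left/right" light, the "dim" outcome); it is not a process theory from the paper, just the shape of the move.

```python
def sensorimotor_prediction(action):
    """Bare sensorimotor contingency: action -> predicted sensation.
    No external causes are modeled at all."""
    return "dark" if action == "blink" else "light"

def enriched_prediction(action, light_side, facing):
    """The same contingency, augmented with variables describing an
    external cause of sensation: a light that is only on one side,
    and which way the agent is facing."""
    if action == "blink":
        return "dark"                       # blinking darkens regardless
    return "light" if facing == light_side else "dim"

# The sensorimotor story holds no matter how the agent is turned...
print(sensorimotor_prediction("blink"))              # dark
# ...but the enriched model also captures where the light is.
print(enriched_prediction("open", "left", "left"))   # light
print(enriched_prediction("open", "left", "right"))  # dim
```

The design point is that the enriched model strictly extends the sensorimotor one: it makes all the same predictions about blinking, plus new predictions that depend on an external cause.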
So maybe Blue will clarify there, but let's go back and just review the functional: what is the functional facet? And it's good to revisit these one by one, because this is the contribution of the paper, and maybe there are other facets, other columns to add, and there's a lot of complexity even as it is. So the functional one is about supporting vicarious use before, or in the absence of, external events. In that story with the lights and the blinking, the model only has to be instantiated while in that room, with the photons hitting the eye and the blinking happening. However, we can imagine a situation that's vicariously detached, or before that setting, like: what would happen if there were a light over here and a light over here, and it was the case that blinking made it darker? And this is where they trace back to Piaget: that representations should vicariously stand for something external in its absence and afford vicarious operations. So there's a light over on the left side; you reach over and you unscrew it; what is in your hands? Now that seems to be in the functional facet, because we're talking about the role the representation is playing, and the fact that it is standing for something external in its absence: there's no physical light that we're talking about. It's not like we're unscrewing it and then verifying that it's in our hands. Who's for and against that? What functional roles do internal models play during free energy minimization, and does minimization require the internal manipulation of variables in ways that resemble vicarious operations in the classical Piagetian account of representation? Yes, Blue. So this might be, I don't know, maybe the easiest and also the hardest to get a grip on.
So I think about: if we have a representation of internal variables that are functional, like a functional internal representation, is this what it's doing? You manipulate the variables internally and then manipulate them externally? Like, you plan to reach for the cup to take a drink of water, and then you reach for the cup to take a drink of water. But you have to have the internal execution prior to, or maybe simultaneous with, the external execution. Is that functional representation? Like, you plan the action and do the action, maybe separate or maybe together? Is that what functional representation looks like? Let's go with the reaching for the cup, because as Danielle noted, improvised spontaneous conversation between reflexive entities is one of the hardest cases. But it's the one that we have, so that's awesome. But we're going to be talking about an adaptive active inference agent reaching for a mere active inference agent. So unlike in the other Sims paper we discussed, now we're talking about the case of unidirectional integration rather than multi-scale reciprocal integration. So the person is separated from the cup. There's a separation of the cognitive model of the agent from the cup. On the structural side, if the internal model is cup-like, or it features a cup variable touching a table variable, and then another edge that is engaged, of the hand touching the cup, for example, that might be a structural similarity. But I think that's part of the debate: how could there be a structural similarity when it's not a cup in your brain? Which we kind of talked about last time, like, how could you have something that structurally resembles a car if it's your brain and body? The content-related is encoding environmental contingencies or sensory motor contingencies.
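One way to make that "structural" facet concrete is to treat both the internal model and the world as sets of relations and ask how much they overlap. This is a purely illustrative sketch; the cup and table relations below are invented for this discussion, not taken from the paper.

```python
# Hypothetical sketch: 'structural similarity' as shared relations between
# an internal model graph and a world graph. All relation names are invented.
world = {("cup", "on", "table"), ("hand", "near", "cup")}
internal_model = {("cup", "on", "table"), ("hand", "grasping", "cup")}

def structural_overlap(model, world):
    """Fraction of the model's relations that mirror relations in the world."""
    return len(model & world) / len(model)

print(structural_overlap(internal_model, world))  # 0.5: one of two edges matches
```

Of course, the debate in the paper is precisely whether anything like this edge-matching picture is the right way to think about brains; the sketch only shows what a structural criterion would measure if it were.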
So it could be the case that a certain sensory motor action will result in picking up and grasping the cup, whereas another sensory motor action is going to result in the shattering of the cup. And that could be analyzed in terms of correctness. The functional representational question would be like the vicarious detachment of the sensory motor loop from the cognition. So then it would be like: if you were to pick up that cup and then you dumped it, what would happen? And then someone says, well, water would fall out. So in that case, there's a functional representation, whether or not it resembles the structure of the cup. There's a functional representation because that idea of grasping the cup is able to be operated on internally, vicariously, in the absence of direct stimuli related to the cup. Dean. Do you mind if I just read right from the paper for a second? Because I think it helps in terms of this difference between the automaticity and reflective aspects of this action. "One important implication falling out of this diagnosis is that, when considering functional role aspects, it is often how the details of our chosen process theories are fleshed out, and contextualized by the kinds of cognitive phenomena that we are attempting to account for, that skew our interpretation of the FEP in one direction or another. For example, one may consider that there are core aspects of the FEP, such as the possession of a Markov blanket, and more ancillary aspects," and I think ancillary is the key here, "such as the possibility, but not the necessity, to engage in counterfactual inference, which is only required for planning, and it is only the latter, more ancillary aspects that call for a representational interpretation under a given process theory."
"This would imply that, when using functional role as a sole criterion for representational processes, only some FEP agents, namely those that can engage in counterfactual forms of inference, would meet the criteria for representation. It is only this subset of FEP agents that would be equated to full-fledged predictive processing agents." So there is an automaticity part to this where it's really not about the Markov blanket; you just do it almost subconsciously, and again the authors are trying to point this out. Yes, if we fail, I drop the cup and now I reflect on that. That's the representational piece, but prior to that I was just going through the motions. I didn't need a set of instructions telling me to reach out for the cup. So that's just in the authors' words. Thanks a lot for clarifying that. It's like, yes, if one is reflexively grasping at it, or accidentally grasping the cup, if you're groping around in a dark room and then you happen to grasp the cup, it doesn't have to be representational, because there isn't a counterfactual. That's very interesting how they say that it is ancillary, so that's kind of like secondary or not essential, because we can imagine an active inference entity that's taking in sensory observations, updating its generative model, engaging in policy selection, and then resulting in some action output impacting the niche, and the cycle begins again. That can be just a single-layer model that doesn't engage in counterfactuals. But they raise the notion that it's only the counterfactuals in cognition that actually give us the space to have a functional representation. It's like, the shell just represents the shell, it can't be anything else, but then there's another kind of entity who can see the shell as financial, or can see it in a different way, and then it's those counterfactuals that enable the shell to play a functional representational role, in that it stands for something else.
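That distinction between a single-layer, reflexive loop and an agent that engages counterfactuals can be sketched in a few lines of code. Everything here, the light-and-facing world and both agents, is invented for illustration; it is not a model from the paper, just a toy to show where the "vicarious" step appears.

```python
# Toy sketch of the 'mere' vs 'full-fledged' distinction discussed above.
# The world, the agents, and the probabilities are all made up.

def sense(world_light, facing):
    """Brightness observation: 1 if the agent faces the light, else 0."""
    return 1 if world_light == facing else 0

def mere_agent(observation):
    """A 'mere' active inference entity: a fixed reflex, no counterfactuals.
    It never simulates outcomes; darkness simply triggers a turn."""
    return "turn" if observation == 0 else "stay"

def counterfactual_agent(p_light_left, facing):
    """A 'full-fledged' agent: internally simulates each candidate action
    against its model before acting, the vicarious use of a representation."""
    def expected_brightness(action):
        imagined = facing if action == "stay" else ("L" if facing == "R" else "R")
        return p_light_left if imagined == "L" else 1 - p_light_left
    return max(["stay", "turn"], key=expected_brightness)

obs = sense(world_light="L", facing="R")   # facing away from the light: dark
print(mere_agent(obs))                     # turn (pure reflex)
print(counterfactual_agent(0.9, "R"))      # turn (chosen by imagining outcomes)
```

Both agents end up turning, which is the point: from the outside the behavior can look identical, and the representational question is about the internal simulation step that only the second agent performs.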
Blue, what is this imagery on the bottom? So I was just trying to really look at, or visualize, what these different representations would really look like, right? So if we take the idea of the two, a person and a cup, right? Okay, so in the first one we have organizational: I am a thirsty body, I'm contained within me; there is the cup that contains water over there. Like, I know my boundary, I know the boundary of the cup, and the representation looks like this. And then a structural representation is just me and the cup and the relationship between me and the cup, right? So my real question here is: do these perhaps build on one another? So here you have the structural, and then the next one would be content-related. Do I also have to have this connecting line in the content-related? Is that required? Like, do I have to have an idea of the structure to understand the content? And then in the functional, do you need to know the content and the structure and the organizational to have an understanding of the function, right? Do I need to know the content-related, like, I know that there's water in the cup, right? I assume that there's water in the cup. To find out, I have to realize that there's an inside of me and an outside of me, and an inside of the cup and an outside of the cup. I have to realize the relationship between me and the cup, and then to find out the content I have to go sample the cup, like see what's inside of it, and I have to know what's inside of me. And then the function: like, I wonder if there's water in that cup, and then if I go to drink it and there's whiskey in it or something, I'm not going to understand the function until I have all of those things layered on top of each other. And so that's kind of really my question here, and Daniel's drawing on my drawings.
But really, I just wonder if there are levels of increasing complexity between these different kinds of representations, or if it's just merely the structural, the organizational, and I don't need these boundaries between self and cup in these other models; maybe it's just the line without the circles in the structural, or no external circles in the content-related. So I just want to know what you guys maybe think of that, if there is a layering here, because I was kind of layering them up, but I don't know if that's really the right mental representation. Thanks, Blue. Dean. I'm not muted, good. I think there's two parts to this. One is that we are able to differentiate, which is what the table allows for. And I think that the second part is the arrow, the curved arrow that Daniel drew, which essentially represents boundary crossing, number one. So the difference between something that's static and something that is translatable. And then the second part of it, I think, without going back down the rabbit hole that we almost fell into in the point-one of this, when we started talking about free will, and can and will, and the orthogonal piece of that, I think what it speaks to maybe is that it's not a question of free or encapsulated; it's a degree-of-dependency question, when you throw that curved line on this sort of tiled representation and start moving across a boundary. So I don't want to call it degrees of dependency all the way up to a point of passing through zero, but I'd like to, because I think there is an aspect of that. And that's why I think it's kind of crazy if we just focus on the static things instead of the moving around; the moving through four squares is what I really think gives our minds a good workout.
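Dean's "degree of dependency passing through zero" has a standard numerical counterpart in the Markov blanket picture. In a toy chain External -> Blanket -> Internal, the external state carries no additional information about the internal state once the blanket state is known: the conditional mutual information I(E; I | B) is exactly zero. This is only a sketch of that textbook fact; all the probabilities below are arbitrary made-up numbers, not anything from the paper.

```python
import itertools
import math

# Toy chain External -> Blanket -> Internal with made-up probabilities.
p_E = {0: 0.6, 1: 0.4}                                    # external state
p_B_given_E = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}  # blanket | external
p_I_given_B = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}  # internal | blanket

# Joint distribution p(e, b, i) under the chain factorization.
joint = {(e, b, i): p_E[e] * p_B_given_E[e][b] * p_I_given_B[b][i]
         for e, b, i in itertools.product([0, 1], repeat=3)}

def conditional_mutual_info(joint):
    """I(E; I | B) = sum over e,b,i of p(e,b,i) log[ p(e,i|b) / (p(e|b) p(i|b)) ]."""
    def marg(pred):
        return sum(p for k, p in joint.items() if pred(k))
    cmi = 0.0
    for (e, b, i), p in joint.items():
        pb = marg(lambda k: k[1] == b)
        p_ei_b = p / pb
        p_e_b = marg(lambda k: k[0] == e and k[1] == b) / pb
        p_i_b = marg(lambda k: k[2] == i and k[1] == b) / pb
        cmi += p * math.log(p_ei_b / (p_e_b * p_i_b))
    return cmi

print(abs(conditional_mutual_info(joint)) < 1e-9)  # True: E adds nothing given B
```

So the blanket both separates (zero residual dependency once you condition on it) and is the only channel through which dependency flows, which is one way to read the "separates and leaves the portal open" intuition that comes up later in the discussion.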
Yep, so here's another little workout. So Blue had two circles in relationship; here the circle is going to reflect an adaptive active inference agent, or the full-fledged predictive processing agent, one that can engage in counterfactuals. So the top is two adaptive entities in relationship, and the triangle is going to reflect a mere active inference entity, like a cup, or just something that doesn't engage in counterfactuals. So organizational is the separation of the systems; that's very similar across cases, because it's just describing there being a Markov blanket separating the systems. The structural representation is having vehicles that are structurally similar. So for the lowercase triangle out there in the world: is there a structural resemblance? That would be very representational, whereas if they were cognitively imagining something different than a triangle, that would have less resemblance. But that gets challenging with adaptive-adaptive, because we're modeling ourselves modeling each other, and so on. So here's them modeling the conversation, and that's where there are very complex dynamics, which is: how can we have a structural similarity about a conversation with another person? But it's simpler in the adaptive-mere case. The content-related representations encode environmental contingencies or sensory motor contingencies. So, like, in this case the triangle, it is the case that it can rotate clockwise, and so this entity does have a good content-related representation if it can actually turn the dial that way. And again, that's a little bit more complex in the reciprocal case, but in the unidirectional case this red arrow is a content-related representation of the actual motion in the world. And then this last case is the functional representation, and here it could be more structural, so more about the actual structure of the cognitive model, or it could be more about the encoding of the contingencies, but the key piece is this isolation,
which isn't a Markovian isolation; this is actually the true vicarious separation. So here the person, or the entity, is able to engage in either structural pondering of the triangle or functional pondering of the triangle, in the absence of seeing the triangle or grasping the triangle. And again, though, that's a little bit complex when you have multiple counterfactual cognizers asking each other questions. I just added a counterfactual to your... yeah, exactly. The red one is, um, then there's the blue one, it could go a different way, and this is how we know that it's a true cognizer and a true functional representation. Yes, great call, thanks a lot, Blue. Because it has to do with the vicarious nature, with not directly seeing it, but it also has to do with that representation, whether more structural (triangular) or more functional (red arrow), being able to be different. So there's them having a counterfactual action contingency, and here they're also imagining a counterfactual structural position. Okay, I'm going to go to a question from Stephen in the chat. Stephen asked: how do we resolve the use of the term functional as used in correlational dynamics? So that's referring to some of our earlier discussions on functional and effective connectivity in neuroimaging, the term functional in that use, versus the term functional in terms of having cognitive counterfactuals. So how is the functional here related to effective and functional connectivity in neuroimaging and time series statistics? I'll give a first thought. It's not that it can't be linked or connected, but they are different namespaces, and they're totally different. The functional connectivity in the time series is about how changes in one variable through time influence another variable through time, or are associated with changes in the other one through time. So it's not a mechanistic causal relationship per se, but it's just that changes in one variable have an edge reflecting how changes in some other variable
change. That does not engage either of the pieces that we're highlighting here, which are the vicarious or standing-for nature and the counterfactual nature. So it is a different use of the term functional. Functional is an overloaded word, and so, yeah, it's not that you couldn't have a paper that was looking at functional representation and using neuroimaging, using functional and effective connectivity of brain regions while individuals were engaged in functional representation. But that would a little bit be like "the star was the star of the movie": it's just using a word in multiple senses, in a way that may give some ambiguity to those who don't parse out the different senses extremely clearly. Is that fair to say, or does anyone have a different thought? Functional can also mean teleological, like the function of something, essentially. So there are many uses of function, and I think it's interesting that it has come up in these different spaces. Blue. So I do agree that they're disconnected, but I am able to perhaps see a relationship in functional connectivity and also in counterfactuals. Like, when things are functionally connected, one drives the other, right? And then the counterfactual is like, when things are not functionally connected, there's no corresponding correlation, right? So in the exploration of functional connectivity you have to have all of the data points over a time series, so you understand functional connectivity through the exploration of counterfactuals, if that makes sense. Okay, how about this: functional connectivity, the term, is a functional representation that cognitive scientists use, because it could be otherwise. They're engaging, and they're saying region A and region B have functional connectivity because I did time series statistics in SPM and we got this value. And so that functional connectivity value is able to be used in subsequent internal cognitive representations; it stands for
something, and it can be engaged with in a counterfactual way: like, what if it would have been stronger, then what would our conclusion have been? Or what if it would have been weaker, what would it have been? Um, and then Stephen wrote: these definitions and ways to name could be where the ontology working group could have a future role. Yes, ontology development in active inference, and how will we even make the decisions of what terms to use, and in what senses, and disambiguate the uses and the different corpora that we are looking at. So, pretty interesting. This graphic, actually, I hope any of the authors, or anyone who is interested in this topic: what do you think about these doodles? Do you agree, disagree? Let's go to Blue's question about autopoiesis. What is interesting there, about just really where it comes in? And I think we kind of brought it up earlier, in the sharing of a model. Like, is sharing of a generative model necessary for self-assembly? Or even the attempt to share a generative model, is that a necessary component of autopoiesis? And at what level: organizationally, structurally, content-related, functionally, like, at all the levels? I just wonder how these things are related. And when we were talking earlier about having a conversation and trying to make your models align with one another through information sharing, and like we saw with The Computational Boundary of a Self, about self-assembly and autopoiesis, via Mike Levin, just about how the informational sharing between subunits leads to the formation of a larger cognitive unit. And so I was just wondering about model relationships and forming a larger cognitive unit. Like, we're talking about the conversation, and then we form, we've complete overlap of, yes, we get it, we rock, we're there, and then we form a larger cognitive unit. And is that on all aspects? Is it just related to content, structure, organization, or function, or all of the above? Just what anybody
thinks about that. All right, and we saw this actually in Matt Sims's earlier paper on biological symbiosis, right? Yes, the autopoietic system is capable of producing and maintaining itself by creating its own parts. So it's kind of like the ship of Theseus, plus crew: they're able to be modifying and reconstructing the material basis of the system. On one hand, that maintains an organizational separation of, like, the cell and the surroundings, in terms of the realist interpretation of a Markov blanket. At the same time, the autopoietic process, well, it involves ingressing and outgressing material, so that does blur the boundary. And structurally, it doesn't need to be the case that the autopoietic process of the cell has, like, a blueprint or a structural resemblance of the cell. It could just be subunits that are non-representational; they cannot engage, it doesn't even make sense to ask whether the cognitive model has structural similarity with the target, because it's a non-cognitive entity. And similarly, on the functional side, the enzymes that synthesize the lipids and add them to the membrane can't engage in counterfactuals. So is information a representation? I don't know. I mean, we're getting into the quantum now, so, um, yeah, where's information in this? It comes up in the paper 17 or 18 times: based on local information, information comprises the generative model, past and present information, future information, information gain, no additional information about internal states (that's the Markov definition), high mutual information can be, um, in the context of a non-representational generalized synchrony, like the pendulums that synchronize. Where is information in this eightfold distinction? So, uh, the authors talked about, here, if I can read a quote quickly: yeah, if one only appeals to the organizational aspect of representation, the presence of environmental or sensory motor or complex or frugal models does not matter, insofar as the internal
variables of the model are understood to be separated from external reality by a Markov blanket, and the generative model is leveraged to infer the causal structure of external reality via self-evidencing. However, these differences matter if one considers structural aspects and the degree of resemblance between hidden variables and environmental dynamics, as opposed to action or information-gathering dynamics. So, I don't know. Here the content of representations is used to draw a further distinction within the representational view, between an internalist (sometimes called intellectualist or encodist) versus an action-oriented perspective. So probably one of the most thrilling academic debates. Dean. Well, please don't judge what I'm about to say next, just bear with me, be kind. So in this representation, for example, where we've got this table, if we were to look at those lines, each one of those horizontal lines (because there's three I see) and each one of those vertical lines (because there's four I see), and we realized them as a standard x-y graph, we know that there's a relationship between x and y: that every one of those horizontal lines relative to y has a zero component to it, and we know that every one of those vertical lines relative to x has a zero component to it. So most people will look at the blackness of that relative to the white background and they'll just see something material separating. But if we actually look at it graphically, and we take Daniel's idea of the tetrahedron, every time we round an edge and change direction on that tetrahedron, there is a zero element that allows for that boundary crossing, which we again don't pay as much attention to. And so from an autopoietic standpoint, you can actually see where something like a Markov blanket both separates and leaves the potential open for that transfer, or that crossing, or that transition. So when we're modeling ourselves, the last paragraph of section five of the paper, they wrote:
arriving at an FEP synthesis depends upon which representational criterion we are assuming, when either considering FEP central constructs or considering specific cognitive phenomena through the lens of a process theory under FEP. Do we include the zero piece of this or not, tentatively, fully, or do we willfully ignore it? Hence, in the end, the debate about FEP may reveal more about us, our criteria (whether we want this to be a material thing and not include the zero aspect of it) and our interest in particular facets of cognition, than it does about the representational status of FEP. So I just want to kind of bring that up; again, don't judge me, because I could be way wrong here, and I may be overfitting, but I think that with that autopoietic part of it we can see both the separation and the portal in a line, if we choose to include both. That's kind of what I wanted to bring up. Okay, very interesting, Dean, thanks. Here's the unfolded tetrahedron. There are a few shapes of paper that can be folded into a tetrahedron: one looks more like a parallelogram, and one looks more like an equilateral triangle that just kind of folds up like the petals of a flower. So these zero lines: there's the zero point of, within a facet, representational and non-representational, and that's almost like the two sides of the paper. Like, how many triangles are here? Oh, four. How about eight? How many triangle units of paint do you need to paint this? Four? No, eight: two sides. So the zero is like the thickness of the paper, and that's also, when you fold it up, that's like inside versus outside of the tet. And then these zero lines are like the movements from one face to another face, where it's also thin enough as to be imperceptible, especially when it's laid flat, but it makes all the difference to move from one side to another. How do we use moving through zero, in this eightfold or other cognitive models? How do we use that to
increase the efficacy or the fluidity or the accessibility of models? Does it help us in explaining that we're not locked out of that self-organization loop, that we can actually participate in it? Is that the first thing that it tells us, that we're not locked out of that ability to self-generate? That wasn't rhetorical; it was an honest, sincere question. I think so. One supportive or complementary component would be: this is sort of a map, this is like a cognitive map, and someone could say this side of the paper is better than the other side, and this is my favorite face. But it's presented as part of a larger knowledge structure, such that the learner could engage in counterfactuals, like, what if we were on the other side of the paper, or what if we were on a different face? So this allows, for example, individuals to reflect their understanding of the lay of the land, but also communicate their preferences. And as the paper shows, different individuals do have different outcomes in terms of their cognitive conclusions, and that's what reveals the super fascinating thing about us, and that's why the paper is Modeling Ourselves: What the Free Energy Principle Reveals About Our Implicit Notions of Representation, not Modeling Representation: What the Free Energy Principle Reveals About Representation, which would be a slightly different paper and title. So just the scholarship and the sense-making phase that only very few have to engage in, like Sims and Pezzulo have, is to make these maps and then bring the perspectives that were not integrated to be shown as just different locations in a phase space. Like, we put four zeros, so it's not as simple as just two numbers, but, you know, if it was just two positions set against each other, every perspective could have a position in that phase space, and then what do we do with that? Yes, Danielle. I guess I'm thinking about the ourselves part of modeling
ourselves. I'm not really thinking about all humans and the way that our minds work; it's more the academic conversations that have been happening, the sort of intellectual tradition around representation. And so the contribution of this paper, as I see it, is that, look, we've got this framework that does a lot of work for us in a lot of different disciplines, and if we apply it to this conversation, this intellectual debate that's been happening, we can reveal implicit assumptions of the debate itself. And what emerges is that, perhaps as an artifact of particular examples that have been used to understand representation, we see one side of the coin or another; we can pull on these different dimensions that are laid out in this paper. So I guess it's just important to think about, and disagree if you disagree, but the ourselves is really just about the way that philosophers and cognitive scientists have been thinking about this, and this is not revealing something about cognition per se; it's more about the implicit assumptions that have been kind of unfolding, and how we're thinking about the discipline. Thanks for that. Um, just like there was a sort of hinge on functional being used in two different ways, another sense of representation, one that might be more commonly used outside of cognitive science, would be representation in terms of different perspectives or identities in a group. And then that made me think of "nothing about us without us," which is used in different contexts but basically has to do with participatory decision making: that decisions are not being made in a way removed from the people whom they influence. And that has to do with representation; no taxation without representation, for example. And so cognitive science is actually in a "nothing about us without us" situation, because if there's just an academic discussion, especially an unexamined one with a lot of implicit baggage,
about a hyper-intellectualized, hyper-abstracted scenario, or like, we like to study representational organizational cases where it's super clean; these are the ones that we study, these are the model systems, these are the kinds of careers that have been built in academia, for example. But it's about us in the broader sense, like humans, and then even us as cognitive entities. And so it is the philosophers and the literature that is in this paper, as Danielle kind of highlighted, that's the modeling ourselves; the selves there are the participants in the discussion around representation, which of course has been textual, abstract, English-speaking, qualitative, other adjectives. So it's another interesting connection there. Blue. So definitely, the subjectivity that Danielle highlighted is a big part of perhaps why we have these organizationally, structurally different viewpoints. But maybe something that I don't think is really emphasized too much in this paper is the realist versus instrumentalist viewpoint of active inference. And so there's a pretty big argument in the literature, and I think probably "the math is not the territory" maybe gets into it the best, or in the most detail, of the papers that we've studied here on the stream. But, you know, there are people that say active inference is a way to model the cognitive process, or the action-perception process, that we all undergo, and then there are people who say that this is actually happening in a computational way in the brain: it's not a model of it, it is how it works. And so there's this ongoing feud happening in the active inference community also, and I just wanted to highlight that, because you may not know. Thanks a lot, Blue. I pulled out a few quotes, because I also noticed that, it was one of the earlier quotes, when we were talking about the, I think, organizational element. So look at how often reality comes into play in these discussions. So: internal variables of a
model, okay, so we're talking about instrumentalism, right? Like, models are understood to be separated from external reality by a Markov blanket (wait, reality), and the generative model is leveraged to infer the causal structure of external reality. So it flips four times in that sentence, from model to reality, back to model, back to reality. It doesn't make it right or wrong; it's just a very interesting epistemic artifact. Because within the reality of the model, like the reality of the map, it is the case that the internal variables are separated, and if the model is trying to do determination on that level of reality, like maps trying to infer maps, that is reality; but then that's not the sort of layer-one external reality in the sense that many people often mean or use it. Dean. Yeah, I think, Daniel, if you take the words reality and model and push them out into even more extreme tails, you can see people who are pushing back against the model idea, and they're arguing: you're just too rigid, right? You're just too stuck. And you can see people that are taking the non-representational sort of reality view, the invariance-maintained or -retained view; and you can see the people that are on the model end of the continuum saying, that's just chaos, there's nothing about that that we can actually make sense of, because the cognitive overload right now is just swamping me, right? And so I think what these authors were trying to do is say, well, can we move back and forth, instead of getting stuck on those really far-out positions, in terms of this way of being able to see: when we have these densities, right, then out of those densities reset back to some sort of invariance, and then re-form, right? So I think that's, and I hope that kind of speaks to what Daniel was saying too, because sure, we can point out the most extreme example at the very end of the
continuum, or we can actually talk about how we're going to draw these unidirectional or bidirectional lines and boundary-cross, and I think that's what they were trying to open our thinking up to. But maybe I'm just being Pollyanna here, because I really like the paper. Interesting point. Here's, um, realism is also used in another way, which is kind of like a school of literature, or a thread of literature, and so then I found this paper on affective realism, realism beyond representation. And so this is in the case of literary expression, so that's an activity that cognitive entities are engaged in, and we just see a lot of terms that we talk about in active inference, like agency, affect, process. And here it's not about reality, realism, territory; it's like realization. And realization is an interesting word, and I know Vervaeke and others use it as well, because realization, it is about the model, but when something is realized it becomes real in the model. But usually we wouldn't say that something is realized like, oh, I just realized that zebras have nine legs; it's like, you engage in a counterfactual, but is it a realization if the model is updated in a way that isn't concordant with reality? And I agree that the paper gives us a lot of nuance to talk about these areas, because we can talk about how, well, the isolation of the variables doesn't change, or the structure of the legs in the zebra doesn't change, or it could still run, right? So why not nine legs? Is it still a zebra? Like, there's a lot of open threads, but the way that they use model and reality, in light of a lot of the other discussions we've had, is really illustrative. Blue. So one of my favorite things to think about, in terms of the realism versus the instrumentalism, is the Markov blanket. And so, like, I'm just looking for the picture of the person sleeping under the Markov blanket. Like, really, as a partition for a system: do systems have Markov blankets, or can they be modeled with
markup blankets but like and I always just like think of that cute little picture with the little person sleeping under the markup blanket like do I have markup blanket around me because that's really where it tends to break down in my mind it's like yes there's a boundary um but I'm not sure that it's a frist and blanket or a pearl blanket or a markup blanket like maybe it's just skin um yes Dean uh I hate answering a question with another question but in the conclusion of the paper the authors speak specifically to the free energy principle can be very heuristic and so I'm wondering in the instrumentalist versus versus realist debate whether it has to necessarily get up to that abstract level of markup blanket or whether we can just say that sandwich between those two are rules that we demarcate that we we lay down and that we act have act as a parsing mechanism I don't know what do you think blue like our rules our rules things that we can just get our hands on right today instead of having to go all the way up to figuring out what a markup blanket is under this set of circumstances no I can definitely rules right so I mean we all agree that there are lots in physics like if you drop an apple it will fall to the floor like like we all abide by these rules they might not apply at every scale but um I definitely think that a rule-based construction or understanding like does that then bridge the realism versus instrumentalism debate perhaps so building on the point that the ourselves in the title is the people reading the paper so not just like people in the psych department but it's the ourselves the people who are in that epistemic commons so using that but also the contents related representations in terms of correctness or truth so if that if the ourselves are those who are pondering FEP then the heuristic that they refer to we conclude by highlighting the heuristic power of the FEP to advance our understanding of the notion of internal representation it is 
pretty meta, because we're representing the debate about people representing the debate about representation. However, the fact that it has heuristic utility within the content-related aspect means that, at the very least, it does have representational impact. I'm not sure where that takes us; there are probably other ways to take it. But I think something about the diversity of conclusions that people have, how that shines a light back on us or is a mirror, and how the FEP has been applied here in this paper as a heuristic for sensemaking in this meta-debate, says something, I hope. Danielle?

I hope I'm addressing what you're saying and not taking this in another direction, but stepping back, I'm thinking of the FEP as just one of many different tools of thought that allow us to align our representations. If it is an effective tool, we all start having a conversation that makes sense to each other, that's mutually intelligible. In this case it is kind of meta, because it's about how the academic, intellectual conversation about representations has failed to converge on something. There's this debate, there are many layers to the debate, and what it reveals is that the community hasn't yet converged. The FEP is this thing we can use that reveals those things, and because it's revealing them, we can all attend to things that were implicit and are now made explicit; we're converging our representations. So I think that addresses what you're saying: it's about representations, but it's also this tool. There are many other types of tools that allow us to realize what we've been taking for granted and align our representations. Thanks a lot. Dean?

Yeah, just to add on to that: if you're like me, you think rules and tools are interchangeable, and the other thing you think is that if it's a heuristic, it speaks to that vicarious function in terms of content. I just want to throw that on top, because I think there's some agreement around that. We use the rules to make that next vicarious play before we make the next move. Anybody that doesn't is kind of kidding themselves, I think, but maybe I'm wrong.

Okay, here's a little bit of the FEP in a heuristic case. We've talked about the Helmholtz decomposition of vector fields, like in stream number 32: there's this pragmatic, hill-climbing or gradient-descending component, whether you take the optimistic biologist climbing straight up the mountain of fitness or the physicist's gradient descent into the bottom of the well. In either case there's that irrotational component associated with pragmatism, and then there's the epistemic component associated with the solenoidal flow along isocontours. So here's one way in which the FEP is being heuristically applied. Danielle said the academic conversation on representation has not converged, and it has not converged; it is also not even coherent. There isn't a shared ontology, a shared narrative, shared words, or a shared regime of attention about what people are even focused on. What the FEP is doing, like the authors did here with this distinction, is a few things. It holds space for not overfitting. Let's say in 2055 all the cognitive scientists in the world agreed: "Great, we converged, let's lock that in." Even then, the FEP will remind us not to overfit, and also not to center one side of the tetrahedron. We could say this is the side that has these attributes, or we focus on this side for these reasons and no one goes to this other side, or there's a reason why we don't go there, or we travel through there but we don't live there. It brings a level of coherence to the discussion that doesn't necessarily have the
goal of perfect, static convergence; there will still be solenoidal flow and differences of opinion even then. Who said the FEP wasn't useful?

Well, and maybe this is obvious, but isn't that what any good theory does? Like Blue said, gravity is true because if I drop an apple it will fall. The theory of gravity does the same thing: it allows us to converge our representations. It's just that the theory of gravity is not itself about representation.

Yes, this paper and conversation are like a hall of mirrors. It's a group of people in a hall of mirrors, all wearing mirror suits, in a weird way. But it is our gym for building the kind of frameworks that can then potentially reduce our uncertainty a ton in similar cases. The strange loop of reflexive cognitive entities is going to be an open-ended bounty for a long time to come. We started here with a really complex, reciprocal case of two adaptive entities, but then it was a lot less ambiguous, and I wonder if ActInf-informed system design will frame an increasing number of pieces, especially in a digital world, into this bottom tier, offloading repetitive tasks, attention-consuming tasks, and anxiety-producing stimuli into those types of interactions, which opens up new spaces for these kinds of interactions between two adaptive entities.

To return to having or possessing a Markov blanket, one quick point. We already read the quote: "a core aspect of FEP, such as the possession of a Markov blanket." So is the Markov blanket the system, or does it have one? Those are both realist framings. Or is it modeled as having one, is it modeled as being one? Those are instrumentalist versions of similar framings. And how do we know what the core aspects are; what does that stem from? There's a ton to learn and discuss there.

Then, in just our last few minutes, let's look at any of the other topics, if anybody wants to highlight one of them, also maybe as we jump off from 36.2 to 37 and beyond, when we discuss "Free Energy: A User's Guide," but also as we just continue on our paths. Why does it matter? Not all of these papers make the argument for why this whole representation debate matters. What are the pragmatics that make this meaningful, that make it important to fund people working on it, or the ways it influences the technologies we're developing, rather than just a pure info-gain, reduction-of-uncertainty question? So, any topics here, or any thoughts on how this will, could, or should influence our action selection going forward?

I'm comfortable with long silences, but I'm going to throw this out there. I think people will be able to use this tool to become more creative: not just reduce or minimize their prediction errors, but actually have the confidence to come up with new and different ways of looking at problems, ways we probably didn't have in our toolkit before. The first thing it does is allow for a boundary crossing that maybe we constructed in our minds before, but we're seeing it less and less as a hard stop, and there's now maybe a little more say in whether to go or not go. That's what I think this kind of work is maybe opening up, unless somebody comes and shuts it down; I can't speak to what outside forces would find it terribly threatening, but I'm sure there are some out there. Blue?

So I'm not sure if this has any pragmatic value, but I'm certainly interested in really exploring sensorimotor engagement, structural representation, and maybe synesthesia. You know, when you're playing the invisible harp, could you structure a room, draw a map in a room, and your goal
is to, kind of like an escape room, navigate your way through based on sound, right? You ding the things and you have to find the right sound; you play "Mary Had a Little Lamb," and then you get to your prize at the end, or you get out of the room. I wonder creatively, like Dean was saying: really, these maps, these representations, can we get literally out of our own representation and into the feeling of an alternate representation? What would that feel like or be like? That's also why I brought up blindness. If you ever wake up in total darkness, my power went out the other night and it was black, I mean no moon, nothing, not even the little blue light from the plug, in these total-darkness situations you have this representation of what your house looks like: where's my flashlight? You have to navigate yourself in a totally dark situation, which is very foreign, to go find a flashlight. I like these kinds of boundary-pushing experiences, because they really force us to examine our own model, and perhaps allow us to explore the kinds of models that other people use.

Thinking through other minds is this case with adaptive agents in dialogue, and it's like thinking through other representations. This paper and discussion opens up, not to then simply converge, but opens up and holds ways to think and act through other types of representations, not just the same type but in a different cognitive entity. It comes up a lot, and it'd be awesome to see how, just as with so many other themes raised by very prescient papers, like number 14, "The Math Is Not the Territory," number 20, "The Emperor's New Markov Blankets," and some of the further nuancing about the Markovian assumptions, this representational and non-representational distinction, and these words like organization, structure, content, and function, will come up again. Maybe future papers will cite this one, maybe they won't, but when we see in a future paper just one facet of the tetrahedron and they say, "Here's how it is, it's this one side," we can say, "Well, no, there are other sides, and we could be inside or we could be outside." Someone could still say, "I prefer this one," but we know there's a bigger system that we can at least come to the table around. Dean?

I'm not marketing, but it's going to come up in 37; it's going to come up a bunch of times in 37. That's just the next paper, though. Anyway, hold on to your socks, because if you think you have to wait for it to come up again, it's coming back next week, so don't touch that dial.

Thanks a lot for the awesome sequence. This was a somewhat complex, intellectual paper. I mean, they all are, but this one had challenging topics and a lot of different perspectives. Not only did the authors hold it open for all of the different perspectives, but rather than pick a winner, they flipped the table in the end and posed it not as an answered competition, for us to then have domain-specific clarity of action within, but really as a challenge on multiple levels for researchers who are familiar with the FEP and those who are not. The big question we asked was: can the free energy principle help us advance, or even resolve, the long-lasting debate on internal representation in philosophy of mind? We certainly asked. So if anybody has any last comments they want to make on anything, they're welcome to the last word.

I'll get this in real quick. I think that if we're still arguing about whether or not representation and modeling are heavily dependent, that there is a dependency, then the more philosophical views we see come up around the FEP
and active inference, the harder and harder it is to make the case that we can remove that dependency, or somehow just see the scale-free aspects of this tool. There's actually a heavy dependency in what we're going to get out of representation or non-representation, based on our awareness, our realization, that every time we cross a boundary, that is not an independent act; it's a highly dependent act. So that'll be my last word.

I just want to give a final comment. We talk about instrumentalism and realism, and then there's a third axis, right, of utility. I really think this paper was super useful in prompting new questions, in holding up these different views of representation and non-representation in these different categories, and in allowing us to view them through a partitioned framework that I think will drive future questions and really be useful in helping researchers formulate their theories.

Danielle, if you're still here, thanks for joining for your first stream. Do you have any last comments? Well, this was a wonderful introduction. I'm curious how representative it is of other conversations; it was super heady in a really pleasant way. It was not representative, or representational, of other conversations, no. Thanks a lot for the awesome convo. Everyone's always welcome to join these discussions or other lab activities. Thanks again to the authors and all the participants here. See you.
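For reference, the Helmholtz decomposition discussed above (and in stream 32) can be sketched as follows. This is the standard textbook formulation; the identification of the two components with the "pragmatic" and "epistemic" roles follows the framing used in this stream, not the Sims and Pezzulo paper itself.

```latex
% Helmholtz decomposition: a sufficiently smooth vector field F splits into
% a curl-free (irrotational) part and a divergence-free (solenoidal) part.
\[
  \mathbf{F} \;=\; \underbrace{-\nabla \Phi}_{\text{irrotational}}
  \;+\; \underbrace{\nabla \times \mathbf{A}}_{\text{solenoidal}},
  \qquad
  \nabla \times (-\nabla \Phi) = \mathbf{0},
  \qquad
  \nabla \cdot (\nabla \times \mathbf{A}) = 0 .
\]
% In the framing above: the gradient term -grad(Phi) is the "pragmatic"
% component (descent into the well, or ascent of the fitness mountain),
% while the solenoidal term curl(A) circulates along the isocontours of
% Phi without changing its value: the "epistemic" flow mentioned in the
% discussion.
```

The key property for the discussion is that the solenoidal component is everywhere tangent to the isocontours of the potential, so it contributes motion, differences of opinion in the metaphor, without driving the system further down or up the gradient.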