Hello and welcome. It's March 29, 2023. We're here in Active Inference GuestStream #040.1 with Wanja Wiese. We're going to be hearing a presentation in the first section and then having a discussion. So thank you, Wanja, for joining again; really looking forward to your presentation.

Thank you very much. So this presentation is based on a preprint that can be found on PhilArchive. This is work in progress, and I will probably, hopefully, upload a revised version of this preprint in a few weeks. So if you're watching this, or if you're reading the preprint, and have any comments or questions, feedback is very much welcome. And I'm also, again, grateful for having the opportunity to present here, because I'm really looking forward to the discussion and any comments that people may have watching this video. So do contact me if you have questions or comments. Now I want to start this presentation with an idea from this wonderful novella by Ted Chiang, The Lifecycle of Software Objects. If you're not familiar with the novella, it doesn't matter. There are some digital entities, or "digients", as they're called in the novella, who at first live in a purely virtual environment. So these are virtual entities, like virtual pets, and human users can interact with them in a virtual environment. Then, in some passages of the novella, there are scenes in which some digients are, as it were, downloaded from this purely virtual environment and implemented in a physically embodied robot. So the program that is running within this computer simulation and is controlling the purely virtual body is downloaded into a physically embodied computer and controls a physical robot, which can interact with its physical, non-virtual environment in the same way in which the virtual digient interacts with the virtual environment. And I find this idea really fascinating and powerful.
So the novella suggests that these virtual entities are conscious and that they can switch back and forth, as it were, between this purely virtual environment and the physical, non-virtual environment. And the idea is that these entities remain conscious when they're in the robot, but also when they return to the virtual environment. To some people it may be unintuitive or counterintuitive that a purely virtual entity in a simulated environment can be conscious, but maybe some of these people will find it less counterintuitive to imagine a conscious robot. And so the idea is fascinating to me because it suggests that if you can switch back and forth from the virtual environment to the physical environment, and in the other direction, then it really doesn't matter where the software is implemented: the system remains conscious. I will come back to this idea at the end of my presentation, and we will see what the account that I'm presenting here suggests with respect to this scenario. So this is the overview of my presentation, which is based on the preprint; I'm just omitting a lot of details. I'll start by saying just a few things about the free energy principle. Then I will ask: what is the difference between an unconscious simulation of a conscious system and a conscious computational system? In other words, what's the difference between what is called weak and strong artificial consciousness? Strong artificial consciousness is an actually conscious artificial system, and weak artificial consciousness is constituted by an artificial system that maybe behaves as if it were conscious, or that simulates a conscious being, but is not actually conscious. And I'm interested in the question: what's the difference? A particular version of this question is whether a computer simulation in a computer with a von Neumann architecture can be conscious. So I start with the free energy principle, and I don't want to go into the details.
So we start with the description of a physical system. The x is the physical system, and the fundamental assumption is that the physical dynamics of this system can be decomposed into two components: on the one hand a deterministic flow term f, and then some stochastic noise term. So we end up with a stochastic differential equation which describes how the physical system evolves over time. The further assumption is that we can decompose the states of the system into internal states mu, external states eta, sensory states s, and active states a (sensory and active states together constitute the blanket states b). And this then is a particular system: a system that can be partitioned into internal and external states separated by blanket states. A further assumption is that the dynamics of this system can also be partitioned, in the following sense: we can decompose this flow term into individual flow terms for the different states. So we have a flow term for internal states mu, and can describe the dynamics of internal states in terms of such a differential equation as well. Okay, so these are just the technical, fundamental assumptions made by the free energy principle, but then an interesting thing starts when we look at how else we can describe internal states and the dynamics of internal states. The idea is that we can map internal states mu to probability distributions. So this is a formulation in terms of states, and the idea is that at every point in time the system will be in certain states, so there will be internal states relative to each time point, and we can map each of these states to some probability distribution q over external states given blanket states. Okay, so we can do that, but then the question is: why should we do that? And the idea is that maybe we can re-describe the flow term, which is essential for characterizing the dynamics of internal states.
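To make the verbal description concrete, the setup can be summarized in standard FEP notation. This is my own gloss on the equations described in the talk, not a transcription of the slides, so details of the notation may differ:

```latex
% Physical dynamics: deterministic flow plus stochastic noise
\dot{x}(t) = f(x, t) + \omega(t)

% Particular partition: external, sensory, active, internal states;
% sensory and active states jointly form the blanket states
x = (\eta, s, a, \mu), \qquad b = (s, a)

% Partitioned flow: internal states have their own flow term,
% depending only on blanket and internal states
\dot{\mu}(t) = f_{\mu}(b, \mu) + \omega_{\mu}(t)

% Each internal state is mapped to a probability distribution
% over external states given blanket states
\mu \;\mapsto\; q_{\mu}(\eta)
```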
Can we re-describe these flow terms in terms of the probability distribution encoded by internal states? And the free energy principle's answer is: yes, we can rewrite this term in terms of a variational free energy functional, which specifies how the probability distribution q, which is parametrized by mu (and thereby mu itself), changes over time. So we can reformulate the physical dynamics of internal states: we can rewrite the flow term in terms of variational free energy, or in terms of minimizing variational free energy with respect to a probability distribution encoded by internal states. This is then called Bayesian mechanics, and it's called Bayesian mechanics because minimizing variational free energy involves minimizing a divergence term between the internally encoded probability distribution q and a posterior over external states given blanket states. Okay, so to sum this up, I will give these ideas some labels. I've already talked about the physical dynamics: these are the dynamics of a particular system, described by a stochastic differential equation involving a flow term and a noise term, and recall that a particular system is a system that can be partitioned into internal, external, and blanket states. According to the free energy principle, we can re-describe these physical dynamics as computational dynamics: we can reformulate them in terms of a description of the system's internal states mu as performing approximate Bayesian inference by minimizing variational free energy with respect to the probability density encoded by mu. In short, we can reformulate the physical dynamics as a computational process that involves a form of Bayesian inference. Now, a further assumption that I want to make here is that we can capture the computational correlates of consciousness in terms of the computational dynamics as described by the free energy principle.
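As a toy illustration of this step from flow terms to free-energy minimization, here is a minimal sketch in Python. The generative model (a single binary external state and a binary blanket observation) and all the numbers are my own illustrative assumptions; nothing in this snippet comes from the preprint itself.

```python
import math

# Toy generative model: one binary external state eta in {0, 1} and one
# binary blanket observation b. The internal state mu parametrizes an
# approximate posterior q_mu(eta=0) = sigmoid(mu).
P_ETA = (0.5, 0.5)            # prior over external states
P_B1_GIVEN_ETA = (0.9, 0.2)   # likelihood of observing b = 1 under each eta

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def free_energy(mu, b=1):
    """Variational free energy F(mu, b) = E_q[ln q(eta) - ln p(eta, b)]."""
    q = (sigmoid(mu), 1.0 - sigmoid(mu))
    total = 0.0
    for eta in (0, 1):
        lik = P_B1_GIVEN_ETA[eta] if b == 1 else 1.0 - P_B1_GIVEN_ETA[eta]
        joint = P_ETA[eta] * lik                  # p(eta, b)
        total += q[eta] * (math.log(q[eta]) - math.log(joint))
    return total

# Internal dynamics as a gradient flow on free energy: mu follows the
# negative (numerical) gradient of F. This is the flow term re-described
# as free-energy minimization.
mu, eps, lr = 0.0, 1e-5, 0.1
for _ in range(3000):
    grad = (free_energy(mu + eps) - free_energy(mu - eps)) / (2 * eps)
    mu -= lr * grad

# At the minimum, q_mu matches the exact posterior p(eta | b = 1),
# which is the "Bayesian" in Bayesian mechanics.
posterior0 = (0.5 * 0.9) / (0.5 * 0.9 + 0.5 * 0.2)
print(round(sigmoid(mu), 3), round(posterior0, 3))
```

Both printed numbers should come out near 0.818: descending the free energy drives the internally encoded distribution toward the posterior over external states given the blanket state, because F equals the KL divergence to that posterior plus a mu-independent term.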
So here the computational correlates of consciousness, or CCCs, are defined as the computational dynamics of a conscious physical system, as specified by the free energy principle. Okay, so this is not just any physical system, it's a conscious physical system; I ought to add this on the slide. All right, so given these initial assumptions and definitions, can we say something about the difference between weak and strong artificial consciousness? I think we can. Let's start with the idea of a computational correlate of consciousness. Computations are medium independent, so it's plausible that a digital computer can implement computational correlates of consciousness. But then the question, if we want to distinguish weak from strong artificial consciousness, is: is every system that implements CCCs conscious, or do systems have to implement these computations in the right way, and what would that be? To better understand this question, I find it useful to look at this diagram and see where we started. We have a description of a conscious system's physical dynamics in the bottom left, and according to the free energy principle we can reformulate this, re-describe this, as some computational dynamics; by assumption these include the computational correlates of consciousness, and they must be implemented by the physical dynamics of the conscious system. Now, given that these are medium-independent properties, they can also be instantiated by other systems, and here I'm assuming that these computations can also be implemented by a digital computer. The digital computer, as a physical system, has some physical dynamics, and if we can apply the free energy principle to it, we can reformulate these physical dynamics as computational dynamics. And so then the question becomes: do these computational dynamics include the computational correlates of consciousness, whatever they are?
So this is a crucial question that we have to answer if we want to know what the difference between weak and strong artificial consciousness is (and maybe there is no difference). Before I present an argument to the effect that, in general, the computational dynamics of a digital computer will not entail the computational correlates of consciousness, I will present some observations on the free energy principle. Note that systems that conform to the free energy principle sustain their existence by minimizing variational free energy, but the reverse does not hold. You can run simulations of agents that minimize variational free energy, and you can do that on a computer, which does not thereby sustain its existence: it would continue to exist regardless of whether it runs these simulations or not. Okay, and just to illustrate this with a quotation which may be familiar to many, this is by Friston: "Many theories in the biological sciences are answers to the question: what must things do in order to exist? The free energy principle turns this question on its head and asks: if things exist, what must they do? More formally, if we can define what it means to be something, can we identify the physics or dynamics that a thing must possess?" And Jakob Hohwy puts a similar point as follows: the free energy principle analyzes the concept of existence of particular self-organizing systems. So the idea is that continuing to exist, sustaining one's existence, means minimizing variational free energy, and so any processes that contribute to minimizing variational free energy thereby contribute to the sustained existence of the system. But such processes can also be implemented in other ways, by different systems, which do not thereby sustain their existence. And this, I want to suggest, is the crucial difference between systems that merely simulate conscious systems and systems that actually are conscious. Okay, so I will hopefully clarify this idea in a
moment, by presenting an argument. But before that I need a definition: I want to introduce the notion of intrinsic computation. These are just computations that contribute to the sustained existence of a system; they are the computations that figure in the computational reformulation of the system's physical dynamics according to the free energy principle. And "intrinsic" here means that these computations are observer-independent. So it's not the case that we as observers say, okay, we can interpret the system as performing these computations, or it's useful for us to use it as a computational device; the idea is that these computations are processes that are intrinsic to the system, that only depend on properties of the system itself, and not on relations that the system has to observers or to other beings. Okay, so here's the argument, which in a way is already contained in the preprint, but I did not formulate it in this explicit way there. I'm grateful to Tomasz Korbak, who provided some comments on my preprint and who also suggested a reconstruction of the argument that I'm presenting in the preprint; the formulation here is based on his suggestions, and I hope it clarifies the argument that I'm putting forward and developing in the preprint. Okay, so the first assumption in the argument is a version of computationalism about consciousness. The idea is that the causal roles that characterize phenomenal consciousness are medium independent and can be captured in terms of computation; these are computational correlates of consciousness. So the idea is that consciousness does not require a particular type of substrate; it can, in principle, be realized by different types of system, as long as these systems implement the right computations. This assumption, as I intend it, does not entail that implementing the right computations is sufficient for being conscious; it's here just meant as a necessary condition. And the idea is that there are informative,
interesting computational correlates of consciousness, and they can in principle be implemented by different types of system. Then comes the second assumption, according to which every conscious system and its computational correlates can be described by a mechanical theory that conforms to the free energy principle. This is an assumption about the scope of the free energy principle, and it's at least clear that the free energy principle is intended to have a very wide scope: it's not meant to apply just to the human brain, it's not meant to apply just to living systems, but to a very wide class of self-organizing systems. This intention that the free energy principle have a very wide scope is also evidenced by the fact that recent formulations and developments in research on the free energy principle seek to relax certain assumptions that were made in previous versions, so as to make the free energy principle applicable to a very wide class of systems, and not, for instance, just to systems in a non-equilibrium steady state, to give just one example. All right, so the idea is that we can apply the free energy principle to conscious systems; that means we can move from a description of physical dynamics to a description of computational dynamics, and these will include computational correlates of consciousness. The third assumption makes a connection between intrinsic computation and consciousness: the system is conscious only if it sustains its existence by virtue of the computational correlates it realizes; such computational correlates of consciousness are intrinsic computations, computations that contribute to the sustained existence of the system. So if a system implements certain physical dynamics which, under the FEP, can be described as computational dynamics that entail computational correlates of consciousness, whatever they are, then the system is conscious, but only if the dependence goes in this
direction. So in principle it's possible to implement the same computational processes without being conscious, and this becomes clear in conclusion one: it's possible to instantiate computational correlates of consciousness in a physical system without instantiating consciousness, namely if realizing the computational correlates of consciousness does not contribute to the sustained existence of the system. So, assuming that a digital computer can perform the computations that characterize consciousness in conscious systems, it's likely, according to this proposal, that the computer will not thereby be conscious, because it could implement different computations, and it really doesn't matter which computations it performs: it will continue to exist as a physical system regardless of the computational processes that it implements. So it does not sustain its existence by virtue of the computations it performs. All right, and a condition that is entailed by this is that, from the point of view of the free energy principle, a computation of a physical system is intrinsic only if it matches the system's physical dynamics, and this will be important also for the question about consciousness in computer simulations. The second conclusion that I want to draw here is that computers with a von Neumann architecture cannot implement intrinsic computations, because their physical dynamics induce a causal flow that is different from the causal flow entailed by the computational dynamics. So there's no match between the physical and the computational dynamics, or between the physical dynamics and the computations that are performed within the computer simulation. I will unpack this in the third part of my presentation. So, could there be consciousness in a computer simulation in a computer with a classical architecture? This is an idea that I already presented in previous work. So, in a system that conforms to the free energy principle,
there's a basic flow from internal states via active states to external states, and from external states via sensory states to internal states. So we have these circular dynamics. In a computer with a von Neumann architecture, the basic causal flow is a bit different: we have memory and we have a separate processing unit, and the units that store the values of the different variables that stand for internal, external, and blanket states are in the memory unit. There's no direct causal interaction between these units; it's always mediated by the CPU. That's the basic idea, and therefore there's a difference in the causal flow. This means that if we want to reformulate the physical dynamics of the computer in terms of minimizing variational free energy, we end up with computational processes that cannot be identical to the computations that are simulated by the computer, because then there would have to be a match between the states that represent the probability distributions encoded by internal states and the internal states themselves. Okay, so from this I suggest we can derive two necessary conditions for consciousness. One is the flow condition: the causal flow of a physical system's computational dynamics, which may realize computational correlates of consciousness, must match the causal flow of the system's physical dynamics. And then there's a second condition, which I call the existential condition: the system sustains its existence by virtue of realizing computational correlates of consciousness; if it's a conscious physical system, then this is the case. All right, just a few observations or notes about this. A system can satisfy the first, flow condition without thereby satisfying the existential condition; the existential condition is stronger than the flow condition. But if a system satisfies the existential condition, then it
also satisfies the flow condition. And not all systems that satisfy both of these conditions are conscious; in other words, neither of them is sufficient for consciousness. These are really just strictly necessary conditions for consciousness. All right, let me return to the idea that I presented in the beginning, from Ted Chiang's novella. From the point of view of the account that I presented here, is it possible to download a virtual entity to a robot, and will it be conscious, or can it be conscious in the simulation? Let's just assume here that these systems satisfy the flow condition, the first condition. So assuming that this is a very special computer simulation, in which the physical states that represent internal, external, and blanket states also directly, causally interact with each other: according to the account proposed here, there would still not be consciousness in the computer simulation, because the computer does not sustain its existence by virtue of performing these computations; it could also run different simulations, which do not involve these digital entities, and it would continue to exist. So according to the account presented here, there would not be consciousness in this system. And if we take the part of the program that controls the virtual agent and download it to a physically embodied robot, could that robot be conscious?
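The flow-condition contrast discussed above can be made concrete with a small sketch. This is my own toy rendering of the two causal diagrams (a particular system's circular flow versus the CPU-mediated flow of a von Neumann machine simulating it), not code from the preprint:

```python
# Direct causal edges in an FEP "particular system": influence runs in a
# circle, internal -> active -> external -> sensory -> internal.
fep_edges = {
    ("mu", "a"),    # internal states drive active states
    ("a", "eta"),   # active states drive external states
    ("eta", "s"),   # external states drive sensory states
    ("s", "mu"),    # sensory states drive internal states
}

# In the simulating machine, the variables live in memory cells, and memory
# cells never influence each other directly: every update is mediated by
# the CPU's fetch-execute-store cycle.
cells = ["cell_mu", "cell_a", "cell_eta", "cell_s"]
vn_edges = {(c, "cpu") for c in cells} | {("cpu", c) for c in cells}

def direct(edges, src, dst):
    """Is there an unmediated causal link from src to dst?"""
    return (src, dst) in edges

# The simulated dynamics have a direct mu -> a influence ...
print(direct(fep_edges, "mu", "a"))           # True
# ... but the cell holding mu never directly drives the cell holding a:
print(direct(vn_edges, "cell_mu", "cell_a"))  # False
```

So even though the machine computes the same trajectory, its physical causal graph (everything routed through the CPU) differs from the causal graph of the computational dynamics, which is exactly the mismatch the flow condition rules out.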
Well, in principle it could, if the robot sustains its existence by virtue of performing the computations that are also performed by the computer simulation. Now, this result might be a bit strange, because, contrary to what the novella suggests, it implies that the system cannot switch back and forth between the virtual and the physical, non-virtual environment without losing consciousness. I admit that this is a counter-intuitive implication, but maybe it only seems counter-intuitive. Maybe it's not technically possible to take a virtual entity which was trained in a purely virtual environment, download it, and implement it in a physical robot, thereby allowing the physical robot to interact with its environment in the same way as the virtual entity can interact with the virtual environment. I know that in robotics there's a certain strategy for training robots, for developing the controller of the physical robot: you first train it in a virtual, simulated environment and then use that to control the robot. Maybe this only works for robots that have certain limitations in their sensorimotor abilities; maybe it doesn't work for highly sophisticated robots that are more like conscious organisms, which can react adaptively and interact flexibly with the environment, on the basis, maybe, of affectively guided, affectively shaped representations. So this would be an empirical hypothesis derived from my account: the prediction would be that this sim-to-real strategy, of simulating an entity in a virtual environment and then applying that to a physical entity, will at some point reach its limits and will not be successful. If, on the other hand, this hypothesis or prediction turns out to be false, I think we should reconsider the account that I'm presenting here. Let me conclude. I've asked what the difference is between weak and strong artificial consciousness, and the suggestion is that it's
about intrinsic computation; that's the difference. Actually conscious systems sustain their existence, at least in part, by virtue of performing certain computations, and a mere simulation of a conscious system does not thereby sustain its existence: it could also perform different computations and still continue to exist. This condition entails the flow condition, which is important for the second question: could a computer simulation in a computer with a von Neumann architecture be conscious? No, because it violates the flow condition. Okay, here's an ad for the journal Philosophy and the Mind Sciences, which I'm running together with Sascha Fink, Jennifer Windt, and Regina Fabry. And I thank you for your attention.

Awesome, thank you. I'm just getting my video back in the game while I'm getting everything back on the stream. First, just thank you for the presentation. I wanted to pick up with that robotics intro and conclusion: to what extent does the embodiment in the robot matter, beyond the ability of the virtual simulation to do things like eject the CD drive, or ultimately do physical things, just not the kinds of physical things that we see human children do, like play with toys? When you were thinking about the virtualized simulation, were you thinking about one that had no access to sensors and actuators? Or what happens when the digital simulation also has access, in some limited way, to the ability to sense and act on the outside world, if you have any thoughts? You're muted, sorry about that; on the stream it was audible, but not to you, but now I've resolved the Zoom, so we can, from your consciousness perspective, consider it a new question. So I just wanted to pick up with that robotics example. A digital simulation may still have sensors engaging with the world, and may still be able to undertake actuation, if only something like flipping a switch on a processor. So what exactly about the embodiment do you think matters for it to have that kind of adaptivity, and maybe even
open-endedness and learning, that you pointed to as an important property? Yeah, good question. So I think there are many different aspects that you're touching upon; let me start with this one. I mentioned my correspondence with Tomasz Korbak, and one thing that he suggested was: couldn't we regard a language model as a system that has certain sensorimotor abilities? It can interact with a physical environment via a linguistic interface: it receives text as input and it outputs text. Can't we regard this as sensing and acting? And this, I think, is similar to one of the aspects that your question touches upon: what is it about the embodiment, and can there be different forms of embodiment that might still lead to consciousness? I don't have a full reply to this, just two things that I would mention here. The first is that there are certainly some analogies between what a language model does when it interacts with a user and a system, maybe an organism, that interacts with its environment via perception and action, but there are also some disanalogies. When an organism interacts with its environment, there are temporal constraints: it's really important that it does the right thing at the right time, reacts fast enough, and so on; for a language model that does not play a big role. So there's one difference. And then there's also a difference in the format of the representations. In principle you could say, okay, why does this matter, and maybe in the end it doesn't, but I just want to mention that there are some crucial disanalogies that I think would be important to take into consideration and evaluate. Then there's the question: is there something about these more low-level sensorimotor skills that is crucial when it comes to consciousness? Now, on the one hand, we know that an organism can be conscious without having any language skills, without being able to speak or understand language. So this is why,
when it comes to functions of consciousness, for instance, people usually don't look for linguistic abilities, but more for sensorimotor abilities, or forms of learning that enable or improve interaction with the world. So this is one thing. But of course, in principle, it could still be that a system that lacks these low-level skills is conscious. But then I would say: if it's possible to be conscious without having these sensorimotor skills, then it's not really because of any ability for interaction with an environment that a system is conscious; it's merely because of some internal computations that the system performs. If you're thinking about systems that might be islands of awareness, systems that don't receive any sensory input and don't produce any motor output, then I wouldn't want to rule out that such systems are conscious, and such systems don't have any ways of interacting with their environment. So I'm happy to accept that interaction with the environment is not required for consciousness. But coming back to language models, this would mean that being able to interact with users using language is similarly not required for consciousness. So I would say: if such a system is conscious, then it's not because of its ability to interact with the environment, but because of its internal processes. And what may be crucial is not the actual interaction with the environment, but just the potential for interaction with the physical environment. In principle it might very well be that a language model can facilitate interaction with the physical environment, for instance if it's connected to a physically embodied robot; there's research going on on such things, researchers who try to improve the abilities of their robots by connecting them, interfacing them, with a language model, which can then help translate commands into motor sequences. So what this suggests is that maybe there are some crucial types of representation or
computational processes that are actually already implemented in language models, or in other systems that don't directly interact with the physical environment. And I'm happy to accept that, but I would add that, according to the proposal that I've presented here, you need something more than that. It's not sufficient to just implement these computations; they must have a meaning for the system itself, in the sense that the system will sustain its existence by virtue of these processes if it is connected to the real, physical environment. Now, maybe I've not addressed every aspect of your questions, so feel free to restate your question.

It's great, it got to a great place, because I wanted to follow up with this intrinsic computation concept. You mentioned that they're observer-independent, to the extent that anything or any process can be, and that these are the kinds of processes that actually enable the persistence of that thing, according to the FEP. And so I was thinking about this first in a bottom-up way: this is the firmware, this is the Linux kernel, this is the kind of enabling software that is supporting potentially extraneous higher-order functions, and that is allowing some kind of separation or delineation between what we can say are the vital and intrinsic, almost the homeostatic, functions, and then second-order functions. But then I thought about the ecology that the computation is deployed in. So let's just say that the software simulation is being used in a scientific context, or to run Photoshop, and another cognitive agent keeps that simulation alive because it's performing some function. From that bottom-up sense, the Photoshop program is not the Linux kernel, so Photoshop, in principle, is not sustaining the persistence of the entity in a bottom-up fashion; however, in deployment it actually becomes a necessary condition imposed externally, and that would also be extremely
observer-dependent, or subjective. So how do we think about, in standard modern computers or in wetware, what are these intrinsic computations? Thank you, that's a great question. I can say more about intrinsic computation in a moment, but just one comment before that. I think you're touching on a very important issue, and that's the question: what is a system in the first place? If we want to consider the question of what kinds of entities can be conscious, we first have to say, okay, what are the entities that we're looking at in the first place, what are these things? And why can't we look at a piece of software, or some process that is running an app that a user is using (what was the example that you gave, Photoshop or something)? So why can't we regard Photoshop, and a particular instance of this software, as a system which exists for some time, and which will continue to exist if it's useful for a human being, and it's useful by virtue of the computations that are realized by it? So can't we say that this system sustains its existence by virtue of the computations that it performs? And that's, I think, a very crucial question, because it's so fundamental: what is a system? Currently I think that the free energy principle is sufficient to answer this question, but it may be that it's still too general. I mean, the free energy principle gives us an answer to the question: if something exists, what must it do? And so it analyzes this notion of the continued existence of self-organizing systems, and maybe it does apply to the processes that implement Photoshop in the computer of a human being who uses this software. If that's the case, then I think it may be necessary to add some further constraints on the kinds of systems that we want to regard as potentially conscious, or that we want to consider when it comes to the question of what kinds of systems can be conscious. And one suggestion or idea that I think would be
worthwhile to explore here is to investigate to what extent something like the free integrated information theory may complement the free energy principle in this regard because integrated information theory has a very strong emphasis on the notion of existence of what it means to exist what it means to be what it means to intrinsically exist and not just in the eye of a beholder and maybe we need to add constraints from something like integrated information theory which would then tell us what types of systems really exist and then maybe it would say the processes that are implementing Photoshop in this computer don't really exist in an intrinsic sense so this would then be my suggestion to add some constraints but of course you could say well no such constraints are completely unnecessary we don't need to exclude certain systems and we can regard the processes that implement Photoshop as a real system that sustains its existence but my intuition would be at least currently that such processes would at least in a classic system with a classical hardware would be too scattered and there would be so many physical processes interfering as it were that you can regard this as a separate thing which is distinct from other things that are happening in the computer but I would say this is more an observer dependent property because those processes that are so entangled with other processes in the physical system that they are not particular systems in the sense of the free energy principle Awesome and again kind of journeys us to another question about the topological or geometric concordances between physical and computational dynamics and it made me think of a few different kinds of systems the Von Neumann architecture described with the CPU and the RAM regular desktop computer situation of course we have unconventional computing paradigms quantum and analog computers we have the brain and the SPM package in which statistical inferences help us model how brain regions 
that aren't connected through, for example, axons can still have an edge in a Bayesian graph; they can still have a causal, effective, or functional connectivity without being structurally connected, and vice versa. And then, to bring it back to the computer and what you brought up about whether or not those processes would be too scattered: what about a computer, either a distributed computing cluster or a strict virtualization scheme on the computer, such that certain variables were isolated? Would it be enough for the system to have some kind of constraint, again through a physical networked layout or through a virtualized system, so that the computational flow and the causal flow could be played with in a different way? And then how would we know? What would we be looking for? Surely not simply test-retest accuracy or efficiency in some way; that's not what consciousness is. So it comes down to: what are we looking for, or talking about, if we did have the ability to design different systems that did have different overlap between their computational and structural aspects?

Yeah, oh, great question. So I think it's very important to look at different types of computer architecture, different types of hardware, and you mentioned some. There's actually something that was pointed out to me by Johannes Kleiner: he pointed me to the notion of computation in memory, or something similar, and if I understand that correctly, it is a slight departure from the von Neumann architecture, because you actually do computations within the memory unit. So it's not that you have additional memory units within the CPU, but you perform computations in the memory unit itself, which would be a way of satisfying the flow constraint, the flow condition that I mentioned. And similarly, there may be other architectures, maybe neuromorphic hardware, that also could implement simulations in a way that satisfies the flow condition. As I pointed out in the presentation, the flow condition is just one necessary condition, which is weaker than the existential condition, so all of these systems, even if they satisfy the flow condition, would not thereby also satisfy the stronger existential condition. But as may have become clear, I am not completely certain about this existential condition. Or rather, there may be some ways in which one can describe what's going on in such systems such that you can actually say that the processes that implement these computations are systems, particular systems to which the free energy principle can be applied, even if the material on which these processes run is not directly affected by these computational processes. If we think about neuromorphic computer chips, maybe the chips themselves will continue to exist no matter what computations they perform; but if the activity that implements the computations within such a chip is sophisticated enough, or has certain properties, maybe this activity can be regarded as an instance of a particular system to which the free energy principle can be applied. So I'm not completely certain about this, and I think this is something that has to be clarified. Maybe it will also be useful to see what people who are actually working on the free energy principle would say about this.

That's a really important point. Yeah, a few things that makes me think of. First is something like the hard problem of virtualization, some modification of a hard or easy question. And then also the difference between a case where the material substrate of a computation is its own self-referent sustainer, and another case in which the material basis of a computation is projecting, let's say, a non-equilibrium steady state, like a hologram, or it's creating robots, it's spinning out robots, or it's doing 3D printing. So in that situation, depending on where the system boundaries are, the viability of one part, maybe even something like its semantic germline, is required for the
continued propagation of some other part of the persistent system, but that part is actually disposable even though it's the embodiment. I'll go to a question in the live chat. And also, I think it was memcomputing, the memory-based computing, and I agree there are a lot of interesting threads there. Ali asks a question in the chat; he wrote: what can we say about LLMs and understanding, and what is the relationship between understanding and consciousness?

Thank you, and that's a very good question, or these are very good questions. Of course, it depends on the notion of understanding. If we conceive of understanding as a way of grasping inferential relations or associative relations between different notions, then there certainly is some form of understanding that large language models have: they can relate concepts to one another, and their representations of concepts in a vector space preserve some important parts of the structure of the conceptual spaces of our language. So in that sense, they do understand some things. And then there are other forms of understanding, about which I will say something in a moment, because that's a bit more complicated and I cannot say as much about it, I think. But as for the question of whether understanding entails consciousness: understanding in the sense you mentioned, I would say, does not require consciousness, so you can have that without consciousness. Another question is whether consciousness requires some other form of understanding. Maybe, and my proposal here would suggest that yes, there is some form of understanding that is required by consciousness, and this is an understanding which is related to knowing what things in the world some words or representations refer to. So this requires that you actually have a concept of there being a world of which you are a part, and I don't think that large language models understand that they are things in the world. They don't know that there is something out there of which they are a part, and I think that's crucial for a certain form of understanding. In order for certain things to be meaningful for you, you must somehow have a concept of the world, and a sense that you are part of this world, and of how you relate to things in the world, and so on. Now this, I think, is required for some form of understanding, and I think it's also required for consciousness, at least according to the proposal that I presented here. The idea is that if a system not only has some internal representations that it manipulates, but if, by virtue of doing so, it sustains its existence, then this puts some very strong constraints on what the system will do with these representations. If you think about a self-driving car: in principle it may compute all sorts of things, and they will not have a meaning for the system, for the car itself. But if these processes are realized in a way such that the system's continued existence depends on these computations, then this puts more constraints on what it can do with these internal representations, which will give such a system maybe some form of common sense, allowing it to draw certain inferences and avoid other inferences, and so on. So that would be my hypothesis: that there's a certain form of understanding that also requires the conditions that I think are necessary for consciousness.

In that I hear a shift from semantics and semantic embeddings to true semiosis and the abductive process of generation of embedded meanings. That's very interesting. A few more questions. One: you've been writing excellent and diverse papers on different mathematical formalisms of consciousness for some years now, so how have you seen the empirical side and the theoretical side of the scientific study of consciousness developing in the last several years? And how do you believe that the current inflection points in artificial intelligence are
recontextualizing or modifying your agenda, or bringing different relevance to your work?

Yeah, that's a good question. In general, I would say that there's been, since the 90s, an explosion of empirical work on consciousness, partly due to certain methods and paradigms that were developed, and imaging technology that improved, and so on. So we have better ways of empirically studying consciousness, and in the past years, or decades, a lot of empirical data on consciousness has been gathered and many things have been found out. I think there are two reasons for which many people are now driven to theoretical approaches again, or are looking more closely at theoretical approaches. The first is that data alone doesn't give you understanding, so you need ways of interpreting data and constraining experiments and so on. I think some evidence for this are these recent adversarial collaborations, in which proponents of different theories of consciousness team up to design experiments that would lead to, or could lead to, evidence which more strongly favors one but not the other theory. Part of why such adversarial collaborations are needed, or are perceived as necessary by some, is that in previous years, with certain theoretical assumptions in mind, or with their pet theory, maybe global workspace theory or other theories of course, people have been looking for evidence that would confirm their theoretical assumptions, and maybe designing experiments in a way that is more likely to yield certain kinds of evidence: confirming evidence, supporting evidence for their own theoretical assumptions. So realizing that, in order to make progress and constrain the class of theories that you have, it's important to do more rigorous experiments and explore ways to also find evidence that disconfirms certain theoretical assumptions, that I think is one of the motivations behind these adversarial collaborations, and it also makes people a bit more self-conscious of the theoretical assumptions that they're making. Then another big issue is, of course, that the scope of empirical and theoretical approaches has to be expanded. We now have good evidence that many animals that were previously thought to be unconscious actually are conscious, animals such as octopuses, and there's at least reason to take seriously the hypothesis that some insects may also be conscious, maybe bees are conscious. In order to make progress here, theoretical approaches are needed to help structure debates and determine what kind of evidence would be supportive of certain hypotheses, or which kinds of evidence would be relevant to answer the distribution question: which entities, which animals, are conscious. So that's one thing. And then, of course, you mentioned progress in AI, and people are now seriously considering what artificial systems might be conscious, and whether some existing systems might actually already be conscious. We need theories to make progress on this, or maybe not necessarily theories, but theoretical work. There's also a very difficult problem here: artificial systems are in many ways not only unlike human beings but also unlike other animals. Whereas in research on animal consciousness you can try to draw some analogies between human beings and other animals, it's less straightforward when it comes to artificial systems, and this means that empirical research alone will be less helpful. We also need sophisticated theoretical approaches to deal with these issues, and there will always remain some uncertainty, just because artificial systems can be physically or physiologically so different from conscious organisms, and it's not clear how best to deal with these uncertainties. We need theoretical approaches to make sense of all this, and we might have to face the reality at some point: maybe it will turn out that we won't ever know whether certain systems are
conscious or not. I'm very skeptical that we'll someday find a theory of consciousness that will tell us exactly which entities are conscious and which are not. I mean, integrated information theory is a proposal for such a theory, and it's for this reason very important that theories like IIT are being developed. On the other hand, it makes very strong predictions about what kinds of systems can be conscious and which can't. So it would agree with the proposal I presented here that a computer simulation on classical hardware will not be conscious, and that many artificial systems will not be conscious even if they have all the abilities that we associate with consciousness, even if they can interact with the environment, have sophisticated cognitive abilities, and so on. It's very important that such theories are developed, but I think it's also very hard to gain certainty. So one strategy that I'm pursuing is to try to find necessary conditions for consciousness, which would allow us to rule out that certain systems are conscious. Maybe we will never have a theory of consciousness that says with certainty: if an artificial system has X, or X, Y, and Z, then it is conscious. But maybe we will have some theoretical approaches that are empirically informed and which strongly suggest that if a system does not have X, then it is not conscious. Maybe this will be all that we can hope to achieve when it comes to understanding artificial consciousness, and I see the account that I'm presenting here as contributing to this project by proposing some necessary conditions for consciousness. As I already indicated, I'm not completely certain about either of these conditions, but maybe something like the weaker condition, the flow condition, could be a useful necessary condition for consciousness, which would rule out consciousness in a wide class of systems, but which would, for example, not rule out the possibility of consciousness in a computer simulation, at least not in all computer simulations.

Awesome, the via negativa of consciousness status. So, one question on measurement and then a closing question on ethics. You mentioned the distribution question: what is the distribution of consciousness? In our scenario, what kind of distribution is that? Is it a z-axis, a scalar quantity that is going to summarize the density of some distribution in space, as if we were doing some kind of topographical map? Or is that distribution multi-dimensional, are there different dimensions to consciousness? So to what extent do we even aim for a unidimensional scalar representation, potentially using IIT or other measures, or to what extent will we have a plurality of consciousness measures, without necessarily a higher or lower?

Thanks, yeah, that's a very tough question. As you already alluded to, according to some theories, such as IIT, consciousness does come in degrees, but it varies along a single scale, the degree of consciousness. Classically, there's also this distinction between the level of consciousness and the contents of consciousness: the level would be defined in terms of wakefulness or vigilance, and the contents in terms of that which is experienced, and the idea would be that you can distinguish between levels of consciousness in a unidimensional way. But then there are more recent proposals according to which consciousness is multi-dimensional, as you already mentioned, which means that it may be impossible to define a unidimensional degree of consciousness; maybe you can only order conscious experiences along these different dimensions individually, but not have a total order on conscious states. So I think these are to a large extent unresolved questions, and I don't have a strong opinion on this, but I do think that consciousness, at least some crucial dimensions of consciousness, comes in degrees, and I'm open to the idea that there will be borderline cases between systems that are clearly conscious
and systems that are clearly unconscious. So I'm open to the possibility that for some systems it will be indeterminate whether they are conscious or not. A theory like IIT would not imply this, of course, but some computational approaches to consciousness would specify properties, if you think about the counterfactual depth of internal representations, for instance, that come in degrees, and it's not clear at what point a system becomes conscious and ceases to be unconscious, what degree of counterfactual depth is necessary for consciousness. So yeah, I'm open to the possibility of borderline cases of consciousness. Similarly, if we try to apply theories of consciousness to other animals, then there may be cases in which it's not clear what a theory of consciousness would say. Jonathan Birch has a wonderful paper about the problem of trying to determine which non-human animals are conscious, and the idea is that if you try to use a theory of consciousness that was developed on the basis of what we know about consciousness in human beings, then there may be many cases in which it's not clear whether the theory applies or not, or whether the conditions for consciousness are fulfilled or not. If you think about global workspace theory: what is a global workspace? For human beings, we can distinguish between different consuming systems, local processors, that have access to the global workspace and which receive the contents that are represented in the workspace and which are thereby consciously processed. There are animals that may not have as many cognitive sub-processes, that don't have such sophisticated cognitive abilities, that maybe have something like a global workspace but with fewer consuming systems. So when does it cease to be a global workspace in the sense that would be required for consciousness? Because of such indeterminacy, it may be more useful then to look for other markers of consciousness, evidence for consciousness, such as learning abilities. That's what Birch proposes, in part, I think, inspired by work by Eva Jablonka and Simona Ginsburg, who suggest that certain forms of learning are transition markers for consciousness, which provide strong evidence for the presence of consciousness. And what about the distribution question? So this is more about evidence for consciousness in different types of systems, and what the distribution will look like in the end, I don't know. What I'm mainly interested in is trying to find out how we can determine whether a system is conscious, or how we can rule out that a system is conscious, and what in the end this will tell us about the distribution of consciousness I think is a great open question.

Awesome. Well, as we say in the ant colony, the whole nest is our workspace, so different systems will do it differently. And in closing, I know that you're teaching a course on this, so to compress it will be a challenge, but could you conclude with a statement, or actually a pair of statements, on AI ethics: one addressed to natural humans, one addressed to the machines?

Um, yeah, so AI ethics. I don't want to say anything about AI ethics in general, but just about consciousness. When it comes to human beings, what I find interesting is the question of what is required for being a moral agent. Most people would agree that consciousness gives entities, gives organisms, at least some moral status, even if it's maybe not required for having a moral standing, but it's at least sufficient for some form of moral status. And then maybe some organisms matter more than others, even if they are similarly conscious; maybe some animals matter more than other animals because of their cognitive abilities, and so on. And then the question is: what do we have to add in order to turn a moral patient, as it were, into a moral agent, a being that can act in ways that are not just in accordance with certain moral principles, or not
in accordance with them, but that can act because of certain moral considerations. Is it necessary to be conscious in order to be a moral agent? What I find interesting here is to explore to what extent accounts that look for necessary conditions for consciousness may also yield some necessary conditions for being a moral agent. So even if consciousness may not be required for being a moral agent, maybe there are certain necessary conditions for consciousness that are also necessary for moral agency, and this can then also be applied to artificial systems, in principle. Of course, when it comes to artificial systems, the interesting question is: should we even create artificial systems that might be conscious, or do we have a duty not to create conscious artificial systems? My own position would be that in the case of animals, it's already happened. We may not know with certainty which animals are conscious and which are not, but regardless of our epistemic situation, regardless of what we know about these animals, they are conscious or are not conscious, they feel pain or don't feel pain, they suffer or they do not suffer. So the most we can do is try to minimize the suffering of existing animals. But when it comes to artificial systems, we can at least at the moment say that most artificial systems are very unlikely to be conscious, and we have the unique opportunity now to really think hard about whether we want to risk creating conscious artificial systems. I think we learned at least two interesting things in the past weeks or months about artificial systems, artificial intelligence, and consciousness. One is that AGI may not require consciousness. I think most people believed that anyway, but there was at least a possibility that AGI, artificial general intelligence, might require consciousness, and what we've seen with the latest generations of language models is that they at least come very close to some form of artificial general intelligence, and it's very unlikely that they're conscious. So this gives me at least some confidence that future, more sophisticated systems that will have a general form of intelligence will also not be conscious, or are unlikely to be conscious. And I think that's good, because there will be no further incentive to create artificial conscious systems in order to achieve AGI, since it seems that you can achieve that without creating conscious systems. Another thing, which is a bit more concerning maybe, is that we've also seen that it's very difficult to regulate developments in AGI. Big companies can just decide to put systems on the market and make them available for everyone, and people will use them for whatever purposes, without any regulation. So if it will, in the maybe not too far future, be feasible to create conscious systems, if we do gain a better understanding of what it would mean for an artificial system to be conscious, or maybe of how to create one, people will do it, and it will be very difficult to regulate this. So this may be a problem for the not too far future, a problem for guest stream number 40.2.

Yes, thank you, Vanya, very insightful. Best of luck with your work, and hope to talk to you later.

Thank you, Daniel, it's been a pleasure. Thank you so much.