Hello, it's June 23rd, 2023. We're in active guest stream 46.1 with Denise Holt. So Denise, thank you very much for joining. We'll have your presentation first, any length of time you want, and then we'll have a discussion about some of the topics you bring up, and I'll read some comments from the live chat too. So thanks again, Denise, looking forward to your presentation.

Hi, Daniel, thank you so much for having me today. Let's dive right in. Today I'm going to be talking about Active Inference AI and the Spatial Web. For those who aren't familiar, there's a company called VERSES AI, and they're delivering the next generation of AI. This goes beyond the current machine learning and deep learning models we're all so familiar with. Those are incredible tools, but they really are just tools. They're great at creating content and performing tasks, but they operate by pattern matching, and each one is siloed in what it can do. They're trained on enormous amounts of data to improve their pattern recognition and, hopefully, the accuracy of their outputs, but that's really all they do. What VERSES has created is something completely different, and it's based on Active Inference. Active Inference AI is entirely new, and it's the AI that can take us to AGI, leading to synthetic, self-evolving intelligence. So in this deck I'm going to talk about Active Inference AI and the Spatial Web and how these technologies are going to change the world: how Active Inference works with artificial intelligence, how the Spatial Web protocol enables the AI's perception and belief updating, and how, together, these technologies enable the only AI in the world that can run critical operations for systems and infrastructure, smart cities, even the planet.
So when you're talking about airports, hospitals, things like that. So let's dive right in. The Spatial Web, what is that? The Spatial Web is being called the network of everything. It's the next evolution of the internet, and it's powered by AI. It takes us from the library of pages and documents that we have in the World Wide Web, where everything is static, into a library of spaces. The Spatial Web protocol is HSTP, the Hyperspace Transaction Protocol, and the programming language for the protocol is HSML, the Hyperspace Modeling Language. With this new protocol we have spatial domains, and everything within any space becomes locatable and programmable on this new network. And it's just an extension of the protocols we have now: same internet, more capabilities.

Now, Dr. Karl J. Friston, and I'm sure a lot of you are familiar with him, is the father of active inference and the free energy principle. VERSES, with Dr. Friston, has developed a new type of AI that mimics the self-organized systems of nested intelligence found in nature. It's based on his methodologies of active inference and the free energy principle, and on the Spatial Web protocols I just mentioned, which were developed by VERSES AI. In December they put out a white paper called Designing Ecosystems of Intelligence from First Principles. And just to back up a second: VERSES created the protocol, HSTP and HSML, but donated it to the public, because nobody can own the internet. They needed that framework in place for what they wanted to do as a company.
They also donated the protocol and the IP to the IEEE, which is one of the largest core standards bodies; it develops standards across electronics and engineering and is responsible for the core standards around things like Bluetooth and WiFi. So core standards have been in development around the protocol for almost three years now, and in that time VERSES has been building what they've been building, which I'll get to.

So what is this white paper about? It's basically about shared intelligence. In the white paper they propose cyber-physical, nested ecosystems of distributed intelligence joining humans, machines and AI agents on a common network. So we're talking about humans as integral participants; adaptive behavior; self-evidencing; self-organization; belief updating over several scales; belief sharing over ensembles of agents. We're talking about a network of intelligent agents and collective intelligence, and of course the protocols, HSTP and HSML. Then the knowledge graph and digital twins: digital twins of our planet and all the nested systems and entities within it. So nested ecosystems of intelligent agents, both human and synthetic, sensing and perceiving continuously evolving environments, making sense of changes, updating their mental model of what they know to be true, and then acting on the new information they receive. The white paper also discusses a network of distributed intelligence. This enables a cognitive architecture made up of the collective intelligence of multiple agents that continuously communicate, coordinate and collaborate with each other. Individual and specialized intelligences all come together on a common network, speaking a common language, HSML over HSTP. It's efficient, powerful cross-communication to perform tasks, regulate systems and address problems in real time. And it scales up and grows in tandem with humans.
So, active inference and the free energy principle. What exactly is active inference? Active inference is a biologically inspired approach to AI systems, a method for understanding behavior that incorporates the brain, cognition and behavior, modeled after design principles from nature and from how the brain, nervous system and body act and react. One of the important questions Dr. Friston asks in the paper, in reference to active inference, is: what must intelligence be, given that intelligent systems exist in a universe like ours? So active inference is a theoretical framework that describes how agents, like people, animals, robots or artificial systems, can maintain their internal states and behavior in unison with their objectives or goals.

And what is the free energy principle? The free energy principle is the theoretical foundation of active inference. It proposes that organisms or agents maintain their internal states and behavior by acting in ways that minimize the difference between their current beliefs about the world and what they expect to be true. In other words, agents try to make things more predictable by reducing the gap between what they expect to happen and what actually ends up happening. They do this through a continuous cycle of improving their perception and acting on the environment. Basically, it helps extract the signal from the noise, the regularities from the irregularities, by minimizing the prediction error. And there are two parts to this inference cycle. There's the perceptual inference part, which minimizes free energy by improving the internal model to better match the sensory input: making Bayesian inferences about the world, the brain minimizes free energy by optimizing perception, through hypothesizing and observation.
In other words, the brain models the environment better, improving perception. The active inference part minimizes free energy by acting on the environment, acting to reduce surprise, disorder and unpredictability. So the active inference part of free energy minimization amounts to lowering the surprise in the sensory observations by acting on the environment. Actions can't directly change observations, but they can change them indirectly by changing environmental states.

So then we get to Active Inference AI. In the context of artificial intelligence, active inference and the free energy principle explain how agents can learn and adapt to new situations, and how they can generate predictions and plan actions based on their goals or objectives, which leads to more accurate predictions and results, which is the real goal of AI. Active Inference AI lets us overcome the current limitations of machine learning and deep learning models, providing a realistic path towards artificial general intelligence. The idea is that an agent can use active inference to minimize free energy and optimize its behavior in a way that's consistent with its objectives, which leads to more accurate AI. So, the action-perception loop: perceptual inference and active inference unfold continuously and simultaneously, underscoring a deep continuity between perception and action. They're two sides of the same coin, performing the same free energy minimization algorithm. Free energy can be minimized by improving perception and then acting on the environment, in a loop that keeps narrowing the focus, and also by learning the generative model, how the brain encodes the environment. That's the side the Spatial Web protocol actually helps with, because it bakes context into every person, place or thing in any space.
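To make that action-perception loop concrete, here's a deliberately minimal sketch in Python. This is my own toy illustration, not VERSES code, and real active inference minimizes a variational free energy over a generative model rather than this simplified squared prediction error: the agent nudges its belief toward what it senses (perceptual inference) and nudges the environment toward what it expects (active inference), and the gap shrinks on both sides.

```python
import random

# Toy perception-action loop in the spirit of active inference. Illustrative
# sketch only: real active inference uses variational free energy over a
# generative model, not this simplified prediction-error scheme.

PREFERRED = 21.0      # the agent's prior preference (e.g., room temperature)
SENSOR_NOISE = 0.5

def sense(world_temp):
    """Noisy observation of the environment's hidden state."""
    return world_temp + random.gauss(0, SENSOR_NOISE)

def run(steps=50, seed=0):
    random.seed(seed)
    world = 15.0      # true environmental state
    belief = 18.0     # agent's internal model of that state
    for _ in range(steps):
        obs = sense(world)
        # Perceptual inference: move the belief toward the observation,
        # reducing the prediction error (obs - belief).
        belief += 0.5 * (obs - belief)
        # Active inference: act on the environment to pull it toward the
        # state the agent expects/prefers, reducing future surprise.
        world += 0.2 * (PREFERRED - belief)
    return world, belief

world, belief = run()
print(round(world, 1), round(belief, 1))  # both settle near PREFERRED
```

The loop never "explains" the world in one shot; it just keeps alternating the two updates, which is the continuity between perception and action described above.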
So that context informs the AI. It's optimizing the expected precision and regulating the learning. We update the model over and over to improve our understanding of the world, and that's what this AI does as well. How does it do it? Self-optimization and self-evolution. Active Inference AI and the free energy principle, through Bayesian inference, allow a neural network to self-optimize through the intake and continuous updating of new real-time sensory data, while simultaneously considering previously established outputs and determinations. Past decisions plus new input lead to future outcomes, and that cycle is a self-optimization and self-evolution cycle. A self-evolving system evolves over time. Current AI systems are more like machines; they're neural nets trained to do a specific task. But a self-evolving system learns from moment to moment and upgrades its world model, and thus it mimics biology while also enabling general intelligence. Within the free energy principle, neural networks self-optimize through a set of mathematical rules, enabling next-generation artificial intelligence to efficiently learn, predict, plan and make decisions.

Now, the Spatial Web protocol, let's talk about this. What is the Spatial Web protocol? HSTP, the Hyperspace Transaction Protocol, gives browsers the ability to link spaces, with an ID for every person, place or thing, both digital and physical, virtual or real. But it's much more than a location identifier. HSTP is also a gatekeeper: it allows various parties to agree on who, what and where anything is in space, who owns it, who has access to it, and what can be done with it. And HSTP supports multi-dimensional query.
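The "past decisions plus new input" cycle is, at its core, sequential Bayesian updating: each posterior becomes the prior for the next observation. Here's a minimal sketch, my own illustration with a made-up two-state example and made-up sensor likelihoods, not anything from the VERSES stack:

```python
# Sequential Bayesian belief updating: yesterday's posterior is today's prior.
# Hypothetical two-state example (is a machine "ok" or "faulty"?) with
# invented sensor likelihoods -- purely illustrative.

# P(observation = "hot" | state)
LIKELIHOOD = {"ok": 0.1, "faulty": 0.8}

def update(prior, obs_is_hot):
    """One Bayes step over the two hypotheses; returns P(faulty)."""
    p_obs_faulty = LIKELIHOOD["faulty"] if obs_is_hot else 1 - LIKELIHOOD["faulty"]
    p_obs_ok = LIKELIHOOD["ok"] if obs_is_hot else 1 - LIKELIHOOD["ok"]
    num = p_obs_faulty * prior
    return num / (num + p_obs_ok * (1 - prior))

belief = 0.05  # prior probability the machine is faulty
for obs in [True, True, False, True]:  # stream of real-time sensor readings
    belief = update(belief, obs)       # posterior feeds forward as new prior
print(round(belief, 3))                # belief climbs as "hot" readings pile up
```

No retraining happens anywhere in that loop; the model simply keeps folding new evidence into what it already concluded, which is the self-optimization cycle described above.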
So HSTP allows for query over multiple dimensions, identifying, localizing and updating the attributes, conditions, contingencies and interrelationships of objects in space and over time. If it can be defined and calculated in the sphere of natural or digital science, it can be made searchable through HSTP.

Then there's HSML, the programming language that informs the protocol. I like to call it the smartest contract around, because it's now the foundational contract for every person, place or thing, in any space, in any reality. The only way to create a truly technologically augmented existence is to be able to consider and measure the contextual elements that affect the expression of shared information by and between all objects in space. This is known as computable context, and it's what HSML, the Hyperspace Modeling Language, was made for. What are some of the contextual elements that can be programmed? Location, the where and when of anything in any space, and here we're talking about different realities, spaces, times and channels as well. Then activities, the what and the how, in regard to things like rights, credentials, claims or activities. And then identities, the who and the what of anything in any space: authorities, domains, users, assets. All of these contextual elements can affect the expression of shared information by and between anything.

So, belief updating through context and sensors. HSML is a cipher for context. The Spatial Web, Web 3.0, the next evolution of the internet, is a library of spaces that contain objects: people, places and things. These objects do things and change over time. The context, the circumstances that govern those shifts and changes, is the most important factor to consider if we're to understand how objects relate to each other, to people and to their environments. So, the free energy principle and HSML.
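Just to give a feel for what "computable context" might look like, here's a hypothetical sketch in Python. The field groupings follow the categories above (identity, location, activities), but the schema itself is invented for illustration; it is not the actual HSML specification.

```python
from dataclasses import dataclass, field

# Hypothetical context record mirroring the categories above: identity (who/
# what), location (where/when), activities (rights, credentials, claims).
# Invented schema for illustration only -- not the real HSML spec.

@dataclass
class ContextRecord:
    entity_id: str                                # identity: who/what
    domain: str                                   # identity: governing domain
    space: str                                    # location: which spatial domain
    timestamp: str                                # location: when
    rights: list = field(default_factory=list)    # activities: what it may do
    claims: dict = field(default_factory=dict)    # activities: asserted facts

    def permits(self, action: str) -> bool:
        """A context query: is this action within the entity's rights?"""
        return action in self.rights

drone = ContextRecord(
    entity_id="drone-42",
    domain="city.example/airspace",
    space="hospital-helipad",
    timestamp="2023-06-23T10:00:00Z",
    rights=["fly", "deliver-medical"],
    claims={"operator": "certified"},
)
print(drone.permits("deliver-medical"), drone.permits("photograph"))
```

The point of the sketch is that once who, where, when and what-is-allowed are fields rather than prose, they become queryable and computable, which is what lets the AI's perception use them.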
The free energy principle is perfectly suited to the programming language of the Spatial Web: HSML, the Hyperspace Modeling Language, which enables computable context based on defining, recording and tracing the changing details in physical and digital dimensions, social dimensions, meanings, culture, conditions, circumstances and situations, whether geometrical, geopolitical or geosocial by nature. Self-evidencing and belief updating: HSML computes context, enabling the AI's perception to understand the real-time changing state of anything in the world, accumulating evidence for a generative model of one's sensed world, also known as self-evidencing. And we're talking about multiple agents: individual intelligences, each with its own generative world model, unique perspective and frame of reference, and nested ecosystems of these intelligences at different levels of self-organization. All of this results in a collective shared intelligence.

So, holonic architecture and Markov blankets. The Spatial Web is a holonic structure. Active Inference AI within the Spatial Web, on the VERSES COSM operating system, operates with a holonic structure of nested spatial domains. If you're not familiar with the term, a holon is something that is a whole in and of itself and yet also part of something greater. The human heart is a great example. The heart is made of cells; each of those cells is a whole cell in and of itself, yet also part of the heart. And the heart is a whole heart, yet part of the human body. So you have this nested structure of whole parts that are governed by the larger part but still have their own intrinsic governance going on within themselves. Each component is a whole element and part of the entire organism. They're nested entities, governed by the rules of the greater, enclosing organism.
And they inherit that governance, which is passed down to the internal parts nested within them, while each part also has its own governing considerations for its specific intrinsic requirements. This is how the Spatial Web is set up to work as well. When you're talking about spatial domains, you can have, for instance, a restaurant inside a skyscraper, inside a city, inside a state: nested domains. And the Active Inference AI also includes the principle of Markov blankets at each level of analysis. Markov blankets define boundaries that act as partitions, mediating the interactions between the internal and external states of the spatial domains, whether that's a single unit, a region or an entire complex network. This results in a self-organizing system. A Markov blanket is just a mediator for how things are attuned to each other under the free energy principle.

So, nested domains in the Spatial Web. One good example of how this might work is a factory. A factory is part of the supply chain and has raw materials feeding into it from various locations, like mines. The factory produces finished goods that may ship to another destination, like a distribution warehouse. Just as the factory is an object within the larger ecosystem of the supply chain, every object within the factory belongs to its internal ecosystem. So with a factory you have the whole internal ecosystem of what's going on inside it, yet it's still part of the global supply chain, with raw materials feeding in and finished goods feeding out to distribution pathways. You have an internal and an external ecosystem, and they operate simultaneously according to their individual internal and external standards, interdependencies and interrelationships.
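A rough way to picture the holonic nesting and the Markov-blanket idea in code (my own toy model, not the actual COSM architecture): each domain keeps its internal state private and exposes only a small "blanket" of states through which enclosing and enclosed domains interact.

```python
# Toy holonic domains: internal state is private to each domain; parent and
# child domains interact only through the "blanket" of exposed states.
# Illustrative sketch of the Markov-blanket idea, not VERSES architecture.

class Domain:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        self._internal = {}   # hidden from everything outside this domain
        self.blanket = {}     # the only states others may read
        if parent:
            parent.children.append(self)

    def update_internal(self, key, value):
        self._internal[key] = value

    def expose(self, key):
        """Publish one internal state to the blanket (the boundary)."""
        self.blanket[key] = self._internal[key]

    def path(self):
        """Nested-domain path, e.g. state/city/skyscraper/restaurant."""
        return (self.parent.path() + "/" if self.parent else "") + self.name

state = Domain("state")
city = Domain("city", parent=state)
tower = Domain("skyscraper", parent=city)
cafe = Domain("restaurant", parent=tower)

cafe.update_internal("occupancy", 41)
cafe.update_internal("till_balance", 812.50)  # stays private
cafe.expose("occupancy")                      # only this crosses the boundary

print(cafe.path())      # state/city/skyscraper/restaurant
print(cafe.blanket)     # {'occupancy': 41}
```

The restaurant is a whole in itself (its internal dict) and a part of something greater (its position in the path), and what the outer domains can see of it is mediated entirely by the blanket.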
So this is how you can see things working within the Spatial Web. Every object on the entire Spatial Web network is part of an interrelated ecosystem that feeds a continuous stream of real-time context between all points in space and time. This continual stream of interaction and communication takes note of all the nuances and changes in the relationships between objects and the parameters that govern them, which leads to adaptive collective intelligence automation. You have programmable spaces, and everything within these spaces has a digital ID, and the way the Spatial Web is set up, it enables zero-knowledge proofs between all of these entities. Anything inside any space is uniquely identifiable and programmable, with a digital twin of the Earth providing a model for data normalization. By programming context into everything in any space, you create a digital twin of that space; the Spatial Web becomes a digital twin of everything, from the planet down to every single system and every single object on the network. The contingencies, changing details, inherent qualities and circumstances of all objects and situations can now be measured and computed, providing a basis for the AI's perception and affecting all entities and their interrelationships. So we're talking about adaptive intelligence automation, security through geo-encoded governance, multi-network interoperability, and it enables all smart technologies to function together on a unified system. Consider all the Web3 technologies, the extended reality technologies, AR, VR, distributed ledger technologies: today all of these are siloed and disparate. This brings them onto a common, unified network with a common language between them, so all of these technologies become interoperable.
And when you're talking about smart technologies, the Internet of Things, there's been no common language for these devices to talk to each other and communicate. The Spatial Web brings the whole Internet of Things onto the common network, where they become baked into the network and inform the AI. All the data gathered by all these sensors and cameras, plus the context baked into the protocol, comes together, continually updating in real time and informing the AI. So you have ecosystems of nested intelligence. The design of intelligent systems must begin from the physicality of information and its processing at every scale or level of organization. Within the Spatial Web, you have AI that scales up the way nature does, aggregating locally contextualized knowledge bases and acting across ecosystems, which maximizes efficiency. This is the complete opposite of how the machine learning models work. They're top-down: massive amounts of data train the model, but there's a cutoff date, so the model can only reference the historical data it was trained on, and again, it's a pattern recognition machine. The algorithms just make sense of the data it's been fed and recognize patterns within it. Every time one of those models computes an output, it requires massive amounts of energy, because it has to pull from that massive library to try to make sense of things. The way the Spatial Web works with Active Inference AI, any amount of data, even a small amount, can be made smart, because you have context informing the AI and sensors giving it real-time inputs, letting it look outward and see what's happening in the world now, rather than referencing historical data.
This leads to minimizing complexity. When you compare machine learning and active inference on efficiency and accuracy, the more complex a system is, the more energy it consumes; that's what I was just referring to with the big-data approach of the machine models. Active inference and the free energy principle naturally minimize complexity. The whole idea of the free energy principle is to minimize the prediction error using the information the system can sense. It's taking in the senses from sensors, the Internet of Things, cameras and so on, but it's also taking in what it knows about the data at hand, informed through the Spatial Web protocol with all the context, all the attributes around everything in any space or any data set. Sensor data is noisy and ambiguous, but HSML provides clarity with specific, precise contextual data that narrows the complexity and closes the gap on the free energy. More certainty means less noise, which means free energy is reduced faster and further. It's a faster, more efficient way to operate.

Another really interesting point is that most of the data that exists sits behind password walls, whether it's people's personal data on the internet or an enterprise's internal, proprietary data. The awesome thing about the Spatial Web and Active Inference AI is that you can be on this network and still have all of your proprietary data gated off, because the protocol allows for that security. It's gated at every touch point.
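The "more certainty means free energy is reduced faster" claim can be sketched numerically. This is my own toy example, a Kalman-style precision-weighted update rather than a full free energy calculation: a high-precision (low-noise) source pulls the belief onto the true state in far fewer steps than a noisy one.

```python
# Precision-weighted updating: higher sensory precision (less noise) lets the
# belief lock onto the true state faster. Toy illustration of "more certainty
# means less noise means free energy is reduced faster".

def steps_to_converge(obs_precision, prior_precision=1.0,
                      true_state=10.0, belief=0.0, tol=0.1):
    """Count updates until the belief is within tol of the true state."""
    steps = 0
    while abs(true_state - belief) > tol:
        # Kalman-style gain: how strongly the observation outweighs the
        # prior belief (observation taken as noise-free for simplicity).
        gain = obs_precision / (obs_precision + prior_precision)
        belief += gain * (true_state - belief)
        steps += 1
    return steps

print(steps_to_converge(obs_precision=9.0))   # precise source: few steps
print(steps_to_converge(obs_precision=0.25))  # noisy source: many steps
```

The mechanism is the same in both runs; only the precision differs, and that alone decides how quickly the prediction error collapses, which is the role HSML's contextual data plays in the argument above.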
So you can put boundaries around your particular data and still access the power of the network and of the Active Inference AI for your proprietary data, because it can take that small amount of data and make it smart. This is a game changer for access to AI for any system, any database. Embodied AI: active inference mimics biological design, so it's like embodied AI with the ability to take action, and this is the core engine of the Spatial Web. What makes active inference so accurate? It's so accurate because it continually looks outward into the world, measuring the world in real time through a global network of sensors, IoT devices, cameras, robots, drones, anything connected within the Spatial Web, the digital twin network of the world. HSML informs the AI with precise data, extracting the signal from the noise and minimizing the prediction error, and this mimics the way humans and animals make decisions. This creates a cybernetic feedback loop: the AI perceives the world and updates its model of the world, its beliefs about what it knows to be true, gaining an understanding of the intricacies and inner workings of the world so it can make decisions and take action. And it provides ever-evolving intelligence. The more this feedback loop plays out, the more the AI learns about the world and the results of the actions it takes, just as a child learns about the world as it grows and interacts with it, and the more accurate the AI becomes by further updating its understanding of the world. It's been described to me that when VERSES launches their COSM operating system later this year, it's going to enable everybody to build these intelligent agents on top of the network.
They have an intelligent app store that's going to launch with the operating system, so everybody can build these intelligent agents. Here's the difference: when you build an app on, say, the Apple App Store or Google Play, you're building a siloed piece of software. Those apps are not aware of each other; they just do whatever their function is. The intelligent agents you build on COSM are going to be aware of each other and aware of the network, so they'll be empowered with the AI and able to act as agents within the network. Now, it's been described to me that when people first build these intelligent agent apps, there won't be a stark difference between what they can do and what the machine learning models can do, because it's all pretty impressive right now. But take the comparison of a chimpanzee and a toddler: at first they seem about the same intellectually, but the toddler is going to grow. It's going to grow into a full-fledged adult and keep learning, whereas the chimpanzee has maxed out. So that's going to be really interesting to watch over the next year or two: we're going to see this Active Inference AI modeling the way humans learn and growing in intelligence, with scores of these intelligent agents all working together within the network.

Now, one of the really awesome things about this is that Active Inference AI is explainable, auditable and governable. VERSES AI just published a groundbreaking industry report about a week and a half ago called Designing Explainable Artificial Intelligence with Active Inference: A Framework for Transparent Introspection and Decision-Making. So active inference is explainable AI. How can this be?
Well, active inference can self-report. Machine learning models are black boxes, and therefore they can't quantify their uncertainty. They're unobservable, unalterable and unknowable. The way the algorithm processes the data and comes up with its output, you can't see what's happening, you can't know how it reaches its decision. It's just a black box; all you really know is the input and the output, so you can't explain it. Now, one of the interesting problems with the call to put governance around AI is this: if you have black boxes that are not controllable and not explainable, how do you govern them? The only options have been to regulate the companies developing these tools, or to just let the free market reign. Letting the free market reign is tricky with AI, because it can lead to an arms race for AI domination, and that carries real risks for humanity. The other problem with regulating the companies is that you'd be basing those regulations on their being able to do things like explain how results come about within their tools, or holding them accountable for outputs that are false or cause harm, and you really can't do that either, because what's happening inside these tools is not auditable and not controllable. So there's a real issue with governance around these machine-model AIs. With Active Inference AI, it is completely explainable, auditable and transparent. And when you're talking about self-report, Active Inference AI is capable of introspection. This enables the intelligent agents to access and analyze their own internal states and decision-making processes.
So we get a better understanding of their decision-making process, and they have the ability to report on themselves, to explain how they arrived at their decisions. Beliefs and belief updating are known, and this dissolves the explainability problem of conventional AI. Then you also have programmable intelligence. VERSES refers to this as code as law: the act of correcting a machine occurs within the code. Within the computational architecture of the VERSES AI system, human law can be transformed into computable law that the AI can comprehend, abide by, and act on accordingly within its decision-making process. This is a process that is fully auditable, knowable, and updates in real time. And VERSES has been proving this concept in a program called Flying Forward 2020, a European drone project conducted with the European Union. There are eight or twelve different countries involved, I'm not sure exactly, but basically what they've proven is that they can take the Spatial Web protocol, HSTP and HSML, and translate human laws, all the laws around airspace and how those laws shift crossing country boundaries, into programmable code, through the protocol, that the AI within the drone can understand and then abide by. They've met with great success in this project: they've proven these drones can deliver medical supplies to a hospital while crossing borders into various airspaces, understanding no-fly zones versus fly zones, all of these things. So: human law made computable, so that the AI can understand and abide by it. When you're talking about AI governance, the protocol lets you actually program law that the AI can understand.
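To give a flavor of what "human law made computable" could look like, here's a hypothetical sketch. The rules, zone names, permit table and flight-plan format are all invented for illustration; this is not the actual Flying Forward 2020 encoding or HSML.

```python
# Hypothetical "code as law" sketch: airspace rules expressed as data that an
# agent checks its plan against before acting. Zones, permits and the plan
# format are invented for illustration, not the Flying Forward 2020 encoding.

NO_FLY = {"military-base", "airport-approach"}
BORDER_PERMITS = {("NL", "BE"): True, ("BE", "NL"): True, ("NL", "DE"): False}

def plan_is_lawful(waypoints):
    """Each waypoint is (country_code, zone). Reject entry into no-fly
    zones and border crossings without a standing permit."""
    violations = []
    for (c1, _z1), (c2, z2) in zip(waypoints, waypoints[1:]):
        if z2 in NO_FLY:
            violations.append(f"no-fly zone: {z2}")
        if c1 != c2 and not BORDER_PERMITS.get((c1, c2), False):
            violations.append(f"unpermitted border crossing: {c1}->{c2}")
    return (len(violations) == 0, violations)

route = [("NL", "depot"), ("NL", "corridor-7"), ("BE", "hospital-helipad")]
print(plan_is_lawful(route))   # permitted NL->BE crossing, no restricted zones

bad = [("NL", "depot"), ("NL", "airport-approach"), ("DE", "hospital-helipad")]
print(plan_is_lawful(bad))     # flags the no-fly zone and the NL->DE crossing
```

Because the rules are data rather than a black box, every rejection comes with its reason attached, which is the auditable, knowable quality being described.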
Then the other aspect of this is humans with AI, because there's a lot of fear baked into people, especially around all the sci-fi scenarios that have played out in fictional books and movies. One of the great things about Active Inference AI within the Spatial Web is that the AI grows in tandem with humans. You can take programmable law, make it something the AI understands, and you've got a symbiotic relationship between the AI and the human: a networked system that operates in tandem with humans, with the AI growing its intelligence in sync with the growth of humans and the world. It makes for a controllable AI. In the VERSES white paper, one of the opening statements is that its purpose, its denouement, is a cyber-physical ecosystem of natural and synthetic sense-making in which humans are integral participants, what they call shared intelligence. So VERSES AI and the Spatial Web Foundation offer us a framework in which we can build an ethical and cooperative path forward for AI and human civilization. And one passage I want to read to you from the white paper about this goes, quote: We believe that developing a cyber-physical network of emerging intelligence in the manner described above not only ought to, but for architectural reasons must, be pursued in a way that positively values and safeguards the individuality of people, as well as potentially non-human persons. This idea is not new: already in the late 1990s, before the widespread adoption of the internet as a communication technology, a future state of society had been hypothesized in which the intrinsic value of individuals is acknowledged, in part because knowledge is valuable, and knowledge and life are inseparable. That is, each person has a distinct and unique life experience, and as such knows something that no one else does.
This resonates deeply with our idea that every intelligence implements a generative model of its own existence. The form of collective intelligence that we envision can emerge only from a network of essentially unique, epistemically and experientially diverse agents. This useful diversity of perspectives is a special case of functional specialization across the components of a complex system. So, you know, within the VERSES white paper for this active inference AI, they're very clear about the importance of the human experience, and the various human experiences, in tandem with the growth of the AI within the Spatial Web. So I just thought that was really important to kind of let people know. So then you have accurate world models. Collective intelligence trained on real-time data evolves, making decisions and updating its internal model based on what is happening now, not on historical data sets. Active inference AI is not a language model generating words about the world based on outdated knowledge it's been fed regarding the world. Active inference is like a biological organism that perceives and acts on our world by generating ever more accurate models, understandings, and beliefs about our world. These ever more accurate world models enable better decisions, a smarter world. And this is the true measure of intelligence. So, active inference AI as the operator, and this is the final section. So you have the internet of everything. The days of training AI on big data will give way to an interconnected internet of everything that deploys active inference AI throughout the network, inherently secure and accurate because it takes any point of real-time data and makes it smart through an empowered ecosystem of interconnected AI apps that act as intelligent agents. Active inference, KOSM, and the spatial web.
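The contrast drawn here between static training sets and live belief updating can be illustrated with a tiny textbook Bayesian sketch. To be clear, this is generic Bayes rule, not VERSES' architecture; the two-state "loading dock" world and the likelihood numbers are invented for illustration.

```python
# Minimal sketch: a belief over hidden states is revised immediately by
# each new observation, with no retraining on a historical data set.

def update_belief(prior, likelihood, observation):
    """Bayes rule: posterior(state) is proportional to prior(state) * P(obs | state)."""
    unnorm = {s: prior[s] * likelihood[s][observation] for s in prior}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

# Toy world model: is a loading dock "busy" or "idle"?
belief = {"busy": 0.5, "idle": 0.5}
likelihood = {
    "busy": {"truck_seen": 0.9, "no_truck": 0.1},
    "idle": {"truck_seen": 0.2, "no_truck": 0.8},
}

# Each sensor reading from the live stream updates the belief in place.
for obs in ["truck_seen", "truck_seen", "no_truck"]:
    belief = update_belief(belief, likelihood, obs)
```

After two truck sightings and one miss, the belief still favors "busy" but has softened; the model always reflects what is happening now, which is the property the transcript is pointing at.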
Active inference AI inside of the VERSES AI KOSM operating system has access to the entire spatial web, with IoT sensors and cameras and all context markers attached to all objects within all spaces within the network, tracking and identifying changes over time. This AI becomes an ecosystem of intelligent agents that are all interoperable over HSTP. So what does this mean for the planet? Active inference AI can run the planet. Dan Mapes, who's one of the co-founders of VERSES, describes the spatial web with active inference AI as a nervous system for the planet. Together, these technologies enable the only AI in the world that can run critical operations for systems, infrastructures, smart cities, and the planet. So when you're thinking of climate or traffic control or smart cities or advanced education or hospitals or airports, all of these are critical systems that need accurate functioning of the AI. You cannot rely on machine models that give you plausible answers, plausible outputs that sound accurate but aren't necessarily accurate, because they're just trained to make it sound appropriate. Appropriate and accurate are very different things. So because the active inference AI is based on real-time data, actual data, context that's baked into everything, it's an accurate form of AI and it can be trusted to run these critical systems. So it can be the operator. And then we come to sentient intelligence. So this is the missing piece of the data puzzle that gets us to artificial general intelligence. We now have the world model, and we now possess the ability for continuous and adaptive context markers, enabling a cognitive model of this world and the ability to compute awareness.
So this is taking us from artificial narrow intelligence, which is what these machine models are, to potentially artificial general intelligence and then artificial super intelligence, because the more the active inference AI learns about the world, it can start to question and become curious about the world. And it's the difference between asking the AI agent something and having the AI agent ask you something to clarify what you want or what your needs are. It's that curiosity back on you that'll take us to the artificial super intelligence. So I'm just gonna close by letting you know that VERSES AI, their KOSM OS, is going to be launching at the end of the year. And the idea is, let's create a new world together. They've created the tools for anybody to create the AIs on top of. So it's a huge opportunity. They're gonna be launching the beta at the end of this year. If you go to my website, DeniseHolt.us, there's a menu item where you can sign up for the beta if you'd like; they're gonna be selecting developers and projects to be a part of this beta. So if you'd like to be considered, feel free to sign up there. And then of course, I have a podcast, and it's on YouTube or Spotify, Apple, anywhere you find a podcast. But if you'd like to learn more about this, that's really my role: I've just been educating on the Spatial Web protocol and this active inference AI. So there's a lot of information on my website, there's a lot of articles on the podcast, a lot of information so that you can learn more. So I think that's it. Awesome. Thank you. Thank you, Daniel. All right. I'm gonna come back on video and then, all right. Well, thank you for the amazing presentation. There's many places to begin, many ways to do it. But again, thank you again for sharing this information. So. My pleasure. I'll start, but just asking live stream viewers, if they wanna ask anything, just put it in the live chat.
And then perhaps, since there were so many topics that were brought up, I can just reflect on a few of them and kind of summarize what I heard, especially where it's different from the framings we've heard on other streams, where we go way more into the technical, but there were just so many great ways that you had of framing what was happening, okay? Sound good? Yeah, sounds great. All right. So you described active inference as biologically based, or perhaps more exactly biologically inspired, rather than a literal copy of biological systems. And it made me think about the way of viewing the world in terms of its spatial connectivity, and then thinking about the world in terms of its statistical or causal connectivity. And so when people think about what biologically inspired AI would look like, they might think about the actual connections amongst the neurons in the brain, or the actual connections amongst ants in the colony, but that connectivity solves that organism's problem. So if you wanna solve some different problem or something else, you need to learn from those structures but not copy them verbatim. And so the level we have to pull back to, the level of thinking, is about the agent in their engagement with the environment, and the minimization of surprise instead of the maximization of reward. And that's a really clear path, I think, from what we can generalize from natural intelligence towards how we can think about what kinds of imperatives and design principles there should be for synthetic intelligent agents. Absolutely, yeah. Okay. What brought you to this presentation? Like, what was your way into the spatial web, or how did this even come to be? Well, so Dan Mapes, who's one of the co-founders, he's a longtime friend of mine.
And so I knew, when he and his co-founder Gabriel René, who's the CEO of VERSES, were starting VERSES back in 2016, 2017, I knew what was coming, and I've been kind of watching it unfold over the last handful of years. And so then last summer, it became really clear to me that VERSES was getting close to launching to the public. They've been working over the past couple of years with Fortune 500 companies, with governments and smart city development all over the globe. They've been involved in this Flying Forward project. They've been deep into the supply chain. One of the first apps they created was an app called Wayfinder, which is specifically built for the supply chain. And so watching all of that, but then knowing it's coming later this year, I was like, okay, there's a transition and people need to understand what's coming, because I've been involved in blockchain and Web3 for the last handful of years. And so you see all these amazing technologies being built and all these awesome projects, but one of the biggest problems is that they're all siloed, and interoperability is the biggest struggle. To me, it's like, okay, these people need to know this is coming, and the interoperability is gonna be there not just within the Web3 or blockchain space, but really all technology. Really, anybody who has a business that is present on the World Wide Web is going to want to evolve that presence into the spatial web, just for the empowerment of what this AI and this interoperability is going to do for their business. So I knew there was a learning curve. So that's really what I set out to do: to kind of break down that learning curve and really start to educate people about the protocol, about the AI, and about what to expect. All right, great. A few more questions. Again, we're just starting to explore this, and so whatever you do or don't know is all good.
How will the open source nature of the web and the ecosystem around active inference related technologies be secured? Well, so basically, you have to think of the KOSM operating system as literally the operating system that is gonna enable people to easily build these intelligent agents within the spatial web network, right? So that allows anybody to come on this network and build these intelligent agents, these intelligent apps, right? So the beauty of it is that the protocol itself for the spatial web bakes in security at every touch point, right? So you can program in access permissions, different things like that, into whatever you're building. So it really becomes individualized, right? You can gate off to where you have data that's proprietary, and then you have data that's allowed to be accessed by the world. You can make it look however you want for whatever you're building, whatever your project is, and what these intelligent apps then become is your opportunity to share your knowledge with the world, right? And you can share it to whatever extent you want to, you know? Thanks. Talking about just a few terms I wrote down: culture, ownership, authority. How do differences amongst locations and peoples play out when there are such different concepts of those terms and many other terms that arose? It might not just be something like, well, the drone can go a hundred feet here and 10 feet here. What about when there are different concepts of authority and ownership and justice and culture? Right, so that's one of the beauties of the world, is that we all have our own unique understandings, our own unique beliefs, our own unique cultural traditions, and the different governing bodies and types of government, right? So the protocol needed to be built to be able to preserve all of that, you know?
You don't want this next evolution of technology that includes AI to become a dictatorship of, this is the way the world's gonna be and this is how everybody has to operate. This preserves the uniqueness and the individuality of everybody in any area. So really, the way this plays out, it's going to be able to be customized for regions, for cultures, belief systems, you know? So it's like this sociotechnical ability to safeguard and preserve traditions and different mindsets. So really, it's not gonna be a hindrance as much as it is going to be something that preserves the beauty of individuality. And so when you're talking about systems like that, like you were talking about, it's pretty much gonna be similar to where it is today. Like, people have the right of refusal, right? Even in a restaurant: no shirt, no shoes, no service, you know? So there's not gonna be a lot of difference with that. And then when you're talking about things like drone delivery, you're gonna have other standards bodies that are making those decisions to make things more inclusive, rather than, you know, the kind of silliness of denying people for whatever reason. Oh, all right. I'll ask a question from the live chat. Bert asks, thank you for the presentation. What is the limitation of implementing this vision? Is it getting people to use the hyperspatial language, or getting active inference working right? Or, I'll add, a secret third thing? Well, I've had people ask that, saying, well, what's the guarantee that anybody's going to take part in this new spatial web? And you have to think of the way the internet has evolved, right? This isn't just some whole separate network that you're gonna try to bring everybody onto.
It's the same network we're already on, and it's just evolving the capabilities within that network. So when you think of the way the internet started, you had TCP/IP, and the killer app was email, and then Sir Tim Berners-Lee created HTTP and HTML, and then you had the World Wide Web, right? So it didn't get rid of email. It just enabled this whole other capability, that now people could build websites. Now, instead of just sending digital messages from computer to computer, you could actually bring an audience to your webpage and engage that way, on like a property within the network. So that's all that's happening right now: we're just evolving beyond this Web 2.0 environment that really lacks security, and it lacks protection for people and their data. And it opens up a lot of vulnerabilities, because all of the transactions are taking place, the data is transacting, on a website which is the property of a centralized organization, right? So what this is doing is decentralizing it. Now, everything in any space becomes programmable and becomes a spatial domain, right? So now you have self-sovereign identity, you have zero trust architecture, you have this ability to program 3D spaces. So it's going to be a natural progression for people to come take advantage of this new programming language, to be able to empower and expand what they can do within the network we're already on. So, you know, the World Wide Web is like 40 billion computers. Now we're evolving from just computers to spaces and objects and everything within them. So it's not going to be a matter of getting people to do that. People are going to go, wow, I can do that, and it's going to be a natural progression. As far as the active inference AI, it's going to start working within the network and it's just going to grow.
So the more people are engaging within the spatial web network, the more people will be building these AI apps. And I think if we look at what's happened just in the first six months of this year since ChatGPT came out, with all of the open source and everything that's been available: as soon as it got available to the public, now we can tinker with it, now we can build things, and you've had an explosion of development. So with the ability to build these intelligent agents within the network, and do it very easily, and have this interoperability within the network, I just see us having this explosion of development. Of course, that's my opinion, but that's what I see happening. You stake your claim. That's a beautiful thing. I'll ask another question. Your answer led perfectly to it. Bert asks again in the chat: also, is there a resource where we can learn more about how to build a hyperspatial website? So, I don't think those resources are out yet. And it's not gonna be a website. As soon as the KOSM operating system launches, the AI is going to become a tool for building anything. So the other side of this app store, where you can build an intelligent agent, is the interface. The public interface is an intelligent agent called GIA, and it stands for general intelligent agent. So GIA then becomes the interface for the public of the network, of the spatial web. So GIA is gonna become your personal assistant, everybody's personal assistant. GIA will be able to act on your behalf within the network. So just like with ChatGPT and GPT-4, how the AIs are starting to be able to program, right? Within the spatial web, the AI is gonna play an integral part in creating your own intelligent agents, in establishing your own presences and spatial domains and things like that.
So, I don't have a lot of detail on how that's going to look or work, but that's my general understanding of it. So I don't think there's gonna be a huge learning curve when it comes to that. Wow, all right, I'll go to another question. Dave asked in the chat, does VERSES plan eventually to expand the inventory of element types beyond the 12 mentioned in today's presentation, to handle concerns that don't seem to fit easily in this framework? Yeah, so I don't know. I don't know if I can really answer that, and the reason why is because my technical knowledge of what's happening behind the scenes is very limited. So the answer could be yes, it could be it's already there. I mean, the reality is, those 12 aspects are just kind of a generalized idea of what you can program. But when you're talking about HSTP being multi-dimensional, you can program in everything from, like, temperature to pressure. If it's programmable and definable in a computational form, you can bake it into the context around whatever you're programming. So in that respect, it's pretty unlimited, except for that one limitation: it has to be measurable and definable in a computational way. I'll try to restate that just how I heard it, because I think it's a really important point. The 12, again, this can be totally off base if I don't have any information, but the 12 components shown are kind of like one way of fully showing your work. But when you really get down to the last mile and the shoes on the ground, you're in the realm of the particulars and the specific measurements, at which point complex cultural topics will be playing out through the particulars, not necessarily inheriting some stringency from a top-down ontology. If that's fair, I just wanted to kind of say it that way. Yeah. Okay, you mentioned, I think a few times in the presentation, the idea of realities.
So, we have our beloved cyber-physical reality, and what are the other, the plural, realities? So basically, when you think of being able to program the context into the spaces, right? Then that provides you with a digital twin, which automatically is giving you this mixed reality ability, right? Because you have the digital twin and then physical reality, this is going to open us up to this augmented existence, this augmented reality existence that we all kind of envision, but the foundation hasn't been there. The framework hasn't really been there. This is providing that framework. So we're gonna see this kind of augmented reality future for us, where we're able to cross through mixed realities. You're gonna have the virtual reality, you're gonna have the augmented reality, and then you're gonna have the physical reality, and there's gonna be an unlimited way of mixing that. And then the other aspect of this, too, is when you talk about virtual reality and you talk about the metaverses and different metaverse experiences. I think one of the reasons why the metaverse experience has been so lackluster is because they're siloed right now. When you consider the spatial web, that's gonna make it to where you can jump in and out of these virtual reality spaces. You can actually take your assets from one space to another space, because all of it then becomes interoperable over the network. So this is going to really open things up; everything that we've been wanting to do is gonna now be possible within this spatial web network. Right, all right, as we kind of begin to land the plane, I'll just reflect on two or so positive aspects that I think are part of the vision, which we will, of course, await future sensory observations to confirm, to reduce our surprise, but in active inference, what we expect is what we prefer. So if we think it's the most likely path forward, then c'est la vie.
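The "what we expect is what we prefer" framing, and the earlier contrast of surprise minimization versus reward maximization, can be sketched as action selection that minimizes expected surprise relative to a preference distribution. This toy thermostat, its states, and all its probabilities are invented for illustration, and it omits the epistemic (information-gain) term that the full expected free energy in active inference includes.

```python
import math

# Preferences play the role a reward function would play elsewhere: the
# agent "expects" comfortable outcomes, so deviations from them are surprising.
preferences = {"comfortable": 0.9, "too_hot": 0.05, "too_cold": 0.05}

# P(outcome | action) for a thermostat-like agent (invented numbers).
transition = {
    "heat": {"comfortable": 0.3, "too_hot": 0.6, "too_cold": 0.1},
    "hold": {"comfortable": 0.7, "too_hot": 0.15, "too_cold": 0.15},
    "cool": {"comfortable": 0.3, "too_hot": 0.1, "too_cold": 0.6},
}

def expected_surprise(action):
    """E[-log p_preferred(outcome)] under the predicted outcome distribution."""
    return sum(p * -math.log(preferences[o])
               for o, p in transition[action].items())

# The agent picks the action whose predicted outcomes least surprise it,
# which here is "hold", since it most often lands on the preferred state.
best = min(transition, key=expected_surprise)
```

The design point is that nothing here is a scalar reward to be maximized; the same preference distribution that defines "what the agent expects" also defines what counts as a good action.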
So the first positive thing is that it starts from the actual and the measurements, defining things at some defined cyber-physical layer, and I'm potentially extrapolating here. But earlier this week, we had a guest stream with Mahault and Maxwell, and in the quantum free energy principle approach to the brain, there is a top-level decision maker within that blanket where the buck stops. So that doesn't mean that the brain is not composed as a component, a part of larger compositions, but within the given blanket, there's a highest stop. So instead of handing our self-sovereignizing ability and intelligence to the black box, here we can start from the actual, maybe even pick up and stabilize or change what we already have. So rather than proposing just a totally different future that some scaling law is gonna help us get to, maybe or maybe not, we start with what we already measure and the decisions we already make. So I very much appreciate the pragmatism. What else might you reflect on that point? I think you said it well. Yeah. Okay, then the second piece was that everyone's knowledge is adding something. It might be a close sample or a distant sample on some dimension in some setting, but everyone, every cognitive agent, is adding non-zero information. And there's a lot of technical details underneath why that is. The amount of information added might be very small for a given question, but because of differences, we literally get more perspectives. So we always see better with more perspectives, if we have the right integration approach. So that's also very powerful. And it's something that is formally true with the math and the statistics. And I think we want it to be true for our ecosystems. Right, yeah, absolutely. And it's interesting because, yeah, you want to preserve that, the diversity of information and opinion, and have that accessible, for sure.
Awesome, well, it will be quite the coming days, probably, or at least, again, as expected or preferred, but we'll very much look forward to you coming back on the active streams, as well as some technical presentations that might help our community and our institute learn more about the details, or more philosophical discussions; like, the whole spectrum of learning and applying and implementing these technologies will be really important to characterize, but for sure it's been incredible to have you share this first bit. Are there any last words you'd like? No, just thank you so much for having me. It's been a pleasure, and I will throw this out there: I know there's not a lot of information available out there regarding the Spatial Web and the active inference AI. So if anybody has any questions, feel free to reach out to me. You can reach me on Twitter or LinkedIn or my blog, DeniseHolt.us. There's plenty of ways on there to reach me. So I'm happy to answer any questions anybody might have. All right. Thank you, Denise. Till next time. Thanks a lot. Bye-bye.