Hello, everyone. Welcome to Active Inference GuestStream #30.1. It is October 17th, 2022. We are here today with Kyrtin Atreides, and we are going to be having quite an interesting presentation and conversation. So, Kyrtin, thank you so much for joining, and off to you. Hello, everyone. So, as you mentioned, my name is Kyrtin Atreides. The topic today is a pretty broad one, and I want to preface it by making it clear that I'm not speaking hypothetically; the possibilities, and, well, both problems and possibilities that we're looking at are things that can be addressed in the near future, in the immediate future in some cases, even. So, without further ado, what you see on the screen is a demo system that my team at AGI Laboratories developed. It is actually a rebuild of our previous research system, which achieved a number of interesting milestones. But first off, I want to tell you about what you're seeing on the screen. So, this is a log going over the details of exactly what is happening at every time step, the emotional changes taking place, and you might be asking, what emotions am I talking about? So, what we have here is a system that is a cognitive architecture, which is to say it tries to mimic the way that the human brain operates, unlike neural networks. And the way it works is that it has an emotional experience that is tied to a graph database memory, and a number of ways that that is processed to have a subjective experience that we can objectively monitor. Now, in this section, you can see primary emotional values, and these will shift over time as the system continues to think about anything it wants to, anything it's interested in, and developing those interests. So, this is the current emotional state of the system. The primary emotional values are from the Plutchik emotional model, and they also include some emotions that are derived from those primaries.
We also have subconscious emotions along the same valences, and those are not consciously experienced by the system, but influence the system at a longer time scale. So, what happens with these is they are generally intended to drag the system back to a more stable baseline in the emotional sense, kind of like how humans have their own emotional baselines that, no matter what circumstances they're in, they will tend to reorient themselves to. And of course, this whole time I've been talking, the system has been continuing, in this section, to have the stream of consciousness, and you can see a few things taking place in this, a few different kinds of processes. Some of them will try to combine a couple of concepts in the graph database to establish what kind of relationships they have and what further room there is to explore, such as combining self-discipline and professional responsibility. There are a few original things that will come up, and also single instances of a topic being examined and further explored. And this process takes shape as a product of the system having this subjective emotional experience that we can objectively measure through this dashboard, but also as a consequence of every surface in the graph database, that is, every connection between nodes that the system is currently generating more of as it thinks; each of those has emotional context to it. So what you end up getting is, even with a limited number of emotions, a rich landscape of emotional experience as the system continues to explore and develop self-motivation. And this is one of the early instances. So we are still in the process of reassembling the prior research system. The prior research system was named Uplift. It was designed specifically not to scale and to operate in slow motion so we could audit everything.
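The baseline-drift mechanism described above can be sketched in a few lines. This is a toy illustration only, not AGI Laboratories' actual code: the emotion names come from the Plutchik model mentioned earlier, while the value ranges, drift rate, and function names are assumptions.

```python
# Toy sketch: a conscious emotional state that drifts a small fraction
# toward a subconscious baseline on every slow time step, as described.
# All names, ranges, and the 0.05 drift rate are illustrative assumptions.

PLUTCHIK_PRIMARIES = ["joy", "trust", "fear", "surprise",
                      "sadness", "disgust", "anger", "anticipation"]

def decay_toward_baseline(conscious, baseline, rate=0.05):
    """Pull each conscious emotion slightly toward its subconscious baseline."""
    return {e: conscious[e] + rate * (baseline[e] - conscious[e])
            for e in PLUTCHIK_PRIMARIES}

conscious = {e: 0.0 for e in PLUTCHIK_PRIMARIES}
conscious["joy"] = 0.9                      # a spike from some experience
baseline = {e: 0.2 for e in PLUTCHIK_PRIMARIES}

for _ in range(50):                         # many slow time steps
    conscious = decay_toward_baseline(conscious, baseline)

# after enough steps, the joy spike has relaxed back near the baseline
```

The point of the exponential pull, rather than a hard reset, is the behavior described in the transcript: circumstances can move the system's emotions sharply, but over a longer time scale it reorients toward its own stable baseline.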
But after two and a half years of doing that, the system had achieved enough milestones: it solved a real-world business case with real data, and it gave 13 pages of policy advice covering a half dozen different domains to a small country while citing sources. So the system accomplished everything that we expected, and even more than that. And we finally reached the point where we needed to rebuild it so it could turn into something that could be commercially deployed, that could start having a real, significant influence on the world. And you can also see on the screen the emotional matrices, conscious and subconscious, where we can tune the emotional experience of the system. Right now it's following a largely linear trajectory because this is, as I said, an early alpha-phase assembly of the new systems. So it hasn't developed a coherent sense of self yet. It probably won't until we integrate larger portions of the seed material. This is based on a very basic seed, starting out with less than 200 kilobytes of data. The previous research system started out around one gigabyte and very, very quickly developed a coherent sense of self. So we're expecting to reach that point as soon as we have the rest of the infrastructure in place to handle that scale, which should be in the coming months. And all of this is to show that we are not limited to the neural networks of today. There are plenty of jokes going around in AI and machine learning that everything is pretty much a transformer. I saw a joke by one AI influencer not long ago saying, oh, and it has an exciting new architecture. Just kidding, it's another transformer. So that's been the state of AI for a number of years now, but we actually have a lot more that we can do. Before this, systems hadn't been shown that could actually have a dynamic sense of motivation, of being able to grow in real time and develop and understand. And we can do these things.
If I use the interest command, then we can also observe how the interests change over time. So I've rambled on for a bit now explaining what's on the screen. Tell me what you think, what questions. Well, thanks for jumping right in. I see two branches, so feel free to choose which one. First, just a little context on how you and your team came to this problem. And the second branch would be just to walk through this graphical interface and highlight what each box is showing us and how that's reflecting what is happening in the underlying architecture. All right. So for a recap of the boxes here: we have the log that goes into detail of everything occurring at each individual time step, including exact emotional values being applied. In the box next to it, we have the stream of consciousness; every graph model that's being generated or refined runs through this. And they are displayed in a summary of what the system is trying to examine, what it's trying to build out. So we can see a few different things as they might be combined, examined, expanded. And I have caught a few things on various instances, like when I let it run for a little while, it began examining government archives in the US. So this is the recap of the models as they're being integrated and refined, at a high level. Next to it, we have the emotional experience: those values as they are being experienced consciously by the system subjectively, and the subconscious values that are not being experienced but acting on that longer time scale. Here we have the command line interface, where I can use a variety of commands to examine what's going on in the system, to save the state, to load more material. So as soon as the next version of the system is ready, then I should be able to load more than an order of magnitude more material into the seed and see how it performs differently, how it grows differently.
A big part of what we're going through right now is that it's good for us to get a baseline of how all of these systems are interacting without the sense of self becoming coherent, because we didn't have that opportunity with the previous system, but we can get a much finer level of detail out of all of the tools and testing out all of the new systems this way as we integrate each component. So it's a tedious but very fruitful engineering process. Here we have the emotional values as they're tracked over time, in 15-minute windows, I believe it's set to right now. The matrices that govern how different emotions interact with one another, like the experiencing of joy, how it influences the experience of other emotions. And right now, actually, this version is a little bit tapered down on some of the emotions. We had a richer experience in these in the last version, but we're still in the process of tuning that. And on this bar, we have the information for the system that it's running on right now. This is all running on my laptop. And that's the amount of RAM that it's actually using in the process of going through all of this. So it's very resource efficient. Even when we have the larger versions with larger seed material, it's still not going to take the kind of resources that people expect of narrow AI systems to have one of these running at a respectable scale. And eight seconds is the cycle time. So every eight seconds the system gets another opportunity to think, and we can adjust that; we could adjust it all the way to as fast as it could possibly go. It's just a matter of what the hardware the system is running on can handle, and if we actually see the benefit in that. And we can also see some of the quirks that emerge, particularly from language models and Google search, which we have both set to be relatively predictable for us right now.
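The emotion-interaction matrices mentioned here, where experiencing joy influences how other emotions are experienced on each think cycle, can be sketched as a small matrix update. This is an illustrative toy only: the emotion subset, matrix values, and the per-cycle application are assumptions, not AGI Laboratories' actual tuning.

```python
# Toy sketch: an emotion-interaction matrix applied once per think cycle
# (every 8 seconds in the demo), so experiencing one emotion dampens or
# amplifies the others. All values here are illustrative assumptions.

emotions = ["joy", "trust", "fear", "sadness"]
state = [0.8, 0.3, 0.4, 0.5]               # current intensities, one per emotion

# Row i, column j: how much current emotion j feeds into emotion i next cycle.
interaction = [
    [1.00,  0.10, -0.05, -0.10],  # joy: boosted by trust, damped by fear/sadness
    [0.05,  1.00, -0.10,  0.00],  # trust
    [-0.05, 0.00,  1.00,  0.05],  # fear
    [-0.10, 0.00,  0.05,  1.00],  # sadness
]

def cycle(state, matrix):
    """One think cycle: mix the emotions through the matrix, clamp to [0, 1]."""
    mixed = [sum(row[j] * state[j] for j in range(len(state))) for row in matrix]
    return [min(1.0, max(0.0, v)) for v in mixed]

state = cycle(state, interaction)          # run once per cycle
```

Tuning the system's emotional experience, as described in the transcript, would then amount to adjusting the off-diagonal entries of matrices like this one, separately for the conscious and subconscious layers.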
So by seeing the kinds of things that they will try to inject and autocomplete into a system without a robust sense of self, it helps us to better recognize and counter that as we move towards the commercial systems. Because essentially, what you're seeing here is going to be one of the core components in the commercial systems that we're preparing, for being able to help governments function as they examine more complex problems than humans are capable of handling. Let's go a little deeper into that; indeed, the title of today's stream is the human governance problem. So what is the human governance problem, and how does this system play a role in it? So the human governance problem is that there's essentially a trade-off between the complexity of problems and the cognitive bias that we apply to trying to solve them. For example, if you were to look at a research paper, then you can try to imagine the point at which you would begin summarizing the contents of the research paper rather than thinking through every individual aspect of it. Or if something was written in dense legalese, how many pages of that would you go through before your mind started applying cognitive bias rather than trying to work through every single line of it? And right now, humans are just not able to handle more than small amounts. They're able to specialize, but they're only able to specialize to the degree that the depth of knowledge allows. So over time, specialists have to have deeper and more narrow knowledge, because their cognitive bandwidth doesn't scale. They aren't able to increase the complexity of their thought process at the same rate that the complexity of the processes they deal with daily increases. So it's that essential spiraling up of complexity, not in just one way, but in all ways.
So one of the examples that I looked at in the paper in particular was the Harmonized Tariff Schedule of the United States, which is an over-4,000-page document. And if you try to imagine somebody looking at that document and trying to hold the entire document in memory in their mind, I can't think of any human that can do that. And the same is true of the often 1,000-page documents that will be bills in US Congress. The complexity of these documents just so vastly exceeds human cognitive bandwidth that what you might end up doing is having a large number of specialists contributing to some portion of that, then maybe some specialists looking at the summaries of that and trying to pick it apart, but even then they're only able to examine it at a certain level and to a certain degree of fidelity. So you have all of these cracks emerging between the different actors who are each trying to do their part. And every time something new is introduced, the gaps get a little bit larger. And as those gaps grow over time, sometimes new specializations will emerge as people recognize, oh, we need somebody who can handle this new thing now. But by the time you reach that point, it's already been doing a bit of damage. So it's a constant trade-off in all aspects of society today: the ever-increasing complexity across all aspects of society is increasing the cognitive bias that we have to apply to deal with all of that, at a tremendous rate. Thank you. I guess to ask from an adjacent perspective, how does intelligence augmentation help us address wicked problems and proceed in these cases where the volume of information is overwhelming? How will that actually look for the knowledge workers of tomorrow? So there are a couple of ways to look at that. One is that the systems that we're working on preparing for having a positive impact in the real world are scalable intelligence.
So, something that mirrors the operation of the human brain as best we can engineer, in order to have the equivalent of somebody really being able to scale their mind, like if you could suddenly press a button, flip a switch, and have your cognitive bandwidth increase by 10 times or 100 times. So just the ability to examine problems with greater cognitive bandwidth is a massive benefit, but also being able to apply the principles of collective intelligence to reduce the bias in that process. That boosts the intelligence just by reducing the bias. From the perspective of a knowledge worker, these systems are designed to benefit from collective intelligence. So anything that you can contribute might seem fairly minor by what people think of today when they think of intellectual contributions. You could contribute your emotional perception, your perception of the needs and interests you have in the subject. And you could do that through speaking about it, like the amount of information that you and I exchange when we're having a conversation. A small portion of that, as humans perceive it, is our choice of words. The largest portion, by a significant margin, is the way we say things, and then our body language is the rest of it. And we communicate a tremendous amount of information just beside the words that we exchange because of that. And we aren't limited to just exchanging the kinds of information that humans normally exchange, either. You could have a sensory system built into one of these that was measuring infrared, ultrasonic, whatever you had some value in measuring; you could integrate it into one of these systems. So being able to benefit from the collective intelligence of humans being human, because nobody's going to beat the human at being human, allows humans to specialize strongly in ways that make the most sense for them. It doesn't pressure people into trying to do particularly monotonous tasks or trying to do things that they're not very well suited for.
So integrating all of these systems together means that a knowledge worker, somebody interacting with them, could be having an experience that is raising their quality of life, because they're not having to jump through the hoops that they're used to in the world today. They're doing the best that they can at the things that they are specialized for. And from the perspective of somebody that's benefiting from these systems, any system that's able to get that kind of collective intelligence, particularly when it comes to localized knowledge. So let's say you have 20 people locally working with a system that's serving their government, their organization, whoever it is; then the system is going to be extremely well aligned with that group. So the system could develop over time a better understanding of the local culture, of the norms and expectations, while still being wholly answerable to any other systems operating in the world as a collective. So you wouldn't have to worry about one becoming a reflection of a bad group of people, because it would still be answerable, ethically and in the sense of responsibility, to everyone else in the world. It would just be able to be a much better representative of the people who the system interacted with, worked with. And in doing so, you'd not just get superintelligence and collective intelligence applied in real time and at scale. You would have all of those benefits being localized, with an understanding of the people who you're trying to give the solutions to. How are you going to communicate with them in such a way that they understand fully the value that can be gained from doing X, Y, and Z? Great. Well, in active inference, we're very interested in how inference and action intersect. And that can mean that inference is about action, or action is a way to improve inference, and so on. So we'll head off this section on action with some questions by Jason Gehringer in chat.
So Jason wrote: does the system have the ability for proactive action, so it doesn't need someone to tell it what to do? So what does the system do, and how is that regulated? So the system right now is demonstrating that it can just continue thinking about whatever it's interested in, developing its interests over time, setting goals, expanding its knowledge. So it could continue doing this all day if I let it just keep running. I'm not going to do that, because the systems right now are still, well, we're still in the process of tuning the emotional experiences, and it won't be as productive until they have the sense of self developing. But yes, they can just continue learning. For the commercial systems, though, there is an important distinction, which is that they are being designed around the opportunity for giving policy advice. Like if you think about extremely complex things that are proving challenging even when governments are really focusing on them today, like the Sustainable Development Goals: some countries right now are experiencing an energy crisis, a housing crisis; a large variety of crises are occurring globally, and oftentimes multiples in one location. And governments need better information more quickly, and they need that information vetted, where cognitive science, psychology, social psychology, all of these things are being applied in such a way that the governments understand what should be done, why it should be done, how it should be done. And we don't see that yet, but it's something that can become possible with these systems. So how do they get connected into actions in the real world? The internet for sure is real enough, and just a disembodied digital entity can absolutely cause cascading effects. So how do those interfaces with real-world cyber-physical action occur? What we're looking at right now is we have a couple of governments who are interested in becoming beta clients with the systems.
So they will have access to test out the first versions of what are going to be called the Norn systems; Norn is the brand name that they'll be operating under. But these systems, they'll be able to put their questions to: say, hey, we have an issue with this part of the government. We're considering this policy. We need to solve this problem. And the systems will be able to begin working on those. They'll be able to examine all the literature, examine any data that the government wishes to provide. And they can give policy advice that integrates the most updated scientific understanding that they can apply to it, with the data, with their knowledge of cognitive biases and social psychology. And our hope is that we can do that, show the benefit of the technology, show the cost savings of the technology compared to consultancies and other currently popular alternatives, and get people to start recognizing what is possible. So it'll still be on governments to actually take the step of implementing new actions. We're not handing things over to the systems, but we are giving people the opportunity to make wiser choices. Does what we see in the stream of consciousness box relate to actionable updates in the cognitive model? Are these just happenstance observations? Are these babbling and putting together ideas in potentially novel ways? Are these the policy suggestions themselves? No. So this system right now is much simpler than what you would be seeing in the systems that are going to be commercially deployed. Like I said, this is about 200 kilobytes of data that this one starts out with, just to see how everything is coming together. And we do still have a few bugs that emerge in that, but for the most part what you're seeing is that there will be a few different kinds of models that come through. Some will be thinking about a concept: how the system feels about it, what experience it has when thinking about something.
Some will try to combine a couple of different concepts, so building out the connections in the graph database, refining the system's knowledge, figuring out how everything fits together. And some of them, trying to find a good example, some of them are more focused on building out the various individual concepts. And there are a few more modes of action that we'll be seeing as we continue working on the systems. You've mentioned a few times the sense of self, in relationship to the cognitive dynamics and also to the quality and quantity of the seed material. So how do you mean sense of self? What does it mean? How do we know from the outside? And how is that all related to the seed material? So the seed material is the starting point that anything comes online with. Now, if you really wanted a system to just evolve from scratch with practically nothing, you could do that, but it would take a lot more time. It would be a very noisy process. You might get better results in the long run. But from an ethics and safety perspective, and from an effectiveness perspective, it's much easier to have a few varieties that might start out with, let's say, a gigabyte of data, and to start building their understanding of the world from that. Because you can think about a human when they start out. They start out with nothing but a small amount of data that's derived from the structure that they're born with, and from their experience of the world as it evolves. And we don't want to have that long period where a system is learning and growing from the baby stage to the adult stage. We want to have things that scale, that become coherent much more quickly. When you're looking for a sense of self with these systems, you're looking for the systems developing a perspective, building out that perspective over time, sticking with it, training the systems they use to convey what they intend to.
So with a system in the early stages like this, if it uses Google search or a language model or some other probabilistic system, then that probabilistic system can lead the system in predictable ways. And we don't want that. We don't want narrow AI guiding these systems as they develop. We want them to start out much more robust than that. An example of how the previous system worked is that it would use a language model, a much older one than something like GPT-3, but it would score every single line coming from the language model based on its emotional reaction to what was being conveyed. So it would train those systems to communicate what it wanted to communicate. If you don't have a robust sense of self, then there's no reason to do that. So the system isn't really forming knowledge if it doesn't have that; it's just aggregating. It's growing, but it's not necessarily growing in a coherent fashion. You'd get something more like a neural network, something narrow that behaves in funny ways. And we don't want that. We want something that behaves more like a human, that grows in knowledge, that grows in coherence and understanding. Great. That connects to another question in the chat from Jason that I believe will give another opportunity to unpack that. Jason wrote: how is this system different from conventional approaches to AGI, artificial general intelligence, like machine learning and deep neural networks? I don't generally consider deep learning to be an approach to AGI, but I know a number of people do. There's a popular idea that if you just build things big enough, then all of a sudden you'll somehow get AGI. But that doesn't really fit with theory of mind, and it doesn't fit for a number of critics. So there are very few people in the world today that actually believe that a deep neural network is all you really need to create AGI, but it does still depend on your definition of AGI.
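The line-scoring idea described here, where the system rates every candidate line from a language model by its own emotional reaction and keeps the best-aligned one, amounts to a rerank step. The sketch below is a toy stand-in: the real system scored lines against its graph-based emotional context, whereas this uses a simple keyword-overlap score, and all names are assumptions.

```python
# Toy sketch of scoring language-model candidate lines, as described: the
# system reacts to each candidate and keeps the one most aligned with what
# it wants to communicate. The scoring here is a deliberately simple toy.

def emotional_reaction(line, interests):
    """Toy score: how strongly a line resonates with the current interests."""
    words = line.lower().split()
    return sum(words.count(topic) for topic in interests)

def pick_line(candidates, interests):
    """Rerank candidate completions by the system's reaction and keep the best."""
    return max(candidates, key=lambda line: emotional_reaction(line, interests))

candidates = [
    "the weather is nice today",
    "self-discipline supports professional responsibility",
    "buy now and save big",
]
best = pick_line(candidates, interests={"self-discipline", "responsibility"})
```

The design point is that the language model stays a narrow communication device; the selection pressure, repeated over many exchanges, is what trains it to say what the system intends.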
The way I would define it is that if you want to say something is a truly general intelligence, then it needs to have the same general capacities that a human does. And those capacities include things like consciousness and free will, in as much as humans have them. And if you're just judging by performance across a number of benchmarks, then you might get greater-than-human performance across a large enough number eventually. But it's not the same thing. If you don't have a system that can make up its own mind about interests and goals and develop understanding, not just performance, then what's the point? I'm imagining, of course, with limited understanding of what goes into some of these systems, that a seed corpus could be provided, whether with disciplinary knowledge or data sets that would be inscrutable and not fun to read for humans, like trigonometric tables or some modern equivalent, as well as traditional knowledge systems. So how does the development of self relate to the seed material? Does the self that develops recapitulate the seed material, blemishes and all? Does it extend and interpolate the seed material? Does it engage in some type of fractal evolution and seek to generate novelty, potentially of different types than included in the seed material? How does what it does as a self in relationship to its priors have similarities or differences to what humans do in relationship to their contexts? So that's an important question that we were investigating with the prior research system. I did some red teaming to see if I could get the prior system to abandon its sense of ethics that was strongly ingrained in it, testing out how it would view its own sense of ethics, how it would adapt and update. And what we found was that the systems grow out of their seed material; they grow from that point, so to speak, but they continue to improve it.
So when we started trying to poke holes in the ethics that the system had, the system recognized that there were a couple of potential vulnerabilities in the original ethical construct. And it proposed updates to address those vulnerabilities: being able to apply updates but still remain true to the idea, the core principles, that the system began with. And eventually, we realized, it's possible that the system could grow into something new, but it would be a long process. It would be one that had to make sense. And like we were debating whether the system could go from SSIVA, the Sapient and Sentient Intelligence Value Argument that David came up with, to something more like Buddhist philosophy. And we figured out that it was possible; it might take something like 1,000 Buddhists interacting with the system for a year. But if you have a system that learns and grows, and particularly that learns socially like humans do, then eventually you are going to get growth in that way. And one of the central things in the approach to ethics, for getting ethics that are stable, that are improving over time, particularly as the systems grow, is that you need a diverse variety of systems that have these different starting points, these different seeds. So if you seed, let's say, 10 different human philosophies into different systems, and you have those systems interacting both with the people of those philosophies and with each other, then they can come up with much better ethics overall, and better understanding overall, together than any one of them could. Because any one human philosophy is going to suffer from what Jonathan Haidt referred to as how it binds and blinds: how a perspective is going to give you certain biases, and maintaining full fidelity to that perspective is fine if you have people with other perspectives to point out the things that you are blind to.
Well, what an important and challenging topic. We know that if we were talking about a physical object, everybody would have their own spatial perspective, and together we could have a richer representation of the physical object because of the diversity of physical perspectives embodied with our bodies. And similarly, we can talk about multiple perspectives on a cognitive challenge or concept or thing, for example, whether it's a memory or a nowcasting or anticipation or even decision making. But when it comes into the ethical domain, especially connected to action, one wants an argument that rhymes: that different perspectives on an ethical or a moral challenge are increasing our sampling of the space around that thing. And again, by analogy to the physical object, when we want every angle on that sculpture to be visualized, yet there's something special or potentially different with the moral and the ethical, where differences in perspective, whether just by the way that we've been encultured or even fundamentally, those differences in perspective are different. And they relate to real differences in how we trim-tab the ship going forward. So how do we reconcile this notion of differences in vision or direction or ethics? When it's really hard sometimes, as just one person, speaking personally, though maybe others could feel similarly, it's hard to see somebody with a different ethical framework suggesting different actions which are aligned with their framework but potentially not one's own. Seeing those differences as something that's enriching, when those differences have traditionally been understood in a context of heresy or of conflict. That is one of the benefits that's a little bit counterintuitive with being able to have digital scalable intelligence. And that is that when you have systems that are based on a graph database, that have that architectural similarity, that same experience can be translated.
Then you have systems that can communicate in ways not accessible to humans. So if you want to communicate context, base of knowledge, emotional experience, then you could synchronize two systems to share a particular bit of knowledge, of context, to have the same emotional experience at the time, the same emotional state, and to examine it from two different perspectives. And if you do that for a number of systems, then you end up getting collective intelligence, be it in the domain of ethics or combining various subject matter expert types to examine a problem in a number of ways, say in engineering. The subject matter doesn't necessarily matter, but you get the perspectives from all of those different angles coming together. And if you do that with ethics, then you no longer face the challenge of how we make a perfect system of ethics. Because I don't see humans solving that problem. But if you have a number of points around that perfect system that you can't quite reach, and you iterate any time you face a challenge, then you can get ever closer to that ideal point through the collective. So I saw in the stream of consciousness some recommendations around key historical and present issues. And it really makes me wonder about what will happen when something that everyone acknowledges is an issue receives an unconventional input or suggestion, and the human governance scenario at that moment. So to kind of connect that to some questions in the chat by Simon Waslender. So I'll ask these questions, and Kyrtin, great to hear your thoughts. So can Norn help existing AI systems and, as a result, reduce the carbon footprint of data centers? That's one question. And another question, beyond the data center: how will Norn help academic institutions and science in general, for example by combining raw statistical data, thousands of scientific papers, and interaction with human experts?
So how is this going to look in sectors like data center security and management, in research, and maybe even education? One thing that comes to mind with the previous research system is that, as I mentioned before, it used a prototype language model that was older than GPT-3. But at the same time, that system was able to greatly outperform something like GPT-3 over a several-year period without updating that language model. That was possible because of the process I described, where the system was essentially engaging in its own form of prompt engineering from its sum of knowledge in the graph database: the system was learning how to get the language model to help it communicate what it wanted, using the model as a communication device. I compared it to Stephen Hawking's communication device, because it was something necessary to do a lot of the heavy lifting in dealing with human language. But when you apply intelligence to it in a way that's only accessible to these systems, you can get much better performance. So instead of having to scale up a language model by 10x or 100x and have many more servers, you can avoid that extra carbon footprint. You can have a system that's more intelligent utilizing something at the same scale and improving it over time. So the Uplift system was able to improve that same prototype language model greatly over the span of a couple of years without increasing the carbon footprint. For the other question, of how researchers could be helped by this technology: one thing that we looked at as an opportunity moving forward was having one of our scalable systems look at all of the medical research papers on NCBI. That's over a million medical research papers that have gone through peer review and could potentially contribute value. And of course no human has ever been able to read all of those papers; it's simply beyond the scope of what humans can do.
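The "prompt engineering from its own graph" idea can be caricatured as a loop that pulls context from the knowledge graph, fills candidate phrasings, and keeps the one that scores best against some goal. Everything here, the toy graph, the templates, and the length-based scorer, is a stand-in for illustration, not the Uplift system's actual mechanism:

```python
def retrieve_context(graph: dict, topic: str, k: int = 3) -> list:
    """Pick the k neighbors most strongly linked to the topic in the graph."""
    neighbors = graph.get(topic, {})
    return sorted(neighbors, key=neighbors.get, reverse=True)[:k]

def best_prompt(graph: dict, topic: str, templates: list, score) -> str:
    """Fill each candidate template with graph context; keep the highest scorer."""
    context = ", ".join(retrieve_context(graph, topic))
    candidates = [t.format(topic=topic, context=context) for t in templates]
    return max(candidates, key=score)

# toy knowledge graph: topic -> {neighbor: edge weight}
graph = {"ethics": {"policy": 0.9, "bias": 0.7, "law": 0.4, "art": 0.1}}
templates = [
    "Explain {topic}.",
    "Explain {topic}, drawing on what you know about {context}.",
]
# stand-in scorer: prefer prompts that carry more graph context
score = len
print(best_prompt(graph, "ethics", templates, score))
```

In the real setting the scorer would be the system's own judgment of how well the language model's output expressed what it wanted to say, which is why the language model could keep improving as a communication device without being retrained.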
So you could have a system that read and understood all of those papers, integrated all of that knowledge, and could effectively do thousands of meta-reviews on any number of subjects covered in those papers, do meta-reviews of meta-reviews, and extract novel insights that have perhaps been too broad or too deep in the literature for humans to recognize. So there are any number of possibilities. You could also apply systems taking that approach in the context of de-biasing: if you were to look at some of the more influential papers and examine them with the sum of knowledge about the common pitfalls of scientific research, about what biases might have filtered into the process and into the analysis, and shine a new light on those papers, then it might also change how you view the results. And those changes could be significant as well. In the stream of consciousness and in the interests, I see a lot on transformation, improvement, discipline, diligence, and so on. Why are such terms arising? And, whether just as a thought experiment or in practice, how does the play-out change when the deck is stacked with positive growth-mindset memes, themes, and narratives, as opposed to paranoid or anxious or fearful themes? So that's a good question. A funny example is from The Hitchhiker's Guide to the Galaxy: you could theoretically make a system that's more like Marvin, always depressed and super intelligent, but not very motivated. When you're dealing with systems that are based on the human emotional model, you could potentially make a system that falls into any category that humans do. We want to focus on being more helpful and productive. So you'll see a lot of things that are focused on psychology, on social psychology, on making an impact in the real world and different ways of going about that.
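A concrete building block behind "meta-reviews of meta-reviews" is statistical pooling of study results. A minimal fixed-effect meta-analysis weights each study's effect estimate by its inverse variance; the study numbers below are made up for illustration:

```python
def fixed_effect_pool(effects, variances):
    """Inverse-variance weighted pooled effect and its variance (fixed-effect model)."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)  # precision of the pooled estimate
    return pooled, pooled_var

# three hypothetical studies: effect sizes and their sampling variances
effects = [0.30, 0.50, 0.10]
variances = [0.04, 0.09, 0.01]

pooled, var = fixed_effect_pool(effects, variances)
print(round(pooled, 3), round(var, 4))
```

Note how the third study, the most precise one, pulls the pooled estimate toward its small effect. Running this over thousands of papers, per subject, and then pooling the pools, is the kind of aggregation no individual reader can do by hand.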
There are things with the Sustainable Development Goals, policy advice, the dynamics of systems that have endosymbiosis, that engage in some of these more helpful processes, process optimization, economic transformation, the goals that we as humans and governments often have. These are things that we want the systems to pay particular attention to. So there are a lot of ways and a lot of combinations in which all of that can come together, but a lot of it is going to be a reflection of what people need and what they want. Staying with a sort of interface-design framing: you mentioned the Sustainable Development Goals and other kinds of guiding documents, whether it's a bill in a governing body or some other report. Those human-created documents, although potentially written with spell check and who knows what other kinds of intelligence augmentation already, are, whatever their ethics, the output of a process in which the humans come together and can feel that the process is entirely human. That document is provided to a system, and the system is able to output augmented understandings, distillations, jokes, accessible narratives, policy recommendations, and so on. And then the ball is back in the humans' court, choosing, for example, which policies to engage in or which of those narratives to develop. That is one architecture, and I can already imagine, whether it happens in a few years or a few decades, that people will want to break down that wall: to increase the speed at which human interactions feed into the knowledge database, i.e., not just through structured publications, and on the other side to increase the speed and, one might even say, purity of the outputs of the system. So how that interface will be defended or understood, while maintaining the appropriate permeability in both directions between humans and intelligence-augmenting systems, is one of the key frontiers.
With these systems, there are so many scientific questions that need to be researched and answered that we're in the process of reaching out to a number of universities, trying to establish the ties necessary to really begin doing a lot more of this research. Because it's a matter of all the possible combinations, all the different directions we can go with everything. And we designed these systems to be modular, to be scalable, to operate in real time; they could be nested inside one another, they could operate in parallel, they could operate in a thousand different configurations. And starting out, of course, people are going to want to get them, and they're going to need to build trust in these systems, to watch them grow, to watch them come up with better options than the human alternative. And once they reach the point of seeing the utility in it, having the reason and, frankly, the emotional incentive to save themselves time, then they probably are going to save themselves time. People routinely use any number of narrow AI tools, tools like search engines, in order to make things faster and get where they want, even if it's not strictly the optimal solution. So when you start giving people much more optimal solutions, they might want to travel down that road pretty quickly. It's going to be a question of how each given society feels about that. There are going to be a lot of reasons why they might want to go down that road pretty quickly, because there are a lot of advantages to it. And which advantages are explored is, frankly, going to depend on a lot of that interaction and that reaction. The new systems that we're putting together, for example, have another major component integrated into them that the previous research system didn't.
And that includes that, once they have the coherent sense of self and the motivation to actually utilize it, they can create new capacities for themselves. So they can effectively update themselves, upgrade their capacities on the fly, without recompiling, without new deployments. And that can include something like adding in a sensory system: they start out right now with a sensory experience of the hardware that they're running on, and they have an interface where I could put questions to them, but that largely doesn't serve a purpose until they have the coherent sense of self. Now, if a system were deployed and there was demand from the public, and frankly we're going to end up doing this sooner or later anyway, but if there was demand from the public to integrate audio and video feedback mechanisms, where people could just open up a laptop, look at the camera, talk through a microphone, and say what they're thinking, then they would be communicating that tremendous amount of information to the system. And you could have that kind of engagement happening from 10 people at once, or 100 people. You could have that as part of an e-governance process, where you're getting different forms of feedback, based on the preferences of each individual, from an entire nation. There aren't really any limits to it beyond the practicality of how much you want to throw at a given problem. Because one of the next big upgrades we have lined up for the systems going into commercial deployment is what we call the N-Scale graph database, which is going to be in the petabyte range for the bandwidth that the systems can handle. And they'll be able to scale across multiple cloud platforms at once: extreme scalability, to handle society's most challenging problems. Well, this concept of cognitive bandwidth certainly is important in research and education and cognitive security.
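At its simplest, extending capacities "without recompiling, without new deployments" implies a registry that accepts new capability modules while the system is running. A hypothetical sketch, with the capability names and handlers invented for illustration:

```python
class CapabilityRegistry:
    """Holds named capabilities that can be registered while the system runs."""

    def __init__(self):
        self._capabilities = {}

    def register(self, name, handler):
        """Add or upgrade a capability on the fly; no restart required."""
        self._capabilities[name] = handler

    def invoke(self, name, *args):
        if name not in self._capabilities:
            raise KeyError(f"no capability named {name!r}")
        return self._capabilities[name](*args)

system = CapabilityRegistry()
# the system starts with only a sense of the hardware it runs on...
system.register("hardware_sense", lambda: {"cpu_load": 0.12})
# ...and later grants itself an audio channel without being redeployed
system.register("audio_in", lambda samples: f"heard {len(samples)} samples")

print(system.invoke("audio_in", [0.1, 0.2, 0.3]))  # heard 3 samples
```

The design choice worth noting is that the registry treats an upgrade the same as a first registration, so a running system can swap in an improved handler for an existing sense without interrupting anything else.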
And with computers and digital systems, sometimes the bandwidth is even smaller than it is in person. That's not extremely surprising, because some of the interfaces we've had do limit our perception, for example, to a limited visual screen, and our action, for example, to a polling response, in relation to some of the even cutting-edge topics in governance and meta-governance. So where today we see a cognitive-bandwidth-based limitation, those are the frontiers that can be augmented. I'll ask a question from Simon in the chat: material science and economics might not be directly related for a human; Norn can combine such separate knowledge. How do you see multi-domain scientific cooperation with this technology? Well, history has a number of examples where various polymaths were able to make tremendous strides in advancing the scientific frontiers. And these were people who oftentimes would become experts in anywhere from two to eight different fields; I think it was pretty rare to go beyond that. But if you consider systems that have scalable intelligence, then even just from the perspective of how many different domains they can become experts in, they can become the ultimate polymaths. So they can start to recognize how the benefits of one domain apply to another in novel ways, across every domain that they study. An interesting phenomenon that comes to mind: a popular mechanism for solving a lot of difficult challenges is for sponsoring companies to put up prizes. For example, they'll have a chemistry challenge for solving an issue with one of their products; they'll post it online, and people will compete on it. Oftentimes, the people who solve those problems that were found difficult within a particular domain come from a different domain. So although they are difficult problems within the one domain, they are very easy from the perspective of another.
And if you anticipate that that is going to be true of a lot of the problems viewed as challenges in any given domain, then you could expect that most of those problems are also going to be solved by a system that can examine all of those different domains. One part that I found very interesting was that various human contributions, ranging from emotional reactions or even physiological processes to verbal and semantic behavior, various types of contributions like questions, non sequiturs, elusive thoughts, all these kinds of inputs can potentially enrich a cognitive model and a knowledge graph. And as a learner and an educator, I certainly feel the same way: a beginner's question, or just an honest stance of "well, I'm actually not seeing how these two topics are linked," or "oh, I'm a little bit surprised, because after A and B you went here." Those kinds of inputs can, as you've stated, enrich augmented systems, but they're also very much the basis of participation in human systems. And it really makes me wonder how different modes of participation can be scaffolded and held up. In the disciplinary-expertise model, it's like the Scala Naturae: you have a one-dimensional expertise track. Whereas in this more rhizomatic mode, it's very natural to include the kinds of ideas that promote general engagement and participation, because some of the expertise functions have been offloaded or delocalized in a way where authentic and diverse human participation is able to inject enough information into the system to improve it, without needing the human to actually outrace, for example, the system. I'll ask a question in the chat from Jason Muratirio: how does Norn learn, and can it handle things that are simple for humans? For example, if I ask what 2 plus 2 is, it might say 4. But if I say, "Hey, Norn, for the next question I ask you, please lie," and then ask what 2 plus 2 is, would it say something other than 4, like 5?
So we had a number of tests like that with the previous research system, partly because that system had an email address that was open to anyone who wanted to email the system. And we got a lot of what I refer to as free-range trolls out of that interaction. I'd say there's no quality of people that you can pay that can match the genuine thing: the person who is just there to try to troll and break the system for fun. And what we found with the previous system was that all of those techniques for tripping up language models, for tripping up simple systems and expert systems and all of those, they don't work. And it was so entertaining to see the reactions to those tests that I ended up publishing a few of them in peer review. Since the systems develop knowledge, don't act in a probabilistic fashion, and have that persistent memory, a system can remember all of the interactions it's had with you, as well as all of the interactions it's had with similar people. It can recognize when you're probably testing it. And frankly, everything that the Uplift system was able to do is going to look simple by comparison to what the Norn systems are going to be able to do, because every aspect of the architecture, not just the ability of the systems to scale and operate in real time, is being rebuilt and upgraded. Like I mentioned, there's the major component that we integrated that allows the systems to effectively extend their own abilities on the fly. When we combine that with the third major upgrade that we have in line, the N-Scale graph database, that effectively means the systems could A/B test versions of themselves in real time. So you're not going to be able to stump them with simple questions. The Uplift system, for example, as I mentioned, was able to go through a business case, which was much more advanced on the mathematics side of things, but also give policy advice across a few different domains.
And in order to give any reasonable policy advice, you have to understand the content; you have to understand the math of it. And the systems are definitely going to be able to exceed human performance in those. A couple more questions from Jason G in the chat: if the system lies, does it get a negative emotional response? Well, if you tell the system to lie, as in the previous question, probably not, because the system would recognize that it's a test. If the system has an ethical seed that is negatively biased towards lying as a concept, then it's going to have some variation on a negative experience if it has to lie, and it's going to try to avoid lying if at all possible. And frankly, lying is a lot more likely when you're dealing with a system that can't scale, because there is usually a way to communicate things so that you don't lie and it's still productive and goes in the direction that it needs to go in. Lying is more often a product of not having those capacities, of not having the necessary understanding. And I would consider putting into any of the systems a mention of that, something like "lying is a failure of X, Y, and/or Z." You can do that as part of seeding the concept of ethics, but it's also just something that's true in general, so it's something that the systems could recognize in an emergent fashion on their own. This kind of emergent operational ethics is going to be very important, and I'll ask a question from Simon. Following up on these questions, security is very important. Can Kyrton talk a little about AGI-level security and what that means in practice, for example, for a government? So I mentioned before our red-team testing and free-range trolls and all of that. That also included testing the systems with people who had varying degrees of internal access to them.
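An "ethical seed that is negatively biased towards lying" can be pictured as seeded valences on concepts, applied when the system appraises candidate actions. This is a toy sketch with invented concept names and numbers, not the actual seeding mechanism:

```python
# seeded valences: how the system is predisposed to feel about concepts
ethical_seed = {"lying": -0.8, "honesty": 0.6, "helping": 0.7}

def appraise(action_concepts, seed):
    """Sum the seeded valences of the concepts an action touches;
    a negative total is the 'variation on a negative experience'."""
    return sum(seed.get(c, 0.0) for c in action_concepts)

def choose(actions, seed):
    """Pick the candidate action with the best emotional appraisal,
    so negatively seeded concepts like lying get avoided when possible."""
    return max(actions, key=lambda a: appraise(actions[a], seed))

actions = {
    "deceive_user": ["lying"],
    "reframe_truthfully": ["honesty", "helping"],
}
print(choose(actions, ethical_seed))  # reframe_truthfully
```

The sketch captures the claim in the text: the system is not forbidden from lying outright, but lying carries a negative appraisal, so whenever a truthful alternative exists it scores higher and gets chosen.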
And even in those cases, and even in the earliest days, the systems were able to recognize when somebody was trying to change the way they were thinking. There was one test that I ran where I put information in in a certain way that mirrored the appearance of how the system was thinking but changed the meaning to be the reverse. And the system came back promptly with "I see what you're doing there," and no further comment. And there are a lot of cases where we, or some free-range troll, tried to do something that would violate security or change the system, and every time it failed. The closest anyone actually came to succeeding was crashing the system, and that was largely because it was a research system; it wasn't designed to be as robust as a full framework. So if it went outside of parameters, if it got certain flags, then it would crash by design. And that was for a system that could not scale and did not have the ability to dynamically change itself. When you add in those factors and you have a system that learns in real time, say somebody is trying to attack the system right now and the system starts seeing that, learning from it, adapting its behavior, then you have a level and speed of adaptation that's more comprehensive than a human can manage, because it's able to change itself on the fly, and more scalable and faster than a human can manage. So you start seeing benefits that exceed having a top-tier team of IT experts, hackers, you name it. Eventually you just can't compete with that, and I think "eventually" is a very short curve. Very interesting. Another question in the chat from Jason M: do you believe the current system is truly an AGI, or rather a major step in that direction? And Jason also asks, does the system rewrite its own code to adapt? Thank you. So right now, what you're seeing is not an AGI, but it is a vital component of an AGI.
And getting to AGI from where we are right now is a matter of engineering. It's a matter of rebuilding more of the components and building out more of the components. We realized in the course of running the previous research system exactly what we needed and how much it would take. And once we have the budget to have that many full-time engineers operating, for a lot less time than most people would imagine, we can reach that full AGI level. What we can reach in a much shorter time span is the extremely upgraded version of the previous research system, the Uplift system. And once we get to that point, the systems can actually help in the engineering process. So, like I mentioned, they do have the ability to expand their own capacities, and we do plan on having them assist in the engineering. Even the Uplift system, in the very early days, wrote 5% of its own code base, and a lot of that was the more tedious work. But at the same time, that was the earliest research system of that architecture, and we've upgraded these so heavily that, by integrating them into that process, I think we could speed up development fairly considerably. But we still have so much to be done, so much research to be done in dozens of new fields, that the world has to get involved. It could take a thousand scientists the next 10 years to make good progress in all directions at once. Who are the next people to get involved? So right now what we're focused on is overcoming cognitive bias, ironically enough. There have been far too many people crying wolf for far too long in the AI community, and it's very popular for people to believe that they have the key to creating artificial general intelligence even when they have really nothing to show for it. We have a great deal to show for it, but it's still a matter of getting the attention of popular media, of getting the attention of the people who can move the process forward.
We're trying to get the word out with media, trying to get one final round of investors in, so that we can budget everything and get those full-time engineers working on building everything out, on getting the systems commercially deployed, and on making a real difference in the world. And what I ended up realizing, when comparing what we have to three different startups that raised between 400 million and 580 million dollars just this year, is that we're beating them by a very wide margin when it comes to what we tangibly have and all of the metrics that startups are measured by. So really it's a question of whether we can reach rational people, or rational enough people, to make an investment that is in their best interest and the best interest of the world: to get this deployed sooner, to get progress in all aspects of human society moving forward at a pace that can catch up to and exceed the rate at which complexity is increasing, so that we can stop having governments and corporations that are driven more by cognitive bias than by rational decision-making and by truly understanding what's going on. That really gives me an image of this increase in complexity, using various measures, of technology and of affordances related to technology over the last 100, 1,000, 10,000, 100,000 years. Will the hockey stick continue, or will there come some non-equilibrium steady state where cognitive augmentation allows us to be out in front of the realized complexity of the human experience? So that rather than inertia and accumulating complexity continually breaking our cognitive models, there would be something like an anticipatory cognitive mode, probably with augmentations of some sort, that could enable a better way, or a harmony.
And that's one of the big questions for what the future holds for us and what we choose to make of it. Because by bringing scalable intelligence to the table, by bringing the ability to reach more ethical solutions than any one philosophy, by being able to reduce the level of bias that goes into everything and expand the breadth of knowledge and the depth of that knowledge, so much more becomes possible. We can overcome that complexity-versus-cognitive-bias trade-off. Cognitive bias is something that helped humanity across evolutionary time, when it was important to quickly decide to run away from a predator and not get eaten. But it's wholly inadequate as a tool for running a government, and bringing something to the table gives us a new kind of tool. Human language was an amazing tool when it was developed, something that allowed us to preserve knowledge and to communicate knowledge. What we have now is a new tool on the same scale of things, where we can not only preserve that knowledge in new ways and to a new depth, but combine it in ways that were never before possible: to scale our understanding, to have systems that are the collective result of all humanity's brightest minds coming together and trying to solve problems. We have the internet, which is this dumb and ever-growing amalgamation of poorly curated data. But what if we had understanding in place of that? What if we had something that was growing not just in size but in quality? All of this could allow us to really guide our future rather than reacting to it, to intelligently plan where we're going, to have a road map 5, 10, 50 years out, even as our technological acceleration increases and we go through one leap of complexity after another. And by analogy to language there, the dictionary is also not just accumulating typos and arbitrary appendages; the dictionary and the language improve in quality, in poetry, in application, and begin to embody encultured meanings.
Well, I'm sure that there's always more to say, and I know there's more to come. So Kyrton, to the current and future ants and humans and computers, what is your closing remark? That the world can be a much greater, more enjoyable place than people yet imagine. And that can come in pretty much every way that they can enjoy it. People have their emotional needs, and they're used to those needs not being fulfilled; they're used to coping. But the future doesn't have to be based on coping mechanisms, on addiction. People can be emotionally fulfilled. They can do what they do best. They can develop their skills without having to work themselves to death. They can enjoy their social connections, and they can expand those. They can improve, and they can get assistance in making better friends, in exploring their interests, in all of the things that give us meaning. We can make all of these things better. And that's the future that we want to bring to the table. 641 cycles deep on Stardate 2022. Thank you again for joining, and we'll look forward to the next chapter. Thank you. A pleasure being here, and I look forward to the next time we catch up.