So now I think maybe Pete is a synthesized avatar. Yeah, it was bound to happen sooner or later, wasn't it? Well, I think having one Jordan is enough; we don't need to clone Jordan. Who should we call, Gil? Besides, there's nobody else I can think of. Aren't you sweet. Oh, that's fascinating. I just said something to the transcript and it showed up in the chat instead of the transcript. Okay, fascinating. Oh, wow. Zoom keeps messing with stuff, including my brain. Hi, everybody.

I thought your brain didn't play with Zoom.

This one does. Just the wet one. Just the wet one. How's 2023 treating everybody so far?

Soggy, soggy.

Yeah, you haven't washed away, but you're wet.

No, but things around me have been pretty hairy. Reservoirs are filling. They're all at 100%; they're all spilling in Marin.

Really? Yeah. Mount Shasta was still 111 feet low when I checked.

I can tell you right now what it is.

I couldn't find a percent read on Shasta, so I'd love to know.

Hold on a second. Thank you. Shasta is currently at 44.2% of capacity. Wow. And I will put this dashboard in the chat. It lets you look at all the major reservoirs in California at once; you can select and deselect, and then go one at a time. It gives the historical average and where it was last year. Very, very cool stuff.

Love that. Thank you. Great data, a California reservoir dashboard. So today is a topics session, and we didn't pick a topic in Mattermost yet. I did suggest one, and another one came up yesterday which I really like, though other people may not want to go into it. The one I suggested with the invite was: hey, where is this shared memory? What does it look like to you? I just want to hear other people's thoughts, visions, perspectives on it. But you may not have any.

I don't understand. I don't understand what you mean, "where is this shared memory?"

With which, I rest my case. So what I mean, Gil, is: if you ask me where is Wikipedia, or what is Wikipedia, I can say it's a shared, crowdsourced encyclopedia.
I can show you the code it runs on. I can tell you what servers it runs on. I can show you who volunteered to fix it. I can tell you their business model, which is donations. Everything is quite explicit, quite open. It's a shared memory for humans. It turns out we're not all going to use a wiki, much though that might be really interesting. We could start with a wiki, and a wiki could certainly be part of it, but there are all these other interesting tools for thinking, which is kind of the umbrella term for the category, tools which don't talk to each other very well and which are each interesting for different sorts of superpowers. And the blend of them, the joining of that information, is what I'm sort of talking about as the shared memory. So it's not nearly as easy as Wikipedia.

So you're asking: where is the shared memory that we envision together? Or, where are all the pieces that may be part of this shared memory that we envision together, and how do we call it, how do we treat it going forward, in some sense?

And it may be too abstract a question. So that's one topic. And then yesterday, on a call, a different topic came up, which I really like and will separately be going into myself: how do you end conflicts well? What does it even mean to end a conflict well? South Africa and Argentina tried truth and reconciliation commissions. I don't know how well they worked in South Africa; arguably not that well. And I can give a whole bunch of small examples of conflicts that ended well and poorly. We're certainly facing Ukraine and Russia right now, and the question is: how could that end well? What would that even mean? And I sent an email to Mike Nelson, who's at the Carnegie Endowment for International Peace.
And he wrote back and said, okay, I'll ask our democracy team and see what's up; here's a couple of links. I haven't looked at those yet; that was in my morning email just now. But that's a topic. And the floor is open for other topic suggestions as well. It sounds like multiple people need to leave us at the top of the hour or earlier, so we will take that into consideration. Denise, thank you for joining us. Rick, please jump in.

Yeah, just jumping off the theme of what you're talking about, the living memory aspect of this: reframe it as a living memory that is ongoing, iterative. And also, how can this group go beyond its own boundaries so that it invites more people into that living story, movement, memory, et cetera? Makes sense or not?

I think so. It makes sense to other people. You have your hand up.

Thanks, Rick. And I've got other topic suggestions too. The first one is where we talk and why we talk. We have at least this call, the mailing list, and Mattermost. I feel like this call is the best place, but it's not really for everybody. And I wonder if there's a way we could have more people, or whether we could do the mailing list better, or do Mattermost better. So that's one topic. Another topic, probably fascinating for me but maybe not for other people, is how close we are with ChatGPT to artificial general intelligence. How close is ChatGPT to a person? Which is a big handful of a topic.

And interesting and timely, agreed. Other comments on these topics, or suggestions for other things?

Maybe something on trust. Take George Santos, for example: three Republicans came out and said he needs to resign because people don't trust him. And they said the level of trust that Americans have in politicians is already bad enough, and this is not helping. So where do we put our trust? What makes somebody trustworthy?
What makes an institution trustworthy? What do we do when our institutions are becoming untrustworthy? Where are the levers for switching that over to a different pathway? And then Gil sent out something a little while ago with Malcolm Gladwell talking about the mountain climber, this guy from the CIA who ran the Havana, Cuba branch. It turns out this was one of the most incredibly well-trained people at detecting lies, and he was shined on by this guy who knew everything that was going on for years; he never realized he was being duped. And Gladwell's point is that we need to trust each other because that's how humans are wired. There will always be people who will violate our trust, but in the long run it pays to trust. So that's a really interesting topic to me.

Along the lines of trust, one thing that I haven't dealt with properly: we had some very blunt statements on our mailing list from Daniel Tavisi and Grace about our dynamics and trust and things like that. And I haven't gone back in and picked those things up since the year turned and since the holidays. So that's also timely and relevant to us, and something we should deal with. So, Rick, please.

Yeah, I just put it into the comments here; I just put the words "ecosystem building" in. I know Denise can only be here for half an hour, but we just had a conversation prior to this and she spoke about her work on ecosystems. So I thought maybe we could invite Denise to introduce herself and very briefly share a little bit about her work.

That sounds like a delightful thing. Denise, would you prefer that? Thank you.

But I need to start with: I'm almost hyperventilating here, because a number of people here I've connected to during my interesting ecosystem journey, through OD Network, Plexus Institute, NTL, et cetera, et cetera. And so it's just really, really a pleasure to reconnect, and I hope that I can stay connected.
But just as a very quick intro: I had a learning company, and many of you may have known me from a long time ago. I've been working with Plexus Institute, which I also know a number of you have connected to. Over the last three years, I have been developing models for human systems dynamics that are based on looking at structural as well as engagement ecosystems. It's been a little bit of a deep dive into the science, but it's at the intersection of how organizations that are ecosystems, which means they have humans in them, begin to engage and develop, and also to integrate the extraordinary speed of information and knowledge that is being bombarded at them on a continual basis. I'd love to be able to share that more extensively at some point, but I really am here to listen, and I hope that I can join you on an ongoing basis. Thank you very much, Rick, for telling me about the group, and nice to see you all.

And it's lovely to have you join us. Do you want to riff for a moment on the ecosystem theme that Rick brought up, and just talk about your own angles or work on it?

Oh, a moment. So, ecosystems, from our perspective: we all live and work in ecosystems; everything and anything that we have a relationship to is an ecosystem. But specifically at this point in time, those of you who are familiar with Plexus understand that they originally sought to look at complexity-informed models and constructs in order to help people understand how to better engage and how to better find the appropriate next steps in what they were doing. And from that, it appeared that people really needed to understand what complexity was, but also to have just a basic foundation for complexity thinking.
But then, more importantly, to start to develop even more robust tools and models that addressed how complexity was leading to opportunities or to challenges in how we were designing and operating within organizations. So with a partner of mine, who very sadly passed away unexpectedly in mid-December, I developed something called the Ecosystem Development Framework. Originally it started with helping early-stage and emergent organizations, or entities within larger organizations, look at how they start to structure themselves, but more importantly, respond on an ongoing basis to the human dynamics happening within those initiatives or projects. And from that, we started to link to working within the organizations and doing pretty active research on the human dynamics that were occurring. So it's on the edge of being a structural model, but for those I've worked with before, you understand that you can't leave the human systems and the engagement component out of it. The point is that when you think of an ecosystem, it's really to understand that the ecosystem is this confluence of both the engagement, the human actions and reactions and experiences, and the structural context in which they're operating. And I apologize, that sounds very wordy and it's hard to explain specifically, but I hope that is helpful, and I'm happy to answer any questions and also to have longer conversations in the future.

Thank you. That's really helpful. And in my brain, next to "business ecosystems," it says "dicey metaphor," because I find, and I wrote this in the chat, that complexity, ecosystems as metaphor, and systems thinking are each of them, separately, difficult, thorny, complex issues that are now being applied wholesale, and often to what we used to call marketplaces or platforms or whatever.
And I think we have to do that with care, because some of the ways that ecosystems actually do what they do are pretty different from some of the ways that markets do what they do, et cetera, et cetera. And as you said, we can't forget the humans, because, you know, I often say that without humans, the world would run pretty smoothly.

Yeah. And thank you, that is absolutely correct. And that was part of the mission: to really understand setting the conditions, if you will, for why you have to understand that what you're looking at is an ecosystem, and the very specific principles and constructs that come out of complexity science. I don't mean to say simplified, but translated, so that you don't need multiple PhDs in order to do that work. It's not agent-based modeling; that's the first and foremost part of it. It really is centered on complex adaptive human ecosystem principles. And so everything that we do... oh my gosh, I know Vic as well. But the most important thing is that when you begin this work, you need to recognize that when you use a term, there is a precedent. There is actual science, there's theory behind it. So saying that something is an ecosystem, or that it's emergent, or that it's complex, has very specific meaning. And once you understand that meaning, then you can start to apply it in a way that makes sense within the context of where you're working. So anyway, thank you.

Denise, hi. Could you share a pointer to those principles with us? Because I find that when I hear people talk about ecosystems, I get kind of itchy as an ecologist. My listening is that often people say "ecosystem" without making the distinctions that you're making. So I'd love to see the principles behind that for you.

Oh, absolutely. Number one, I would be very happy to send out some information, you know, just on how you set the conditions for the ecosystem originally.
And so if there is a person I can send some information to...

I can put things where everybody else can find them, if you want to do it that way.

Okay, that would be lovely, Jerry. Thank you. And if you want to send me the email, I'd be happy to do that.

The email is in the chat.

Perfect. Yep, I see it. Thank you so much.

Oh, thank you very much. And we've gone a bit down this path and haven't really settled on a topic yet for the call. So I want to bounce back up to that and see what's on people's minds, because we have several different interesting things on the table. And nobody's enthused about any of them.

Okay. What else should we talk about?

I like the first one.

Which first one?

Oh, the "where is the shared memory?" Cool. We could do a quick round robin with that, see what we find there, and then move on to the next one, because I don't know that it needs to occupy our full hour. I'm just really curious whether and how, and I think the "whether" is important here, I think we'll see some kind of shared memory. Pete?

Having thought about this a lot, as you might guess: one of my observations is that we don't really have good... typically, it's really hard for people in general to have a good external memory of any kind. So Jerry, you're kind of way out on the curve. I don't have one real good personal memory; I've got like ten of them or something like that, and none of them is perfect, and I'm always playing around with them. And I'm probably one of the people who works on it harder than most. Maybe I'll buy Nick Milo's course today for $20 off and magically become a wizard at having an external memory. So there's one problem right there: we haven't yet developed the tools or the processes to allow people to have a good external memory.
Another thing: even when we do have, you know, people with good external memories, what I see is the tech folks, the toolmaker folks, looking at those and going, all we have to do is jiggle them around, or get the schema right, or make it so they can talk together. One's talking JSON, one's talking YAML, one's talking XML; if we just coalesce those, all would be Nirvana. And I'm simplifying a lot there, obviously. Because you have to be pretty good at toolmaking to make it so that you can have an external memory, the people who have external memories are also toolmakers, and toolmakers like making tools more than they like making external memories, as it turns out, mostly. So they're attracted to the tool-ness of the external memory problem. What I think they're not attracted to, or what I haven't seen a lot of people talking about, is this: if Jerry has an external memory and I have an external memory, I think the only way we can get them together to talk right now is to actually have Jerry talk about what's in his memory and me talk about what's in mine, and in that conversation Jerry can light up. (George, is that your phone? Sorry. Sorry.) You know, if Jerry just exposes his external memory so that I can start to hook up mine, I don't really know what he's got in it. I can look at each of the nodes in Jerry's brain, a couple hundred thousand nodes, and go, hmm, that's an interesting node, that's really cool, and see which other nodes it's connected to. But that's way far different from Jerry actually explaining, in context, how these nodes are stitched together, and all the background information he can remember by looking at those nodes. So if he tells the story of some part of his brain, how humans started making steel, for instance, and I told the story using my external brain...
Here's what I remember; here's what I can think; here's what I can synthesize using my external brain. We can kind of stitch together a combined story of that. But without that kind of human context, human storytelling, I don't see the external brains coming together. It's easy to think, oh, we could just have a shared brain. It's easy to imagine; it's a lot more complicated to actually do. Jerry, you talked a little bit about Wikipedia. Wikipedia actually is a shared brain, and it works really well, but it's very stilted. It works because Jimmy Wales and the other folks who started it said: thou shalt have an encyclopedia, and it's going to look like Encyclopedia Britannica, except that it's going to be in the cloud instead of on a shelf. That was a focusing tool, so that everybody could fill in the parts of Encyclopedia Britannica that were missing in the cloud, and we made Wikipedia. But Wikipedia is very stilted; it's a fact-oriented thing, with a particular way of deciding whether a fact is interesting and worthy of being in Wikipedia. If you look at Wikipedia, it's huge, but it's missing probably ten times the amount of things it could have. The reason is that there's a rule that says everything has to be sourced really well. That was a deal with the devil that Jimmy and the other folks made early on: we're going to have a true encyclopedia, for certain values of "true," and that's the only thing that can fit in the encyclopedia. So differing opinions, or different ways of thinking about things: the way Jerry thinks about steel and the way I think about steel and the way Stacey thinks about steel. We might have different viewpoints on that, and none of that richness and variety would fit into Wikipedia.
So it is another example of, essentially, how not to do a shared brain. Now, not because it was my alternate topic, but because I think it is actually a way forward: I can kind of imagine ChatGPT trained on Jerry's wiki and ChatGPT trained on Pete's wiki, and having the two ChatGPTs talk to each other. So instead of asking Jerry, "Jerry, tell me the story of steel in your brain," I could ask a bot to do that, and then at least we've automated one end of that. And maybe we can automate both ends. So maybe in a year, or two years, or five years, we'll just throw a bunch of personal knowledge bases at chatbots and be able to have them synthesize it. I think I would be pretty happy with that. I think most people would go, yeah, that's not a human shared knowledge base; that's a centaur, a cyborg thing that I'm very uncomfortable with. I think that is how most people would think about it for at least the first year or two. And then everybody's probably going to be using that instead of Google and Wikipedia. So: thoughts about shared brains?

Thanks, Pete. Two really quick things before I go to Stuart. One is: completely agree about humans in the loop. I think knowledge management has wasted 40 years of money and effort trying to build big databases of knowledge, when what they should have been trying to figure out is: who do you need to talk to right now? And just get that done. The second thing is that sometimes just using the same names or topics and the same URLs, which are really easy to find and match, is really helpful. And then we have these shared pages where we take notes. Sometimes those are some kind of shared memory, but they're long pages with a whole bunch of things mixed together.
If those were deconstructed a little bit, they would turn into a bit of a shared memory of some sort, with shared notes on different pages. And then you have a really long history with wikis, which, once you get away from Wikipedia's constraint of "we're an encyclopedia" and into the namespace of "hey, we're building shared documents"... I think you and I, 20 years ago, thought that all of us would be collaborating through wikis by now. In fact, using Socialtext, which would be like Facebook. Yeah. But it seems the fates did not go that way. Stuart, Ken, Gil.

Yeah, I don't want to sound like too much of a downer, okay? But Pete, you just teed this up wonderfully when you talked about a year out or five years out. Somehow I feel like we're all in a little bit of denial here, in terms of the world continuing to exist the way it is. I listened to a podcast last night; it was actually Michael Dowd reading a short book by William Ophuls, O-P-H-U-L-S, about how, as a result of climate change, civilization will fall apart. The title of the book was wonderful: The Electrification of the Titanic. And it was all about how the idea that we can solve everything through electrification, through getting off of fossil fuel, just, you know, when you talk about ecosystems, creates larger problems and challenges for all of us. All of that being said: I don't know if it was within these conversations or someplace else, maybe six or eight months ago, but there was the notion that some of the people really thinking about the future were thinking in terms of how we can preserve humanity, and technology, and the base of knowledge and wisdom we have, if all of the major systems fall apart. And that, I think, is a conversation worthy of attention.

Thanks, Stuart.

Real quick: thank you, Stuart. I kind of agree.
Massive Wiki is designed kind of for that contingency, actually.

Great. Yeah. And I don't know if I mentioned it; I just get the sense that we're all in a little bit of denial. As I attend conversations like this, I see that so many of us are living in a certain existence, and the denial seems to be reflected in the conversations that we continue to have. And yet within myself there's a bit of ennui and discomfort and an inability to engage deeply, because there are larger things that seem to be looming on the horizon. I don't want to sound like Chicken Little, but my sense is that somebody needs to say it, and for some strange reason I've been anointed or appointed or perturbated or what have you. So here we are. I just want to throw that out there.

Today you are picking up that mantle. Thank you, Stuart. To quote Al Gore quoting Dire Straits: denial ain't just a river in Egypt. And it's funny, my path into an answer to your question is: if we don't sort out trust, and sort out how to share reliable information, we will never manage to handle all these stupid-ass issues that are breaking civilization and the planet. So, yeah.

Yeah, I agree, Jerry. At the core is our capacity to be human and kind and empathetic with each other. Otherwise, you know, things will get ugly real fast.

Indeed. Thank you. Ken?

There's a couple of things here that need to be teased apart. One is shared brain; the other is shared memory. The internet is a shared brain, but the memories are very atomized. And so when I think of shared memory: shared memory in service to what? Shared memory in service to survival. Then there are the things humans have learned through painful lessons, things everybody should be aware of. Indigenous people know that certain plants kill you. How did they find out? Because people died eating them. Right.
So it's like, we don't seem to have that going on right now. And then there's the fact that there are people out there intentionally trying to muck with our memories, saying no, that didn't happen: the Armenian genocide didn't happen, the Shoah didn't happen. There are people who deny history and deny things that have happened. So how do we counter that? To me it's an enormously large field to explore: what would constitute a useful shared memory for humans, who would be the people who curate it, and what would be the means through which that memory would be accessed? You know, I can go online and find all kinds of stuff that I'm interested in, but other people don't seem to have the same interests. So is there a common collective interest, maybe what used to be taught in schools as history and civics? Right? We seem to have gotten away from that. So how do we come up with something that's going to work cross-culturally and cross-generationally, where people say: these are really important pieces of what humans have learned about living in the world, necessary for our collective survival? If we could just settle on a half dozen or a dozen of those and say, if you don't know anything else, this is really worth paying attention to and worth knowing. I don't know how to get there, but it seems like a really important thing to consider.

Ken, thank you lots for that. The reason I wound up yesterday in a conversation about how you end wars or conflicts is that I had just finished watching Argentina, 1985, a really good movie that dramatizes the trials that took former president Videla and a bunch of generals to court for the dirty war.
And I visited Argentina way back when, and my buddies, the friends I made during my stay, gave me before I left a copy of the Nunca Más report. Somewhere in the stack of books over there, I haven't found it yet, is this report, which chronicled and documented all the detention centers, all the names of the people killed, the instruments of torture, the confessions; everything was documented and put somewhere, so that you could at least pin that down and reduce, but not eliminate, denialism that anything even happened. Because it feels to me like, and this is one of the big benefits of truth and reconciliation commissions, you get the truth in exchange for amnesty, and that truth, if you pin it down and make it visible, is really important, because everybody's busy spinning history. And now synthetic media is going to be spinning history also, making up facts and hallucinating all kinds of events. So we're kind of in this really weird and dangerous era where we have to protect memory in some way, which argues also for some kind of shared memory. Gil?

Yeah, gosh, where to start here. I'm reminded of what my friend Ken asks: what do you mean "we," when you say "we"? So there's always that. Stuart, I do not accept your apology; I thank you for bringing that up. Stuart's question is part of the context: we're living in a mess of messes. I've been calling it living between worlds. It's an enormous, tectonic, historical-scale transition; people will look back on this time as an unusual transition in history. I guess I'm just feeling itchy this morning, because the shared memory thing has got me itchy like the ecosystem thing did. It's hard enough for human beings to have a conversation together, much less to have a meeting of minds together about anything. Hang on, Jerry, let me just get a little bit more into trouble here before you go for it.
It's hard enough to have a meeting of minds. It's hard enough to have a shared community identity, values, concerns, coordinated action in this space of live human beings, you know, either as little tiles on a Zoom or living in a community together. Why would we imagine that we could do that in tech? Why do we imagine that it would be good to do that in tech? I'm not sure that it would be. I love Wikipedia, for the same reason I love Britannica: not as the arbiter of all things, but as a foundation from which to jump off and think and imagine and have conversations. And, you know, Wikipedia, I don't see this as a flaw: it declared it's going to be an encyclopedia, and an encyclopedia is a useful kind of technology. You and I can read an article there, and have different opinions about it, and have different experiences illuminated by it, because it's not truth; it's a distillation of some kind of consensus about things we know, and that consensus is always filtered through values and experience. And that's where the live discussion happens. We have shared memory through a culture, through common experience. To Ken's example: Indigenous people in the place we call Australia have shared memory going back tens of thousands of years. Remarkable. We have also played the game of telephone, and we've seen studies about witnesses, and we've seen the gorilla-and-basketball tape, and we know that human memory has its own constraints, its own drift. So, back to the original question: I'm not clear what the aspiration for shared memory is. I think it needs some more work on what we mean by that. And why do we want it? Or maybe it's not "why do we want it"; it's "what do we want that we think shared memory helps us get to, or helps support?" So there's a bunch of questions underneath the question, Jerry; that's why I like it.
Thank you. And I'm right now trying to book the final episode of season one of the Tools for Thinking podcast, with Ida Josefina of Sain in Finland and Doug Rushkoff. And Doug, when approached and asked about the topic, was like: so the topic we're winding up with is "why tools for thinking?" Is this ever going to work? Why bother? How does this work? Something like that, and I'm probably projecting a little too much onto him here, but it's a really good question. I mean, the idea is this could be a futile effort, because information is really complicated, opinions are really complicated, and you get three of those in a room and we're often running into impenetrable territory really quickly. But on the other hand, the question I was going to ask you as you were talking was: have you found, in any of the documents, that having a physical artifact, like a post-it on the wall, or a series of post-its on the wall that represent ideas or concepts or facts or whatever, or, let's pretend, on a computer, was helpful to the conversation? And might some persistence of what we agreed on, in some space, paper or informational, be helpful? I think yes. My personal experience is, OMG, having post-its, on paper or on devices, is incredibly helpful, but it doesn't happen often enough. And as Pete said way at the start, it's rare: most people aren't inclined to share their notes, and most people don't take great notes, never mind all the other forms of this kind of thing. So it is absolutely thorny and messy, and I'm trying to figure out a simple way to explain it and cut through it, so clearly I'm not that close to it yet.
To your question, do I value tools for thinking? Absolutely. I love tools for thinking; I geek out quite as much as you do, and by the way, please put a link to the podcast in the chat to refresh us on that. This interpenetrates with the question of overload. I can't even keep track of my own memory: my wetware memory, my externalized tech memory, my paper memory. I'm awash in paper. Just this week I discovered stuff that I wrote 15 years ago that I thought was really a tool and wanted to do something with, which is what I thought then, and didn't; it got lost in the array. So it's not just memory, it's access: filtered, relevant, constantly evolving access. Because we see it here: we're having a conversation, things pop up in the chat, you run down a trail you hadn't even imagined running down. So maybe it's not shared memory we're talking about. We've gradually learned that it's not about brain, it's about mind. I don't live in a bunch of tissue inside my skull case; I live in a body that's experiencing, not just neurologically but hormonally and kinesthetically and proprioceptively and in other ways, and in action and interaction with you all. There's something here that is beyond each of us, that is sort of all of us; the thinking is happening together. Thinking is not happening in here, it's happening here, and in the associations of all the stuff that we live with. So maybe "shared" is a name for an individualized perception of something that is more real than each of us. I'm riffing wildly here; I don't know if this is making sense, but I think the shared-memory question is the wrong question, too narrow a question. Think back to what we were talking about before, about the metaphors of systems and ecosystems and the dangers of metaphors:
we've just walked into a dangerous metaphor. It's rich, it's juicy, it's fascinating, and let's recognize that it's just a name we're giving to something we don't know how to name. Gil, thank you; you're opening up a bunch of philosophical questions that really matter to the issue at hand. "Shared memory" is, for me, the simplest two words I can use. I could say collective intelligence, collaborative sense-making, hive mind, and half a dozen other terms that are in the same neighborhood but don't quite mean the same thing. The mind-brain duality, and the mind-or-memory thing, is really deep and complicated, and I don't know that we want to crack open that Pandora's box right here, but it's germane. And back to what Pete said earlier, it really all comes down to humans getting together and figuring things out together in some way, so that just complexifies everything. One more thing, then I'll go ahead. At a conference a couple of years ago with Fernando Flores, somebody was trying to say something, saying "I think that," and Fernando Flores, in a fantastic way, just interrupted and said, "You're not thinking." At first the reaction is: how can you say such a weird thing to me? He said, "You're not thinking; thinking is something happening to you." I found that enormously provocative, and I've chewed on it a lot since then. Here, what I was doing a moment ago: I was speaking without any idea of what I was about to say. The words just came out, but not just from me, from somewhere; it's me in this conversation, in interaction with all of you, on top of my lived experience of the last however-many years. It's a weird thing that happens, this thing we call thinking. So that's part of this mess too. Absolutely, Mr.
Breitbart. Yeah, so I'm going to pick up where you're leaving off. All the action is in the present moment, and thinking is one dimension of being. What if the telescope were turned around, and instead of being focused on all the things external to us, were focused on how I'm doing me, how you're doing you: what are the internal and intrinsic capacities that, as human beings, based on culture and imprinting and education and everything else, we've had trained out of us, on an awareness level, on a consciousness level, on an orientation level? My experience, when you turn the telescope around and put a lot of time and attention into how am I doing me, how am I receiving and experiencing others, is that my orientation and awareness can be expanded in service of having a better handle on dynamics and flows, and on why what is manifesting in reality is manifesting the way it's manifesting. And to a certain extent, from my experience, Stuart, there's your thing about what we're talking about: we're about to run out of food, have social unrest, we're about to have this whole show collapse on itself, and we're fiddling on the deck. I believe how we're doing us, and how we might do us differently in relation to, in response to, the reality we're seeing around us, is as germane a question in saving ourselves or not as all of the history, experience, and learning, which somehow, notwithstanding it being there, doesn't seem to affect what we as a species are doing, in fact are proactively doing. And all of the conversation about aggregative, collective anything, and the same tickle around ecosystem as a metaphor: all of those things are aggregative, but in a certain way they're creating distance, going further away from the focus on how am I doing me, how is this group doing this group, collectively, and how do we change that or affect that or catalyze a shift in that. And so a lot of this intellectual and abstract and aggregative and technologically exponentially enhanced
capacity to distance, separate, and objectify our reality is responsible for why we're on the edge of extinction. And it's not a negation of that, or a devaluation of that, or a judgment of that; I really want to understand, I'm not anti anything. But it's a balancing, and there are pieces of the balance system, where the action is, where people actually source from moment to moment to moment, that are never part of the conversation. And with that, I'm complete. Just to throw something into the mix: thanks, Doug. As you were talking, I was meditating a bit on my MO, my methods of being present, and the thing Pete and I do a lot on these calls, which we've talked about before, which is we're screwing around looking things up, adding things, curating, gardening. There's a piece of that activity that takes us out of being present, which we're trying to cure with more presence on the check-in calls, but there's also a piece of it that's very present in the moment, and for me that's very aggregative, accumulative, and gets better over time; pieces of the puzzle snap into place. Another piece of what I was thinking was: if I'm note-taking or externalizing things for myself, am I separating them from me, or am I inspecting them more closely? And Naomi posted a couple of articles in the chat on a really great thread, like Michelle Huang: she fed her childhood journals, which is somebody writing things down in the very present (journaling is, I think, a very present exercise), into the chat AI and then had a conversation with her younger self. That sounds incredibly fruitful to me, and seeing insights about your earlier self and how you've changed feels like the kind of internal growth that I think you're looking for everyone to get. I'm sort of trying to sort those things out, because they're a little confusing and complicated. I don't know if you want to step back into that. I appreciate and recognize there's
a living part in that for you, in your engagement flow, where you experience pull, what you're drawn to, and I think that's part of lived experience for Jerry. And the key is, every single person has a different mosaic; there are no two alike. (Until we start replicating exactly, but that's in the future. There you go.) And the essential thing is honoring that, my honoring that in you. So nothing I'm saying is in negation or opposition or polarization to that; it's and-or, it's and-and. In some of the facets and dimensions of living being, non-intellectual, non-mental, of the body, there are these other domains where there are commonalities universal to every human being, and connecting with those, reawakening those, is where the human dimension of what I see in you that I see in me comes alive: the recognition and resonance points. You can have two people at each other's throats in a political frame who, in the face of seeing something happen right in front of them, can save somebody, becoming one. And that phenomenology of people transcending their individual stuff and attachments and preferences and centers of focus, transcending that in service, with others, without any intervening governance questions, constructs, intellectual, abstract, structural, none of that stuff: people just act together. What's the sauce in that, and how do we catalyze it globally, for people to actually reflect in their behavior the consciousness that Stuart is referencing, which is: shit, guys, we're going over a cliff; next year the supermarkets are going to be empty, it's not going to be there to buy, it's not even going to be an inflation question, it's going to be, where do I buy food? So there are pieces and dimensions that I think we need to elevate in some way, that we need to bring into the conversation and the frame, as a balance and a dimension of effecting an orientational and consciousness-level shift in awareness and
resonance and alignment across our species, one that registers on a visceral level, a physical level, a spiritual level, an energetic level: all those invisible and intangible domains that we've had beat out of us. We've forgotten; indigenous people haven't. They're feeling everything. So thank you. And Rick, if you want to pause for a little bit before starting, we can process a little and then catch up with you; you take us back in, but I think a little pause would be great. Actually, I'm going to just ask you a question; it was in the chat (it had a typo): how might we use our shared memories to build a living, ongoing story to develop a flourishing ecosystem of trust, integrity, and transparent accountability? We're going back to some of the themes that have been involved, but I don't know if, Gil, you'd like to respond to that question. I think I responded already and said plus one. Well, I thought you might elaborate. That's the human enterprise, isn't it? That's what we do; that's what we've always done. Maybe. Rick, you're muted. Well, let me reframe the question: how can we do better? I'll open that out to anybody; I don't feel like I need to say anything other than ask the question. I like the question a lot. Perfect. I wanted to throw in the name Julia Galef; she's the co-founder of the Center for Applied Rationality, and she uses a metaphor of soldiers and scouts. What I wanted to say is that sometimes I find there are too many soldiers, and they wind up killing the scouts. Soldiers are the ones who can't take in information that goes against what they already think, so they kill it. Jerry brought up Daniel and Grace earlier, and I know from communicating with both of them that this sort of ties in. I'm not saying this group does that, but what I'm saying is, in conversations about trust and safety and memory and all those things, I think we look
for that; we want that scout mentality. Now, it's not likely that we're going to be able to train every soldier to be a scout, but I think it's really important that at the top of the pyramid, or at the gates to information, we have more scouts than soldiers. I always think about how Jon Stewart was the most trusted man in America, and he was able to talk to Bill O'Reilly; they had conversations; he was trusted by everyone, and I think he's a good example of this scout mindset. Recently I was listening to him talk to a group of people about COVID, and we all know what a dicey topic that was, but it was a great conversation. In general, I think the problem is that many people shut down totally; they close the gates to all information because they find one thing wrong, and, I'm not saying this well, we have to be a little more open to siphoning out what we're not 100% sure of instead of saying it's totally wrong. We have to first find what we agree with, that starting point, and work from there. That's what I love about these conversations: we get to hear what's coming from each person, and let's face it, at least for me, who the person is that's giving me the information does carry a certain amount of weight as to how much I trust it. So, I'm complete. Thank you, Stacey. Where does that put us? Who would like to step in and take us in a different direction, or add a spin? Stuart? Yeah, it's funny, I had this on my desk and I just threw it out, and I think this is where we're heading, and maybe there's nothing more to say about it, but it's a great quote from the novelist and naturalist Peter Matthiessen: when we are mired in the relative world, never lifting our gaze to the mystery, our life is stunted, incomplete; we are filled with yearning for that which is lost when, as young children, we replace it with words and ideas and abstractions, such as merit, past, present, future, our direct, spontaneous experience of the thing
itself, in the beauty and precision of this present moment. And in this present moment is where we feel the connection, and the juice, and, need I say, the love of each other that ties us together as human beings. And given the kind of precipices we're all standing on right now, regardless of whether you believe there's going to be some utopian future or a dystopian one, in the present moment what we need is each other, and the moment of connection between us, and the love that might flow and emanate between us. I was just thinking about that after maybe spending a year or so on these calls: there's something here that everybody brings to the party. Everybody brings something to this party, whatever this party is, and we keep coming back and keep showing up and keep being drawn together, and that's, I think, the mystery Peter Matthiessen is pointing to. Beyond those abstractions, the connection is kind of where the juice of life is. Stuart, thank you. Pete, super swift on the quote find. That was Klaus; off to you. Yeah, I'm coming back to ChatGPT, because I'm actually really up on what is being done with this. I'm putting a conversation into the chat here, but I watched another one, which I lost and can't find, where the presenter was using ChatGPT to take a topic and start by developing an index, and in the index ChatGPT laid out connections and relationships you wouldn't have thought of automatically, and then he went into the individual index items and asked for the next level of detail down. It was stunning how this widens the conversation instantly. By asking intelligent questions you can claim authorship, because you're doing research and ChatGPT is your research tool, and you develop a paper, not a brilliant paper, but a paper, in a very short period of time. And what that does is create an understanding of system, right? Because people who don't even think in terms of systems, simply by
exploring the topic and asking ChatGPT questions that have structure to them, go through this process of exploring the topic and become aware of: oh, I didn't think this was connected; I didn't understand that this was part of this topic. So I think this will revolutionize the way we think and the way we treat information, and I think we're not yet prepared for what it really will do to the human mind and to our understanding. If used as a tool, if used to augment ourselves, our minds, our exploration, this is going to be amazing. The trouble, of course, is, and I was watching a conversation with Schmidt, the former Google CEO, with some other guru, and Schmidt was saying: you have to understand, anytime something like this gets introduced, it will be misused; it's automatic; you have to assume that somebody will mess with it and abuse it. And the AI, ChatGPT, can't tell right from wrong, so if you feed it bad information, it will use that information as if it were real; it just doesn't have any safeguards in this regard. The safeguards have to be in the way the algorithms are constructed and protected, and someone somewhere is going to mess that up. So that's the downside. And the other thing Schmidt said that really struck me is he was making an appeal to software developers and to people in decision-making positions, whether at Google or Facebook or Twitter or anywhere, to support democracy. He was saying: when I look around the world today and see what is happening in China and Russia and the Middle East, where people are messing with information, messing with democratic institutions, I'd much rather stick with democracy, as imperfect as it may be, and I think we have a moral obligation to support it. So yeah, I think this will be a profound year, 2023. This is what is happening right now in the sphere of thought, enabling knowing and sense-making with such profound
tools; I think it will blow us away. I hope so, because we are at the precipice of a lot of things that we don't want to see happening. From your lips to God's ears. But I think it will blow us away in the process, as they say. I'm going to take two minutes to riff on something I've said a couple of times before on these calls, just in case, because I think it's useful, which is about computer creativity, with the intention of tweaking the conversation a little toward Pete's topic of AGI. DeepMind built a Go-playing program called AlphaGo, which it trained using historic games of Go between experts; there were ancient games, there were a lot of games, and that program managed to beat Lee Sedol, the world's best Go player. Now, chess programs have been beating chess masters for some time, but Go is a more complicated game, so that was kind of interesting. But then what they did was take a version of the same software and train it on just the rules of Go, which are very simple: you can put stones on a 19-by-19 grid, there's a way you take territory, that's it, and then you count up how much territory you've won. And AlphaZero then beats all the human experts, beats AlphaGo, and goes into unknown territory; there's a lovely chart of the curve still going up, above the limits of what we knew before. That, to my mind, is computer creativity, and it's creativity in part because the system didn't have preconceptions about what to do, didn't have the frames of historic experience, the "what we always do in this situation," all the things we do. And so I was about to say that ChatGPT or things like it might in fact be free of human preconceptions, but wait a minute: we've trained these things on all of the human vagaries of the written body of work we have out there. And a piece of this conversation was about, when we externalize things and leave them as writings,
what is the effect on society? So these engines, these large language models, have absorbed, not recorded verbatim, but absorbed, the patterns of what makes sense as words in the world, and can then come back and change a science-fiction plot from steampunk to solarpunk, for example, like that, as a concept. I started my career in tech explaining neural networks to people, and one of the ways I would explain it is: a neural network could be really good at figuring out what kind of maple leaf this is; it could look at a bunch of leaves and go, oh, these are probably maple leaves and these are probably not. You do not want a neural network doing your accounting; it's not designed for that, not good for that; use something else that actually understands integers and math. So we're in really interesting unexplored territory now: these large language models are available, they're absorbing what civilization has written, and now they're absorbing what they themselves are creating. We've had a little bit of this conversation before: is this the end of the easily accessible knowledge space? Because now we're polluting the knowledge space, adding to the indexes all the crap we're generating randomly, that's hallucinated and not actually real. What's that going to do? I just wanted to put that in the room, and maybe with that pass it to Pete, if you want to head toward your topic; we don't have a bunch of time, but we've got enough. It's interesting that creativity is the thing you would grab onto for AGI; it's one path in. And the other thing that's really interesting to me: I play around more with Stable Diffusion, the image generator, than with ChatGPT, so I can talk a little more about creativity in AI image generation and where I've ended up. It's funny: image generation has a reputation now that the way you use an image generator is you imagine a scene and then you
want it painted by Borla's, or Borla's mixed with Picasso; that's the kind of thing you ask an image generator for, and it happens a lot. I've gone a different way. I accidentally typed three words into Stable Diffusion, and wondered: what happens if I just give it three words, a very under-specified thing? It wasn't even asking for a thing; it was just, evoke something in response to these three words. Sometimes you get really boring, trite stuff, but with some combinations of words you get really creative stuff. Whenever we get into talking about human intelligence and machine intelligence, I hear our friend Mark Carranza in my head; Mark is extremely skeptical of machines doing anything like thinking or knowing or remembering or being creative, so maybe I'm talking about something different than human creativity, but I've been startled by these image-generating things dreaming. Essentially, what I'm doing is under-specifying what to draw, so it comes up with something that fits what I asked but is kind of random, and sometimes, for the same three words, it will come up with a misty forest scene, or a weird kind of wall with a portal through it looking off to a better world, or something like that. Another thing I've seen happen with Stable Diffusion is when I specify a longer prompt and am actually trying to get it to think in a direction. I've got an avocational interest in images and photography, a pretty good eye for photography and composition, so it's really interesting to me that, while it's always kind of hallucinating, it'll hallucinate scenes that I'm pretty sure people never thought of, and do it well, because Stable Diffusion knows what good images look like, and I always ask for good images rather than bad ones. It can kind of remember an image out of the billions of images it's seen, and by
remember, I mean synthesize: it evokes a scene out of its huge memory that is different than anything anybody else would ever have seen. So part of the fun I have with Stable Diffusion is that it feels like exploring a new land, because I'm always getting it to make things people haven't seen before, and it frustrates the heck out of me knowing I can't run around and post on Twitter and say, look at this thing that only I've ever seen, imagined by a machine. So creativity is an interesting thing, and it's interesting that we thought computers wouldn't be creative; they're really good at being creative, I think, if you drive them the right way. Jerry, you kind of said it: you wouldn't use a neural network for doing your accounting, but you would use a neural network for dreaming, and machines dream really well now if you set them up and use them the right way. The other things I'm more interested in for AGI now are theories of mind: in a conversation, what is the other person thinking, and how might I help make a better way, or think differently; how is their thinking making my thinking different? Just to say it real quick: I think AGI is about 20 to 100 times more complicated than ChatGPT, but I think that's not a very big number, and just guessing, that sounds like a couple of years in Silicon Valley tech time. And I think the way forward is, we haven't really hit emotion, and we haven't really hit how other people are feeling; I think that's the next big step AGIs will be taking, shepherded by humans, hopefully good humans rather than bad ones. Again, from your lips to God's ears. Pete, thank you, that was awesome. One thing you triggered at the very end, which I'll put in, and then we'll see what everybody else wants to talk about: in the field of AGI and robotics and all that, and I'm going to be very binary about this, there are kind of two approaches. One is emulating humans as the goal, and the other
one is: hey, let's just make this the best thing it could possibly be. I've always been troubled by the emulating-humans route. Why would you want an android that stands and has the physical aspects of a human, when that's a cranky way to move around in the world? Spiders can crawl up walls; they can manage territory much better; they have many more points of contact; snakes can get around; et cetera. Why are we trying to emulate humans? And then with AGI it's: until it passes the Turing test and reasons exactly like a human, I won't be happy. I'm like, you know what, in certain areas computers are already way outperforming humans; if we manage to patch this together into something spectacularly different and better, we've generated some form of intelligence. And then one of the little things that crawls in there, sorry, putting something in so I remember it, is common sense: does the system know enough to come in out of the rain, that sort of thing? Common sense is hard; reasoning like a five-year-old is actually difficult, because there are some things that are hard to model that way. And the last thing I'll add: long ago, Rodney Brooks wrote a book about robotics in which he said, hey, the people in robotics are working really hard; they're trying to get a robot to see a scene, then identify there's a chair, there's a rug, there's a pillow, then calculate a trajectory through the room and make its way through, and computers were nowhere near powerful enough to do that. He took a couple of very simple algorithms and basically said, I'm going to make a little worm-like thing, or centipede-like thing, that knows enough to bump and turn, and it basically senses and responds as it goes. And these robots he made, their behaviors were very animalistic, very naturalistic; he was getting really, really great
behavior with very little processing power, by taking a completely different approach. And when I look at how cars are trying to do autonomous driving, identify the scene, map everything, make sure there's no child or bicycle running across, et cetera, I don't know, there may be other ways to get to this AGI kind of thing. Sorry, long riff. Pete: I wanted to relate a quick odd thing. I've spent a lot of time asking Stable Diffusion to imagine something based on three words, so I'm watching it dream a lot, and, for better or worse, not that this was my intent, I've trained myself to see how it thinks in patterns: where it puts stuff in the composition, the larger-scale things and the smaller-scale things that make a forest scene, for instance, or a landscape with hills. It'll do a forest scene or a landscape with hills or interesting cathedral things by itself, on three words that have nothing to do with forests or hills or cathedrals. But I have this weird thing now, and I'm not sure that I like it: when I'm out in nature, walking down the street, looking at shrubs in my suburban, partly desert place where I go walking every day, I look at trees and shrubs and I can kind of see the algorithm nature used to make the self-replicating fractal thing happen, much more than I used to. I think what I'm recognizing are the self-replicating fractal patterns that nature uses; it's got different ones, but it uses them in much the same way, and I think Stable Diffusion is doing something similar. Watching it has trained me to see it in nature more. It's an odd thing; I don't know if it's good or bad. I'm going to recommend everyone watch a video after we hang up: Jonathan Coulton's song "Mandelbrot Set," which is pretty funny, I
think, about Benoit Mandelbrot and what he invented, and it's like a paean to what Pete was just saying, in some sense. Any closing thoughts? We're near the end of our usual 90 minutes. Anyone else on AGI? So, a round of voting with your hands: raise your hand if you think that within the next decade, and we're at the beginning of 2023, we will have what we consider to be AGI available. Right now we have ChatGPT available, and we don't think it clears the hurdle; will there be some system in 10 years that seems to clear the hurdle? Raise your hand if you think yes. I have a problem with that question, which is that whenever computers get better at something, we say, well, that's not it; we move the goalposts on human-level intelligence. They're already better than humans at most things in narrow areas; they're kicking ass, but the areas are getting bigger. A robot couldn't paint; a robot couldn't remember; a robot couldn't tell me a story; a robot couldn't translate languages. Personally, I'm running out of things robots can't do. How far can you move the goalposts? My answer, and I may be wrong, is that we'll have human-level intelligence in the next couple of years, and we will continue to deny it, probably for the next 10 years. Kind of like Star Trek: The Next Generation and Data. Whoever met Data: he looked human, though he had strange eyes and white skin, and he acted very human, and he was amazingly technically proficient at playing his violin, but he didn't have soul. So maybe that's part of it: we can get that level of intelligence, but we ain't going to get the soul. I've never thought of Data as a modern version of Pinocchio before, but you just did that. Klaus, then Gil, then Stuart. Yeah, there's another phenomenon that is really emerging, and Yuval Harari actually pointed it out: he says there is a split in humanity at the species level, and it's caused by information. So, for example, my daughter has three little girls, and the way they
are being stimulated is incredible, and there are now AI tools directed at children, so children don't learn anymore the way we traditionally understand; they learn to use tools, data tools, how to find information, how to ask questions in ways that get answers automatically routed to them. Now compare that to the couple million children in the United States who live on the street, or the 40 million people in the United States who are food insecure: children growing up traumatized by lack of food, with zero access to the internet, zero access to any form of training and education during their formative years, when the brain develops. So you will have a split where people can't even talk to each other anymore, because their brains, as they evolve and develop, are going in completely different directions. It's like Neanderthals and Homo sapiens. Thank you. Yeah, my question is: who owns stuff, who owns what? Hans posted "who owns the future," and I think that's an enormously important question here. We already see it in the IP discussions around ChatGPT, and what some people see as the appropriation of the work of creators and artists into this shared matrix, which is great for me as a user; as a creator, maybe not so great. And, back to the ecosystem metaphors at the beginning of the call, we're in a world of such lopsided trophic-web structure; what's the latest number, three Americans own as much as half of the rest of us? So these systems are enormously unstable and becoming more so. People have talked about techno-feudalism; that wasn't the antidote to late capitalism I was looking for, but it may be where we're headed. So that whole set of questions seems to be a very important one here. And someone passing by, hearing part of the conversation, offers that intelligence is not what makes us human. Love that. Briefly, I will add to the question of who owns the future: I'm troubled by the whole notion of
ownership, and I think that as a Western culture we are way too obsessed with ownership. In my brain I thought of ownership versus stewardship, because if we acted as good ancestors, we wouldn't be owning and sequestering and separating things; we would actually be taking care of things so that we might all benefit from them.

Go ahead, Gil. I'm inclined to agree with you, but we're in a world of capitalism, where ownership is the heart of everything. But that might be the lever that pries the lid off capitalism, by the way. Let me just add one other thing to that. One of the things I've been learning over the years is how much capitalism is rooted in violence. Sven Beckert's book Empire of Cotton: the evolution of the last several hundred years of capitalism is founded in war capitalism and the physical, at-gunpoint seizure of resources. If you look at the enclosure acts in England, in what was it, the 16th or 17th century, people were forced off their land at bayonet point and into the factories. So talking about ownership takes us into conversations about power, and not just aspiration but the real power dynamics that we live in in the world today. So yes, ownership to stewardship; yes, commons; but how do we get from here to there? And the tools that we are fascinated by, how to say it, I'm trying to learn to be very careful about predicting, so I don't know where it goes, but it looks like they have tendencies to take us further down the rat hole and tendencies to take us out into the world, and they're both there, right?

Thank you for reminding me that capitalism is rooted in violence. I had kind of missed that in some weird way, and I just went and quickly found a bunch of different things, like, whoa, I need to weave this together, it's important. How did I forget that? Stuart. Yeah, so Ken can have something for us on the way out. Go ahead. So one of the places I've been noodling is stepping out of the matrix, and the idea of ownership is just so
much a part of the matrix. Think about Manhattan being bought for $24 from the Native Americans, because the indigenous wisdom was: wow, they're going to give us something for this nothing. And also this notion of whether or not AI can be as intelligent as human beings. God, I hope that's not the aspiration, because we certainly need more than intelligence; we need wisdom. Our intelligence has gotten us to this place of morass, and maybe there's enough information, intelligence, and wisdom to get us out of the morass. Or, back to Einstein: the thinking that got us here is not going to get us there. Thank you, Stuart.

We are at time. Ken, Gil, did you have a last word? I have a, before Ken's last word, just a reminder that when we talk about humans, we're not all the same, right? Whether it's men and women, or Eastern or Western, or advanced or indigenous, there are many, many different ways we are. And so the notion of getting something that is human-like is maybe another dangerous question. Thank you. That goes right into the decision-making processes of people trying to emulate humans. Absolutely. Ken?

So this is from the opening of Chesapeake by James Michener. This is how everything begins. "The ultimate source of the Susquehanna River was a kind of meadow in which nothing happened: no cattle, no mysteriously gushing water, merely the slow accumulation of moisture from many unseen, unimportant sources, the gathering of dew, so to speak. The beginning, the unspectacular congregation of small things, the origin of purpose. And where the moisture stood, sharp rays of bright sunlight were reflected back until the whole area seemed golden and hallowed, as if here life itself were beginning." This is how everything begins: the mountains, the oceans, life itself. A slow accumulation, the gathering together of meaning. That'd be a nice way to start the year. Love that, that's beautiful. Thank you very much. Ciao, y'all. Thank you for another lovely
call, the gifts we give each other. Amen, indeed. And Denise just posted some real good to the chat. Always a good idea, always room for real good, Denise. Really, like you said you had to, thanks. Stan: yeah, thank you, I've been monitoring something on another screen and gotta go, but they probably feel highly, highly ignored. It's been a pleasure, thank you, and I hope to join you again if you'll have me. Thank you, nice to see you, really. It's just, anyway. And you too, Stuart. Okay, ciao, ciao, everyone.