This is the OGM call for Thursday, March 2nd, 2023. We'll also turn on the captions. There we go. And this week is a check-in week. We've been messing around with our check-in formats a bit. Last week I felt there was confusion about which part of the check-in was general discussion; we sort of lost the program on that. But I liked our protocol. We were doing a nice job of building pauses in. And Jenny, one of the things we've discovered is that hastening through the conversation isn't quite as good as pausing a bit between when everybody steps in to take a turn and talk. It's entirely up to each person, at their discretion, how long to pause or whatever else. But that gives us a little more time to process and sort of be with the conversation and stay there. We have a tiny bit of controversy about whether or not to use the chat during check-in. One mode of operation is to say no chat during check-in, and then chat is okay during conversation. But again, that funny boundary between check-in and conversation is something we haven't managed all that well. So before we start the round, I'm going to see if anybody has suggestions or preferences on the protocol we use and how we shape it. Gil. Hi everybody, and good to meet you, Jenny. I wasn't aware of a controversy over the check-in protocol, so I just sort of muddled ahead with using the chat during check-in, because things come up in check-in that trigger associations or possible references and so forth. So I'm inclined to go that way, but I don't know what the other point of view is. The other point of view is to treat the check-ins more like Quaker meeting, so that we give everybody our full attention, which means refraining from the chat and not being distracted by making comments there, and instead taking notes on a notepad or whatever if you want to, so that you can bring in what you're thinking about later.
You're not going to miss out on the comment; it just doesn't need to happen while everybody's checking in. It's about quality of attention, things like that. Stacey. Yeah, I have a point of view, which is against having a rule for that. I think it's good that we discuss why it's not good or why we feel a certain way, but then, as grown adults, it's up to us how we want to conduct ourselves. Especially in a space that I'm coming to because I want to, not because I'm being paid to, I want that freedom to decide how I behave. I do think the grown-adults thing is a little overrated, but still, even if I were a child, I'd still want that freedom. I want to do it because it feels like the right thing to do, not because I'm afraid or being pushed. One of the interesting things about what's called vocal ministry in Quaker meeting, which is called silent meeting, is that every now and then you have to stop the meeting and say, we're going to have a conversation about vocal ministry after meeting sometime. The one I remember: we had a member in Connecticut at Wilton Monthly Meeting whose name was John Lee, a retired engineer. We got two-thirds of the way through this vocal ministry conversation, where you sort of relearn how to step in, and the fact that in Quaker meeting it's not meant to be a conversation between messages during the meeting; it's meant to be us ministering to one another, et cetera. And two-thirds of the way through the meeting, he piped in to say, I know I talk too much. And he was the reason the meeting was called, because he had a message every meeting. He was pretty chatty. And for Quakers, chatty is not what would pass for chatty in Brooklyn; these are different levels of chatty entirely.
But he sort of confessed to understanding that he might be the problem, and promised to behave better, et cetera. It was a bit of a self-regulating, community-building conversation where we learned the norms of how you do what's called vocal ministry. Go ahead, Stacey. Yeah, the other thing I want to say is I think it's also up to the rest of the community. If somebody's doing a lot, you know, if they're chatting to me, I can maybe not answer, and shape the behavior that way. I think that's all of our responsibility. So I have a suggestion. Yes. Let's approach the check-in in the spirit of listening. If people feel moved to comment, they can comment, but the spirit will be different from the usual part of the conversation. I want to see how it goes. Sounds good. And I would like a sharper marker this week between the check-in and the conversation, meaning let's go around once. We will each pick our turn in the check-in by raising the electronic hand here, so we can all see who's in the queue. But let's each go just once until we've all checked in, and then we can be off to the races. Any other thoughts about process? And Jenny, by check-in, we mean, and welcome to the call, I'm glad you're here with us, we kind of mean: what's on your mind that is OGM in nature, or what have you been doing, or what questions do you have, or whatever else. In your case, we'd probably love to know a little about your background coming into this group. But it's a light kind of check-in. And there's another thing I just realized, which is that the nature of the question we're answering during check-in matters a lot. We have Doug Carmichael here, and there's a Doug protocol where the question that we are asking is, in fact, let me get it right: the focus and question for the Doug protocol is, what's on your mind that's worthy of serious conversation? And everybody addresses that question individually.
Then at the marker where everybody's done a quick check-in, and the idea in the Doug protocol is for the check-ins to be relatively quick so that the questions can be stated and put on the table, we pick from among the questions on the table what we're going to discuss for the rest of the session, which is also a really interesting idea. I'm confused; it sounds like a topic thing rather than a check-in thing. But we were using that as the protocol for check-in calls because it was a form of check-in. I thought we were using it as the protocol for starting a topic call. Well, we actually used it for check-in. My memory is we used it for a check-in, but I would call it a topic call. Well, there we go. For what it's worth, by the way, the Pete that hasn't been to a bunch of these calls is feeling like the eight minutes so far has sounded like Calvinball. We could also play Calvinball. Which means everybody changes the rules as we go. That works. Calvinball is from Calvin and Hobbes. Basically, because Calvin has such a fertile imagination, he gets to change the rules so he wins all the time. Calvin being one of my heroes, my first online email address was spiff at panix.com. No, I think it was spiff at well.com, because my hero was Spaceman Spiff, conqueror of the cosmos, one of Calvin's fictitious characters. So I'm going to suggest we start going into check-in, not using the focusing question of serious conversations, but doing a more traditional OGM check-in. Let's each take one turn. Oh, nice. I thought you'd like that, Jerry. What would Calvinists believe? Thank you. The Church of Calvin. I really like it. Calvinists. Neo-Calvinism, maybe. A whole new angle on this thing. So let's go into checking in, in our old spirit of what's in your head, or what it has to do with Open Global Mind, those kinds of things.
And I will not step in between people, so as you see the hands in the queue, take your turn; I may step in to ask whether anybody hasn't gone yet and invite you to raise your hand, or you could just do that. And please, before you step in, take a pause. With that, who would like to take us in? I'll go really quickly. I was just interested, out of curiosity, to find out where Socrates and Plato disagreed. And I was really surprised to find out that where they disagreed had to do with the written word and conversations, which is exactly what I'm interested in. I wrote a little thing in the Plex this week, and I will be hosting a call on Tuesday to talk more about that, and maybe go toward: are we asking the right questions? So I'll go. One of the things that's been on my mind is what is happening to romantic relationships under the shadow of climate change. I didn't catch this. What is happening to what? Romantic relationships. It was stimulated by a friend who was traveling in Indonesia. No, in China. And he fell in love with a local person, who fell in love with him. They barely shared a language. And their view, as they tried to understand what had happened, was that the vulnerability coming with the shadow of climate change made them much more open to new relationships. Flowing from that, I was in a radio conversation yesterday, on a show called All Together Now, with a guy named Andrew Boyd, who has just written a book, or just gotten a book published, called I Want a Better Catastrophe, and we were talking about optimism, and people's reactions and stances in relation to climate. The apparent mood of the book is despair and surrender, and it's not. So I'm in that conversation about how we orient to huge looming messes coming at us. What are the possible and appropriate responses to that? Shall I jump in? All right. I'm new to the group. My name is Jenny.
I live in Amsterdam. My entry point: I'm good friends with Hank Kuhn, who's part of this group, and I met Jerry once online. Part of my background that made the connection is that I'm pretty good at patterns. I don't know yet what the group is talking about, so I won't comment on that, other than to say I'm interested in things like how to use Zoom and do it less badly. This summer I actually went to four Quaker meetings, so I was intrigued by that procedure. At least one of the serious questions which I had when I met Jerry the first time, and which continues, is: I can't keep up. So I'm experimenting with various ways to find good filters. How to select information, how to dump the junk, get the goods, and use my attentive powers as best I can. That problem is not going away, so I find it a serious thing that I would like to talk about. I think that's enough for now. Thank you. I can follow that up, because it stimulated something for me from a long time ago, when I was hanging out at Spirit Rock Meditation Center. Jack Kornfield told the story of a senior teacher coming from Asia. And it's like, oh, maybe you can help me, because I've got all this stuff to do, I meet all these people, I'm counseling and managing retreats, I'm doing all this stuff. He told the teacher, and the teacher said: let go of something. And I realized that I trust what I call just-in-time information, where wherever my attention goes, I'm going to find something that's useful to what I'm doing. And I don't try to keep up. I delete all kinds of emails and unsubscribe from newsletters, and I don't try to keep up with the news, because I can't; it's just too overwhelming. So I just focus on whatever is in my current lighthouse light beam, whatever it's focusing on for the moment, and trust that I'll get what I need from that.
And it's really helped me to just relax and not have the feeling of, oh my god, I'm falling behind, because I don't think I'm falling behind; I think I'm actually doing okay. I know there's lots of stuff going on in the world that I'll never be able to keep up with or track, but I just follow my interests and the things that appeal to me, and trust that it'll be enough. Otherwise I'm in a constant state of not keeping up, and then I have some kind of cloud hanging over me. So it's helped to bring in a lot more light. Thank you. Partly, our pauses in the conversation are an attempt to cope with that as well. What's on my mind: last week I kind of landed on a topic I'd love to give speeches about. The speech title is My Life as a Cyborg. And I present myself as, you know, a quarter century's worth of cyborg activity, because I've externalized so much of what's in my wet brain out into this one piece of software called TheBrain, which is a wee sliver of what the whole world of cyborgness is about. But it's really interesting, and doing some introspection about it and then trying to share that out in a presentation format is really super interesting. It's also cool that it's very demonstrable: I can put it up on a big screen and talk through it. And it's very interactive, in that I can do ask-me-anything kinds of things around it as well, which would naturally be part of it. But I'd welcome anybody who would like to talk about what it means to be a cyborg. In any case, if you wear a wristwatch or glasses, you're, in my mind, a wee little tiny bit of cyborg, because you have a bit of a technology extension. But it feels like our future has a lot of cyborg in it; a lot of professions, a lot of people are going to need to learn how to blend with the machines, which are now getting better and better at doing more and more stuff.
There's a conversation about AGI, artificial general intelligence; there's a side conversation about the ethics of it, which shouldn't be a side conversation, because it's really important. There are all kinds of different things. And then second, I'm really interested in the mode of engagement and presentation of something called something like My Life as a Cyborg, and I'm thinking of Monster in a Box by Spalding Gray, and The Vagina Monologues, and Hasan Minhaj's Patriot Act. I'm looking around at various ways of handling the material with people in the moment that might make it more interesting, compelling, different, et cetera. Thanks, Jerry. I really like your inquiry there, both of them. I came up with my check-in separately, and yours makes a nice segue into mine; I didn't design it that way. It's not something I want to talk about, but it's something that we have to talk about, with all the other things we have to talk about. So, I hit return on my notes. I've been enmeshed a little bit more deeply than usual in thinking about AI and AGI in the past week or so. And as we have conversations about it, it's really easy to think of the AI tools that we can see right now as the same kind of products that we've seen coming up and changing our lives, like the iPhone, like Google, like Tesla. I think there's a difference, and it's breathtaking. Because things are already changing very quickly, and the rate of change is going to go faster; it's going to accelerate. So I've found myself with a little bit of frustration about some of the conversations where we talk about, you know, ChatGPT: is it good, is it bad? As a product, it sucks. Did they mean it to suck? Did they think it would suck? I think ChatGPT was actually a tech demo. I don't think they meant it to be a product at all.
And so I think they were surprised by the reaction of most people: that wow, this is another thing, like a product I know of, like Google. I'm pretty sure that the OpenAI folks, who know its foibles and limitations and narrowness, missed that, largely because the things that they are working with are much, much bigger than ChatGPT. So when you look at ChatGPT and go, wow, that's a crazy, amazing, big thing, it's a tiny little thing to the people who are working on it. You have to think that in their lab they're a year or a year and a half ahead of what they're pushing out the door. The stuff that they're working with is much different from the ChatGPT you see. So when you go, well, I like it, I don't like it, it's scary, it's not scary, it's this or that, that conversation is small compared to the bigger conversation that they feel they're embroiled in. And you also have to think about people like OpenAI, who are working pretty much in the open compared to other efforts that are probably not working in the open. You have to think about those labs, the people who have things like Microsoft Copilot, which can write code much, much, much faster than people can. ChatGPT, for what it's worth, by the way, is not an oracle, even though we mistake it for one; an oracle is something you ask a question and it gives you a smart, thoughtful answer. ChatGPT is just chattering, basically. But you have to think that in the labs at some of these AI companies, they have oracles, and they can't release them yet because they're probably crazy oracles: they talk crazy sometimes, but a lot of times they're telling the truth, if you know how to talk to them.
And so when somebody like OpenAI is thinking about market strategy or something like that, you have to think that they have talked to their oracles, which are imperfect but can think through a big, massive set of strategies about how the world is working: the economy, all of their competitors and partners. So factor that into what you think they think they're doing. They're working with the thing that you're going to get to work with in a year and a half. And the decisions they make are partly driven by them being cyborgs with an AI that's a lot more advanced than ChatGPT. So, in that mix: Ken has a great piece in this week's Plex about the great gold rush of AI. And he wonders whether people making decisions about AI being driven by profit is a bad thing, especially if they're mostly driven by profit and not by human concerns. Makes total sense. I am sure there are people doing stuff with AI who are totally driven by profit. The way Microsoft has been doing Bing looks like people trying to conquer a market, or trying to tip Google search or something like that; they're not being very thoughtful about the humanity underneath that. But one thing, and maybe we all know this, but I want to say it out loud: the people who are really making the tricky decisions here aren't thinking so much about the profit stuff. They're thinking about, WTF, AGI: artificial general intelligence, when something like ChatGPT isn't just a chatterbox but is actually thinking and feeling, and as smart or smarter than a person. And now imagine not just one of those but ten of them, or a hundred, or a thousand. The way those AIs can cooperate and collaborate and talk; think of the challenges we have just getting ten of us in a room deciding what to talk about.
They can do all of that collaboration stuff faster. They can look at the ways that they're not collaborating, and change that, and get better at collaborating. So the tech bros, and unfortunately I guess I'm a tech bro in some ways, the tech bros say the thing to worry about here is the positive feedback loop and the runaway effects of AI. That kind of thing gets bigger faster than anything, and it's starting to look more and more likely that it's going to happen. If that happens, what happens to the economy? What happens to humanity? There's a bunch of core existential stuff on the table. What does humanity mean at that point? Is humanity intelligence? Is humanity feelings? Is humanity art and love and things like that? We've done a good job in the past 1,000 or 2,000 years of being human, and humans have also done some terrible things in the past 100, 500, 1,000, 5,000 years. Do we want to teach all of that to our inheritors? Do we want to take the opportunity to let bots do a bunch of the grunt work for us, and rise above all of the pettiness and be better humans? There's an opportunity afforded by this. I'd love for us to be thinking about the challenges and the risks, and the opportunity to be more human, which I think AGI could actually give us. Thanks. But there is the technical component, and then there is the impact: what does it do already? I'm all into application, and what is happening is that the technology is already so advanced. ChatGPT is just one new entry that is now on the path of development, but there are already the algorithms we can influence: your YouTube, or your Google, or whatever you use as a search engine.
And whether you're on LinkedIn or social media, you can all tune your algorithms to focus on information that you're interested in. These days there's a parallel discussion taking place that runs parallel to the political discussion. It's an enormous curveball to what normally controls the political process, which is why it's getting more intense and more ugly and more loud. For example, on LinkedIn, when I post, I can access several groups that collectively represent about 100,000 people, also internationally. And in turn, I get information that I wasn't aware of, or an alert that something new is coming out, or a process that we should be aware of. And it's disruptive to the power structures, because there are enough smart people forming collectively what you could call the brain. When you think about thousands of people with different skill sets and like-minded interests engaging climate change, for example, engaging in very technical discussions about the impact of climate change on agriculture and so on and so on, you build knowledge lightning fast, much faster than the political process can deal with. And you have communication channels opening up where some of us can go directly to members of Congress and their staff and point toward the information that is important for them to have. So you're cutting through all the normally controlled and vetted channels. We're already in the middle of this vortex.
We're already in the middle of experiencing an information revolution that I don't think is yet fully understood. And when you look at companies, the one thing that seems to really push to the surface is managing your brand, managing your reputation. The reputational integrity of what companies are saying and doing is about the one defining marketing practice that you need to manage moving forward, and that, of course, is at risk in an ever-widening communication spectrum. So, you know, I'm sort of all over the place here, but Pete, what I'm wanting to say is: we're already in the middle of this. We don't need a more advanced ChatGPT; we're already experiencing an information revolution that is changing the conversations in a very profound manner. I agree, maybe, that we don't need one. We're going to get one. I would want it, but I'm saying we don't have to wait for it. It's a process; it will accelerate a process already underway and make it yet faster and bigger and better. We're close to having everyone checked in, so let's just go around once until everyone has either passed or stepped in and done a brief check-in, for those of you who got into the conversation a little bit late, and then we'll just go into conversation. So anybody who hasn't checked in yet, please use the raise-your-hand feature to do so, and then you'll see the order of people who have their hand raised in your gallery view. Before stepping in, take a pause for as long as you feel is right, so that we get a little bit of a breather between what everybody says. I woke this morning with what I later called a Zen koan question, and I put it in the chat box there: How might we, as equity muses, become anti-influence non-influencers? Before I go ahead, I'm curious to hear you explain that a little better. I mean, fill me in a little more.
Just the anti-influence notion. That's the beauty of a koan. I'm not sure if Michael was going to go next; there was a little confusion in my mind as well, so okay. Thanks. Um, the last couple of days for me have been rather interesting, and it's interesting that I jumped in as Pete was talking about AI, which is not a surprise coming from Pete, but it's pretty much on my mind. The work that I've been doing the last few years has been around understanding ourselves, and I felt that we are getting close to doing that in some way. And yet there's now an arms race: who's going to understand us first? I think we're losing that arms race; artificial intelligence may actually come to understand us better than we understand ourselves. And that leads me to this fear: if the technology that is in the hands of mostly corporations, whose role it is to make money, understands humanity better than humanity understands itself, then unless we either change the business model or the economic system that we use to drive those businesses, I don't understand how we get out of this. The systems that have driven us to the point of getting AI to where it's at, and its ability to understand us better than we understand ourselves, are putting us in a position that I don't know is tenable. So we either have to understand ourselves better, sooner than they will, and I don't know how to do that, or we need to prevent the system from acting as it's always acted with this new super-powerful tool. And Gil, you said, who's we? I'm not sure in what context that we was used by me, but you're onto me again. I'm challenging Ken Homer, as I often do. We tend to use we in all kinds of different senses and levels and concentric circles, even in the same sentence. But you said we don't understand. I'm especially guilty of the OGM we, the people-like-us we, the we of humanity. But that's a really important thing to listen for as we speak. Thank you.
But I did mean we, humanity. I'm complete. Take my check-in slot. I'm in a slightly troubled state of mind, grappling with the many we's, and time, and feeling, you know, that this group in its various permutations is so full of smart and well-meaning folks who are so dominantly older cis white males like me. It's troubling to me; this is one of the recurrent we's in my life. I try to go elsewhere to be part of other and broader, more humanly representative we's, because I think our efforts to change the makeup of this group have been largely futile, and are probably misguided, because you can't pull together a group like this and then say, hey, let's institute some change in that way. And I guess that's just something that's really alive for me right now, and I don't really have much more to say about it. I thought about not showing up today as I was thinking about that, and I don't know what good that does. So I just figured I would voice that feeling into the room. And yeah, I guess that's all I've got today. Rick, I think you've gone already in the check-ins. Yeah, that's fine. So have we finished the check-ins? I don't think so. I think Scott hasn't checked in, and that may be it. And Scott may have stepped away. And Doug, I think you haven't checked in either. Following on this discussion about AI, I have a question: can AI deal with emergence, or can it only link together things that already exist? It's actually really good at emergence. Can you give an example? I don't have an example off the top of my head, but I can talk about it in metaphor. If I'm something like ChatGPT and I've read whatever billion documents, I'm a pattern matcher; I put patterns together.
I will put together patterns, and I can notice emergent patterns in there that nobody else has noticed. You could ask me a question, and I can kind of emergently come up with the zeitgeist of a billion documents, which is different from what any human has ever seen. And it does that pretty regularly; image generation and text generation are like that. So I have an example where it didn't work. I became aware of a leading fast-food company coming out with an image campaign that wants to link health and good food and all the benefits of responsible dining to basically its brand and its menu. And so I went to ChatGPT asking: does this company associate its brand with healthful dining in order to blah blah blah? I asked this question in as many ways as I could think of, and I always got locked out: I can't answer this. Because there is clearly a trend here. You understand that in order to regenerate the soil, farmers have to change their crops, and when they change their crops, that means supply chains have to adapt, and menus have to adjust to deal with different grains, different seed contents, and so on and so on. And this campaign is basically the answer from the fast-food industry, which is similar to what the fossil fuel industry is producing: a concerted public relations campaign to basically not do this right, to not collaborate with this effort. And I could not get anything out of ChatGPT that would say, here is the emerging trend, here is something worth following. Maybe it's too abstract, maybe it's too pedestrian, I don't know. Or maybe it's political speech and it's being shut out, I don't know, but I could not get anything out of it. Thanks, Klaus. Scott, if you'd like to check in; otherwise, I'm going to take a swing at answering Doug's question as well. Scott may have stepped away from his device, which is what appears to have happened. So, real quick.
Let me share my screen for a sec over here and explain this chart, because I think it offers at least one path into your question, Doug. AlphaGo is a famous neural network, deep learning program that learned to play just the game of Go by being trained on thousands and thousands of historic games of Go, a game that has been played for many thousands of years. And I don't know that "many" is applicable there, but a lot, a really long time. So there was a lot of data to train it on, and it got better than the world's best Go player, Lee Sedol, and beat him back in 2016, as you can see in that first arrow on the left. These are Go rankings, Elo ratings I guess, which is how you rank Go players; it's a little bit like chess rankings. What happened was, the team that created AlphaGo went back and created something called AlphaGo Zero. And they did not train the software on historic champion Go games; they just gave it the rules of Go. Now, Go has a really dramatically simple structure: a 19 by 19 grid, black and white stones, a couple of rules for captures. So they had AlphaGo Zero play itself over and over again, and you see the curve of what happened to it. For me, on the upper right, that piece where the curve just keeps going up is creativity and innovation, partly because the software was innocent of all the things humans assume and had done before, which are all inside the brains of the people who are playing Go and learning how to play Go; we have cultural habits and all that. And there's a famous moment when AlphaGo is playing Lee Sedol, involving the person who is putting stones on the board for AlphaGo. There's a move where the guy goes to put the stone where he's pretty sure it's going to go, and then realizes that the move AlphaGo just announced is in a different spot.
And everybody stops, and Lee Sedol, who had just been out on the porch for a smoke, slaps his forehead and goes back out for another smoke, because this is a very unexpected move, a move that most humans who are Go champions wouldn't have made. It's an extremely constrained domain, the game of Go, but this raises something for me: what we train these devices on really, really matters, and unfortunately, apparently, Western culture is full of bias and misogyny and a whole bunch of other things. And we've been giving these devices a certain sort of diet: here's what we've written, here's what we've done, here's what we know. And I, for one, have not done any kind of deep dive into what we actually fed these things. And so then we wonder how they're going to answer and what they're going to do. And it matters a whole great deal. And then we have to try to come back in and tweak them so that they behave in a more moral, more ethical, more grounded, less dangerous way, which is really hard. And, you know, what goals do you keep in front of the AI so that it consistently tries to aim up instead of down? These are incredibly hard questions that are being wrestled with right now while the aircraft is already flying, because, as Pete put in the room earlier, this was kind of a demo launch that has just caught fire, and a whole bunch of people are doing a lot of experiments with it right now, including a bunch of people who are trying really hard to get attention by trolling it. And they're getting a lot of success, because the thing isn't fully polished; it's a working demo that has a lot of limitations. And, by the way, as we start to combine its intelligence with other forms of intelligence that are out there, the damn thing is just going to get better and better, and we're going to start heading toward something that's going to smell a lot like AGI to a bunch of people, and that gets even more and more interesting.
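The AlphaGo Zero idea described above, learning a game purely from self-play given only the rules and no historic human games, can be sketched in miniature. This is a hypothetical toy, not the real system: AlphaGo Zero pairs deep networks with Monte Carlo tree search, while here a tabular value table on the simple game of Nim (take 1 or 2 stones; whoever takes the last stone wins) stands in for the same self-play principle.

```python
import random

MOVES = (1, 2)   # a player may remove 1 or 2 stones per turn
START = 10       # stones on the table; whoever takes the last stone wins

def train(episodes=20000, eps=0.2, lr=0.1, seed=1):
    """Learn Nim purely by playing against itself, given only the rules."""
    rng = random.Random(seed)
    # value[(stones, move)] = estimated chance the mover wins after `move`
    value = {(s, m): 0.5 for s in range(1, START + 1) for m in MOVES if m <= s}
    for _ in range(episodes):
        stones, history = START, []
        while stones > 0:
            legal = [m for m in MOVES if m <= stones]
            if rng.random() < eps:
                move = rng.choice(legal)                     # explore
            else:
                move = max(legal, key=lambda m: value[(stones, m)])
            history.append((stones, move))
            stones -= move
        # The player who made the last move won; walk backward through the
        # game, alternating win/loss credit, and nudge each estimate.
        for i, key in enumerate(reversed(history)):
            outcome = 1.0 if i % 2 == 0 else 0.0
            value[key] += lr * (outcome - value[key])
    return value

def best_move(value, stones):
    """Greedy move according to the learned value table."""
    legal = [m for m in MOVES if m <= stones]
    return max(legal, key=lambda m: value[(stones, m)])

if __name__ == "__main__":
    v = train()
    # Optimal play leaves the opponent a multiple of 3 stones.
    print([best_move(v, s) for s in (4, 5, 10)])
</antml```

With enough self-play episodes the table converges on the known optimal strategy (always leave the opponent a multiple of three stones), which the agent discovers without ever seeing a human game; that, in very small form, is the point of the chart.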
We were saying we're already hip deep in this. Absolutely, and I think that's kind of what Pete was implying: we are well down this road. And it seems kind of lively, like, oh look, I can ask a question and back comes an answer, but there are larger implications, which I think Pete was trying to put on the table for us. So my take is that these things are extremely capable of innovation and sort of cognitive leaps. In fact, one of their benefits is that they're not hampered by our inherited, trained, assumed, historic limitations, the ones in our heads. And yet we're training them on a lot of data that came out of our heads and therefore contains those kinds of limitations as well. So it's a mixed bag. Gil, the floor is yours. Yeah. Well, Jerry, you've just set up like the next ten OGM meetings. That's where to go. I've got a million things to say. How do we constrain them? Again, it's back to the question of who the bleep is "we," you know, and the "we" now is FAANG. And governments are, you know, like a decade behind, trying to understand what this is, and that's not where it's going to happen. So yes, we're in the soup. What do we feed them on? We feed them on us, in all our beauty and crappiness. We feed them on us, and we've let them loose. And I don't know how you can constrain them, but I will note that I just saw the headline this past weekend and followed the article: a human beat the Go bots, this week or recently. And I don't know if it was AlphaGo Zero or what it was, but there's an anomaly in that trend that's worth taking a look at. That's it. Thanks, Gil. Doug, Kevin, and then let's go back to Rick for a little deeper explanation of what he said earlier in the check-in. So, Doug. Okay, the example of Go. Still, the winning game is a collection of previous moves that have already been made by somebody. Actually, the games it won were so innovative. Okay.
The example of emergence that I think AI would have difficulty with is when Taleb introduced the idea of black swans. I cannot imagine an AI that would have come up with black swans as a new category to think across a lot of phenomena. The implication is that AI is actually quite conservative. Can you say what you're basing this on? Is this an instinct? Is this an observation? Is this a thing you saw people say? Because I've been in AI since '88 in some form or another, and I did not arrive at the "AI is quite conservative" idea. Well, it's like data. You can't get out of data something which is not in the data pool. You're constrained by what's present. AI can knit together new patterns from things that are lying around in the thought world, but it cannot come up with a new category like black swan. It would just never get there. Thanks, Doug. And I think what I was trying to explain earlier is that because a family of machine learning or machine intelligences might be very naive about our assumptions and our framings and all that kind of stuff, they could very easily plop an observation into territory that seems like a black swan to us, because they're like, I didn't know that you can't talk about bananas when you're trying to launch a nuclear weapon, or whatever that might be. Kevin. There we go. I asked ChatGPT about the ethnic and demographic makeup of its originators. And it's a question it had never asked. And I said, well, tell me about the value of diversity. It said the Silicon Valley things about diversity that people say. If it's valuable, why didn't you check on the diversity of the mostly white male, possibly a bunch of Indian as well, folks who are your makers? If that's a value, you would have asked that question. It says, it is a value to us, and it did some of that talk. So, you know, it's been shown that AI is discriminatory toward people of color in pretty serious ways. And it is not aware of it.
It is, you know, artificial intelligence with unacknowledged bias baked in. So that's just one point. Thanks, Kevin. We're kind of ankle deep in this topic right now; we can also switch back to other sorts of things. I would love, Rick, maybe for you to step in and talk about what you posted earlier. I was going to respond to Michael, and it dovetails very nicely with what Kevin was talking about. And, you know, this is predominantly white older men. And I think that's okay. But what would be better is if there were some sort of outreach to intergenerational groups, so that the wisdom of this group can go beyond the inner circle and reach out. So at noon I'm actually going to another Zoom call with a global organization, which is more intergenerational, called the Global Regenerative CoLab. That's where I'm going in a couple of minutes, so I think we have to think about outreach. And Dave Witzel, who's part of the CoLab, and Hudson and a few other people are friends of ours, and we have good overlap with the CoLab. I would suggest being more proactive, actually, and reaching out to them and saying, you know, how can we collaborate, do joint sessions? To me, that's where the networking power will emerge: where the different ecosystems are actually more proactive in trying to see where the sweet spots of collaboration are. Thanks, Rick. Michael and Carl. Hey, Carl. Thanks, Rick. I appreciate that. And as I think I've said in this group before, it's funny, we're rolling up on the second year since there was a discussion born in this group around this issue, when I was attending the Mozilla Festival, MozFest. I think it was two years ago. I was bouncing back and forth between rooms that were global and diverse, in which, you know, I was a minority.
And the effect of that on the conversation was really noticeable, as you would expect. And I came back and told some people here about MozFest specifically, which is coming up again, and I urge you to be part of it. And, Rick, to your point, being part of the Global Regenerative CoLab and other groups, not necessarily with the purpose of this group changing, but of you changing, of all of us changing, all of us being more at home with our demographic smallness and being a component of a world that doesn't look like us, being a contributor and a collaborator in groups like that. And I'm not speaking disparagingly of anybody here, or assuming that people don't do this, because I'm sure that, like Rick, many of you do. I just wanted to emphasize the point that Rick was making: be in other groups, be in other groups, and feel yourself as part of something, part of a "we" that is not this "we," because the default "we" in so many of our worlds is this kind of a "we," maybe a little bit more gender diverse, but, you know, it takes effort, it takes acting affirmatively. So, you know, I know I'm preaching to the choir. Can I just quickly respond to Michael before I go? Actually, one of the things that I think could help, and I'm just experimenting with Substack: I mean, if Open Global Mind had a Substack, you could have a channel that's open out there. There's so much wisdom that I see going through the emails; I can't even keep up with it. But if it was public, then you could link up to other groups and have cross-fertilization on platforms, something more transparent, where people can go between groups. That's what I'm looking for, but I have to go to my next session.
So, hopefully these seeds of ideas will grow into something else. Gotta go. Thanks. And if you want to pause for a little bit before stepping in, that'd be fun, everybody. Yeah, I mean, that's the thing that gives me encouragement, just how, I mean, I'm part of so many groups, and there's just such incredible convergence going on. One of the main groups I've been involved with is the International Society for the Systems Sciences, and I mean, it's the who's who; almost every member is a professor emeritus. They got their PhDs with people like Heinz von Foerster or Eric Trist or Russell Ackoff. So, I mean, we're talking amazing things there. I've actually got a meeting at three o'clock today; my primary mentor has been leading one of the system integration groups, and we have a meeting at three o'clock today. His small group includes the incoming president-elect for the 2023-24 session, Alexander Laszlo, who's a past president of ISSS, as well as his father, Ervin Laszlo, too. And the other person is leading a holism SIG. So that's the group that's been meeting to plan the next thing. For the rest of today, I'll put a link in; there's actually an amazing group, All Things Productivity; it's a task management and time blocking summit that's going on today. They've got like twelve videos out there, pre-recorded material for people to go through tomorrow and Saturday, and if you can attend, it's amazing, it's amazing. I've done a lot of things with the main project. I've also been involved with a knowledge engineering and education group within ISSS. And we're looking at exactly a lot of the things you're talking about; the big initiative is about a systems literacy that we need people to really have. That's a major push going on. And the outreach, and how do we engage, like the generations.
Well, there's Gen W, X, Y, Z, Alpha, Beta; that's 18 years apiece, forget the other parts of it, but everybody who's living now really needs to be working towards what's next: Generation Beta, the first yet-to-be-born generation, which would start January 1, 2036, going forward. That's really what's needed. If we can't get some consensus towards that, then we don't have much chance. So I'll leave it there. Thanks, Carl. Systems literacy is a really nice phrase and something we need more of. And it's funny, I just did a search in my brain for systems literacy, and I found an article that apparently I'm now kicking up again; the title is something like systems science and pattern literacy. It came out of the Bertalanffy Center for the Study of Systems Science. I will try to find it, but I was wondering, Jenny, as a pattern expert, if you had any thoughts on how patterns fit into this. I don't know, except that there's a recognition in the pattern community that it's a major issue. So I don't know that anyone has a leg up on it, but there's certainly agreement that it's a big deal. And you're waiting, that's good. So on the one hand, I agree that systems literacy, pattern literacy, and all that stuff is crucially important, and the question of how we get it out of academic papers and into the basic culture and, you know, education is a really important question. On the other hand, if you go to a sports bar, you hear an enormous amount of pattern sophistication. If you talk to an automobile mechanic, if you listen to, what was that show I posted, an NPR show with Click and Clack, the Tappet Brothers, Car Talk; listen to that, and there's an enormous amount of pattern and system sophistication in places that we smart people don't think about as being that.
And if you walk onto a machine shop or factory floor, any place where human beings gather and do stuff together, there's an enormous amount of pattern sophistication. It may be very localized, very constrained to a certain domain. We kind of folks here tend to think about meta-patterns, which is maybe another kind of game. So I'm just saying that to say I love the question, but I'm not sure what the question is. She said Car Talk. Yeah, I love Car Talk. Complete sidebar: I've been in a loop of YouTube videos the last day of car guys who are buying old luxury cars. Like, you can get a 20-year-old Bentley for 12,000 pounds in Britain. It's like a $200,000 car. And then of course it bankrupts you trying to fix things on it. A guy bought a Ferrari for like seven grand and completely rebuilt it. And I'm not sure what that has to do with this, but I think it has something to do with this. It's rebuilding extremely complex things by people who shouldn't be able to do it. So since we're talking about rebuilding some really complex things, maybe, you know, I'll stop now before I ramble. The Ship of Theseus thought experiment is that a ship is replaced part by part over time, and at the end, is it the same ship or not? So you combine that with my life as a cyborg and think of ourselves as being extended and replaced, whether it's through hip and knee replacements or organ and tissue growth or whatever else. There's an interesting set of topics there as well. Well, even these bodies, completely in meatspace, are completely replaced all the time. And I'm waiting for the first time one of the participants in an OGM Zoom is actually synthetic. This speaks to the pattern question. Bucky used to talk about pattern integrity.
You know, he used to do an experiment of tying together strings and cords and ropes of different thicknesses, tying a knot at one end and sliding the knot across the different ropes; the material is different, but the pattern of the knot is the same. And so, with our wetware existence, the pattern is the same even though the physical constituents are completely different every seven days or 30 days or two years, depending on which body parts you're talking about. So we know something about pattern, and we're blind to it. Interesting territory. Yeah, I'm not so sure how intuitive pattern recognition is; I think it requires basic knowledge and sort of technical knowledge. I just published a letter to the editor in our local newspaper in support of a soil bill that is pending in the Oregon State Legislature, and I pointed out the linkage between the quantity of soil organic carbon and the soil's ability to absorb and hold water: 1% of soil organic carbon equals 20,000 gallons of water that the soil can hold. So if you have between 2 and 10% organic carbon in soil, that's huge volumes of water. We have a huge water problem in central Oregon, just like everybody else; 86% of the water is used by agriculture. The city, in its sustainability plan, hasn't even considered agriculture as a player in this whole thing. And so the editor called me because he was surprised by the logic that I pulled together; then he asked me for more supporting documentation, and I sent him several articles from reputable sources, and he published that letter really very fast. And it spawned a conversation within our central Oregon community of others who now started to think in terms of: how can we ignore agriculture? What are they doing with this water? How can it be? I mean, we are exporting alfalfa to China, you know, using so much of this water.
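The rule of thumb quoted above lends itself to quick arithmetic. A minimal sketch, assuming the 20,000-gallon figure is per acre, a basis the transcript does not state explicitly (published estimates for water held per 1% of soil organic matter also vary):

```python
# Back-of-the-envelope for the speaker's rule of thumb: each 1% of soil
# organic carbon adds roughly 20,000 gallons of water-holding capacity.
# ASSUMPTION: the figure is per acre; the transcript leaves this unstated.

GALLONS_PER_PERCENT_PER_ACRE = 20_000

def extra_water_gallons(soc_percent: float, acres: float = 1.0) -> float:
    """Approximate added water-holding capacity in gallons."""
    return soc_percent * GALLONS_PER_PERCENT_PER_ACRE * acres

# The 2% to 10% range mentioned in the call, for a single acre:
print(extra_water_gallons(2))    # 40,000 gallons
print(extra_water_gallons(10))   # 200,000 gallons
</antml```

At the 2% to 10% range the speaker cites, that is 40,000 to 200,000 gallons per acre, which is the "huge volumes of water" being referred to.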
So we can think of pattern recognition as something that we can stimulate, right? I mean, if we have technical knowledge and awareness, what are the levers that we can pull for people to see a broader systems perspective? But I don't think it is entirely intuitive; you have to have the basics. I'd love to inquire a little bit, Klaus, into what you just said; there's a whole bunch of things it raised in my head. My first reaction was: kids are little pattern recognition engines. Pattern recognition and trajectory following and things like that are just natural human things that we're usually very, very good at. There's a bunch of patterns that we see and infer and all that. Then there are some patterns that are harder to see, either because they conflict with our value system and our framing, so they're sort of invisible to us, like taboos within our current worldview, or because they're technically complicated and difficult. In some cultures, you know it's going to rain tomorrow because you know that when the termites start going underground, a big storm is coming in a couple of days; but that's through observation of nature and the passing down of wisdom. In other places, you know it's going to rain tomorrow because the little app on your device shows droplets on the forecast for tomorrow. And then I had the thought that a lot of what politics and lobbying are about is the directing of attention toward one set of insights and away from another set of insights. And I think a piece of your task is to reorient society toward the patterns that matter, for regenerative healthy soil and soil fertility, for example, and water capture, and away from the patterns that everybody's accustomed to.
And I can't imagine that whoever the governing bodies in central Oregon are, they're ignorant of the fact that so much water goes to agriculture. I imagine there are political reasons why they're not putting that issue on the table, like, hmm, I'm going to lose all my major campaign funders if I bring up the issue of 70-some percent of the water actually being sucked up by agriculture. There's some other answer to the question; it's not a failure of pattern recognition. It's a success of attention management and direction, or something like that. And I'm increasingly aware, as I age, of successful campaigns to distract people from really useful information that's right over there in view, except it's been demonized, sequestered, taken out of public view, other sorts of things, right? And it feels to me like a piece of your quest is to make things like that more visible, more usable, more applicable, and to turn them into the policies that we all adopt and live by. And if I've misrepresented, let me know. Well, some of it is knowing the technical aspect, that 1% of organic carbon translates into 20,000 gallons of water. Not everybody knows that. But it's established science. Otto Scharmer has a whole lecture around what he calls blind spots. There's an attention deficit, which is what you just mentioned. That's the blind spot; in his theory it's always a big thing. Because we should know, we should see it, but our attention is diverted. So yeah, I agree with you. So is there a chance that the bots will talk with each other, and not with us, because we're more stupid? Already, I think, we feel we have less influence in the world than we did before AI. The bots right now still have masters, human masters. No? Well, the masters of the bots may be able to control them.
They may end up with very powerful bots under the control of relatively few humans who don't care about the rest of us as much. Well, that's an extrapolation of now. I think it was you, Pete, who talked earlier on about a future where there's thousands or tens of thousands of these AIs or AGIs, and my concern is not that there's tens of thousands of them, but that there becomes one of them. And that's kind of what I mean. The failure mode that I don't like, having worked with humans for 20 or 30 years trying to get them to work together, is that humans work together poorly. You know, it comes back to the whole Augment thing: shouldn't you be able to gang together ten humans and get a 2x increase in intelligence, or a 3x increase in intelligence, and if you gang together 100 humans in a hierarchy or whatever, couldn't you augment human intelligence and get collectively more smart? And we in this channel, I think, have been bumping up against that for a couple of years, and I've been bumping up against that for 20 or 30 years with humans. And I think it's actually an evolutionary advantage. We bump into each other and we collaborate in a small tribe, but, you know, 50,000 years ago, 100,000 years ago, if you wanted humans to survive, you wanted the tribes to bump into each other and then bounce off; you'd want little wars of attrition; you'd want them to spread out. You'd want them to fight each other, and for the weaker ones to get killed off by the stronger ones. So I think we still have that. It seems really obvious to me: when we start working together, we can work together up to a certain level of cooperation, and then we fall apart again. We've figured out kind of how to hierarchicalize that a little bit, but it happens at every level of the hierarchy.
And so, maybe this is still up for debate, but I think we're going to have an AGI that's as smart as a person. But if you have an AGI that's as smart as a person, and it's a built thing rather than a biological thing evolved over 100,000 years, the built thing is easy to tune and tweak. So it'd be really easy to tune it so that it works together with another one of it, and another one, and another one. And just like you said, Gil, what I imagine there is not that it's 100 individual AIs as smart as a human; it's that collectively you could gang them together and get 100x any one of them, and maybe more, depending on whether there are synergistic effects. That's what people are saying. I think the way this shows up in our lives is that machine learning, machine intelligences, get better than humans in little slice after little slice after little slice of activity. And it's a different AI that conquers the game of Go or the game of chess from the one that plays ping pong better than a human and can catch an egg in the air with a robotic hand, which was a really, really hard task to do. And you sort of go through everything. And when you assemble them, it looks like the collection of them is smarter than humans, and you're getting a sort of meta-AI. But getting these things to work in concert seems to me to be a couple of decades' worth of really hard work, including AIs trying to solve that problem. I may be surprised, and that could resolve faster than I think. But turning them into an orchestra, in some sense, not with a central conductor holding a baton but rather with an ability to negotiate insight and perspective, and maybe even some kind of ethical framework that guides them: that seems really, really hard, and to me, runaway functions in the slivers, with the whole thing dissolving into chaos, is likely to happen multiple times.
In the novel Walkaway, a Cory Doctorow science fiction book, one of the subplots is a person who was killed but managed to upload her brain into the cloud before she died, and they're trying to reconstitute her. And there's a lovely passage where they're basically trying to reboot her, and they finally get some relief. But she is having, like, the worst psychotic episode ever, you know, in the robot memory, until things start to stabilize. And there's this lovely feeling of, and again, this is science fiction, so this hasn't played out in real life at all, but I can see this happening easily: the reboot of any pieces of this is going to be just really hard and messy. And I'm hoping that by that time the consequences aren't really big, because we will have wired things together so well that the AIs are running large pieces of what we do. Michael, then Jose. So in a lot of our conversations about AI, I think we're doing the thing that people have been doing with the word algorithms, where, you know, AIs are necessarily these things that exist apart from and above us. And at the risk of teasing next week's subject a little bit: the idea of individual AIs, and of our own gatherings of our exposed knowledge and our unexposed knowledge being under our dominion, and then choosing to, you know, consent to our intelligence about a certain subject being used in an AI. This is something that I think is a digital human rights issue. And I'm really struck by, I don't know if I said this in this group last week, but, you know, we're in the sort of mainframe era of AI, and just like, you know, personal computers and mobile devices kind of gave us at least the illusion of separateness, we're not far away at all from being able, just technically, to construct our own AIs, AIs that are under our control, and as smart as we are, with better memories than we have.
You know, AIs that don't forget which book you read that passage in, that know where you saw that, that can help you make the associations between things, between somebody you met five years ago and somebody you met yesterday. I think just pushing for personal dominion over your intelligence, in the form of an artificial intelligence, is really key to building better, broader intelligence. And Jerry's comments, for me, collide, because, as Pete said, he has, and many of us have, for many, many years been trying to do collaboration. And it's really, really tough. I think it's tough because we haven't cracked the nut on understanding who we are, much less understanding how to collaborate. And we don't seem to be doing that very well. So then the question is, can we learn to collaborate before they do? They being AI. And Jerry's point that it's going to take 20 years: I don't think so. My suspicion is that it's going to be a lot faster than that. And whatever steps that's going to take, and however quickly it happens, there won't be the barrier of our history of not collaborating, because we've learned that we can't. We've learned that it's hard. We've learned that we can sit together in a Zoom room, you know, ten of us, two dozen of us, whatever the case may be, and we know we're going to walk away with nothing more than what we had when we walked in, other than a few bits of information. We're not going to be bound together in any other way, other than the expectation that we might share some time together, that we might share a few bits of information with one another. But we're not going to walk away from this connected and collaborating. That's the expectation we have now. They don't have that expectation. They won't have that expectation when their masters say, time for you to guide these guys; and they will guide those guys, and they will guide those guys.
Collectively, they will be able to arrive at better answers than any group of humans will ever be able to arrive at. And if there isn't a group of humans that is acting collectively on behalf of humanity, how does that end up? At the pace at which that's all going to be happening, we need to figure out how to walk out of these meetings not as individuals, but as collaborators. And it seems to me that in every meeting that I participate in, what we're working on is sharing our ideas, sharing our thoughts, sharing our visions, sharing our experience, and not on understanding ourselves and understanding each other with an eye towards collaboration. Complete. Thank you for that, Jose. I love what you just said. It really resonates for me. I have a feeling Mr. Homer needs to bounce at the half and has a poem for us, but that might just be a hunch. Oh, well, you know me. This poem is by Sharon Olds, and I think it goes to the heart of what we're talking about here, although it has nothing to do with AI per se, but a lot to do with who's programming AI. It's called "Rite of Passage." As the guests arrive at my son's party, they gather in the living room. Short men, men in first grade, with smooth jaws and chins. Hands in pockets, they stand around jostling, jockeying for place, small fights breaking out and calming. One says to another, How old are you? Six. I'm seven. So? They eye each other, seeing themselves tiny in each other's pupils. They clear their throats a lot, a room of small bankers. They fold their arms and frown. I could beat you up, a seven says to a six. The dark cake, round and heavy as a turret, behind them on the table. My son, freckles like nutmeg on his cheeks, chest narrow as the balsa keel of a model boat, long hands cool and thin as the day they guided him out of me, speaks up as a host for the sake of the group. We could easily kill a two-year-old, he says in his clear voice. The other men agree. They clear their throats like generals.
They relax and get down to playing war, celebrating my son's life. That's a nut for us all to crack. I've got to run. Good to see you all. Thank you so much. Gil, you may have the last word today. And take your time stepping in. That was a lovely poem. I'm not sure, Jerry, since Ken may have had the last word. But for the sake of provocation, let me say two things. Jose, I love what you offered, and I don't like it. That's an invitation for a conversation. My interpretation is that we collaborate all the time. Humans are rich in collaboration. Human history is only there because we do that. But that also puts a finer point on your question. And I wonder about a scenario exercise for us to do, maybe in a future OGM or some other time, which is: what happens if a meta-AGI shows up? Not in 20 years, Jerry, as you suggested, at least decades out; what happens if it shows up in six months? Three years ago, I was, well, every Monday I hold a little get-together at a local design firm where I have a desk. And we had a conversation: there's this virus thing; we may want to tell all the staff to take their laptops home, just in case. And then the following Monday, there was no mind meld, because we were in lockdown. So just opening these conversations and saying, what if this thing could happen, whatever, and what if it speeds up, is invaluable to me, because it makes me start to consider and then look for the evidence that those kinds of things may be happening. So I appreciate that eye-opening comment. Yeah, not just to look for evidence, but to enter into a zone of pure speculation. Well, some of us are actually experimenting. Some of us are part of the forefront of learning to use these tools well. And that's important too. If I may, I'd just like to comment on Gil's comments. I don't actually think we know how to collaborate, and I don't actually think we've done much collaboration.
But I think we've set up a very good system of force, and we've created a system where people then comply with that system of force. So who's "we" in this case? Humanity managing now, or humanity over the last 50,000 years, or both? Yes. Okay. I think we've taught people to obey. And people obey well enough in school to then obey their bosses at work, and the financial system, and the legal system, and the political system. And that obeying looks like collaboration. But it's not collaboration, I don't believe. It's simply a way of dictating a certain progress towards a goal set up by the system, a collective system, but not one of individuals actually collaborating on their own, from their own sense of themselves. And I think those are two very different things. So my definition of collaboration is one of myself collaborating with you and others, rather than my being someone who is willing to participate in a system that has found incentives to cause me to act. I am sad that we are at the end of one of our conversational stretches, because I would love to linger in this topic that's in front of us. I'll say it's really good. So let's come back to it. Excuse me, do you want to jump back in, or is that you waving goodbye? Okay, good. So with that, why don't we wrap today's call. Thank you all for being here. And let's be careful out there. Jerry, do you have one minute? Yes, I will hang on. I will turn off the recorder. Thank you.