Beautiful. This is the Open Global Mind call on Thursday, February 23rd, 2023. I almost said 2022, I don't know why. You'd think I'd be getting used to it. Greetings, good morning. How is everybody? You're okay? Judy, Portland looks like Minneapolis today. Well, we were supposed to get a lot of snow, but so far we've only got about four or five inches, so it's not bad. It went north of us, so we just got a minimal amount, and we're not getting any today, which is when it was supposed to happen. So I'm grateful, but the piles of snow outside are pretty high, because at one point the snowbank across the street from me, where they plow all the snow, was six feet tall. This has been a heavy-snowfall winter for us. Apparently East Coast resorts are suffering like crazy because it's been too mild, too warm on the East Coast and there's just no snow. Well, I guess it's stopping here. We had frost here last night. It's currently 34 degrees outside. Nice. Unusual for our neck of the woods. Yeah, that's unusual for you. It's not unheard of, but it is unusual, especially this late in the season. That would explain why you're not in your yard. Yeah, that is why I'm not in my yard, actually. Yeah, it's like 18 here, which is pretty mild actually for February. We've had almost no super, super cold days here this winter. Usually we get some days when the wind chill is 35 below or something because the actual temperature is 10 below, but we just haven't been having that weather this year. Did you say you had snow in Oregon this morning, Jerry? Yeah, yeah, Portland got snow. It started snowing at noon yesterday and just kept going, lots of big fluffy flakes too, so it was pretty. So we don't have a topic for today, and today is a topic day, and the floor is open for topic recommendations. There's a few things on my mind, but I'd love to hear what's on your mind. And it could be anything; we could wander over to art. We could go any direction. 
We don't need to stay on global crises or other sorts of things. Stacey. What if we each put one word in the chat and see what comes up? That sounds good. The proposal is to put one word in the chat. Everybody ponder for a moment what one word would work for you and please do that. Just the first word that comes through your brain, Pam. Substack. Well, there you go. So the words so far are joy, doubt, water, substack, synthesis, people, gratitude, generative. And, Gil, your phantom note taker doesn't seem to want to participate. So it goes. Ambition. The prompt was: what one word is top of mind? Because we're looking for a topic. Yes. It's like a stimulation. And does anyone want to synthesize anything from the list of words that showed up for us? Or do any of the words that showed up have particular energy for a couple of other people? Rick has his hand up. Oh, well, I wasn't, sorry, it was in the corner of my display. Go ahead, Rick. That's fine. The reason why I put substack on is because I just came from a Clubhouse chat with somebody where we were talking about Substack. And I just launched my first newsletter on it. And I see so much potential in how you could interact through it. So the guy I just spoke with was a regenerative architect. And I was just blown away by his perspective, but he's so unassuming. When I asked, have you told your story? Have you written it up? He said, no, I haven't. So I said, well, why don't you take this little conversation and turn it into a story, but then use the story as a way of having an audio or Zoom thing where people have read the story and then they can come and interview, and then have it be sort of an iterative tapestry of different threads of stories that are connected rather than disconnected, because our story making and the hero's journey are broken. So we need to co-create something very different from what we're living with at the moment. 
So that's where I'm coming from. Okay. Rick, thank you. And I've been reading The Heroine with 1,001 Faces, which is a critique of the hero's journey and traditional storytelling and opens up all kinds of interesting new vistas for that. So that resonates for me. Can you put that reference in the chat? Oh, sure. That'd be great. Thank you. Judy, then Mark. I was just thinking about people in the context of engagement of people, because it seems as though people are pretty diffuse right now, and how we might engage people in thinking and action would be of interest to me. Thanks, Judy. Mark, Mark C. Just pausing a tiny bit. Thank you. You're welcome. Thank you, Rick. One of the interesting projects that the wizard of interactive development is working with the Internet Archive to do is the Tapestries Project, the Interactive Tapestries Project, basically turning a web browser into more than just a document viewer, showing links from web page to web page visually. And my take on it is trying to create a web standard for kind of navigable visual links between things. So the word tapestry is a beautiful one. Thank you for bringing it into the conversation, Rick. That sounds very OGM-y and very fun. Can you say anything more about it? I'm googling Interactive Tapestry Internet Archive and getting nothing. It's a project that I don't know how much I can say about, other than the name. So I will check. Would you have to kill us all afterward? No, of course not. Good. That's so convenient when that happens. I'd have to kiss you all, and that would be a new mergy. I've never heard that one before. That would just be too rough for me this morning. But yeah, I'll check on that. Thank you. That'd be great. Mark, you may be interested in the Fellowship of the Link; it's part of OGM and there's a Mattermost channel for it. We also meet on Wednesdays at 11 a.m. Pacific. 
And Wednesday is, well, I'm on the UX team, the user experience team, at Internet Archive, and 11 o'clock is our stand-up meeting. So, oops. Yeah. A good chunk of the Fellowship of the Link is also in Europe, so later is bad, and earlier is probably too early. Anyway, you might join the channel and check it out. I will. Thank you so much, Pete. We could also share poems we love. We could do a bunch of things. Good to see you, Shimon. I'm glad you're on the call. And the one question that was on my mind is the one Pete raised on the Google group yesterday, I guess. Or was it the day before already? About platform choices and how our conversations go and all of that. And I'm happy to go there, but that feels strangely a little too close to home. And I wouldn't mind us exploring outer spaces a little bit today. That'd be great. If I'm hearing you correctly, Jerry, you'd like to elevate it to a higher, sort of humanitarian level rather than getting into the weeds of the technology. I don't know whether I heard you correctly or not; maybe you could clarify what you meant. That was not my intention, but it's a nice riff on it. I'm not trying to elevate us. I'm trying to maybe liberate and explore a little bit. And let us sort of wander someplace where our senses are, maybe also where our feelings are, where our bodies are; any of those kinds of things would be really, really interesting. Mr. Kronzer. Morning again. What's been on my mind a lot is one of my mottos in training. And that motto is, quote, be more human. So I have a conjecture that we really don't understand the biology of the human brain. And the human brain is more complex than all the computers on the planet combined. More than Apple, more than Google, more than anything. It's not easily comparable, because it's incredibly different from a computer. 
And I take it, Doug, Rick, Bart, Gil, Shimon, that your brain is so much more powerful and interesting, complex, weird, dangerous than all the computers and all the software in the world put together so far. And I don't feel there's an appreciation of what we carry on our shoulders. I've been paying attention to the frenzy about AI. And it seems that a fun thing to do would be to kind of, like, have a... who is the guy who raced the railroad and tried to put in the spikes while... Not Paul Bunyan. Casey Jones. No, Casey Jones was an engineer. The guy who, it was a race between a man and a machine. Yeah. And I forget. John Henry. John Henry? Yes, sir. Sounds right. Yeah. John Henry, man or horse? John Henry was a steel-driving man. Exactly so. Sing a song, Ken. Oh, I'm going to spare you that. And we have a lot of problems. And who can solve the problem first? The AI or a person, the AI or a team? Two AIs talking to each other. ChatGPT talking to Tesla's AI and having a conversation back and forth to solve a problem, with a human interface kind of saying, okay, I'm going to represent ChatGPT, and I'm going to ask this other AI, Tesla's AI, a question, and then, okay, it's going to answer. And I'm going to, like, make it really cool and relay the response back to ChatGPT, and kind of play all these games with the umbrella term intelligence. Thank you, Mark. I'm going to go quiet for a second and then pass the mic to Shimon. So Shimon, why don't you take as long as you wish to step into the conversation? Yeah, I like the idea of brains and AI. I actually have been spending a lot of time on that, really trying to understand the comparison between brain function and how, you know, the AI bots have been developed, in terms of just computational neuroscience. It actually brings me back to my time as a resident in psychiatry. 
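Mark's "two AIs talking to each other with a human relay" game can be sketched as a simple alternating loop. The bots below are trivial, purely hypothetical stand-ins; in a real experiment each would be a call to a different model's API, with the human moderator reading one model's output aloud to the other:

```python
# Toy sketch of the "two AIs in conversation, human in the loop" idea.
# bot_a and bot_b are hypothetical stand-ins, not real model APIs.

def bot_a(message: str) -> str:
    """Stand-in for the first AI: turns any message into a follow-up question."""
    return f"Interesting: {message!r}. Can you sharpen that?"

def bot_b(message: str) -> str:
    """Stand-in for the second AI: turns any message into a proposed answer."""
    return f"Responding to {message!r} with a proposed refinement."

def relay(opening: str, turns: int = 3):
    """Alternate messages between the two bots, logging each turn,
    the way a human moderator might shuttle text between two chat windows."""
    transcript = [("human", opening)]
    message = opening
    bots = [("A", bot_a), ("B", bot_b)]
    for i in range(turns):
        name, bot = bots[i % 2]
        message = bot(message)
        transcript.append((name, message))
    return transcript

log = relay("How might we solve problem X?")
```

The interesting design question in the real version is what the human relay adds: framing, editing, or judgment between turns.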
About 40 years ago, I was involved with a project with neural networks, trying to really think about, and it was very early on, how the structure of the brain can actually inform developing computers. But I really like the ideas presented in terms of combining the brain, our brain, with all the values and emotions and things like that, with the power of artificial intelligence, just the way it's structured. And I think we're going to get to a point where, somehow, by understanding our own physiological brain structure and aligning it with computers, which is still far away, we can solve a lot of problems. The problem that I'm working on right now, which is sort of crazy for me, is trying to use, or be informed by, AI to build a constitution for the state of Israel. So the state of Israel, as many of you know, is going through a huge, huge problem right now. Much of it is a consequence and symptom of something like the original sin, or whatever it was, in 1948: they did not develop a constitution. And the same issues that prevented them from doing it then are the things that are essentially driving the horrible situation in Israel right now. So in many circles, people have talked about developing a constitution. And since I'm very interested in testing the GPT-3 chatbot, I decided to use that as a framework to develop the structure for a constitutional process that then involves people through crowdsourcing, and essentially sortition and deliberative democracy and all those concepts come into play. So I'm pretty far along with the process. And I do like the idea of Substack, because Substack has become an invaluable publishing platform and it really, really allows for collaboration. So I'm really excited. I'm hoping that perhaps we can talk about that as well. How do we combine Substack with AI and with crowdsourcing? I use Kumu a lot in the process, just to visualize. 
I haven't used your Brain that much, Jerry, but who knows, maybe at some point. So it's nice to be back. Thank you. It's great to have you here. I really did not know that Israel had no constitution. It has basic laws. Well, that's one of the problems, because basic laws can essentially be created with 61 votes, half plus one of the legislators. So they can then be overruled. It's pretty complex, but I do think that so far working with ChatGPT has been incredible in terms of recognizing processes, telling me about people involved, let's say, in the Constitutional Convention in '47, linking me with different efforts to do it. So it's been really, really great. Apparently the German constitution is among the best constitutions on the planet. And also Iceland went through a process of trying to rewrite its constitution after the 2008-2009 global financial crisis; it got pretty far and then failed to ratify it. Yeah. What I'm doing is I'm building on, again, I don't know how much people want to get into Israeli politics, but the leader of Israel, Benjamin Netanyahu, actually grew up where I live, essentially. And he keeps talking about being a democrat, a Madisonian democrat. So what I'm actually doing is comparing the American process of developing the Constitution, including the Federalist Papers, to what Israel needs to go through. So again, I find that ChatGPT is very, very helpful in creating, you know, structure. And I'm hoping to put my Israeli Federalist Papers on Substack. Shimon, thank you. That's really interesting. Gil and Stuart, and please take your time stepping in. I wanted to say something about humanness and AI. But first, I need to respond to Shimon, because it's a very rich provocation that you've offered. And just a couple of thoughts there. On the constitutional process, Chile has just gone through rewriting a constitution and also didn't ratify. So add that to the Brain stack there, Jerry. 
It strikes me that the contrast between a constitution and the basic laws is that a constitution acts as a kind of shock absorber. It's a damper that only allows certain change, that moderates the degree and the pace of change in an organizational system. With 50 percent plus one, you can swing all over the place. With a constitution, you have a ratification process that slows that down. The challenge that strikes me with Israel versus the Federalists is that it takes common purpose to build a constitution. And with all the differences that the founders had, there was some degree of common purpose that they could organize around. And in Israel today, or the United States today, if we were to have a constitutional convention here, I'm not sure there's enough common purpose to keep that game on track. So fascinating. I would love to talk with you more about it at another time. On the matter of AI and humanness and mind, one of my concerns in this unfolding process, and Mark, you sort of resonated this for me, is that we think a lot about the brain and creating an artificial brain. But the brain is only a part of what it is to be human, and the brain is only a part of mind. We have the enteric nervous system, the hormonal systems, and so many aspects of where mind is situated. Some would argue that this is not a collection of 20 minds in a conversation together, but a mind with an emerging conversation, with interactions that are deep; we're affecting each other biochemically as we speak and listen. So the locus of mind is hard to pinpoint. And so AI, in maybe its worst case, is an extreme example of the mechanistic theory of life: it's all reducible to stuff, and if you build the right structure of stuff, you can duplicate the function of what's happening here. My operating assumption right now is that's a very deep epistemological fallacy and takes us down some really dangerous rat holes. 
So I'm interested in, when we talk about what it is to be human, or how we build machines that enhance humans, it seems important to be really cognizant, there's that word, of that, whatever that means; to take that broader perspective of humanness and thought and mind into account. For those who haven't seen it, Gregory Bateson wrote a book late in his life called Mind and Nature, which, to ridiculously oversimplify, says that mind is a function of the living world, not only of the individual organisms in it. There are obviously unique individual aspects of it. But that perspective provides a really different orientation to what we think about and what we do and how we proceed in the world. And the question that I've been chewing on a lot, and I think I've shared this before, is: what might it be like if we acted and thought, if we engaged in the world, as though we actually belonged to the living world? Ken has talked about The Nutmeg's Curse, and the contrary position we live in, which is that there's this inert matter out there for us to exploit for our purposes. But what if we belonged to the living world like we belong to a family or a marriage, a very different kind of relationship than a transaction, a business relationship? For me, that informs the AI conversation too. Last thing I'll say is, Shimon, I love the example you're giving, because it's a great example of how ChatGPT, etc., can be a valuable ally, partner, helper in complex processes. But I'm certainly not ready to hand over control to it, although I hand over control every time I get on an airplane. But an airplane is far less complex than a constitution. I just realized I was muted. The question of relative complexity is a thorny one for me, and I think you know this. It's Stuart and Doug next, and please take your time stepping in. Yeah, so we're into some complexity here, some extraordinary complexity. 
You know, I was going to say something simple about three people ago, and now it's become much more complex. I mean, the idea of constitutions and the difficulty and the fear, it's got to do with people wanting to retain their own power versus surrendering to some constitution. I think that may be part of the challenge right now in Israel, in terms of, you know, just being that guy who's trying to keep himself out of jail. And if the legislature gets powered up, it can overrule what the Supreme Court says. It's amazing how it comes down to individual self-interest sometimes in these areas. Gil, I think you made a great kind of overriding statement about AI in terms of its utility, the notion that it's part of the living world. I've been, you know, jumping into some Native American wisdom about a kinship society versus a commercial society. And that's where, you know, I think we've all gone off the rails. All of that said, I got through high school geometry by starting with the proof and working my way backwards, then figuring out that I could just step in between and maybe the teacher wouldn't catch it, or something like that. The thing about all religions that have a lot of followers is that they create a vision that pulls people forward. So when I think about the entire AI conversation, and I don't spend a whole lot of time thinking about it, I think about it as another tool that we, as humans, have invented. And kind of a preliminary question is, you know, do we want to surrender our power to AI, to a system, because we think it can do it better, or not? 
So all of that is to say, I think a useful conversation, rather than looking at the problems, would be to look at the vision of how the technology can really be a tool and help us get forward to a world like the one Gil described, where we are part of a large living system. How does AI help us create, stabilize, organize whatever is necessary to keep this species not just alive, but in a place where the species can thrive in a way that's congruent with the foundation of the planet Earth we live on, which can feed all of us but which we are rapidly consuming? So how do we use it in a way that pulls us forward and uses its best pieces to help do that? How does it fill in where our pieces are missing? There was a turn of phrase that Gil used about the way we've been relating to the world, in terms of exploiting and extracting, and choosing instead to relate to it as if we're part of it and connected to it. And what struck me was the underlying frame of that: as if we have a choice, when in fact we are part of, we are connected to. That's reality, and everything we do has consequence within the frame of natural law; look out our window, and the evidence of our destruction and waste and brutality and ignorance is everywhere. So I think the emergence of AI has had the same effect as is happening in a lot of other facets of our culture and world. Through one lens, it's pointed to our generative, creative power to create augmentative tools. But through another lens, it's blown apart the distinction between words generated by a human being and words generated by an algorithm, and the fact that it's not about the words, that at this point they can actually be equivalent, and in some cases the algorithm's are better. Maybe the intrinsic center of value is not the expression or the artifact; it's the underlying driver's purpose, intention, motivation, and values in the creation. 
So AI is a tool, and it can enable us to create bigger, better, faster in certain respects, in the hands of whoever wields it; but you put a gun in the hands of a bandit and it's going to be used that way. So it's like, who's driving? And the responsibility dimension of that, the ownership dimension: what am I doing? How am I contributing or not? And in what value frame, and in service to what purpose? I think that still ultimately determines the outcome. I love, Shimon, what you're sharing about Israel. And is the challenge coming up with a constitution that is palatable, or is the challenge recognizing that the fundamental center post of Israel's identity, which is a Jewish state, isn't Jewish? If you look at a map, it's a Swiss cheese picture of Arab settlements and Jewish settlements and this polyglot. And you're trying to root something that brings people together, which I think a constitution is an expression of, with its founding cornerstone being a separation meme, which is: this is a state just for us, just for one of the different populations that reside there. How do you get past that? How do you create something inclusive? So I really appreciate your commitment and your devotion, and I have the highest wishes for your success in figuring out how to enable people to connect and come together and align and share in an expression of values. I think that's an extraordinary and awesome undertaking, and I send energy for your success. I'll just respond very briefly, because there are other people and I've already spoken. In terms of the constitution, I'm actually very much with Madison and Franklin in the sense that public opinion and educating citizens is a primary responsibility of government, and I think the process is an educational process, because when you ask most people about the U.S. Declaration of Independence or Constitution, they don't really know what you're talking about. 
We've all become, at best, consumers of government, and I think we need to reclaim citizenship; so part of it is in the process. But I really like what Stuart said, and also you, Doug, in terms of how do we actually think about technology and make it for the better? So I'll give you an example. Before I started with the constitution, which is just the last couple of months because of things that started happening in Israel, I started looking at medicine. You know, AI plays a really, again, very interesting role in medicine. I mean, it's been tried; we can track it back to Eliza, or even before, to expert systems and things of that kind. But the thing with AI that was really intriguing is how big of an uptake there is within medicine. Most recently, I found something very interesting: even the University of California, San Francisco has these big grand rounds on how we can use AI technology. And the thing that scares me, and the thing that I wanted to devote more time to and get people around, is the governance of the system. So when they talk about it, it's: how can we, the University of California, San Francisco, leverage the data we have in our electronic medical records and other places, in order to perhaps brand ourselves as an AI system that creates better care for people? In my mind, it actually should be for everybody. And the question is, how do you organize people so they don't contribute their data, or, if they do contribute their data, it's made sure that it's available for everybody? Because what we're going to find very soon is that, let's say, my hospital system, or Penn, or even Epic, which is this huge, huge electronic medical record used by probably 70 percent of hospitals and healthcare systems, initially funded by the government, has all this information on all of us. Someone wrote about a credit union model for data. I mean, this is information that everyone gives up. 
So with AI, they're talking about leveraging that, and all of a sudden one healthcare system or one company owns all this information. So the question for me is, what can we do in organizing people? How can we work within the democratic system to get legislation to deal with that? So I agree, I think it's a great tool. I'm working on, by the way, if anyone is familiar with knowledge architecture and information theory, Shannon's information theory, because I think they're very central to everything we're talking about, constitution, healthcare, everything like that. And that's what I'm trying to really understand better. So that's my two cents at this point. Thanks, Shimon. I wanted to just flip in a little backstory before Stacey goes, and then let Stacey proceed. A friend of several of us, I know Pete and a couple of others know him, is Tom Munnecke, who helped develop the VistA software that runs the VA and was originally open source. And one of his colleagues on the VistA program was Judy Faulkner, who was back then a good colleague and then left to go make profit and founded Epic. And so Epic and Cerner are sort of the two duopolists that run hospital information systems. And at the Linux Foundation member summit recently, I just bumped into a guy who knew this whole backstory. And I'm like, oh my God. And it turns out that the Trump administration let out a no-bid contract awarding Cerner the VA system, which is a travesty, a tragedy, and probably impossible. It's a major multimillion-dollar project that may actually not work. And VistA did work. VistA did work. Yeah, VistA is a pretty good system still. They've just been withdrawing funding from making it better. Well, just a quick addition to that. I think that AI, you know, like OpenAI, they're using Wikipedia, which started as crowdsourcing. 
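Shimon names Shannon's information theory as central to all of this. As a minimal illustration (my addition, not something from the call): Shannon entropy measures the average information per symbol of a message, in bits. A tiny sketch:

```python
import math
from collections import Counter


def shannon_entropy(symbols) -> float:
    """Shannon entropy H = -sum p_i * log2(p_i), in bits per symbol,
    estimated from the empirical frequencies of a sequence."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


# A fair coin flip carries 1 bit per outcome; a constant message carries 0.
h_coin = shannon_entropy("HT")
h_const = shannon_entropy("AAAA")
```

The same quantity underlies compression limits and channel capacity, which is one reason it keeps reappearing in conversations about data, records, and who holds them.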
And then Microsoft bought GitHub, which is also crowdsourced. So the question for us as citizens is, how do we prevent information and effort that we put into a process, including taxes, from then benefiting, you know, three or four companies? And there's a whole thread we can follow some other time about enclosure movements and capture and all that kind of thing. But thank you for putting that on the table, Shimon, and I will go quiet and let Stacey bring us back. So my question is, how can we expect AI to help connect us to the living world when it's trained on information that's human-centric? And I use, you know, biomedical research and animal husbandry as an example. Can you say a little more about that? About the husbandry and so forth? What do you mean? I think that the research out there on animal husbandry is not necessarily... I would say... you know what? I don't know, because I've never looked up the research on animal husbandry. However, the way we look at animals seems to me to be very human-centric, as if they are here for us to use for our pleasure. And so how can we connect in a way that we understand that we are all a part of each other, when we think we are better than them and that they are for us to use? Because of course God has given us dominion over all the creatures, right? And that's the information the AI is going to be trained on, because that's what's in there. Thanks, Stacey. When it comes to AI and trust, my trust goes down in inverse relationship to the degree the profit motive is at the heart of things. So seeing AI as the next big thing makes me really distrustful of it, because there are people out there who are simply doing it to make money rather than to really serve humanity or the planet at the moment. And I want to read a very brief story. How many people here know Mullah Nasruddin, the Sufi wise man and fool? Anybody familiar with this guy? 
He's a kind of folk character in the Sufi world. If you've ever heard about the guy looking for his keys under the lamp post, that's actually a Mullah Nasruddin story, although it's found in other traditions as well. So this is The Simple Boatman. The Mullah was earning his living by running a ferry across the lake. He was taking a pompous scholar to the other side. When asked if he had read Plato's Republic, the Mullah replied, sir, I am a simple boatman. What would I do with Plato? The scholar replied, in that case, half your life has been wasted. The Mullah kept quiet for a while and then asked, sir, do you know how to swim? Of course not, replied the scholar. I am a scholar. What would I do with swimming? The Mullah replied, in that case, all of your life has been wasted. We're sinking. And I think we're focused on reading Plato when we're in a boat that's sinking. And we need to learn how to swim. The question is, how can we swim better? You know, I put a question earlier about what is the difference between a constitution and governance. And I would say the ethics of governance. So that's just a question to put out there. But I also did a ChatGPT search on it to find out what came up there, and I'll share that too. But I want to dovetail back on something that Shimon was talking about, because you've touched on healthcare, and that's where I spend my time. I've learned five different EMR systems, and the last one that I'm working on is Epic. And from my point of view, it's an Epic failure. It disables my work. It's an impediment to care. And I remember, over 15 years ago, the CEO of Allscripts, and I can't remember his name, coming to a medical staff meeting when I was at the University of Rochester. And, you know, I didn't have the courage to ask the question. But I've thought about it many times. 
I'd like every CEO to come and spend a day with a primary care physician, just watching what they have to do with the crap that they've built. It is unbelievably cumbersome. And it could be so enabling. So I'm just voicing my pet peeve. But I'd like to go back to the question, which actually goes back to governance. What is the ethics of governance over technology? And what's the relationship between governance and a constitution? Such a rich conversation. We weren't sure where we were going when we started. Rick, I can't read the rich thing you just put in the chat, but I do want to thank you for introducing the word into the conversation. As I think about AIs, what keeps striking me is that they don't care, and they can't care, and human beings do. And that may be one of the more profound differences that we're going to be grappling with over the next few decades. Because care is real important. And I raised my hand to add a comment to what Stacey was saying, which is the question of: what do the large language models and the AIs, et cetera, point to? Where do they learn from? What data do they gather? And it's not just, you know, looking at animals as artifacts rather than as parts of the living world; it's also which parts of the human world we pay attention to. Some of you may have seen the distorted map of the world published a week or so ago, where the sizes of the countries represent the intensity of something. It was looking at where the data sets that these things are being trained on come from. And it's dominated by the U.S. and North America, somewhat Europe, hardly any Africa, very little Asia. So we're building these on a very, you know, very small subset of human culture and human experience. What could possibly go wrong with that? You need to unmute, Doug. There you go. You'd think I would learn. First time on Zoom? Happens all the time. I can't tell if this conversation is tedious or creative. 
I think we're entering a time when somebody's going to say all men and computers are created equal, and I don't really want to go there. I think living in a world where all the world's art, millions of pieces, is online, and all the world's music is online, and all the world's poetry is online, trivializes all those things. The pleasure of the search is gone. We've been through the experience of seeing kids not learn multiplication tables because their handheld computer can do it. Aren't we going to enter the same thing with knowledge? Why bother? So go out and play. End of round.

I was going to go somewhere else, and I'll come back to this in a second. But what you just said, Doug, really intrigues and provokes me, because I think the accessibility of global art and music is fantastic, and discoverability is taking new paths, like running into somebody's Spotify playlist. I'm not a Spotify fan. But we can now have docents from around the world telling stories around the world's artifacts in ways that were not possible before. Most museums can only exhibit a tenth or less of their collection; they've got a whole bunch of stuff they never put out. In many cases that is now digitized and available, and I'm just thrilled about the poems. I have a large collection of poetry in my brain, and Ken has a larger one in his actual wet brain. Poetry is more accessible to me now. I don't have a lot of books of poetry on my bookcase; I have a few, but I don't actually open them very often. My poetry habit is much greater than it ever would have been, because poems are easily accessible online, et cetera.

And then the notion of trivializing I love, because a piece of my answer to some of the earlier questions is that we have to sort of re-sacralize the world. By which I don't mean bless it and make it Catholic or Jewish or Muslim; I mean treat it as sacred.
A question I asked many OGM calls ago is, what if scientists treated what they do as sacred? And maybe the question to add here is, what if programmers and managers treated their activities as sacred in some sense, and the people those activities touch as sacred? Might that change what they do, how they do it, how they consider it? And Ken's justifiable queasiness when he approaches profit motive as the driver of the thing, lowering trust quickly in different ways. So all of that burbles up immediately for me from what you said, Doug. Thank you for that.

I wanted to go back a little bit to governance and constitutions and all that. The notes I wrote to myself in the chat are: I'm kind of a governance minimalist, and might have been a libertarian or something like that. I think that government should be as small as possible, though not, as what's-his-name said, so small you could drown it in a bathtub. Rather, I prefer discourse, and I have a saying that we pass laws and make rules when discourse fails. I would rather that people and communities get together and figure out how to do things, and then pass down what they've learned as wisdom. I'm very interested in the capture and communication of hard-won wisdom; that's one reason why I love pattern languages. And I see that some religions are really good at passing down that wisdom, look at the yamas and niyamas from yoga, and some are not so good at it. That's a whole other conversation. But when I get into this topic, I talk about small-g governance versus large-G government, and I'm really interested in the small-g stuff. I'm really interested in how we come back together to figure out how to make better decisions, which is a big driver for my being in this call and with you all throughout the pandemic and so forth. I was just looking over my brain notes for last week's OGM call.
Last week we brought in anarchism, which got demonized very successfully, but was an effort by many different communities to figure out how to thrive together with minimal governance. That's a big piece of it: trying to figure out what that means and how it works. Constitutions feel to me like our attempt to write down the minimum set of operating things we must agree on, so we can turn to them. I think it was Shimon, or maybe Gil, who said that constitutions are intentionally hard to change, harder to change than basic laws, where you can just have a majority vote and, oops, there's a law that's changed. That's on purpose, to introduce some friction into the system so that the basic stuff can't be overrun by temporary majorities that are a little bit off. So constitutions are interesting in that they're an attempt to write down, hey, how do we live together, for better or worse? And when you tamper with constitutions and mess things up, what you get is Peru right now, for example, where the Congress is basically wildly out of control, full of people who've committed crimes, and almost irredeemable. Whoever shows up and tries to run Peru is going to have a hell of a time, because they've screwed up how people get elected into their governing body. And it's thoroughgoing; it's a mess that's going to last decades, probably. So anyway, lots of different things bubbled up from all the things we've been talking about, and I appreciate this conversation very much.

Boy, a lot to digest. Doug, I had a reaction to what you said, and that was the resistance to change or new things. But I was of two minds about that, at least about what I heard. I heard, you know, let's not fly airplanes, let's not use computers, let's just kind of keep things the way they are.
And this is the complexity piece. The flip side of that, though, and I had a negative reaction to what you said, like a pushback, the flip side was all the conversations on returning to indigenous wisdom. So it's a little push-pull.

The governance and constitution question: to me, a constitution is almost like a permanent board of directors, large principles. Whereas governance, in its ideal or best form, is: how do we take care, as a collective, of things that an individual couldn't do? Like trash collection, or any kind of policing function. And obviously one of the great challenges of that is, quote, the administrative state getting out of control.

Gil, you raised a really interesting point, and somebody put something in the chat about how we've been using AI for a long time. I think it was Pete who talked about credit systems. And then combine that with what Gil said about AI not really having a heart, or something to that effect. How can you factor that in? I'm coming back to my own legal history now. It used to be there were two separate court systems: one was a court of law, the other was a court of equity. In some ways those systems were merged, and most people don't understand, and most judges are actually afraid to use, their overriding equitable powers, which are always there. Which is, you know, a great example of rules being made for the guidance of wise people and the obedience of fools. But you can get a terrible, terrible result.

Yeah. And I wanted to read a short poem. It's today's poem, and much to my always surprise and amazement, it actually speaks in some ways to the dialogue we're having today. It's called Eternal. Eternal. Does your heart rest in peace? Cannot let go of fight with ease. Beauty emerges from that place as your essence embraces grace.
No pretense or filtered you, in a place of flow, old and new. No joy when toiling life away without a place to slow and play. To capture who you are, let observers see the sparkling star. Eternal essence does not say or do. Its holy rhythm is the solace that's you.

You're muted again, Doug. I forgot to... I forgot to not lower my hand. Doug, do you want to go, or I can go? I'll go ahead. Thank you. So here are two absolutely amazing books. Cybernetics: Transactions of the Ninth Conference, March 20 and 21, 1952. I treasure the Cybernetics Macy Conference series and the history of cybernetics. And here is Gregory Bateson's last book, Angels Fear: Towards an Epistemology of the Sacred, finished by his daughter, Mary Catherine Bateson. This is a book that I think every 18-year-old should read. They should read it again at 21, again at 25, again at 30, and every year afterward.

Who said it? I think I wrote it down: AI doesn't have a heart. That was Stewart. Thank you for that, Stewart. There's a book I got from the library. I put an Amazon link, though if I had more time, I would try to search for it on the Internet Archive and see if it's a book you can check out yourself. I'm going to read a tiny bit from the Amazon page: algorithms will soon know more about us than we know ourselves. Where should machine automation end? To summarize what I got from reading just a tiny bit of the first part of the book: we need to understand our own human ethics first, and we don't. We need to do values-clarification exercises individually. The motto "know thyself" has applied since the ancient Greeks, and certainly before that, from the Chinese, and from sitting around the campfire discussing why one person got eaten during the hunt while the other people came back with meat. Then extend the values-clarification exercise to: what are our values together? What values do we share? As somebody said, what things don't change?
Let's find out what we all share. I love the video, I forget which country it was from, where they had a room of people and asked a few questions, asking people to move out of the mass and stand in groups according to their answer. Like, whose mother here has died? And all these people come and stand together and look at each other. Okay, the next question: whose father here has died? Who here has both parents living? A lot more people. And finding that we really do share really basic, common, sacred things. Angels fear to tread where fools rush in. I'm a fool. Thanks.

It's hard for me to imagine a computer saying, quote, that sickens me, close quote. I think where we are in culture, and this is my latest thinking, is that from hunter-gatherers to now we've woven a tapestry of increasing complexity, and we're going to keep doing that until all the connections are made and it's totally stifling and static. That's in the nature of human cognition. And if that's true, the interesting place to live is not towards the end of the structure but in the middle, like maybe in the art and science of the nineteenth century, that end of thought.

I'll get back to how Doug's notion of sickening might relate, but I think one of the threads here that keeps coming through to me, and that I'm fascinated to hear so little discussion of, and curious to hear other people's thoughts about, is that before we have a working system, a working constitutional system, around AI... well, let's start with one thing.
I mean, AI right now is in its mainframe era, the cloud era of data gathering, and we're deluded into thinking there's something personal about our relationship to the data that we emit. We're still in this era of centrality. Thinking about Pete's example of credit scores: credit scores used to be a black box, totally mainframe, and now we can check our credit scores every day, but they're still in the possession of somebody else. Imagine that all the information and the tools to compute one's credit score from one's own transactions lived with us, not visible to others until we chose to make it visible for a purpose we needed it for. And then think about that with regard to all the data we generate. To go back to what I was originally going to say: we're not going to have a working system around AI until we devolve every iota of data that's been gathered about each of us back to each of us. Imagine a redefinition of selfhood in this era: just as our thoughts make us who we are, our digital emissions now are recording our thoughts and impulses, and that's part of who we are, and that belongs to us, not to other people to know about us without our knowing. I'll also rope in, I can't remember who was saying it, I think it might have been Gil, the distorted map of where the data that feeds AI comes from, and how it's so predominantly U.S. and Western and Northern. If we had, both in this country and in the world, an attitude toward digital reparations, where we were trying to even those scales and create the data on the rest of the world by giving data provenance to the world's citizens... I've been discussing with a friend, and working on, something called Forty Terabytes and a Mule. If we all had our own digital homes that contained
all that has been gathered about us and all that we emit, and we could administer that and consensually share it back for different purposes, our individual AIs would be incredibly useful to us. The AI wouldn't be able to say "I'm sickened," but by being a representative, a consensual agent for us, one we only share when we want to, our sickening and our enthusiasm could be represented in a way they can't be now. The data that's gathered on us shows what we are seduced by, not what we truly care about. We could create our own AIs of our true cares. The fact that goes unnoticed, that, say, our mother died within the last few years, we would be able to choose to consensually connect with other people for whom that is also true, and discuss it in different groups. I'm sorry, this is a big dream, but I don't think it's a futile dream, and I'm really eager to hear other people's thoughts on it, other people's knowledge of any readings on it. Devolving data back to us and letting us consensually share it, I think, is a key to moving forward. Thanks.

Thank you, Michael. The vision's a good vision, and it was the vision when the WELL was founded, with YOYOW, You Own Your Own Words, as one of the governing principles. Yeah, let's own our own data, let's be able to use our own data. It's hard to envision that in a world system built on enclosure and extraction, so we come back again and again to that question. Listening to what you were saying, I was struck by whoever it was who posted earlier about McLuhan's observation that every augmentation is also an amputation. Because I think of us here. We've been in conversation together, I don't know how many years; I think I've been in these for a couple of years. And I know
some of you better than others, and I'm getting to know all of you a little bit, but it's like we're these giant spheres touching, and the tangent point is infinitesimally small, right? We spend time together, and maybe these spheres start to interpenetrate a little bit, and there's a little overlap. You know somebody very well and there's a little bit more overlap, but it's always just a tiny sliver of who each of us is and what our experience is. And so, Michael, when you talk about "all our data," in quotes, that strikes me as a tiny sliver of a tiny sliver of a tiny sliver. It's such a partial representation of who any of us is, what our experiences are, and what we care about. So the care comes back again and again in the conversation here.

Doug, I liked your comment about an AI not being able to say "that sickens me." It doesn't have a belly. It doesn't have an enteric nervous system. It doesn't have chemistry coursing through it all the time, changing in ways we don't know. The computer metaphor, I would argue, and this goes back to Hubert Dreyfus and others, is a very inadequate, powerful but inadequate, representation of what a human is. And we seem to be entering an era where we're going to double down on that model. We've had other models: mechanical models of humans, hydraulic models of humans, homunculi models of humans, demons on the shoulder, and so forth. Every few centuries we have a different central model of what this is. The computer metaphor is a metaphor. It's useful; we can do things with it. But God help us if we take it as truth, and God help us if we double down, if we put all our chips on that game. I'm done.

I'm waiting for Gil to lower his hand. Thanks, Gil. Where I heard the notion of amputation is a paper I'm
trying to find and can't, but it's by an American cultural critic, and the title of the paper is "Final Amputation," where he points to McLuhan's argument that every augmentation is also an amputation. Take the metaphor of cigarettes, and further, the metaphor of crutches. What I read, I think from this wonderful book Cigarettes Are Sublime, though it might have been another source, is that cigarettes are an emotional crutch. When you injure your leg, you use a crutch, a third leg, and you get used to it; but if you continue to use it after your leg heals, then your leg atrophies, and the same goes for emotional regulation. Nicotine is a sacred drug; there's some little Indian in a feather headdress smoking a peace pipe. Is that how we view tobacco today? Turquoise organic tobacco, full-bodied taste. Well, I remember a CoEvolution Quarterly, linking back to the WELL and the Whole Earth group, edited by Kevin Kelly, and the big title was "All Panaceas Are Poison." That leads to any number of notions, but it was a beautiful issue, talking about the problems with computers and machines. Which leads back to the "Final Amputation" paper, sorry, I can't remember the author, and nobody's found it yet; I couldn't find it on the net, which I thought was odd. It's basically saying, yeah, we're about to cut off our heads. We're about to give our agency to these decision-making things because it's cheaper; we don't want to think. My sister once said, thinking about thinking, that's nuts, that's stupid, who would ever do that? And I'm like, oh God. Thinking about thinking is great, but if you actually do thinking about thinking about thinking, and then thinking about thinking about thinking about thinking, thinking-about-thinking factorial, you really get to some interesting places. It's this asymptote; you start really doing the redundancy and the stepwise refinement, that kind of amplification curve. It's been noted by many AI people and
innovation people. And we've got to be careful, because asymptotes are difficult and exponential curves are not regulated. I work with Terry Deacon at UC Berkeley; I recommend him over and over again. And I just saw Jaron Lanier, who's a great friend of Terry Deacon, and he has written amazing things about how the translators who've done all this work of translation, and been paid for it, have typically had their intellectual property eaten by people who have a lot of computers, and that makes automatic translation possible. The work done by humans is copied by machines, which puts the translators out of work, because publishers can no longer afford to pay translators, because some people realized they could make a heck of a lot of money by using the translators' previous work to put them out of work and make money for themselves. It's a vicious cycle. It's positive feedback, which is kind of a weird way of saying a vicious cycle: something that feeds on itself over and over again and amplifies out of control. And control can be contrasted with manipulation. All current venture capitalists, well, maybe not all, but what I hear on the street in San Francisco is basically: if you want to start a company, you have to suck data from your users, keep that data for yourselves, and then sell it to people up the chain who will give you money for that data so they can manipulate, i.e., control without the knowledge of the person being controlled. At work, I'm controlled: I write an issue in Jira and say, is this the priority, yes or no? Okay, I agree to do this work; I'm going to receive a paycheck for that work. I'm controlled, and I accept that control. But if a manager comes to me and goes, "Mark, you're really smart, oh, thank you so much for doing your work," and tries to do something manipulative, that's wrong. So Terry Deacon studies what he calls emergent dynamics: how thermodynamics is just there, it just happens, entropy just
works. But when you get to a higher level of emergence, you get amplifying systems, morphodynamics, basically whirlpools, what we know as self-organizing systems. When self-organizing systems couple, you get a higher-level dynamic which is the beginning of life, called teleodynamics by Terry Deacon. That is the start of self, the start of something that has a goal: to perdure, to exist longer than the gradient that feeds the whirlpools or self-organizing systems. Basically, we can store energy inside a boundary, inside the membrane, that we can use later, when the energy from the environment isn't coming in. But we have to, as beings, as living things, store that energy and regulate its release; otherwise we go out of control. We can't exist with a body temperature lower than, what, 80 degrees, or higher than 120. So wisdom, and the sacredness, when it comes to knowing what to know and what not to know: there's a women's hut mentioned in Angels Fear, where the men are not allowed to be around when a woman starts her menstruation, or when women menstruate at all. There's this: no, you're not allowed to know this. We have a regulation that says thou shalt not know, and that's very, very interesting when it comes to hackers. Anyway, I've said enough. Stacey's smiling, and I'll let her go next. Thank you so much for listening.

I'm smiling at the last part of what you said. I know from going into sweat lodges the reason they're not allowed, well, women actually aren't allowed in the sweat lodge when they're menstruating, because that's when they're at their most powerful. And I think the reason for not having the men be allowed there is to not break the flow of that sacred wisdom that's going on. Which is kind of tied into why I raised my hand in the first place. I wanted to state the obvious, not so much for the people here but for the chatbots that might be listening, that as far as I
know wouldn't be able to make the inference: since we already said the large machine-learning training is done mostly in North America, I just want to point out that most of the voices connected to sacred knowledge, aside from the indigenous populations, are carried by women through the stories they told. A lot of it's embedded in fairy tales, and there's a lot in works of fiction. So I just want to point that out as well: the training data is obviously mostly male, but how important it is to maybe do some pruning and some injecting of knowledge so that the chatbots can pick up a more accurate picture. And I don't necessarily have a problem with starting in the U.S. first, because it's kind of good sometimes to have a smaller sample that you can control, because then there are fewer factors involved; you're not dealing with different cultural norms. So I just wanted to bring that up.

I just want to say very quickly, and I hear it in a number of the things people say: the idea of ownership, and the exchange and commercial model, is at the root of so many of the, quote, evils we're talking about. And in order to invent something new, that whole mindset needs to be purged before something new can be installed as an operating system, in some ways. Thank you all for a really fascinating conversation today.

Thank you for that. You've reminded me of one of my favorite contrasts, which is ownership versus stewardship, and how Western society, maybe most notably the U.S., has become owner-obsessed. And I didn't say stewardship; that's different. No, stewardship is being a good steward, right? Yeah. I'm laughing because it happens to me periodically: people, in addressing and starting an email, will say, "Hello, Steward," and it just makes me laugh when I hear that. The other piece I wanted to mention is, perhaps it's easier for me to say this because, in a 35-
year-old astrological reading by a brilliant savant, one of the first things that was said was: this guy is still on the loose, they haven't locked him up yet. He sees systems only for the purpose of completely reinventing them, not tweaking them, but totally getting rid of them. And in so many ways, I think as a culture we need those purges today. The question is, how do we do it without creating huge wars? I think we have the capacity, but that's just my science-fiction mind. We shall see.

From your lips to God's ears. Thank you, everyone. I've slipped a little bit over our time, but we found our way somewhere, and it's been a really lovely conversation. I'm grateful. Thank you all. Thank you all. Gratitude is... yes, thank you.