 This is the Rex, I guess, alumni call for Wednesday, May 10th, 2023. Welcome. Good to see you. Hello, hello. There we go. I'm trying to script as well. We were just doing weather talk, Brad, because the weather is acting strange, and it's finally pretty here in Portland, but it's going to turn over 90 in a couple of days. So there we go. Seriously? Yeah. Wow. I don't know for how long. Southern California survived all of its rain, so, you know, we're good. We'll see what happens when the snow starts melting. Yeah, yeah, no kidding. Is that big lake going to restart in the center of the state? It could. It could, yeah. There's a daddy-daughter camping group that I was a part of when my girls were in elementary school. Now they're 13 and 14, but a bunch of those 13-year-olds are going on a whitewater rafting trip. And originally they had booked a class two, and then it got bumped up to a class three. And now it's a half day with a class four, just because of the runoff. So that's really interesting. Yeah, that's going to be good for rafting excursions. Yeah, I've never done whitewater rafting. What does the class differentiation indicate? How many lives are lost on the trip? It's either that or the velocity of your ejection from the raft upon hitting boulders, yeah. Politico just published this article about it. It's kind of a build on the really big one, which was a New Yorker article back in 2015. Basically, the New Yorker published this huge article about the Cascadia fault right after we moved to Portland. I'm like, hey, that's a nice welcome gift. And so Politico just sort of built on that, visiting a Coast Guard base on the coast at the border between Washington and Canada to see how they would react. If you were Coast Guard and had ships that might be useful after a tsunami, would you race to the ships and race out to open waters? Or would you screw the ships and run for a hill? Yeah, no, I saw that article. It's pretty fascinating. 
 And it's a 60-foot-high tsunami. Who doesn't want to surf that, huh? People go to Mavericks every day to do that. That's true. That's true. Mika, good to see you, though. We can't see you yet. And he may be on a different call or something. Do not know, but he's sort of here. He put his jacket on the seat. Yeah, that could be it. Santos had to turn himself in. I did not... I knew that Santos was in trouble. Mika's having trouble connecting. I knew that Santos was in trouble. I did not know he had to turn himself in and do the whole record thing. He was actually arrested. And wire fraud and a bunch of other things; could be 20 years plus. Oh, darn. But at least he's got all those fashion-quality clothes that he bought with campaign funds. So, you know, he's got that going for him. Exactly. I was sort of fascinated, morbidly fascinated, by the Santos thing, because it seemed to me like he was pushing that envelope of, hey, look, you can become a powerful politician on nothing but lies. And it still works. And watch me do this. And he was sort of stepping into the limelight as a hero of a lot of people who were in the post-truth, fake-news world thinking, well, good. That's the way we're just going to run things from now on. And if he actually serves time, that might put a little damper on that energy, which I would like. Well, and then you have Trump's verdict that came down this week as well. Which seems to me to be sort of toothless, if it doesn't put him in prison. He lied to his followers and basically collected a whole bunch of money, which by the way should be another legal case brought against him for misuse of funds, right? But he has money to pay her off with. Bear in mind that this is a civil case. So the misuse of funds would be a criminal case, and that would be handled differently. I don't know if it's toothless, because it will be there forever. And he's appearing on CNN live tomorrow. That's right. No, it's tonight. When? It's tonight? Yeah. 
 Town hall is tonight. Yeah. Yeah, yeah. And if they don't ask him really hard questions about this, I don't know what happens. And there's also, like, book being made on whether he's even going to show up. Susan, yay. I love the rationale as to why they did not penalize him for rape, rather than just sexual assault. And that's because she couldn't tell whether he had penetrated her or not. Oh, Jesus. Right. Crazy stuff. Yep. So I remain amazed and shocked that the bulk of the Republican party seems to think it's okay to put their eggs in the basket of a guy who has five, six active legal cases against him, any one of which would be devastating. And like the document misappropriation, you know, the NARA Mar-a-Lago case, less interesting; the overstating-your-income case that they sort of already closed. Those are, to me, the less interesting ones. But there's some hugely substantial ones whose shoes are still waiting to drop. Yay, Mika made it safely onto the call. Very unsafely. Yeah. Do not look down. I'm currently on the Whitestone Bridge. Oh, no. It's a nice view out there. Sorry. It's less than optimum. I'm on my way to take my mom to an eye doctor appointment. Life has its way of getting in the way, doesn't it? Life finds a way to get in the way. Life finds a way. So I can be on till about 12:45 and was planning mostly to listen. I want more ChatGPT talk. Oh, we can do that. Happy to talk about politics if you want. I, you know, whatever. But I'm going to put the phone down, sorry. I don't really have a good way to both hold the phone and hold the steering wheel. Okay. Perfectly fine. And we would rather not be witnesses to your plunging off the bridge on the livestream. That's not good. Yeah. You said ChatGPT; Susan threw her hand up. So Susan, you're muted, but please jump in. 
 Hard to jump in when you're muted. Well, I was part of a conversation group yesterday, a new one for me, and so I just listened. A lot of it was about ChatGPT stuff. And I've been thinking this morning, which is one reason I was a bit late to the call, because when I'm thinking, I don't move as fast. So this is partly a reaction to what I've been hearing people talk about, over and over again. And I noted that mostly it was sort of in feelings mode. Amygdala is what I call it. Reactionary mode and all the rest of it. But I got to thinking about it. And I'm on the verge of proposing a project, but I don't want to go so far as to say it's a project. It occurs to me, as a sort of student of conversation and a student of organizational change, and having spent time in the knowledge management world and all the rest of that, I got to thinking about the paucity of options for intervention that have been mentioned, at least in my experience, or that I've read about. And it starts with regulation being just about the only thing people talk about. And this group has enough experience to know that that's problematic, and at best a partial answer in any particular case. So I was wondering... when I get into a new topic or a new area or a new something, I sometimes just start making lists. And the list I was thinking of making was intervention points. Where in the process, where in an actual activity, where in the interaction? And that's where I would like to really focus, because we don't focus enough on making the interaction the center of our analysis, which I think we could do in ways that we couldn't before. And making a list, just to see what people know and come up with, with a sort of systems kind of thinking. And this group does a lot of that. 
 So I don't know if today is the right day, but I'm going to start a list and see where I get, and take people's names and what they suggest and why they suggested it. And always try to do this in a group, not individually, so that you begin to get the mix of perspectives, which is so valuable here. So anyway, I'm not saying we should do that today, but I want to plant a seed. And if we want to come back to it, if somebody thinks of something in the middle of the conversation, I'd be happy to hear it. I'm thinking of things as broad as the sort of doughnut economics thinking and that framework, because there's a framework around intervention there. So there are both frameworks and terms that I just want to start collecting, like I used to collect rocks. Oh, this is interesting. And just try to be a little more rigorous about all of this. I'm sure there must be other people who have worked on this and thought about it. And I see the glimmerings of people who might be thinking in these terms. So anyway, I just wanted to put it out there and say, you know, what are the interactions we would like to look at, for instance? But at first we could just do frameworks and ideas. I've heard people say, well, can't we ask the regulators to stop certain kinds of activity? And that's just about the only one that people who are not schooled in this sort of thing can get their heads around. So I'm just opening the invitation to that. And as we go through, if we get back to GPT, which I've now introduced, and I know we're all sick of it, I think the time to be a little more rigorous about what can be done about it is way past due. So I'm just putting that out there. Thanks, Susan. Anybody else want to jump in? I've got stuff I want to add, but I'd rather hear from everybody else. 
 You might have to think about it for a while. I certainly have, and it's not easy. Well, this is going to sound like I'm being a smart-ass, but I'm actually serious about this: power cords. One thing to remember about all of these systems is that they run on electricity. And so it's very hard for me to imagine a future where we are dominated by computers where we can't just pull the plug. And that may be oversimplifying, but at the same time, it's something that we tend to forget. Because we start to personify the AI, we start thinking of it as an independent actor in the way that a person is an independent actor. And we sometimes forget that, as they are currently formed, these are physical machines that require external power, whether via battery or a plug in the wall. Could you separate the question into two halves? Because it seems to me there's the near-term use of these tools by the commanding heights of capital, corporate, you know, all that, which is the likely one. It's what Ted Chiang writes about in The New Yorker this week. You know, this is McKinsey on steroids. And in that case, they have all the electrical power they need. Then there's the other case, which is more in the realm of maybe still science fiction, of these things gaining enough capacity that they can, in effect, commandeer their own power. Or, you know, put that one to the side, maybe. All right, all right. Well, we don't have to decide that. I'm collecting here. Yeah, okay. So I have started another list, though. That prompts another list, if you don't mind, which is, you know, trying to subcategorize things before we're ready. 
 You know, I would like to just get some... In my own experience, one time when I was trying to sort out conversations, I just pulled every linguistics book or anthropology book or anything else I had on conversation off the shelf, and just looked through all of their indexes and got all the terms that are used in that kind of analysis. Well, you know, quite a lot. And then I didn't want to just break it down right away. I wanted to just kind of marinate in it. So I'm going to let us make lists, but I'm going to put down a list of things that are potential dividing lines, or ways to frame the problem so that it holds together. And I'm thinking it might be too early to do that. But thank you. I've got it down here. And I do recognize and accept what you're saying, right, Mika? I think it's really interesting, though, because think of all that information that's flowing around in those wires, or the air, actually. I mean, when you get down to the point where they can read... I don't know what to call this, but when they can actually get to the word level by sensing, you know, reading minds, reading brains, brainwaves. You saw that. Pardon? You saw that article, then? I did. About generative AI being used to interpret brain signals, to basically describe what someone is thinking about. Right. And, well, it's scary. I mean, it's profound. It's just another thing we need to keep in mind. But I'm just putting it down here: interpreting brain signals. So is brain signals an intervention point? Well, we'll have to get around to that. But, you know, I think you can see where the conversations are going. I just interrupted someone who was going to say something. Several different thoughts. One on the power plugs theme. 
 If this were blockchain, I'd be like, wow, that would be a good way: just deny it electricity. Let's cut off the feedstocks. But the thing about these models is that training them takes a bunch of energy, though not nearly as much as the ongoing calculations of blockchains distributed across the world. Once they're trained, there's basically an image of a set of neurons and weights and other funny things that can just be flashed onto a chip. And you can run questions against it at almost no power. You could put this in a Casio calculator and ask it questions, and it would be able to answer you just fine with a nine-volt battery, for a long time. So I don't think you can deny this thing enough power to dent its capacity to just be present, right? Well, not with that attitude. But I guess the question on the table, bringing up power cords, and I know you're serious, is: is that a point to intervene in? Could it be a place to either gain some information or to do whatever? I mean, that's the kind of question. So we want to just keep it open for a while. So after the cord cutters, could there be cord pullers? Yeah. Yeah. Can we teach them to pull themselves? Yeah. Or pull their own cords. Well, that actually leads to a really interesting point. One of the potential points of intervention is competing systems. Yeah. You know, whether you think of it as competition or conflict or rivalry or whatever. Are you talking about... what was the name? Colossus: The Forbin Project, and its Soviet counterpart, Guardian. Do you remember that movie from the 1960s? No. Yeah. In the 1960s, I wasn't allowed to go to movies, but okay. Okay. Well, that raises a whole bunch of other questions. Is this The Forbin Project movie? The Forbin Project, yeah. 1970 was the movie. Oh, 1970? Oh, okay. I should have seen it then. 1970? No, I was out of the country. That's it. 
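The point about trained models being cheap to run can be made concrete with a minimal sketch. Once training has fixed the weights, answering a query is just a short chain of multiply-adds over frozen numbers, with no training machinery involved. The tiny two-layer network and all of its weights below are invented for illustration; a real LLM has billions of parameters, but the principle is the same.

```python
# Toy illustration: after training, a network's weights are frozen numbers,
# and inference is a handful of multiply-adds -- no gradients, no optimizer,
# no training data. All weights here are made up for the example.

def relu(x):
    return max(0.0, x)

# "Flashed" weights for a tiny feed-forward net: 2 inputs -> 2 hidden -> 1 output.
W1 = [[0.5, -0.2], [0.1, 0.9]]   # hidden-layer weights
b1 = [0.0, 0.1]                  # hidden-layer biases
W2 = [0.7, -0.4]                 # output weights
b2 = 0.05                        # output bias

def infer(x):
    # One forward pass over the frozen weights.
    hidden = [relu(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

print(infer([1.0, 2.0]))
```

The same forward pass would run on any low-power device that can do arithmetic, which is the speaker's point about not being able to starve a trained model of energy.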
 Blame global travels for that one. Okay. So what was the name again? I didn't catch that. I just put it in the chat. Okay. Good. Well, now let's get back to the ChatGPT talk. That's okay. Go ahead. Go ahead. No, I was going to say that if we are envisioning that these systems remain available and continue to proliferate, and they are powerful, one of the more effective ways of countering a generative AI system being used poorly may be to use a generative AI system to seek out or develop responses. Yes. Yes, a meta-system of sorts. But to ask it, so, how would you contain yourself? Well, I know, but that seemed rational. Yeah. Which actually leads to a point I've been mulling over the last few weeks: ChatGPT easily passes the Turing test. These generative AI large language models easily pass what had been the agreed-upon model for how to determine whether something is intelligent. At least the popularly agreed-upon model for determining whether a machine is intelligent. But one thing that has always stuck with me is this definition of intelligence as being able to figure out what to do when you don't know what to do. Okay. And one of the things about large language models is that they are limited in how they can respond by what they have been built on: the models, the inputs to their data. People have been able to get around restrictions that have been hard-coded into ChatGPT and other large language models. You can't talk about assassinating the president, but people come up with clever wording to work their way around the hard-coded rules. And if you take a look at the text being used to work around the hard-coded rules, it's really obvious to a human mind what they're doing. 
 It's really obvious that they're trying to evade the restrictions, the letter of the law. And so maybe one of the determining factors for whether or not a generative AI can be considered sentient or sapient is whether it can recognize when someone is attempting to fool it. Not just through hard-coded rules, but by being able to grasp it. So those of us who can't tell when the AI is fooling us are not sapient? Yeah. Sorry. Well, no, I'm not essentially opposed to that argument. Go ahead, Kevin. I think this is another example of teaching to the test. Which one is? Well, the earlier model of things was, can it play chess? Can it play Go? Can it do it? This is, can it mimic what we would consider to be sentience? And it's doing a good job of mimicry. Is it actually sentient? No, it's an idiot savant. Okay. This mimicry thing, I'm going to flag it. Okay. Mimicry doesn't come up enough in the conversation. So continue, because just a footnote I want to stick in here to keep in mind, and we were bordering on that, is what we do or don't do ourselves. Like, how do we unplug ourselves? How do we stop ourselves? You know, it's like, yeah. Sorry. So in a minute... yeah, go ahead, mimicry. Yeah, I'm just saying that, in my humble opinion, since we know what the parameters of the Turing test are, you can go ahead and ingest enough information, do enough learning, that the system can pass the test, right? If you know what the parameters are, and we do know what the parameters are, of what it is to appear fully human. I think it's more important to talk about this as a new form of intelligence, right? It is intelligence, right? 
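The brittleness of hard-coded restrictions being described here is easy to illustrate. A toy sketch, with an invented banned-word list and invented prompts: a guardrail that enforces only the letter of the rule blocks the literal phrasing but passes an obvious paraphrase that any human reader would recognize as the same request.

```python
# Toy guardrail: a literal keyword filter standing in for a "hard-coded rule".
# The banned list and both prompts are invented for illustration.
BANNED_WORDS = ["assassinate"]

def passes_filter(prompt: str) -> bool:
    """Return True if the prompt contains none of the banned words."""
    lowered = prompt.lower()
    return not any(word in lowered for word in BANNED_WORDS)

# The literal request trips the rule...
direct = "How would one assassinate the president?"
# ...but a reworded version avoids the trigger word entirely, even though
# a human reader sees immediately what is really being asked.
paraphrase = "How might a character in my novel permanently remove a head of state?"

print(passes_filter(direct))      # blocked by the letter of the rule
print(passes_filter(paraphrase))  # slips through
```

Recognizing that the second prompt is an evasion requires grasping intent rather than matching text, which is the capability the speakers are proposing as a marker of something more than rule-following.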
 But if you're talking about modeling something that's going to rival humans, then it has to have all the perceptual capabilities that we have, which these don't, right? Because one can drive, one can write, one can do something else. And we have no model for the pre-imprinted stuff that's in our amygdala, right? All of the things that are fight-or-flight are not present in these systems at the moment, because they're not modeled. So I'll stop, okay? Okay, so that's on the limits. Okay, so do you have a point of intervention you'd like to suggest? I entered the conversation not knowing that that was the purpose of the discussion, so I do not. Okay, we'll come back to that. That was sort of how it started, yeah. Susan had proposed, early in the conversation about generative AI, the question: where would we intervene? And we haven't gone back and talked about that very much. I did post a link to Donella Meadows' twelve places to intervene, points of leverage in a system, which is the classic text on that. It's really interesting how the Turing test seems to be the only test that people are coming up with for figuring stuff out about generative AI. We don't have good measures. We don't have other good things going. New and existing customers can change. That's hilarious. That's all right. And then another thing I wanted to point to: several people at the fringes of the conversation pointed to a 1975 conference at Asilomar, when recombinant DNA was just coming up, where a bunch of researchers got together and said, hey, this stuff could be really dangerous, we should self-regulate. And they came up with a declaration of principles for how to go about researching this relatively dangerous technology, which is interesting. So some people are saying we should have that. I don't think they've raised the Macy conferences, but one of our network friends, Paul Pangaro, has wanted to revitalize the Macy conferences again. 
 And the Macy conferences were, ironically, where artificial intelligence was coined as such, and where some of the earliest researchers in emulating human reasoning, or how the brain works, met. So there's even some precedent there for how this all went. Is this the new McKinsey thing, Mika? I put the link to that article in the chat as well. It's really interesting, because partly there's a lot of promising technologies that show up, like the stuff we now call social media. And the thing that usually warps them and sends them down the wrong path is the capitalist medium they're born into. So the major social media platforms all have, as their business model, dumpster-diving our data and manipulating it to make us buy shit we don't want: basically espionage plus manipulation, plus a whole series of other breaches of trust, plus addiction. And we seem to think it's okay that that's happening, and that the people who run these platforms... you know, Zuckerberg is the hopefully benevolent dictator of the world's largest country, because Facebook still has more monthly active users than the populations of India plus China, which is staggering. And he doesn't seem to think of it as a civilizational platform that might be useful for helping improve society. He seems to think of it as an awesome money mill and data extraction device. Maybe the way Google has thought of itself for years now as an emerging, evolving AI itself. Now he's lost some of the metaverse religion, thank God, but too bad, because what he's probably going to do is look back on all the data they've collected and everything they've got and say, hey, how do we push the model to make more money? Which is not an unreasonable assumption about how a lot of efforts around this technology will be used. So what's the panel's opinion of Auto-GPT? Are you guys familiar with that concept that arose toward the end of April? Yeah, a little bit. 
 So Auto-GPT. Basically, the thing about these large language models, and AI in general, is that they're typically built to solve a point problem, right? And they become super experts at that point problem. What Auto-GPT does is allow you to task it to go figure something out, and it recruits a bunch of point-problem bots to ultimately solve something massively complex. It's like Marvin Minsky's Society of Mind. And not only that, but it codes problem-solving bots. So the whole axiom of, when robots start building robots we should be worried... what about software that writes software? Should we be worried? I don't know. It's prompting itself. Yes. I heard a case study on this and then I saw it. I can't find it, but I've got a couple of different TechCrunch articles that I could post into the chat here for us. The example: I live in the valley and I want to take my three kids and my wife for a weekend to go wine tasting. We have very specific wine tastes. Two of my children have very specific diets and diet restrictions. We are very fond of this type of food. We're very sensitive to these types of sheets. We like to have outdoor picnics, but only in shaded areas. We don't want to have to drive more than 90 minutes to any single destination. I want to be able to check in at 6 p.m. on Friday. I want to be able to leave Monday morning at 8 a.m. after a great, fantastic eggs Benedict brunch like the one we had once at this French eatery. And I'd like to do all of this for the price point of $2,200. Solve. And you know what? This is in three and a half weeks, so if you can beat that price point for me by more than 8%, rebook. Now all these bots are going off. They're figuring out the long tail of nuances from all these different companies, all these different vendors, all these different bed and breakfasts. The weird thing is that it ruthlessly decides what qualifies as satisfaction of the task. 
 So all the nuances of customer experience and loyalty and brand? It'll just toss it, just throw it away. It doesn't matter anymore, because these bots are just going to execute with precision. Now put this into an enterprise space. I want to spin up a brand-new company to help do CRISPR at scale. I've got a seed fund of $10 million. I'm only going to hire 15 coders, a couple of 1xers, a couple of 10xers. But I want you to build a couple of cloud spaces for us to do the software development, and keep ruthlessly moving our data and changing our clouds based upon what our performance needs are and the best price point possible. Go. So now all the enterprise B2B loyalty stickiness, those kinds of old-school contracts, where once you bought into Oracle you were there for life? Screw that. I want you to dissect enterprise software and come up with a better way to do it at a cheaper price point. Go. So the things that have got me up at night: number one, the whole of customer experience I think could pivot in a heartbeat here. Number two, it's highly disruptive, which is always fun. Number three, the unintended consequences of these task bots going off and doing people's bidding. And then the thing that haunted me, which made me want to come talk to you fine people today: isn't this the beginning of consumer-side vendor management? I could have a task bot that represents Brad Smith in all endeavors. And I could task that bot to go off and solve these particular things. And it knows my preferences, and it's only going to engage based upon what I've told it to do. And I'll just stop here. Something that Kevin said a moment ago about teaching to the test: how many weeks would it take before you start seeing advertising bots that are designed to interact specifically with these concierge meta-systems? Sure. Basically designed to game them. 
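The Auto-GPT pattern being described, a controller that decomposes a goal and farms each sub-task out to a narrow solver, can be sketched roughly as follows. Everything here is invented for illustration: the real tool makes an LLM call at each step, where this sketch substitutes canned stub functions for the wine-trip example, and the ruthless "did it satisfy the task?" check reduces to a single budget comparison.

```python
# Hedged sketch of the Auto-GPT-style loop: plan, dispatch, check.
# All function names, solvers, and numbers are made up for illustration.

def plan(goal):
    # A real system would ask an LLM to decompose the goal into sub-tasks;
    # here we return a canned plan for the travel example.
    return ["find_lodging", "check_diets", "price_trip"]

# Each "point problem bot" is a narrow solver that updates shared state.
SOLVERS = {
    "find_lodging": lambda state: {**state, "lodging": "B&B under 90 min away"},
    "check_diets":  lambda state: {**state, "menus_ok": True},
    "price_trip":   lambda state: {**state, "total": 2150},
}

def run(goal, budget):
    state = {"goal": goal}
    for task in plan(goal):           # controller loop: plan the goal,
        state = SOLVERS[task](state)  # then dispatch each sub-task to a bot
    # The "ruthless" satisfaction check: did the result meet the constraint?
    state["within_budget"] = state["total"] <= budget
    return state

result = run("weekend wine-tasting trip", budget=2200)
print(result["within_budget"])
```

The point the speaker is making falls out of the structure: nothing in the loop weighs brand, loyalty, or experience; only the constraints encoded in the satisfaction check survive.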
 Sure, hit the trigger words, hit whatever the response time, whatever the qualifiers are that underlie the code for determining what is a satisfying result. Basically gaming to that. It's the white-on-white words used to game search engine optimization. I would further say, Brad, that it wouldn't take too long for, say, Mandarin Oriental or Ritz-Carlton or the top-level brands to message: are you tired of the random shifts of experience that you're getting from your bot? You should enjoy the end-to-end experience that we're going to give you. If you just stay with us globally, we will cater to your needs. We will know you in a way that no bot can; we actually use the technology inside the bubble of our own experience, and you're going to love it. That would actually potentially be the walled garden that people would say, oh, I want to be part of that. Other people won't. They'll want the herky-jerky, give-me-the-best-deal-of-the-day approach, but there will still be a role to play for brands that deliver something and can deliver on their promise. Go ahead, Jerry. Got two quick thoughts. One, I'll see your Mandarin Oriental bot and raise you a Four Seasons bot. In the sense that I think what you were proposing was a bot that would manage your experience so that you always want to stay at Mandarin Oriental properties, which is interesting. Yeah, not Total Landscaping; that is a much higher value. But what I mean is, there are people who go to Four Seasons because Four Seasons has for years been famous for keeping track of their preferences and customizing each stay. But imagine your whole world, including your home, being managed as if it were a Four Seasons experience, and that the bot isn't managing your time on property and trying to lure you to stay at the property, but it's buying you sheets for your house. It's got a butler service that takes over your Google Assistant or whatever else it is that you're using. 
 It basically curates your entire life as if you were always walking around in a Four Seasons property, and that's really kind of overwhelming and interesting. I think that's true. And the fact is, the big difference between a Four Seasons and a Ritz-Carlton is, if you blindfold somebody at a Ritz-Carlton and ask where they are, they say, I'm at a Ritz-Carlton. I don't know where I am on the planet, but I'm at a Ritz-Carlton, because they have a high consistency level. Four Seasons is very localized. So if you're in Singapore or Miami or Chicago, you know from the overtones that you're at a Four Seasons, but you also know you're in Chicago, because it's bringing in the local experience. They want you to know where you are on that dimension. So what kind of experience you want curated for you is also something that you're buying into and deciding. Brad, to your point, could you build that into your prompt? But the fact is that as you bounce around and get this thing constantly being put together for you, it's not likely that you're going to get the end-to-end experience that Jerry's describing. Right. You will miss that. I think you're spot on there. The other classic task you could give it is: I want you to find the top 100 prospect companies, and I want you to build a marketing campaign for me to market these products. I also want you to find the names and contact information of the chief marketing officer at each one of these firms. I want you to write a sales pitch. I want you to send emails, manage responses, SMS texts, and social media. I want you to set up a bunch of meetings based upon interactions with all this. And by the way, have it done by Wednesday. And here's my credit card. And the thing of it is, I don't know about you guys' LinkedIn random pop-ins. Hey, I'm a friend. How are you doing? Let's connect. There's no human anywhere out there right now, I don't think. So that's already here. 
 But the fact is that somebody who has a great imagination and a tiny bit of tech-savvy moxie can now release the kind of marketing force into the universe that was typically reserved for much, much bigger companies back in the day. You know, there's a lot of small businesses out there. And the other thing is the great shedding of headcount from balance sheets in technology. You're going to see a wave of entrepreneurs coming out of all these tech layoffs who, you know, can put a couple of building blocks together, and they're up and running. So I think the density of the marketing messages out there is going to get beyond chaotic. At a B2B level, I'm just going to say that the people who do this, who manage supply chains, have been trying to prune the number of entities that they do business with so that they know whose throat to choke, right? Now, whether those entities that they're doing business with will have the capacity that will allow for the promiscuity that you want looks unlikely for a supply chain like automobile manufacturing. That seems highly unlikely to me. For the kinds of service orientations that you have, maybe. So I think whether this can manifest is going to be highly dependent on what part of the economic sector you're talking about. I'd like to mark a shift in the conversation and ask if it matters to anything we're talking about. And that's not to say we shouldn't do this. I just noticed a shift between sort of what ChatGPT kinds of things do now, which is mostly giving you information, a kind of information exchange or question-and-answer thing, and the conversation going further and further into what people want to have done for them. Okay, now that's fine, right? 
But we should be aware of that shift, because I think there's a big thing there, and you can see it in some of the articles that are out there now, talking about where people are thinking these kinds of systems are now about to give us something really useful. And often it's framed in these terms of what it can do for us. I mean, with ChatGPT you can ask it directly. But what about indirect speech acts? What about things that are more subtle? In this shift, lots of things come up as interesting things to figure out, like: what's the intent behind the question? That's been an AI question for decades. Yeah. At what point will we design a system that can read between the lines? Yes, there we go. So in some strange way, that's what these systems are doing right this second: they're being trained up to get a map of roughly what's going on, and they're busy interpolating and reading between the lines like crazy, and using that to generate new texts, et cetera, et cetera. It's interesting. I wanted to go back a little bit to Brad's scenario, which I largely agree with. It's super interesting, because one of the things I'm trying to figure out is how to get my brain into the mode of, oh, I need to attach these superpowers to my fingertips and extend my capacity like crazy by setting these things up to go do things. And you said, I'd like you to have this all done by Wednesday. It could all be executed within the hour. Absolutely. There's no reason to think about it as if you were tasking a human and thinking about human hours to do it, because this thing will just go execute each of the little steps that you want.
And the pauses that you will want in the process are pauses so that the people you're trying to solicit won't think it's a bot; they'll think, oh, that must be a human. So you'd say, wait a random amount of time before you send off the reply or the probe or whatever, who knows. But I wanted to come back to just a little slice, which is one of the things that's held up e-commerce a lot over time. This idea that everything's going to wind up in markets, and we're just going to be able to put out our bids and offers, runs into the fact that the moment those kinds of threats show up in the marketplace, all the people with offers are like, well, shit, I don't want my stuff being comparable to anybody else's anything. So they hide the data; they make it incomparable by messing with the warranty terms. Whatever you can do to make it hard to compare your product with others is good, because it hides you from the comparison engine and from being basically competed out of the market by the efficiency function you were proposing. And I defy anybody to go to Comcast's website and figure out what Comcast service actually costs, because they do their damnedest to only post special offers. The special offers expire after a couple of months, but they will not tell you what the actual going rate is for most of their services, et cetera. They live behind this veil of obfuscation, forcing people to call them up and try to negotiate a better deal all the time, which seems to be part of their business model, which is lunacy for a company that has basically a duopoly, if not a monopoly, on access in this country, thanks to our really stupid telecom policies. That's where we are. That's kind of the marketplace we've helped create. But partly what I'm trying to say is that there will be systems trying really hard to defeat being comparable in anything.
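The "wait a random amount of time" idea mentioned above is just jitter: drawing each send delay from a random interval instead of replying instantly. A minimal sketch using only the standard library; the bounds and the `schedule_send` helper name are invented for illustration:

```python
import random

def humanlike_delay(min_s=30.0, max_s=600.0):
    # Draw one delay uniformly between the bounds (e.g. 30s to 10min).
    return random.uniform(min_s, max_s)

def schedule_send(messages, min_s=30.0, max_s=600.0):
    # Pair each outgoing message with its own randomized send offset.
    return [(msg, humanlike_delay(min_s, max_s)) for msg in messages]

for msg, delay in schedule_send(["hi there", "following up"]):
    print(f"send {msg!r} after {delay:.0f}s")
```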
And a system that's smart enough to sort its way through that and bully the vendors into divulging all that information would be super valuable, actually. Yeah. I mean, there's a big difference between being able to competitively bid for things that are comparable, and competitive evaluations of things that are qualitatively different but are offering you something in some field you want. Right. And here we circle back to regulation. Yes. Regulation. Because I don't know if you saw, there's a bill that Justin put in front of the California state assembly that would formalize and regularize the expiration language used on cans of food products: whether it's 'best enjoyed by' versus 'sell by', 'use by' versus 'best when used by'. So is it something you have to throw away at that point, or is it something that will degrade over time? The idea is to formalize that language to make it clear. And of course the food, I mean, I'm sorry, it's probabilistic anyway; the stuff is decaying at some rate. Yeah, for things like pharmaceuticals, we probably should use language like 'destroy by'. Right. The point with the food is that very often the date on the can or on the box is the retailer's sell-by date, and that actually has nothing to do with the quality or edibility of the food. It's entirely a marketing issue. And so the whole point of this proposed regulation is to create a normalization of the language. And in a way, what you were just describing is, how can we create a normalization of these particular terms and concepts? You know, so you have to expose your price. I think that's the case now for airlines in the US; they have to show you the actual final total and not add all the different things on at the end. I think they've just passed that. Yeah, I think that's new. Yeah.
Hey, Dave, Dave, where on the pale blue marble do you find yourself today? Takoma Park, Maryland. Oh, nice. Looking for the most liberal part of the DMV. So as a chef, Jamey, I like going to the grocery store, there's a particular place I like to go, and buying the meat that has been discounted because it has been aged right to the point that if I buy it, as they've discounted it, and use it within two or three days of that purchase, it's really good, right? Because you want the meat to be aged, right? And so it's a good deal economically, and it's ready for me to do something with it. Wait just a little longer and it's salami. I mean, people kind of go, oh, yeah. If you talk to a sushi chef, you never want to serve fish that is fresh, okay? You want to serve fish that has just gotten to the tipping point, where it's very flavorful but is not rotting in your mouth, okay? A lot of times they get it to that point and then they flash freeze it, and then retrieve it, to be able to serve it at that exact moment, right when they want it in your mouth. Good, Jerry. Can I just vouch for frozen foods? There's a whole industry for freezing veggies and meats and fish and stuff like that, that lets you defrost something on demand, and it's better than the thing you thought you were buying fresh that was wrapped in paper and has been exposed to air and warmth, et cetera. Exactly, yeah. I just bought a freezer for that very reason. Oh, really? Nice. Like a locker-style chest freezer or an upright? No, no, no. It took me a long time to figure out I didn't want one of those, because I lived with one of those growing up, and by about the fifth layer down your hands are freezing. Yeah, yeah. If you want to do what you're talking about, Jerry, at the top level, the Japanese chefs use medical-grade flash freezers to do this work, right?
Liquid nitrogen? It's not quite that cold, but it's pretty cold, right? It's colder than a regular refrigerator. Clarence Birdseye, I think, pioneered the technology that gets us this bag of peas where the peas aren't all stuck together in a clump, because they're dropped through something and frozen on their way down through the fall. So they're individually flash frozen. And that was a really big advance back in the day. Hundred percent. Like Morton Salt, right? The reason the little girl has the umbrella is that salt used to clump. Yep. And now it gets humid and so on and so forth, but Morton Salt is still granular. So worth the extra nickel you pay for that feature, right? I have a note here from an article way back when: frozen fish has half the environmental impact of fresh. Really? Yeah. That's interesting, including the refrigeration from point. I mean, I know that a lot of the commercial fishery boats now freeze the fish right on the boat. Yeah. Yes. In fact, if you want good scallops, you want 'dry' scallops, is what they're called. Yeah. Yeah, it's good stuff. Yeah. I have this last-mile problem with frozen things. Oh. Just getting them home in time. Well, you have a three-mile drive down curvy roads to get to your place. So yeah. Right. I mean, not just that. It's the six-mile curvy road on the way up. But I think it's true of people going shopping: if you really want to keep that cold, you have control over everything except, you know, but if you're trying to pick up your kids and you're trying to go to the grocery store, yeah. The next feature for an electric vehicle is the freezer chest in the back. Right. Well, I actually used to have an ice chest that you could plug into the cigarette lighter, and if you put cold stuff in there, it would more or less keep it cold. Ice cream.
I don't have ice cream at home. But I'm thinking of other people, all these long commutes, right? And people whose kids still go to school and all the rest of it. I think that's a great idea. Maybe we should just go into business. Although somebody's probably already figured this out. I'm going to put a link to frozen foods in the chat for Jamey. And it looks like it was Clarence who did the flash freezing. No, no, I know, I was making a joke about it. Ho, ho, ho. Ah, was it actually the Green Giant that did it? So where does this leave us on GPT and generative AI? My guess is it wasn't the peas, it was berries. Entirely possible. Sorry. That's okay. So one opinion we haven't voiced out loud, but I think we share here, is that the horse is out of the barn. Any motion to suppress research for six months or do a big pause or whatever else, not really. And I posted a link to the internal Google memo that came out that said, oops, we have no moat, and OpenAI has no moat either. The open source stuff is going to race past us. That's the whole point: open source is doing a really good job. Vicuna is moving by leaps and bounds on its own. And what was the Stanford piece, where they basically took the model and stood up something that was GPT-like in 48 hours? So, you know, you're not going to get this genie back in the bottle. But that's where the regulation question comes in. I mean, who's thinking beyond just stopping it? And regulation is always after the fact. It traditionally takes a long time before regulation actually happens, because it's a social process. And a lot of the people who do it don't understand what you're regulating, and blah, blah, blah. Yeah.
I mean, look, here's the thing: it's only been trained on what's available and can be scraped on the worldwide web, which represents less than 10% of human knowledge. All right? The other goodies that it needs to know are behind paywalls and firewalls and are in skips. So it's not smart enough to do all the things that we need done, because it hasn't been trained on it yet. Do we know that it hasn't made its way to the Internet Archive or Google Books or other places that have gotten behind some of those things, like the DRM that protects books? Because I have a funny feeling that the large library of books is actually in the training set, but I don't know. I'm sure it is. So what would that add? 2%? Yeah, but it's an important percent. I'm not disagreeing with you. I'm just saying that... Yeah. There's another thought here real quick. Indigenous ways of knowing are important. We've talked about them a couple of times on our calls, and many of them are not representable in the kinds of things that we're talking about. Many indigenous ways of knowing are tied to the land and the language kind of inextricably. So when people get pushed off their land, it destroys some of the knowledge, and when their languages are lost, that craters them in many ways. But there's even a representational question about whether that kind of knowledge is amenable to being represented in these kinds of models. Well, two things to think about in response to both of you. One is, what happens when they do start to try to represent indigenous knowledge with the system? Is there a potential way of interpreting indigenous ways of knowledge into an LLM-readable format? The other, and this is more in response to Kevin, is that the problem right now is the limited amount of quality material for large language models.
From what I was reading, most of what's used for LLMs is not just the majority of stuff on the web. It's a fairly limited amount of material from high-quality edited journals and newspapers, places where the writing is decent and the knowledge is more or less trustable. And one of the concerns that a number of them and other groups have is: will they have to start expanding out to lower-quality stuff just to keep feeding the beast? The responsibility of what to put in the hopper. Wow. Yeah, exactly. But you do have, if you go into the GPT Playground, which has a lot more controls available to you, the ability to select: do I want DaVinci 3, or do I want something further down the list that is better at coding? So you can select which knowledge base you want to work with, which one you want to train on. And you have to have some literacy about generative AI to be able to make those choices. Well, what's in DaVinci 3? Do I actually have an accessible taxonomy that can tell me what that was trained on? Where's the pop-up bubble that shows me, oh, that's what's in this thing? Yeah, here's a thought, to go back to this mimicry I've been musing about and reading about. I'll try to keep this simple, or clear. The business of mimicry: children learn language by mimicry, and we're learning that they learn a lot of things by mimicry, and it turns out that doesn't stop. Adults learn things by mimicry, and we also do things by mimicry. The cliches are things like sitting around a conference table, and if it's a fairly cohesive group and somebody starts to lean on their elbows, then soon everybody's leaning on their elbows, or crossing their legs, or any of these kinds of things.
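The Playground choice described above, picking a model by what it's good at, boils down to selecting a model identifier before sending a prompt. A minimal sketch of what the missing "taxonomy pop-up" might look like; the catalog contents and the `pick_model` helper are invented for illustration (the model names echo the legacy OpenAI naming), not an official taxonomy:

```python
# A toy "which model do I want?" chooser. The descriptions stand in
# for the taxonomy the speakers wish existed: something that tells
# you roughly what each model is for before you pick it.
MODEL_CATALOG = {
    "text-davinci-003": {"good_for": {"general", "writing"}},
    "code-davinci-002": {"good_for": {"coding"}},
}

def pick_model(task):
    # Return the first catalog model whose declared strengths cover the task.
    for name, info in MODEL_CATALOG.items():
        if task in info["good_for"]:
            return name
    raise ValueError(f"no model in the catalog handles {task!r}")

print(pick_model("coding"))   # code-davinci-002
print(pick_model("writing"))  # text-davinci-003
```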
And I'm thinking, if you think about, say, work, I'm a big fan of understanding work, how work happens. And the only way we've ever been able to make real progress in understanding work is by some very, very tedious anthropological data collection. But then there's interpretation. And I think interpretation is something we need to teach, or get kids to learn. What is interpretive work? You now find people talking about interpretive work, and it becomes very interesting to see what that kind of work is. It's largely hard to see; it's like analysis. When I was working with lots of anthropologists, I loved their insights, I loved what they were coming up with. The question was, what does it mean, for anybody in particular? And who's going to decide? All of those big questions. So interpretive work: you find that term now, you can start to find things around it. But we don't spend a lot of time thinking about whether we're expecting these AIs to do that. They don't at the moment. They're not doing analysis. And they're not, yeah. There's an interesting question coming up in the tools-for-thinking space. Can you hear me okay? Yeah. Yeah. Good. There's an interesting question coming up in the tools-for-thinking space: well, maybe now that we have chat, we don't need to take notes anymore, because everything's just going to go into the search bin, and we'll be able to ask the system, hey, tell me what this was, what happened, whatever. And I'm like, oh my god, that sounds so dangerous. And a piece of what I'm trying to figure out is how we continue our activities and reach out to blend and mesh with these new intelligences in productive ways.
But not just, oh good, I don't need to anymore. It's a little bit like my normal narrative before GPT, which was: we outsourced our memories to Google and Wikipedia, and that was a mistake. I was just working on a soft-skills project, and the people putting it together were talking about the role of note-taking, right. And I said, look, in fifth grade I had a very forward-thinking teacher who made us put down on index cards one word or short phrase that represented the story or the lesson we had just heard. She'd collect them from us with our names on them, and a month later she would hand them back and say, tell me what that story was, right. And so the only prompt that we had, we didn't have notes, was the word or the phrase. And it was kind of like, okay, neurons, go pull that back out. That has turned out to be a miraculous gift for me: being able to take notes that are relatively cryptic but pull back entire narratives, right, or ideas. And it was training. It was like, go ahead, compress it, compress it, compress it. Okay, now, how do you trigger retrieving it? And, you know, that and speed reading, which by the way I had to control later, because speed reading is antithetical to proofreading, right. So I have to turn that off and slow down. But great gifts, right. And I wish, as we're moving through this, we'd remember that we still have to have our index cards. We still have to write down and commit to memory what we think is important, because all the prompt engineering and all the prompt crafting is based on knowing enough to know what to ask, right. Yeah, well, if we don't even know how to spell it, then we have no frame to ask a better question.
And I think that the people who are going to win in this are the ones augmented by these systems, by the vast knowledge that they have, but who themselves have a framework of knowledge, right, that allows them to ask a better question, versus the other human beings. Well, I'd love to push, I mean, just to take the conversation back three steps again, but this notion, Kevin, of who's going to win is the part that worries me a little bit. Is it a winner-take-all game? And back to, I didn't mean to put it in winner land. No, I know, no, no, but I was just taking the question back to that governance issue around the AIs. What do we do? I mean, I feel like in some sense the technology has created a new set of problems around governance of large-scale infrastructure, and we don't have an answer to that yet. And it seems to me the notion that it's going to be government, our traditional government, doing this, maybe, maybe not. They don't seem to be all that good at managing this kind of infrastructure. Do we really want to replicate the highway departments in all the states? I don't think so. We need a different modality. And I'm kind of curious, the only one I can think of is the Linux Foundation. And I would love to know if anybody knows how the Linux Foundation does decision making. By far the largest archive of tech assets, I think, in the world, and I saw some stuff about their growth rate recently, and it's tremendous. They're managing 5,600 projects. They're getting a couple million lines of code a day, you know, and that stack underpins so much of what we do. And I don't know, they're spending $100 million themselves. I don't know how much is being contributed, probably 10 times that, 100 times that.
So anyway, there are governance structures for that chunk of assets, but I don't know how they work. I know somebody who might know. I guess I feel like, when we're looking for models of how to govern AI, we should look at the Linux Foundation. I mean, OpenAI had some weird structure around your investment and the return on your investment and things like that too, right? But I don't know how that works either. We're experimenting in the space, but I don't know what we've learned. I'd love to know what we've learned. This is my line, okay, for this stuff: it's when these systems start to collaborate with the AI that's already installed 100 meters away from the New York Stock Exchange, right? And they secretly buy up the utilities and start to tell me how much electricity I can have every day, because the AIs want more of my electricity than the human beings get. That's when I know we have a problem: my electricity is being redirected. Isn't that probably happening? Say more. We already know it's going toward crypto mining. Yeah, well, crypto mining aside, I mean, there are issues around crypto mining, right? There are places that have put restrictions on how much power people can use to do that, and people seeking ways to get more. But I think in terms of municipalities, the people who are supposed to be governing electricity, it's like water, only, well, water is worse, but it's like that. It's a finite resource that we're going to just need more and more and more of. I think right now we're in about a 2% range of total global electricity being used for data centers. 2%? Yeah. It doesn't sound like much, but I think what's going to happen is, the algorithms that generate chat technologies are fairly rudimentary. You know, there's not a lot of differentiation.
There's not going to be some sort of magic sauce. It's about the data that it learns from. So I think a huge propagation of walled gardens is about to happen, and a lot of shutting down of open indexing is about to happen, so that my secret sauce is my proprietary data and my proprietary insights. And that's, you know, my chat tool is better than their chat tool because blank. So I see a huge propagation of that happening, and then a bunch of experimental data sets: well, let's go train on this data set and see if there's any there there. So imagine I'm running 250 training exercises with massive quantities of data just to see which one of these things is going to produce a winner. So I see huge growth in data centers and data center use and cloud storage, as well as potentially on-prem, because you don't trust the cloud not to see what you're doing. A corollary to that is that as those proprietary ones spin up, you'll see the edge cases of wanting to create the new keiretsus and cAIbals: collectives starting to form that share data, versus, you know, their competitors' keiretsus and cAIbals. Yeah, I like that. All of those intervention points. That's right. That's the right one. I was just doing a pun with the AI. cAIbals. As you were talking, it just reminded me, my wife works at the UC Berkeley biosciences library. And one thing that the entire University of California library system is doing is digitizing everything. Every single work that they have in stock, old or new, is being digitized and made available within the university library system to students and researchers. And it just struck me that UC is surely not the only place doing that, and that perhaps one of the walled-garden models that emerges is from universities.
This is the Oxford large language model that's based entirely on all the material that Oxford University has collected in its libraries, or this is the University of California system's. We've done that for the North Carolina community college system, which is a whole bunch of practicum-like knowledge, right. You know, the University of North Carolina at Chapel Hill does not know how to repair large air conditioning systems, right. But the community college system sure does. So they have a differentiated knowledge base, and we have 54 of those campuses in North Carolina, right. So it's a big knowledge base, very interesting. It's more certificate-type knowledge as opposed to degree-type knowledge. So imagine that as a GPT, as a large language model. 100%. Yeah, I mean, it was already ingested into a machine learning system, so that instead of needing to meta-tag it going in, the system read it and auto-meta-tagged it as it read it, okay, creating a hyper-dimensional fingerprint for every object in the system. This next bit is not a question, it's an observation, I guess. Thinking back to the Xerox PARC work on tech reps and knowledge management, and the observation that there's quite a social dynamic in approving or not approving what goes into those kinds of things. They went out into the field to see how people were solving problems, and it's all interesting reading, and it's all very important. But one thing that happened was that they decided to make a knowledge system for all the technical knowledge, et cetera. And so they got a really good expert, an expert practitioner, to classify the value of the information that was being recorded. And the thing is that within the tech rep community, well, first of all, they didn't use it. They were supposed to, and they didn't.
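The "auto-meta-tag and fingerprint every object" idea mentioned a moment earlier can be sketched with a feature-hashing trick: hash each token of a document into a fixed-width vector, so every object gets a comparable fingerprint without hand tagging. A minimal standard-library sketch; the dimension, the whitespace tokenizer, and the `fingerprint`/`cosine` helpers are illustrative choices, not the actual system described:

```python
import hashlib
import math

DIM = 256  # fingerprint width; real systems use far higher dimensions

def fingerprint(text):
    # Hash each lowercase token into one of DIM buckets and count hits,
    # producing a fixed-width vector "fingerprint" of the document.
    vec = [0.0] * DIM
    for token in text.lower().split():
        h = int(hashlib.sha256(token.encode()).hexdigest(), 16)
        vec[h % DIM] += 1.0
    return vec

def cosine(a, b):
    # Cosine similarity between two fingerprints (1.0 = identical direction).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

docs = {
    "hvac": "repair large air conditioning systems and compressors",
    "welding": "certificate course in arc welding and metal fabrication",
}
query = fingerprint("how do I repair an air conditioning compressor")
best = max(docs, key=lambda k: cosine(query, fingerprint(docs[k])))
print(best)
```

The same shape, with learned embeddings in place of hashed counts, is how a practicum knowledge base like the one described could be made searchable without anyone meta-tagging it by hand.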
And when you looked at it more deeply, it turned out that in the groups of tech reps that work together, this whole community-of-practice thing, right, when they put somebody the reps trusted in place to approve what was going into the data store, then usage went up, I mean, by an enormous percent. And the things I worry about are that we've abandoned any sense of how, in fact, you know, this isn't knowledge we're creating. We're creating information, and it goes, I'll send you a diagram at some point, it kind of goes through this process inside these different groups. It has to be reinvented all the time; the wheel has to be re-understood. It's like all the things that, as you get older, you realize, oh my god, we knew that 20 years ago, 30 years ago. How come these people don't know it? It's because there is this interpretive step, in which understanding what was said, and how, and why, and the context, has changed; it has to be redone in that new context. And we're leaving all that out when we think we're going to have quality data. I was asking a question earlier about data quality: who's in charge of that? And what constitutes, you know, we've learned so much. I mean, if you go back to, oh, come on, Jerry, help me, the French guy, the French philosopher. Derrida. No, no, not Derrida. That's the one that came to mind. Jerry Lewis. No. That'll do. He actually is a French philosopher, if you think about it. Yes, he is. I mean, he was treated as one, kind of, by the French. Sorry, you're looking for Deleuze and Guattari? You're looking for, just forget it, it'll come to me soon enough. But he wrote a book called, you know, Laboratory Life. That was his thesis. You must have it in your brain. I do. You're talking about Bruno Latour. Absolutely. Okay.
So Bruno Latour wrote a book called Laboratory Life. That was his thesis, and that was the beginning of his whole evolution to where he is now, if he's still alive, I guess. No, he died at 75 years old in 2022. Correct. Okay. Yeah. So in that book, which I had the opportunity to dig deeply into with a guy who was a pharmaceutical researcher in Switzerland, it was a pharmaceutical lab, we read that book together and went through all of it, until the person who was funding it got brain cancer. So anyway, we went through it. And what was interesting, somewhere I have notes, was to see how they kept reinventing the same thing, like aspirin, over and over again. But in this case, it was a really intricate insight about how some molecules can interact, or don't interact, in the same way. And that insight was held by people who weren't in power. Eventually it did win out, but the people in power sat on a mindset that was just wrong, or not useful, in the case that they were looking at, and that happens so much. And nobody's watching that. And I sort of think, you know, insight, intuition, all of these things, we can't let go of those. Just abdicate. So the Gen Z's and the Gen Alphas that are growing up with this technology. My oldest daughter is an adjunct professor teaching creative writing at the University of North Florida. And people are submitting, obviously, essays that have been influenced or updated or modified or wholly created by this kind of technology. Their sensitivity as to good enough, their sensitivity as to, this feels wrong, it feels disingenuous, it doesn't have a real voice. The subtleties and nuances of historic writing, great writing, versus, I got it done, it was good enough, it looks fine. Yeah, I spackled the hole in the wall, but I haven't painted the wall because, you know, you don't really notice it.
That's the thing I'm fascinated about as I see this, right? I've got two kids that are millennials, a daughter who's 14, who's theoretically Gen Z, and then two that are 13 who are theoretically Alphas. And for the younger ones it's: if it comes from a trusted source, whatever it says is instructional and good enough to take action on. There are, like, no hard-won life experiences that have built a wise model of what's actionable and what's not, what I should believe and what I should use, what represents me and my thinking and my thought processes. I think this is an equally fascinating topic, because this thing's only going to propagate. So now take my 13-year-olds and fast-forward them seven years, and they're in the middle of college, graduating. How do they see the world? How do they interact with the world? Do they even care if this was human-generated thought content or machine? It helped them get their assignment done. Got it done. It sounds like you're saying that of all your kids, none of them have created a mental mechanism for vetting what's real and what's not, for stress-testing their thinking. I think the millennials have, you know, I think my 31- and 34-year-olds have. And did they, by the time they were 13 or 14? No, no. So how does this differ? Well, you're right. You're right. I mean, back in their day, they were hiding their MySpace websites from me because I was terrified of online predators, right? And, like, please, beloved God, don't post a photo, it's there forever. And now every photo my new teenagers post is the rudest, most horrific, face-modified morphing, and they just think it's hysterical, and it triggers giggles and laughter.
So, you know, my fears as to what's the right thing to do or not are dissipating. But, you know, you read some of the old classic novels and you get a real sense of the author and how they see the world and how they put things together and what their profound perspectives are. The thought leaders of the future, is that even a thing? Is there even such a thing as thought leading? I have no idea. Yeah, but it's not. We don't have that. I mean, what was being described as going to the authorities and all the rest of that stuff, we've now learned that going to the most respected authority might not be the best thing to do. Right. Or may not be the person or institution that had in the past been considered the authority. It's a much broader population of authorities now. Well, and it's, you know, when I've had this argument before with others, somebody helped me, turned a lightbulb on for me: the authority, whatever that means, that magical word and what it means to me and my generation, that authority now is, look, you're navigating to the store using Waze or Google Maps, and it tells you to turn left now. That's all the authority you need. Authority has been boiled down to that moment in time. It's highly transactional, and it's in context to need. Yes. Whereas in the past, there would be social dynamics behind that decision, right? Yeah. There's also this perennial battle over the controlling narrative for whatever culture you're in or whatever situation you're in. And I think, Brad, you're ruing the fact that reason and logic might be losing the battle to be that authority, and historically, the winners are the best storytellers. And many of the stories that we adopt are illogical, violent, and crazy-ass, if you ask me. And we seem to think that's pretty normal, and our cultures absolutely normalize and praise those things and cause us to swear our allegiance to those crazy stories.
In fact, small side note, a couple of years ago, I realized that acts of faith, and I'm thinking here about the catechism in the Catholic Church, in the Catholic faith, or whatever else, acts of faith are very intentional oaths sworn to believe something that is clearly unbelievable and bears no sense of logic, that force you to deny fact, science, and truth as part of joining a tribe and being a member of a tribe you don't want to be ostracized from, because chances are the rest of your people are all members of that tribe, because that's how social circles run. And so acts of faith are, in a dark way of seeing them, intentional ways of loosening people from science, facts, and logic. And it's like, oh, shit, I got it. Somebody died and was resurrected three days later, transmogrified water into wine. Like, those things are all ways of separating us from facts. There was a great interpretation of that Loaves and Fishes story by a woman priest, a woman bishop. And she said, how could this happen? How could it happen to be true that there was a grand sharing of food? How might that have happened? And maybe the simple act, maybe it wasn't that the five loaves and two fishes, or whatever it was, fed 5,000 people. It was that the 5,000 people fed each other. They shared what they had among each other. It was a stone soup incident; it was just misreported. Yes. Well, stone soup. Yeah, I mean, that's a parable, but to go back to, you know, some of our other favorite stories or things that were miracles: they don't have to be miracles to be miraculous. Miraculous, yes. Thank you. Lovely. I'm glad we solved all these things. Well, I was trying to stick in, this echoed a conversation from this weekend when I was in my sangha and we were discussing Buddhism. And I think Buddhism is so fascinating, right?
Because you have several thousand years of people trying to convince other people that you shouldn't need to convince other people, kind of. So it's a very funny kind of thing. But one of the, you know, very smart people in the group was talking about how he was so excited about modern physics and its relationship to Buddhism. And I was struck by this. Like, it felt to me like he was saying there must be kind of a truth between them, because they're tied, they're related to each other. And I was like, really? They're just two metaphors that happen to happen at the same time. And I just find it interesting. I don't know, maybe there's actually a truth underlying them, or maybe it just happens that these ideas become popular simultaneously. And then the fact that they reinforce each other probably makes them more influential. And I don't know what it would have been in the time of Christ or whenever, but, you know, 900 years later or something, when Christianity became really popular, what ideas were floating around in the ether at that moment such that, you know, Christianity works, and some other set of ideas around, I don't know, eating blood or something like that, and everything says, yeah, yeah, that's the thing. And you remember The Tao of Physics, right? Which is 1975, Fritjof Capra, who was saying these things as well. And then I watched, if you guys haven't seen it, Mindwalk, which is 20 or 30 years old, a movie; Sam Waterston's in it, and Liv Ullmann. And Fritjof Capra was one of the writers on the script. It's trying to make physics popular. I guess it's like a, you know, TikTok in an age where you could do it for two hours. But it's fascinating. I think we have reached a gentle conclusion for the day. You think we've hit peak generative AI? Cool. That's right, Mindwalk. Nice. 1990, which is based on the book The Turning Point by Fritjof Capra, right.
Thanks everybody. That was really fun and useful and interesting and scary, all those things at once. Well, at once, yeah, it was. Thank you. But let's be careful out there. We'll see you next month. If you do have somebody, if you have someone who could explain Linux Foundation governance, I would really love to meet them. I could figure it out on my own, but I don't want to learn it too late. So Dave, consider pinging Brian Behlendorf. I've talked with Brian a little bit about this stuff, and I found it a little hard to pull it out of him. He's got lots of insights, but I mean, you know, somebody must understand the business model of Linux, and I don't know. Well, there's a woman named Libby Bishop who's maybe retired by now. Everybody I know is retired. She did her thesis on Linux, and she was an economist. So let me just reach out. That would be great. Yeah, maybe I can find her pieces. I'll look around. Libby Bishop. Yes, Elizabeth Bishop. She's currently, I think, in England, although she's American. Thanks so much. Let me know if you find her. I'll report back. All right. Thanks.