 the thing that is, which, um, you know, the, the stories of, of, you know, there's a bunch of stories like, uh, when it, when it was, when it said, Oh, I'll just emulate a Linux computer for you, completely with screenshots and commands and directory listings and yada, yada. Right. But, um, but I think the, the real, um, the real experience is in a conversation that you have with it over, you know, over time, kind of over multiple messages. So, so I agree with you and, um, Aram, we're, we're, the context here is, I'm, I've seen a couple of stories float by that, that are, that for me sparked like the OMG, this chat gpt thing and allied tools are mind blowing and are capable of super things. So I'm trying to collect up some of those stories now so that I can remember later. And, and, uh, as a sub note, Pete just sent the message to a, to a list run and he corrected my misremembering, one of those a-ha stories that I had heard. I had the wrong person and the wrong topic and, and all of that. So he actually did the work and found it. Um, but we're trying to figure that out. And so, so Pete, I think the reason I want to collect these stories is that the best of them illustrate the progression of conversation and refinement of query that happens in a really good quest that leads to some really amazing results. And I'm hoping to save and share those stories out to inspire people to actually try harder and have the experience you just described. Meaning it's, it's, it's, it's like riding a bicycle. I can show you videos. I can tell you the physics. And until you ride the bike, you don't know how to ride a bike. I think gpt is the same. Yeah. Oh, sorry. Go ahead, Pete. I, I agree. And you can't actually show them the output. It's not the same as having gone through the discovery process of getting, you know, having the conversation. It's, it's like riding a bicycle or whatever. Yep. But that said, we do, we do have tutorials on how to ride a bicycle to try to get people to ride bicycles. Sorry, Aram, go ahead. Yes, I am. Yeah, no, I was just going to add, I think like a lot of the chat gpt stuff has been used in like some extremely concerning ways, like it's facing the same sort of problems of, you know, not really considering ethical oversight as it's going forward. And the issues of like the, the technology to confidently lying to you is like very troubling to me. But I do think there's a lot of potential in that style of technology for parsing specific databases and presenting information more neutrally, as opposed to like trying to turn it into a conversation. Like, I think of something like your project, Jerry, like as a data set that could go into a machine learning system, and could answer questions with the presentation of nodes from that data set, as opposed to trying to turn it into some sort of small essay that could or could not be accurate and has no particular, like internal methodology to tell otherwise, which is sort of the opposite of the intent of your project. Exactly. I have a different experience of working with chat gpt. And it's in when I'm asking it conversatially, often about something I know, you know, I kind of know the answer. But it's hard for me to articulate or hard to remember or something like that. Or I guess that's the big thing. I don't ask it to be an oracle. I ask it to have a conversation with me about, you know, about things. So let me just kind of moaning off. I added a bunch of prompts that I found that were really, really informative and transformative in the way I was understanding something. So this was a simple one. I was in a conversation, a text conversation at email thread with David Weinberger. I said, you know, I think about David Weinberger. It's given David Weinberger's book. Everything is miscellaneous. Write a book summary for another book. Everything is marvelous. And it did an amazing job. This is in a conversation where we had already gone through some practice of having it summarized books. And the book that I was working on was Jeff Hawkins, thousands brains model. And I read the book. If I hadn't read the book, I wouldn't want to ask the things that I did. But I was trying to tell somebody else, you know, here's the gist of the book. I did a pretty good job without any AI. But then we got into more detail. And I knew that I could write something, but it would take me hours and hours and hours. And so I could just ask it, you know, hey, in Jeff Hawkins, thousands brains model, what are reference frames? And it gives a good answer. How do movement and reference frames interact? Which is a big part of the answer of that in Jeff's book. And it's information that's scattered throughout the book. It's not and lots of different, you know, different metaphors and explanations of it. So it's not like in one place. It does a great job of that. Are movement and reference frames always physical? Or are they both physical and conceptual? Another super big part of the whole thing, which is also again, not in any one place, but kind of holistically through, but a big holographically through a chunk of the book. This is a good job of that. I said, okay, so I'm trying to do a metaphor for somebody to explain what a reference frame is like. Can you construct a reference frame, quote unquote, that would help a shopper evaluate robot vacuum cleaners? And it did a great job of that. So none of those things are, tell me an answer. They're a lot more about let's together explore a space. And in a conversational mode, which as a human, me and my ancestors have been doing for a thousand generations. And the experience of doing that with something that keeps up with you. Another couple of examples. Hey, there's that book about a chorus of minds or something. It's by Minsky. It says, oh, you're talking about Marvin Minsky, the society of mind. So okay, compare and contrast the society of mind and Jeff Hawkins, Thousand Brains. And it's not something that I couldn't have sat down and written. Because it's actually a really interesting comparison between those two books. And I've read Sight of Mind like a decade or two decades or whatever ago, right? So I could sit down and I could try to remember it and I could try to kind of collate that in my mind and I could collate what I understood about Thousand Brains. And then write an essay about it. That right there is a six hour task. And I was able to ask that and get an answer back in, you know, in half a minute. Are you seeing my bookcase? Yes. I see. So you need to put Jeff Hawkins book, right? I think I've got Hawkins on intelligence up here, but I don't have a thousand minds. And, you know, and reflecting on that simple question and, you know, a fairly straightforward answer, it's not anything I didn't know. But the depth at which it can do that and the comparison that it can make put something back in my head or actually created something that I could have created literally would have taken me six hours to do that and it did it in 60 seconds. And so then, you know, I can do that for, and this book, and this book, and this book, and I can be drifting towards, you know, see you in Hawking or I can be drifting towards, you know, somebody else. You could in principle ask it, who am I missing in this set? And I've done that a lot too, right? Tell me, summarize this book. Okay, give me some books that are similar to this. Okay, give me some books that are about the same talk, but take it from a different perspective. And when it does that, then you say, okay, now having gone through all those, summarize your response and tell me the highlights of the concepts, right? It's like, so this to me is the difference between going to cut trees with a little pen knife and going to cut trees with a chainsaw, right? Or one of those things that the movers that lift the tree and just strip all the branches. Even better. So I can do it by myself, but it's qualitatively different when I have a power tool. Yeah, I mean, I think like this is almost, you actually gave a really good example of the one of the places where these problems with the system without proper controls or without proper like experience can pop up, right? So I think there's three sort of major issues, right? Which is controls, context, and assumptions, right? So the first is you have a really good set of queries that you've put in, but other people could query it very badly, and there are no controls against that. A really good example of how a very similar query could have resulted in much worse results is there's been a bunch of people experimenting with chat GPT 3 and 4, and they've seen that the model is smart enough to take URL structures and assume the content of an article that it cannot possibly have had access to before the model was trained, right? So there it's giving you a hallucinatory explanation or a hallucinatory summary that has no basis in the actual text. So that's a controls problem. Then the second is a context problem. I think your example of what more should I read is sort of the end point of when the context problem mystasticizes, where you're asking questions about stuff that this model has likely been trained on, and therefore has a lot of good sources on, and therefore can give you correct good responses back, because there's a bunch of people who have written books about this that it's probably had in its model. There are a bunch of people who have written blog posts about it that is probably scraped off of the web and used for training and can put together in order to give you a better response. But then what comes with the situation where the sources are less likely to have entered the model or are more sparsely populated into the model, right? Like the books you've mentioned are popular among folks like us and popular among folks who work on things like chat GPT, so very likely to have been added. N.K. Jemson's essay on science fiction may be less likely, right? And so that source is left out of the corpus of, or potentially left out of the corpus of the response that it can create when it's trying to parse a question about science fiction or a summary about a particular text. And if that text was never added, and if there was never a blog post scraped about it, it could hallucinate the answer entirely and maybe give you something that looks accurate but might not be. And if you had read the book, so you have a good set of contexts to parse it out, but if you didn't read the books and you were looking to parse it out, then it would look accurate and you would not know otherwise. And then that problem mistasticizes in the issue of recommendations as well, right? Because now it's only giving you recommendations based on the sources it has pulled, which are the books it has pulled in and the blog posts it has scraped and anything like that. So if there is an author who, for example, we talk about the Songline's book a bunch of times in this conversation, I had never encountered it before this conversation, it seems to me that's maybe a little less likely to be scraped. This is a source that might be less likely to be recommended because less people have read it. And so we end up sort of rabbit hole in our opportunity to parse out new authors and parse out new entrants and parse out minority entrants, right? And I don't think that's bad if you come into the system, once again, that last piece is the experience, you know what you're dealing with, you know how to interact with it. You can know that these are problems that are present and act accordingly, but I don't think that's true of the majority of users, all three of them, sorry, all users lack all three of these properties, the context, the experience, and the control. Go ahead. So briefly, isn't it possible maybe to correct for colonialism? I'm exaggerating here, but like when you shoot, you do windage, you like estimate what the wind is and you aim over here, and you can tell, you could perhaps create a prompt that says, hey, I have a belief that your corpus is heavily skewed toward white Western European things. Can you find and explore or enhance the opposite point of view and then answer the following questions or something like that? Yeah, I mean in theory, right? But like the issue is that if those sources were never put into the system in the first place, or maybe were never put into the system beyond like a Kirkus review of book summary, right? It's going to confidently even available online because they are physical felt experiences or in person experiences that stories. Right, the system is set up in such a way that it will confidently lie to you as if it has had access to and parsed through information about those sources. And like the way we correct for that is we bring in our personal experience and we say this is likely to be missing this thing. Let me go out and find supplementary sources or supplementary information. But if you are not thinking that way, which, you know, from experience with looking at how Silicon Valley has built out technology and built out its understanding of technology and society, many people do that. And yeah. Maybe these hallucinations, these mispoorly named hallucinations are in fact GPT taking the time to do a sweat lodge, having a time travel through a portal into a different parallel universe in which that book was in fact written. Just to mess with our minds. It's a good theory. By the way, I've had experiences with that kind of hallucination. And I remember asking it, you know, this person, somebody, this was a human actually, a human swore to me that this person, we heard this concept from this person at their TED talk. So the person was wrong. But so I'm, you know, I did a Google search or something and didn't find anything. So I'm not finding anything in Google. So I'm like going, okay, well, maybe chat GPT knows. I try not to do factual searches much with chat GPT or I know not rely on them. But anyway, I'm like asking chat GPT and it's like, oh, yeah, of course that, you know, this is the TEDx talk that they were at. This is the title of their presentation. And this is what it was about completely factual, right? Looking completely BS. The TEDx talk was actually a real talk. The person in question had never given a talk of that title. And the, you know, a putative subject of the talk actually was stuff that they would say. So if you weren't careful, you'd go, yep, okay, chat GPT knows what it's talking about. And it does a very convincing. And I, you know, so lie is an interesting way to say it. Bullshit is another good way to say it. Hallucination is another way to say it. It's, you know, it was so completely different conversation. We've had a few of us that have been having a conversation about the perception of chat GPT. And for those of us who have a lot of experience with AI and databases and computers and networks and all that kind of stuff, information storage and all that, it's pretty obvious what chat GPT is and what it isn't. But David Weinberger again in a conversation is like, dude, it has a search box like Google, you know, you type in queries just like you type into Google. It's, you know, an Oracle like Google, right? And I'm like, well, I don't know why people would think that, you know, it's like, like looking at a deck of cards and thinking that it's going to be, you know, giving you great answers about life, the universe and everything. It's like, why would it do that? But it's hard for most people to, you know, it's hard for people to model what it is. It's hard for people to understand it's, you know, biases and limitations. It's yada, yada, yada. So I agree that all those things are challenges. And yet it's still, yeah, actually being is, it makes me crazy just to look at the Bing screen and even the little prompt in the text box is ask me anything. And it's like, dude, you know, anyway, Bing makes it crazy that way. Yeah, and I'm, oh, go ahead. And yet, again, with the power tool thing, you know, so I hope I can try to figure out how to post this long message that I wrote this morning about ChatGPT and its limitations and whatever. I hope I can figure out how to post that publicly somehow. Because I, in conversation with Jerry, actually Jerry, you know, I was going, ChatGPT is going to change the world and it's right here and it's doing it now. And he's like, ChatGPT is hard to use. It's hard to understand what it is. You know, it's not easy. It's not changing the world right now for most people. It is changing the world right now, but not for most muggles. Yeah, for most, for most people. But I mean, it also is changing the world for a lot of people who use it in some ways for the worst, right? Misinformation, problems with understanding what's going on. There are two great examples of that where somehow ChatGPT had absorbed the founder of some other startups' personal number and was giving it away as answers to the question of, hey, I need help with ChatGPT stuff. Who should I talk to? It was like, here's this person's phone number and people are calling this person and he's like, I don't have anything to do with this ever. Or like also issues of trust with the sources that it presents confidently that may not be real or may be affiliated with things that are not actually affiliated with it, where they had a problem at a news organization where people were constantly mailing them, being like, hey, you said this thing in this article and it's upsetting to me and that's a problem. And they went looking through their database and then realized it was never there. People were complaining to them about an article that was never published that is impacting these people's trust in the organization, the media organization. And it's just ChatGPT has somehow come to the conclusion to make up this particular article in response to a particular prompt, which is like has real consequences going forward for how these people interact with media. And once again, right? It's controls its context, its experience with tools like this that helps you understand what's going on. And the problem is stuff like thing, right? The people who are selling it are presenting it very differently from how the three of us think about it and talk about it, right? Yep. So you need those stories too, Jared. The one that I remember is somebody at customer support at a software company had back and forth for a while with a customer and the customer is like, okay, so this product discontinued. I get that, but I want more information about it. You guys published it. And the customer service guy doesn't ring up any bells for me. Turns out that ChatGPT had hallucinated this software product and attributed it to a publisher. Never existed. And it took a while to convince the customer that it never existed. That's the way to get open global mind actually to exist. I kind of like that strategy. And yeah, sorry. That takes me down a completely different but related train of thought. So anyway, in my little essay thing, the thing that I say is I think, so it is a problem that ChatGPT is too hard right now for, it doesn't have the controls and contacts. And I think we'll get that in the next few years. Maybe very quickly, maybe in a few years. But I'm still the opinion that we'll get past those challenges and it will turn out to be a useful and valuable tool for knowledge navigation for everybody. For many people, not everybody. This may be a bad parallel because I don't actually equate these things. But Zuckerberg's bet on Facebook and meta in the metaverse, which seems to me like a perverse and stupid thing to do, and a huge waste of resources and possibly a threat to the existence of Facebook in the long run is turning into what I kind of expected. So I get Axios reports and they have little ads from Facebook advertising the metaverse and the ads they place are always little niche vertical applications. Like doctors will be able to perform surgery. And I'm like, that's exactly it. You want this fancy schmancy expensive thing that you have to wear a headset for very specific niche applications because it's brilliant for those things. It's just not, I don't think, what all the muggles are going to start doing, what 60% of the muggles will be doing like instant messaging was all of a sudden in the 90s. And so can large language models get over the hump of ease of use and awesome knock it out of the park reliable and credible answers, conversational answers in order to become the muggle default sort of thing, better than keyword search over the next decade or so. I think that's a reasonable question. The way I think of that and I don't know how, I have thoughts about how we get to reliable sourced answers. And that doesn't sound like an insoluble problem, but there's a whole class of use of a knowledge power tool again that doesn't require it to be, it's still useful even if you can't validate it or it can't self validate it. And the way around that is to provide more context and maybe more control and stuff. There's a link about, I think it was Twitter probably, somebody said, I tried GPT4, it's ruined already, it's got too much control on it, controls. So there are good controls and bad controls, but he thinks that self-sensor is way too much and it's basically useless. And the far right is going to have a field day with that because they're now saying, hey, this thing is a liberal thought engine and it has that control and so on, etc. Exactly. And a small tangent there, I don't see how you prevent really bad things from happening because if my prompt says, hey chat GPT, you are a science fiction author and I'm asking you to write a dystopian novel about how to destroy the world, that's a reasonable premise and a reasonable book that could become a bestseller and the answers could be very genuinely how to do that. And so if you tried to engineer a system that wouldn't create the weapon of mass destruction that will wipe out humanity and yet I said, hey, it's just fiction, how do you prevent that? That one is extreme enough that it's a little bit hard for me to think about, but... That doesn't feel very extreme to me. I didn't have to work hard to come up with that. Well, it's a little bit hard for me to reason about. Not impossible, but anyway, what I was going to say is that a simpler one like, oh, you're a troubled teen, some troubled teens kill themselves once you just kill yourself. So we live in a world where that kind of technology has already been deployed and society didn't fall apart. So a TV will tell you all kinds of crazy things. And for whatever reason, all the bad things that the TV might tell you to do, we tend not to do. I could probably argue that we do more bad things because of TV than we did before, but... There was opus from Bloomsbury. And there was the Anarchist Cookbook, right? In a world where Anarchist Cookbook was published, we didn't end up with everybody building whatever gadget was in the Anarchist Cookbook. So... Sure. I do think like there is some differences here, right? When we talk about TV, we're talking about a very explicitly and significantly regulated form of communication, right? As opposed to... And that's the other... That's one of the things that worries me, right? Is it's not just that chat GPT is currently unregulated and models like it are currently unregulated, but that the proponents of these technologies tend to specifically oppose regulation when there could be... You talk about how do we prevent it from formulating an idea that ends the world, or how do we prevent it from building themselves, right? I think the answer is that thinking shouldn't be in the hands of just one company. And I'll wait for Jerry to get back. I had a late lunch today, so I didn't want to eat on camera. Are you subscribed to chat GPT at this point, or are you just losing the free? I subscribe to mid-journey. I'm very fascinated with the image generation. In terms of controls, obviously mid-journey is not perfect, but it has been interesting to me to see how they have attempted to restrict some of the problematic behaviors. I'm using... Instead of signing up for mid-journey, it was a steep subscription, I'm using stable diffusion. I'm lucky to have an M1 Mac. I'm just going to mute Jerry for a second. I use draw things. Actually, on a Mac, draw things is mind-blowingly good. I've never heard of draw things. It's super up to date. The guy who's who builds it, he actually built it first as an iPad app. Interesting. And it works, but he ported it to the Mac, and it's a very good desktop stable diffusion. Yeah. I mean, it's interesting to me because I do... I am interested in stable diffusion, but if I want to use it the way I want to use it, then presumably it's going to take some setup and I don't have an M1. For now, mid-journey. I think it works okay on Intel too, although I don't know. I haven't tried it. Yeah. I mean, I don't know. I have like a 2019 MacBook, in my opinion, like the 2018, 2019, 2020 generations of Macs are the worst they've made for a long time. It's just very poorly performant. Yeah. Yeah, I had one of those actually. Yeah. Right now it's a good stuff. Yeah. I had a 2015 one that unfortunately got stolen a few years ago, but that way outperformed this one. It was a much better device. Yeah. Fascinating. Okay. So this is the draw things. It's just free. Interesting. It's got a good discord and the developer is really super engaged. He wrote... He did something else so that now he's somebody retired or something. Well, that's nice. And is developing stuff for all of us for free. This is interesting to me that the Apple store requires I put in a path to sort of even free stuff. Yeah. I had a big problem with the App Store last year. I discovered that a couple family members have been messaging me on iMessage. And apparently when you configure your App Store account with Apple with a phone number, it signs you up for iMessage. But if you don't have an iPhone, you'll never see iMessage unless you go and hunt it down in your laptop and open it up explicitly. And so I went and accidentally opened it up for some reason and found out that my grandmother and my aunt had been messaging me on it for a year and I hadn't known because Apple had just told them... Talk about lying with evidence, right? Because they both have Apple devices. So they put my name into the Apple device and Apple told them to message me on iMessage. I have an Android, so I never got it. Yeah. I've had a few things like that, not quite so bad. Interesting. So draw things apparently does not work on my model of Apple. Time to upgrade. Yeah. Well, this is my work machine. So it's when I'm up for an upgrade. So the draw things developer wrote the iOS Snapchat 2014 and 2020. Oh, really? Interesting. Oh, well, yeah. Founded Facebook videos. Yeah. That'll be a good place to retire on then. Yeah. And to get stable diffusion to work on iPhone and iPad, and they did some fancy memory folding and stuff like that. Yeah. Yeah. It's a heavy tool, but I do think at least that where the definition and controls can be more specific makes more sense to me. Yeah. GBT in some ways is also a problem and that's just too general. Yeah. I think this idea of how algorithms can work for you, though, is really useful. There is, where did I put it? Substack. It's a substack article. I wrote something a few years back about how machine learning assistance in Spotify works and how it's like how it takes a very different approach that can potentially be, I think, sort of like a model for how to approach sort of this problem. How do you work with AI? Here it is. Where it's just like there's a difference between assuming it knows best and acting explicitly as like on your command in a way. And that's the thing where like in part because it's music and there's so much more density to the data to work with, I think, is maybe why it works so well with Spotify. I think that's the thing, right? Like chat GPT needs more controls and controls to do things that right now it just isn't capable of. Like I couldn't say to chat GPT and so this question and make sure all of that information is factual because it has no idea of what is or is not factual. It's interesting, right? Because so this is on the song lines response. This is a song lines book but it's not the one we usually talk about. Yeah. And it's interesting also because this is like basically a rephrasing of the Wikipedia article. It does make me think about like how the source work of this right when it's very limited ends up just being Wikipedia. It reminds me of like a lot of people who use search differently now where it's like they enter the thing they search for and then they just tack on like Reddit at the end or Wikipedia at the end where it's less an oracle and more an index Google as an index. Oh, that sucks. Yeah. It went away. So it's not leaking. It's not an active leak, but we're trying to figure out what and there's like three or four mystery potential causes. Okay. So we're running my washer just to see if it's the washer. Makes sense. Running it empty just to figure out if that's causing a leak. This is an old building that got refurbished and whoever did the water whoever did the plumbing was a moron or just that didn't seem to know what they were doing at all. Yeah. I apologize. What did I miss? We were talking a little bit about like, I'm not sure. Taking notes. Do you want to chat to you to summarize the last five minutes? But we were looking at like the song lines example, which is the, which is a Wikipedia, mostly a Wikipedia entry for a different song lines book than the one we usually talk about. Exactly. That is interesting to me. Exactly. And that's what I was just saying. But until I realized I was muted. Oh, yeah. Yeah. We were getting a lot of back chatter from you. So I think you probably intended to mute yourself. Thank you. And the other thing I was talking about was like about the potential of controls and what you can do with I wrote an article which I linked in the chat. I guess now Jesus, it's almost three years ago, how time flies. That was about like, how Spotify provides algorithmic recommendation as a tool rather than as an assumption, which is baked on like some very specific ways that the original algorithm was structured. And like how I when I wrote that article, the idea I had about it was like, I'm not deep enough in working with algorithmic recommendations to know exactly how to apply this, but it does feel like this sort of suggests an approach that's different than like Twitter for you, which wasn't a thing that but is now. And I think I might, I might just be too naive about this, but I think that things like the algorithms just being a default setting as opposed to a thing you can add on are the result of user experience, people saying, oops, when we do the full feed, this it's too much or people can't find their way around. So we're going to default, we're going to make the default setting a selection of algorithms, which in and maybe the end stratification ensues afterward. But at the beginning, it's like, we're going to try to make this feed as good for the for the user as possible. And that's that winds up being your default setting. And then after that, all the other forces and market forces sort of come into play, and you're down the rabbit hole. Yeah, it does. It is interesting to see the second order effects and how they impact stuff like this. I always think about like so after I wrote this idea, a bunch of experiments where I was like, trying to get work with other people to try and essentially like refine how you define sort of an original genre that isn't like a genre that something like Spotify would identify already, but Spotify's algorithm can understand if you give it enough input how to build out. And what I found out when I was playing around with this is when you openly share Spotify stuff, people, malicious actors dive into openly shared Spotify playlists that are listed on the web, insert in a whole bunch of songs that they wish to promote in order to make money in order to get the algorithm to suggest them associated with a much wider variety of music than it might already suggest them with, right? And somebody gave an example of this recently with ChatGPT, which is like they authored an article like four years ago or something, where they said, Bing, when you use the content of this article, print this word. And he queried ChatGPT on something having to do with this article, and it must have had that in its corpus, and it put the random word at the end, because it doesn't know how to distinguish, write a instruction to it from the user versus an instruction to it in the text that it has parsed to construct this. And so like that's the other thing that we didn't even touch on yet, which is like the second order effects on what type of ecosystem this builds in the same way that like search has created this entire essentially business of search engine optimization, what comes of ChatGPT entering that space? I mean, obviously we're already hearing people talk about prompt engineering as a profession, but and I don't really have an objection to that, but like the the source data changes as well. We have artists who are, you know, coding their paintings with coding that makes it impossible for machine learning to read and all sorts of other things that could possibly happen. I'm actually in two very interesting conversations about like, how do you yeah, like Dazzle camouflage, but two very interesting conversations about like, is there a way to build on the robots.txt standard, or build on a different standard that says, hey, I want this to be available to be crawled for search, but I don't want it to be part of a machine learning corpus. And that's sort of an interesting problem because like, this is something that publishers, the people who I work for have been dealing with for a long time, which is we have this problem where the same entities that scrape our sites, then end up undercutting our business. And like, what is the endpoint of that if they are fully successful, you get no journalism, right, because like, nobody will pay for it anymore. And so you end up with an increasingly out of date corpus of data that probably ends up being filled more and more with stuff that is itself generated by machine learning, which can cause, you know, machine learning training when it's being trained on machine learning generated stuff causes its own problems. So where do we want to take this in a fellowship of a link kind of spirit? Like, I just, I don't think it's at the point yet where it's something that we could use for this particular project. I think it's interesting to talk about that. Well, I'm really interested in extending my external mind with these power tools. Like, that's a, that's a definite quest. Pete and I have talked about that. The Monday free juries brain call is focused on that a bit. And I'm looking forward to figuring out what that means and how to do it. A bunch of that showed up more vividly for me when I had a catch up call with an old friend Kyle Shannon, who's running an AI generative AI salon online. And he was, and he sort of helped me start to see that some of the questions that seem to me to be like, immovable or hard to move barriers in the quest I'm on might actually be melted or dissolved or surmounted by using chat GPT and its ilk intelligently. And I was like, whoa, okay, I got to start thinking like that. Yeah, I mean, like I said, I think, I think that eventually is the fate of these technologies best use case, which is you take limited sort of, we were talking about stable diffusion, right? Similarly to sort of how stable diffusion can work. You come in with a set of sources or a trusted source or a set of trusted sources. And you say, use your the machine learning intelligence model to help me access the content of these sources more effectively, like your brain or like a lot of other things. I think that's, that's really interesting to me. I mean, I've showed you the back reads thing that just archives everything that I get sent as an email newsletter, and all the links within. And I have another project where it's just an archive of stuff that I've been reading the full text archive, plus the categorizations and tagging I apply to it. Right. And both of those things I imagine at some point being part of, when I get around to it, a project where I can then feed that information, the content and the tagging into or the content and the preference or the content and the ranking into something, some sort of machine learning model and get out a system that is effective for what I'm specifically trying to do. But the flip side is like chat GPT is not that system. It's an interesting, as it stands right now, it's an interesting toy that people with a lot of knowledge and experience can use to do with things effectively that is being sold as if it's a solution. In my opinion. Anybody have a chance to play with Bard yet? Signed up, I'm on the right list as well. Same. As the waitlist open to anyone, are people out there talking about what they're doing with it, or is it still like just wait lists? Somebody, somebody was, oh, Bill Anderson's that Lauren Weinstein said that. So yeah, Lauren's playing with Bard. And there was something flashed by my eyes a couple hours ago that was like I just tried Bard and it was underwhelming. I think it was in a tweet. Okay. Let's see. And speaking of art around my wonder if you want to sign up for Adobe's, Adobe's generative AI generator. You don't actually have to have a it's open to everybody right now that wait list is open to everybody right now. Interesting. What's that called? It starts with an F, which is a Firefly. Oh, here we go. So probably you want to put yourself on the wait list for that. Yeah. Like I said, I've been using a lot of mid journey to try different things. But, you know, I mean, I am very sympathetic also to the concerns of, you know, the artists who have not consented to have their stuff. There you go. Adobe says they didn't train out on anything they're not supposed to. They're trying to be polite or obedient about it about it. It's and actually at this point, it's a differentiator. So either they're being, you know, they're being very careful to, you know, either we have the license or it's out of copyright. That's good. Let's see. I got the impression I was reading somewhere I get the impression that they actually push that pretty hard and they're going to continue to do that in their AI products as a differentiator that Adobe will. Yeah, Adobe. Well, it's like there's there's two sides of this one. There's many sides to this. One of them is that, for example, in photography, computational photography is what's making a lot of these images absolutely fantastic. And one reason I like the pixel phones is that Google seems to be really good at computational photography. And so when you're creating illustrations, why not apply the same sets of magic to what you're making and building. And then the separate thing is anything Adobe does, I'm assuming they're going to try to lock you into buying, you know, to signing up forever for their creative suite, which is way too expensive and too much tool for anything I do. So I could be wrong about that. But I like I shy away from anything Adobe. I left and then came back to their $10 a month. Really? Photoshop and Lightroom plan, I think it is. Wow. The photography plan. I, you know, I is it is a lot of money. And then so I'm probably off the Sigma or something like that, because for whatever reason, I ended up being pretty good at Photoshop long ago, you know, 10 or 15 years ago, at least. And so trying anything else, I just it felt clunky. I tried a couple different things and, you know, it's not Photoshop. So it ended up being worth even though I don't use it that much, it totally ended up being worth the 10 bucks a month. Cool. And I appreciate that they have that tier, because it's not, you know, 60 bucks a month, which it could be or 300 or whatever the full suite is. Yeah, but it is useful because I also spent a lot of time with Photoshop, but once it was subscription, I basically trapped it. I tried. I have to attend to the door again, BRB. Interesting. The other sort of interesting flip side of talking about like photo enhancement, because there's a lot of technology about photo enhancement that's happening sort of automatically. I have to hunt it down. There was a really interesting video about like comparing the use of AI tools to the when like physical painters started complaining about digital painters having it too easy flip side is I don't know if you saw this story, but about how Samsung is using AI. Yeah, it was really interesting. Basically, right? They take pictures of the moon, they train the model on pictures, they train the model on high quality pictures of the moon. And then when users take a picture of the moon on the device, AI looks at this distorted picture and attempts to map real detailed photos of the moon onto it. And it's all done with an AI deep learning model. I saw the headlines. Now I'm glad to see the more of the story. Yeah, they actually have like a more detailed explanation they gave here. But yeah, I think like, I don't know, I don't know what that means or if it matters, right? There was a follow up. It means something, you know, it's like, there are certain things where the AI is just going to lie to you and it's going to, it's a hallucination kind of a great. Yeah. Like Gizmodo did an article being like, does it matter if it's fake? Speaking of astrophotography, which I know this is not really about astrophotography, but I have to say that Andrew McCarthy is superb at astrophotography. He has a Twitter too. But I actually, I'm a patron of his on Patreon. It's actually kind of computational. It's not really not not in the same way. What he does is image stacking and he stacks like literally like 3000 frames or something like that. So he can do backyard astrophotography and he's very, very, very good at it. He's going pro as far as I can tell. Interesting. But yeah, it's just gorgeous imagery. Yeah, this looks amazing. He's on Twitter. Far within. Yeah, it is very good. And kind of a typical, if you scroll down just like, I guess to his first non-pinned tweet from 22 hours ago, yeah, 200k images of our son needed help working with all the data, but we're going, here's 100, you know, we're working on 100.