Good morning, John. It's funny: I have also been worried about AI this week, which is actually not very weird at all, because we have actual intelligences, not artificial ones, and thus we are bound to be anchored to different interests and worries.

I wrote a couple of books about how, when new things arrive, as they inevitably do, we tend to retreat to our camps. That new thing might be a pandemic, or a large-scale social change, or a really powerful new technology, or an alien. It doesn't matter what it is: the more confused we are, the more we head to the place where we feel safest. And from the safety of that place, we think about the concerns of that place, and then we ask questions specifically about those concerns. The more powerful places tend to have their conversations become the dominant ones in the broader conversation. And eventually it all shakes out into two groups that have lots of internal divisions but are mostly butting heads with each other. You can watch these groups starting to define themselves and differentiate themselves right now, in real time. It will probably be five or ten years before we actually know what questions we're trying to get answers to and what the dominant perspectives on them will be. And I don't know what they're going to be. I don't even know what questions we need to ask yet. I certainly don't know the answers to those questions.

The idea of these models is that they don't copy things. They look at a lot of different things, whether that's text or pictures, they learn how those things are structured, and then they output things that have that kind of structure. But when they do that, sometimes their procedures make them copy (there's a toy sketch of that effect below). Like, I asked Midjourney to imagine an Afghan woman with green eyes, and this is what it gave me. Midjourney's model does not contain the famous "Afghan Girl" image from the 1985 National Geographic cover, but it has been trained on that image, and so it is plagiarizing it. Can you sue it for that? Is it okay for it to have trained on so many copyrighted works whose creators never agreed to that training? Should artists and rights holders be able to opt out of that training process? Is any of this a violation of existing laws, or is it a call for new legislation? Who will use this? What will it enable? Who will it hurt? I don't know! People are going to disagree about this stuff like crazy.

So here's my big concern, John, because of course you've got to have one big one. You're going to have a lot of little ones, but one big one. We have not yet gotten through the last massive revolutionary shift in human communication. We don't know what to do about last Tuesday yet. We are still actively figuring out how to be humans and societies with the current, tremendously disruptive tools that we were given ten or twenty years ago. We are just realizing that we live a lot of our lives in places that are not democracies, where we do not get a vote, where the leadership can change based on who has enough money to buy the thing that we live in. A lot of these platforms haven't really ever been through a recession. They've had so much money that they've been able to grow and grow, and now: are they at a ceiling, and what will they do to punch through that ceiling? What? Like, we don't know! All of that is very big, very disruptive. It's been difficult. And now it feels like we're about to have another tremendously weird, entirely new wrench thrown into the works of human communication.
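To make the "learns structure, but sometimes copies" point concrete, here's a minimal sketch. It is only an analogy: I'm assuming nothing about Midjourney's real internals, and the corpus, the transitions table, and the generate function here are all made up for illustration. Even the simplest generative model, a word-level Markov chain, will spit its training data back out verbatim wherever the data gives it only one way to continue.

```python
# A toy illustration, NOT Midjourney's actual architecture: a word-level
# bigram Markov chain that "learns structure" by recording which word
# follows which. When a phrase occurs in only one context in the training
# data, sampling from that learned structure reproduces the training data
# verbatim, i.e. the procedure makes it copy.
import random
from collections import defaultdict

corpus = (
    "the model learns structure from examples and "
    "the model sometimes copies rare examples verbatim"
)

# "Training": record which words follow which.
transitions = defaultdict(list)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    transitions[prev].append(nxt)

def generate(start, length, seed=0):
    """Sample a sequence from the learned transition table."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Prints "sometimes copies rare examples": a verbatim span of the
# training text, because no other continuation was ever observed.
print(generate("sometimes", 3))
```

The same failure mode scales up: when an image or a phrase is rare or distinctive in the training set, "outputting something with that structure" and "copying the original" can collapse into the same act, which is the Afghan Girl situation above.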
Now, a lot of the things I see people saying ChatGPT is good at, I don't think it is good at. Like, it's not good at replacing Wikipedia. It very confidently states things that are both untrue and true, and there's no way to know the difference without fact-checking, and I don't think it has a good way of figuring that out itself. I just don't think it's going to be good at that. But that's a really hard problem, and there are really easy problems that I think it'll be great at. Like, for example, getting people to dislike each other more. And there are lots of people, both external and internal, who want people to be more afraid of each other and to dislike each other more. Like, a lot of get-out-the-vote messaging is about getting people to be more afraid and angry. And getting people to be afraid and mad at each other? Very easy. A trivial problem for an AI like this to solve. All you have to do is emulate a human in its worst moments, and in our worst moments, we are not complex.

We have not figured out how to be a society inside of the current communications revolution, and while people and societies can and will change very fast, it's usually not fun. I, for one, wouldn't mind if things got a little less interesting for a little while. But we don't get to choose these things. John, I'll see you on Tuesday.