If we pause AI development as a society... well, I look forward to seeing what you'd say about this. I have a blog post on this if you'd rather read it than hear it, but here I'll riff off that idea. So apparently I'm not an AI, because I'm still trying to figure out what I want to say.

OK, I am self-admittedly an AI optimist, so you should know that up front. I have always been optimistic about the potential of technology to help humanity. I see it as a tool, just like money is a tool, just like electricity or fire is a tool. Fire can be used to burn down houses, or it can be used to fuel engines, cook food, and everything else we do with it. AI is the same way.

First, let me say briefly what instigated this: there's an open letter going around, signed by Elon Musk and a bunch of other famous people, saying we should pause AI development for six months. And it sounds so reasonable. Yes, it lists the dangers of AI, and of course I know about them too. Now, I'm not responsible for them, because I'm a citizen with no control over these companies and these systems. I'm simply a user, and I would say a positive user. And I'm teaching others to use it positively, which, by the way, trains the AI systems to go in one direction or another.

It's kind of like social media. I've been saying this for years: good people, please do not leave social media. Because if you leave, who's left? Everyone else. The masses are still there. And when I say good people, yes, I'm praising all of you watching my videos: you are the wisest, smartest, most loving people on the planet. If you leave social media, the masses are left, and left without your voices. Same thing with AI: if we good people do not use AI, who's going to use it?
You think it's going to stop? OK, so let's talk about this. Should we pause AI development? Let's think about it for a moment. If we put up that rule, that policy, "let's all hold hands and pause AI development," who's going to pause? Sure, a couple of the good companies will pause. The good guys will pause, because they're law-abiding and they want good PR. You know who's not going to pause? China. Russia. Now, I'm not saying China and Russia are all bad, but there are elements in certain less democratic countries, and would you agree that China and Russia are less democratic? Certain less democratic countries are working full time on AI. They're working really hard to catch up to the US companies. And do you think they're going to use it in benevolent ways, politically speaking, with misinformation? No, of course they're not going to stop developing AI. That's crazy. Well, not crazy, but highly, highly unlikely. Even if they made a PR move and said, "yes, we'll sign these XYZ Accords" or whatever, do you think they're not secretly doing it anyway? Of course they are. And even the companies in the US: you think just because we all sign and hold hands... no.

So here's the open letter that's going around right now. Do you know who's signing it? It's the competitors of the two leading companies. Interesting. How come nobody from the two leading companies signed the letter, and yet the competitors are all signing it? Because the competitors are saying, "wait, wait, wait, we can't catch up to what you guys are doing." Oh yes, and the societal dangers. Everyone knew about those. The leading companies know about the societal dangers and are trying to address them, particularly OpenAI. If you haven't yet, watch the interview between the CEO of OpenAI, Sam Altman, and Lex Fridman.
That interview just happened, in late March of 2023. Hello, future people; some of you are like, "wait, that's old school." But yes, go and watch that interview. Having watched interviews with Sam Altman, the CEO of OpenAI, and having watched how they do things, I have developed a sense of trust in them. And you might not, because you're reading The New York Times and listening to the news. The news makes money scaring you, including The New York Times. You don't think that's the case? Of course any news outlet makes money scaring you. And not just scaring you, but angering you. Anger and fear are the two greatest ways for news outlets to stay in business.

And you know who's really threatened by AI? The news outlets, because AI can generate news articles way faster than any journalist. Of course, the journalists who are good will modify them and put in their own voice to make them more human and authentic. But who are the most threatened professions? It's not the plumbers. It's not the people doing blue-collar work; they're not threatened by AI right now. You know who is? The financial sector is very threatened: the bankers and analysts. Their jobs are at risk, because who can analyze numbers and create financial reports way more efficiently? AI can. Who else is threatened? Like I said, the legal industry is very threatened, because who can analyze laws and create legal briefs? AI can do it way faster than humans. Who else? Let me tell you (sorry, as a human being, I still have to look at my notes). The media, as I've said, is very threatened by AI. And many techies, because AI can generate code, and it's getting better and better at generating good code really fast. The techies who spent years and so much money getting their tech degrees, their coding degrees, may find them obsolete pretty soon.
They're going to have to do things that they, as techies, don't like to do: interact with people, do project management, all the stuff coders don't love. But if AI is going to write the code, what are we left to do? We're overseeing the AI as it does it. So the financial sector, the legal sector, the media, and techies: these are the most threatened positions. And also artists, sorry. And copywriters, writers like myself, independent teachers like myself: we're all threatened by AI.

And so, not surprisingly, these are the sectors yelling the loudest about "we've got to stop this AI stuff, we've got to stop using it." Artists are saying, "we've got to stop using AI, we're going to pledge not to." OK, you're going to do that? All right. So you're not going to keep up your skills of using AI to create even better art than non-artists can create with AI? Of course, as artists with great taste, you're going to create better AI art than I can. I don't have that good a taste in art, but I can create beautiful art now thanks to AI. It looks good to me; you, as an artist, can see it's not that great. But you could use it.

So are you kidding me about wanting to pause AI development, which, like I say, is not going to be enforceable? No, really? Do you really believe that good and bad companies, good and bad actors, are all going to hold hands and say, "yeah, that's right, let's stop"? No. These people are cynically trying to stop the two leading players, who are the only two non-signatories. Everyone else is effectively saying, "can you guys please stop so we can catch up?" They're not saying that outright; they're pulling on our heartstrings to do this. But no: Elon Musk is mad at OpenAI, the company he co-founded and left, for not being able to control it.
Man, he has a feud with them right now, and he's like the top signatory. So do you really see this as a good move for society? Like they're all hippies, all good people? No, they're not. What's behind this call for a pause is very commercial and political and selfish, in my opinion. And not only selfish: they themselves know it's not going to happen, and they're just signing it so that, in case something goes wrong, they can say, "see, I told you so." It's so incredible.

So let me tell you: what do we do instead? We can't pause AI development, because no matter how many regulators come and say, "stop, everybody, stop," people are still going to do it in secret. And worse yet, it creates a national security problem for the United States. When China and Russia and other countries that are less democratic and more anti-US develop it, they're going to have much smarter AI systems than us, and then the US, North America basically, is in trouble. So no, we should not stop development of AI, because it's unrealistic to stop it.

What should we do instead? We should regulate it, yes, but we should develop ethical guidelines that are widely propagated, so that every company that wants to come out with an AI product has to say, "yes, this is what we're following." And guess what? OpenAI is already doing that, and doing a really good job of it. Have you ever tried using ChatGPT? Have you noticed? You try to ask it to do this or that, and they have very tight guardrails. You can't say this, you can't say that. The locking down of speech is already quite strong with ChatGPT and OpenAI. And like I said, please watch the interview with the CEO of OpenAI. After hearing him talk for an hour, you'll get the feeling of, "OK, I see what this guy's trying to do here."
Okay, so what should we do if we can't pause or stop AI development? Here's what you and I can do, because nobody watching this is the leader of an AI company. If you are, I'm so honored. Thank you, Sam, for watching this. No one watching this is in any position of authority to stop or pause this stuff, right? If you are, please comment below; I'm really impressed that you're here. But what we little people are doing is deciding. You and I have two choices: we either use the AI or we don't. What other choice do we have? We either use it or we don't, okay?

And I've noticed that the people who are criticizing AI don't use AI. Well, of course: if you have a negative mindset around AI, I understand why you won't want to use it, or you only use it enough to criticize it. You don't really learn how to use it the way I've learned how to use it. So the critics and the pessimists are not using AI, and therefore they can continue damaging its reputation. I understand: you're focused on the dangers for society, for misinformation, for careers, for the environment, whatever. Okay, I understand. Actually, the irony is that AI might actually help save the environment, but that's a separate topic.

And then the AI optimists, like me? Obviously we lean in, we learn this stuff, and we find so much benefit from using it. We're like, "oh my God, you're crazy not to use it. This stuff is so helpful. How can you not use this?" So the two sides have to come together and say, let's talk, let's hear each other out, and let's develop this together. Because the genie's not going back into the bottle, the toothpaste is not going back into the tube, the cat's out of the bag, whatever analogy you want to use. There are lots of these analogies going around.
It's not going to go backwards. You can protest all you want. It's too convenient, just like social media was, just like the internet was, just like electricity was, just like fire was. It's too damn convenient for it to stop. Society is just going to keep going faster and faster with this stuff. So you have a choice: use it or don't use it.

And I think you are truly in danger if you... no, sorry, I don't mean to strike fear. You'll notice that on my channel I don't strike fear, right? But this time is different. This is the first time I've been so awake about a technology. I've usually been a laggard. I've usually been like, "ah, Bitcoin, NFTs, whatever." You saw me; I was critiquing that stuff, and I never invested in NFTs, Bitcoin, crypto. I set it aside. Some of you are crypto enthusiasts; sorry about that. But I just saw that it wasn't going to end well for most people, and it didn't. And with most other technologies, even new social media features, I'm like, "whatever, I'll just wait until everyone tests this stuff out." If it keeps being used on a daily basis by my clients, sure, then I'll learn it and teach it really well. This is the first time I've said: this time is different. This is very different.

Now, let me say this again. I don't mean to strike fear, but I do mean to be really serious with you about my concern for you. Either you damage your career with every day you don't learn this stuff, or you accelerate the value you can add to society and to your clients by learning it well. You don't have to learn it from me. Plenty of YouTube videos are free to watch about how to use ChatGPT and how to use Midjourney.
Those are the two I recommend learning first, and then there are tons of other tools; you start finding all the other best-in-class tools. You either endanger your career by not learning, or... "oh, but people always need human connection." Okay, I'll say this. If you're a massage therapist or do other body-based work, where you literally work with the body, and your business is doing great, okay. But even that's in question. As people get more and more mesmerized by AI... you think social media was mesmerizing? You think social media was addicting? You haven't seen anything yet. Wait until the sex bots come in. No, really, you haven't seen anything yet. We're going to be able to create movies instantly. I want to see The Matrix 5. And by next year, this is my prediction, you're going to be able to create Matrix 5 by typing it or by speaking it, and Keanu Reeves and Carrie-Anne Moss will be acting, and you'll barely be able to tell it's not them. And it'll create a plot for you instantly, because a plot can already be created instantly from text, and now they're doing text-to-video very, very rapidly. Within 12 months, deepfakes are going to be so easy. It's going to be so mesmerizing that a lot fewer people will go to massage therapists and body-based businesses too, which is the sad part.

But then again, you have a choice. You either learn to use this stuff to enhance the values you want to promote in the world, the values of ecology, of human connection, of authentic art and beauty, of spirituality, of spiritual growth being primary above productivity and above money-making, whatever your values are, or you don't. Just like in the beginning of social media, you have a choice of using it or not using it.
If you use it, you get to promote your values in the world and help shape it. By not using it, you are retreating and letting others shape it. You might think you're just retreating to your own little circle, but that circle is shrinking, because everyone's going to be mesmerized by AI. So either you use it to promote your heart and your values, and stay human, or you don't use it and you fall behind in your career and in your ability to share your values, to have more of the world embody the values you believe in. What's your choice? Blue pill, red pill? I don't know if that's the best analogy these days, sorry.

I choose to use this stuff for good, because it's not going back in the bottle. "Oh, but you're afraid of misinformation." Yes, I know, I understand. But what's the antidote to misinformation? Systems that counter misinformation. What's the antidote to deepfaked politicians, or people deepfaking you and scamming people you know? Are you going to wait until that happens? Because it's going to happen; bad actors are going to do it. They're not going to stop AI development. The antidote is good actors, probably using AI systems themselves, countering the bad actors. It's like email: in the beginning of email, there was already spam, spam and scams. My own family was scammed out of tens of thousands of dollars, because in the early days of email my father didn't know better and fell into some scams. But now, of course, he knows very well, and systems to counter spam are now quite common. So it's not going back into the bottle, guys. It really isn't. You can fantasize that it will, but it won't. Look at reality, okay? Just look at it, just observe. So we have a choice: you choose to use it for good, or you choose to hide, and it'll come for you before long. I'm not saying that to fearmonger; I'm saying it because it's true.
So I hope you will make... well, I'll let you make the choice. All right, thank you so much for watching. I hope this was interesting at the very least, and I look forward to seeing any comments you want to add below. Thanks.