Welcome to DTNS Experiment Week. All this week, DTNS is on summer vacation, but in its place is Experiment Week, where our producers and contributors are trying out new show ideas and releasing them right here on the DTNS feed. Enjoy. Welcome to the inaugural episode of AI Named This Show. So here's the deal. Some people think that AI is going to save the world, and some people think it will end it. And that is where we come in. We are your hosts. I am Teja Kastodi. And I'm Tristan Jutra. And on AI Named This Show, we are decoding all the jargon and keeping you up to date in the fast-moving world of artificial intelligence. And speaking of some people thinking AI is going to save us or maybe kill us, we have aptly named our first episode, Tristan, will AI save the world or kill us all? Place your bets. We'll see what happens when we're 70 episodes in. So this was inspired by none other than the creator of one of the original web browsers, Mosaic, which later evolved into Netscape back in the early-to-mid 1990s: Marc Andreessen, who is now one of the principals at Andreessen Horowitz. And he posted an essay on the firm's blog at a16z.com, basically stating why AI will save the world. He was putting his stake in the ground and responding to what he called AI doomers, describing them as a bit of a cult, that there are a lot of people out there attracting attention to themselves by being chicken littles, basically. Now, some may accuse him of maybe being a little too Pollyanna-ish, that, you know what? Everything's great, AI is fine, it's going to save the world, as he describes here. So what we thought we would do for the inaugural episode of AI Named This Show is explore what Andreessen's thesis is here, and then explore at a very high level what some of the counterarguments are as well. 
And as time goes on and we get into more and more episodes of this, with all the news coming at us every week and various deep-dive issues, we'll see: was Andreessen right? Or were the doomers right? I suspect it might be a little bit of each. Dun, dun, dun. Okay, I wanna get into this article, which I think is really, really comprehensive, really well done, but I wanna start kind of near where he starts, where he talks a little bit about the panic that we see. And he really says that this is underscored by two things. People have a panic that AI is gonna take our jobs, and they also have a panic that in the wrong hands, it could have really detrimental effects. So his whole point here is that this is not the first technology we are seeing that sets off this kind of initial panic. To me, it's just so different than what a lot of us could have imagined. Like, I don't know when you were growing up if you could have pictured this type of machine essentially at our fingertips. I'm not sure if that's maybe also underscoring where a lot of this comes from. But something else he says is that another issue fueling this panic is perhaps the lack of regulations around AI. We've talked a lot about this before, Tristan, of how tech moves really fast, a lot faster than some governments can regulate. So I'm throwing to you now, do you agree or disagree? It's hard to blame people when you think about the steady diet of media that we've had over the last several decades, including but not limited to HAL 9000 in the movie 2001, basically an AI, the ship's computer, that goes rogue. And there have been countless stories about just this sort of thing happening: we create something and it gets out of our control. Now, in response to any kind of moral panic like this, Andreessen describes two camps of people, and he likens it to the days of the prohibition of alcohol in the early 20th century in the United States. 
And he describes these groups as the Baptists and the bootleggers. The Baptists were the folks who were well-intentioned, who were trying to preserve the moral integrity of society, because alcohol can be a problem for some people, and it can have detrimental effects on people, families, their jobs, and so forth. And you had temperance leagues fighting for the prohibition of alcohol, and they eventually won. Well, that's all well and good, but in the real world, people were still gonna drink. And the bootleggers are the ones that stepped up and provided the supply to meet the demand, even though the alcohol was technically prohibited. Andreessen says, no matter what those who are clutching their pearls about the dangers of AI might think, there are others who are going to keep pushing forward on AI anyhow, and it would be foolish not to do the same. You know, as responsibly as possible, of course, but to basically sit back, slow down, and wait for government regulation, which, if we had done that with the web, he contends, it would be a very different web today. The innovation would have been much slower. Some people argue that maybe it would be a bit safer. Then of course, no one in the early '90s predicted what social media would become and all the knock-on effects of that. So Andreessen is like, okay, this is gonna happen anyway. Do we wanna pump the brakes, as some people are arguing, or do we wanna go ahead as responsibly as possible, because others are going to do it anyhow? And this is really what leads him into the first risk that he outlines, which is the common question here on today's theme: will AI kill us all? He's really not holding anything back here, but he's essentially saying that throughout all of humanity, across probably a million different inventions and things, people have had this thought in the back of their mind, like a fear with new technology, you know what I mean? 
To me, it's more: think about it rationally, which I think is part of his article as well. And his viewpoint is very much so. We created the AI, so therefore we can theoretically set parameters that would control it. I'm saying at least to an extent; he's kind of making that a blanket statement. So he really breaks it down and says, you know, AI is code, it's math, it's created, owned, and built by people. So he is saying, at the end of the day, that's who controls it. So he's very much on the side of: we have the control here, not the AI. And he says, you know, people misconstrue this. It's really a computer program. And he's saying, yes, it is trained to learn, and it is always learning, but people are mixing that up with it actually being a living being like you or me. AI is not a living being. And that should have been really the name of this episode. Darn it. Well, there are people who have legitimate fears about AI causing death, societal ruin, mass unemployment, or inequality. He contends that such fears are fallacious, irrational, or exaggerated, and that AI can actually be a savior, not a destroyer, of the world. As alluded to already, there are others developing this anyway, and the greater risk is the West and its allies not pursuing it, and again, doing it in a responsible way without too much government regulation. We ultimately need to win the global AI race by accelerating AI development and deployment in the West. Now, there are people who argue against Marc Andreessen's very optimistic view of whether AI will kill us all or not, and he believes not, that it'll actually save us. Some people, such as Nick Bostrom, the author of Superintelligence, I believe was the book, and others have said, well, AI is not merely a tool. It's not just something that we create and use to make blog posts and pretty pictures; it could be an agent whose goals are misaligned with ours. It may have different values than ours. 
And they also contend that there's a risk of it developing superintelligence, beyond AGI, artificial general intelligence; superintelligence is another layer higher than that. When it becomes smarter than humans, then it might be able to outfox us and manipulate us and pursue its own objectives. And this whole idea is founded on the ortho... Orthogonal... You got it, you got it. I can't even say it: the orthogonality thesis. Say that five times quickly. In today's word of the day. Which basically means that intelligence and goals can be combined independently. So it's one thing that it's developing intelligence, but is it also developing goals? And will those things just kind of magically merge? Will its own values and goals emerge from it? Or is the AI just believing what we tell it to believe? Or is it going off and developing its own thoughts? But to that point, that's where Marc Andreessen has a real problem. I wanna pull a quote for a second to really sum up the counterpoint to the counterpoint, in which he says, quote, AI doesn't want, it doesn't have goals, it doesn't want to kill you, because it's not alive. AI is a machine, it's not gonna come alive any more than your toaster will. End quote. Yet. Which got me thinking, you know, that quote is perfect, because some of these experts, even the ones making counterpoints to his article, are using language like values and morals. And to me that means, well, that means you have cognitive dissonance. And that's a whole other issue: what are you saying? Because is it gonna get like that? You know, we'll touch on that in a second. But his point is no, we control it, and it's like a toaster, we could pretty much unplug it. But something else really interesting that he says in this risk argument, that got me thinking, was: can we really trust the naysayers? 
You know, it got me thinking of where a lot of this is coming from. Just like with anything, check your sources, right? So as an example, if you're the CEO of a big tech company, you may have a different goal in mind, and it might be under that common umbrella of, oh yeah, we want regulations for AI, but what's your actual goal? Because maybe you're trying to stop the competition, right? And it's not actually about public safety or regulating the AI. So it's just a really good thing to keep in mind when we're talking about any for-or-against, or just any type of discussion around AI: think of where this thought leadership is actually coming from, and follow the money trail, people. That's what I always say. And to be fair, a lot of the naysayers are simply academics. Some maybe have books to sell; some are, you know, bringing glory on themselves for being the cautious ones out there, because it's a great way to get clicks with headlines about ending the world and so on. And then there are, of course, those such as Sam Altman, the CEO of OpenAI, who has called for regulation. And that was after a whole bunch of other people, including Elon Musk, signed a letter saying, oh well, everyone needs to pump the brakes and pause the training of anything more powerful than GPT-4 for at least six months. Basically, the cynic might say that's so others could catch up, but they're claiming, oh, it's for safety reasons. You mentioned regulation, and that's one key thing: regulation tends to favor the incumbents, the larger players especially, who have the resources to go through all the red tape and meet all the regulation, and that tends to stifle innovation, especially from startups, especially startups that are bootstrapping, who don't have tons and tons of splashy cashy venture capital and angel investor money. But all I can say is, I for one welcome our AI toaster overlords. 
Same, and I want everybody to mark the date, whatever date you're listening to this. I'm in agreement on this point with Marc Andreessen, and I'm going to say that AI is not going to kill you or me or your friend or my friend. But it might burn your toast. If it starts printing messages on your toast, then we should be concerned. Sell your house and run. So risk two: will AI ruin our society? And Andreessen contends, well, no, it's not going to ruin society by creating harmful outputs like hate speech or misinformation, because we already have existing laws and regulations that can prevent or prosecute issues like that. And AI won't impose a narrow morality, but will empower diverse expression of preferences and values. So it seems a little similar to the positions of certain social network CEOs who are like, free speech is good. It's a pretty libertarian-minded idea. It used to be the purview of the left; now it seems to be more the purview of the right. But the libertarians have always been all about free speech, and there's a big libertarian streak that has run through Silicon Valley traditionally. And when it comes to social networks, folks such as Elon Musk have said, well, you know, we'll allow anything as long as it's legal in a given country. So anything that's illegal in a given country, we would ban off of Twitter, or the platform formerly known as Twitter, and we'll let the law deal with it. But there's a lot of stuff that is not particularly savory that isn't illegal either, and is that creating an environment to attract users? So by extension, is that conducive to creating AI agents, chatbots, and tools that one would want to use? 
We saw Microsoft a few years ago creating its chatbot called Tay, and this was even before Google invented the transformer model. Tay was sort of a slightly dumber chatbot, and it was trained on, I believe, interactions with Twitter users, and it didn't take very long before Tay became pretty hardcore racist, kind of reflecting certain elements of society, but it took the worst of the worst. And a lot of these chatbots, when they're pushed, sometimes show their dark side as well. But again, Andreessen's saying, no need to worry, we've got laws to take care of that. And in fact, he's kind of asking the question: will regulation just lead to more regulation? And how we regulate this could be the most important thing we've ever regulated, basically. So, in terms of his argument there, it falls a little flat for me, because any type of technology, even if we're talking about camera tech, is biased on the back end by the people who are coding it and what it's trained on. So you have to look at this the exact same way, theoretically. So when you're looking at a large language model, it's trained on everything, all the inputs it's getting, all the queries it's getting, but it's also pulling from, let's just say for the sake of argument, a set amount of information right now from the internet, which historically can be quite problematic. It contains multitudes. It contains facets; it's very faceted. But you know, I do have hope, in terms of how everything we're doing is also leading to training these models, that we can train them a bit better: you know, you can't say that, or that's actually based in this misogynistic thought, or whatever it might be. It's like training your children. 
We could train the model a little bit more, and if enough of us do that... That, to me, was one of his weaker points in his, you know, defense of all that is AI. But he does move into point three, which is one of the things you and I have talked about ad nauseam, which is what we hear all the time, and that is: will AI take our jobs? You know, people are always coming at me like, it's gonna take our jobs, and it's gonna do this, and it's gonna do that. His argument, I bet you know where this is going, is very much that it is just not the case. In fact, he says that this argument is flawed and can't hold up, because it's based on a flawed theory that there's a set amount of jobs in our economy at any given point in time and that there's no fluctuation in that. Me personally, I know we have talked about this a lot, I've already seen things like these large language models creating jobs. A tiny example is that there's an actual job now: being a prompter. So companies... A prompt engineer. Yes, I'm sorry. Yes, that's so much more advanced than a prompter. You get an extra 25K with "prompt engineer" on the title. Exactly, exactly. You know, where companies are hiring people that know how to prompt the language model properly to get the proper output from it. It's a real job. And, you know, his argument obviously goes a lot deeper than that: not only is it not going to take your job, but it's going to create more jobs, and then also, you know, it increases productivity, it decreases costs for whatever company, in manufacturing, therefore giving all of us huge purchasing power. He makes this giant argument. Are those even connected? Oh, this just in: prompt engineers are now out of work, because AI itself can create better prompts. Whoops. Whoops. So yeah, this fallacy that you refer to is called the lump of labor fallacy. And again, his contention is that virtually every new technology that has emerged has been additive. 
Some jobs have been lost, for sure. We don't have a lot of buggy drivers or buggy whip manufacturers anymore. And think of all the technologies that have been replaced. We don't have huge clerk-typist pools anymore. We don't have a room of people who have now been replaced by a single Excel spreadsheet. I think there are so many jobs that have just been done away with because of increased productivity thanks to technology. And that's not to say that there's not gonna be disruption. There's gonna be plenty of disruption in the short term, but he's saying in the medium to long term, it'll be additive; there'll be more prosperity. And if you look, I mean, history has kind of proven that out: as we learned to harness fire and invented the wheel and aqueducts and modern plumbing, et cetera, et cetera, things have gotten generally better. Quality of life, reduction of poverty, and so on. So we're going to have more high-quality jobs. That's the argument. Now, the counterargument that a lot of people are focusing on tends to come from certain types of economists and policy makers who point more to the short-term disruption, the upheaval and the consequent societal pain. It's no fun when people lose jobs. Look at various industries like coal mining in various parts of North America or in England, for example. As the world continues to decarbonize, in certain parts of the world there are going to be lost jobs. And you can't just say to a coal miner in Virginia, oh, you should learn to code. It doesn't work like that. I think it's inarguable that there will be some inequities that happen, because not everyone is a techie. Not everyone can be a coder. Not everyone can even be a prompt engineer. And part of the other issue is that the way this tech often works, it can be a winner-take-all type of scenario, if we look at things like search engines. 
I mean, until recently, Google was it for a couple of decades. And it's only now that we've got this disruptive technology with generative AI, and Microsoft jumping on that quickly, that people are starting to talk about Bing again. And otherwise, when it comes to social networks, the network effects kick in, and that favors fewer, larger platforms. And the little experiments tend to come and go; it's very rare that something will actually bubble up. Certain types of technology really favor consolidation. So we end up with digital monopolies or oligopolies. And a lot of these big players will sometimes buy up the little players. And you could argue that this kind of stifles innovation or competition. And then you have all these other externalities throughout society, and the current AI systems aren't really designed to take any of that into account. I mean, we could talk all day about inequality, if that's where we want to go. Which was his fourth risk: will AI lead to inequality? Which, I will say, it doesn't lead to it. I don't think the AI is causing the inequality. I think it's already, like you're saying, an issue in our society, but is it really the advancements in tech that are causing this? Inequality's been around as long as humans have been around. So can we blame it on the AI? Well, it seems that no matter what economic system you choose, there's some sort of inequality, even when said system espouses equality for all. Just read Animal Farm by George Orwell, and we can see how that ends. So the idea that Marc Andreessen is positing here is that, well, AI will spread prosperity to all by lowering prices and empowering consumers, refuting the Marxist view that technology owners will exploit workers, and emphasizing that technology producers aim to sell products to people at the lowest possible price. And we've seen that. 
If you think about how much computers cost back in the day: my first brand-new laptop, in 1998, was over $7,000 Canadian, probably around $4,500 US at the time, and you can get much more bang for the buck nowadays. TVs, flat-panel TVs: think of when plasma TVs were new, 42 inches for, again, $14,000 Canadian, let's say $10,000 US, and now they're giving them away when you sign up for fiber internet with your local provider. So I mean, he does have a point there: the development of technology tends to bring down prices. Now, I guess the unanswered question here is, okay, maybe it'll drive down the price of AI products, but will AI-enabled products become cheaper as well? Or, I guess, increased productivity in the workplace can help increase prosperity, increase profit margins, increase the wealth for shareholders, but how much of that will actually make it to everyday workers? We're seeing productivity gains by folks. You've got regular computer programmers, like the 1x computer programmer. There's this myth of the 10x computer programmer, people who are very, very efficient, like savants. And then you've got your garden-variety programmers, who we'll call, say, 1x. What is happening with some of these AI tools, like GitHub Copilot, is that your regular 1x programmer, your software developer, can get up to like 6x or 7x; they're increasing their productivity, which is great. I mean, it kind of fuels that whole pandemic phenomenon where you saw people taking multiple full-time jobs, because they were using tools to increase their productivity. We'll see how much of that, or how long that, will last. But increasing one's personal productivity is great. The question is, how long until everyone is expected to do that, and it's just reversion to the mean? 
It's like, well, if you're not using that, you're falling behind, and you're actually not as attractive, and maybe actually made redundant. I didn't think we'd get so heavy in this episode. So the moral of this part of the story is, it's in your best interest to learn some of these tools to get more productive, because eventually you're gonna be expected to anyhow. Absolutely. And speaking of morals, Tristan, I think the big question here, and how he kind of starts to wrap up his piece, is: will AI lead bad people to do bad things? Bad people, they exist. Right? How dare you? And yes, moving on. No, so yes, but his argument is kind of twofold here. It's that there are already laws in place to prevent certain things, which we've touched on a little bit, but he really wants to see this AI used as a preventative tool, whether we're talking about homeland security or cybersecurity or defense systems or whatever you might say. His big thing is: we need to do it first, and we need to do it the best, because if we're not doing it, guess what, other countries already are. And that's the scary part, because while we're over here, Tristan, talking about, oh my God, ChatGPT can write my essay, hee-wah! Like, that's a real cutesy way of looking at AI. And I can assure you there are a lot of other countries that aren't looking at it as just, hey, what a nice way to aid our workday. Or how about AI-powered drones using facial recognition to target people who look a certain way? Bingo. How about a certain government that wants authoritarian control? How about that? And maybe wants to keep people in line, and maybe keeps score. Really, his biggest point is, like, yeah, it's a risk, but we have to do it first, and we have to be the best at it, because there are countries like China that definitely have a different vision for their use of AI than what maybe North America's vision is. 
And that, to me, is not just the biggest risk, but the scariest thing to think about. And, you know, just because things are risky doesn't mean we shouldn't take that risk. We should, and especially in this case, when that technology already exists and countries like that are already developing theirs, we really do have to do it first and best, because we need some defense in place, and we need the checks and balances. And if it comes to it, we need to protect those that are stuck in the countries where these checks and balances aren't in place. It's a lot to unpack. So some of the naysayers would argue that, you know, AI can develop a misalignment with human values that can lead to harmful outcomes such as social manipulation, environmental degradation, and more, and they claim that current AI systems, which are designed to optimize specific objectives, often disregard broader human consequences or preferences. And there's that old nugget that I believe was trotted out by Nick Bostrom in his book Superintelligence, about how, if you created an AI to make paper clips and didn't put the right kind of parameters on it, and it had access to the right resources, it would keep doing its job until basically the entire world was turned into paper clips, and then eventually it would consume the solar system, if not the galaxy, and so on and so forth. So some of the naysayers talk about a human-compatible AI approach. Well, humans are complicated, and they're messy, and they have different sorts of beliefs about how countries should be run, different political systems, how much personal freedom and autonomy folks should have. So if you're going to have myriad types of AI being developed with different sets of human values imbued within, it's complicated. It's one of those things where, I think, you know, it's not an all-or-nothing approach. 
I don't think we just give the AI makers a blank check and say, go nuts. But again, there's the risk of over-regulation by government. I think, to be fair, most people want to do the right thing, and most people don't want to let bad people do bad things. Think about all the controls there are around the proliferation of nuclear weapons, for example. There are very few countries around the world that have access to those. Active measures have been taken to ensure that said weapons don't get into the wrong hands. Sometimes they do, but there would be a lot more if not for those controls. Similarly, there have been embargoes put on certain countries regarding the acquisition of various types of AI-optimized chips and processors, to help slow down their development, again to give the West the edge. Well, we could talk all day about, you know, the responsibility aspect when it comes to building this AI and using this AI, but something tells me we're going to talk a bit more about that, maybe next episode or a few episodes down the road, where we, spoiler alert, talk about how we can't put the genie back in the bottle. So how can we do this responsibly? Well, Tristan, we did it. We've come to the end of our first episode. Huge thank you to everybody, or our one listener perhaps, for tuning in to AI Named This Show. Listen, because we're a new show, we really, really want your feedback. So we want the good, the bad, the ugly. You can email feedback at dailytechnewshow.com or feedback at ainamedthisshow.com. And also, if you like this show and you just can't get enough, you can find AI Named This Show anywhere you get your podcasts. So be sure to give us a follow, leave us a review on Apple Podcasts. Obviously only five stars, thank you so much. You can also find us on all the socials; we're at AI Named This Show on Facebook, Instagram, YouTube, and, I am still calling it Twitter, you may be calling it X, but we're on there too. And that's it for us. 
Thank you so much for listening. And thank you to the Daily Tech News Show as well for giving us the opportunity. Bye.