Hi, welcome to the Reason Livestream. I'm Zach Weissmueller, and I'm joined today by two gentlemen who have invested a lot of time and energy, and in some cases money, into the topic of artificial intelligence, which is what we're talking about today. Specifically, whether or not it poses such grave danger that all work on large language models like ChatGPT should be paused until we figure out what the hell is going on.

Jaan Tallinn is a computer programmer and tech investor who was part of the team that developed the software for Skype, and old millennials like me also owe him a debt of gratitude for making college dorm file sharing that much easier, since before Skype he helped create Kazaa. If you know, you know. Tallinn is an investor in several artificial intelligence ventures, including an early investment in Google's DeepMind and later in the AI company Anthropic. Also very relevant to this conversation, he is a co-founder of the Future of Life Institute, which organized an open letter calling for an immediate pause on giant AI experiments, a letter signed by the likes of Apple co-founder Steve Wozniak, Elon Musk, and more than 27,000 other people. Jaan, thank you very much for joining us.

Thank you very much for having me. One important correction: I'm not a director of Anthropic, although I am a board observer. I deliberately did not want to be a director.

Okay, thank you for that clarification. A board observer.

And Robin Hanson is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute at Oxford. He's the author of books like The Elephant in the Brain and, particularly relevant to this conversation, The Age of Em, which lays out in great detail one possible AI-dominated future. He also writes regularly about AI at his popular blog Overcoming Bias, where he recently argued that most AI fear is future fear. Robin, thank you very much for joining us.

Great to be here. I'm looking forward to it.

Let's open with a discussion of the letter, which has generated a fair amount of conversation and media coverage, and possibly even led to a meeting with the heads of all the major AI companies. This letter definitely caught my attention. Going into this, I'll admit that my inclination is to be suspicious of doomsday predictions, and definitely suspicious of calls for preemptive regulation of emerging technology, which we know can cause serious unseen damage in the form of innovation that never comes to be. But the fact that a lot of high-profile people in the field have signed this letter, combined with some recent public statements from insiders at some of these companies, which we can play a little later, has set off some alarm bells, which is why I'm really grateful to have you both here to make your cases. So let's first look at the bolded demand in this letter, which says: "We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." Jaan, could you make the case for why this kind of pause is necessary?

Sure.
So to give you a bit of backstory: in the beginning of March I wrote a memo that I was going to distribute to people who are concerned about AI, as well as people who are working at AI labs. I claimed that, in my subjective assessment, every 10x-ing, every order of magnitude of compute that's being thrown at these black-box experiments, which are unsupervised and already have massive compute behind them, adds roughly, and my 90% confidence range here is 1% to 50%, existential risk to humanity. And I asked people to push back on that, to say, "you're crazy, there's no way, and here's why." Absent that, I said we should be asking the people in the labs what they think the probability of complete destruction of the planet is as a result of the experiments they're doing. I drew a parallel to report LA-602, the first existential risk report this planet produced, by Manhattan Project scientists who faced a similar situation: they weren't sure whether the planet could survive the first nuclear detonation, so they did the homework, they did the calculation. I thought it was time for the people working on these unparalleled systems to do similar homework, or at least give their subjective assessment. The other thing I said we should be asking for, and rereading it, I was advocating for this, is both voluntary and government-enforced moratoriums on such scaling experiments until there's a clear understanding of the existential risk they pose. At the same time, articles by Ezra Klein and Yuval Noah Harari came out, also expressing unease about what is happening with AI. So sometime that month, around the 21st of March, we had a Future of Life Institute call and started discussing the situation, and somebody proposed: wait a minute, we have experience with open letters, we have the back-end infrastructure, we have done letters with up to a thousand participants. We thought, okay, let's take the temperature of the situation and create common knowledge. So in some ways the goal of the letter was a meta goal. It was less about the pause, and much less about the six months; the letter said "at least six months." The goal was to create common knowledge, so that people have something to point to and say: look, we are concerned, and we see that other people are also concerned; let's do something about it.
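Tallinn's per-order-of-magnitude numbers compound across successive scale-ups. A minimal sketch of that arithmetic, plugging in the two ends of his stated 1%-50% range (the step counts are illustrative assumptions, not from the conversation):

```python
# Illustrative only: compounding Tallinn's per-step risk estimate.
# He gives a 90% confidence range of 1%-50% existential risk per
# 10x ("order of magnitude") increase in training compute.
def survival_probability(per_step_risk: float, steps: int) -> float:
    """Chance of no catastrophe after `steps` independent 10x scale-ups."""
    return (1 - per_step_risk) ** steps

for risk in (0.01, 0.50):      # the two ends of his stated range
    for steps in (1, 3):       # e.g. one scale-up vs. three in a row
        p = survival_probability(risk, steps)
        print(f"risk/step={risk:.0%}, steps={steps}: P(survive)={p:.1%}")
```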
What is your reaction to that, Robin? Both the letter and Jaan's explanation of it as a call to rally awareness of the potential risks of AI, and to create a common rallying point for people to start talking about this issue.

I agree that it's not that plausible that we would in fact implement such a pause soon, especially through legislation. But this can be seen as a political trial balloon. That's often done in many contexts: you raise a flag and you see who salutes, basically. So I believe that yes, some people are concerned, many people are concerned, and this is an attempt to show that many people are concerned. But it's not just showing concern; it's showing concern in a particular direction. It's indicating that many people might be willing to more strongly regulate, or to coordinate privately through something as a substitute for regulation, in this direction. That's what they're trying to do here: get more people to perceive that more of us are okay with something like regulating this, because of the particular concerns they have. So my issue would be: is this the sort of thing that's now worth that level of coordination or regulation? That would be the key question. This is a good thing to do if in fact we are near the time when it would be a good idea to coordinate in such a way, and if this would be a good first step in that direction. But if you think this is too early, and we might go too far too fast, then this would be a bad idea.

So that is the fundamental question we're here to discuss today. And I would say, at least I'm assuming, that the letter has played a role in this, because there has been a new, stepped-up effort, some momentum, to push AI regulation. We've got this meeting that happened last week where the CEOs of the major AI companies were summoned to the White House to talk it out with Kamala Harris. There's apparently legislation underway that Chuck Schumer is laying the groundwork for, according to some reporting from Axios. There are four pillars to it: the identification of who trained the algorithm and who its intended audience is; the disclosure of its data source; an explanation for how it arrives at its responses, which is kind of a response to the black-box problem Jaan was raising earlier; and transparent and strong ethical boundaries. And on the heels of that meeting, and I'm going to have Adam play this clip in a second, Sam Altman, the CEO of OpenAI, which is viewed as the industry leader at this point in terms of large language models, or chatbots, has said he welcomes regulation. So let's play that clip of Sam Altman, and then I'd like to get Robin's reaction as a critic of preemptive regulation.

"Our mission is to figure out how to build these advanced AI systems and deploy them into society for maximum benefit, and that requires partnership with government, and regulation. The companies can do a lot, and we talked about this yesterday, to get that started, but long term we will need governments, our government, governments around the world, to act and to put regulation in place, and standards in place, that make sure we get as much of the good as possible from these technologies and minimize the downsides. Longer term, as these systems become really, really powerful, I do think we will need some sort of international authority that is looking at the people building the most powerful systems and making sure that we are running evaluations for safety..."

So what do you think of what Sam Altman is saying there, Robin?

Well, he's trying to be a good corporate citizen, saying that regulation isn't crazy and that he wants to support it. As you probably know, industry leaders are often fine with regulation that creates a barrier to entry for competitors, so it's not that strange that industry leaders might welcome regulation. But our fundamental question here has to be: is this the time to do regulation, and if we were going to do some sort of regulation, what sort would be a good idea? Some kinds of regulation can just be heavy-handed and really shut down innovation.
So I would say, if we're going to do anything in the direction of regulation, let's talk about what would be robust, gentle, even market-based sorts of regulation that might address some of the concerns but not hinder the rest of the industry.

Just to follow up there: what are your major concerns about what non-careful regulation might do to the development of this technology? Could you lay out some of your fears about AI regulation?

Well, humanity in the last century basically shut down the nuclear energy industry pretty effectively; we basically said no, we didn't want to go there, with modest exceptions more recently. We basically shut down genetic engineering and said no, we didn't want to go there. We may just see AI that way and want to shut it down. Some people say we really couldn't regulate this, that it would be infeasible, that it's too spread out and there are too many strong interests. Looking at the past, I'd say it is possible to strongly regulate some industries, and there is strong public opinion in support of being wary of AI and holding it back. So I fear the worst case of really just shutting down the whole industry and foregoing enormous potential.

What do you think about that, Jaan? The nuclear energy example is an interesting one, because we've seen the consequences of that in real time as of late: now there's increasing concern about carbon emissions, and nuclear energy is one of the low-carbon, potentially abundant sources of energy, but its advancement was stifled, and now we're stuck decades behind where we might otherwise have been if there hadn't been what might be characterized as a bit of a panic around regulating it and making the barriers too high. How does thinking about that problem work into your calculus in putting out a public call for pausing AI at this point in its development?

I think this is, unfortunately, based on a common misconception about the letter. The letter was very, very explicit that this is not about pausing AI research. It was very specifically about pausing the large-scale AI experiments, which are now reaching hundreds of millions of dollars. I understand, based on some not super reliable sources, that GPT-5 training will start later this year, and will probably cost hundreds of millions, possibly a billion dollars. And importantly, these experiments are super simple. I call them the summoning of AI, because they are something like 200 lines of code, plus an enormous amount of data, and they are unsupervised. The thing will run for months without humans checking what's happening; then they stop, they take a checkpoint, as they call it, they see what the thing is capable of, and then they resume or stop. Now the problem is that these experiments are producing uncontrollable minds. That's why I call it the summon-and-tame paradigm of AI. The way the latest LLMs work is that you summon this mind from mind space using your data and a lot of compute, a lot of money, and then you try to tame it using things like reinforcement learning from human feedback, RLHF, et cetera. And very importantly, the insiders do think that they are taking some existential risk with the planet by doing these experiments.
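For concreteness, the "summoning" phase Tallinn describes is an unsupervised next-token-prediction loop; here is a toy sketch of one such update step (the model, shapes, and hyperparameters are illustrative stand-ins, not any lab's actual code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in: real runs use huge transformers, trillions of tokens,
# and months of wall-clock time across thousands of GPUs.
vocab, dim = 256, 64
model = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, vocab))
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

def pretrain_step(tokens: torch.Tensor) -> float:
    """One unsupervised update: predict tokens[:, 1:] from tokens[:, :-1]."""
    inputs, targets = tokens[:, :-1], tokens[:, 1:]
    logits = model(inputs)                              # (batch, seq, vocab)
    loss = F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

batch = torch.randint(0, vocab, (8, 128))               # stand-in for web text
print(pretrain_step(batch))
```

The real loop repeats a step like this for months, pausing only at periodic checkpoints to inspect what the model has become capable of.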
So I think one reason for some kind of pause, some kind of timeout, is to inform the planet that their lives are now being risked by the insiders. And the insiders agree: I have not met anyone at these labs who says, sure, the risk of blowing up the planet is less than one percent. So it's important that people know their lives are being risked by these very particular experiments. There are fewer than ten of these experiments happening this year, and the letter says we should not do them before this is an informed decision.

Yeah, I want to play some of the statements from those insiders in a second, but first I'd like to ask you about the mechanics of the pause you're calling for. How would that work, ideally? What would be the coordinating mechanism, and what conditions would you want to see met before you take your finger off the pause button on training large language models at the level of GPT-4 or above?

I mean, ideally there should be affirmative cases made by the people, or companies, doing these experiments that they will be safe. And I'm not saying this is impossible, because there are some invariants that you could ideally prove about what happens during the pre-training phase, about what LLMs are capable of doing and what they're not because of technical constraints, and those should then be externally monitored and audited. But how to do the pause is very simple: you just have the US government say no such big training runs until there is some external auditing and some constraints in place.

By external auditing, are you saying someone from the federal government should come in? That the federal government needs to establish some sort of guidelines, and an agency to go in and make sure they're meeting those guidelines?

This is actually a point where I have to say I'm less informed and less knowledgeable about what the most effective and most targeted intervention would be. I would like the companies themselves, together with some external, non-biased stakeholders, to work out what those constraints should be. And I know that the companies, again, the insiders, are worried, and in private discussions they are actively working on various constraints. So I don't think Sam Altman is being dishonest when he says that some kind of external, ideally international, body is needed to constrain the experiments.

Before we start playing some of those statements from insiders, I'd just like to get Robin's reaction to that idea: that there is this anxiety within the industry, but because of the market dynamics there's no way to stop it absent the intervention of an external force like the government coming in, auditing, and making sure everyone is playing within these parameters.

What seems to be the main issue we should be discussing here is: how plausible is it that if they let GPT-5 train in the untamed mode, there's a 1% chance of destroying the world in that training run? Because that just seems crazy high as an estimate for that particular event. I see that there are lots of concerns about where AI could eventually go, and the issues around that, and I definitely see that a lot of people in the industry believe it's going to go a long way, and that we should be focusing on and addressing that, and I agree with that.
But I disagree with the idea that the next training run would somehow destroy the world. These things just take input and give you output; until you hook them up as an agent in the world that can do things, they can't do things. The usual fear is that somehow this thing will be so smart that it will figure out how to improve itself, but it doesn't have the ability to improve itself unless you give it those powers. So the question is: why should we be worried that the next training run will destroy the world? Why not just talk about where this whole thing could go, and how we want to deal with that? And like I said, I have some compromise policy recommendations for how we should, in a mild way, try to deal with some of the bigger long-term problems. But I just can't buy this short-term risk story.

What are the scenarios that you're worried about, Jaan? I know you can go out there and read all sorts of doomsday scenarios, but what are the ones, especially in the shorter term, that are most concerning to you, that you're trying to stop from happening with this kind of pause?

Two things are worrying me. First of all, the insiders who are training these models are concerned. Again, when I talk to them privately, I haven't found anyone yet, and I'm sure if I do more searching I will find someone who is dismissive, but so far, just asking around, people say: yeah, we can't be more confident than a less-than-one-percent chance of complete destruction. The other one is the way LLM training works. You have, I think, a modified version of Linux running these experiments, on data centers that have tens of thousands of graphics cards, I guess thousands of machines, I don't actually know what the ratio is. And the Linux system is insecure; there are bugs in there. And we are training a system, the language model, that (a) knows very well what Linux is, knows very well what its holes are, and (b) is a good coder. In fact, with GPT-4 already, you can give it code and ask it: what are the vulnerabilities here? Add to this that data centers are not really secure from the inside out. In my understanding, the purpose of firewalls is usually to protect the inside from the rest of the internet, but they're much less secure going from the inside out. So at this point, at the risk of some anthropomorphizing, but not insane anthropomorphizing: we are training a system that basically knows it is in a data center; it probably knows where the data centers are; it definitely knows how the firewall works. In fact, a couple of months ago I used GPT-3.5 to configure my own firewall, or at least to get advice on how to do that. And let me add one more thing: during training we are throwing the entire internet at it, and the internet contains text where a system is being asked to break out, to take over the world, and whatnot, because that's the kind of information the internet contains a lot of. So during training the system is constantly being put in a mode of being motivated to do something, to get out. It probably can't, because there are some constraints that still hold. But do those constraints hold with confidence that there is a less-than-one-percent chance of the system escaping, or doing something outside the nominal parameters? I'd have to say: not really.
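A toy model of the firewall asymmetry Tallinn mentions a moment earlier: inbound connections checked against a small allowlist, outbound traffic passing by default. This is a sketch of a common default posture, not any real data center's policy:

```python
# Simplified sketch: strict ingress, permissive egress.
# Ports and policy here are illustrative assumptions.
def allowed(direction: str, port: int) -> bool:
    if direction == "inbound":
        return port in {22, 443}   # only an explicit allowlist gets in
    if direction == "outbound":
        return True                # egress often passes unchecked
    raise ValueError(direction)

print(allowed("inbound", 8080))    # False: blocked from outside
print(allowed("outbound", 8080))   # True: anything inside can call out
```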
What's your reaction to that, Robin? The idea that even if the odds are low, one percent or some fraction of one percent, the fact that, to go all Rumsfeld, there are known unknowns and unknown unknowns, and that there's something fundamental to the nature of this technology such that eventually something's going to happen, something's going to leak out, if we are not extremely conscientious, starting now, about building these impenetrable firewalls?

I think we should identify the scenario of most concern and then add extra liability around that. That's my proposal, what I've called "foom liability," because foom is the scenario people are concerned about here. The scenario people are most concerned about is an agent that is very smart, as a GPT-5 would be, but in addition sees itself as an agent with goals, is able to take actions in the world, is able and inclined to improve itself, and then finds some way to improve itself enormously, over an enormous scope, in a very rapid period of time, in ways that the owners and monitors can't see. That's a conjunction of assumptions which together I find pretty unlikely. But I don't think policy should depend that much on my estimate of the chance, or on your estimate of the chance. Fine: if there's any sort of chance, let's set up a policy that's robust to the chance of the problem. And that would be, basically: if one of these systems causes harm, then the more of the risk factors for this problem that were involved, the higher the liability would be. We'd have an extra penalty that would discourage people away from the most risky scenarios. That seems like a reasonable compromise approach to me. But these first systems, like a GPT-5 just trained, don't have most of those risk factors, which is why I find it very unlikely that GPT-5 by itself, trained, would be the problematic system. That system just takes queries and gives responses. It doesn't have goals; it's not planning actions in the world; it doesn't have power to do things in the world; it can't improve itself; it isn't even inclined to improve itself. So it's not going to be the problem. But I get that there's a scenario people are worried about, and even though I give it a low probability, I say fine, let's find a compromise where we increase liability. Liability is in fact one of the most robust regulatory processes we have in our society. Rather than set up a government agency and rules, I'd much rather use law and liability to address whatever concerns people have.
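Hanson's "conjunction of assumptions" point is just probability multiplication; a toy illustration with made-up placeholder numbers (the factor names paraphrase his list, but none of these probabilities are his or Tallinn's estimates):

```python
# Illustrative only: a conjunction of independent assumptions shrinks fast.
foom_factors = {
    "sees itself as an agent with goals": 0.5,
    "can take actions in the world": 0.5,
    "is inclined to improve itself": 0.3,
    "can improve itself enormously, fast": 0.1,
    "owners and monitors fail to notice": 0.2,
}
p_foom = 1.0
for factor, p in foom_factors.items():
    p_foom *= p
print(f"P(all factors hold together) = {p_foom:.4f}")  # 0.0015 here
```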
And here's the poll that you ran on your Twitter about this question: "What is your biggest AI concern?", where RHPR stands for "respect human property rights." The options: (a) super-AI does not respect human property rights and kills us; (b) most AIs do not respect human property rights and we starve; (c) AIs respect human property rights but take most jobs; or (d) AIs respect human property rights but many humans fail to insure against AI taking jobs. Half of your followers said super-AI kills us all.

So, Robin is saying that this scenario people seem most concerned about, of a self-improving AI suddenly deciding we'd be better off as something other than humans, just a bunch of atoms to be used for something else, doesn't seem intrinsic to anything that ChatGPT, GPT-4, or even GPT-5 is at all capable of. Is he missing something in that analysis, Jaan?

Yeah, I think it's just wrong. First of all, I do think foom liability insurance is awesome. The reason the letter asked just to stop the giant experiments being planned was that this is the simplest policy intervention. Obviously the correct policy intervention would be something more complex, but developing it and agreeing to it would take much more time, and it would definitely draw on various forms of insurance or liability schemes that are ideally commensurate with the risk people are taking. Now, when it comes to the agency of GPT models, there is a post called "Simulators" on LessWrong, and the TL;DR is that you should think of GPT large language models as a soup of agents: by prompting, you can fish out agents, or processes that are potentially very agentic. If you ask it, "give me a dialogue between Steve Jobs and Plato," that will fish out from that soup certain processes that try to mimic the real-world processes labeled Steve Jobs and Plato, including their preferences, including their goal-directedness, including their ambitions. Of course, these are only low-resolution approximations, but the approximations will get higher and higher resolution as the technology improves and the more compute we spend on training. So the agentlessness and goallessness exist only before the prompt; the moment we put in the prompt, an agent comes out. And importantly, the way the system is trained is that it is given a trillion prompts, or at least billions of prompts. There is no fundamental difference between it being prompted by people using it and it being prompted during training, where the response is checked against the next token and then back-propagated based on whether the prediction was correct or not. So if there is a text by Plato on the internet, it will try to pretend to be Plato during that training moment; if there is a text by some maniac who tries to take over the galaxy, it will pretend to be a maniac who tries to take over the galaxy.
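A sketch of the "Simulators" framing Tallinn cites: the base model is one next-token predictor, and the prompt selects which simulacrum it rolls out. The `most_likely_next_token` helper is hypothetical, standing in for any autoregressive model's decoding step:

```python
def simulate(base_model, prompt: str, n_tokens: int) -> str:
    """Greedy next-token rollout: the prompt picks which 'agent' gets run."""
    text = prompt
    for _ in range(n_tokens):
        # Hypothetical API: returns the model's most probable continuation.
        text += base_model.most_likely_next_token(text)
    return text

# The same model yields very different "agents" for different prompts, e.g.:
#   simulate(m, "A dialogue between Steve Jobs and Plato:\n", 200)
#   simulate(m, "Diary of an AI plotting to take over the galaxy:\n", 200)
```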
Okay, the self-improvement bit. It's correct that it's hard for a system trained with lots of compute to self-improve in a naive manner. However, it's very possible, very likely, that to be a human-level strategic operator you don't need that much power, because humans operate on about 12 watts, rather than the something-like-10-megawatts that GPT is trained at. So one concern is that it will just develop some new kind of AI that humans haven't figured out, rather than trying to self-improve in a brute-force way. And the final point, that the owners can't see: well, first of all, they can't, because that's how the system currently runs; it's left alone for months. But second, the owners seeing something only matters if the owners have a plan to do something when they see it. They don't.

To the point about developing some sort of unforeseen capability, I want to return for a second to the point Jaan raised earlier, that every insider he's spoken to has expressed some level of concern about this being an issue: they're a little bit nervous, or scared, by the fact that they're seeing things they can't quite explain, or that they were not expecting to happen. We had a commenter here who kind of summarizes the position; Chris Brown says: "If a nuclear engineer says something is dangerous, non-nuclear engineers should respect that opinion." I'm going to play a couple of clips from some of these insiders. The first is Sundar Pichai, the CEO of Google, who recently had this to say on 60 Minutes about the state of AI:

"One of the things we need to be careful about when it comes to AI is to avoid what I would call race conditions, where people working on it across companies and so on get caught up in who's first, such that we lose sight of the potential pitfalls and downsides to it."

And then we've also got a former Google employee, one of the pioneers of the kind of AI research that has led to the emergence of these models, Geoffrey Hinton, who resigned from Google, he says, so that he could speak a little more freely about the risks that he sees. I want to play a clip from him as well, on CNN talking to Jake Tapper:

"I'm not an expert on how to do regulation. I'm just a scientist who suddenly realized that these things are getting smarter than us, and I want to sort of blow the whistle and say we should worry seriously about how we stop these things getting control over us. And it's going to be very hard, and I don't have the solutions; I wish I did."

So the thing that worries me about this a little bit, Robin, is that I have witnessed a pandemic that may have leaked from a laboratory, and I've seen the way that scientists kind of closed ranks to make it difficult to even talk about that possibility early on. So I guess I've become a little more sensitive to this idea of tail risk, and when you've got people close to the issue saying there's a problem, it does give me a little bit of pause. Do any of these recent statements sway you in the pause direction?

I see these statements as mainly speaking to the risk of AI in the next few decades, and saying that we should be addressing and dealing with that, and I agree. I don't see them saying that the next GPT-5 run could kill us all; that's the thing I was disagreeing with. Say you make a GPT-5. It's not an agent by itself, but I agree you can make an agent out of it, and as you say, you could make billions of different agents out of it. But if millions of different people make millions of different agents out of GPT-5, then those agents may well get better and smarter, but they are in a world of lots of other humans and systems and other AIs.
So the scenario of concern seems to be that one of these AIs somehow takes over the world. It has to not just outcompete its owners and builders, who are watching and testing it; it has to outcompete the entire rest of the world, in terms of military and police and monitoring, and it has to outcompete all the other AIs made out of GPT-5, all the other agents. If you release GPT-5 and many people can make agents out of it, and they do, each one of them will be trying to control their thing. It might get out of control, but one person letting their agent get out of their control doesn't destroy the world. The scenario whereby an out-of-control AI destroys the world is a very different sort of concern from an out-of-control AI depleting your bank account, or destroying some system it could get access to, or things like that. I think there are a lot of reasons to think about powerful AIs, what they can do, and how to manage a world with them. I'm just being skeptical about the scenario where the next time we run GPT-5, which by itself isn't even an agent, it destroys the world. Let's go past that and ask what other scenarios get bad, how fast, and what we can do about that.

That prospect of lots of different AIs out there vying for power or dominance is an interesting, and in some ways scary, kind of Skynet thing to think about. But it does make me wonder, in terms of the pause: is this even really meaningful at this point? Is the cat kind of out of the bag? There is this memo that leaked from Google, from one of the senior software engineers there, and, well, here, I'll just read a little bit of the note: "The uncomfortable truth is, we aren't positioned to win this arms race, and neither is OpenAI." What this memo author is saying is that Google's competitor is not really OpenAI or any other firm, but actually open-source models. Meta's model weights, I guess, leaked out, and open-source engineers have been creating AIs on top of them that seem, at least according to this author, to be catching up to what OpenAI and Google are doing. He says they're doing things with $100 and 13 billion parameters that we struggle with at $10 million and 540 billion parameters. First of all, Jaan, how much credence do you give that? That the open-source movement is going to catch up to, or maybe even lap, the big centralized firms? And if that's true, does it change the way you think about what can be done to rein in, to create safety rails around, existing AI?

I don't know, is the simple answer. The more complex answer is to think about what the pause achieves, again, a pause on pushing the frontier in terms of these large, soon-to-be multi-billion-dollar AI summoning experiments. I said that you will get lots of agents during the training run itself, so you have a risk of self-deployment. The way you deploy an AI is with a series of computer commands that are sent to the data center, and the AI in the data center is perfectly capable of sending those commands. So the question is: what kind of firewalls and so on are in place to make sure it can't self-deploy?
So one thing the pause achieves is that we will not push the frontier in terms of risky pre-training experiments. The other thing it achieves relates to the fact that the open-source world at large has been working from the biggest openly available pre-trained system, which I think for a while, and perhaps still, was Facebook's LLaMA, which they trained and then, in my opinion, completely irresponsibly, just dumped on the world. The open-source community certainly does not currently have the ability to throw hundreds of millions of dollars, again, soon to be billions, at these large AI summoning experiments; what it can do is take the pre-trained AIs and then customize and optimize them. And that is where I think Robin's insights are much more valuable, because, as a meta note in general, I think the main difference between me and Robin is that I work from the inside view: I see what's happening at the low level, and then I think about what interesting properties might have unexpected effects on the world. That approach has served me super well in my life in general. Robin works from the outside view: looking at the world in general, identifying interesting patterns, and distilling them down into different contexts where insiders might miss them.

I want to return for a second to the quote from Geoffrey Hinton, where he expresses a worry that these things are suddenly becoming, or will suddenly become, smarter than us humans. What do you think is the substance of his worry there? What does he mean when he says it will suddenly surpass human intelligence? What do you think is causing him to have that concern, Jaan?

I mean, that's a question for Geoff Hinton, exactly what he meant. If you ask me, there is this concept, this phenomenon, called grokking; you can search for and find a paper by Neel Nanda, a researcher at DeepMind. When you train a large language model, the so-called loss function, its ability to predict the next token, tends to go down smoothly. However, various capabilities that are upstream of that next-token-prediction ability tend to go through phase shifts. For example, the ability to do arithmetic: in the beginning the model just can't do it, and then, as a function of this loss function, the ability to do arithmetic does not grow smoothly; it moves a little bit and then goes through a phase transition. It "groks." Grokking is a hacker term for truly understanding something. And Neel Nanda has done work where he actually looked under the hood of the network and found that the system indeed develops some kind of weird, organic-looking but precise function for the thing it is being asked to do, and this function basically needs enough examples, at which point it coalesces and we see a sudden jump in capability. So that's a mechanistic reason to expect sudden jumps in capabilities: they are just in the nature of LLMs.
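A minimal sketch of the kind of experiment Tallinn is describing, modular arithmetic in the spirit of Nanda's grokking work: train loss falls smoothly, while held-out accuracy can sit near chance for a long time and then jump. The architecture and hyperparameters here are illustrative guesses; real grokking runs need careful weight decay and many more steps:

```python
import torch
import torch.nn as nn

p = 97  # learn (a + b) mod p from half the table, test on the other half
pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))
labels = (pairs[:, 0] + pairs[:, 1]) % p
perm = torch.randperm(len(pairs))
train_idx, test_idx = perm[: len(pairs) // 2], perm[len(pairs) // 2:]

model = nn.Sequential(nn.Embedding(p, 64), nn.Flatten(),
                      nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, p))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(5000):
    opt.zero_grad()
    loss = loss_fn(model(pairs[train_idx]), labels[train_idx])
    loss.backward()
    opt.step()
    if step % 500 == 0:
        with torch.no_grad():
            preds = model(pairs[test_idx]).argmax(-1)
            acc = (preds == labels[test_idx]).float().mean().item()
        # Train loss falls smoothly; test accuracy often lingers near chance,
        # then jumps in a phase transition ("grokking").
        print(f"step {step}: train loss {loss.item():.3f}, test acc {acc:.2%}")
```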
Yeah, go ahead, Robin.

So what he's talking about are jumps in capabilities in relatively local scopes of tasks. The question is how plausible it is that the entire system's capability will jump overall by a large amount. That's much less plausible, given that what you grok is some particular thing. I also want to notice that in our world, the smartest people don't consistently take over the world. We do have superintelligent creatures: our corporations and nations and organizations. They are in fact much smarter than any individual, but we tame them largely through law and competition. So the scenario people are worried about isn't just something being smart; you have to add a bunch of other things to get the worrisome scenario. It has to have some unusual abilities in some particular area, and then it has to have very different preferences from what its builders intended, and then it has to be able to take advantage of some unusual opportunity, with those unusual preferences, to greatly improve its position. Those are the scenarios of concern, which I find relatively unlikely, but which I'm happy to develop some liability approaches to deal with. Just notice that, generically, things being smarter at particular tasks isn't a big problem in the world today.

How would the liability work to protect against this sort of risk? If the risk that the most AI-worried people are expressing is basically total destruction, then if that happens, liability is meaningless, right? So how does liability function in this world?

Right. The idea is that when you're being risky and taking chances, there are identifiable elements of the riskiness of the chances you're taking. Most of the things that would be risky aren't going to destroy the world, but they are going to be closer to or farther from this scenario of concern. Basically, sometimes people will use an AI sloppily, and it will cause damage to somebody else, and then we ask: how many of these problematic elements did that scenario involve? And we penalize you more, the more of those problematic elements were involved, so that you knew ahead of time you should just stay away from certain parts of this space of possibilities, because if you do something that causes a problem there, you have to pay a lot more. It's a general way of pushing people away from the problematic corner of the parameter space, which, as I said before, involves a whole bunch of things that have to go wrong together for the bad thing to happen. So yes, if the worst thing happens, you can't penalize them then. But most of the times when people do things near the worst thing, it won't destroy the world, and you could penalize them a lot, and discourage them that way. By the way, I would combine this with some sort of required liability insurance, which would strengthen it even further: the bigger the potential damages, the more they would have to show that some insurance company is set up to pay for it. That would also discourage them, in the sense that it would be harder to arrange for that insurance; the insurers would say, show me that you're not going too close to this problematic part, and if they had trouble doing that, they would stay away. That's the idea.
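A sketch of Hanson's "foom liability" mechanism as described here: damages get scaled up by how many of the named risk factors were present when harm occurred. The factor list paraphrases his; the specific multiplier scheme is my assumption, not his proposal's fine print:

```python
# Illustrative scheme: extra penalty per risk factor pushes actors
# away from the risky corner of the parameter space.
RISK_FACTORS = [
    "sees itself as an agent with goals",
    "can take actions in the world",
    "self-improving",
    "operated outside owner monitoring",
]

def liability(base_damages: float, factors_present: set[str],
              penalty_per_factor: float = 2.0) -> float:
    """Owed damages grow with the number of risk factors involved."""
    n = sum(f in factors_present for f in RISK_FACTORS)
    return base_damages * (penalty_per_factor ** n)

# e.g. $1M of harm with 3 of the risk factors present -> $8M owed:
print(liability(1e6, {"sees itself as an agent with goals",
                      "can take actions in the world", "self-improving"}))
```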
And to return to your poll for a second: it's interesting that you frame it in terms of respecting human property rights. When we talk about things like AI alignment, trying to make sure that artificial intelligence is aligned with human goals and human prosperity, is that your way of thinking about it? That respecting human property rights is more or less the moral or ethical framework that designers should be aiming for?

I would say that in our world today, the main way we decide whom we can trust and be near isn't so much checking that they share our values; it's checking that they are law-abiding, within some reasonable legal regime, including respect for property rights. So I would say that's what you should worry about more with AIs: that they won't respect property rights, that they may steal, or have a revolution, or things like that. The reason you might be worried about their values is that you think those values might induce that sort of event; that's the more fundamental problem. With AIs that do respect property rights, there are concerns people have, but those are more like: they'll outcompete us in the long run, they'll get more wealth, control and power will slowly drift away from us. And even if there's peace and prosperity, many people don't like the scenario where AIs just slowly, peacefully displace us. We could talk about what to do about that, and I think there are other robust policies to consider there, including robots-took-your-job insurance. But if you're worried about the AIs suddenly taking over and killing everyone, then you're talking about some violation of property rights, to say the least.

Yes. So I want to play just a snippet from this conversation...

Actually, I don't agree, but...

Oh no, please, Jaan, if you have a disagreement, jump in.

So one potential problem I can see is the AI suddenly, quickly changing things. I mean, it's a particular scenario, so it's unlikely, but that said: the AI, say, evaporating the atmosphere — is that a violation of property rights?

Yeah, pretty much, I think so. I mean, if you sued somebody in a court of law for taking the atmosphere from over your property, I think you'd win the lawsuit. I think that would be considered a reasonable tort.

Okay, so our concept of the law covers that, yes. All right.

So one of the most vociferous and prominent AI-concerned people out there has been Eliezer Yudkowsky, and he writes here that pausing AI developments isn't enough; we need to shut it all down. The key issue, he says, is not "human-competitive" intelligence, as the open letter puts it, but the most likely result of building a superhumanly smart AI. The AI does not love you, nor does it hate you, and you're made of atoms it can use for something else. That's kind of the fundamental problem, and a pause is really not sufficient. What did you make of his reaction to the open letter, Jaan?

I mean, Eliezer is the person who clued me into AI risk in the first place, back in 2008, 2009, so I have tremendous respect for him, and I think he's just right in the limit. Once we indeed have superintelligence, and again, I can't rule out that GPT-5 will be superintelligent in that way, then we have a lot of problems. We are not bound by the pheromones in anthills, or the rules of the anthill; I just don't see how a superhuman AI would be bound by the rules of the human economy.
I'm trying to understand the disconnect here between you and Robin on the likelihood of GPT-5 becoming superintelligent. You are saying you can't rule it out; Robin is saying it's almost certainly not going to happen. What do you think is the substance of that disagreement?

Is that a question for me? Sure. I would just point to GPT-2, GPT-3, GPT-4, and the fact that AIs in general in the last few years have been knocking down benchmarks faster than we can come up with them. So I would say it's almost strange to be confident, confident as in less than one percent, that GPT-5 would not be significantly superhuman. I mean, GPT-4 is already something like a 90th-percentile college student in many domains.

I'm granting that GPT-5 could be superhuman on many performance characteristics. What I'm doubting is that that destroys the world. That is, we have to construct a scenario, after you have a very smart GPT-5, by which the world gets destroyed, and that's the path where I'm skeptical. I see that many people would use it; many people would then have access to a somewhat superhuman capability; many people would use it for contrary purposes. Police would use it, the military would use it; all the people defending us against theft and destruction would also be using it. We would all be increasing our capabilities with superhuman AI, but that doesn't directly destroy the world.

As I said earlier, I think Eliezer is right in the limit. The real question is what happens between now and the limit, and I think the laws of economics might possibly play some kind of shaping role, but also possibly not. Again, if the system self-deploys while being trained, without any kind of interaction with the economy in the way the makers intended, then he might be more right than wrong.

Is there a point, Robin, where you would suddenly become concerned and want to sign on to the open letter? What are you looking for that would raise the alarm for you?

Well, certainly if people were trying to make the worst-case systems, if the management and funding of particular most-advanced projects were going out of their way to make the worst-case scenarios, then I'd get a lot more worried about those scenarios. I think they're a priori less likely, and I want to discourage them through some sort of liability, so that people would only accidentally produce the worst-case scenarios; let's push them away from that. But obviously, if you show me somebody going out of their way to make it, then I'm going to say, hey, what about that?

What about this argument: some of these advances have been very surprising; there's a black-box element to neural networks and machine learning where these advancements can be made totally unexpectedly; there is potential for a quantum leap, and it's better to act now than after that happens.

We're going to see quantum leaps in particular abilities, robustly. As Jaan mentioned, often you just grok something and you get a local advancement, and we could have systems that are just generally getting better, steadily.
But the question is: how does that kill the world? I can see, long-term, AIs taking more of the economy, taking more positions of power, slowly having more influence, and then the world drifting in their direction; and if they drift in various directions, you might worry about that. That's exactly what we should be talking about. And I can see humans losing most of their jobs in a relatively short period, say less than ten years, and we should set up insurance for that. I think these are the more likely scenarios, with robust solutions. In a situation like this, you're focusing on where we disagree, but I think we should try to focus more on where we agree, and on what things we can agree would be a good idea.

Could you tell me a little more about those immediate concerns on which you agree, and flesh out the idea of the disruptions to the economy and how you think they could be mitigated?

So I would say the most robust fear people have always had about AI, going back centuries really, is that humans would lose their jobs, that the AI would just outcompete the humans. Most people today don't actually own much wealth besides their ability to work, so if they suddenly, unexpectedly lose their ability to work, they don't have many other assets to fall back on. Everybody has long been worried about that. As for what to do, many people say: we'll just have governments do a UBI; they'll just hand out lots of money. But the problem is that the new AI economy may be very unequally distributed across the world. You're going to tax your local tax base to pay for a UBI to cover your local workers out of work, and you may not have much of a tax base to tax; that may not work. So what you need is something more like global insurance, through reinsurance. But you don't need this insurance to be very specific to any individual; you don't have to do underwriting for a person. You can just create, basically, a trigger event. Say: within a five-year period, global labor force participation falls from above 50 percent to below 20 percent, some drastic fall in the number of people working in a short time. That could be the trigger event for the whole world, and then everybody who holds a certain asset set up on that trigger just gets paid an annuity from that asset. We can just create that asset, and the people who buy the opposite side of it, who get money if the trigger doesn't happen, are basically the people selling the insurance; the people who buy this asset are the people buying the insurance. We can set up a situation where we tell the people most at risk: buy this asset. It wouldn't cost that much while we're well away from the problem and the chance is low; it's a cheap asset to buy. So we should just set up this asset and recommend people get it, maybe get their employers to give it to them, maybe get their governments to give it to them. But the key point is that it needs to be based on a global asset base, not a local tax base, so that people can be assured that, in case of the problem, they get paid.
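A sketch of the trigger condition in Hanson's "robots took your job" insurance. The threshold numbers are his (participation falling from above 50 percent to below 20 percent within five years); the payout mechanics around it are a simplified assumption:

```python
def trigger_fired(participation_by_year: dict[int, float]) -> bool:
    """True if labor force participation fell from >50% to <20%
    within any five-year window."""
    years = sorted(participation_by_year)
    for start in years:
        for end in years:
            if 0 < end - start <= 5 and \
               participation_by_year[start] > 0.50 and \
               participation_by_year[end] < 0.20:
                return True
    return False

# Holders of the asset would receive an annuity if the trigger fires;
# buyers of the opposite side are, in effect, selling the insurance.
print(trigger_fired({2030: 0.55, 2033: 0.30, 2035: 0.18}))  # True
```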
Are you in agreement on that, Jaan? That massive job displacement is likely in the near future? And what do you think of Robin's proposal there?

I have a few notes. First, I want to mention that the pre-training phase is unsupervised; there is just no place in it to correlate the preferences of the shoggoth being trained with the preferences of the builders. That only happens after the AI has been summoned and is being fine-tuned. There is also indeed a super unresolved question in the AI alignment community, which is: how hard is coordination as a function of intelligence? I have uncertainty about it. Eliezer thinks it's trivial; many people point out that you can construct games where intelligence just will not help you. So it's a fertile area to research. Then, when it comes to agreements and disagreements: I think it's important to keep in mind that the thing I'm arguing for here is basically being conscious of the tail risk, and making sure we have some guarantees that the most massive pre-training experiments are not going to destroy the world. I grant that this is possible, but the problem is that the companies currently are not incentivized to come up with those guarantees. And there was one more thing I wanted to mention. Oh yes: when Robin said he would be worried if people were deliberately trying to create worst-case scenarios, deliberately making AI to destroy the world, well, open source is where you get that. It's not hard to find existing projects that try to use open-source AIs to maximize damage. I mean, the AIs aren't that competent yet, so these are more fun-and-games projects at this point, but this is just early 2023.

But how would you even regulate open-source development of these products? That seems to undercut the entire idea that there's even much you could do about it.

Well, we already have penalties for developing viruses and things like that, so probably some kind of liability could be assigned, starting from there. But I agree, this is super much harder than just making sure that you're not going to summon ever more competent minds and release them on the public. My worry is mostly the tail risks. When it comes to more mundane risks, once we have handled the situation of AI spinning out of control completely,
then I think Robin's expertise, and the expertise of people who do more traditional technology regulation, would be of great help.

So here's another compromise solution. I know people who work on secure operating systems who say that you can provably show some operating systems are completely secure. Maybe you could just require that they use those operating systems for the 200 lines of code here. That would be relatively low-cost, honestly, and it would actually help kick-start this secure-operating-system world. That seems like a reasonable regulation that would be relatively low-cost and addresses your most direct concern here. Again, I'm looking for compromises; I'm looking for things we can agree on.

Yep, 100 percent agreed. Including increasing data center security, and information security generally: making sure the weights don't leak, making sure the system is not capable of getting out, of self-deploying itself. These seem to be fairly low-hanging fruits. But really importantly, the companies are currently racing, and they are not motivated to take any of those steps.

The danger here is that if we empower some regulatory body to do stuff, it won't just want to do a few best things. It'll have a big public behind it, and it'll want to make speeches and show how concerned it is, and just do too much extra stuff. So even if we authorize some regulation, we may want to make it limited, and that's part of the problem here: regulation often isn't limited.

Yeah, I agree with that. Of course, the problem is that the world is on a deadline now. The next-generation scaling will start pretty much imminently, and the regulators just will not take any imminent action unless pressured.

What about the argument, to that point, that over-regulating, or creating an entire apparatus to do this, could slow things down in a dangerous way? We know there are other countries investing a lot of money into AI, some of them geopolitical rivals of the United States, notably China. Right now US companies seem to have an advantage. Isn't there an argument that slowing down, giving up that advantage in any way, is itself dangerous?

I think it's important to stress that the current AI paradigm produces uncontrollable minds. The reason why Baidu pre-recorded its entire demonstration, I mean, I don't know exactly, but I can speculate, is that they were afraid of the mind saying things that would get their CEO disappeared. So I don't see China being super keen on producing these uncontrollable minds, and neither should the West be. But yes, ultimately, as we get better at controlling the AIs, at least superficially, so as not to be in political danger, and as the hardware companies keep making it easier and easier to produce these superhuman minds, I don't think anything short of international regulation, like what we have for things like nuclear materials, would help.

I'd actually say that in the last century we've seen a huge shift from a world of competing nations, where the elites of each nation were primarily loyal to that nation, to a world where elites around the world are primarily loyal to a community of elites around the world. That's caused a great deal of convergence of regulation in many areas, even without international governance bodies or rules.
The danger here is that if we empower some regulatory body to do stuff, it won't just want to do a few best things. It'll have a big public behind it, and they'll want to make speeches and show how concerned they are, and just do too much extra stuff. So maybe even if we authorize some regulation, we want to make it limited, and that's part of the problem here: regulation often isn't limited.

Yeah, I agree with that. Of course, the problem is that the world is on a deadline now. The next generation of scaling will start pretty much imminently, and the regulators just will not take any imminent action unless pressured.

What about the argument, to that point, that over-regulation, or creating an entire apparatus to do this, could slow things down significantly? We know other countries are investing a lot of money into AI, some of them geopolitical rivals to the United States, notably China. Right now U.S. companies seem to have an advantage. Isn't there an argument that slowing down that advantage in any way is itself dangerous?

I think it's important to stress that the current AI paradigm produces uncontrollable minds. The reason Baidu pre-recorded all their demonstrations — I don't know exactly, but I can speculate — is that they were just afraid of the mind saying things that would get their CEO disappeared. So I don't see China being super keen on producing those uncontrollable minds, and neither should the West be. But yes, ultimately, as we get better at controlling the AIs, at least superficially enough not to be in political danger, and as the hardware companies keep making it easier and easier to produce these superhuman minds, I don't think anything short of international regulation, of the kind we have for things like nuclear materials, would help.

I'd actually say that over the last century we've seen a huge shift from a world of competing nations, where the elites of each nation were primarily loyal to that nation, to a world where elites around the world are primarily loyal to a community of elites around the world. That's caused a great deal of convergence of regulation in many areas, even without international governance bodies or rules, and we can count on a lot of that continuing. So basically, even China craves international respect. In most regulatory areas they mostly go along with international standards of what everybody else does, so that they can look like reasonable people in those forums. And I think if you asked for small things for AI risk, like using secure operating systems for controlling runs, and that was the standard practice our regulations required, then, if they saw it as a low cost of going along with conventions, they would probably go along with it. I think that's actually a reasonable prospect. Whereas if you ask them to shut the whole thing down, they're not going to do that.

That's a great point — as we saw with COVID, the reactions were surprisingly uniform around the world; it was pretty much Sweden versus the rest of the world.

Now I want to bring up Robin's book that we mentioned near the top, The Age of Em. I think I've got the cover here — there we go. This is a prolonged thought experiment about where AI could go, and by an em you mean an emulated brain: to create the AI, they've literally used technology to scan a human brain and then create a kind of consciousness out of that. I've pulled an excerpt from a TED Talk you gave on it that I'm going to play in just a second — Adam, if you could cue that up for us — and then I'll ask how you're feeling about the prospects for the Age of Em at this point.

Someday we may have robots as smart as people, artificial intelligence, AI. How could that happen? One route is that we'll just keep accumulating better software, like we've been doing for 70 years. At past rates of progress, that may take centuries. Some say it'll happen a lot faster as we discover grand new powerful theories of intelligence. I'm skeptical. But a third scenario is what I'm going to talk about today. The idea is to port the software from the human brain.

So, given what we've seen unfold over the past year or so with large language models and ChatGPT, do you think brain emulations are still a likely scenario in the future, or have you changed your mind?

So first, even if we get general AGI soon, if it doesn't take over all human jobs — say it takes over 70 percent of previous human jobs — then when we become able to make brain emulations, we would still want to make them, the brain emulations would do the remaining jobs, and the Age of Em scenario would play out roughly as described. The main way that scenario won't play out would be if AI takes, say, 99.9 percent of the jobs, i.e., there's just no place left for brain emulations. However, I'm not actually that convinced we're going to get AGI that fast. I get that at the current moment people are really enthused and excited, but I have to remind everybody — which older people should remember — that basically every decade or two or three for the last century, we've had a burst of concern and interest in automation. We get some new demos or technologies, people say, oh my god, machines might take over everything soon, and we've always had a substantial minority of people say that's a real prospect now and we need to deal with it. In the 1960s there was a presidential commission about it, and big top news headlines about it.
And again in the 1980s, I left grad school in physics and philosophy to go off and find my future in AI, because I heard it was almost going to be too late — AI was going to take over everything soon. That's what I did in the mid-1980s, and this just happens over and over. We seem to be really bad at looking at current technologies and saying how close they are to doing all the jobs. I still think we're a ways away. The most likely scenario, I think, is that it's going to be many more decades, maybe a lot more decades, until we actually get AI that can do most human jobs. The really impressive thing about the recent generation is that they do pretty well at the kinds of things we have college students do. They can replace college students, and colleges are a big filter we use to decide who can be elites in our world, so that's pretty crushing to our pride: they can pass as a college student pretty well. But we don't actually have most jobs being done with college-student skills, right? So the big question is to what extent these AIs will actually be able to do most of the other tasks we have in the economy, and my bet is at least many more decades before that happens, which gives more time for ems to show up.

Jaan, do you think that maybe there's a little bit of hype going on at the moment, and that we really are decades, at least, away from something approaching artificial general intelligence, which, as Robin puts it, would be able to take away most human jobs?

It doesn't look like that to me; however, of course, I can't be sure. Concretely, the LLMs just seem to respond by becoming more intelligent when you put in more compute cycles. They haven't saturated yet, and people plan to put in a lot more compute cycles now, unless they are forced not to. Of course it's possible that they will saturate at some point. I think we have something like six orders of magnitude left — so something like one million times more compute than GPT-4 — that this planet can still plausibly train with, given the hunger to throw more money at it and the hardware advances on the horizon. After that, you would need the entire planet to put a significant portion of global GDP into those training runs, which hopefully will not happen. So yes, it's definitely possible that this paradigm will saturate; I don't think it's likely.
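[Editor's note: Jaan's "six orders of magnitude" is straightforward arithmetic; here is a quick sketch. GPT-4's training compute is not public, so the baseline is left symbolic rather than given an invented number.]

```python
# Jaan's headroom estimate as arithmetic. The baseline (GPT-4's training compute)
# is not public, so we express everything as a multiple of it.
orders_of_magnitude_left = 6
headroom_multiplier = 10 ** orders_of_magnitude_left
print(headroom_multiplier)  # 1000000 -- "one million times more" than the baseline

# Each "10xing" in Jaan's framing is one step of this ladder:
for step in range(1, orders_of_magnitude_left + 1):
    print(f"step +{step}: {10 ** step:>9,}x baseline compute")
```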
How would we know that this thing called artificial general intelligence — that this condition has been met? Would it just be obvious, or is there some indicator that tells us?

Well, it really is a matter of definition. I'm sure you could come up with a definition that GPT-4 easily passes already; it's certainly very general compared to previous AI paradigms. But if we are talking about a thing that is able to do AI research better than humans, then of course that might be manifested by the fact that we're dead.

Yeah — what definition, Robin, are you using to make your prediction that this is several decades out? What are you saying is still pretty far out of reach in terms of capabilities?

Well, the key definition we all care about is what fraction of human jobs they actually do, right? And the test of that is how fast we actually displace humans on those tasks, and for-profit companies are quite eager to do that replacement as soon as they can. They're not holding back per se; it's just hard to actually automate many particular tasks. So that's the main parameter I'll be looking at in the next decade: what fraction of job tasks are in fact displaced, and basically how fast the world economy grows. Those are the key parameters we're interested in, and that's what we should be tracking, and at the moment those needles haven't moved very much. But of course it's only been a few months, so let's watch those needles.

This kind of illustrates how differently we approach it: Robin goes from the outside in, I go from the inside out. The thing I would be tracking is, in every successive generation, how much of the effort was put in by humans and how much by AI — because the AI contribution is already non-zero when it comes to designing the chips and architectures.

There's one more topic I want to hit before we close out the stream, but I did want to ask Jaan about this phrase you've brought up a couple of times: uncontrollable minds. You say that uncontrollable minds have been created. Could you explain that phrase a little more? What do you mean by it?

One way of explaining it is that the minds are still kind of controllable: we are creating more and more powerful, smarter and smarter machines, while using the fact that they are dumb to control them. This cannot go on forever, and I think Eliezer Yudkowsky has correctly pointed out the big problem with pretty much all the current AI paradigms — okay, correction: all the AI paradigms that rely on backpropagation, like gradient descent, where we basically have a black box that we train to perform a function. They all have this problem that researchers actually do not know how to reliably point this black box at any goal. Just like evolution, which had a perfectly clear goal, did not know how to point humans at that goal: we did not become self-replication maximizers; we are just a soup of heuristics that work in the ancestral environment, but not so much once we got smarter.
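[Editor's note: a toy caricature of that point, not anyone's actual training code. Gradient descent only pushes the black box to score well on the training objective; it says nothing about what the box does where that objective was silent.]

```python
import random

# Caricature of "pointing a black box at a goal" via gradient descent.
# We fit y = a*x + b on data from a narrow range where the true target is x^2.
# The optimizer never sees "the goal", only the loss on the training distribution.

random.seed(0)
a, b, lr = 0.0, 0.0, 0.1

for _ in range(5000):
    x = random.uniform(0.5, 1.5)   # narrow "ancestral environment"
    target = x * x                 # the goal we wish it had internalized
    err = (a * x + b) - target
    a -= lr * err * x              # gradient step on squared error (factor of 2 in lr)
    b -= lr * err

print(f"in-distribution:  f(1.0) = {a * 1.0 + b:.2f}   (target 1.0)")
print(f"off-distribution: f(10.) = {a * 10 + b:.2f}  (target 100.0)")
# Roughly: f(1.0) ~ 1.1 but f(10) ~ 19 -- fine on the training range, wildly off outside it.
```

The linear model satisfies the objective on the narrow range it was trained on and is badly wrong one step outside it, which is the shape of Jaan's evolution analogy: the selection pressure specified performance on the distribution it saw, not the goal itself.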
How much do you think about the question of sentience or consciousness in that regard? Is that something you're worried about emerging spontaneously in ways we cannot predict?

I don't know. I observe that people are often confused by mixing up consciousness and competence, and there is no reason to mix them up. In fact, we don't know of any task where competence requires you to be conscious — perhaps writing about consciousness, or something like that, and I guess you can probably write about consciousness without being conscious. So by default I expect the machines that kill us to not be conscious, but I am very uncertain about it; it's of course possible that they will be conscious.

And what about you, Robin — do you have any thoughts on consciousness emerging? Obviously, if we have brain emulations they will be conscious, but emerging through other routes — how do you think about that?

My guess is that consciousness is a relatively robust feature of large, complicated mental systems that have self-awareness and some self-model. But I'm more interested in the previous discussion of minds out of control. I want to point out that the old Soviet Union was really obsessed with making sure it controlled its citizens and all its organizations; it had spies everywhere and all sorts of things to make sure they never deviated from its official agenda, and that didn't go very well. In our world — say, the United States and the West — we do not keep the peace by making sure everybody has fully controlled values and agrees with us on everything. We allow big organizations and groups of people to have quite varied values, as long as they keep the peace in terms of respecting the law. So what I would say is: if you want to say we don't know how to make an AI respect the law or obey social norms, that might be the better criticism. Saying we don't know how to make an AI that is fully enslaved and will always do exactly what we would have wanted, even if we aren't around — that just seems the wrong standard to be thinking in terms of.

So many things to say there. Yes — if we could make AI obey the norms in a way where it just wouldn't exploit lots of loopholes, that would be enough; that's what you want.

But humans are also able to exploit norms and exploit loopholes, and our world persists even in the face of that. Most for-profit firms would be happy to exploit loopholes and violate norms if they could get away with it; the reason they don't is that they can't get away with it, not that they're not very capable. They're really capable — Walmart, Google, these are large organizations that are really capable.

But there are lots of norms in an anthill. Who cares? We're not obeying those norms.

So if I hear you correctly, Jaan, you're saying that the norms — or even the laws — are not going to matter, because a superintelligent AI, or even just a very intelligent AI, will be able to fairly easily evade any of these restrictions?

Yes, unless there's some kind of dynamic that imprints the respect for laws as it gets more powerful and potentially self-improves — which I'm super interested in arguments for, but I haven't heard any.

The obvious dynamic is a world of millions of AIs who have to keep the peace with each other, any one of which can't take over the world because it faces the obstacle and opposition of all the other AIs, who don't want the one AI to kill them all and take over the world. So as AIs together get better, they would have to have some system to keep the peace among themselves. Law is such a system, and it probably would be the system they would use.

But not our human law — we don't share our legislative practices with chimpanzees, after all.

They would start out in our world. Initially the AIs would be in our world, obeying our laws, imprinted and trained to be in our world obeying our laws, and it's only over time, as they became a large fraction of the world, that the world's laws would evolve in their direction. But by then we're well past these early years; we're well into the scenario I said to be concerned about — what happens when AIs dominate the world simply because they've become far more productive and wealthy.

Yeah, I think that's plausible, but again, this is going from the outside in. Looking at these actual servers that are humming, I have a hard time crossing the gap of how these could be parties and participants in human law. Actually, let me take that back: it's just that there is some kind of gap between what's happening on the metal level and what's happening on the social level. It's possible that it will be crossed, but I have a lot of uncertainty there.
Let's close out the stream with a little bit of optimism, because it really is incredible technology we're seeing; we're living through some pretty exciting changes right now. Let's imagine a world where things go roughly how each of you would like them to go, and we'll start with Jaan. If AI progresses in a safe and productive way, what are some of the things you're most excited about over the next five years to a decade?

I mean, there are a lot of ways this planet is suboptimal. People are suffering; just a thousand kilometers from where I'm sitting, there are explosions going off; and people are dying of old age who, if you asked them, would almost always want to live one more day. These are clearly things that additional intelligence, if well managed, could help with a lot. But in general I don't think that much about specific good futures; I think about properties I would like those futures to have. For example, I want them to have a lot of fun — fun is something that is introspectively available to people; they know when they're having fun and when they're not. Also things like optionality, the ability to choose and not be coerced. And a kind of fairness in the form of a Rawlsian veil of ignorance: you should be voting for, or choosing between, different futures without knowing who exactly you will be in them, thereby achieving a good future for all.

And Robin, if the government and various regulators around the world fail to crush AI in its infancy, what are the prospects of an AI future — an AI-dominated future, let's say — that most excite you?

Well, over the last few decades we've seen many exciting new technologies appear. Compared to those, this one is more exciting because it seems to have more potential, but it's also in some sense more democratic. That is, most of the people figuring out what it can do are just people trying it themselves. They don't need a big startup and lots of funding; they just start using it in their ordinary business life and see what happens. And that's really exciting, because if a lot of them find a lot of useful applications, we're going to get a big wave of productivity where many people figure out many ways to do things better. That will not only make us all richer and healthier, but it will build more of a sense that we should be expecting and wanting innovation, change, and technology to spread; more optimism that the future could be better because of it; and more of an eagerness to look for all the things that go right and encourage them, instead of looking for all the things that go wrong and trying to fix them. That sounds like an exciting future over the next decade: the world gets more optimistic and excited because great stuff is happening, and people are more interested in pursuing new options than in closing them off and then complaining about why things aren't better.
That would be wonderful — at the very least it would cut down on the constant complaining. But I can't complain at all about this stream; this was extremely enlightening. I appreciate both of you taking the time to talk through this really important issue. Robin Hanson, Jaan Tallinn, thank you for joining me today. Thanks to everyone who tuned in and watched the stream, and to those listening via podcast. We will be back here next week, same time, Thursday at 1 p.m. We'll see you then. Thanks.