There is this memo that leaked from Google. I'll just read a little bit of the note: "The uncomfortable truth is we aren't positioned to win this arms race, and neither is OpenAI." What this memo author is saying is that Google's competitor is not really OpenAI or any other firm, but actually open source models. Meta's model weights, I guess, leaked out, and open source engineers have been creating AIs on top of them that seem, at least according to this author, to be catching up to what OpenAI and Google are doing. And he says they're doing things with $100 and 13 billion parameters that we struggle with at $10 million and 540 billion. How much credence do you give that, that the open source movement is going to catch up or maybe even lap the big centralized firms?

The open source world at large has been working from the biggest openly available pre-trained system, which for a while, and perhaps still, was Facebook's LLaMA, which they trained and, in my opinion, completely irresponsibly just dumped on the world. So open source certainly does not currently have the ability to throw hundreds of millions of dollars, or again, soon to be billions, at these large AI-summoning experiments. When Robin said that he would be worried if people were deliberately trying to create worst-case scenarios, making AI to destroy the world deliberately: well, with open source, it's not hard to find existing projects that try to use open source AIs to maximize the damage. I mean, the AIs aren't that competent yet, so these are more fun-and-games projects at this point, but this is just early 2023.

How would you even regulate open source development of these products? I mean, that kind of seems to undercut the entire idea that there's even much you could do about it. Yeah.
So, I mean, we already have penalties for developing viruses and things like that, so probably some kind of liability could be assigned from there. But I agree, this is much, much harder than just making sure that they're not going to summon even more competent minds and release them on the public.

So here's another compromise solution. I know people who work on secure operating systems who say that you can provably show that some operating systems are completely secure. And maybe you could just require they use those operating systems for the 200 lines of code here. That would be a relatively low cost, honestly, and it would actually help kickstart this secure operating system world. That seems like a reasonable regulation that would be relatively low cost and addresses your most direct concern here. Again, I'm looking for compromises; I'm looking for things we can agree on.

Yep, 100% agreed. But really importantly, the companies are currently racing, and they are not motivated to take any of those steps.

The danger here is that if we empower some regulatory body to do stuff, it won't just want to do a few best things. It'll have a big public behind it, and they'll want to, you know, make speeches and show how concerned they are and just do too much extra stuff. So even if we authorize some regulation, we want to make it limited. And that's part of the problem here: the regulation often isn't limited.

What are your major concerns with what non-careful regulation might do in terms of the development of this technology?

Well, humanity in the last century basically shut down the nuclear energy industry pretty effectively. We basically said no, we didn't want to go there, with modest exceptions. More recently, we basically shut down genetic engineering and said no, we didn't want to go there. We may just see AI that way and want to shut it down.
That is, some people say that we really couldn't regulate this, that it would be infeasible: it's just too spread out, with too many strong interests. Looking at the past, I'd say it is possible to strongly regulate some industries. And there is strong public opinion in support of being wary of AI and holding it back. So I fear the worst case of really just shutting down the whole industry and forgoing enormous potential.

If the government and various regulators around the world fail to crush AI in its infancy, what are the prospects of an AI future, an AI-dominated future, let's say, that most excite you?

Well, over the last few decades we've seen many exciting new technologies appear. Compared to those, this is more exciting because it seems to have more potential, but it's also in some sense more democratic. That is, most of the people figuring out what this can do are just people trying it themselves. They don't need to form a big startup and get lots of funding; they just start using this in their ordinary business life and see what can happen. And that's really exciting, because if a lot of them find a lot of useful applications, we're just going to get this big wave of productivity where a lot of people figure out a lot of ways to do things better. And that will not only make us all richer and healthier, but it will make more of a sense that we should be expecting and wanting innovation and change and technology to spread, more of an optimism that the future could be better because of that, and more of an eagerness to maybe, instead of looking for all the things that go wrong and trying to fix them, look for all the things that go right and how to encourage them. An exciting future over the next decade is a world that gets more optimistic and excited because great stuff is happening.
And people are more interested in pursuing new options than in closing them off and then complaining about why things aren't better.

Hey, thanks for watching that excerpt from our live stream with Robin Hanson and Jaan Tallinn about artificial intelligence, existential risk, and the dangers of preemptive regulation. You can watch our full conversation right here, or another excerpt right here.