I think there's a lot of misunderstanding about the letter. Some people say it bans AI research and so on. What it really says is that we've developed a technology that's pretty powerful, but we haven't developed the regulation to go along with it. At the moment the technology is moving very fast and governments tend to move very slowly, so we need a pause on the development and release of still more powerful models so that regulation can catch up.

What the large language models are, right, is enormous circuits with perhaps a trillion adjustable parameters, and we adjust those parameters to make that circuit very good at predicting the next word given a sequence of preceding words, which sounds like a very harmless and innocuous thing to do. You say "happy," it says "birthday," so that's not too frightening. But you say, "How do I build a biological weapon in my back garden?" and it proceeds to give you detailed instructions for how to do that. That's a little less innocuous. The systems have already been shown to be capable of deliberate deception.

Another example of a harmful output is a user who asked ChatGPT to provide documented examples of sexual harassment in academic settings. ChatGPT fabricated an entire incident, named a real individual as having been responsible for something that never happened, and backed it up with citations to newspaper articles that didn't exist. That's a pretty serious harm done to that individual. The excuse that it's very difficult to get it to stop doing that is a pretty lame excuse, right? If I were a nuclear power plant operator and I said, "It's pretty difficult to get it to stop exploding," we wouldn't accept that as an excuse. No, the law is the law, the rules are the rules, and if you can't get your system to follow those regulations, then you can't release the system. It's as simple as that.

This is a problem with the fundamental design of the large language model, so it's not actually something they can easily fix by continually saying "bad dog, bad dog." It's just the way they work. They will output racist remarks and use stereotyped examples because those are, again, plausible things that people have said in the training data. But that should not be the goal, right? The goal should not be to say things that are plausible based on training data; the goal should be to produce outputs that are beneficial.

I think there are people who believe that we're not going to be able to regulate properly, that there will be a disaster, and then we'll try to regulate, but it might be too late. You could think of this as like Chernobyl, or you could think of this as like an asteroid about to hit the Earth and destroy all civilization. The time to develop your planetary defense system is not after the asteroid hits the Earth; it's before the asteroid hits. So I don't think we should wait for a big disaster before we regulate, before we figure out how we can make systems that are safe and beneficial. Because if they're not safe, they're not beneficial, right? It doesn't matter if we make lots and lots of profit and people have all kinds of cool toys if, eighteen months later, the lights are out.
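
A minimal sketch of the next-word-prediction objective described above, purely for illustration: a model assigns probabilities to possible next words given the preceding words, and the most likely one is emitted. The hand-written probability table and function names here are hypothetical stand-ins; a real large language model instead learns on the order of a trillion parameters from training data.

```python
# Toy "model": conditional probabilities of the next word given the last word.
# These values are invented for illustration only.
next_word_probs = {
    "happy": {"birthday": 0.6, "new": 0.3, "hour": 0.1},
    "new": {"year": 0.7, "york": 0.3},
}


def predict_next_word(context: str) -> str:
    """Return the most probable next word given the preceding words."""
    last_word = context.strip().lower().split()[-1]
    candidates = next_word_probs.get(last_word, {})
    if not candidates:
        return "<unknown>"
    # Greedy choice: pick the word with the highest conditional probability.
    return max(candidates, key=candidates.get)


print(predict_next_word("happy"))      # -> "birthday"
print(predict_next_word("happy new"))  # -> "year"
```

The point of the toy example is the same one made in the transcript: the objective is plausibility given the preceding text, not truthfulness or benefit, which is why a system trained this way can just as readily complete a harmful or fabricated continuation as a harmless one.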