OpenAI announced version three of its image generator, DALL·E. Artists can send images to OpenAI that they wish to exclude from the training models. A future version of DALL·E can then block results that look similar to the artist's images and style.

Well, don't they have to train something to recognize whether the art is being created in that style? At which point, aren't they training a model to do just that? How would the model know to avoid your thing if it didn't implicitly know your thing well enough to make your thing?

I think you're totally right. That's probably true, but that's where the nuance comes in with this whole conversation. We almost have to look at this like a person, and I don't mean that in the "AI is just like a person" sense. What I mean is that for this thing to actively avoid an artist's style and apply these quote-unquote ethical standards, it has to know what it's being ethical about.