 asking if it's time to move beyond robots.txt for determining what can and cannot be crawled on the internet. robots.txt is a text file sitting at the root of most websites that, purely on goodwill for the past three decades, has determined what can and cannot be crawled. No-trespassing signs are great when you are only dealing with the people in your neighborhood; they are not great when you are talking about aerial photography of an entire county. Is there a method by which people should be able to say, in a perfect world, that even though my content is publicly available on the internet, you should not be able to train your AI on it? Yes. Is that hard to enforce? Also yes. And the goal for the AI company is that you never have to go to a search engine. They are, in essence, the search engine. They get the advertising dollars. They get the money from people who have pro accounts. They are ultimately going to profit off of other entities' content. And that's where everyone has such a big problem with it these days.
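
The goodwill mechanism described above can be sketched with Python's standard-library robots.txt parser. The crawler names and rules here are purely illustrative, not taken from any real site; the point is that nothing but the crawler's own honesty makes it obey the answer.

```python
from urllib import robotparser

# A minimal, hypothetical robots.txt: block a crawler named "ExampleAIBot"
# entirely, while allowing everyone else. These rules are only advisory --
# a crawler that ignores them faces no technical barrier.
rules = """
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# A well-behaved crawler asks before fetching a page.
print(rp.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

In practice a crawler would call `rp.set_url("https://example.com/robots.txt")` and `rp.read()` to fetch the live file, but the check itself is the same voluntary `can_fetch` call shown here.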