People have been told for years now that automation was going to take away their jobs. And this sentiment has been greatly magnified in recent years by the proliferation of neural networks and large language models that people generally refer to as artificial intelligence. But one thing that I think a lot of people got wrong about the disruptions that would be caused by AI is the actual fields of work and expertise that AI was going to endanger or potentially even eliminate. Because growing up, I was told that the manual labor jobs were gonna be the first ones to go, especially the low-skill manual labor jobs, the kind of stuff where, oh, you can just go off to a trade school and learn how to do that. And this made sense to me when I was younger, because I would look back at the history of technological innovations, especially in the world of farming and food production. Basic agriculture literally took humanity from a hunter-gatherer species, which is what we were for most of our existence as Homo sapiens, where everybody's job was food production, and turned us into what we have now, where people are able to specialize into all these different fields of work that have absolutely nothing to do with food production. And of course, this got exponentially more true with things like industrialization, because then, instead of having to feed and house a bunch of uppity serfs on your potato plantation and worry about them getting wiped out by rogue barbarians or a plague, you could just have machines that do most of that difficult planting, watering, and harvesting labor much more efficiently than even your most eager, energetic medieval peasant. Now, we could go on and on about industrial society and its consequences, but one of the consequences that I think most people actually enjoy is the fact that artistic people have more time for their art.
In fact, the 21st century has probably been the best one yet for artists, because the internet has removed a lot of the gatekeeping that was involved. Before the internet, if you were good at drawing or painting, you had to get someone who runs an art show, or I guess who owns a museum, to put your artwork on display so that people could actually know about you and your work. And then from there, I guess some rich people would eventually sponsor you to paint them some stuff, and now you're finally able to make some money off of your passion. The barrier to entry was also a lot higher, because paint, brushes, canvases, that stuff's expensive, man. And I would imagine that the cost to become really good at painting or drawing, especially centuries ago, would have been way higher than getting a computer, or maybe a digital drawing pad, and getting good at digital artwork. Not to mention, digital tools make it so that some things that would normally be very difficult, like changing the color of something or undoing a mistake, can just be done by pressing a couple of buttons: clickity-clack, undo, change color, boom, now you're done. But what's funny about most of the AI tech we keep hearing about, the tech that's supposed to be replacing our jobs, is that it doesn't really endanger the guy doing manual labor as much as we thought. Carpenters, plumbers, welders, even truckers are not yet being put out of work by AI. Of course, there's self-driving, and those AI are getting better each day, but they're still not good enough for most people to want to put a self-driving AI in charge of a 30-ton truck full of expensive merchandise on a highway with other people. Legislators, the companies that own these trucks and the merchandise, nobody is quite ready to take that step.
But digital artists, man, that industry has been hit hard by tools like DALL·E 3, Midjourney, and Stable Diffusion. And just like how the AI for self-driving cars is trained on the driving habits of good drivers, these AI art tools are trained on the artistic habits, which is ultimately just the artwork, of good artists. And obviously, that's got a lot of artists pissed off, because the AI is literally using their work to take their jobs away. Now, so far, the way that most artists have been fighting against this AI-generated artwork is in the courts. They've been filing lawsuits against the companies that make these AI tools, but that really hasn't been going so well for the artists so far. It's kind of hard to use our current copyright laws to interpret what these AI art tools are doing as an infringement of copyright. And so there isn't much keeping the so-called prompt engineers, the people who are really, really good at using these tools to generate the specific kinds of images that they want and that other people like to look at, from just beating digital artists, people who are really good at GIMP or Photoshop or whatever, in contests and of course in the workplace, except for instances where private companies outright ban the use of AI-generated artwork, like Steam did. But then you've got a whole other problem, which is how do you determine whether or not something is actually AI-generated? Sometimes you can tell with specific images. AI tends to have trouble creating realistic-looking hands and especially photorealistic faces, because there are a lot of details that go into a human face, and I guess a human hand as well. And if you get any of them slightly wrong, it really triggers that uncanny valley sense; it just doesn't look right.
But when it comes to cartoons, backgrounds, logos, or anything else that human beings aren't as sensitive to, the AI is just able to rip off your artwork; it can pretty much be used as a drop-in replacement for commissioning an artist to do that work. And like I said, it'd be very difficult to tell that some of these simpler designs were actually generated by AI. But now the artists actually have a way to fight back against this, with a technique called glazing. Glaze is a free tool that was developed at the University of Chicago, and what it does is make small changes to your image, kind of like adding a filter, except the purpose of this filter is to prevent neural networks from being trained on your image. Obviously these tools are very early in their development, but we can see some examples in the UChicago news post. On the left is an example of an image without any glazing at all; this is just digital artwork made by somebody, or a scan of artwork made by someone. In the middle, there's just a small amount of cloaking, as they call it. And on the right is the same image with heavy cloaking. You can see that glazing does cause some minor alterations that a human being can pick up, like the yellowing in this girl's arm going up into her veil, and around her face there also seems to be a little bit of yellowing and distortion. The right image has way more yellowing and distortion going on throughout the entire image; it seems like there are artifacts all over it. So I can imagine that a lot of artists would be somewhat hesitant to run this tool on their artwork before publishing it for the world to see, because of that visual distortion. But again, this tool is free, and I suspect it's gonna get better over time.
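To give a rough picture of the cloaking idea, here's a toy sketch (this is not the actual Glaze algorithm, and the perturbation values are made up): the key trick is that each pixel only changes by a tiny, bounded amount, so the image looks nearly identical to a human, while the accumulated shift can still move the image around in a model's feature space.

```python
# Toy sketch of bounded per-pixel cloaking (NOT the real Glaze
# algorithm): apply a perturbation to each pixel, but clamp every
# change to +/- epsilon (out of 0-255) so the edit stays subtle.

def cloak(pixels, perturbation, epsilon=4):
    """Apply a perturbation, bounding each per-pixel change."""
    cloaked = []
    for p, d in zip(pixels, perturbation):
        d = max(-epsilon, min(epsilon, d))        # bound the change
        cloaked.append(max(0, min(255, p + d)))   # keep valid pixel range
    return cloaked

image = [120, 121, 119, 200, 30]
delta = [9, -2, 3, -15, 1]   # hypothetical "feature-shifting" direction
print(cloak(image, delta))   # -> [124, 119, 122, 196, 31]
```

The real tool computes the perturbation direction with a neural network so that the shift lands in a different artist's style region, but the "small, bounded change" constraint is what keeps the result presentable to humans.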
So hopefully it stays free and something that you can run locally. And as you can see down here with some more examples, the tool is effective. On the left is an original work of art by Karla Ortiz. The middle one is considered to be plagiarism: a plagiarized version created with AI, in the same style and whatnot. And on the right is what the AI would produce when it's trained on a cloaked version of the original artwork. Now, this left image is the original; I don't think they show the cloaked version of this particular piece. But if it looks anything like the example up above, with fairly minor visual distortions, then clearly those minor distortions are magnified through the neural network to produce this kind of acid-trip visual that looks nothing like the original artist's style. And here's the thing: the people who developed these protection tools (not the artists themselves, but the researchers whose tools the artists are using) rely on the same exact AI frameworks, like PyTorch, to fight against the artwork being ripped off. And I think there's something truly poetic about that: literally fighting fire with fire. But some of these glazing tools do more than just prevent neural networks from ripping off an art style. Some of them, like Nightshade, which was actually made by the same group that made Glaze, are offensive tools. They actually sabotage neural networks that end up using Nightshaded images in their training set. So let's say, for example, that you wanted to create a computer program that knows what a cat is. Not even necessarily one that can create pictures of cats, just one that can look at an image of a cat and accurately identify it as a cat. So if you ask it, hey, is this a cat, it's able to accurately tell you yes, that is a cat.
You pretty much have to solve that before you're able to solve the generation of cats anyway. Creating this program that can identify cats is going to involve creating a neural network and then feeding it lots of images of cats, and also some things that aren't cats. You reward it when it correctly identifies the cats, and over time it gets better and better at that. The bigger your sample size of cat images is, and more or less the longer you train it, meaning the number of rounds of training it gets, the better the final product is ultimately gonna be. So you're gonna want millions, if not billions, of pictures of cats, which is no problem, because the internet has loads of cat pictures. But if just a handful of those pictures are Nightshaded, it's going to completely throw off your AI's ability to identify what a cat is. You can see an example here. Dog is what we're trying to create, and these are some examples of what Stable Diffusion (I think SDXL is Stable Diffusion XL) produces when you tell it to create a dog, a car, a handbag, a hat, and so on. They don't specifically say here how many dog images it was trained on, but it would probably be millions, because there are millions of dog images out there on the internet too. But if you throw in just 50 poisoned samples of dogs, it creates this abomination that, I don't know, kind of looks like a rabbit that just wants the sweet release of death. At a hundred samples, we're making cats. We've completely changed this neural network's idea of what a dog is in its neural brain, and here we're actually starting to get some pretty good cats. But we're prompting for dogs, right? We didn't want a cat, we wanted a dog. Same thing with the car here.
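To see why a handful of poisoned samples can flip a model's idea of a concept, here's a deliberately tiny illustration (my own toy, not how Nightshade or Stable Diffusion actually work): a nearest-centroid "classifier" over made-up one-dimensional features, where poisoning just a few "dog" samples with cat-like features drags the learned dog centroid toward the cat region.

```python
# Toy data-poisoning demo: a nearest-centroid classifier over
# invented 1-D features. Dogs cluster near 10, cats near 30.

def centroid(xs):
    return sum(xs) / len(xs)

def classify(x, centroids):
    # Pick whichever class centroid is closest to the feature value.
    return min(centroids, key=lambda label: abs(x - centroids[label]))

# Clean training data.
dogs = [9, 10, 11, 10, 9, 11, 10, 10]
cats = [29, 30, 31, 30]
clean = {"dog": centroid(dogs), "cat": centroid(cats)}

# Poison just 3 of the 8 "dog" samples with cat-like features.
poisoned_dogs = dogs[:5] + [30, 31, 29]
poisoned = {"dog": centroid(poisoned_dogs), "cat": centroid(cats)}

x = 22  # a borderline, somewhat cat-like sample
print(classify(x, clean))     # -> cat
print(classify(x, poisoned))  # -> dog  (the poison flipped it)
```

The intuition scales up: the poisoned samples don't have to outnumber the clean ones, they just have to pull the learned representation far enough to flip borderline cases, which is why a few dozen crafted images can matter against millions of clean ones.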
Now you get a cow that looks like it's melting. Handbags become toasters, hats become cakes, and so on. Again, with just a handful of these specifically crafted poisoned images, as people call them. And I don't really think there's any effective way to fix your neural network once it's been trained on these. I mean, the only way to really fix it is to go through the sample data, because obviously some of it's poisoned, pick out the 100 or so poisoned images out of millions, and then retrain the neural network on a clean dataset. That obviously wastes hours and hours of time picking through those images, which I guess you're gonna have to have a human do. Well, there are some tools that people have managed to create to identify poisoned images. But if you don't know about those, or if the poisoned images beat the tools, at that point you've gotta use a human to pick them out, which is really gonna suck. And retraining is also going to waste many, many kilowatt hours of electricity. Some neural network people are even trying to argue that tools like Nightshade are actually dangerous malware that should be made illegal, because they're files that corrupt other people's computer systems. That's a bit of a stretch, and I actually disagree with that idea, because even though I really like AI art tools, we shouldn't restrict what artists are able to do with their own artwork just because it messes up your neural network when you use it for training. And of course, like I said, there are some other tools, like Nightshade Antidote, that are being used to fight back against Nightshade. But the way I see it, the high-level view of all this is that it's ultimately more free software being added into the world that anyone's able to use, and it's clearly useful for some people, which I'm always for.
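For what it's worth, one simple defense along those lines can be sketched like this (this is the general outlier-filtering idea, not any specific published tool; the features and threshold are invented): flag training samples whose features sit suspiciously far from their class average, on the theory that a poisoned "dog" image carries cat-like features under the hood.

```python
# Hedged sketch of outlier-based poison screening (not a specific
# real tool): flag samples far from their class's average feature.

def flag_outliers(features, threshold=8):
    """Return the feature values that sit farther than `threshold`
    from the class average; these are candidates for human review."""
    avg = sum(features) / len(features)
    return [f for f in features if abs(f - avg) > threshold]

dog_features = [9, 10, 11, 10, 9, 30, 31]  # last two are "poisoned"
print(flag_outliers(dog_features))         # -> [30, 31]
```

In practice this is an arms race: a filter like this only catches poison whose features are obvious outliers, and tools like Nightshade are specifically designed to make the perturbed features look plausible, which is why a human reviewer often ends up in the loop anyway.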
I just kind of wish there was a version of Nightshade that was able to run on Linux instead of forcing people to use Windows or macOS. But let me know what you think in the comments below. Should Nightshade be legally treated like a harmful piece of malware? Do you think it's actually gonna be effective at protecting artists' artwork, or will people just use computers to fight against the other computers that are trying to corrupt the artwork-generating computers in the end?