So you trained your LoRA, but you have no idea how to use that training to generate new images. Not a problem. After your training, you'll see a bunch of .safetensors files in the output folder. These represent the beginning, middle, and end of your training session. Copy them, go to your Stable Diffusion folder, then webui > models > Lora, and paste them in.

Now we're going to download some more stuff, because when we generate, it's important to get our AI model as close as possible to the exact style we want before we add our training to it. I am specifically going for a very vibrant, colorful anime style, so there are a few things I'm going to download to make my workflow simpler, faster, and easier.

We'll start with Forge UI. Download it from the link in the description, go to where your Kohya and Stable Diffusion folders are, and add a new folder called Forge. Drag your Forge zip inside and extract its contents. Next, you're going to want to copy the models from your old Stable Diffusion UI to the new Forge UI: in your old sd-webui folder, go to models and copy that folder, then go into Forge UI > webui and paste the models folder to replace the one there. From this point, you can run Forge exactly the same way you would normally run Stable Diffusion: just click the run executable.

Now I'm going to download Model X, since I think that one is going to serve us a little better than what we're using right now. Once it's downloaded, go to Forge > webui > models > Stable-diffusion and paste it in. Now when we run Forge and click refresh, you should be able to select Model X right here.

And last, we're going to want to download a negative embedding, because a lot of people forget that if you are not specific, you will not get anything specific. If you do not want ugly, disgusting, amateur, low-res, blurry slop, you have to specifically tell the AI: hey, yo, I don't want none of that here.
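The file-copying step above can be sketched as a small script. All the folder names here are assumptions based on a typical Kohya and webui layout (and the demo file is a stand-in so the sketch runs anywhere), so adjust the paths to your own install:

```python
import shutil
from pathlib import Path

# Hypothetical folder names -- change these to match your own install.
kohya_output = Path("kohya_ss/outputs")                 # where training wrote the files
lora_dir = Path("stable-diffusion-webui/models/Lora")   # the web UI's LoRA folder

# (Demo only: create a stand-in training output so this sketch runs anywhere.)
kohya_output.mkdir(parents=True, exist_ok=True)
(kohya_output / "mychar-000010.safetensors").touch()

# Copy every saved checkpoint -- beginning, middle, and end of the session.
lora_dir.mkdir(parents=True, exist_ok=True)
for f in sorted(kohya_output.glob("*.safetensors")):
    shutil.copy2(f, lora_dir / f.name)
    print("copied", f.name)
```

Copying all of the checkpoints (not just the last one) is what lets you compare earlier and later versions of the training later on.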
This is what a negative embedding does. People who have been working really hard on this have basically already figured out a big list of words we usually don't want popping up in our designs, and they've organized them into one nice, neat embedding file that you can find linked in the description. Once you've got it, go into your Forge UI > webui > embeddings folder and paste it in. Now you are technically ready to generate your stuff.

It's always good to test things before the LoRA training and after, so we'll just ask for a rough estimate of what the character looks like using our keywords: woman, long hair, ponytail, shorts, orange hair, and something simple and fun, like playing on the beach. To add our negative embedding, we go to the Textual Inversion tab and hit refresh; we should see it there, and clicking it dumps it right into the negative prompt. If we generate now, we can see what the AI would normally give us without our training. All right, looks pretty good. This is a good base for us to build whatever we need on, specifically an anime style.

So now let's see what it looks like when we add the training we just finished in the last video with our custom character. We can do this by going to the Lora tab, hitting refresh, and picking the training models from the last session. To get a good comparison, we'll recycle this exact seed. All right, look at that. That is definitely my character. And the cool thing is, because the trainer saved multiple versions of the training, we can actually see how the result would differ if we used less-trained versions of the model. This is the last training checkpoint, so let's see what it looks like when we use the first one instead. As you can see, the features of my character are a little less defined, because the LoRA we're using is from a much earlier point in the training.
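A quick sketch of what the prompt pair described above ends up looking like. A textual inversion embedding is triggered by its filename; `my-negative-embedding` here is a hypothetical placeholder for whatever embedding file you actually downloaded:

```
Prompt:          woman, long hair, ponytail, shorts, orange hair, playing on the beach
Negative prompt: my-negative-embedding, lowres, blurry
```

The point is that the single embedding token in the negative prompt stands in for that whole curated list of unwanted terms, so you don't have to retype it every time.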
And sometimes that's a good thing, because sometimes the final checkpoint is a little overtrained and not flexible enough to use for general things. For example, if you overtrain it, it might be difficult to get her to wear different clothes. So we can easily just go back and use a less-trained model from an earlier point in the session if we think it's going to give us the results we want.

Now, something cool we can do is adjust the weight of the training right in the prompt. Here you can see this number 1, which means the training is applied at 100% strength. If we change this to 0.5, the training will only be applied at half strength, so the features of my character will be a little less defined, allowing the AI to diverge from the original to create some new designs.

So now that you've seen this process in action, you're probably starting to get an idea of how this actually works. Training an AI to generate your work consistently is not really a one-step kind of process. With a good dataset and good tags, you can probably get 80% of the way there pretty fast. But ideally, to get a much higher level of consistency, what you would do from this point is: when you find generated images that are really close to or exactly what you want, you add them to the dataset and retrain on it, which will hopefully give you even more images that are closer to what you want; then you take the best of those and retrain again. That's how you're going to get the best results. But if you just want to keep it general and have fun with it, you can see the kind of results we get from just one training session. Essentially, what you are trying to do is guide the AI so that it understands what you want to generate when you ask for a certain style on your specific character with that character's specific features.
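In A1111-style UIs like Forge, that weight lives inside the LoRA tag in the prompt itself. A sketch, where `mychar` is a hypothetical placeholder for your LoRA's filename:

```
<lora:mychar:1>
<lora:mychar:0.5>
```

The first form applies the training at full strength; the second at half strength, leaving the AI more room to diverge from your character's exact features.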
And we do this by leveraging the things that are already inside the AI's default tendencies. That is why it's hard to find tutorials on this kind of stuff: the process and the tools you'll want to use and download will be very different depending on the style you're going for. We were specifically aiming for an anime style, but if you're aiming for a different genre, with different dimensions, different lighting, more realistic or more cartoony, then all of that affects the type of models you'd want to use instead, the type of negative embeddings, and the type of base model you might want to start out with. Which means that to get the most out of your art, it really helps to understand the fundamentals of classical art: the names of the different styles, the different techniques, the different genres, the different types of lighting, different art philosophies, photography terms, camera angles, portrait versus landscape, and all the traditional art jargon.

Which is why, if you are a programmer or an accountant or a doctor or a lawyer, or really anyone who's not familiar with the jargon professionals use, I've got you covered and I have a gift for you. I come from a family of classically trained artists, and my family and I have put together a 100% free introduction-to-art crash course where I take you through all the most famous and most important historical art styles, classic artists that everyone should know, and some fun facts you'd probably have to go to art school to learn. And no, this is not a trick or a gateway. There are no videos hidden behind a paywall on Patreon. In fact, everyone on my Patreon squad wanted this information to be free for everybody, and that's where it's going to stay. So check it out if you're getting into art for the first time, because it's a lot of fun, and it's extremely helpful when you know how to describe exactly what you're aiming for.
I, for one, am really glad to have you here, and I think more people interested in art is a good thing. So check out the series; it's free. I hope that was helpful, and as always, I hope you have a fantastic day. I'll see you around.