So, you've got your image data, you've got your captions, you've installed Kohya, and you're ready to train your LoRA. Awesome. For simplicity's sake, I'll be demonstrating on images trained from my Stable Diffusion doll, which can be found for free on ArtStation. Inside the free version of the doll are the files I used for the training dataset, with all of the captions and a JSON file with the parameter settings used for this video.

So open up Kohya under LoRA, Training, and Folders. If you load the JSON file that comes with the doll, it will automatically set all the parameters for you. All you have to do is set the image, output, and log folders to the appropriate locations of your Kohya project and choose your model output name. This is just what the AI will call the training session, so if you like the results, you'll be able to identify which session it was and use it for later generations. I'm just going to call mine Blenda SD 1.5.

Okay, so at that point you could technically just hit Start Training and it will do its thing. Now, while it's doing its thing, I'll explain some of the most important parameters you might want to know about in case you need to make adjustments. Keep in mind this tutorial was made for the average user. When LaFourb and I were recording these videos, we made the decision to assume that most of you don't have top-of-the-line machines. You're probably working with, you know, a computer you bought in college, or something more average and affordable. So a lot of the settings we chose here were to accommodate what we assumed most people's computers could handle. But if your machine is amazing, you could probably change some of these settings.

So, under the LoRA type: there are a lot of different LoRA variants out there, and they each do different things.
But we're just sticking with the standard type, because that's the most documented and easiest to work with. LoCon and LoHa can theoretically be better, but they give you fewer options to toy with after training is done.

Training batch size generally works well at 1. If you make it higher, it will train faster, but the results also tend to be a little worse.

Epoch is how many times it will repeat the training. Remember that we put the number 20 in front of the name of the training folder, and that means it will go through each image in the folder 20 times during a training run. Epoch is another layer on top of that: if we set epoch to 1, the AI will go through every image in the folder 20 times. But if you set epoch to 3, it's going to go through every image 20 times, times three, so the training will take three times as long. For what we're doing, we're just going to leave it at 10.

Precision really depends on your graphics card. If you have a really powerful new card, then bf16 is probably what you want. But if you have an older card, it's probably best to use fp16 instead.

Number of CPU threads: generally two if you have a quad-core CPU, and if you have a six-core CPU, you could probably run three.

The scheduler basically dictates how efficient your learning rate will be over time. If you'd like to know more about that, I highly recommend you check out this article when you get the chance. Long story short, cosine with restarts worked pretty well for me, so that's the one I usually leave it at. But feel free to tinker with it and find out what works best for you.

Max resolution: 512 by 512 is generally the recommended default to save on VRAM. It will also train faster, because there's less resolution to process.

The text encoder and U-Net are kind of complicated to explain. If you really want to know how they work, I'll link the wiki in the comments below.
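To make the repeats-times-epochs math concrete, here's a small sketch. The 20 repeats come from the folder-name prefix mentioned above; the image count of 15 is a made-up example, not the actual size of the doll's dataset:

```python
# Rough sketch of how total training steps come out of the
# folder-prefix repeats, the epoch count, and the batch size.
# The image count (15) is a hypothetical example.
def total_steps(num_images, repeats, epochs, batch_size=1):
    steps_per_epoch = (num_images * repeats) // batch_size
    return steps_per_epoch * epochs

# 15 images x 20 repeats x 1 epoch, batch size 1 -> 300 steps
print(total_steps(15, 20, 1))   # 300
# Raising epochs to 3 triples the run -> 900 steps
print(total_steps(15, 20, 3))   # 900
```

This is also why raising the batch size speeds things up: each step consumes more images, so there are fewer steps overall.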
But generally, the text encoder is its own AI that associates words with the shapes you ask for in the diffusion model. The U-Net is a neural network that you partially train with the LoRA. It basically trains things in parts, and those parts have weights that can be used to change the composition of your generated images.

Learning rate is basically the speed at which the training steps move. The higher it is, the faster the training. But the faster the training, the more you run the risk of overtraining, which will deteriorate your network. We recommend a slower rate because it's generally safer, but feel free to try out different rates and test for yourself.

Network rank: generally, if you lack VRAM, you want to lower this number. 32 is fairly safe, and 64 is pretty good. For the settings down here, you can just read their descriptions.

If we go back up under the advanced settings: Keep tokens, we just leave at 0. This affects which tags get priority during the tag-shuffle process. Here, really the only things you have to worry about are these: if you want to save VRAM, you can enable gradient checkpointing, and if you have an NVIDIA GPU, you don't need to check memory-efficient attention, because you will have xformers, which normally takes over for it.

And if you want, you can ask for samples during your training. If you set something like 100 here, then every 100 steps it will generate a sample of your training at whatever state it's currently at, using the prompt you set here. This is useful when you want to check how the model is doing in the middle of training.

So those are the most important settings. And if you don't want to set these every single time, you can go back over here and save the settings. Then every time you open Kohya, you can just load your old settings and have everything reapplied automatically.
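As an illustration of what the keep-tokens setting controls during tag shuffling, here's a hedged sketch. The caption and tag names are made up, and the real shuffling happens inside Kohya's training scripts; this just shows the idea of pinning the first N tags:

```python
import random

def shuffle_caption(caption, keep_tokens=0, seed=None):
    """Shuffle comma-separated tags, keeping the first
    `keep_tokens` tags pinned at the front, similar to what
    the keep-tokens option does during tag shuffling."""
    tags = [t.strip() for t in caption.split(",")]
    head, tail = tags[:keep_tokens], tags[keep_tokens:]
    rng = random.Random(seed)
    rng.shuffle(tail)
    return ", ".join(head + tail)

# With keep_tokens=1, a trigger word like "blenda" always stays
# first, while the descriptive tags get shuffled each epoch.
print(shuffle_caption("blenda, blue dress, standing, simple background",
                      keep_tokens=1, seed=0))
```

With keep_tokens at 0, as in this tutorial, every tag is fair game for shuffling.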
Also, every time you do a training session, you will be able to find the JSON settings that were used to train it. The JSON files are created at the start of each training session, so you can always go back and load one even if you didn't hit the save button. Forgetting to save the settings doesn't mean you've lost them; you can always load the file from that training run to find out exactly what the settings were.

And that's really it. You're done. Just wait until the computer is finished. When the training is over and you look in the output folder, you will see how the AI was starting to learn the data. Now, if it looks like mush, don't worry; it's really not expected to look perfect. The main point of this is just for the AI to get the general gist of your character, because a lot of the magic happens in the generation process, which we will cover in the next video.

I hope that helps. As always, I hope you have a fantastic day, and I'll see you around.
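Since those settings are plain JSON, you can also inspect or reload them outside the GUI. Here's a minimal sketch; the file name and parameter keys are hypothetical stand-ins, so check the actual JSON your training run produced for the real field names:

```python
import json

# Hypothetical parameter keys -- the real Kohya config may use
# different names, so compare against a file it actually wrote.
settings = {
    "output_name": "Blenda_SD1.5",
    "epoch": 10,
    "train_batch_size": 1,
    "max_resolution": "512,512",
}

# Save the settings the same way the GUI's save button would...
with open("blenda_settings.json", "w") as f:
    json.dump(settings, f, indent=2)

# ...then reload them later to see exactly what a run used.
with open("blenda_settings.json") as f:
    loaded = json.load(f)

print(loaded["epoch"])  # 10
```

This is handy when you're comparing several training runs and want to diff what changed between them.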