I'm not sure what happened. I suspect an internet dropout, but then my OBS dropped out as well. Okay, let's jump back into development.

We have this tool here; it's just an example of neural network training with the default parameters. This is what it does. I suspect the default parameters will have to be changed, and we'll leave it with the description at the bottom of the page and see how we go. It's just loading the model now: the HTML and JavaScript, and the Python. The Python is currently empty; nothing is happening at the back end.

So if we increase the training set size and hit the train button again, we'll see if it does any better. No. The first hidden layer, that's at three; let's put it at five and train again. See how it goes. Yeah, not much better. Let's put the second hidden layer size at five and train again. Now it's starting to converge. The noise level is just the noise level in the input. Now, learning rate... actually, let's restart this.

While this is training, I'll read what GPT-4 has to say. We'll go over these comments one by one. A good thing about GPT-4 is that you can do this. We're changing parameters, and I don't think the default parameters are very good: "Can you suggest how to change the default parameters in the HTML?"

Another problem: every time we train, the training loss curve is added to the existing chart, which is a bit odd. I want the prediction chart to behave the same way, either with waveforms being added to the chart, or with the chart refreshing every time we train. Any suggestions welcome. Yep, it's making quite a few suggestions.

We have another problem: we're currently not using CSS, because the stylesheet we have isn't working correctly. "Can you regenerate the CSS stylesheet?" Two things, actually. The CSS is okay; I just want to make sure that on a bigger screen the controls sit beside the two charts, and on a smaller screen they can go below the charts. Second, we want the default values for the controls to be specified inside the HTML, so for any changes you're suggesting, please change the HTML as well.

Yep, it's suggesting we check the media query. For some reason it's going with 768 pixels as the cutoff between a large and a small screen. Okay, let's settle this quickly: "Can we regenerate the whole CSS stylesheet? Also, when you use any colors, can you use the following template: start the stylesheet with these five colors defined."

Yeah, adding more epochs is probably helping. Is the loss reducing? No. It's stuck, starting from around 0.4 and not really reducing. We changed the learning rate; I don't really know what that does here. The number of epochs? No. It was a bit odd. We reduced the noise level. All right, that worked. Hmm, something's not right: after changing the parameters, the effect of the noise level should have gone away. It's not reducing, is it? All right, this is a bit odd.

Let's sort out the CSS first and have the bot make it for us. It's not looking at the screen size. Let's Ctrl+F5 this. Yeah, it's not great, is it? The previous version was better. Let's go back to the first prompt: "Could you give a description for this application? How could it be used? What are the different controls?" It's really struggling with that CSS. The charts are meant to be displayed side by side, aren't they? But they don't fit. "Can we fix the CSS?"
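For context on the noise-level and training-set-size controls: the exact data generator is never shown in the recording, but the tool appears to train on a waveform with adjustable noise. A plausible sketch of that kind of input, where the sine shape, the function name, and the parameters are all my assumptions:

```js
// Hypothetical data generator: a sine waveform plus uniform noise.
// `size` maps to the "training set size" control and `noiseLevel` to the
// noise slider; noiseLevel = 0 should make the data fully deterministic.
function makeTrainingData(size, noiseLevel) {
  const xs = [];
  const ys = [];
  for (let i = 0; i < size; i++) {
    const x = (i / size) * 2 * Math.PI;
    // uniform noise in [-noiseLevel, +noiseLevel]
    const noise = noiseLevel * (Math.random() * 2 - 1);
    xs.push(x);
    ys.push(Math.sin(x) + noise);
  }
  return { xs, ys };
}
```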
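The chart-refresh behaviour I asked GPT-4 for above is also easy to sketch by hand. The tool's chart library isn't confirmed in the recording; assuming it's Chart.js, and with function and variable names that are mine, clearing the chart at the start of each run looks like this:

```js
// A minimal sketch, assuming the tool uses Chart.js and a chart instance
// named `lossChart`. Resetting the datasets before each run keeps the loss
// chart to one curve instead of stacking a new curve on the old ones.
function resetLossChart(lossChart) {
  lossChart.data.labels = [];                                  // drop the old epoch axis
  lossChart.data.datasets = [{ label: 'Training loss', data: [] }]; // start a fresh curve
  lossChart.update();                                          // redraw the emptied chart
}

// Hypothetical train-button handler: reset first, then push one point per epoch.
function onTrainClicked(lossChart) {
  resetLossChart(lossChart);
  // during training, per epoch:
  //   lossChart.data.labels.push(epoch);
  //   lossChart.data.datasets[0].data.push(loss);
  //   lossChart.update();
}
```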
There's a problem: "Can we make sure that on the larger screen the two charts are displayed beside the controls, and on smaller screens everything is displayed top to bottom?" So, yep, let me read the description in the meantime. It's positive. It's also fixed the CSS, so let's refresh. Yeah, I'm checking the media query now; it does seem to be doing the right thing with the display size. But the current problem with the CSS is that on a larger screen the controls don't seem to fit beside the charts. "Can we fix this?" Yep, it's giving them some space, but they're meant to go over there. It may relate to the fixed width set in the media query: "Can we try using flex properties instead of fixed widths?" Okay, sounds good, sounds good; still generating.

We also need to change the HTML: "Can we go over each control and change the default values so that the prediction converges better?" The CSS still isn't right; there's a min/max width on the controls that we might need to change. It's actually changing the controls, changing where they're positioned on the screen a bit, and it regenerated parts of the HTML code. Okay, it's reading out that code. I wish the text-to-speech were better, in the sense that you could have more control over it; otherwise it's a pretty good voice.

Okay, hidden layer: for the first hidden layer it suggests a range of 1 to 10 instead of 1 to 6, with a default value of 5. Let's try that straight away. Okay, let's quickly turn off the CSS for the moment so we can actually see what's going on. For hidden layer 2's size, the bot suggests 0 to 10 with a default value of 3, which is kind of what we had. For the noise level, it's defaulting to zero noise. Well, actually, maybe zero noise isn't such a bad idea for a second. Now, learning rate. The learning rate is a bit odd: instead of a maximum of 0.3 it suggests a maximum of 0.1. Okay, I don't think this will make much difference. Is it converging? It's going down, so I could possibly add more epochs, but I don't think it's going down sharply enough. Okay, epochs: instead of 100, it suggests a default of 200. I think we can live with that. Nothing converges; that's not cool. Batch size: it suggests a default of 10. Okay, let's try that.

I don't understand why it's jumping around like this. I don't get it. What's the point? Why is it doing that? The activation function keeps the same options, with the first one as the default, is it? Yeah, that remains the same. Optimizer: the default remains the same. Loss function: okay, that doesn't converge. Rather than reading through it all quickly again, let's do something else: take a screenshot quickly and send it without a prompt. See what it says about why the loss is jumping like this. It's essentially bouncing between 0.1 and 0.5. It keeps jumping around. That's not cool. I don't know why it does that; there must be something wrong.

We don't need these comments. Actually, do we need any comments now that we have LLMs explaining the code to you? I don't think you need any comments. Well, nobody will be reading code anymore either. What do you think of that? Should you be scared as a software developer? Probably yes. I'm just musing while the bot is generating stuff. Yes, it's also doing better with a 0.03 learning rate, though it's not going any lower. With 0.35 it's going down to 0.1. We'll want some sort of table for the sessions.
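For reference, the layout I kept asking for fits in a few lines of CSS. The class names and the five color values below are placeholders, not the ones from the actual page; the 768px cutoff is the one GPT-4 chose. A sketch:

```css
/* Sketch of the requested layout, not the stylesheet GPT-4 produced.
   Define the five-color palette once at the top, per the prompt. */
:root {
  --color-1: #1f3a5f;
  --color-2: #4f83cc;
  --color-3: #f0f4f8;
  --color-4: #e07a3f;
  --color-5: #2e2e2e;
}

/* Large screens: controls beside the two charts, using flex shares
   instead of the fixed widths that broke the earlier version. */
.container {
  display: flex;
  flex-direction: row;
  gap: 1rem;
}
.controls { flex: 1; } /* controls take one share of the row */
.charts   { flex: 2; } /* the two charts take twice as much */

/* Small screens (GPT-4's 768px cutoff): stack everything top to bottom. */
@media (max-width: 768px) {
  .container { flex-direction: column; }
}
```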
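And the suggested defaults can live directly in the HTML, as requested, so they survive a stylesheet regeneration. The ids and labels here are my guesses; the ranges and default values are the ones discussed in the session, with the learning rate defaulted to the 0.03 that behaved best:

```html
<!-- Hypothetical control markup; ids are illustrative, the ranges and
     defaults are the ones GPT-4 suggested during the session. -->
<label>Hidden layer 1 size
  <input type="number" id="hidden1" min="1" max="10" value="5">
</label>
<label>Hidden layer 2 size
  <input type="number" id="hidden2" min="0" max="10" value="3">
</label>
<label>Noise level
  <input type="number" id="noise" min="0" max="1" step="0.05" value="0">
</label>
<label>Learning rate
  <input type="number" id="lr" min="0.001" max="0.1" step="0.001" value="0.03">
</label>
<label>Epochs
  <input type="number" id="epochs" min="10" max="1000" value="200">
</label>
<label>Batch size
  <input type="number" id="batch" min="1" max="100" value="10">
</label>
```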
A question is: how can I monitor performance between different sessions? No, sorry, between different training runs within the one session of using the application. Yep, 0.45 gets down to 0.2; that's worse. 0.35 was better, about 0.29. Yeah, so there's this magic number, and that's not cool. I think this is what a lot of people end up doing: just manually fine-tuning stuff endlessly. It's what you see with a lot of the Kaggle competitions, where competitors are allowed to submit, I don't know, something like 100 submissions per day.

I'm just worried that it's jumping around like that as well. Can it analyze this image too? The training loss is jumping between 0.5 and 0.165, and it's still generating. Okay, so yeah, we'll need to... Yeah, I like the console logging. We need the run number. I have GitHub Copilot trying to fix this. Apparently those variables don't exist. Okay, how do you do hypothesis-driven experimentation? In this case it should have been a simple task. A batch size of 20 doesn't do any better. Yeah, it's behaving a bit oddly. This one at 0.25 does get to 0.1. Sorry, I'll have to remember to edit this out if I'm turning this into a video. I don't think this will be a good video, but we shall see later.

Yeah, obviously adding more epochs doesn't help in this case. How about increasing the training set size? I don't think that will do much, but we can try it; that's what we have this tool for. Yeah, it was performing better before. This is a bit odd. So then after a Ctrl+F5, the same 0.25 gives me 0.1, but it's not converging anymore. Something isn't right; I don't get it. Shouldn't it be deterministic? Okay, with this learning rate I'm now getting anywhere from 0.14 to 0.23. I'll try to understand. Yeah, probably someone is shouting at me already. Why, for the same waveform? The waveform doesn't have any noise in it; the noise is 0. Why am I getting different results? The input, the training data, is the same. So if the training data is the same and the noise level is 0, why am I getting different results every time I run 200 epochs? There is randomness in the process: random weight initialization, data shuffling, stochastic gradient descent, and hardware parallelism and concurrency. It's really weird. Yeah, right.

We might have to continue this next time. We'll have some sort of metric of how the performance is tracking: you know, when you load the page and hit "train neural net" multiple times, there will be some sort of table that monitors its performance over time. We need some more changes to the code to be able to do that. And yes, do we need to change the model or something? Sure. We have all that stuff in the description of the HTML; I just have to actually turn it into HTML text, and we'll continue from there.

Hopefully we'll make this tool available for you soon on the website somewhere, probably, you know, first at the top. And if you haven't checked out the Barney Chaos yet, please go do so. There are a lot of interesting tools, and you'll be supporting the project this way by watching some advertisements. Hopefully you find something useful. Let me know how you go, and don't forget to leave comments with your feedback. I'll see you in a bit. Bye.
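A footnote on the non-determinism question from this session: two of the randomness sources listed above can be pinned down in code. The tool's training code isn't shown in full in the recording; assuming it trains with TensorFlow.js, a sketch (layer sizes, seeds, and names are illustrative) would be:

```js
// Sketch assuming TensorFlow.js. Two randomness sources are controllable here:
// 1. random weight initialization -> seed the kernel initializers
// 2. data shuffling between epochs -> shuffle: false (or a seeded shuffle)
// Hardware parallelism and float non-determinism can still cause tiny drift.
function buildSeededModel(learningRate) {
  const model = tf.sequential();
  model.add(tf.layers.dense({
    units: 5, activation: 'relu', inputShape: [1],
    kernelInitializer: tf.initializers.glorotUniform({ seed: 42 }),
  }));
  model.add(tf.layers.dense({
    units: 1,
    kernelInitializer: tf.initializers.glorotUniform({ seed: 43 }),
  }));
  model.compile({ optimizer: tf.train.adam(learningRate), loss: 'meanSquaredError' });
  return model;
}

// Identical data + seeded weights + no shuffling => near-identical loss curves.
async function trainDeterministically(model, xs, ys) {
  return model.fit(xs, ys, { epochs: 200, batchSize: 10, shuffle: false });
}
```

With the training data fixed and noise at 0, any remaining run-to-run drift should come from parallel floating-point execution rather than the model setup.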
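And the run-monitoring table planned for next session could be as simple as an in-memory list of runs plus one appended table row per training run. The element ids and the exact columns below are hypothetical:

```js
// Sketch of the per-session run tracker; element ids are hypothetical and
// assume a <table id="run-table"> with a <tbody> exists in the page.
const runHistory = [];

function recordRun(params, finalLoss) {
  const run = {
    run: runHistory.length + 1, // run number within this page load
    ...params,                  // e.g. { lr: 0.03, epochs: 200, batch: 10 }
    finalLoss,
  };
  runHistory.push(run);
  console.log('run', run);      // keep the console logging as well

  // Append one row per training run so performance is visible over time.
  const row = document.createElement('tr');
  row.innerHTML =
    `<td>${run.run}</td><td>${run.lr}</td><td>${run.epochs}</td>` +
    `<td>${run.batch}</td><td>${finalLoss.toFixed(3)}</td>`;
  document.querySelector('#run-table tbody').appendChild(row);
}
```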