Lastly, I want to mention that we have a bunch of columns at the end that target the quality of the forecast. Let me quickly talk about those. E is, as I said, just the error: the simple difference between the forecast and the realization of demand. So these are basically my raw deviations. We will also need to square those for future calculations, and I'll talk about that later too. We also want to take the absolute value, so just remove the negative signs. The reason is that many companies only care about the magnitude of the deviation: you lose money if you over-forecast, and you lose money if you under-forecast, so the sign doesn't matter too much. You just want to look at the magnitude of the deviation.

Now, some people may say that magnitude by itself is not enough; you also have to compare it with the actual level of demand. A deviation of two points when your demand is a hundred is a lot less concerning than when your demand is 10. If your demand is 10 and you're deviating by two, you're 20% wrong. So that percentage is also important, and the APE column takes it into account. The way we calculate it is simply by dividing the absolute error by the realization of demand: a 0.3 error divided by 30 gives us 1%. So right now I just have a one percent error, and that gives us some perspective about the amount of error.

Then we have some end-of-pipe metrics. MD is your mean deviation: the average of the deviations. This tells me that on average I'm 0.9 above the real demand, so I'm slightly over-predicting. Maybe my dampening parameter needs to be adjusted; maybe I'm reacting too quickly to the demand.
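As a sketch of these column calculations, here is how the raw error E, its absolute value, the APE, and the mean deviation MD could be computed. The demand and forecast numbers below are illustrative, not the ones from the spreadsheet; only the first row (a 0.3 error against a demand of 30) is taken from the example above.

```python
# Illustrative demand realizations and forecasts (only the first pair,
# 30 vs. 29.7, matches the transcript's worked example).
demand   = [30.0, 28.0, 33.0, 31.0]
forecast = [29.7, 29.0, 31.5, 32.0]

# E: raw deviation, forecast minus realized demand
errors = [f - d for f, d in zip(forecast, demand)]

# |E|: magnitude of the deviation, sign removed
abs_errors = [abs(e) for e in errors]

# APE: absolute error relative to the realized demand
apes = [abs(e) / d for e, d in zip(errors, demand)]

# MD: mean deviation -- signed errors averaged, so over- and
# under-forecasts can cancel each other out
md = sum(errors) / len(errors)
```

The first row reproduces the transcript's arithmetic: |29.7 − 30| / 30 = 0.3 / 30 = 1%.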
But a better metric is the MAD: mean absolute deviation. This is the average of the absolute deviations. It tells me that on average I'm above or below by 1.48, and that's much more valuable information for me, because I don't care whether I'm over-predicting or under-predicting; it's more about how closely I follow the demand.

If you also want to look at the percentage, that's the MAPE. The MAPE is simply the average of the APEs you calculated, which gives you an idea of how far off you are percentage-wise. Right now I'm slightly above 4% off, and this is super useful information; you can directly convert it to money in a company setting.

Then we also have the mean squared error, which makes smart use of exponential smoothing to get a running estimate of the MSE. Basically, you start with some arbitrary value, and then every period you update that MSE based on the new demand in C7 and your old estimate. This is something I have seen used a little less in industry, so just refer to the formula for the details; we'll pass over it quickly. It's simply updated according to the exponential smoothing formula each period, so we get a more accurate estimate of the MSE over time. That's not something I want to focus on right now.

That said, I want to cover two more points here. Imagine this is a very good forecast: my red line is the actual demand, my green line is the forecast, and it follows the demand closely, which is good.
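The MAD, the MAPE, and the exponentially smoothed MSE described above can be sketched as follows. The demand and forecast values are again illustrative, and the smoothing weight `alpha` and the starting MSE value are assumptions, since the transcript only says to start from "some arbitrary value."

```python
demand   = [30.0, 28.0, 33.0, 31.0]
forecast = [29.7, 29.0, 31.5, 32.0]
errors   = [f - d for f, d in zip(forecast, demand)]

# MAD: average of the absolute deviations
mad = sum(abs(e) for e in errors) / len(errors)

# MAPE: average of the absolute percentage errors (APEs)
mape = sum(abs(e) / d for e, d in zip(errors, demand)) / len(errors)

# Exponentially smoothed MSE: start from an arbitrary initial value and
# move it toward each new squared error (alpha = 0.2 is an assumed weight)
alpha, mse = 0.2, 1.0
for e in errors:
    mse = alpha * e**2 + (1 - alpha) * mse
```

The design choice behind the smoothed MSE is the same as in exponential smoothing of demand itself: recent squared errors count more, so the estimate adapts if forecast quality drifts over time.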
And this also shows up in my metrics. If you look at MD, the mean deviation, it shows that on average my errors cancel out: sometimes I'm up, sometimes I'm down, but overall they cancel. If I'm up in one period, I go down in the next, and overall I'm okay. And if you look at the MAD and the MAPE, I'm only deviated by one point on average, which is 33%.

Now let's look at another forecast, one that is biased. Why do we call this bias? Because I am consistently over-predicting; I'm always predicting too much. Even if you're only slightly above your demand, the fact that it's consistent is the problem: it means you haven't calibrated things well, so you're consistently making mistakes. That shows up in MD, which says your forecast is, on average, about two points above the demand. And the MAD is also going to show it: on average you're 2.67 deviated, which is 8%. So when you have bias, both MD and MAD will show it.

What happens if you just have variance? In this case there is no bias, just variance: sometimes I over-forecast too much, sometimes I under-forecast too much, but overall I'm not consistently up or down. The problem here is that MD will misinform us. Because the deviations cancel out, MD will be close to zero, and you'll think you're doing very well. That's why we always need to use the absolute values. If you look at the absolute values, this forecast is actually worse than the previous one: that one had a MAD of 2.67, and this one is worse. The MAPE will also show it, telling us that we are 30% off on average. So that's the point about variance and bias.
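The bias-versus-variance contrast above can be made concrete with a small sketch. The demand level and the two hypothetical forecasts below are made up for illustration: one is consistently above demand (bias), the other swings above and below (variance). MD exposes the first but hides the second; MAD exposes both.

```python
demand = [3.0, 3.0, 3.0, 3.0]

# Biased forecast: consistently above the demand
biased = [5.0, 5.0, 5.0, 5.0]
# High-variance forecast: large swings with no consistent direction
noisy  = [6.0, 0.0, 6.0, 0.0]

def md(forecast, demand):
    """Mean deviation: signed errors can cancel out."""
    errs = [f - d for f, d in zip(forecast, demand)]
    return sum(errs) / len(errs)

def mad(forecast, demand):
    """Mean absolute deviation: magnitudes cannot cancel."""
    errs = [f - d for f, d in zip(forecast, demand)]
    return sum(abs(e) for e in errs) / len(errs)

md(biased, demand)   # bias shows up in MD (positive, about two points)
mad(biased, demand)  # and in MAD as well
md(noisy, demand)    # deviations cancel: MD is zero and misinforms us
mad(noisy, demand)   # MAD reveals the larger swings
```

This is exactly why the lecture insists on always checking the absolute-value metrics: a near-zero MD alone can make a high-variance forecast look perfect.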
These are two things you need to be careful about in your forecast.