Hi everyone, it's MJ. And in this video, I'm going to give an introduction to the chapter called sampling and statistical inference. And I kind of wish that this was chapter one, because what we're going to be doing in this chapter is defining a lot of the jargon. Specifically, we're going to be looking at what exactly a statistic is, and we're going to see that a statistic is a function of a random sample. So we're going to break down, okay, well, what do we therefore mean by a random sample? And we'll see that a random sample is something that comes from this idea known as a population. And a population has parameters and distributions. This is one of the confusing things about statistics, because a random variable and a population have got a lot in common. Essentially, the same calculations you do to process a random variable and pull information out of it are the ones you use on a population as well, so it is a little bit tricky.

We also introduce things known as an estimator and an estimate. It very much is a jargon chapter, and one that we really need when we go into the next chapter on point estimation, which is critical for doing confidence intervals, which is therefore critical for doing hypothesis testing. So it's an important link in the chain. But on its own, you might be thinking, okay, this is very abstract, this is very weird, why am I doing this? My idea is that this is very much a foundational chapter for the rest of the inference part of statistics. We've kind of almost finished with the mathematics, and now we're going more into this logical inference phase, and that's what this chapter introduces to us. Anyway, I'll see you in the course. And as always, let me know if you've got any questions. Cheers.
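To make the jargon above a bit more concrete, here is a minimal sketch in Python of the chain the transcript describes: a population with a parameter, a random sample drawn from it, a statistic computed as a function of that sample, and the resulting number serving as an estimate. The population, its parameter values, and the sample size are all hypothetical choices for illustration, not anything from the chapter itself.

```python
import random
import statistics

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical population: values following a normal distribution with
# "true" parameters mu and sigma (in practice these would be unknown).
MU, SIGMA = 170.0, 10.0

# A random sample: n independent draws from the population.
n = 30
sample = [random.gauss(MU, SIGMA) for _ in range(n)]

# A statistic is a function of the random sample. The sample mean is one
# such statistic; when we use it to guess the parameter mu, we call that
# rule the estimator, and the particular number it produces the estimate.
estimate = statistics.mean(sample)

print(f"estimate of mu from a sample of size {n}: {estimate:.2f}")
```

The point of the sketch is the distinction the chapter draws: the estimator is the rule (take the mean of the sample), while the estimate is the concrete number that rule produces from one particular sample.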