Okay, hello everyone, welcome to this course, Engineering Statistics. We are in the digital era now, where a lot of data is being generated and logged, and if we want to make inferences from this data, we need a good understanding of how to deal with it; statistics provides those tools. For example, if you have a lot of data collected about weather behavior over the past few years, how do you use that data to make weather predictions about the future? That is where statistics plays a role, and in this course we are going to understand what the sound methods of statistics are. In fact, in the emerging fields of artificial intelligence and machine learning, a sound understanding of statistical methods is very important for getting a good idea of how the algorithms in those fields are built. Okay, now about this course. We will start by revising basic probability and basic distributions. Then we build on that by studying various other distributions, like the gamma, beta, and chi-square distributions, and see how these can be thought of as parameterized distributions. We will also study something called the exponential family of distributions, in which the majority of the distributions we are interested in can be put together in a compact format. Once we have this understanding of the basic distributions, we will talk about how to generate data from a given distribution. Here we will cover various methods, like direct and indirect methods, to generate data. After we know how to generate data, we will switch gears and ask the reverse question: if we are already given data, how do we infer which distribution the data came from, or how do we extract information from that distribution?
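As a small taste of the data-generation topic mentioned above, here is a minimal sketch of one direct method, inverse-transform sampling, in Python. The choice of the exponential distribution and the rate value are illustrative assumptions, not part of the course material.

```python
import numpy as np

# Inverse-transform (direct) sampling: to draw from an Exponential(rate)
# distribution, apply the inverse CDF F^{-1}(u) = -ln(1 - u) / rate
# to uniform samples u ~ U(0, 1).
rng = np.random.default_rng(seed=0)
rate = 2.0
u = rng.uniform(size=100_000)
samples = -np.log(1.0 - u) / rate

# Sanity check: the sample mean should be close to the true mean 1/rate.
print(round(samples.mean(), 3))
```

The same recipe works for any distribution whose inverse CDF is available in closed form; indirect methods (such as rejection sampling) cover the cases where it is not.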
So here we will map information extraction to parameter estimation, and we will study how to estimate these parameters efficiently from our data under two data-reduction principles: the sufficiency principle and the likelihood principle. Under the sufficiency principle we will study various statistics, sufficient statistics, and minimal sufficient statistics. Under the likelihood principle, we will study how to use the likelihood function to construct maximum likelihood estimators. The maximum likelihood estimator is one example of a point estimator; we will also talk about other point estimators, like the method of moments and Bayesian methods. Then suppose you are given a bunch of data samples, there has been a claim that this data comes from a certain distribution, and you have to decide whether to accept or reject that claim. For this, we are going to study hypothesis testing, understand its various properties, and learn how to judge how good our test is. To understand how good our test is, we will construct p-values, and based on them we will develop tests like the Student's t-test and the F-test. In hypothesis testing, when we apply such tests, we may be relying on knowledge of parameterized distributions. To overcome this, we will talk about other methods, namely non-parametric methods, and there we will look into tests like the chi-square test, the Kolmogorov-Smirnov test, and the Lilliefors test. In this course, we will also work with a programming language, Python, and see how its various statistics-related libraries can be used to understand data and easily analyze it to make inferences.
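To preview how Python libraries tie these topics together, here is a hedged sketch using `scipy.stats`: fitting a normal model by maximum likelihood, running a one-sample t-test, and running a Kolmogorov-Smirnov goodness-of-fit test. The simulated data, seed, and chosen parameter values are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

# Simulated data (assumed for illustration): 500 draws from N(5, 2^2).
rng = np.random.default_rng(seed=1)
data = rng.normal(loc=5.0, scale=2.0, size=500)

# Maximum likelihood estimates of the normal parameters (for the normal,
# these are the sample mean and the standard deviation with divisor n).
mu_hat, sigma_hat = stats.norm.fit(data)

# One-sample t-test of the claim that the true mean is 5.0;
# a large p-value means the data gives no grounds to reject the claim.
t_stat, p_value = stats.ttest_1samp(data, popmean=5.0)

# Kolmogorov-Smirnov test of whether the data follow the fitted normal.
# Note: since the parameters were estimated from the same data, the
# exact p-value would need the Lilliefors correction covered later.
ks_stat, ks_p = stats.kstest(data, "norm", args=(mu_hat, sigma_hat))

print(mu_hat, p_value, ks_p)
```

This is only a sketch of the workflow; the course develops the theory behind each of these calls before using them.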
And I hope this course will be useful to you. At the end of this course, I am sure you will know how to understand data and how to use statistical tools in a sound manner, so that whatever inference we make has sound reason and logic behind it. Thank you all, and again, I welcome all of you to this course.