Namaste. So far in this course we have been using tf.keras as the API to build our machine learning models. tf.keras is considered the simpler API for building models in TensorFlow 2.0. There is another way to build models in TensorFlow, which is through the Estimator API. An Estimator is TensorFlow's high-level representation of a complete model, and it has been designed for easy scaling and asynchronous training. In this module we will build machine learning models with the TensorFlow Estimator API, and in this exercise we will use the iris classification problem to demonstrate how. Let us begin by importing TensorFlow and the other libraries that we need. We will mainly use pandas as the library for manipulating structured data; we have used pandas in some of our earlier exercises as well. Now that we have imported the required packages, let us get into building the model with the TensorFlow Estimator API. In this exercise we will build and test a model for classifying iris flowers into three different species based on the size of the sepals and petals. The iris data set has four features and one label. The four features identify the following botanical characteristics of individual iris flowers: sepal length and width, and petal length and width. There are three different species in our data set, setosa, versicolor and virginica, and we have to classify each incoming flower, represented by those four features, into one of these three categories. So the input file has five columns: sepal length and width, petal length and width, and the name of the species. Let us download and parse the iris data using Keras and pandas: we fetch the training and test files and read each CSV into a pandas data frame.
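The download-and-parse step described above can be sketched as follows. This is a sketch, not the course's exact notebook: the column names and the two CSV URLs are the ones used in the official TensorFlow premade-estimator tutorial, which this lesson appears to follow.

```python
import tensorflow as tf
import pandas as pd

# Column names and class names as used in the official TensorFlow iris tutorial.
CSV_COLUMN_NAMES = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth', 'Species']
SPECIES = ['Setosa', 'Versicolor', 'Virginica']

# Download the training and test CSV files with Keras.
train_path = tf.keras.utils.get_file(
    "iris_training.csv",
    "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv")
test_path = tf.keras.utils.get_file(
    "iris_test.csv",
    "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv")

# Parse each CSV into a pandas data frame; header=0 replaces the file's
# header row with our own column names.
train = pd.read_csv(train_path, names=CSV_COLUMN_NAMES, header=0)
test = pd.read_csv(test_path, names=CSV_COLUMN_NAMES, header=0)

print(train.head())  # first five examples: four features plus the Species label
```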
So the training data is saved in the train data frame and the test data in the test data frame. Let us examine the training data by looking at its first five examples. You can see that we have four feature columns, and the last column is the label that we want to learn; that is, we want to learn a mapping from these four features to the species. Let us take the species column, which is the label, out of each data frame and store it in train_y for the training set and test_y for the test set. Looking at the first five rows of the training data again, you can see that the species column has been removed because we used the pop command; everything else remains the same. To use an estimator there are three steps: create one or more input functions that define how the data will be fed to the estimator; define the feature columns; and instantiate an estimator, specifying the feature columns and various hyperparameters. We then call the appropriate method on the estimator object. Let us understand how these steps are implemented for iris classification. First, let us create the input function. An input function is a function that returns a tf.data.Dataset object which yields two elements as a tuple: features and labels. Let us look at what a dataset element looks like. The features element is a dictionary mapping each feature name to a list of values. Here, for sepal length there are two values, 6.4 and 5.0; similarly, sepal width, petal length and petal width each carry their two respective values, and the label is an array of exactly two values, 2 and 1. So you can see that this particular element describes a batch of two flowers, and the input function returns the feature dictionary and the label array. That is exactly what the dataset object contains.
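The label-extraction step above can be shown with a tiny synthetic frame (two made-up rows with the same five columns as the iris CSV, used here purely for illustration):

```python
import pandas as pd

# A small synthetic sample with the same five columns as the iris data.
train = pd.DataFrame({
    'SepalLength': [6.4, 5.0],
    'SepalWidth':  [2.8, 2.3],
    'PetalLength': [5.6, 3.3],
    'PetalWidth':  [2.2, 1.0],
    'Species':     [2, 1],
})

# pop removes the label column from the frame and returns it as a Series.
train_y = train.pop('Species')

print(list(train.columns))  # → ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth']
print(list(train_y))        # → [2, 1]
```

After the pop, the frame holds only the four features, which is exactly the shape the input function expects.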
In order to keep things simple we will load the data with pandas and build an input pipeline from this in-memory data. The Dataset API is very powerful: it can read records from a large collection of files in parallel and join them into a single stream. For the iris dataset, however, this functionality is not required. So here we take the pandas data frame and create the dataset using the from_tensor_slices function, passing the dictionary of features and the array of labels. In the training case we also shuffle and repeat the dataset, and in both cases we return the dataset batched to a specified batch size. Next we define feature columns corresponding to the features. Since all the features here are numeric, we will use numeric feature columns. Now that we have defined our input function and created the feature columns, the next task is to instantiate an estimator. There are several pre-made classifier estimators defined in TensorFlow. The DNN classifier is used for deep models that perform multi-class classification. The DNN linear combined classifier is used for wide-and-deep models: the wide part works on a large number of sparse features, like a very large one-hot encoding, while the deep part works on features that come from embeddings. The linear classifier is based on a linear model. For the iris problem we will use the DNN classifier, which lets us perform multi-class classification. Let us see how to instantiate this estimator. Here we define a DNN classifier with two hidden layers of 30 and 10 nodes respectively. We also specify that there are three classes, so that the corresponding output layer can be constructed, and we pass our feature columns as input to the DNN classifier. Let us run this and instantiate the DNN classifier.
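The three pieces just described (input function, feature columns, estimator) can be sketched like this. The function and argument names follow the official tutorial, and the code assumes a TensorFlow version that still ships the Estimator API (it was deprecated and removed after TF 2.15):

```python
import pandas as pd
import tensorflow as tf

def input_fn(features, labels, training=True, batch_size=256):
    """Build a tf.data.Dataset of (feature_dict, label) batches
    from an in-memory pandas frame and label series."""
    dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))
    if training:
        # Shuffle and repeat only while training.
        dataset = dataset.shuffle(1000).repeat()
    return dataset.batch(batch_size)

# All four iris features are numeric, so one numeric column each.
feature_columns = [
    tf.feature_column.numeric_column(key=key)
    for key in ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth']
]

# A DNN classifier with two hidden layers (30 and 10 nodes) and three classes.
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[30, 10],
    n_classes=3)
```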
So we train the model by calling the estimator's train method, where we specify the input function and also the number of steps for which the training loop should run. Once the model is trained, we evaluate it with the test data. We use the same input function, but instead of the training data we pass the arguments corresponding to the test data, and we set training to False, as opposed to True at training time. You can see that we achieve a test accuracy of 66 percent on the iris classification. Let us now use this model for making predictions on unseen data, that is, for inference. Here we first specify the expected outputs, and then the features. The feature vector is specified as a dictionary where each key is the name of a column, or feature, followed by a list of values. So here we specify three examples, with the values for each feature given in a list: the three examples have sepal lengths of 5.1, 5.9 and 6.9, and sepal widths of 3.3, 3.0 and 3.1. Reading down the first position of each list, the flower with a sepal length of 5.1 has a sepal width of 3.3, a petal length of 1.7 and a petal width of 0.5. So this is how you interpret an example; it is simply specified in a transposed, column-wise form. The prediction input function creates a dataset object from tensor slices: we pass the predict_x dictionary to the input function, which returns a dataset object on which we apply the prediction. Let us run this and see what kind of predictions come out; we will look at the predictions alongside the expected results, and also print the probabilities.
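The train, evaluate and predict calls can be sketched end to end. To keep this block self-contained it uses a tiny synthetic frame in place of the downloaded CSVs (real runs would use the train/test frames and many more steps), assumes a TensorFlow version ≤ 2.15 that still ships tf.estimator, and fills in the three prediction examples' petal values from the official tutorial (the transcript itself only confirms the first flower's):

```python
import pandas as pd
import tensorflow as tf

SPECIES = ['Setosa', 'Versicolor', 'Virginica']
FEATURES = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth']

# Tiny synthetic stand-in for the downloaded training frame.
train = pd.DataFrame({
    'SepalLength': [5.1, 5.9, 6.9, 5.0, 6.4, 6.3],
    'SepalWidth':  [3.3, 3.0, 3.1, 2.3, 2.8, 3.3],
    'PetalLength': [1.7, 4.2, 5.4, 3.3, 5.6, 6.0],
    'PetalWidth':  [0.5, 1.5, 2.1, 1.0, 2.2, 2.5],
})
train_y = pd.Series([0, 1, 2, 1, 2, 2])

def input_fn(features, labels, training=True, batch_size=32):
    dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))
    if training:
        dataset = dataset.shuffle(100).repeat()
    return dataset.batch(batch_size)

feature_columns = [tf.feature_column.numeric_column(k) for k in FEATURES]
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns, hidden_units=[30, 10], n_classes=3)

# Train: the lambda defers the call so the estimator invokes it itself.
classifier.train(input_fn=lambda: input_fn(train, train_y, training=True), steps=50)

# Evaluate: same input function, training=False (no shuffle, no repeat).
eval_result = classifier.evaluate(
    input_fn=lambda: input_fn(train, train_y, training=False))
print('accuracy: {accuracy:.3f}'.format(**eval_result))

# Inference: features only, no labels, laid out column-wise.
predict_x = {'SepalLength': [5.1, 5.9, 6.9], 'SepalWidth': [3.3, 3.0, 3.1],
             'PetalLength': [1.7, 4.2, 5.4], 'PetalWidth': [0.5, 1.5, 2.1]}
predictions = classifier.predict(
    input_fn=lambda: tf.data.Dataset.from_tensor_slices(predict_x).batch(3))
for pred in predictions:
    class_id = pred['class_ids'][0]
    print('Predicted {} with probability {:.1%}'.format(
        SPECIES[class_id], pred['probabilities'][class_id]))
```

Note that evaluation here runs on the same synthetic frame for brevity; the lesson's 66 percent figure comes from evaluating on the held-out test frame.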
You can see that the first prediction is setosa, where the actual label was also setosa, and this prediction comes with quite good confidence, a probability of 82 percent. The second prediction is virginica, whereas the actual label was versicolor, but you can see that the probability of this prediction is less than 50 percent. In the third case the prediction is virginica, which matches the actual label of virginica, with a probability of 60.5 percent. So in this module we learnt how to use the TF Estimator API, and we applied it to iris classification. We understood that building a model with the Estimator API involves three main steps: we have to specify one or more input functions, we have to define the feature columns, and we have to instantiate an estimator with the appropriate configuration. In the next session we will use the Estimator API to build a linear model. Hope to see you in the next session. Thank you.