Namaste. So far in this course we have used the TensorFlow API to build models for image classification and regression. In this module we will demonstrate how to use the TensorFlow API with text data: we will classify movie reviews as positive or negative using the Internet Movie Database (IMDB) review dataset. There are 50,000 movie reviews in total; we use 25,000 for training and the remaining 25,000 for testing. In this example we will use transfer learning with TensorFlow Hub and Keras. Let's install the required libraries: TensorFlow version 2.0, NumPy for data manipulation, and TensorFlow Hub for pre-trained models. One of the beauties of neural networks is that a model trained on one particular task can be reused for another task. For example, a model trained on one set of images can be used to classify some other images. This is called transfer learning: a model that was previously trained on some dataset, called a pre-trained model, is used as a black box inside another model. We can see that TensorFlow 2.0 is now installed. Now we will download the IMDB dataset and split its training portion 60/40: 60 percent for training and 40 percent for validation. We use the load method of TFDS, or TensorFlow Datasets, to fetch the IMDB reviews from the internet. Let's print the first 10 examples to see what the data looks like; we do that with the batch function. You can see that each review is on a single line, and there are 10 reviews here, with the corresponding labels in the label batch. Most of the first 10 reviews are positive, except for a couple which are negative. The label batch is also a 1-D tensor with shape (10,).
Now that we have explored the data, the next task is to build a neural network model. There are three main decisions when building a neural network model: how to represent the text, how many layers to use, and how many hidden units to use in each layer. In this example, the input data consists of sentences and the output label is binary, either 0 or 1: 0 means the review is negative and 1 means it is positive. One way to represent the text is to convert each sentence into an embedding vector. This is where we can use a pre-trained text embedding as the first layer, which has multiple advantages: we do not have to worry about text preprocessing, we benefit from transfer learning, and the embedding has a fixed size, so it is simple to process. We will use a pre-trained text embedding model from TensorFlow Hub. Let us look at what TensorFlow Hub is. TensorFlow Hub hosts a number of reusable neural network models that can be used as black-box components in other applications. We will use a text embedding model trained on Google News, which embeds a given sentence into a 20-dimensional vector. Other embedding models are also available on TensorFlow Hub, but for this particular exercise we will use the Google News based one. Let us first create a Keras layer that uses the TensorFlow Hub model to embed the sentences. We define it with hub.KerasLayer, specifying the embedding model by its URL, the input shape, the data type (string), and whether the layer is trainable. In this case we want to fine-tune the embedding during training, so we set trainable to True.
Now let us take the first 3 examples and see what happens when we pass them to the hub layer. You can see that we get a 2-D tensor with 3 examples, each represented by a 20-dimensional vector. Each entry in the vector is a real number, positive or negative, stored as a 32-bit floating-point number. Let us build the full model and see how to use this hub layer inside it. You are by now quite familiar with sequential models in Keras. We add the hub layer as the first layer, to convert each sentence into the desired embedding. We then take the output of this embedding and feed it to the second layer, a hidden layer with 16 units that uses ReLU as its activation function. Finally, we have an output layer, a dense layer with a single unit because this is a binary classification problem, and it uses sigmoid as its activation. Let us quickly look at the architecture of the model. The text embedding model takes a text review as input and gives us 20 numbers for every review. These 20 numbers go to a hidden layer with 16 units and ReLU activation, and from there to a single-unit output layer with sigmoid activation, which gives us y, either 0 or 1. So we take the text, pass it through the embedding to get 20 numbers, and those 20 numbers feed into the 16 hidden units. The first part is defined by the hub layer; the next part is a dense layer; and the last part is another dense layer, the output layer. I hope this makes the architecture clear to you. Now that we have built the classifier, let us run this and see the summary of the model.
In the summary you can see the Keras layer, which is the embedding layer and outputs 20 numbers. That output goes to the dense hidden layer, which outputs 16 numbers, and then to the output layer, which outputs a single number, the prediction. The Keras layer has about 400 K parameters, the hidden layer has 336, and the output layer has 17. You can see why the output layer has 17 parameters: there are 16 inputs and one bias term, which makes 17. In the same manner, the 336 comes from 16 × 20 weights plus 16 bias terms, one for each unit in the hidden layer. The Keras layer has about 400 K parameters: each of its 20 output dimensions has one parameter per word in the vocabulary, plus one additional parameter for out-of-vocabulary words. That makes 400,373 total parameters to train. Now that the model is defined, let us compile it. We use Adam as the optimizer, binary cross-entropy as the loss because we have a binary classification problem to solve, and we track accuracy as a metric. We batch the training data into mini-batches of 512 samples, run the training loop for 20 epochs, validate on the validation set, and store the output of the fit call in the history variable. Since we store it in the history variable, we can use it later to plot learning curves or other statistics of the training run. You can see that the training loss goes down after every epoch and the accuracy goes up. Also keep an eye on the validation accuracy: it too seems to go up with each epoch, and the training and validation accuracies stay quite close.
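The parameter counts quoted above can be checked by hand in plain Python (the 20,000-word vocabulary size is inferred from the roughly 400 K figure, so treat it as an assumption):

```python
embedding_dim = 20       # output size of the Google News embedding
vocab_size = 20000       # assumed: one parameter per vocabulary word, per dimension
oov_slots = 1            # one extra parameter per dimension for out-of-vocabulary words

hub_params = embedding_dim * (vocab_size + oov_slots)  # 20 * 20001 = 400,020
hidden_params = 16 * embedding_dim + 16                # 320 weights + 16 biases = 336
output_params = 16 * 1 + 1                             # 16 weights + 1 bias = 17

total = hub_params + hidden_params + output_params
print(total)  # 400373
```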
After 20 epochs the training accuracy has crossed 93 percent and the validation accuracy has crossed 87 percent. Let us evaluate the model's performance on the unseen test dataset; we store the result of the evaluation in the results variable. On the test data we get an accuracy above 86 percent, very close to the 87 percent validation accuracy. So this fairly naive approach achieves an accuracy of about 87 percent. In this module we built a text classifier using the TensorFlow API, using transfer learning based on models from TensorFlow Hub. Hope you enjoyed learning these concepts. Hope to see you in the next module. Thank you.