Namaste. In the previous session we learned how to use transfer learning in the context of convolutional neural networks. In this session we will use TF Hub, which is a way to share pre-trained model components with the community. We will use pre-trained models from TF Hub for feature extraction as well as fine-tuning. In this session we will learn how to use TensorFlow Hub with the tf.keras API, how to do image classification using TensorFlow Hub, and how to do simple transfer learning. Let us begin.

First we install and import all the necessary libraries: we install TensorFlow 2.0 and import TensorFlow. Note that we are installing the GPU build of TensorFlow. Since we are training a CNN on images, which runs faster on a GPU, we use the GPU as the hardware accelerator for this Colab. For TensorFlow Hub we import the tensorflow_hub library, and we also import layers from tensorflow.keras.

Next we download the classifier from TF Hub. We use hub.KerasLayer to wrap the MobileNet model as a Keras layer. So, this is the URL for the classifier, the MobileNet on TensorFlow Hub. We define the shape of the image, which is 224 by 224, and we define a sequential model with the hub layer.

Let us see what we get if we run this model on a single image. We download the image using tf.keras.utils.get_file and resize it to the image shape. You can see the input image: it is a coloured image with 3 channels and a height and width of 224 each. We know that CNNs take a 4D tensor, so we add a batch dimension and pass the image to the model. The result of the classifier is a 2D tensor with 1001 elements, the logits rating the probability of each class for the image. The top class id can be found using argmax. You can see that the class id for the input image is 653.
In order to get the text representation of the class, we download the ImageNet labels file and use it to decode the name of the class. You can see that the prediction, which was id 653, corresponds to "military uniform".

We can use TF Hub to retrain the top layer of the model to recognize the classes in our own dataset. Let us download a flower dataset and demonstrate transfer learning with TF Hub. We load the data using ImageDataGenerator, which you can see here, and we pass the rescaling parameter to it. TensorFlow Hub's image modules expect float inputs in the range [0, 1]; hence we rescale the input images. We also resize the images to the desired shape. We can look at the shapes of the image batch and the label batch. The image batch is a 4D tensor containing 32 images, each with a height and width of 224 and 3 channels. For each image we have a label vector of size 5: the flower dataset has 5 classes, and each class is represented in one-hot encoding.

Let us run the classifier on the image batch. Note that currently the classifier only contains the Keras layer from hub. If we apply the classifier to the image batch, we get a 2D tensor of shape (32, 1001).

TensorFlow Hub also distributes models without the top classification layer, and these can be used for transfer learning. So, we create a feature extractor as a Keras layer with an input shape of 224 × 224 × 3. It returns a 1280-length vector, so the feature batch is a 2D tensor: for every image we have a 1280-length vector. We freeze the variables in the feature extraction layer so that training only modifies the new classifier layer. Then we attach a new classifier layer to the model. The new classifier layer has units equal to the number of classes in the dataset, and we use softmax as the activation function. Since we have 5 different classes, the dense layer outputs 5 probabilities, one corresponding to each class.
So, the number of parameters in the Keras layer is equal to the number of parameters in MobileNet, which has about 2.2 million parameters. The dense layer has 1280 inputs, so for every unit we have 1280 weights, one per input, plus 1 bias. So there are 1281 parameters per unit, and we have 5 units, making 6,405 parameters. We can see that the total parameter count is the sum of the parameters in the Keras layer and the parameters in the dense layer. Of these, the parameters in the Keras layer are non-trainable, whereas the parameters in the dense layer are trainable.

Let us compile the model. Since we have 5 classes, we use categorical cross-entropy loss, and we use Adam as the optimizer. Let us fit the model for just 2 epochs. To visualize the training progress, we use a custom callback to log the loss and accuracy of each batch individually instead of the epoch average. We also compute steps per epoch and define a collect-batch-stats callback, then pass the callback and the computed steps per epoch to fit. You can see that after 2 epochs we reach an accuracy close to 94 percent. If you look at the training accuracy by steps, you can see that it increases as we progress through training.

Let us get the predictions for the image batch and plot the results. If the model's prediction is correct we use green, and red if the prediction is incorrect. You can see that most of the predictions are correct.

Now that the model is trained, we can export it as a SavedModel so that we can deploy it on some other device, or reload it for future use. After saving the model, we reload it and check whether the results from the reloaded model and the earlier model match, which we do by taking the difference between the results.
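The dense layer's parameter count can be verified with a quick calculation:

```python
# Dense head on top of a 1280-dimensional feature vector:
# each unit has one weight per input feature plus one bias.
feature_dim = 1280
num_classes = 5
params_per_unit = feature_dim + 1            # 1280 weights + 1 bias = 1281
dense_params = params_per_unit * num_classes # 1281 * 5
print(dense_params)                          # 6405
```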
So, in this session we looked at TF Hub and understood how to use the models saved in TF Hub for transfer learning with CNNs. Hope you had fun learning these concepts. See you in the next class. Thank you all.