Namaste, welcome to the course on Practical Machine Learning with TensorFlow 2.0. This is a joint course offered by Google and IIT Madras. My name is Ashish Tendulkar and I am your instructor for this course. In this course, we will be giving lots of practical examples of building machine learning models with TensorFlow 2.0. Let us try to understand what TensorFlow exactly is. This is the logo of TensorFlow, which is an end-to-end open source platform for machine learning. The name TensorFlow is made up of two words: tensor and flow. A tensor is a multidimensional array and flow refers to a graph of operations. Internally, TensorFlow implements machine learning algorithms as a graph of operations on multidimensional arrays. TensorFlow was developed by Google Brain and released under the Apache 2.0 license in November 2015. The current stable version of TensorFlow is 1.14 and it is a popular GitHub repo with more than 129k stars. TensorFlow has a vibrant, active community with more than 1800 developers actively contributing to the code base. This course covers concepts from TensorFlow API version 2.0, which is the newest version of TensorFlow. Why do we really care about TensorFlow? For a newcomer to machine learning, TensorFlow makes it easy to build and deploy machine learning models. If you are a machine learning expert or researcher, TensorFlow enables you to build state-of-the-art machine learning models with the Keras functional API and the model subclassing API. Another important thing about TensorFlow is that it supports deploying machine learning models anywhere, from CPUs and GPUs to edge devices and web servers. The TensorFlow API is available for the Python, Java and Go programming languages. TensorFlow has a very flexible architecture that enables easy deployment across different hardware platforms like CPUs, GPUs and TPUs, and computing devices like desktops, servers, mobile devices and edge devices. 
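The two ideas in the name can be seen in a few lines of code. Below is a minimal sketch, assuming TensorFlow 2.x with eager execution: the constants are tensors, and tf.matmul is one operation in the flow of computations.

```python
import tensorflow as tf

# Tensors: multidimensional arrays (here, 2x2 matrices)
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])

# Flow: an operation applied to tensors in the computation graph
c = tf.matmul(a, b)
print(c.numpy())  # [[19. 22.] [43. 50.]]
```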
TensorFlow is being used by lots of companies around the world. These companies operate in different domains and are using TensorFlow to build and deploy machine learning models. For example, Google is using TensorFlow to improve its various products like Gmail or Docs. Airbnb is using TensorFlow to classify images and detect objects in its set of photographs. Airbus is using TensorFlow to detect interesting objects in satellite imagery and make them available to its customers. TensorFlow is also used for a lot of social good applications, as well as in the financial domain: PayPal is using TensorFlow for detecting fraudulent transactions, and Twitter is using TensorFlow to rank tweets. So you can see that TensorFlow is a versatile product and is being used for developing and deploying machine learning models by companies across different domains. You can check out some of these case studies on the TensorFlow.org website. Let us try to build our first machine learning model with TensorFlow. We call it TensorFlow Hello World. We will train our first machine learning model with the TensorFlow API. This machine learning model helps us recognize handwritten digits. We will train it on the famous MNIST dataset, which contains grayscale images of handwritten digits. There are 60,000 images in the training set and 10,000 images in the test set. Each image is of size 28 by 28 pixels and each image is tagged with a label, that is, the actual digit it represents. So this is Colab, the Google Colab environment. It allows us to run Python programs directly in the browser. Here we code our model in the Colab environment. It has got text cells and code cells. This is an example of a text cell and this is an example of a code cell. In a text cell we write comments or some text that helps us understand what is going on in the Colab. In a code cell we write the actual Python code. 
We will first go through the Colab cell by cell and then run it. In the first code cell we import the required packages and install TensorFlow 2.0. After installing TensorFlow 2.0 we import the TensorFlow package. Next we load the MNIST dataset. The MNIST dataset is available in the TensorFlow datasets package, so we can load it directly: the dataset is defined as tf.keras.datasets.mnist and we load it with the load_data command. The load_data command gives us the MNIST dataset as two tuples. The first tuple contains the training data and the second tuple contains the test data. So we have the training features in the x_train matrix, the y_train vector contains the labels of the training examples, the x_test matrix contains the test features and the y_test vector contains the label corresponding to each of the test examples. The ith entry in the x_train matrix represents the features of the ith example and the ith entry in y_train gives us the corresponding label. After loading the dataset, we normalize it by dividing each pixel value by 255. The normalization helps us achieve faster convergence during training. Now that we have loaded the dataset and pre-processed it, the next task is to build a model. We build a tf.keras.models.Sequential model by stacking layers. Next we choose a loss function and an optimizer for the model: we select sparse categorical cross entropy as the loss and Adam as the optimizer. You can note that we first flatten our input. Our original input is 28 by 28 pixels, so we flatten it into a vector of size 784 and feed it into a dense layer with 128 units. We use ReLU activation in this dense layer and it is the only hidden layer in this neural network. In addition, we use dropout regularization with a dropout rate of 0.2. 
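Putting the steps above together, the loading, normalization and model-definition code could look like the following sketch. It mirrors the standard tf.keras MNIST example described in this module; the output layer is explained next.

```python
import tensorflow as tf

# Load MNIST: two tuples, (training features, training labels)
# and (test features, test labels)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Normalize pixel values from [0, 255] to [0, 1] for faster convergence
x_train, x_test = x_train / 255.0, x_test / 255.0

# Build the model by stacking layers in a Sequential model
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),   # 28x28 image -> vector of 784
    tf.keras.layers.Dense(128, activation='relu'),   # the only hidden layer
    tf.keras.layers.Dropout(0.2),                    # dropout regularization
    tf.keras.layers.Dense(10, activation='softmax')  # one unit per digit 0-9
])

# Choose the loss function and the optimizer
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```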
The output layer contains 10 units, one corresponding to each of the digits 0 to 9, and we use softmax as the activation function for the output layer. Now that we have compiled our model, the next step is to train it. We use the model.fit function for training. The fit function takes the training features and training labels as arguments, along with the number of epochs. Just to remind you, an epoch is one full pass over the training set. After training the model, we evaluate it on the test set, which has the test features and the test labels. Notice that the model was trained on the training data and its performance was evaluated on the test data. This ensures that we have a fair estimate of the model's performance on unseen data. You can observe that we have specified our model, its training and its evaluation, all in less than 10 lines of code with the TF API. This ease of use is what makes TensorFlow an API of choice for machine learning developers. Now that we have written the code for model specification, training and evaluation, let us execute the code in the notebook to see what kind of performance we achieve with this model. In order to run the code, we first have to connect to the Colab environment, which we have already done here. After connecting to the Colab environment, we can execute the notebook cell by cell. If you press the run button over here, the code in that particular code cell will get executed. Alternatively, we can press Ctrl+Enter as a keyboard shortcut to execute the cell. So, let us download the TensorFlow 2.0 beta version, which we have already downloaded in this case. If it is not downloaded, it will take some time to download from the internet and hence this particular code cell might take a bit longer for you. Next we load the MNIST data and normalize it. Next we compile our model. 
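The training and evaluation steps can be sketched as follows. To keep this sketch quick to run, it uses small random arrays as a stand-in for MNIST (an assumption for illustration only); in the actual Colab you would pass the real x_train, y_train and x_test, y_test with epochs=5 as described in this module.

```python
import numpy as np
import tensorflow as tf

# Small synthetic stand-in for MNIST, just to illustrate the fit/evaluate API
rng = np.random.default_rng(0)
x_train = rng.random((256, 28, 28)).astype('float32')
y_train = rng.integers(0, 10, size=256)
x_test = rng.random((64, 28, 28)).astype('float32')
y_test = rng.integers(0, 10, size=64)

# Same architecture as in the module
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train: each epoch is one full pass over the training set
model.fit(x_train, y_train, epochs=2, verbose=0)

# Evaluate on held-out test data for a fair estimate on unseen data
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
print(test_loss, test_acc)
```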
So, as we press the train button, we can see that the model is getting trained. The progress is shown with a progress bar, and in each epoch we see some statistics about the loss, the accuracy and the amount of time the training takes per sample. You can observe that the loss is going down with each epoch: starting at 0.3, we got the loss down to 0.07, and the accuracy is going up. We started with an accuracy of 0.91 and the accuracy has climbed all the way up to 0.97. So, we started with 91 percent accuracy and after 5 epochs we have an accuracy of 97 percent. After training the model, when we evaluated it, we achieved similar performance on the test data. One can see that the loss on the test data is very close to the loss on the training data, and the accuracy we are getting on the test data is comparable to the accuracy on the training data. So, in this module, we built our first TensorFlow model for recognizing handwritten digits from the MNIST dataset. Just now we finished building our first machine learning model with TensorFlow. We called it TensorFlow Hello World. You must have observed that we used Python in our browser. For most of the exercises in this course, we are going to use this tool called Google Colab. Colab is a Jupyter Notebook that can be run from the browser. It uses a cloud runtime and can run in the browser without you needing to do a lot of complicated setup on your machine. Let us try to understand the basic features of Colab so that it is easier for you to use it in the subsequent practical applications. Colab can be accessed at the colab.research.google.com URL. When you open Colab, you can load one of your existing notebooks from Drive. You can also load notebooks from GitHub: all you have to do is enter the GitHub URL of the notebook and the notebook will be opened for you. 
You can also upload your existing Jupyter Notebooks through the upload tab, and the notebook will then be available for you to run in the browser. There are also several example Colabs that show the functionality of Colab, like reading external data from Drive, Sheets and Cloud Storage, getting data from Google BigQuery, or creating interesting input and output forms in Colab. So, let us start a new Python notebook. This is how we start Colab: we start a new Python notebook and we can save it. We can save a copy in Drive or we can save a copy in GitHub. Let me save a copy in Drive and call it Hello World. A Colab file has the extension .ipynb, which is exactly the same as the extension of Jupyter Notebooks, and in Colab we can seamlessly mix text and code. This makes it a very nice platform to write documentation along with the code, so that it is easier for the reader to follow what is going on in the notebook. In addition, there are elements of collaboration built into Colab. One can comment on a cell, or one can share the Colab with collaborators using the share button here. Colab has mainly two types of cells: code cells and text cells. In a code cell we write the Python program. If we execute the code cell with the run button over here, its content is interpreted by the Python interpreter, and it will run only if it is valid Python code. So, let us try to print Hello World. In order to run the Colab we need to first connect to the cloud runtime. It is right now connecting; it has connected and is now initializing. Now you can see that we are connected to a cloud runtime and we see the status of the RAM and disk of this runtime. In order to execute the cell we simply need to run it, and you can see that Hello World is printed over here. 
So the other type of cell is a text cell. In a text cell we can write content using what is called Markdown. I can write some sample content here, "this is my first Colab", and I can highlight some of the content using the visual editor here, or, if you are familiar with Markdown, you can straight away use Markdown syntax in Colab. Apart from text we can also insert links and images, and we can have lists of items in Colab. So we can say that in my first Colab we demonstrate how to print using the Python print statement and perform basic mathematical operations, and you can see that as I type, the actual output is visible in this part of the screen. If you go to the other cell, the output of the Markdown can be seen over here. So let us try to perform the addition of two numbers: we say that A is equal to 3, B is equal to 2, Y is equal to A plus B, and we simply print Y, and you can see that the addition operation is carried out. We can also split the content of this particular code cell across different code cells. Let us try that. I can write a comment like "initialize the variables" and then have a simple code cell initializing A to 10 and B to 20. Then I can add a text cell saying "add A and B"; in text cells I can also insert mathematical equations using LaTeX. I can then write the code saying that Y is equal to A plus B, and write the code to print Y, the addition of the two numbers. Now we can run each of the cells, perform the addition and finally print the number. A was 10 and B was 20; we added these two numbers to get 30 in the resulting variable. So this is the Colab environment. We can also import useful Python libraries like TensorFlow in Colab and then build practical machine learning applications in Colab. 
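The two code cells described above amount to the following Python, shown here as one runnable sketch with the cell boundaries marked as comments:

```python
# Cell 1: initialize the variables
a = 10
b = 20

# Cell 2: add a and b, then print the result
y = a + b
print(y)  # prints 30
```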
One of the great points about Colab is that you can execute your Python code in the browser and you do not need to do a lot of complicated setup on your own machine. So this brings us to the end of our first module. Hope you enjoyed it. Namaste.