Welcome to my PyTorch video tutorial series. Today we are going to see how to install PyTorch on an Ubuntu box in roughly 10 minutes. We will start by checking the hardware of our system: I will list the main memory I have on my machine and talk about the CPUs and the GPUs that we are going to be using. Then we will update the drivers: we will start by updating Linux first, then install the latest drivers from the NVIDIA website, and check that everything went fine. Moving on to the CUDA part, we are going to install the toolkit and check the success of the installation by compiling and running a simple example. Finally, we will install the cuDNN library. Then we are going to get Python 3 via Anaconda: we will download it, install it, and proceed with the activation. We will add the soumith channel and then proceed with the installation of torch and torchvision. Finally, we are going to validate that everything went just fine.

Let's say you got semi-serious about deep learning, so you got an Ubuntu box. You bought 8 RAM banks of 8 GB each, for a total of 64 GB. You have an Intel Core i7 CPU running at 3.5 GHz: 6 cores, 2 threads each, for a total of 12 virtual CPUs. Finally, you have four GPUs, GTX Titan X. When checking htop you are going to see 12 cores which right now are doing basically nothing, and you have 64 GB of RAM and 64 GB of swap memory.

Let's be good boys or girls and update the system. So we are going to type sudo apt-get update, and then sudo apt-get dist-upgrade. And everything is updated, perfect. So we can move on and install the drivers for the NVIDIA GPUs. So: NVIDIA drivers, download, agree and download, copy link address. Let's go inside Downloads, make a directory, and wget this one. Sweet, all right, done. And if we check the content, we see our file here. So we are going to change the execution bit: chmod u+x.
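The steps so far can be collected into one sketch. The privileged commands are shown as comments, since they only make sense on the target box, and a stand-in file demonstrates the execute-bit change; the installer file name and the download directory are hypothetical.

```shell
set -eu
# On the Ubuntu box itself (needs sudo and network):
#   sudo apt-get update          # refresh the package lists
#   sudo apt-get dist-upgrade    # bring the whole system up to date
#   mkdir -p ~/Downloads/nvidia && cd ~/Downloads/nvidia
#   wget <driver URL copied from nvidia.com>

# The execute-bit change, demonstrated on a stand-in file:
workdir=$(mktemp -d)
installer="$workdir/NVIDIA-Linux-x86_64.run"   # hypothetical file name
touch "$installer"
chmod u+x "$installer"   # u+x: add the execute permission for the owner
ls -l "$installer"       # the mode now starts with -rwx for the owner
```

After chmod u+x, ls (or the colorized alias) shows the file in green with a trailing star, which is exactly the cue mentioned in the video.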
That makes it executable for the normal user; and then the name of the file. So now it's going to turn green, and we are going to have a star at the end of the name. I have to tell you that I have an alias for l. So let's install the drivers now: sudo ./NVIDIA, and tab-complete the rest of the file name, the NVIDIA graphics driver for Linux. I do accept. All right, huh, continue installation, continue installation, bam, sweet. Okay, yep, all right, and sudo reboot, bam.

So let's check if we have installed the drivers correctly. We type nvidia-smi and bam: we have all four GTX Titan X installed correctly. None of them is using any memory, because the screen is currently off and I'm SSHing into the machine; otherwise, with the screen turned on, one of the four GPUs would have some memory in use for driving the screen. Current power draw: 70 W out of 250, which is the max; GPU utilization: 0%.

The next point is going to be installing the CUDA toolkit. CUDA toolkit download: all right, here, Linux, x86_64, Ubuntu 16, and I'd like the runfile, thank you. Let's download the base installer first. So let's right-click and copy the link address before starting to download the package. Let's open tmux: even if the connection drops, we are still going to have everything running on the server. So wget this guy. Also, let's download the patch: wget, and this guy, copy link address, and go. There we go: both packages are downloaded here. So let's check the content: we have the CUDA package, and then we have the patch. So let's make them executable, chmod u+x, both of them. So right now, both of them are green. So let's start by running the main CUDA package. Okay, I accept. I would not like to install the drivers, because we already installed them. I would like to install the toolkit, yep. Mm-hmm, yep. Would you like to create a symlink? Yeah. I'd also like you to install the samples, and yeah, sounds good.
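For reference, the driver and toolkit steps from this part look roughly like this on the command line. The file names and URLs are placeholders, since they depend on the versions you download, so this is a sketch rather than something to run verbatim:

```shell
sudo ./NVIDIA-Linux-x86_64-<version>.run    # accept the license
sudo reboot
nvidia-smi                                  # should list all four Titan X cards
tmux                                        # keeps downloads alive if SSH drops
wget <base installer URL>
wget <patch URL>
chmod u+x cuda_*.run
sudo ./cuda_<version>_linux.run             # decline the driver; install toolkit,
                                            # symlink, and samples
```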
We can see a warning: incomplete installation. The installation did not install the CUDA drivers. This is because I decided to install the latest driver by myself beforehand. Moreover, we need to make sure that the PATH includes /usr/local/cuda-8.0/bin and that the LD_LIBRARY_PATH includes /usr/local/cuda-8.0/lib64. So let's do this right now. We are going to vim our .bashrc. Let's add some pound signs, just to make clear that's a new section, then write CUDA, and then these two lines. So: export PATH, equal to $PATH, to which we append /usr/local/cuda/bin. And this is because we decided to make a symlink: if we go here and check this location, /usr/local/cuda, we see that it is pointing to the latest, cuda-8.0. And moreover, we have an export of LD_LIBRARY_PATH, as was prescribed before, equal to $LD_LIBRARY_PATH, to which we append /usr/local/cuda/lib64, which again is pointing to the version in cuda-8.0. Let's save and quit.

Let's try to run one demo to see whether everything works fine. So we can go here, and we see that we have the NVIDIA samples, several of them. We can go into the first one, the utilities, and then let's try deviceQuery. Right now we can simply type make. We have generated a deviceQuery binary: ./deviceQuery, and then we see that everything works. So now that we have installed the CUDA toolkit, we can also install the patch. So we run the patch runfile, I accept, and it was quick.

So let's finish the installation of the CUDA-related libraries by installing cuDNN. Let's find out where to find it: cuDNN download. Okay, yep, download, log in. All right, so let's go here. What types of data are you working with? Images and videos. What do you do? I do research. Which framework do I use? I use Torch and PyTorch, of course. So, I agree.
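The two lines appended to .bashrc can be sketched as follows. Only the exports are live here; the deviceQuery smoke test is shown as a comment, since it needs the toolkit and samples installed on the box (the samples path matches the CUDA 8.0 default).

```shell
# CUDA
# /usr/local/cuda is a symlink to /usr/local/cuda-8.0, so these lines
# keep working across toolkit upgrades:
export PATH="$PATH:/usr/local/cuda/bin"
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:-}:/usr/local/cuda/lib64"

# Smoke test, on the box with the samples installed:
#   cd ~/NVIDIA_CUDA-8.0_Samples/1_Utilities/deviceQuery
#   make && ./deviceQuery     # prints the properties of every detected GPU
```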
Let's get the latest version. Since we had to log in on the website here, we won't be able to download this file with wget on the server. So I will have to download it first on my machine, and then I'm going to just pull it from there on the server. So let's get the link, copy the link address, go here, and wget this guy. And there we go. Let's untar: tar, extract, file, the cuDNN archive. And we have a directory called cuda. So if we go inside cuda, we find the following. Let's go inside include, and we find a cudnn.h file. We are going to copy this guy to /usr/local/cuda/include. And then we're going to go one level up, inside lib64, and we are going to copy everything to /usr/local/cuda/lib64. And we just installed cuDNN.

Let's install Anaconda, which is going to install Python 3, Jupyter, and IPython automatically. We can simply Google Anaconda; we're going to get here. So we're going to choose Linux, download, copy link address, and then wget this guy. Here we go. The Anaconda file, which is not green, means it's not executable, and it doesn't have the star. So we're going to chmod it, add execution to Anaconda, and now it's green and it has the star. So we can do ./Anaconda. Uh-huh. Yep. Yes. Do you wish the installer to prepend the Anaconda3 install location to PATH in your .bashrc? Yep. And that's it. Well, let's source our .bashrc, and let's type python. Well, Python 3.6.1 from Anaconda 4.4.0, 64-bit, has been installed. Sweet.

So now we can install PyTorch with CUDA support. So let's type conda config --add channels soumith. With that, now we can simply type conda install pytorch torchvision, uh-huh; that one will also be updated. So let's see if we have a working PyTorch installation. We can type ipython, then import torch. And we can see that both pytorch and torchvision are installed; we will go with torch. Then we are going to type something which you may not understand yet, and that's okay.
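The cuDNN step is just a tarball and two copies; here is a sketch with a mock archive standing in for the real cudnn-*.tgz download, and a stand-in destination directory (on the real box the destination is /usr/local/cuda and the copies need sudo). The conda steps follow as comments.

```shell
set -eu
workdir=$(mktemp -d); cd "$workdir"

# Build a mock archive with the same layout as the real cudnn-*.tgz:
mkdir -p cuda/include cuda/lib64
touch cuda/include/cudnn.h cuda/lib64/libcudnn.so
tar -czf cudnn-mock.tgz cuda

# The actual install steps: untar, copy the header, copy the libraries.
dest="$workdir/usr-local-cuda"         # stand-in for /usr/local/cuda
mkdir -p "$dest/include" "$dest/lib64" unpack
tar -xzf cudnn-mock.tgz -C unpack      # yields a directory named cuda/
cp unpack/cuda/include/cudnn.h "$dest/include/"
cp unpack/cuda/lib64/* "$dest/lib64/"

# Then, for PyTorch itself (needs Anaconda installed and on the PATH):
#   conda config --add channels soumith
#   conda install pytorch torchvision
```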
This is just to show that you have a working installation: t = torch.rand(5). Then we can type t and return, and it's going to show us that we have a tensor of one dimension, of size 5, populated by numbers from 0 (included) to 1 (excluded), sampled from a uniform distribution. Let's now try to send the tensor to the first GPU: r = t.cuda(). Bam, the tensor has been successfully sent to the first GPU. If I type r and press enter, we can see that r is a one-dimensional tensor of size 5, and it lives on GPU 0. The values of r are the same as those of t; simply, t was residing in the RAM, so the system memory, whereas r resides in the device memory, device number 0. Let's say we would now like to send it to GPU number 1 instead. So let's call it s in this case: s = t.cuda(1), where we now specify: please send it to GPU number 1. And that was also successful, so if I type s and press return, we can see that s is a one-dimensional tensor of size 5, and it lives on GPU 1, so in the memory of device number 1. And this concludes the installation tutorial. Yay.