All right, very well. Can everyone see the board here? Yes? Very well. So over there you have my Twitter account. It's a test to see whether you are active on Twitter. Go there. Thank you very much for your support.

All right, so I'm going to try to entertain you for the next half an hour with some machine learning. I'm not sure about your previous knowledge of machine learning, so I'm supposing you know nothing, I'm starting from scratch, from zero, and I'll build up some concepts. Again, if you don't follow what I'm saying, stop me, ask me to repeat or to clarify. Okay? You have to nod your head; otherwise I don't know if you're listening and understanding what I'm saying. So nod, okay? You can also do the head wobble if you're Indian, I'm fine with that too. All right, sorry about the joke.

So, my name is too long. Usually you can shorten it by just using the first three letters, so I go by Alf. Easy, right? And that's it, so let's get started.

So, machine learning in 2018 is actually called "deep learning". Why is that? Because it's cooler. More or less it's the same stuff as the last 30 years, but now we have better hardware, so it goes faster and we can do much fancier things.

All right, so let's start with the taxonomy of machine learning: the learning paradigms and the algorithms. Here I'm just going to give you a very brief overview of what exists in this field. I don't try to be exhaustive; I'm just giving you an introduction so far.

So we have machine learning, which can be divided into three main subcategories. The first one is called supervised learning. What is supervised learning? It means you have a sample, and then someone tells the machine: okay, that sample is an apple, perhaps; it's a fruit, it's an apple. I provide another sample, maybe another fruit: that's an orange. Then I provide a third example, a yellow-shaped fruit: that's a banana. So supervised learning is basically providing a machine a sequence of training samples, with a label associated to every sample, and I just ask my machine to learn the correspondence between the sample and the label. So again, in supervised learning you get a sample and then you get feedback from the teacher, who is providing supervision for the student to learn. Okay?

The second one is called, instead, unsupervised learning. Here the teacher is too expensive; we don't have a teacher, they just went on strike. You have the kid with a banana, an orange, and an apple, trying to figure things out on their own. So you no longer have the teacher telling you the names, perhaps in English; you just have the samples you can play with, and you're trying to understand the internal structure of that kind of data. But again, there are no labels provided.

Finally, in this family here, we have reinforcement learning. What is reinforcement learning? Basically, you provide a sample to your machine, you provide another sample, another sample, another sample, and then, bam, you provide the label. So you have some kind of delayed label. What am I talking about?
So for example, this is the setup when you are playing, let's say, Super Mario on your Game Boy, if you are, you know, old; or maybe some other game. Basically you have to perform a sequence of actions, like having your agent jumping around in the video game, and only at the end do you see: oh, you succeeded and pass to the next level, or you actually died at the end, so you should have changed your strategy.

So these three categories can be divided based on when supervision is provided. In supervised learning you have supervision, which is the label provided with the sample every time. In unsupervised learning there is no supervision. And then there is the last guy, where you provide some labels, basically some supervision, later on. Okay, so these are the main three categories someone can divide machine learning into.

Today we are actually going to be focusing on the first part, supervised learning, which is the easiest part. Given that we have those samples and those labels, that's the easiest thing you can think of: just have a machine learn a mapping from those samples to the labels. Those other things on the right-hand side are used for different setups, which are a bit more complicated. For example, unsupervised learning is used when you don't have those labels, because labels are usually very expensive: they require some human knowledge, someone providing supervision, and as I said before, the teacher costs money and goes on strike, so you don't always have the capability of having labels for your data. The last one, instead, is basically about having your machinery interact with some other thing, with the environment.

So, for example, you can have a supervised learning algorithm which is maybe classifying faces, classifying cats versus dogs, making some classification; or doing regression, like trying to predict the temperature at some specific place on the planet, given some information such as how long the Sun was visible at that specific point on the globe. That's supervised learning for doing regression. And, as I was saying, reinforcement learning can also be used with such algorithms, where the algorithm may interact with the environment and get feedback from users, and then the system tries to improve with feedback which comes, you know, later on in the game.

All right, so this was just a really brief introduction to what we may see in the next lessons. Clear so far? Yes? No? I talk a lot, I know, sorry about that. Next one.

All right, so as I said before, in the supervised branch we have classification, where we try, for example, to classify oranges versus apples versus bananas. All of them are fruits, all of them belong to the same kind of family, but we'd like to distinguish between different categories. There, you cannot express those categories with, let's say, a number; you prefer to say a sample is associated to group A, group B, group C. Whereas with regression, we try to infer a real number, for example the temperature at a point on the globe, or maybe the location of a point on a screen. So whenever you have to end up with values which are scalars, varying continuously, we would like to perform regression; classification, instead, tries to put things into buckets. Okay, so far? Good. All right.
Let's do something cooler. Next one. So "machine learning": that was the initial title of the thing, but we actually care about deep learning. Huh, okay, the title there is wrong; this title should be "machine learning algorithms". We love deep learning.

So, the first one over there, ANN, is short for artificial neural network, which is going to be the main topic of the current and the following lessons. Now, are there only neural networks in machine learning? No, there is a lot of other stuff, which doesn't really work, but I'm going to show it to you just for, you know, politeness. So we have support vector machines, we have decision trees, we have naive Bayes (I don't know how to pronounce that word), and we have hidden Markov models: a lot of stuff which doesn't really work as well as artificial neural networks do. All right.

So where is this "deep learning" coming from? That guy over there splits into two parts: there are shallow networks and there are deep networks. Now, I have a question for you. Since we are all playing with and talking about deep neural networks and deep learning and all those very deep fancy terms, why would anyone play with shallow nets? (Someone answers: they are easy to train.) Definitely, they are easy to train, but there's more. The point is that we didn't have the hardware. Only recently, let's say in the last five years, have we had GPUs and enough compute to perform the many computations associated with very large neural networks. "Deep" neural nets means very long, you know, massively large neural networks, which can be trained only if you have enough power, and that becomes available to you only if you use, for example, acceleration with graphics cards, GPUs. Okay, so far, is it clear what I have been talking about? Any questions so far? No? All right.

So, is there anything a shallow network can do that a deep network cannot do? Yes: be fast. And it's not actually a joke. If you are speaking about language embeddings, for example, they use one layer, basically, of a neural net, and they do perform. I mean, shallow networks are actually used in practice whenever you really need to push performance forward and you don't need that much modeling power. Okay, so shallow networks, although they may sound like a joke, are still very commonly used and are very useful, especially in those cases where you need really fast computation and you don't need that much power for modeling your kind of data. Okay, so please ask questions, even if they may sound silly; it's all good, we learn from that. Yes, please.

(Question from the audience: so why does anyone use any of the other machine learning models, if deep learning is so much better now?) So, did I say you shouldn't use SVMs, the support vector machines? I never said so. I simply suggested that this works better. Okay, of course that's controversial, I know. I'm joking. Okay.

All right, then the whole point is that these solutions over here are what we call end-to-end.
So you have some inputs on the left-hand side, yes, and you have some outputs. You have your fruits on one side, and here you have your names: banana, apples, and oranges. You just put fruits in one side and you end up with labels on the other side; you don't have to do anything else in between. If instead you use those other guys over there, you may need to perform some kind of feature extraction first, maybe with hand-engineered features. If you're working with images, you may use HOG filters and other crappy things that have been used in the past in order to extract some features from the original data. When we use deep neural nets, usually the solution is said to be end-to-end: you just have the raw input pixels of an image, or, you know, the raw data from an audio file. You use whatever data you have; you don't have to apply any kind of hand-engineered feature extractor in order to use these kinds of solutions.

That's basically why these kinds of algorithms have been working very, very well recently: whenever your system is itself learning the feature extractor, it will be the optimal feature extractor for the data you provide it, right? Someone might say: okay, I'd like to extract some sounds from, you know, a speech recording, so maybe I would like to plot the spectrogram, with time on one axis and frequencies on the other axis. You can end up with very different ways of extracting meaningful information. With deep nets, you usually just provide raw data on one side and you enforce some specific output label on the other, which could be, as we saw before, some scalar number for regression, or some categorization, basically some bucket: an indication of which category a specific sound or image or any kind of input belongs to.

So, in short, if I try to reiterate and summarize what I said: with this solution here, you usually just have the deep net. You don't know how it works; it just works. Black magic. Seriously, we don't really fully understand what's inside. That's why you have many people who are still doing research here and asking for money to do research in this area, because they say: oh, we can interpret our solutions, and interpretability is better than making it work. I don't think so, but, you know, up to you. If you want to make stuff work, I would say use this stuff. Then it's up to you; it's experimental, okay? You can try all of those things, but, you know, something works, something doesn't. Yeah.

(Question from the audience: how do you decide whether a network is shallow or deep?) Let's say one layer is shallow and more than one layer... no, a few layers, let's say. It depends on which domain you're talking about. For image classification, I would say everything above five layers is deep, and less than five layers is shallow. If you have audio, you may have different numbers. I think it's very dependent on the kind of input you feed these networks. This is just the introduction to my talk, so let's not try to focus too much on those little details.
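To make this end-to-end, shallow-versus-deep picture concrete, here is a small PyTorch sketch (not the slides' code; the layer sizes and the three-class fruit setup are invented for illustration). Both models map raw pixels straight to class scores, with no hand-engineered feature extraction in between, and the only structural difference between "shallow" and "deep" is how many layers you stack.

```python
import torch
from torch import nn

# Both models are end-to-end: raw pixels in, class scores out.
shallow = nn.Sequential(                  # "shallow": a single hidden layer
    nn.Flatten(),
    nn.Linear(28 * 28, 128), nn.ReLU(),
    nn.Linear(128, 3),                    # 3 classes: apple / orange / banana
)

deep = nn.Sequential(                     # "deep": same interface, more layers
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 3),
)

x = torch.randn(32, 1, 28, 28)            # a batch of 32 raw grayscale "images"
print(shallow(x).shape, deep(x).shape)    # torch.Size([32, 3]) for both
```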
All right, okay, so this was the introduction. Now, what do we do? I mean, how do we implement this stuff? We're going to be using PyTorch. So, I was planning to go through a tutorial on how to use tensors, but I was thinking of skipping this part for the sake of, you know... oh no, would you like to see that tutorial on how to use PyTorch tensors? Yes? Okay. All right, it's just something really basic. You think we should do it? You're the expert. Yes, I know, I'll see. All right, let me think. Okay, all right, let's do this tutorial then, and I'll show you.

So, if you managed to install the Anaconda and Jupyter things, you can follow along; if you haven't, it's fine as well, because I'm going to be showing everything here. So, I am inside my repository here, and I just run Jupyter. Okay, this is on my local machine; I haven't set anything else up around this one. And I open this notebook. It's black because it's cool. You don't have to have a black theme, but my eyes are happier seeing black stuff. And I go to the first tutorial, the tensor tutorial.

If you haven't used the Jupyter notebook before: you can execute things by pressing Control and Return, or, if you'd like to execute and move to the next cell, you can press Shift and Return. So if I press Control-Return here, I execute the highlighted cell and I stay in that cell; if I press Shift-Return, I actually move down.

So, how do I get help? For example, say I don't know what functions there are. In the first line there I imported torch, and here I just press Enter to edit the cell. I can go just after the `sq`, I press Tab, and, for example, I can see the autocompletion. Here you can see there is a square root, `sqrt`, there is a square root with an underscore, `sqrt_`, there is `squeeze`. So any time you don't know, you just press Tab after you type some letters, in order to get some kind of completion.

Going to the next thing: let's say I know something ends with "Tensor", but I'm not sure what comes before the "Tensor". (Can you see? Is it too small? Should I make it larger? I can make it larger if you want.) All right, so here, let's say I'd like to know all the functions that end with "Tensor". So I write `torch.`, then the unknown part as a star, then `Tensor`, then a question mark: `torch.*Tensor?`. If I execute this one, for example by pressing Control-Enter, I see below all these completions; for example, we have `torch.ByteTensor`, `torch.CharTensor`, and so on. So any time, you can just replace the unknown part with the star, and then you can figure out what the other options are.

Okay, now say I'm here and I'm not sure how to use this module, this `torch.nn.Module` thing. If I press, within the parentheses here, Shift and Tab, I see a small overview of the help for that function. So if you do Shift-Tab, you're going to see: "Base class for all neural network modules". Very nice, so I can get some information. Otherwise, I can do basically the same by typing a question mark after the name, so `torch.nn.Module?`; if I execute this line, a message pops up below with the full help for that specific function. Let's say now I'd like to know more; I'd like the source code of something. Sweet: I just put a double question mark, I execute it, and here I see the full source code of whatever I'm looking at.

Okay, so these are very nice, handy ways of getting help for anything you're doing: you do Shift-Tab and you get the quick memo about what you're doing, and you can use one question mark or two question marks. All right, that was it. Great.
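Here is a compact recap of those help idioms. Note that the `?`, `??`, and `*` wildcard forms are IPython/Jupyter features, so in a plain Python script you would use `help()` instead:

```python
import torch

# In a Jupyter/IPython cell:
#   torch.sq<Tab>       -> completions: sqrt, sqrt_, squeeze, ...
#   torch.*Tensor?      -> wildcard search: ByteTensor, CharTensor, ...
#   torch.nn.Module?    -> docstring ("Base class for all neural network modules")
#   torch.nn.Module??   -> full source code
#   Shift-Tab inside the parentheses -> quick signature/help popup

# Outside a notebook, plain Python gives you the same documentation:
help(torch.nn.Module)
```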
Now, let's start here with torch. Before, we imported torch; but what is torch? For this small tutorial, torch is going to be simply a tensor library. What are tensors? We're going to figure out very soon what they are. Let's write this one: so we have `t = torch.Tensor(2, 3, 4)`. If I execute this one, for example pressing Shift-Return, I'm going to see: oh, t is a tensor. Wow, okay. Are you here? Are you following? Are you awake? Is it boring? Oh, okay, I thought it was all right. Sorry about that.

All right, so here we have created a torch tensor. What are the sizes of this tensor? I just type `t.size()`, and here I see: oh, the sizes of this tensor are 2, 3, and 4. You can also (I press B to go below here and create a new cell) do `t.shape`, for example, which is very common for whoever uses NumPy. If I press Enter, it's going to be exactly the same as what I've done before. So in this case the tensor has size 2, 3, and 4, which means there is some memory allocated: the space is 4 times 3, twelve, times 2, 24, so there are 24 slots allocated somewhere, which are organized in a specific way; I'm going to tell you more later. Then this is just a fancy way of printing those sizes: I use some, you know, basic Python in order to print that the size of this tensor is 2 × 3 × 4; and I also show you that you can use Unicode in Python.

All right, so let's see how much memory this stuff occupies. Here we have that this tensor is basically a 24-dimensional vector: if you have one tensor of size 2, 3, and 4, that tensor can also be thought of as one point in a 24-dimensional space. Okay? So if you think about a 2-D space, a plane; a 3-D space, a volume; a 24-dimensional space, whatever space: this tensor is just a point over there. And overall it has three dimensions, okay? So when I type `t.dim()` here, I ask how many dimensions this guy has, and in this case it has three dimensions. And what are the sizes of these three dimensions? You have seen that before: 2, 3, and 4. If you multiply 2 by 3 by 4, you get the 24, basically the degrees of freedom this tensor has. So, again, this is like a point in a 24-dimensional space.

All right, so let's actually print this tensor. Is everyone following so far? Yes? Did I lose anyone? Are there questions? Okay, should I continue? Thank you so much. Well, there is a question over there. (Question from the audience: what was the key for creating a new cell again?) So, I just press B to go below; you can press A to go above. You automatically create one cell above or below. This works in the notebook; I don't know about the other tools people use here.

All right, so I keep executing. Okay, stop me again if you need any clarification. Is there any problem here in the first line? Okay, all right. So here I'm executing this `print(t)` and, boom, disaster. What happened? I don't know: "overflow when unpacking long", or something like that. So something happened here; this tensor was basically never initialized, so it contains whatever garbage was sitting in that memory, and printing it blew up. I didn't follow my own instructions, I'm not sure. Anyhow, something happened here, I'm not sure exactly what, but I'm just going to replace whatever is inside this t. So, whenever we are using a function with a trailing underscore, we are going to be overriding the content of something.
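Here is this first part in one runnable cell; a sketch, assuming the capital-T `torch.Tensor(2, 3, 4)` size constructor, which allocates memory without initializing it (that uninitialized garbage is what made the print blow up):

```python
import torch

t = torch.Tensor(2, 3, 4)  # allocates 2*3*4 = 24 slots, content NOT initialized
print(t.size())            # torch.Size([2, 3, 4])
print(t.shape)             # same thing, NumPy-style attribute
print(t.dim())             # 3: the number of dimensions
print(t.numel())           # 24: one point in a 24-dimensional space
```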
So here I write `t.random_(10)`. In this case I just override the content of that initial tensor with random integers from 0 up to 10, excluded. And it basically fills up these two tables of three by four. So right now maybe you start understanding what this tensor is: before, we said the sizes of this tensor were 2, 3, and 4, right? And here you can see that there are two matrices of three by four. So maybe you start getting an idea of what a tensor is, but I still haven't defined it. Right now, if I execute this line that before was crashing (I had no clue what was going on), we actually see that the content of t has been overwritten. Okay? Right, so far so good. Okay, all right, so let's keep going.

In this case, I write `r = torch.Tensor(t)`: so, basically, create a new tensor from t. And then I'd like to resize it, with `resize_`, with the underscore. What does the underscore mean? Can you remind me? It overrides; it changes something in place, okay? Yes, it's a Torch-specific thing. So, in this case, r at the beginning here is going to be just a clone of t, loosely speaking. Okay, it's not a clone, it's not a proper copy; it's a kind of replica. I'm trying to find the right word, the opposite of "in place"... yeah. Okay, just forget what I'm saying; just read what's happening here. r is generated from t, and then I resize r. So if I execute this line here, you can see that the overall tensor, which before was expressed as a couple of two matrices, now has a different size, but the content is the same.

All right, so now, what I'm doing here: I write `r.zero_()`. Can you guess what's going to happen? It's going to replace the content of r; and now, what happens to t if I execute this line? The point here is that everything gets zeroed. So this is an important difference. If you're familiar with MATLAB, for example: there, every time you copy things, there are memory copies. Here you have to be very, very careful, because in Torch, by default, you are basically sharing pointers to the same allocated memory. So r and t share the same storage, the same space in the RAM, in your memory system, and they simply have, for example, different views, different kinds of shapes, or anything you may like. Okay? So by default, whenever you create new tensors from other tensors, you're just reusing whatever was there before, and anything you do to one tensor is going to be reflected in the other one. For example, if you load some data from NumPy and you import it inside Torch: if you change stuff in Torch, the NumPy array is going to be changed in the same way, and if you change things in NumPy, you also get the Torch stuff changed.

There's a question over there. (Question from the audience: with `resize_`, you have to keep the same number of elements, correct?) That is correct, because I just changed only the shape of this tensor. All right. So here we have seen that zeroing a tensor that has been created from a previous tensor creates, you know, this kind of problem.
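That whole pitfall, in one runnable cell; this is a sketch of what the notebook does, assuming `torch.Tensor(t)`, the legacy constructor, which makes a new tensor object on top of the same storage:

```python
import torch

t = torch.Tensor(2, 3, 4)
t.random_(10)        # in-place (trailing underscore): integers from 0 to 9

r = torch.Tensor(t)  # new tensor object, SAME underlying storage as t
r.resize_(3, 8)      # same 24 elements, viewed with a different shape

r.zero_()            # zeroing r...
print(t)             # ...shows t all zeros too: they share the same memory
```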
I call them problems because sometimes you forget about this. That's why it's very important, and I like to highlight this: it's very important to make a clone any time you'd like to store some intermediate result. If you don't do that, maybe you build a dictionary, or a list of several tensors, during some process, and then at the end you check all those tensors, and all of them are the same tensor. So if you'd like to spawn out a new tensor, you need to use the `clone` command.

So here I have `s`, which is a clone of r. Now let's fill s with ones, okay? So now my s is going to be a tensor the same size as r, but filled with all ones. And if I actually execute this line here, I'm expecting r to still be all the zeros we have seen before, because we made a clone. So here there is a memory copy, and this implies there is some, you know, time spent on the memory-copying part. That's why memory copying is not the default: we always want to minimize the time spent during computations, and if you have memory copies everywhere, things get very, very slow. Right? So far everyone is up to speed? Did I lose anyone? No? Okay, so let's keep going.

Vectors: what are vectors? They are simply 1-D tensors. (We also have scalars, zero-dimensional tensors, but I didn't put them here.) Okay, let's go: vectors. So here, in this case, my vector v is a tensor whose content is 1, 2, 3, and 4. So, what is the number of dimensions, and what is the size of the first dimension? Who can guess? It has just one dimension, and the size of that specific dimension is 4, because here you can see those four elements. So a vector is simply a one-dimensional tensor.

All right, so let's go on. Here we have, for example, w, which is my weight vector: 1, 0, 2, 0. If I'd like to perform an element-wise multiplication, I just use this simple star, and if I type `v * w`, I'm going to get simply 1, 0, then 3 times 2, which is 6, and 0 again, right? So this is element-wise multiplication; it's very commonly used, for example, for masking an input. If instead I'd like to perform a scalar product, where I'm basically computing the projection of one vector onto the other vector, I just use the at operator, `@`. So, if I execute `v @ w`, it gives me a scalar tensor of value 7: this one is 1 times 1 plus 3 times 2, right? So this is 1 plus 6, and this is the scalar product of the two vectors we have seen before. All right. Good. Nice.
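In code, those two products look like this (a minimal sketch):

```python
import torch

v = torch.Tensor([1, 2, 3, 4]); w = torch.Tensor([1, 0, 2, 0])

print(v * w)  # element-wise: tensor([1., 0., 6., 0.]), handy for masking
print(v @ w)  # scalar (dot) product: 1*1 + 2*0 + 3*2 + 4*0 = tensor(7.)
```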
(Question from the audience: why do you have a semicolon in the line where you define v? Is that special syntax?) Okay, so the semicolon here allows me to write two different Python instructions on the same line. I could have simply pressed Return and used two lines, but it takes more space, so sometimes I prefer to just put it in one line.

All right, so let's go on. Here I'm going to show you a random vector of size 5: so the number of dimensions is 1, and the size of the first dimension is 5. If I execute this one, we see that the first number can be addressed by putting square brackets and the number zero, and the last number can be extracted by putting minus one in the square brackets. All right. If I'd like to extract a sub-tensor, I can just say from which element (for example element number one, which is the 7) up to element number two, which is the 5; and I have to put a plus one, because Python considers the last element excluded. So this one is the sub-tensor [7, 5]. Again, it's very important: if I'm zeroing this guy here, I'm basically writing zeros into those locations of the original vector, okay? There is no memory copy happening here, because a slice is just a view onto the same storage; Python considers the first element included and the last element excluded. So here I'd like to go from... okay, no, that was it. Sure.

All right, so let's get back to my vector v. My vector v was the 1, 2, 3, 4 one-dimensional tensor. I can actually reproduce the same result by using `arange`, going from 1 to 4 plus 1. So this is another way of writing a simple range tensor. I can compute the power of this guy by simply using `v.pow(2)`: okay, so `v.pow(2)` raises every element of my vector to the power of two. But again, if I print v, we see that v is the same, right? We didn't change the content of v. In order to change the content of v, and raise every element of v to the power of two in place, we just have to use the underscore here, `v.pow_(2)`. So if I execute this line here, we're going to see 1, 4, 9, 16, and the same here: the content has been replaced. So in this case we have replaced the content. Again, with the underscore after the name, you're going to be doing a destructive operation. Right, good. All right.

Next one: matrices. So what are those guys? They are simply two-dimensional tensors. I can, for example, define them in this way, by writing `torch.Tensor` and then just providing two lists of numbers, and here I show you this guy. So if I ask for the dimensions: how many dimensions does this tensor have? Two, right, because there are two dimensions; it's a matrix. If I show you here the sizes of these two dimensions: the first one is 2, so there are two rows, and there are four columns, so overall the sizes are 2 and 4. How many elements does it have? If I ask for the number of elements, we see there are eight elements. So again, this matrix can also be thought of as a point in an eight-dimensional space. And here you have a list of how you can address different elements: for example, you can extract one specific scalar, or you can extract the same exact scalar in another way; you can extract one column; or you can actually preserve the dimensionality. I would say they are not too important, but at least here you have the whole combination: so here you extract a row, and here you extract a one-dimensional tensor.
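Put together, the indexing and in-place bits look like this (a sketch; the values in the random vector will of course differ from the 7 and 5 on the slide):

```python
import torch

x = torch.randn(5)            # random 1-D tensor of size 5
x[0], x[-1]                   # first and last elements
x[1:3]                        # elements 1 and 2: the end index is excluded
x[1:3] = 0                    # writes into the SAME storage: a slice is a view

v = torch.arange(1, 4 + 1)    # tensor([1, 2, 3, 4])
print(v.pow(2))               # a new tensor [1, 4, 9, 16]; v itself is unchanged
v.pow_(2)                     # in-place: now v really is [1, 4, 9, 16]

m = torch.Tensor([[2, 5, 3, 7],
                  [4, 2, 1, 9]])
m.dim(), m.size(), m.numel()  # 2, torch.Size([2, 4]), 8
m[0][2], m[0, 2]              # the same scalar, tensor(3.), in two ways
m[:, 1]                       # one column, as a 1-D tensor
m[[0], :]                     # the first row, kept as a 2-D (1 x 4) tensor
```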
So in this case here, when we use the extra square brackets around the index, we actually extract a still-two-dimensional element from a two-dimensional tensor; otherwise, we extract a sub-dimension. All right, almost finished.

So in here, I just create my v, the range going from 1 to 4, and I multiply it with the matrix. m was this matrix, right? I show you here: the 2, 5, 3, 7 and the 4, 2, 1, 9. And here I perform the matrix-times-vector multiplication using the `@` operator, and here we get the 49 and 47, which is simply the scalar product of the first row with the vector, and of the second row with the vector. That's classical. Then all the usual operations are defined on matrices as well: summation, subtraction, element-wise multiplication, and element-wise division. And then we also have the transposition, which can be performed by typing `.t()`, or in the same way by fully writing out the `transpose` name.

(Question from the audience: that was the matrix multiplication, right? The one with the `@`?) Where is it... here, `m @ v`, right? So I show you: yes, m is a matrix and v is a vector, and here I have matrix times vector, and here I show you the result: 49 and 47 is the result of the scalar product between the first row and the vector, and the second row and the vector. Does it make sense? All right.

So, constructors: there are many, many ways of creating those guys. For example, you can use `arange`, you can use `linspace`, `zeros`, `ones`, `eye` for the identity, and you can also create random tensors, by using `randn` for normally distributed data and `rand` for just the uniform data. The different tensors also carry a specific type: they may be float, they may be double, they may be long, they may be integers, and whatnot. So you can cast between different types of tensors; as we have seen before, we can type `torch.*Tensor?` to get the full list of these formats. These are the different formats of the underlying memory used by the different tensors.

I'll show you how to cast different things, which is kind of straightforward. So here I show you my matrix m, and I can cast it as double: in this case it's going to be a float64, whereas before the matrix was simply a float32. Underneath, here, I can just cast it as byte. So here, every time I'm casting the tensor to a different format, I'm actually going to make a memory copy, of course, because we have different sizes of memory. (Question from the audience: and if you cast back, you will lose the fractional part; how does the rounding work?) The rounding is going to be... it's like casting in C language, right? If you cast a double to an int in C, you get simply the ceiling... (Correction from the audience.) That is correct, sorry, my bad: you get the floor. Yeah, you get the floor whenever you're casting. All right.
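Here are those pieces in one cell (a sketch; note that v has to be a float tensor for the product with the float matrix m to work):

```python
import torch

m = torch.Tensor([[2, 5, 3, 7],
                  [4, 2, 1, 9]])
v = torch.arange(1., 4 + 1)   # tensor([1., 2., 3., 4.]), float like m
print(m @ v)                  # matrix x vector: tensor([49., 47.])
print(m.t())                  # transpose, same as m.transpose(0, 1)

# A few of the constructors mentioned:
torch.linspace(0, 1, 5)       # 5 evenly spaced points between 0 and 1
torch.zeros(2, 3); torch.ones(2, 3); torch.eye(3)
torch.randn(2, 3)             # samples from a standard normal distribution
torch.rand(2, 3)              # samples from a uniform distribution on [0, 1)

# Casting copies the data into a differently-sized memory format:
print(m.double().dtype)       # torch.float64 (m itself was float32)
print(m.byte().dtype)         # torch.uint8: fractional parts are floored away
```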
All right, so one nice thing is that if you have a CUDA device, meaning basically if you have a GPU in your machine, you can also do very interesting things. For example, you can send a tensor to CUDA, which means you're actually copying your tensor from the RAM of your machine to the memory on the device, on the actual GPU. Okay, in my case I don't have CUDA, so I cannot run this line; but otherwise, if I had a GPU, I would be able to basically move the tensor from my own RAM to the graphics card. So in here, since we wrote that the device is going to be CUDA if there is CUDA, and otherwise CPU, in this case m is simply "sent" to the CPU, so this line does nothing. You can type this line and you're going to be able to see whether you have CUDA on your machine or not. If you're working on the JupyterLab server, you will have CUDA.

All right, so the other nice part is that you can go back and forth between NumPy and Torch without any issue. So in this case here, I'd like to have my m tensor in a NumPy format, so I can simply say `m.numpy()`, and I'm going to have my matrix converted into a NumPy array. So if I execute this line, we see that this guy now is a NumPy element. So you can go back and forth between Torch and NumPy without any memory copying, which is a very good thing. For example, here, on my m NumPy array, I set the element (0, 0) to minus one. Okay, so if I execute this line, you can see the first element over here is set to minus one. Now, if I print m, which is my tensor in Torch, you're going to automatically see that its first element is set to minus one too. Okay? So there is no memory copying, once again. Unless you cast things, or unless you copy them from CPU to GPU, there are no memory copies.

Finally, here I write `n_np = np.arange(5)`, an arange from NumPy, and then I say `n = torch.from_numpy(n_np)`: Torch, initialize basically from NumPy. And then I print both of them, and here it doesn't work, because I haven't imported NumPy; so I go above, I do `import numpy as np`, and then I can execute this line, and here you can see my NumPy array, and the tensor I created from the NumPy one. So this is very, very useful: whenever you have a data loader already implemented for you, for whatever application, in NumPy, you can simply reuse it in Torch, and you can simply convert NumPy arrays into Torch tensors without any issue. The point is that whenever you are in Torch, you can also use those tensors on the GPU, in order to perform accelerated computations.

(Comment from the audience: NumPy has arrays too, but they don't run on the GPU, right? And it's worth pointing out that just about everything you've shown so far are NumPy features; it sounds like Torch, for this part so far, is a thin wrapper around NumPy, with different names and such. A lot of the names are different.) Yes; although it's not a wrapper on NumPy. They are now trying to actually make the names very similar, but those things are implemented with different libraries. So yes, they try to make it look as close to NumPy as possible, but it's a separate implementation.

Are we done? In five minutes, three minutes? Okay, all right. Yes, all right. So, the last few lines and we are done here. In this case I have n, which is my tensor, and I multiply it by two in place, with the underscore: `n.mul_(2)`. So what's going to happen here? My tensor was the tensor from 0 to 4, and multiplied by two it's going to be, of course, from 0 to 8; but I'm also going to be replacing the content of `n_np`. Do we understand these two lines, what's happening? Yes? Okay, very well. So here you can see the content of the NumPy array has been replaced by performing an operation on the tensor.

Oh, finally: you can concatenate stuff together by just using `torch.cat`. So here I create these two guys, which are two different row vectors, and here, for example, I can concatenate those two along dimension zero, and on this line here I can concatenate them along the other direction. You can find more information if you go on the PyTorch page; you can click here on the link. Okay.
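The NumPy bridge, in one runnable cell (a sketch; on a CPU-only machine the `.to(device)` line is a no-op):

```python
import numpy as np
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
m = torch.Tensor([[2, 5, 3, 7], [4, 2, 1, 9]]).to(device)

m = m.cpu()        # a tensor must live on the CPU to share memory with NumPy
m_np = m.numpy()   # zero-copy bridge: a NumPy view of the same storage
m_np[0, 0] = -1
print(m[0, 0])     # tensor(-1.): the change is visible from the Torch side

n_np = np.arange(5)
n = torch.from_numpy(n_np)   # zero-copy in the other direction
print(n_np, n)
```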
So here you have the full description of the whole tensor API. And this was simply a quick primer on tensors; there is one last recap sketch below. Stay tuned to hear about the next things. If there are no other questions, I'll see you after the break.
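(A compact recap of those last cells, for reference; the shapes of the two concatenated vectors are assumed for illustration:)

```python
import torch

n = torch.arange(5)      # tensor([0, 1, 2, 3, 4])
n.mul_(2)                # in-place: n is now tensor([0, 2, 4, 6, 8]); a NumPy
                         # array sharing its storage would see the change too

a = torch.randn(1, 4)    # two row vectors
b = torch.randn(1, 4)
print(torch.cat((a, b), 0).size())  # stacked vertically   -> torch.Size([2, 4])
print(torch.cat((a, b), 1).size())  # stacked horizontally -> torch.Size([1, 8])
```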