In this practical, we will see an overview of Torch's tensor library. Torch is an easy-to-use and efficient scientific computing framework which leverages LuaJIT, a just-in-time compiler, and an underlying C and CUDA implementation, which makes every routine very fast. It provides an n-dimensional array data structure with an amazing interface to C via LuaJIT and the FFI, the foreign function interface. It also provides a library for neural networks and energy-based models. It has fast and efficient GPU support, and finally, it is easily embeddable: ports already exist for iOS, Android, and FPGA backends. Torch's goals are maximum flexibility and the highest speed, with extreme simplicity. Let's now see an overview of the following tutorial. We will start with the generic help, and then the specific help for an item of the Torch library; the torch.type command, which tells us what kind of element we are dealing with; torch.Tensor, which is the fundamental building block of the Torch library; the # (pound) operator, the dimensionality, and the size of a given dimension; the apply function, which allows us to define a closure over the elements of a given tensor; the tensor types (byte, char, short, int, and so on); and torch.setdefaulttensortype, which is pretty useful when we would like to change the default tensor type used by Torch. Moreover, we'll see what a tensor and a storage are. We'll get familiar with the resize function. We'll also encounter a tensor type mismatch error and see how to fix it. We'll also learn the difference between assignment and the clone operator. We will then start having a look at one-dimensional tensors, or vectors, and the star (*) multiplication operator. Then we'll go and see matrices, which are two-dimensional tensors, and here too we'll see what the star operator does. Moreover, as corner cases, row and column vectors are still two-dimensional tensors, where the row or column dimensionality is equal to one. Then we will learn how to slice these tensors with the square-bracket-curly-bracket ([{}]) operator. Moreover, we will learn about some of the tensor constructors, like torch.range, linspace, logspace, zeros, ones, the eye (identity) matrix, the uniform distribution with rand, and the normal distribution with randn. Moreover, we'll learn about casting between different tensor types. And finally, we'll have a look at how to visualize these tensors with gnuplot, using gnuplot.plot and its histogram function. Moreover, we'll see how to multiply two tensors element-wise, and how to transpose a matrix, or a generic tensor, with the transpose operator. Finally, we'll see how to concatenate multiple tensors, and we will see the distinction between the plus and star operators and their corresponding add and mul functions. Finally, we'll see what resize, reshape, and view are. Although they perform very similar operations, they are quite distinct, and it's very important to keep their differences in mind.

In the previous tutorial, we learned about Lua syntax. In this tutorial, we'll see some of the Torch syntax. Specifically, you can find a write-up at the following address. Let's run our Torch REPL by typing th. We can see that pressing the question mark provides us some help to get started with. th is an enhanced interpreter, a REPL (read-evaluate-print loop), for Torch 7 Lua. Main features: it has tab completion on nested namespaces.
It can also tab complete disk file names inside open strings. If we don't want the REPL to print the result, we can suppress it by typing a semicolon at the end of the instruction. If we would like some specific help for a given function, for example torch.randn, the normal distribution, we can type a question mark followed by the name of the unknown object. We can run shell commands by typing a dollar symbol, which escapes to the Unix command line. As we saw in the last video, we can also use the function who() to list all the global variables present in the current scope. We can clear the screen with Ctrl-L. We can quit Torch with Ctrl-D. So, Ctrl-L. Let's try the help function. We can type a question mark, then torch, tab to complete, sqrt, tab to complete. I remove the open parenthesis and press enter. And here we have a summary of what torch.sqrt does. If we would like to know the type of a specific element within the interpreter, we can type torch, tab to complete again, dot, type. Then, for example, I can pass the string 'hi'; the output will be string. If I pass a number (I press up to recall the previous command), I will get number. If I pass torch itself, it's a table. As we saw in Lua, almost everything is a table: objects are tables, and dictionaries and lists are all tables. Torch is a multi-dimensional tensor library and provides mathematical operators over these tensors. Additionally, it provides many utilities for accessing files, serializing objects of arbitrary types, and other useful functions.

Let's start by seeing how to create a tensor. Ctrl-L to clear the screen. Let's define T = torch.Tensor(2, 3, 4), a tensor of dimension 2 by 3 by 4. This tensor can be thought of as two stacked matrices of three rows and four columns. If we would like to query the dimensionality of this tensor, we can type #T. We see we have two planes, three rows, and four columns. If we type T and press enter, we can see that these numbers are not initialized. Let's initialize it with the sequence of numbers from 1 to 2 times 3 times 4, i.e. 24. So let's define i = 0. Then we take our tensor T and apply a closure; we showed last time how to do so. So, a function of a variable x, which is going to be the current value passed from the tensor — which we are actually not going to use, so we can even remove it. We set i = i + 1, we return i, and we end our function. Since we didn't put a semicolon at the end of the instruction, the Torch REPL prints out the result, and we now have an initialized tensor with numbers going from 1 to 24. We can now check the type of this tensor: we type torch.type, we pass T, and we can see it's a DoubleTensor. Overall, there are these different tensor containers: ByteTensor and CharTensor, which are 8-bit tensors, then ShortTensor, IntTensor, LongTensor, FloatTensor, and DoubleTensor. If you don't specify otherwise, DoubleTensor will be used by default. We will usually set the default tensor type to FloatTensor; in this way, we can speed up computations, and in deep learning double precision is usually not required. To do so, we can type torch.setdefaulttensortype — I didn't type everything, I just typed setd and tab. Then we pass 'torch.FloatTensor'. Now, if I write torch.type(torch.Tensor{1, 2, 3}), we will see the default tensor type has been changed to FloatTensor.
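For reference, here is a minimal sketch of the commands from this segment, as they could be typed at the th prompt (the printed output is summarized in comments; torch.Tensor{1, 2, 3} is just one way of writing the final check):

require 'torch'                 -- not needed inside the th REPL, but makes the sketch self-contained

T = torch.Tensor(2, 3, 4)       -- uninitialized 2x3x4 tensor
print(#T)                       -- 2 3 4 (a LongStorage of size 3)

-- fill T with the sequence 1..24 using a closure
i = 0
T:apply(function()
  i = i + 1
  return i                      -- the returned value replaces the current element
end)
print(torch.type(T))            -- 'torch.DoubleTensor' (the default)

-- switch the default tensor type to float
torch.setdefaulttensortype('torch.FloatTensor')
print(torch.type(torch.Tensor{1, 2, 3}))   -- 'torch.FloatTensor'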
If we would like to create a specific type of tensor, let's say a byte tensor, we can still type the full torch.ByteTensor and then specify our tensor. So, back to our tensor T. A tensor is simply a view onto a specific underlying storage, which can be thought of as an array on the C side. If we would like to create another tensor, R = torch.Tensor(T), and then call :resize(3, 8) on it — so, now we had a problem: it didn't run nicely because our default tensor type is float, whereas torch.type(T) is actually double. So, let's write the same example, specifying torch.DoubleTensor here. R is now a tensor with 3 rows and 8 columns, whereas T was 2 planes of 3-row, 4-column matrices. Both R and T are using the same storage. This means that if we do R:zero(), we have erased not only the content of R, but also the content of T. If we assign S = T, this is simply an assignment of the reference to the tensor T. So, if we check the dimensionality of T, we have 2, 3, 4, and likewise the dimensionality of S is going to be 2, 3, 4. But if we do S:resize(4, 6) and check the dimensionality of T again, we will see that it is 4 and 6. So, assignments between tensors are simply a copy of the reference to the tensor; it's not a deep copy of the whole tensor. Therefore, if we would like to make a new tensor U, which has the same content as T but a completely new storage, we have to call :clone(), which performs a deep copy of the tensor T. Now U is going to be the 4-by-6 matrix. If we fill U with random values, we can check that T is unchanged.

Let's now see vectors, which are a specific kind of tensor, namely one-dimensional tensors. Ctrl-L to clear the screen. So, let's have our vector V, torch.Tensor{1, 2, 3, 4}. If we print V now, we can see it is the vector 1, 2, 3, 4. The dimensionality of V is 4. If we would like to get exactly the number 4 as output of this query, we would type V:size(1), the size of the first dimension, which is 4. If we have another vector W = torch.ones(4), we can check that it is a vector of 4 elements. We can perform a scalar product simply by typing V * W, which is 10 — basically 1 plus 2 plus 3 plus 4. Let's now create a new vector, say X = torch.Tensor{2, 5, 7, 1, 9, 4}. We can check that X has one dimension of size 6: X's dimensionality is 1, it's a one-dimensional tensor, and the size of that dimension is 6. If we would like to access, for example, the third element of X, we can do so as if it were a simple table: we type X[3], which equals 7. In a similar way, we can access the second-to-last element of the array by typing X[-2], which is the second counting from the last element, and which is the number 9, as we can see here. If we would like to extract, for example, the sub-array from 5 to 1, we can do so by typing X, square bracket, curly bracket, space, curly bracket, from the second element to the — we said 2, 3, 4 — fourth element, close curly bracket, space, close square bracket, i.e. X[{ {2, 4} }]. And here we have the sub-vector 5, 7, 1 of size 3, still a FloatTensor. Let's go back to our vector V, which contained the numbers from 1 to 4. We could also have created V simply by typing V = torch.range(1, 4), and then we have V again. We can compute the square of all these values and replace them in V by typing V:pow(2). And now V, if we print it, is going to be 1, 4, 9, 16.
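A small sketch of the storage-sharing behaviour and the vector operations just described, assuming T is re-created as the 2-by-3-by-4 DoubleTensor from before; the variable names follow the narration:

T = torch.DoubleTensor(2, 3, 4)
R = torch.DoubleTensor(T):resize(3, 8)   -- R shares T's storage, viewed as 3x8
R:zero()                                 -- zeroes R and, therefore, T as well

S = T                                    -- plain assignment: S is just another reference to T
S:resize(4, 6)                           -- #T is now 4x6 too

U = T:clone()                            -- deep copy: new storage
U:random()                               -- filling U leaves T unchanged

-- one-dimensional tensors (vectors)
X = torch.Tensor{2, 5, 7, 1, 9, 4}
print(X[3])                              -- 7
print(X[-2])                             -- 9, second element counting from the end
print(X[{ {2, 4} }])                     -- sub-vector 5, 7, 1

V = torch.range(1, 4)
V:pow(2)                                 -- in place: V becomes 1, 4, 9, 16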
If we would like to preserve the values of V, we can instead redefine V as the range from 1 to 4 and write W = torch.pow(V, 2). In this way, V is still the vector from 1 to 4 and W is the squared vector. However, we have created a new tensor and a new storage, so more memory is used. If we no longer need V after this operation, it's advisable to perform the operation in place on the same tensor, especially if it's associated with a very big storage.

We can now create a matrix M = torch.Tensor, curly bracket, space, curly bracket, 9, 6, 3, 4, close curly bracket, comma — I go to a new line just because I like it so, it's not required — second row 7, 2, 8, 1, close curly bracket, space, close curly bracket. So now M is a torch.FloatTensor of size 2 by 4. If we check the dimensionality of M, it's 2, meaning there are two dimensions. The size of the first dimension is 2: there are just two rows. And if I check the size of the second dimension, we get 4, meaning there are 4 columns in our matrix. We can check a summary of all sizes by typing #M, where we can see that we have two rows, four columns, and overall two dimensions. We can now access the element on the second row and third column; doing so, we output the number 8. We can do the same by typing M[{2, 3}]. So let's print M, so we have it in front of our eyes. We can extract, for example, the first column by typing M[{ {}, {1} }]: here we say all rows and just the first column. In this way, we get a column vector, which is different from the vectors we saw before, because in this case it has two dimensions: two rows and one column. If instead we type M[{ {}, 1 }], i.e. all rows and then simply column number one, we extract a vector, a one-dimensional tensor, corresponding to the first column of the matrix. In a similar way, we can extract the second row of the matrix M by typing M[{ {2}, {} }]: second row and all columns, and we get 7, 2, 8, 1, which is a row vector — again, a two-dimensional tensor, not a one-dimensional vector. If we would like to extract a one-dimensional tensor, we can do so by typing M[{ 2, {} }]: the second row without the curly brackets, and then all columns. In this way, we have again extracted a one-dimensional tensor whose components are equal to the components of the second row of our matrix M. Let's clear the screen. So, we start with our matrix M and a vector V equal to torch.Tensor{1, 2, 3, 4}. If we do M * V, we get 46 and 39, where 46 is the result of the first row of M (all columns) multiplied by V, and 39 is the result of the second row of M (all columns) times V. Ctrl-L again to clear the screen.

We can now see some basic constructors for tensors. The first one is torch.range(3, 8), which is simply the tensor of size 6 going from 3 to 8. If we would like to set a different step, for example from plus 3 to minus 4.2 with steps of minus 1.9, we can do so by typing torch.range(3, -4.2, -1.9), and there we go. In order to create a linearly spaced tensor, we can type torch.linspace, for example from 3 to 8 like the first range we saw, but let's say with 50 different values. And here it is; it's too long to be seen on the full screen, but we can see it's a tensor of size 50 which goes from 3 to 8.
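The slicing and constructor calls from this segment, collected in one place (the output comments reflect the values computed above):

M = torch.Tensor{ {9, 6, 3, 4},
                  {7, 2, 8, 1} }
print(M[2][3])            -- 8
print(M[{2, 3}])          -- 8, same element
print(M[{ {}, {1} }])     -- first column as a 2x1 tensor
print(M[{ {}, 1 }])       -- first column as a 1D tensor of size 2
print(M[{ {2}, {} }])     -- second row as a 1x4 tensor
print(M[{ 2, {} }])       -- second row as a 1D tensor of size 4

V = torch.Tensor{1, 2, 3, 4}
print(M * V)              -- matrix-vector product: 46, 39

-- a few constructors
print(torch.range(3, 8))             -- 3, 4, 5, 6, 7, 8
print(torch.range(3, -4.2, -1.9))    -- 3, 1.1, -0.8, -2.7
print(torch.linspace(3, 8, 50))      -- 50 linearly spaced values from 3 to 8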
We can visualize the tensor we just created by requiring gnuplot and then calling gnuplot.plot — let's copy it and check the result — and we have our linearly spaced list of values going from 3 to 8 with 50 values. Let's try something nice now with gnuplot (which is pronounced 'new plot'; it is not part of the GNU project, so it's not pronounced 'g-noo plot'): gnuplot.plot of torch.logspace, again from 3 to 8 and still with 50 points. Let's see how this differs — there it is: it goes from 10 to the power of 3 to 10 to the power of 8. If we would like the same range we got with linspace, we can type gnuplot.plot(torch.logspace(math.log10(3), math.log10(8), 50)) — these functions are from Lua's math library — and if we check now, it goes exactly from 3 to 8 with a logarithmic trend. Another way to construct a tensor is by using the zeros function: for example, we can do torch.zeros(3, 5), three rows by five columns, and there it is. We can do the same with ones: torch.ones(3, 2, 5), for example, and then we have three planes of 2-by-5 matrices. We can also construct the identity matrix with torch.eye — let's put 3, and we create the 3-by-3 identity matrix. Let's make a random tensor: gnuplot.hist(torch.randn(1000)), the normal distribution with 1000 values, and here we have the representation in the plot. If we increase the number of samples, we will get a smoother graph; let's put one million, and here we have the classic bell curve. In a similar way, we can use a uniform distribution from zero to one: I simply press up and, instead of randn, I just type rand, and here we have the uniform distribution going from zero to one. Just as an exercise, let's check the definitions of these functions. So, question mark, torch.randn: here we read that y = torch.randn(n) returns a one-dimensional tensor of size n filled with random numbers from a normal distribution with mean zero and variance one. Whereas, if we type question mark, torch.rand, and press enter, we read that y = torch.rand(n) returns a one-dimensional tensor of size n filled with random numbers from a uniform distribution on the interval from zero (included) to one (excluded).

Let's now see how we can cast between different kinds of tensors. Let's clear the screen and start with our matrix M, a two-dimensional tensor, which is a FloatTensor. If we would like a double version — let's call it DM — we can type M:double(). So, if we do torch.type(DM), it's going to be a DoubleTensor, while torch.type(M) is still the FloatTensor from before. We can also do torch.type of M:byte(), which is a ByteTensor, and if we display it we see exactly the same values, because those numbers are representable as bytes. We can also do torch.type of M cast into int, M:int(), and so on with all the types we saw before, which, if we remember, were these. Earlier we have seen how to multiply a matrix by a vector or by another matrix: for example, if we multiply M by the matrix torch.rand(4, 6), we get a matrix of two rows and six columns. Let's say we would now like to multiply every value of M by a random number from zero to one. We can do so by typing M:cmul and then torch.rand of the same size, so two rows and four columns — I forgot to close the parentheses — and now every value we saw before has been multiplied by a number between zero and one: these are the outputs, and the values of M have been replaced with these new values because we used the colon (in-place) operator here.
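A hedged sketch of the plotting, casting, and element-wise multiplication steps; the one-million sample count for the histograms follows the narration, and each gnuplot call simply opens a plot window:

require 'gnuplot'

gnuplot.plot(torch.linspace(3, 8, 50))                          -- linear ramp from 3 to 8
gnuplot.plot(torch.logspace(math.log10(3), math.log10(8), 50))  -- same endpoints, logarithmic spacing
gnuplot.hist(torch.randn(1000000))                              -- bell curve (normal distribution)
gnuplot.hist(torch.rand(1000000))                               -- flat histogram (uniform on [0, 1))

M = torch.Tensor{ {9, 6, 3, 4}, {7, 2, 8, 1} }
DM = M:double()
print(torch.type(DM))          -- torch.DoubleTensor
print(torch.type(M:byte()))    -- torch.ByteTensor
print(torch.type(M:int()))     -- torch.IntTensor

print(M * torch.rand(4, 6))    -- matrix-matrix product: 2x6 result
M:cmul(torch.rand(2, 4))       -- in-place element-wise product with uniform noise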
If we would like to transpose the matrix M, we can type M:t(), and then we have the transposed matrix. If we would like to transpose tensors with more dimensions, we can use the transpose function. For example, let's have T = torch.range(1, 24), resized as 3 by 4 by 2, so T is three planes of 4-row, 2-column matrices. We can transpose, say, dimension one, the planes, with the dimension of the columns, which is dimension three, by doing T:transpose(1, 3). Before, in the first plane we had 1 and 2 across the columns, which was the last dimension; now 1 and 2 are across the planes. Then 3 was on the second row, and the rows are unchanged, so 3 and 4 have been moved from the columns to the planes as well: we find 3 here and 4 in the other plane. Again, 5 is on the third row, and here we can see it's still on the third row, with 6 on the other plane. Across the planes we had 1, 9, and 17, and here we can see that 1, 9, 17 are now across the columns instead of the planes. Let's say now we simply wanted to change the shape of the tensor. So let's start again with our tensor T and perform a resize as in 2, 4, 3. We can see that the numbers are simply reshaped: we go from three planes to two planes, the number of columns goes from two to three, and the numbers are simply redistributed in this new shape. Ctrl-L again to clear the screen.

Let's now see how to concatenate different tensors. Say we have a tensor A, which is a row vector, torch.Tensor with 1, 2, 3, and 4, and then B = torch.Tensor with 5, 6, 7, and 8. So A is this one and B is this one. If we would like to stack these two, one above the other, we can do it with torch.cat: we pass tensor A, tensor B, and we say we would like to stack them along the first dimension, so that A becomes the first row and B the second row, and here it is. If instead we would like to concatenate these two row vectors into an eight-element row vector, we can simply type cat with A and B and concatenate them along the columns, and here we have the final output. Ctrl-L again to clear the screen.

Finally, we can see how to perform operations between tensors and scalars. Let's start with a matrix M, a torch.Tensor of three rows and five columns filled with random numbers from one to ten. So M is going to be this matrix. We can do, for example, M * 2, and the output is going to be every number multiplied by two. We can do M + 1, and then the output is going to be, as we can see, 4 plus 1, 5 plus 1, 6 plus 1. We can also divide M by a number — let's divide by minus one, so all numbers become negative — and we can subtract two, for example. If we try to square the matrix directly, we get an error, as we can see here. If we would like to raise every element of M to the power of two — M was this one — we have to write torch.pow(M, 2), and now every number is squared, but M is unchanged. Again, if M is not needed after this computation, it's better to apply the operation in place, M:pow(2), so that the same storage is reused and we do not waste space.
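A sketch of the transpose, concatenation, and scalar-operation steps. Writing A and B as 1-by-4 row vectors is an assumption here, chosen so that concatenating along the first dimension stacks A above B as described:

T = torch.range(1, 24):resize(3, 4, 2)
print(T:transpose(1, 3))     -- swaps planes and columns: result is 2x4x3
print(T:resize(2, 4, 3))     -- same 24 numbers, redistributed into the new shape

-- concatenation; A and B as 1x4 row vectors (assumption, see lead-in)
A = torch.Tensor{ {1, 2, 3, 4} }
B = torch.Tensor{ {5, 6, 7, 8} }
print(torch.cat(A, B, 1))    -- 2x4: A on top, B below
print(torch.cat(A, B, 2))    -- 1x8: side by side

-- tensor/scalar operations
M = torch.Tensor(3, 5):random(1, 10)   -- random integers from 1 to 10
print(M * 2)
print(M + 1)
print(M / -1)
print(M - 2)
print(torch.pow(M, 2))       -- out-of-place: M unchanged
M:pow(2)                     -- in-place: reuses M's storage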
The tensor documentation can be found at this link here, whereas the API for the mathematical operations can be found here. Moreover, this tutorial has been inspired by this Torch-for-MATLAB-users conversion table. Finally, the Torch website is the following.

In the last chapter of our tutorial, we will see the difference between resize, reshape, and view. Let's start with a tensor A = torch.range(1, 12), and here we have A. Let's have B equal to A:reshape(3, 4), and this is B. Then we can have C = A:view(4, 3), and if I print A now, A is still the same. The last function is going to be A:resize(1, 12): if we print A, we see that its size has been changed, so resize does change the size of the given tensor. If I now take the first six elements of A's first row and multiply them by two, for example, and I print A, we see that the first six elements have been multiplied by two. And if I print C, which was the view of A, the first two rows — in this case the first six elements — are also double their previous values. If I print B instead, B is still the same: B is a completely new tensor with a new storage, whereas A and C share the same underlying storage. If we fill B with random values from 1 to 12, for example, and we print A and C again, we see that the values of A and C are unchanged. This is because A and C share the same storage, which is different from the storage of B: A and C are simply two different views of the same storage.
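A short sketch contrasting resize, reshape, and view; the resize shape (1, 12) and the exact in-place multiplication of the first six elements are reconstructed from the narration:

A = torch.range(1, 12)
B = A:reshape(3, 4)      -- reshape copies: B gets its own storage
C = A:view(4, 3)         -- view shares A's storage
A:resize(1, 12)          -- resize changes A itself

A[{ 1, {1, 6} }]:mul(2)  -- double the first six elements of A's first row
print(C)                 -- first two rows doubled too: C shares A's storage
print(B)                 -- unchanged: B has its own storage

B:random(1, 12)          -- filling B leaves A and C untouched
print(A)
print(C)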