Hello, can you hear me? Yeah, how are you? I'm hungry because I was nervous and I went through it. So, my presentation is about machine learning, linear algebra, NumPy, and a little bit of TensorFlow, just one example. I will try to connect all of this. The idea I want to express is quite complex, so let's start. I will state my motivations at the beginning and also in the conclusions. I will talk about vectors, in the linear algebra sense and also with NumPy, and we will explore some simple examples.

As we know, in order to understand machine learning very well we need to know linear algebra, but we also need to know calculus, probability, and statistics. Usually, mainly in recent years, we find these kinds of tutorials: "from zero to hero", or "do it in 30 days". I don't totally agree with these tutorials, unless you are a mathematician or a physicist. Many people entering data science have a background in software development, so in my case I prefer to take the hard path: learning, or reinforcing, my skills in mathematics. It's only my opinion. Why? Because usually the good data science positions are taken by mathematicians or physicists. I want to get a good position in the future: to be not only the guy who is parsing and struggling with data, but the kind of guy who can read a paper and implement it.

So I started some time ago, and I didn't start out writing code like this. This is real.
I mean, it's in my GitHub. Actually, it's not Python, it's R, and you can see that there are two for loops and also some bad indexing. I didn't start out coding machine learning in this style because I knew too little about vectorization. For example, in order to create a distance matrix, I was looping over the rows twice. So when I rediscovered how to do this using vectorized code, I was almost crying, because it was an elegant solution, and not precisely this kind of mechanical solution.

Well, let's jump to the concept of vectors. We will find different notations, sometimes with an arrow or in bold style, but what actually matters is that a vector is only a collection of numbers. There are some operations that are important and that, in my opinion, we need to internalize, not just memorize, because they can become useful when you are solving a new problem. For example, the length of a vector is actually the Euclidean distance. We also have this formula for the distance between vectors, and they are still similar. And an important one: the dot product. If you see here below, the dot product of a vector with itself is actually the length of the vector, but squared.

So, let's do something more Pythonic. This is just a simple visualization, but we can do something else. We can draw the very basic example, I mean the 3-4-5 example, the Euclidean one. Let's imagine that we have one vector going from the origin. (Sorry, just one moment.) Yeah, that is the most basic example. It is a good idea to think in vectors by doing this kind of visualization.
So when we want to calculate the sum of these two vectors, it is easy to see where it will go: from this arrow to this one. In this case the length of this side is three and this side is four, so the result will be five. That is very useful for internalizing the formulas, but in data science we usually find only scatter plots. In my case it was useful to internalize the formulas by visualizing the vectors like a physicist would, and then taking just the tip of each vector as the point.

In NumPy there are some functions to do these operations. One important one is the norm, which is the same as the magnitude. With the norm we can actually get those distances easily. So imagine that this is all we have implemented. This one is easy: what is the size of this vector b? Do you want to say something? Yeah, three. So yeah, it's an easy one. Then let's use this same norm to calculate the distance between b and c; we just need to do this.

Okay, well, this is becoming boring, so let's go for one more example. But actually, let me go back. You know that the winter is coming.
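The norm, distance, and dot-product operations just described can be sketched in NumPy like this (the vectors here are illustrative, chosen to match the 3-4-5 triangle example; `b` and `c` are made-up names):

```python
import numpy as np

b = np.array([3.0, 0.0])       # the side of length three
c = np.array([3.0, 4.0])       # the hypotenuse vector of the 3-4-5 triangle

# The norm (magnitude) of a vector is its Euclidean length
print(np.linalg.norm(b))       # 3.0

# The distance between two vectors is the norm of their difference
print(np.linalg.norm(c - b))   # 4.0

# The dot product of a vector with itself is its length squared
print(np.dot(c, c))            # 25.0, i.e. norm(c) ** 2
```

The same `np.linalg.norm` call covers both the magnitude and the distance, exactly as described in the talk.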
I mean the artificial intelligence winter. Well, it's actually not coming, but one curious thing is that in the past we already had an artificial intelligence winter. For some time, because of a paper published by Minsky and Papert, research was cancelled in almost every important university, so for 15 years or so we didn't have research on neural networks, and that was called the artificial intelligence winter. People are saying that we will go through a second winter, but actually no. What exactly were these guys saying in this paper? They claimed, actually they proved, that it was not possible to solve the XOR problem with one perceptron, because we know that perceptrons work as linear separators. But what I want to show today is that we can solve the XOR problem using one neuron, and we will see this later. With one perceptron, or more formally, with one neuron.

We will also check an example of linear regression, just multiplying vectors and matrices. Well, this is the algebra behind linear regression, and this one, if we write it with the summation symbol, is actually equivalent to this one in matrix form. I don't want to spend time on the derivation, but believe me because I am the speaker: this is the solution. So actually we just need to do seven operations with one matrix and one array and we will get the minimum value here, one solution. And this is pretty fun, because if we don't know this, maybe we will try to solve the problem using for loops, as I did before.

So let me go to the code. What I did, I mean the most important line, is this one; it is the same as this formula. But this one is also important: we take X, the data set, and we need to stack one column with only ones. This is because of this term here, the b: imagine it has some ones multiplying it.

Do you know these plots?
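The recipe just walked through, stacking a column of ones onto X and then applying the normal equation beta = (XᵀX)⁻¹ Xᵀ y, can be sketched in NumPy as follows. The data here is made up for illustration; it is not the data set shown in the talk.

```python
import numpy as np

# Made-up data lying near the line y = 2*x + 1
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])

# Stack a column of ones so the intercept b gets multiplied by one
X = np.column_stack([np.ones_like(x), x])

# Normal equation: beta = (X^T X)^{-1} X^T y
beta = np.linalg.inv(X.T @ X) @ (X.T @ y)
intercept, slope = beta
print(intercept, slope)
```

In practice `np.linalg.lstsq(X, y, rcond=None)` computes the same least-squares solution more stably, but the point of the talk is that the plain formula is already enough.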
Or this data set? They are very interesting. These plots, or these mini data sets, are used to show correlation. We can see that they are very different, but the line that is drawn through them is actually the same. So what I did is just get the X and y, then call the function here, and I get the solution, the minimum, which is actually nothing but the intercept and the slope, or the slope and the offset, but in matrix notation. It's working.

But we can also do this in TensorFlow, which is good. In this case we define in placeholders these two variables, the data set and the labels. This is not very elegant, it should be more elegant, but it does the same operations as before. You see, this matmul here is the multiplication between these two matrices; it's nothing but the dot product, but with matrices. So without using any library or any function for modeling, in TensorFlow or NumPy, we can implement a linear regression, because we know the linear algebra behind it. It's working. Here we execute the TensorFlow session, we get the minimization, and then we just plot the line.

Then the next example. Do you think that it is possible to solve the XOR problem using only one neuron? Who thinks so? Well, I think so, so I want to show this. I generated some random points: these points here are just collocated in four squares, and then I placed more points around these ones, and I get this data set, just the XOR problem. We cannot solve this using one line, but we can solve it using one neuron. Usually in the literature we always use sigmoid functions, and we know that sigmoid functions are monotonic: they always go from the bottom to the top. But what if we use a non-monotonic function in the perceptron, for example the cosine function? This function goes up and down, and with it we can solve this problem with one neuron. So here is the code for the perceptron.
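The claim that a single neuron with a non-monotonic activation can separate XOR can be checked with a minimal sketch. This is not the trained perceptron from the talk: instead of learning the weights, it hand-picks them, using the fact that cos(pi * (x1 + x2)) is negative exactly when x1 + x2 is odd, which is the XOR truth table.

```python
import numpy as np

def cosine_neuron(x, w, b):
    """A single neuron with a cosine activation, thresholded at zero."""
    return 1 if np.cos(np.dot(w, x) + b) < 0 else 0

# Hand-picked weights (illustrative, not learned)
w = np.array([np.pi, np.pi])
b = 0.0

# All four XOR corners are classified correctly by this one neuron
for point, target in [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]:
    print(point, cosine_neuron(np.array(point), w, b), target)
```

A sigmoid neuron with any weights would fail here, because thresholding a monotonic activation always yields a single straight decision line.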
I just modified the activation function and also the unit step; I mean, I just did the subtraction here. So, let's just run this. Oops, here it is. Actually, I'm getting the data from some files. Yeah, this is the cosine function; we can imagine it as if we were seeing the sine wave from above. You can see that here it is maybe going down, and here it is going up. So let's run this again. Okay, this seems much better. It's almost perfect from the beginning. What we can see here is that these points are correctly classified, and these blue ones here on the border are not classified, but after three epochs the classification is perfect.

So what else can we learn in linear algebra in order to have a solid background? Of course, we should go for the gradient descent algorithm, which is the algorithm that everyone is using nowadays. It's very simple, and it's actually just a lot of multiplications between matrices. Another concept that is quite popular is eigendecomposition, because eigenvectors and eigenvalues are used to implement PCA or LDA, I mean dimensionality reduction. Another concern we should be aware of is numerical instability: sometimes we are dealing with very big numbers, or very big matrices, or very small numbers, so we should be careful with the functions we are using and check the documentation.

Here is some good material to follow up with this; at least in my case I have been checking these ones, and they are very good. There is a specialization offered on Coursera with three courses to learn linear algebra, statistics, I think, and I don't remember the other one, but it seems a very good specialization. And this crazy guy is offering some courses for free on YouTube. I guess you know this guy, right? Who knows him? Okay, then it's good that you don't know him, because this guy is amazing.
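Gradient descent, mentioned above as the next thing to learn, really is just repeated matrix multiplications. Here is a minimal sketch for the same linear regression setup, assuming a mean-squared-error loss and a fixed, hand-tuned learning rate (the data is made up):

```python
import numpy as np

# Noise-free made-up data on the line y = 2*x + 1
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0
X = np.column_stack([np.ones_like(x), x])   # ones column for the intercept

beta = np.zeros(2)                          # start at [0, 0]
lr = 0.05                                   # learning rate (hand-tuned)
for _ in range(5000):
    gradient = 2 * X.T @ (X @ beta - y) / len(y)   # gradient of the MSE
    beta = beta - lr * gradient

print(beta)                                 # approaches [1.0, 2.0]
```

In practice you would stop when the gradient norm is small instead of running a fixed number of iterations, but each step is indeed nothing more than a couple of matrix products.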
He's like a comedian: he's doing comedy but also teaching machine learning, and this series is pretty, pretty nice. Another resource, a very formal one: there is a chapter in the Deep Learning book. It is not light material, but it is like a bullet list, so you can check which topics you need to learn. Yeah, that's all from my part. So, do you have questions? I have time for questions.

Audience: You have some references that you put at the end, books that people should read. What do you think is the hardest part when you are self-teaching, when migrating from a more math background to a more data science background?

I should say maybe staying constant, because there are some concepts that can be boring at the beginning, or you don't know why you are learning them. But you need to learn them, and after some time you will realize that they were useful concepts. Yeah, okay. Anyone else has questions?

Audience: Could you share your source code in a GitHub repository, so everyone could read it again?

Oh, yeah, I will put it in the slides on the EuroPython page, and then I will put the link there.

Okay, so let's thank the speaker again.