Welcome, Moises. Hello, how are you? Very intriguing, huh? Yeah, we'll see how it goes. You bring all the answers. So we try. All yours, Moises.

Good. Hello to everybody. We are going to try to explain a bit how we can explain artificial intelligence. And this is something about me: if you want to contact me, you can write me an email or send me a message about anything related to artificial intelligence or data on one of my social networks.

Okay, we are going to start, but first I want to recall a really interesting quote from someone who is really important right now. This sentence is from Sundar Pichai, the CEO of Google, in which he said that artificial intelligence will be more important than fire and electricity, and maybe more than the internet; it will be the next step in our evolution.

Okay, but what is this talk about? It's about explainable or unexplainable, and this is the most important question that we are going to try to answer in this talk. But why is it important to explain an artificial intelligence system? First we are going to see why we need this explanation, and after that we are going to describe how we can generate that explanation, to know what is happening inside the artificial intelligence system. Because maybe they can rebel, but we will know before that. Okay, good.

The first problem that we have right now with artificial intelligence is that it is starting to be in our lives, in many things that we use commonly. For example, cars: in some years we will have cars that drive totally alone using artificial intelligence. But more importantly, we are using artificial intelligence in medicine to try to detect some diseases, and we have to be sure that this information is correct. And there is more. Sometimes we have to understand the decisions that complex systems based on artificial intelligence are taking, because sometimes those decisions don't look good to us but are correct, and we need to accept that sometimes machines are better than us at solving some problems. Okay, then it's important to know why they found that kind of answer.

But there is more. I guess everybody wonders about this: do you know why Netflix says you have to watch this series or this TV show or this movie, and that you have a 94% match with it? I want to know, every day that I open my Netflix, but nobody tells me, and I want to know why, because sometimes that suggestion is horrible. And more important now that we are close to Black Friday: we need to understand why we get recommendations for some products. Is it really true? Is it something that I need, something that I want? Or is the artificial intelligence maybe manipulating us to make us buy those products? Again, we don't have any explanation about that, and it's something important.

But the more important thing is medicine. We are starting to use artificial intelligence to try to identify diseases, things that happen to humans, and even if we are totally sure that artificial intelligence is capable of seeing what is happening there better than medical doctors, we have to be sure and understand why it identifies something that we can't.

And this is starting to happen for another reason: it's going to be the law in the future. Right now in the European Union they are preparing a new regulation for artificial intelligence, and it divides AI systems into different categories. There is one, the highest one, for which we need to know what is happening inside and why we get a given answer when we interact with an artificial intelligence. More importantly, we need to know when there is an artificial intelligence system interacting with us. And this is not something that is happening only in Europe; it's happening in many countries, like the United States, Australia, and Canada, and others like Japan — though not in China, where artificial intelligence is commanding everything. But here in Europe, and in different Western countries, this is happening: we have to regulate artificial intelligence, and to do that we need to know what is happening inside.

Okay, now we have why we need this explanation. Now we are going to try to understand how an artificial intelligence system works, and see if there is a possibility to explain what is happening there. When we are working with an artificial intelligence system, if we are lucky and we know that there is something artificial there, we have something like this: we have the AI system, which takes information from different people, the internet, or many different sources, plus personal or user information, and after we send all of this information to the system, we get an answer. Sometimes it's important to know what is happening with this answer, and this is not only for the user; it's for all the people around the system. We have, for example, the people who are creating the system, like the data scientists or the IT people, who need to know how to monitor the answers of the system, and who need to know whether the prediction the system generates is correct.
And whether it's right. Sometimes that's impossible to know, if the model is extremely difficult or the AI system is super complex. And if we are, for example, the customer-facing people of the company, we need to know if the answers are correct, if everything is okay, if the information we are using to generate that answer is not breaking any law, and that there is no problem with the data that we are using when we build this kind of AI system.

Okay, so is there a way to generate all of this information? Unfortunately, right now there is not. This is the typical artificial intelligence system that we are creating: we have the data, which is the input; we have that magic black box, in which we have the model, the rules, whatever; and usually we have something like a wrapper around the application to simplify many things related to the AI. Once we put the information in the input, we get an answer, and that answer is unexplainable. There is no information that we can use to say: this is right, this is correct, there's an error, this information is okay. If the answer is okay for us, we say all good, this is working very well. But if not, we say: okay, it's artificial intelligence, this is not working really well, this is only for marketing, only to say that we are doing something special.

But now we are trying to do this in a different way, something like this, in which the explanation is the most important thing. Now the artificial intelligence system is explainable: there is a way to take the answer and generate an explanation, and this explanation must be clear for the user. This means the user needs to understand the explanation of the answer, because the user can then generate feedback that could be useful for the system, and the information from the user can be monitored. Now we have something much better, because the answer of the system is explainable. Maybe it's not a perfect explanation, but it's more than nothing, and we try to find a way to do that in the systems that we currently have.

Okay, then, can we do all of this with the systems that we usually use? Here we can see different kinds of artificial intelligence systems, sorted by the complexity of the models or rules they generate. The first one is the easy one, the typical rule-based system, in which we write normal rules that can be created manually or maybe with an automatic system. After that we have something really typical right now, like random forests and linear models, which are really useful, though they are starting to become a bit obsolete. After that we have the new revolution, deep learning and ensemble methods. And finally we have something experimental, the meta-learning systems. As we move along this list, the complexity of the model that the AI system generates grows: if we go to the meta-learning systems, the model is extremely complex and really difficult to explain. The complexity of the explanation follows the complexity of the model: if the model is really complex, the explanation is even more complex than that. That is something bad, because we need to build really complex explanation systems for really complex models, but it's possible.

Now we are going to see how we can explain machine learning models. Why machine learning models? Because they are the most common thing that we are using right now to generate artificial intelligence in industry, in the different companies, and maybe in the research area. This is the common thing, the thing that companies are doing to offer services to normal people.

Okay, then, this is what commonly happens when we are using artificial intelligence. We have a system that tries to identify something in a picture, and sometimes it gets confused. On one side we have a puppy, or we have fried chicken; we are not sure, both are quite similar. And we don't know why the system says there is a puppy or there is fried chicken, because there is no information, no kind of explanation about that. Exactly the same happens if we try to create a model to identify that kind of dog that is quite similar to a muffin. Again, we have an answer, but we don't have an explanation.

As I said before, this is the common thing that we have right now. This is the typical supervised machine learning system, in which there is a black box where the magic happens, and we have an answer, which is the prediction. But what is inside? Inside we have two typical processes. The first one is training, in which we use the answers and the data to generate a model. Maybe it's a good model, maybe it's a bad model; I don't care about that, it's a model, it's an artificial intelligence. After that, if we want to put that model in a system, we have an inference process, which is the phase in which we use the model: we put in information from the user and we get an answer. And again, that answer is only an answer; we don't know if it's right or if it's wrong.
Or if it's okay. We only have that answer. Maybe it's not the answer that we want, but we don't know exactly why we got it, and this is an important thing. Then, how can I explain the answer? Right now it's impossible, because the system doesn't have an explanation component to understand what is happening inside the black box.

But we can try to generate an explanation in a machine learning model, and the question is: how can I do that? How can I create that explanation? Okay, this is easy. We have to do something like this: we put in an interpretation system that takes the inference answer and generates an explanation along with the answer. Now the user that sends the data to the system knows what the answer is and has an explanation, and then the user can start to think: this is okay, this is not okay; the answer is okay because the explanation is really good and clear, or it is not. And now we know how the artificial intelligence is working. Maybe not working well, but at least we know what is happening there.

Okay, good. And which are the techniques that we usually use in machine learning to create these explanations? Is it possible to have explainability in machine learning? The answer is yes. We have different techniques that allow us to do something like that. In the picture — in this case an image — some small areas in green mark the important things that the machine learning model used to know that there is a dog there. The way to do that is to use different kinds of algorithms, and we have three basic algorithms. Okay, there are some more, but these are the most important ones. The first one is the Sampled Shapley algorithm, in which we sample Shapley values. The second one is the Integrated Gradients algorithm, in which we introduce more complexity. And finally we have the best algorithm that we have right now for pictures, which is the XRAI algorithm — the one that created the picture that we can see in the slide, in which we know which areas of the picture are important to understand that that is a dog.
Okay, but now we are going to see how these algorithms work, because each one of them is only useful for one kind of model. The first one, Sampled Shapley, is the one that we use when we have to analyze tabular information, or models built on tabular data: the typical classification or regression models of traditional machine learning systems. The second one, Integrated Gradients, is an evolution of the first one, focused on neural networks, in which we are working with really big amounts of data with a large number of attributes, where the first one does not work really well. We can even use it with X-ray images, which are really simple images that it works really well with. And finally we have the most advanced one, XRAI, the one that we are going to describe in detail in a few minutes. It is the next evolution of the second one, in which we try to extract information from the picture to build the explanation for the answer that we generate when we create a machine learning model to identify pictures or images.

Okay, well, how does each of these algorithms work? The first one, Sampled Shapley, is the easy one. It was the first one we used to try to explain what is happening there, and it's based on game theory, in which we do something like giving a reward, a score, some points, to every player of the game. And everybody thinks: okay, we are talking about machine learning, there are no players here, what's happening? In this case the players are the attributes, and we give a score to each attribute to understand what the importance of that attribute is in the prediction. To do that we have to use that large equation, which I'm not going to explain, but I'm going to give you an example to help you understand what happens there.

Okay, good. This is the example that we are going to use. Imagine that we want to create a regression model in which we want to predict the price of a house, and we are going to use four attributes: whether we are close to a park, which is something important right now; the floor; how big the building is; and whether the building allows pets inside, which is important for some people. Using these four attributes, we are going to try to predict a price.

Which are the different steps that we need to follow to use Sampled Shapley? We have to do something like this: we generate all the possible combinations of the different attributes. If we are lucky, all the possibilities will be in the training data, but this is not going to happen; then we have to work with the information that we have. Once we have all the available possibilities, we compute a number, the Shapley value. It's like a magic number that allows us to know how important an attribute is for each example, and this is how we understand whether an attribute is important or not for the prediction. For example, if the Shapley value for pets is always zero, this means that attribute is not important for the price of the building, or in this case the house, and then we can do two things: we can prune the attribute, or we simply don't care about it. And this is the way Sampled Shapley works.

Now we are going to see how we can go from Sampled Shapley to Integrated Gradients, because this kind of algorithm does not work really well with neural networks and deep learning. Integrated Gradients is an evolution of Sampled Shapley in which we use not the attributes but the gradients. And what are the gradients? This is something that lives inside the neural network training process: when we are training a neural network, after every example we compute something that we call the gradient, and this gradient is the number that we use to update the values of the neural network — not the structure, the values — and that way we can improve the model that we are generating in the neural network. That is really good, but okay, how can we use that?
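Before moving on, the Sampled Shapley idea from the house example can be sketched in a few lines of Python. This is a minimal Monte Carlo sketch, not Google Cloud's implementation: the pricing function, its coefficients, and the feature names are all made up for illustration.

```python
import random

# Hypothetical pricing model standing in for a trained regressor:
# price grows with size and floor, gets a bonus near a park,
# and ignores the pets_allowed attribute entirely.
def predict_price(near_park, floor, size_m2, pets_allowed):
    return 1000 * size_m2 + 5000 * floor + 20000 * near_park

FEATURES = ["near_park", "floor", "size_m2", "pets_allowed"]
BASELINE = {"near_park": 0, "floor": 0, "size_m2": 0, "pets_allowed": 0}

def sampled_shapley(instance, n_samples=1000, seed=0):
    """Monte Carlo estimate of each attribute's Shapley value:
    average marginal contribution over random attribute orderings."""
    rng = random.Random(seed)
    totals = {f: 0.0 for f in FEATURES}
    for _ in range(n_samples):
        order = FEATURES[:]
        rng.shuffle(order)            # one random "coalition" ordering
        current = dict(BASELINE)
        prev = predict_price(**current)
        for f in order:
            current[f] = instance[f]  # reveal this attribute's real value
            new = predict_price(**current)
            totals[f] += new - prev   # its marginal contribution
            prev = new
    return {f: totals[f] / n_samples for f in FEATURES}

house = {"near_park": 1, "floor": 3, "size_m2": 80, "pets_allowed": 1}
phi = sampled_shapley(house)
# pets_allowed never changes the prediction, so its Shapley value is 0:
# exactly the "prune it or ignore it" case described in the talk.
```

Because this toy model is additive, the estimates are exact and the values sum to the difference between the prediction for the house and the prediction for the baseline, which is the completeness property Shapley values guarantee.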
In inference? Because the gradients are computed during training, and if you remember, training happens before the model is generated, while inference is where we need the explanation. So we compute the gradients at inference time too. True, that is not something common, but if we do it we can take the gradient values and compute something similar to know whether an attribute is really important or not. That is good, because we can generate something like the graph that we can see in the slide, in which we know whether each attribute is really important for the prediction or not. If an attribute has a zero, that attribute is not important, and that is really useful because we can prune it: it's not important for the model, and nothing happens to the explanation if we drop it. For the other kinds of attributes, we can see whether they matter. In the example we can see that some attributes have a negative value, which means they push against the prediction, and some attributes have a positive value, which means they are really important for the prediction. Then we can show something like that to the user, and the user can know what is happening in the prediction: they can see whether each piece of information that they are sending, and that the algorithm is using to make the prediction, is important for it or not.

Good, now we know how Integrated Gradients works, but we go further, to the important thing: how the XRAI algorithm works. This is an evolution of the previous one that tries to do exactly the same — okay, not exactly, with some changes — over images. When we are working with images we don't have attributes, that's totally true, but the pixels of the image are the new attributes that we have in the XRAI algorithm. And how does the XRAI algorithm work?
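Before getting to XRAI, the attribution computed by Integrated Gradients can also be sketched. This is a toy version under stated assumptions: the "model" is a logistic regression with made-up weights so the gradient has a closed form, and the path integral is approximated with a simple Riemann sum, as the method prescribes.

```python
import numpy as np

# Stand-in model: sigmoid(w . x) with invented weights.
# Note w[2] = 0, i.e. the third attribute never influences the output.
w = np.array([2.0, -1.0, 0.0, 0.5])

def model(x):
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

def grad(x):
    p = model(x)
    return p * (1.0 - p) * w  # analytic gradient of sigmoid(w . x)

def integrated_gradients(x, baseline=None, steps=200):
    """Approximate IG: average the gradient along the straight path
    from the baseline to the input, then scale by (input - baseline)."""
    if baseline is None:
        baseline = np.zeros_like(x)
    diff = x - baseline
    total = np.zeros_like(x)
    for k in range(1, steps + 1):
        point = baseline + (k / steps) * diff
        total += grad(point)
    return diff * total / steps

x = np.array([1.0, 0.5, 3.0, 0.0])
ig = integrated_gradients(x)
# ig[2] is 0 (dead weight) and ig[3] is 0 (input equals the baseline):
# exactly the zero-valued attributes the talk says we can prune.
```

A useful sanity check, and the property that makes the slide's bar chart meaningful, is completeness: the attributions sum (up to the Riemann-sum error) to the difference between the model output at the input and at the baseline.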
Okay, imagine that we have the picture that you can see. The first thing that we have to do is to execute over the picture something like Integrated Gradients. We run this algorithm in a similar way to what we explained, but over the picture, in this case using something like a black-and-white model, because we are ranging over the values of the pixels, which lie between the black and white colors. Once we have that information — the value, or the importance, of every pixel for the prediction — we do another thing: we introduce another kind of algorithm. It's not something new; it's something really old, one of the algorithms that we used 30 or 40 years ago in artificial vision to extract information from a picture. We make a segmentation of the different areas of the picture. Once we have these two pieces of information, we combine them in a process in which we join the importance of the pixels with the segments, the different areas that we detected with the other algorithm. Now we have those colored areas, in yellow and green, over the picture that we are analyzing, and we can try to get an explanation. This information, the color that we have here, is really important: if we are close to yellow, that means the pixels in that area are really important for the prediction; they are the important parts of the picture for knowing what is there. And if we are close to purple or blue, that information is not really useful for the prediction; it's not important.
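The combination step just described — joining per-pixel attributions with a segmentation — can be sketched on a toy example. This is only an illustration of the idea, not the XRAI implementation: the tiny "image", its attribution values, and the hand-drawn segment mask are all invented, and a real pipeline would get the attributions from Integrated Gradients and the mask from a classic segmentation algorithm.

```python
import numpy as np

# Toy 4x4 "image": per-pixel attributions (as IG would produce)
# and a segmentation mask assigning each pixel to a region.
attributions = np.array([
    [0.9, 0.8, 0.1, 0.0],
    [0.7, 0.9, 0.1, 0.0],
    [0.2, 0.1, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
])
segments = np.array([
    [1, 1, 2, 2],
    [1, 1, 2, 2],
    [3, 3, 4, 4],
    [3, 3, 4, 4],
])

def rank_regions(attr, seg):
    """Average the pixel attributions inside each segment and rank the
    segments from most to least important - the core idea of turning
    per-pixel saliency into region-level ("yellow area") explanations."""
    scores = {s: float(attr[seg == s].mean()) for s in np.unique(seg)}
    order = sorted(scores, key=scores.get, reverse=True)
    return order, scores

order, scores = rank_regions(attributions, segments)
# Segment 1 (top-left) holds the high-attribution pixels, so it ranks
# first: that is the region the explanation would highlight in yellow.
```

The highest-ranked segments are the "close to yellow" areas from the slide; the lowest-ranked ones are the purple/blue areas we discard.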
We don't care about those areas; there is no information about our model there. Okay, that's good. Then we have something like a ranking, in which we know which are the good pixels and which are the bad pixels for knowing what is happening there, and after we have that, we can start to see what is happening. Finally, we do something like this: we take those areas that are close to yellow, and that is the information that we use to explain what happened there. In this case we have some bears, or a butterfly, and that is the class that we are going to generate. Now the user has this information, to know why the model says that there is that kind of animal or insect inside the picture.

Okay, good. That is perfect, but if I want to do that for my models, how can I do that? Is there any tool, any algorithm that I can use, any way to do that with my current machine learning models? Yes, there is, but currently only in one cloud. The XRAI algorithm is only available in the Explainable AI service that Google Cloud has. Okay, this is a shame, because if you are using another kind of cloud you can't use it — maybe you can take the algorithm and try to run it in your own system — but right now the best option is to use Google Cloud. And where is this algorithm? It's inside Vertex AI, which is the new tool for training, deploying, and monitoring machine learning algorithms, in which there is a special box that we call Explainable AI. To use it, we do the following — okay, sorry, there was a mistake here; this is the first part.

We have to install the libraries that we need. Of course we need TensorFlow, but we also need the SDK for Explainable AI that Google has available, which can be installed as we can see in the picture. After that, we have to write the different functions that we need to define the input and the output for the pictures that we are going to use, because this model is going to be in the cloud and we have to define a standard input to process any picture or image, so that we can get the explanation and the prediction. After we define all the functions that we need — for preprocessing, for explanation, for everything — we can start defining our model, and we have to define how our machine learning model will work with explanation. If we want to put it into the cloud, we have to do something like this. If you look at the second-to-last parameter — not the last one, the second if you start counting from the end — it's the definition of the explanation method that you want to use when you upload this machine learning model to the cloud. There you can put XRAI, or if you want you can put Integrated Gradients, or you can even choose the Sampled Shapley method. When you finish this process, your model is ready and you can start to use it, and this is the way to use it: this is how we send an example, an instance, and get a prediction and an explanation.

Okay, good. What does that explanation look like? It's something like this. Do you know what this is? Maybe it's a lion. Maybe it's a cat. Maybe it's a beaver. I don't know. Okay, we are going to look at the raw information that XRAI gives us, because from this information we choose only those special areas that are really important for the explanation. No, it's not a cat. It's not a beaver. It's not a dog.
It's not a lion. It's a raccoon, and our model says that it's a raccoon with a score of eighty-four percent. Okay, it's a baby raccoon, it's cute, and we have an explanation about what happened there and why the machine learning model says that it is a raccoon.

Good, but we can try to do more. For example, we can try to identify things like this. In this case we don't know what is happening there; the explanation is really complex, and the human can't understand what is there, but the machine can. Imagine what is going to happen in 20 years, when machines use only some information from a big picture, or from long videos, to identify anything in the picture. As I said at the beginning, sometimes machines are better than humans, and we need to understand why they choose one answer, even if we don't understand it really well.

Okay, to finish, the last thing that I want to say is that at Paradigma Digital we are hiring, and if you want to join our team to do things like this, you can send me an email or contact our people team. And cool, there is nothing more; please, if you have any questions, let me know.

Okay. Wow. Thank you so much, Moises. My head is about to explode, but apart from that we're fine. Wow. Thank you. Take your time, because that was non-stop. We have a few questions, but not much time, so I'll ask you — and take your time, actually. They ask you: can you use the explainability models for chatbots?

Yeah, maybe you can, if the chatbot has a machine learning model inside; that is possible, and most chatbots are using a machine learning model to generate the answer. Then it is possible, but it depends on how the chatbot has been built.

Okay, totally agree. I totally agree with him, of course. They also ask you: there is also a very similar library from IBM for XAI — explainable AI, sorry. Do you have any comment about the similarities, the differences, any preference?

Okay, most of the libraries for explanation have most of these algorithms, but in the case of XRAI it's only available right now on the Google Cloud Platform, because it's something that they developed inside their platform. But I suppose that in the future this algorithm is going to move to the different platforms and to other kinds of libraries; for now the latest version is only there.

Okay. Working in AI, in a way, is like having a magic crystal ball and predicting the future, so it's a must question: as this is the Big Things conference, what is the next big thing that you are looking forward to, or that you are expecting?

Okay, first, I think that artificial intelligence is not magic; it's only maths and numbers. There are some formulas, there are some linear and non-linear functions and equations, and there is no magic. It's a bit complex, and we can't understand all of it, but I think that the next step is to move these explainability algorithms to other kinds of artificial intelligence which are more complex than machine learning — for example heuristic search, automated planning, or other kinds of artificial intelligence tools. Maybe that is the next step. Or maybe another kind of artificial intelligence that we will see in the coming years. We'll see at the next Big Things.

Well, we will stay tuned. Being from Paradigma Digital, you're part of the family of the Big Things conference. When he was coming in for his talk, Moises was saying: oh, how good was Ruben! If you haven't seen that talk — he just came for Moises — please try to watch it on YouTube later on. What are you looking forward to at this Big Things conference 2021? Is there anything that shocked you, that you were or weren't expecting? And tell our viewers, for tomorrow — because we only have one last keynote this afternoon — what shouldn't they miss? Obviously nothing at all, but
What should they shouldn't miss obviously nothing at all, but Okay, depending of where is your area? Okay, we have talks for everybody not only for artificial intelligence There are talks for people that is working in the different areas of the competition Then if you want to learn if you want to discover something new You can to watch the next day all the talks of some of them in the big things Then it's something in which in where you can discover new things that can be useful for you or only to Increase your curiosity about other things like this for example Well, this is a big change from last year because from a year Uh, a lot of new things have come. I was expecting the jupitan notes and uh But every year there is something new. Okay, we'll see what is happening in the next year But i'm totally sure that it's gonna be something great Well, give us something give us a bit more ruben. Come on. I mean moises give us. Uh, what are you working on now apart from? What what is come on come on. Okay. For example, we are trying to working with Federated machine learning It's something that we told last year and now it's starting to work more Because there are many devices in the world and now For example the personal information is starting to be really really important Then maybe in the next year Yeah artificial intelligence is going to be more personal more in your device more only for you And it's going to be combination with other people But without sending your personal information because I don't know but I don't want my pictures in any place in the world Training new models and in the future that is going to happen because privacy It's going to be super important in artificial intelligence. This was mentioned this morning During the the first half of the day about the the importance of personalizing the AI But you started your talk talking about Netflix and other similar platforms. 
Let's say and they all Suggest but they are not so good Why what's what's what's missing because they are taking your information Putting in the server and they know everything about us and maybe we don't know we want control What kind of information extending and if the explanation that we have for the artificial intelligence? It's okay with us because maybe The explanation and the answer are not only for us are the answer that the company wants and maybe that is not good for us We have to we have to sort that actually uh sandra timon from microsoft and anne they were mentioning When we want to go to a restaurant. I don't even want to read the the menu anymore. Just tell me what I want Sometimes we do actually want them to tell what we want, but according to our needs and what we really Like I don't want to read the menu, but I don't want that my information Were there in the world then we have to find a balance exactly between which information we want to share And we know but of course we want artificial intelligence to improve our style of life style personalization and privacy are It's possible. Is it possible? Of course We can try to do with all the kind of tools that we have like federated machine learning for example That is a starting to grow during the COVID Federated machine learning, you know federated machine learning just one last thing Moises we mentioned a few books. We're going to talk about richer benjamin's book that we actually have amongst our prices if you are you know Accumulating big points give us give us a title something that you're studying Even if it's a novel a romantic one one one is the most Good for me. There are two Big books one a futuristic church and the other is the one that Peter know this world about artificial intelligence And the fundamentals of all of this for me are the best in that but depends on the people Okay. Well, there you go. 
Uh, I got so much information so interesting We thank so much moises moises martinez from paradigma digital He's coming for sure next year with a lot of of new stuff an old one that he's working on. So, thank you so much moises