Okay, so let's start — let me find the clicker. Okay, I'll do it from here. So as I was saying, we're going to talk about the intersection between design and data. First of all, I wanted to share a couple of facts with you. There are so many facts about artificial intelligence in the news and everywhere now, but two of them caught my attention because I think they show well that we still have a way to go to make these systems work really well, and especially really well for humans. One of them is this very famous quote by Pedro Domingos. His book has been out for three years or more; it's very good, I recommend it. He says that people worry that computers will get too smart and take over the world, but the real problem is that they are too stupid and they have already taken over the world. I'm not going to go into the whole philosophy of this phrase, but I want to point out that he's showing us that maybe we are looking in the wrong direction when we think about the problems AI could bring. The second, which I think is kind of funny, is the fact that whenever we are on the internet, we need to prove all the time, to all the machines, that we are not machines. It's as if they were running a reverse Turing test on us constantly, so we can prove that we are human. And the point is not so much that the systems don't trust us; it's that the systems don't trust all the machines that could be manipulating them. So the learning for me here is: are we setting up AI systems with the right goals? If we have to do this kind of thing to prove that we are human, are we on the right track?
So with all this — and I could go on with other examples — if I try to get to the core of the problem: whenever we develop this type of machine learning or AI system, we are jumping from the concept that shaped how we built digital products in the past, human-computer interaction, to a completely new concept, human-computer relationship. In the past we had software programs that performed well for certain tasks, humans used them, they worked, and that was it. Now we are interacting with systems that learn from us, that learn from the environment, and that adapt to it. This is really, really powerful, but it's also really risky, because we don't always know how it is going to develop. It also opens the door to very, very personalized services, and whenever you get personal, the potential to fail is even bigger. So I wanted to share a few ideas on three concepts that are very much interrelated: human-centered design, co-evolution, and design for trust. There is a lot of overlap between them, but this is the way that works for me to share them with you. We'll start with human-centered design. This is a pretty old idea already: how do we design things that are really focused on humans but at the same time work in real life? IDEO, the famous design company from the Valley, came up with this concept of where viable and sustainable innovation lives: in the middle of three lenses. Desirability, the human perspective — is this something humans want? Feasibility — is it technically possible? And viability — is it sustainable from a business perspective?
Innovation happens in the middle of these three. And IDEO was very good at explaining why we should always start from the human need, and then run an iterative process until we find that sweet spot of innovation. The iteration is well summarized by two moves: first you speculate, then you critique, then you speculate again, then you critique, and you continue until you come up with something that is worth it. My point here is that this is completely valid and I agree with it, but whenever you have machine learning in the loop, you need to start in a different place. And that place is the intersection between design and data science. Why? Because data, and data science, is not just a way to make an idea possible. Data can be the idea itself. So I think it's very, very powerful to put these two perspectives together — very different disciplines, very different mindsets — and I think that combination is behind many of the very successful machine-learning-based digital products we see on the market. One example is a product some of you will know from Spotify: Discover Weekly. Anybody using Spotify who knows Discover Weekly? As some of you know, it's a feature within Spotify that surfaces music you might like but don't know yet, or haven't listened to yet, based on everything you listened to during the previous week. The way I imagine this was created is that first somebody looked at the data and said: hmm — today they have around 85 million paying customers — we have enough data to think about helping people discover new music. So let's try to do it.
So they created an algorithm, and once they had something that worked, they probably sat down with a designer who asked: should we just throw a lot of new stuff at people? Should we base it on the music they've listened to during their whole life? And they probably landed on: let's do it for the past week and see if it works. They designed an experience to surround the algorithm, put it on the market, and started watching whether people used it, how much they used it, whether they listened to the recommended songs to the end, whether they added them to their favorites or not. Then they fine-tuned the algorithm and the experience until they found something that really worked. And from that idea they created other products that are not based on last week but on the different genres you listen to. I'm sure that behind this type of product, and behind the Netflix dashboard you see when you log in, are products designed with data science and design in the same room. That's my hypothesis. My whole point here is that we live in a world where many digital services have one single goal, which is our attention. All the internet services based on advertising have as their main goal to keep our attention on them. We think this should be complemented by focusing on some human goals at the same time. I'll show you some examples now, but we can design for discovery, as Spotify did — Spotify wants us in front of the application, of course, but they are helping us discover new stuff, which is useful for us. We can design for uncertainty and for decision-making. We can design for awareness, for time well spent, for peace of mind, to remove friction from our lives, and also to save money. And on this there's a post I strongly recommend.
It was written by my friend Fabien Girardin some time ago, but the concepts explained there are still very valuable, because I don't see much of this happening at the company level, apart from the internet giants. Some examples, just randomly picked. We have these thermostats that learn from the way you interact with your house and your heating system, build a model, and adapt — turning on and off whenever that saves more, given the way you live in your house. We have Kayak, the travel portal, with those small green indicators, little pieces of information telling you whether the price of the ticket you're about to buy is likely to go up or down. They give you information at a moment of uncertainty so you can make a better choice. Of course we have Amazon Go, the shop where you just walk out without going to any cashier to pay; everything happens with cameras that watch you and record what you are purchasing. Then we have Google Clips, where it remains to be seen whether it's as useful as they claim. It's a camera that is supposed to solve this problem: if you have kids, you know from experience that at a school performance you always end up watching your kids through the screen. You don't see them in real life; you're missing a very, very big moment because you are looking through the screen. With this camera, you place it somewhere, you train it to find the person you are interested in, and the camera takes care of taking the good pictures. This is still to be seen.
Then we have this feature from Capital One, the American bank, which to me is a good example of peace of mind: they send you an email about any suspicious activity on your card, and you just have to say "no, everything is okay" by pressing the blue button. But if you don't recognize any of the charges, or you're not sure, you click the red one and they take care of everything. To me that gives you reassurance and peace of mind. And you can of course design for awareness, like this funny Data Detox Kit that gives you recommendations and apps to get a detox treatment from data, to understand what is really going on with your data online and to help you become less dependent on certain applications. So, just a few examples. The second area which I think is key is co-evolution: systems and humans learning from each other. [Video plays: "Call Gerardo." "Name unknown." "Gerardu." "Calling Gerardo." "Hey Gerardo, did you call?" "Yeah, are we still on for coffee?" "Yeah — running a little late, but we'll be there." "Gerardu." "Calling Gerardu." "Sorry, I accidentally hung up." "That's fine, so are we meeting at Howard Park?" "Yeah, let's meet in like half an hour." "Okay."] Okay, did you see how she learns how to interact with the machine so it works? It goes in both directions. This co-evolution is probably more important than we think at the moment. And the tasks we on the product development side should take on are: of course we need to understand humans and work there, but we also need to help humans understand how the machine works. A couple of ideas here — there are many, many things, and we could spend the full day talking about this. To understand humans, you have to set things up so that the product you develop is not something that begins and ends, where you put it on the market and forget about it.
You have to continuously test and learn; a machine learning product is a product that is never finished. And this is not that easy to understand in certain areas of big organizations, because they are used to having product development here and operations there, completely separated. With a machine learning product, that just doesn't work. Then of course we need to keep the feedback loop permanently open: implicit feedback, looking at the data and at how people use the products, as the Spotify people did when they created Discover Weekly, and also explicit feedback, taking advantage of specific moments where we can ask people whether the things they are using are really useful for them or not. And then something that might seem a bit more technical, on algorithm performance: understanding the trade-off between coverage and precision is also very important. Sometimes I've been in front of business areas that ask, "okay, but your predictive algorithm — what is the error rate?" And I ask, "okay, but what do you want to use it for?" "No, no, what is the error rate?" Well, it depends on how you are going to use it. It depends on the coverage you want: if you want it for six million people, the precision will be different than if you want it only for 100,000. So that's important. Then, on helping humans understand the machines, a few ideas. Be clearer about your goal: explain what you are trying to achieve and how you are measuring it, which is normally quite hidden in products today. Launch functionality progressively, so you don't go to the most sophisticated feature from the very beginning, because as in any other aspect of life, trust takes time. We are not going to jump from a situation where we're just trying to sell people stuff straight to a chatbot the person is going to trust 100%.
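The coverage/precision trade-off mentioned here can be made concrete with a tiny sketch. The scores and labels below are invented for illustration, but they show the mechanic: raising the confidence threshold serves fewer people (lower coverage) while being right more often (higher precision).

```python
# Hypothetical scored predictions: (confidence, was_correct).
# Illustrates the coverage/precision trade-off the talk describes:
# a higher confidence threshold covers fewer people but is right
# more often on the ones it does cover.
preds = [
    (0.95, True), (0.91, True), (0.88, True), (0.80, False),
    (0.72, True), (0.65, False), (0.55, True), (0.40, False),
]

def coverage_and_precision(preds, threshold):
    # Keep only predictions confident enough to act on.
    kept = [correct for score, correct in preds if score >= threshold]
    coverage = len(kept) / len(preds)
    precision = sum(kept) / len(kept) if kept else 0.0
    return coverage, precision

for t in (0.5, 0.9):
    cov, prec = coverage_and_precision(preds, t)
    print(f"threshold={t}: coverage={cov:.2f}, precision={prec:.2f}")
```

With these made-up numbers, a 0.5 threshold reaches most users at ~71% precision, while a 0.9 threshold reaches only a quarter of them at 100% precision — which is exactly why "what is the error rate?" has no single answer without a target coverage.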
We need to go step by step and add functionality as we build that trust. Be careful with over-humanization. I think this is also dangerous: we are seeing many systems that try to mimic humans, and this might give people the impression that they are human in more than just the way they interact. For instance, Alexa is now great at interaction, but it's not great at reasoning. We can confuse people, because if systems interact as well as humans do, people will assume they reason as well as we do — and they don't; that will take time. And then, try to explain how things work. This might be challenging, but I don't see many explanations out there of how, say, a recommendation system works. On showing how things work, I very much like this example that maybe some of you know, called Teachable Machine. It's a website where you can understand how machine learning works by training a system with your face. You make gestures, pull different faces, and train the system, so that whenever I do this, a picture of a little cat appears or a sound plays. You can train the system yourself, as you can see here with this lady. Very funny, very interesting — it works really well with kids; I tried it myself with mine. And last, I have to confess that I bought an Alexa ten days ago, when it came out in Spanish. After a few hours I was trying to find funny things to do with Alexa for my kids, and then this video came up, and I couldn't help sharing it with you today. [Video plays: a child asks Alexa to play a song, and Alexa answers, "You want to hear a station for porn detected—" "No, no, no, no!" "Alexa, stop!"] Okay, so I didn't show my kids that one. I know what to do now. But yes, we need to learn how to interact with these systems so they can work for our benefit. The last of the three topics is design for trust.
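As an aside, the Teachable Machine demo can be sketched as a nearest-neighbour classifier, which is roughly how the original version worked on top of image embeddings. Everything below — the tiny 2-D "feature vectors" standing in for embeddings, the labels — is invented for illustration:

```python
import math
from collections import Counter

# Toy nearest-neighbour classifier in the spirit of Teachable Machine:
# each pose is reduced to a feature vector (in the real demo, an image
# embedding), and the prediction is the majority label among the k
# closest training examples. All numbers here are made up.
def predict(train, x, k=3):
    # train: list of (feature_vector, label) pairs
    nearest = sorted(train, key=lambda ex: math.dist(ex[0], x))[:k]
    labels = Counter(label for _, label in nearest)
    return labels.most_common(1)[0][0]

train = [
    ((0.10, 0.20), "cat-gif"), ((0.20, 0.10), "cat-gif"), ((0.15, 0.15), "cat-gif"),
    ((0.90, 0.80), "sound"), ((0.80, 0.90), "sound"), ((0.85, 0.85), "sound"),
]
# A new pose near the first cluster is classified as "cat-gif".
print(predict(train, (0.12, 0.18)))  # cat-gif
```

The point of the demo, mirrored here, is that "training" is nothing mystical: you show the system a handful of labelled examples and it matches new inputs against them.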
We've talked about trust a little already, but the point here is that it's not just trusting that things will work, and that they will be secure and safe. It's the trust you need to have in something, or somebody, that is working for your benefit and doesn't have any hidden goal that I cannot see but that might eventually damage me. So how can we make these systems trusted by people? We've seen so many cases in the news now — maybe too many compared to the real applications of AI, which in many cases are very good and very efficient. But this is happening even with very good intentions; it's not that the people involved are bad. People with good intentions still build data-driven systems that can violate privacy, that can do negative profiling, that can limit access to goods, that can reduce people's quality of life, and, very importantly, that can destroy trust and erode equity. We've seen that in many situations too. So the point for me is: let's look at this matrix, with the algorithm's intention on the vertical axis and the result of those intentions on the horizontal. By the way, this is a two-by-two matrix — I try to explain everything with this type of matrix, because I think that if you cannot explain something in a two-by-two, it's too complex and not worth it. If the world were this simple, we could say: good intention, good result — we're okay, let's continue with the product. Bad intentions, bad result — we catch the guy, he goes to jail, no problem. Good intentions but a bad result — and we've seen many cases like this — we can work on certain aspects, for instance on the data, on whether it's biased or not. I'm not saying this is easy, but we have something to work with.
We can work on the statistics, on the maths we've developed; we can work on the implementation, because sometimes it's not the maths or the data, it's the implementation, or the unexpected usages of the technology, as famously happened with Microsoft's chatbot Tay, if you remember. And bad intentions ending up in something good — there's very, very little chance of that happening. The problem is that the world is not this simple, and in reality very few things sit purely in one of these four quadrants. The world is more like the middle: we have many, many systems that start out performing well, and then we probably don't pay enough attention to how they develop and how they learn, so they end up performing badly. So, the world is not simple. I was also looking at Alexa stories last week and found one that I think is also funny: a couple were arguing at home and mentioned the name of a friend, and Alexa called the friend, and the friend heard the couple's whole conversation. The owners complained, and it came out in the press and everything. This is a case where the intention is pretty good — they are trying to make it easier to call people — but it ended badly. So what can we do? Lately we've seen a lot of initiatives, which makes me very happy, around this idea of responsible data usage and responsible-AI checklists. Here I just have a couple of ideas; there are many more, and probably the most complete piece of work around this is the recently launched Asilomar AI Principles — I think Nuria Oliver talked a little about them earlier. Basically: you should always challenge your own maths. Don't be happy with the first result that matches what you thought was okay.
As I said before — and I'm repeating myself a lot on this — we need to permanently test and learn from how the systems perform. We need to look for biases: data biases, of course, but not only data biases; also usage biases. Bias is very much in fashion now, but there have been technological biases all the time, since the very beginning. For instance, I am left-handed, and scissors are a technology used to cut, designed for right-handed people. So I've had to adapt my whole life to things designed for right-handed people. Now, with digital systems, if you find these kinds of minorities you can try to include them in the way your models work. So it's not only the data, it's also the way people use things. Also, how critical is the problem? Recommending a song is not the same as recommending a pension fund; one decision is much more important than the other. And recommending a movie is not the same as putting an autonomous car on the street. We need to think about that too. The question "who are we empowering?" is very important, especially when we are deploying systems with a for-profit intention, as most of them have. We need to ask whether we are only empowering the company and the business, or also empowering the human. It seems simple, but the answer is not always yes. One very funny thing we also like to do a lot is to think about what is the worst thing you could do with this. Because at the end of the day we are developing tools, and tools always have two sides: they can be used for good or for bad, and these systems are especially powerful in that respect. And last — I could continue — are the variables the algorithm is using actionable for the subject? Whenever we are deciding something for somebody, can that person act on those variables in any way?
For instance, if you apply for a loan and you are of whatever race and I say no — can you act on your race? These kinds of questions. So, four fields of research that we are developing at the moment at BBVA around this idea of trust. First, fairness, especially everything related to dynamic pricing and unfair discrimination through pricing: how can we implement dynamic pricing models that take fairness concepts into consideration? Second, diversity. It is much easier to recommend things to people that are in line with what they have already consumed in the past. This is very visible on YouTube: they will try to offer you something related to what you've just watched, but maybe a little more extreme in whatever field you are watching. So at the end of the day it radicalizes your opinions, and the way you consume, a little bit. You can introduce diversity into the algorithms, so people can be more open, depending on the field. For instance, when we recommend people where to shop with their credit card next, we could send everyone to the Zaras and El Corte Ingléses of the world and the algorithm would perform well, but if we want people to discover small shops, maybe we should deliberately introduce those other options. Then transparency, which relates basically to the goals of the systems: how transparent are we when we deploy them? Why are we building them? What are they trying to optimize? And last — I know we could talk a lot about this, but we don't have much time — interpretability: trying to understand how this thing is working, how it is deciding. This is especially important in deep learning systems.
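One simple way to realize the diversity idea just described — not BBVA's implementation, only an illustrative sketch — is to re-rank recommendations greedily, penalizing candidates from a category that has already been picked, so a purely score-ordered list of big chains makes room for small shops. The shop names, categories, and scores are invented:

```python
# Greedy diversity re-ranking: pick the candidate with the best
# score after subtracting a penalty for each already-picked item
# in the same category. Penalty strength is a tunable assumption.
def diversified(candidates, n, penalty=0.5):
    # candidates: list of (name, category, relevance_score)
    picked = []
    pool = list(candidates)
    while pool and len(picked) < n:
        def adjusted(c):
            repeats = sum(1 for p in picked if p[1] == c[1])
            return c[2] - penalty * repeats
        best = max(pool, key=adjusted)
        picked.append(best)
        pool.remove(best)
    return [name for name, _, _ in picked]

shops = [
    ("BigChain A", "chain", 0.90),
    ("BigChain B", "chain", 0.85),
    ("Local Bakery", "small", 0.60),
    ("BigChain C", "chain", 0.80),
]
print(diversified(shops, 3))  # ['BigChain A', 'Local Bakery', 'BigChain B']
```

Without the penalty, the top three would all be chains; with it, the lower-scored local shop surfaces in second place — the "manually introduce these other options" idea expressed as a re-ranking rule.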
By the way, I didn't mention it, but if you are interested in this topic of human-centered AI and designers and data scientists working together, there is a talk later today, at four I think, by my colleagues Marcelo Soria and Alex Vidal from BBVA Data & Analytics, who will give a much more detailed and deeper talk on the topic. And this is a paper we released a couple of months ago on reinforcement learning for fair dynamic pricing; if you are interested in the topic, I recommend you have a look at it. So at BBVA we are trying to apply these concepts. Of course, we are learning — we don't master this at all — but we are trying to do things along these lines, and I'm going to show you some examples that I think prove we really believe in everything I've said. The way we like to present what we do on AI within BBVA is: everything that goes below the glass, in the operational world — automation, efficiency, internal decision-making, everything that happens inside the company — and then everything above the glass, the customer-facing applications. There we're talking about improved experiences, personalization at scale, and being relevant for people. So I'll show you a couple of examples, and then if you have questions, I'll be happy to answer. First of all, for individuals, we have this whole concept of automatic financial advisory. We started a few years ago with a problem that might seem simple but isn't, and that is very basic if you want to build all the more sophisticated stuff on top: the automatic categorization of transactions. This is mainstream now, I know, but what is not mainstream is 100% accuracy.
The idea is that any transaction that comes into your account or your card gets automatically categorized into something understandable for you. And once you have that, you can start playing around: understanding whether you're spending more money on gas than last year, setting goals, or setting alerts — whenever I spend more than whatever euros per month on bars, I get an alert, and so on. So this is the first step. Once you have that, we have things like this comparison engine — all of this, by the way, is live in the BBVA apps. Here you can compare yourself with other people like you, and "like you" is something you can adjust: same level of income, your city and zip code, your age, your gender. You compare how other people spend in different categories versus yourself. And the funny part, or the interesting part, is that you can also simulate yourself in other cities: I can look at somebody like me who lives in, say, Valencia, and compare how they spend versus me — useful if you're thinking of moving to another city. Another thing we can do once transactions are categorized is this type of financial health index. We analyze the transactions, we see how much fixed cost, variable cost, and income people have, so we can tell them: look, your financial health is this number, and it is evolving in this or that direction. And most importantly, we can give people recommendations to achieve a healthier cost and income structure. Then there's a set of apps that we are going to launch — this one is already live — that are more vertical, built around specific life events: taking all these categories and transactions and comparing them with, for example, how much it costs to have a baby.
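A toy version of the categorization-plus-alerts pipeline described above might look like this. The keyword rules, merchant names, and amounts are all invented for illustration; the production system is presumably learned from data rather than hand-written rules:

```python
# Toy transaction categorizer plus a monthly spending alert.
# RULES maps a human-readable category to merchant keywords;
# all keywords and merchants here are made up.
RULES = {
    "supermarket": ["mercadona", "carrefour", "supermarket"],
    "fuel": ["repsol", "gas", "petrol"],
    "bars": ["bar ", "cafe", "pub"],
}

def categorize(description):
    text = description.lower()
    for category, keywords in RULES.items():
        if any(k in text for k in keywords):
            return category
    return "uncategorized"

def monthly_alert(transactions, category, limit):
    # transactions: list of (description, amount) for one month.
    total = sum(a for d, a in transactions if categorize(d) == category)
    return total > limit

txs = [("BAR PEPE MADRID", 12.0), ("CAFE CENTRAL", 8.5), ("REPSOL A-2", 40.0)]
print(categorize("Mercadona Valencia"))  # supermarket
print(monthly_alert(txs, "bars", 15.0))  # True: 20.50 spent on bars > 15
```

Everything else in the talk's list — goals, year-over-year comparisons, the health index — sits on top of this same mapping from raw descriptions to categories, which is why the speaker calls it the basic building block.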
So we try to tell people whether they're going to do well, and how much they're going to have to spend when the happy moment comes. Then, beyond analyzing current transactions, we can take a step further and try to forecast what is going to happen in people's accounts before it happens. This has been live for more than a year now. What we have is the capacity to put in a calendar what is going to happen in your account two months before it happens. It's not just that we forecast your electricity bill in a given month — which we do, taking into account your pattern over the past two years — or that we know your mortgage payment, which is very easy to infer, will happen on the fifth and will be this amount of money. We also do it for the transactions where we cannot say the specific day but we can forecast the end-of-month total expenditure: supermarkets, cash withdrawals, dinners out, and so on — and the accuracy is quite high. We can say: by the end of the month, this is our prediction for you. And the important part is not so much making a good prediction and always being right. It's that whenever the prediction doesn't match reality, something happened there. We are working now on a set of alerts and messages to let people know when things didn't go as expected. Maybe your water bill came in twice as big as it should have because there's something wrong with the installation at home, or whatever. Maybe it's fine and the prediction was right — but if it's not, well, today nobody sends me an alert when I have atypical behavior in some utility bill.
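The forecast-then-alert idea can be sketched in a few lines: predict a recurring bill from its history, then flag the new bill if it deviates too much from the expectation. A plain mean stands in here for the real model over two years of patterns, and the bill amounts and tolerance are invented:

```python
# Minimal forecast-and-alert sketch: the expected bill is the mean
# of past bills, and an anomaly is any bill that deviates from the
# expectation by more than a relative tolerance (an assumption).
def forecast(history):
    return sum(history) / len(history)

def anomaly(history, actual, tolerance=0.5):
    expected = forecast(history)
    return abs(actual - expected) > tolerance * expected

water_bills = [30.0, 32.0, 29.0, 31.0]
print(forecast(water_bills))       # 30.5
print(anomaly(water_bills, 61.0))  # True: roughly twice the expected bill
print(anomaly(water_bills, 33.0))  # False: within normal variation
```

This mirrors the point in the talk: the value is less in the forecast being exactly right than in the deviation signal — the doubled water bill that suggests a leak at home.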
Another application — this has also been out for a couple of years now — is for when you're going to buy a house: we give you the opportunity to see the price of any house in Spain, even if it's not for sale at the moment. This is BBVA Valora. And what we introduced a couple of months ago is an augmented reality capability, so you can walk down the street, look at the buildings, and if something is for sale it will appear — but you can also tap on a building and find out the price of the specific house you are looking at. This set of functionality, which we are trying to evolve much further to make it much more actionable than it is today, is behind something I feel proud of: BBVA having been named the best mobile banking app in the world by Forrester two years in a row. Something we are trying to do now is, of course, grow functionality, but also make the interface more adaptive to the different usages people might have — because if I am not looking for a house, maybe I don't need Valora on my dashboard. And this is again a place where design and machine learning have to work together, to make these interfaces more adapted to each and every person. We also have some applications not only for people but also for businesses. This application, Commerce360, is based on payment data and helps retailers understand how they are doing compared to their context, in their neighborhood. You get metrics where you see: okay, your competitors in your area sell a lot on Sundays, and you are not doing that well.
Something is happening — people are coming but you are not selling. And where do people come from? That classical question from the IKEAs of the world, which ask for your zip code on the way out, is something we can calculate for every type of retailer from credit card information. By the way, this is of course anonymized, aggregated information. We also have a translation of all these graphs into natural language, because a lot of retailers don't like numbers — they prefer to read, and it's a gentler way into this type of app. And we also launched, just a week or two ago, this idea of aggregating a company's information from different banks in one single place. And not only that: the way we aggregate it is already compliant with the Plan General Contable, the official accounting framework companies have to follow. Finally, a little ad for our website: we have a lot of information there on everything I've explained, the things we do, and our papers too. So visit our blog, bbvadata.com, if you are interested. I think this is it. Thank you so much, and thank you for being here. [Host:] Thanks to you, it's been a pleasure, Elena, and now it's time for questions. Any questions from the audience? Please, please ask a question. Anyone? Three, two, one. [Audience member:] What do you think the future will be for your bank, and for all this information, in five or ten years? [Elena:] I don't know. I've learned from working in prediction that I shouldn't make predictions like that. But I can tell you what we want to happen, in banking at least. [Audience member:] Can you answer in two ways?
[Audience member:] What the bank wants to happen, and what you yourself think will happen. [Elena:] What the bank wants and what I want? They match, they match. Okay, great. So, if you've seen the applications we have been developing, we are going to continue along that line: trying to make things much more useful and meaningful for people, and taking financial products — which have been very close to the financial world, not so much to people's lives — closer to those lives. Today, whenever you need a mortgage, we just take care of the mortgage; we don't care much about you changing homes and all the things you have to deal with. What happens afterwards — we sell the product and goodbye, no? At the end of the day money is behind everything, and we have products that help you move money around and have money whenever you need it, but they are not close to what actually happens with the money. So that's one side. The other thing we would like to do better is, similar to the concept of self-driving cars, autonomous driving: we think autonomous banking can happen in certain fields. As we earn more trust from people, the stuff that is really boring in finance — keeping track of things, handling bills and so on — we think all of that could be automated, to give people more time instead of taking more of their attention. [Host:] Great, thank you for answering. Any other question? Okay. Five, four, three, two, one. A big round of applause for Elena. Thank you so much.