Our first speaker will be Oscar Mendez. Oscar is the CEO and founder of Stratio, and he is also the president of the conference we are at today, the Big Things Conference. So, let me begin with you. Give us some views, shed some light: for a company like the ones watching us now, when they are starting out on these amazing artificial intelligence projects, what kinds of ethics-related problems are important to take into account? From the beginning of the project to its delivery, what factors matter?

Well, to understand the problem: the difficulty of using AI with ethical behavior starts with the data. When you ask customers for their data, there are entitlements; customers who give you their data say, "you can use my data for this." And this is the beginning of the problem. I am giving you my data to build a propensity model, or to do fraud detection, or just reporting, or to improve the services you are giving me. So when you get the data, you have entitlements from the customer, and after that you are using the data. And here, the companies we are seeing in the market are completely lost. They are not able to use the data to build a model, an AI model, it could be anything, and then to assure that they are using the customers' data only under the entitlements those customers gave at the beginning of the process. This is one of the worst problems in data ethics. Very few companies are able to assure ethical behavior, that the data is being used only for the things the customers allowed it to be used for. And it is a huge problem, because they are not able to trace it from the initial data acquisition through model building to the use of the algorithms. And this is true regardless of market, sector, or company size.
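The entitlement problem Oscar describes can be made concrete with a small sketch: a registry that records which purposes each customer consented to, and filters any downstream use against it. This is a hypothetical, minimal example for illustration only, not Stratio's actual technology; all class names and purpose strings are invented.

```python
from dataclasses import dataclass, field


@dataclass
class ConsentRecord:
    """Entitlements a single customer granted for their data."""
    customer_id: str
    allowed_purposes: set = field(default_factory=set)


class ConsentRegistry:
    """Tracks, per customer, the purposes their data may be used for."""

    def __init__(self):
        self._records = {}

    def grant(self, customer_id, *purposes):
        # Record the purposes the customer explicitly agreed to.
        rec = self._records.setdefault(customer_id, ConsentRecord(customer_id))
        rec.allowed_purposes.update(purposes)

    def check(self, customer_id, purpose):
        # True only if this exact purpose was granted.
        rec = self._records.get(customer_id)
        return rec is not None and purpose in rec.allowed_purposes

    def filter_for_purpose(self, customer_ids, purpose):
        # Keep only customers whose entitlements cover this use of the data.
        return [cid for cid in customer_ids if self.check(cid, purpose)]


registry = ConsentRegistry()
registry.grant("c1", "fraud_detection", "service_improvement")
registry.grant("c2", "reporting")

# A propensity model may only train on customers who consented to it:
# neither c1 nor c2 did, so no data is eligible.
eligible = registry.filter_for_purpose(["c1", "c2"], "propensity_modeling")
```

The point of the sketch is the gap Oscar names: the check has to travel with the data from acquisition through model building, which is exactly what most organizations cannot do today.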
Companies do not have the technology their organizations need to fulfill ethical data behavior. In our opinion this is a huge problem that we are trying to solve with most of these big companies; they don't know how to solve it.

So let me switch to London. Sara Fernandez from Tableau is there, in London. Hi Sara, how are you doing? Hello, hi. Everything fine in London? Everything good. Well, in my flat at least; outside, not so good. I'd like to know your opinion. Tableau and your customers are working with data all the time; it's the core of your projects. What aspects are relevant when talking about ethics in the data projects you are implementing?

Yes. In the last few years, not only in the Tableau world but worldwide, we have all seen projects that failed on the ethical side, mainly because of one of the following areas. Oscar talked about the data collection part, which is a very important one. When collecting personal data: do individuals know what I'm collecting and at what level of detail, and can it be limited? GDPR attempts to solve this, but data ethics isn't restricted to data collection or even data governance; it also applies to how data is interpreted and acted upon. So the first area, which I mentioned and Oscar covered in more detail, is data collection. The second area of concern is data governance: aspects such as who owns this data, and, if it's public information, whether I make it accessible while protecting identities well enough. Transparency about the origins of this data and its limitations is also very important. The third area, a very hot topic at the moment, is data sharing. Here, let's consider biases and whether or not facts are being presented clearly. Ethical visual best practices must be applied, avoiding perpetuating biases. Let me give you an example.
During COVID, we have seen a lot of visualizations of data, and red was commonly used to represent the number of cases and the increase in cases. In some cultures red is associated with danger and extremes, and this could incite a sort of panic in the audience. So this is just one example of where we should be careful. Another important point: is the data I am presenting relevant to the audience I am presenting it to? The last area, after data collection, data governance and data sharing, is data-driven decision making. Consider how we are presenting the data. Are the limits of the data understood? And does it fit the question? Sometimes data can be quite deceptive. So, in conclusion, both ethics and philosophy provide a process for looking into potential value conflicts, and most companies already have in place ethical codes that regulate how they operate their business. These ethical codes must be, and are being, extended to data. And here technology can be an ally in extending and implementing ethical rules for data. Thanks a lot.

It's important for all of you to keep the microphone open all the time, because the production team will be using this. One of you, I won't say who, has it closed, so please open it. Let's continue the conversation. Jose Luis Flórez was here yesterday giving an amazing presentation. I'd like to go a little bit beyond the data, because in an artificial intelligence project there are many more things than data, and the impact can go beyond it: an impact on the future of employment and on very different factors. From your perspective, companies are beginning to be aware of data ethics, but are they aware of these non-data topics in ethics, in many different areas? What's your opinion?
Yes, I think they are starting to be aware of it because of the news, basically, because we have had news about problems of bias and fairness in algorithms created and used by very big companies like Amazon, Google and so on. It has had quite a big media impact, and as a consequence, yes, I think they are more concerned and more aware of the dangers and problems of using AI on data. One of the problems we have is that there is no common, systematic methodology followed by all the data science teams in all the different companies. Even within the same company, they probably do not approach problems now the way they approached them five years ago, and it's quite difficult to maintain governance and accountability over the processes. So I would say there are some challenges we have to cover if we want some degree of control over the results of AI. But I think executives are starting to realize the potential threats and problems they can face with AI.

I think this vision is important. I'd like to ask Lubert; I don't know if I pronounced your name properly. Is it Lubert? Yes, the first name is Lubert. What's your opinion on this? Is the vision that companies have of these ethics-related topics changing?

Yes, so, to add to what was just said: one of the themes that I, and many people in my company, think about is being able to consider this in a broader context. From my company's perspective, we're thinking about two topics. The first is verification and validation for AI, which entails considering and researching the topics of interpretability, explainability, robustness and certification. For some of our customers, that may relate to their models as well as their data.
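The bias and fairness problems mentioned above are often made measurable with simple metrics. As an illustrative sketch only, not a methodology any panelist endorsed, here is one common check, the demographic parity gap: the difference in positive-outcome rates between groups defined by a protected attribute. The group names and numbers are made up.

```python
def positive_rate(outcomes):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)


def demographic_parity_gap(outcomes_by_group):
    """Largest difference in positive rates across groups (0 = parity)."""
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)


# 1 = loan approved, 0 = rejected, split by a protected attribute.
decisions = {
    "group_a": [1, 1, 0, 1],   # 75% approved
    "group_b": [1, 0, 0, 0],   # 25% approved
}

gap = demographic_parity_gap(decisions)
if gap > 0.2:  # the threshold is a policy choice, not a universal constant
    print(f"Potential bias: approval rates differ by {gap:.0%}")
```

A check like this is only a first signal; as the panel notes, the harder part is embedding such checks in a systematic methodology that every team actually follows.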
But speaking of data, we also take into consideration data privacy, and the amount of rigor and trust applied to many of those topics, whether it's the models or the data. There is another thing my company considers: responsible AI, and connected to that you have the topics of fairness and bias. One of the key areas where MathWorks hears about responsible AI is computational finance, where this tends to arise because decisions are made that relate to human beings, to people. So these two categories my company is thinking of definitely fall under verification and validation for AI, as well as responsible AI. In general we focus on engineering and science, which means our customers are typically working on applications that don't necessarily involve dealing with people, where ethics would arise. There are some related applications, but that's not typically our focus.

I think one important point today, and it's an open debate, is how this should affect regulation: whether we need new regulations, more regulation, or indeed any regulation at all. So I'd like to ask Nicolas Mayard, who works for Databricks: what's your opinion on this debate, about the aspects companies are taking into account and whether this will have an impact on future regulation?

Okay, so in terms of regulation, I guess the place we can start is that regulations on using data, and on using data to make decisions, predate any notion of machine learning or artificial intelligence. There have been a number of regulations in different verticals, companies and industries, covering everything from decision making to how to apply it. And now we're looking into artificial intelligence.
Now, where that becomes important is that the amount of data, and the speed at which we can consume and correlate it to make decisions, make those decisions incredibly impactful and incredibly rapid. So the notion of regulation, as we see it, has evolved with our usage of technologies and of data. We've seen things like the CCPA, or the GDPR in Europe, really trailblaze the notion of who is an owner and who is a consumer, and what their rights and responsibilities towards data are. But going forward, that very much focuses on the data itself, not necessarily on a system built on data, a model, or something that makes a decision off of that, in some cases obfuscated, data. And we already have a lot of industries, in banking for example, where you need to be able to show auditors and explain how your decision was made. Was it impacted by bias? Are you able to recreate the environment in which it was built? Can you prove that it was fair and ethical in the way it made the decision, and in the impact it will have on society and the people consuming it? That's incredibly important in terms of regulation. But regulations, at the end of the day, only codify the moral obligations we have to each other as a human society. What I mean by that is that regulations can only go so far; beyond them there is an ocean of trust and of sharing of information between the organizations that offer solutions and the organizations or people that consume them. So it is on both the offerer and the consumer to actually uphold those regulations. I don't know if that makes sense, but we have to take this into a slightly broader context. Thank you.

Thank you. I'd like to go back to Oscar and ask you about the direct impact these ethical aspects have on business. Because at the end of the day, these kinds of projects involve, most of the time, final customers, final clients, users.
Do you think there is a real business impact from these ethical aspects we are debating today? Yes, for sure. I mean, customers and people care more every day about how their data is used and about ethical behavior, and, well, the focus of this event is technology for good. So: is my data being used for something good? Is my data used for the things I gave entitlement and permission for? If the answer is yes, the company can fulfill ethical behavior; I can check what they are doing with my data, and then I trust the company. But if there is any lack of ethical behavior, any scandal, anything that happens, then customers notice; people value all of this more every day. If a company behaves ethically and uses my data the way I want it to be used, I stay; if not, I will move to another company for the same services or products. So the business impact is really very big. With social media and social networks, something strange happens: we grant entitlements without reading what the data is going to be used for. GDPR is helping in Europe, but in reality they use various ways to obtain the entitlement in a not very clear manner. So we could talk about how the Googles and the Facebooks do it differently, because we give them permission without reading anything. But for normal companies, banks, insurance, telecoms, health and so on, not the big digital social networks, ethical behavior is key to revenues, churn, customer loyalty and more.

Jose Luis, yesterday in your keynote you opened this debate between privacy and freedom and health and all of that. From a customer perspective, it's not that different a debate, because as customers we allow the use of our data by all these social networks we are discussing.
But after this, we all have these concerns. How do you feel about this debate from a customer perspective, giving our data or not giving our data: privacy versus an amazing experience, versus personalization?

Well, if you think of us as customers, normally if we perceive that there is a benefit for us in the very short term, something very clear, then the trade-off is clear and it is not a problem for us to share our data. If you are going to give me the best route to drive home, well, it's worth it to have this information; I can save 15 or 20 minutes or even more, and it's not a problem if you are tracking my route. So I think it has to do with the benefit in the short term; it's quite psychological, quite human behavior. When the benefit or the reward is not so clear, then we start asking: well, why do you need this information from me? And this is what has happened with COVID and the contact-tracing apps: we don't see the benefit as close, because it's not so evident, and so we start asking questions. So I think it basically comes down to the trade-off. Anyway, I think the trend is putting the customer or the citizen in the center, as the core of the system, and giving the ownership of the data to the citizen, to the consumer. If you look at the new generation of recommendation engines, or even the new generation of facial recognition systems and so on, all of these systems and solutions have something in common: they start to treat the data of the user as something that ultimately belongs to the user, and it is the user who has to give permission at every step of the process for it to be used for different purposes. I think that's one of the main trends.
And I'd say, and I think Oscar also said this, that it is not only important but also relevant from an economic perspective. The companies who are able to adopt this customer-at-the-core perspective, giving ownership to the customer, are going to achieve much better results.

Sara, in your opinion, how is this process of putting the customer at the center of these data projects going, and what is the business impact of these concerns we have as customers?

Well, my colleagues have covered it all. Consumers care about ethical codes, and they will stop buying from companies if they don't agree with their ethical codes, or if they don't have visibility into them. Speaking for myself, I spend hours looking for company values and ethical codes, and if I don't find anything, I'd rather not go ahead with my purchase. So I think it is more and more important for companies to provide visibility into their human rights policies, sustainability values, company values, and so on. But I'm fully in line with what has been said previously.

Let's try with Ross. Ross Perez, from Snowflake, I don't know if you can hear me. Now, Ross Perez, Ross, can you hear me? Well, I don't think so. We have some kind of problem with Ross's audio: I can hear him, but I think he is not hearing me. No? Well, then I'll go back to Lubert about these topics we were discussing, these problems involving customers. In your opinion, what can we do now, not as customers but as companies, to give customers answers to these concerns?

One of the biggest items I'd like to point out first, before I fully answer, is that from a MathWorks perspective we provide tools for customers to build AI systems, and one of the biggest themes is that we help support customers with their various AI-related applications.
And in regard to ethics and being involved as a company, one of the biggest themes I've seen emerge is that we are, in essence, also seeking guidance. Often these general guidelines or frameworks are being discussed in terms of regulation by industry bodies, such as in finance, which was previously mentioned, as well as the FDA for medical devices and the FAA for aviation, and they are yet to be determined for autonomous vehicles and many other applications. So in the short to medium term we're watching what these industry bodies recommend, and in the longer term, what happens with policies similar to the data privacy regulation in the EU, and in many other parts of the world, to gather the information for a framework we can use as well. And that's not just in terms of the AI systems customers are building, but also in relation to ethics: looking at what general ethics standards we can help customers implement, and hopefully test, as part of their applications. So thank you.

I think now, Ross, can you hear me? Hi, Ross. Hello. I think we have a real problem with the sound there. Let's make another try. Now, Ross. I can hear you. A little bit better now. We lost you in the conversation, so I'd like to know your opinion about the topic we were just covering: the real impact these ethics-related problems have on real business and on our relations with customers and clients. I'm sorry, I can't hear you. You can't hear me. I can't hear you, but... Can you hear me now? No, he's not hearing me. We're trying to solve it. I'll go on to the next question because we don't have that much time, and we'll do our best to recover you in a few minutes, because I hear you, but I have the feeling that you don't hear me. No. No. No. To Oscar. Oscar, can you hear me? Oscar?
I think he can't hear me, but I can hear him now. Oscar? Yes. Okay, okay. I'd like to go further in the debate and move to the next topic, because I think it runs through this whole conversation we are having. There is a real debate today, not only in companies but as a society, about whether or not technology is neutral. I have my opinion, but I'd like to know yours. Is technology neutral? Are companies, and not only companies but governments, which are working with data too, aware that technology is no longer neutral and that the decisions they take have a real impact on our lives? I think this year's topic, artificial intelligence for good, is very related to this. Do you think the behavior we have as people changes the impact of all this?

Well, in reality you could say that technology by itself, per se, is neutral, but technology in use is not neutral at all. And here we could talk about the different blocs of countries, for example the US, Asia and China, or the European Community. Can we trust technology? The answer, in my opinion, is: not at all. Let me try to be very clear: public clouds. Can you trust that the data in the public clouds is not being used with a political interest, a national interest or an intelligence-service interest, and that it will not be exposed? Well, we have had several news stories about how the NSA and the CIA use data from social networks, computers, printers, whatever. So, is technology neutral? Technology in terms of innovation, yes. Technology in terms of how it is used, not at all. The idea that it is neutral is completely naive. In fact, in Europe we are not investing in technology as Europe, as a bloc; in public clouds we are lagging behind.
All data in the public clouds has serious problems concerning its use by the governments those public clouds belong to, for example China or the US. And Europe is completely lost on this: on technology investment, public clouds, cryptography, where your data is, social networks. So this is a big, big problem. We are defending ourselves with conventional weapons against other countries, while the fight is over data, cybersecurity and technology. The third world war will be fought using technology, and we are relying on technology that does not belong to Europe. That is a big problem, speaking in a very global way.

Well, I'd like to ask Ross Perez. We have a problem with the audio, probably the return channel, but we are doing our best with Omni-Channel. Yeah, yeah, go ahead. Sorry for interrupting you. No, no problem. I still can't hear anything that you're saying, but I can definitely talk about the application of ethics in the business and the relationship that companies have with customers. One of the bigger questions we see at Snowflake when our customers are dealing with this spans a couple of areas. Of course, how are we looking at the way information is collected, and at the bias that might be applied as we look to use that data in analytics going forward? The way you talk to your customers and the way you communicate with them is incredibly important. And sharing data can bring up a lot of ethical considerations, because obviously there are certain regulations around the way we share data, particularly with partners and with customers, and back to them.
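The kind of regulated data sharing Ross describes often comes down to masking identifying fields before a record set leaves the organization. The following is a hypothetical sketch, not a Snowflake feature; the field names, the salt handling, and the drop/mask policy are all invented for illustration.

```python
import hashlib

# Illustrative policy: which fields to pseudonymize and which to withhold.
SENSITIVE = {"email", "name"}
DROP = {"ssn"}


def pseudonymize(value, salt="demo-salt"):
    # One-way salted hash: partners can still join records on the token
    # without ever seeing the raw value. A real system would manage the
    # salt as a secret, not hard-code it.
    return hashlib.sha256((salt + str(value)).encode()).hexdigest()[:16]


def prepare_for_sharing(record):
    """Return a copy of the record that is safe to hand to a partner."""
    shared = {}
    for key, value in record.items():
        if key in DROP:
            continue                      # never leaves the organization
        elif key in SENSITIVE:
            shared[key] = pseudonymize(value)
        else:
            shared[key] = value           # non-identifying fields pass through
    return shared


row = {"name": "Ada", "email": "ada@example.com",
       "ssn": "123-45-6789", "purchases": 7}
safe = prepare_for_sharing(row)
```

The design choice here mirrors Ross's point: the tooling should make the sensitive path the default, so "obfuscating the right fields" is enforced by the pipeline rather than remembered by each analyst.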
So we really have to be thoughtful about the way we share data: are we using it in a way, or does our technology enable us to share data in a way, where we can properly obfuscate the right fields and provide that data in a really sensitive way? There are definitely some complications with all of that, but it's something most organizations are finding their way around, and as it becomes easier to share data between organizations, the relationship with the customer is going to get even better. The ethical application of that sharing of data is incredibly important, and it's all about having the right tools and being thoughtful about the way we apply them. So thank you.

Thank you, Ross. Let me tell you a secret: we have a problem with Ross's audio, so I'm writing him the questions on WhatsApp; the sound you heard over there was the result of that. We have only five more minutes, but I'd like to go back with Sara to this question of technology neutrality. What's your vision on this? Is technology still neutral, or is that something from the past, a debate that will disappear in the future? I'd like to know your opinion.

Data technology in particular is a force multiplier; a great example is that insights-driven organizations grow at least seven times faster than global GDP. But equally, technology can be misused, and, as Oscar was saying before, people play a big role here. And let's remind ourselves: today we have so much information that the challenge is to identify what is fake and what is true. Let me give you an example that is very relevant in these COVID times. Johns Hopkins publishes frequently updated COVID case data, and back in March Tableau launched a COVID-19 resource hub with the same data, reshaped for use in Tableau.
These public data sets are very useful for public health professionals, authorities, et cetera. They make data from multiple sources easy to use and enable quick development of visualizations of local cases and so on. But at the same time, let's be honest, the stakes are high in how we communicate about this epidemic to the wider public. Visualizations are powerful for communicating, but they can also mislead, misinform and create panic. We are in the middle of a complete information overload, with hourly case updates and endless streams of information. And epidemic data isn't a data set to play with just to have something to show off on Twitter, right? For this reason Tableau saw the need, and decided, and it was a people decision, to create a list of considerations for users of this data set. We ask users to consider whether what they are creating serves an actual information need of the public. Does it add value to the audience and uncover new information? If not, perhaps the analysis should remain for their own individual use. So, thank you.

I will close this panel with Nicolas and your vision of this neutrality, but thinking a little bit more about the future. It's not easy to give answers about the future, but you are closing the panel, so the future is yours. What do you expect from the future, regarding this neutrality and the ethical aspects we are talking about?

All right, so thank you for trusting me with the future; I'll try to take good care of it as much as I can. I think everybody has said it: technology by itself is neutral, in the sense that the previous speakers, and a talk from JPMorgan I was watching earlier, expressed. Technology is a way to enhance human decisions, but it needs verification and continuous feedback from humans to make better decisions and better use of it. So the technology is neutral.
The usage is very impactful and hence not neutral; at most it is as neutral as the people who are using it, for whatever end. Then, since you mentioned the future: not being neutral, being impactful, can also be useful. It can be geared towards beneficial usage for the community or for society. In that perspective, just last weekend, two days ago, on Saturday, we at Databricks ran a very big climathon, a climate hackathon, with a number of insurance companies and climate NGOs from all around Europe, to try to get a better understanding of how different models would behave, how the climate will evolve, what actions society could take, and different ways we could model and better understand how this will go. This is a very simple example; of course it is simplistic, and meant to be naive in the way I express it. But from that neutrality perspective, just to take the other side: while I agree that technology is always a competitive advantage and could be used in the wrong way, it can also very much be used to help us get a better grasp of our context, and maybe to shape the way our society develops and organizes itself going forward, by giving us a way to make decisions based not only on the limited number of events that we as humans can take in, but on the much larger number of events that machines can help us digest, so that we take decisions that are more complete and, from that perspective, maybe even more impactful. So I'll leave you on a positive note about the use cases I can find for technology.

Good job, Nicolas, because it was not an easy question. So thank you to the six of you. And sorry, because we had a small problem with the audio; I'm sure next time it will be much better.
So thanks to the six of you; and for the rest of you, we'll be back in five minutes.