So before you send your questions and we start the conversation, we have a question for you, the audience, which you can answer on menti.com by typing the code that will appear on the screen. The question is: which area do you think artificial intelligence will transform the most? Employment, health, mobility, new areas, public policy, or other areas? Yes, directly in the box. Oh, okay. And while we wait for the results, what would be your answer to this question? It looks like health is taking an early lead. I guess people can see this on the screen at home, no? Yes. We're up to nine now. What do you think?

Sorry, I was thinking. I was just wondering about that question, because imagine when electricity first became available: for the people at that time, the question of what electricity would transform the most would have been very hard to answer. In fact, even in hindsight, if you ask me right now what electricity transformed the most, I'm not able to answer that. And I think AI is comparable to electricity in the kind of transformation it's going to allow. So I'm not going to make any bet here.

We have a question from the online audience. He says: over the past few years we have been observing a lot of changes on the global scene that disrupt supply chains. How can AI contribute to mitigating these issues and help us optimize supply chain management?

That's true. I remember a few years ago I did a project for Steelcase. For those of you who might not know, Steelcase is the number one manufacturer of office furniture in the United States. They're based in Grand Rapids, Michigan, and most of the office furniture you see all over the US comes from them. And they do just-in-time manufacturing. If you're not familiar with the idea, just-in-time manufacturing means that Steelcase doesn't start manufacturing anything until after they sell something. You order a chair, and only after you order it do they start manufacturing the chair. These are people who have been in Michigan for a long time; their life has been supply chains. Michigan has been a supply-chain-intensive economy for a long time, it's the car economy and so forth. And with this just-in-time manufacturing, what they worry about most is what they call splits: you place an order, and if the order doesn't leave all in the same shipment, they lose money; if the order does leave in a single shipment, that's how they make a profit. So it's all about logistics; it's not about the materials or any of those things. And this type of just-in-time manufacturing got severely disrupted during COVID and during all of the shipping jams we've seen in recent years. Even in their case, and they are a very advanced company, at that moment they had big information problems when it came to foreseeing constraints in their supply chain, constraints that would emerge when they had unusually large orders of an item that was not very commonly ordered. The problem is that they know their own supply chain, but they would not know the constraints that might appear two or three links upstream from them.
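One way to picture the kind of early-warning model this information problem calls for (César elaborates on the idea just below) is a simple classifier trained on historical orders that flags new orders with a high predicted risk of shipping late. This is a minimal, purely illustrative sketch: the file names, column names, features, and alert threshold are all invented, and nothing here reflects Steelcase's, NIST's, or any real company's actual systems.

```python
# Illustrative only: a toy delay-alert model trained on hypothetical historical orders.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

orders = pd.read_csv("historical_orders.csv")  # hypothetical data set

# Hypothetical features: how unusual the order is for that item, plus simple
# signals about the upstream suppliers involved in fulfilling it.
features = [
    "order_quantity",
    "avg_quantity_for_item",
    "supplier_lead_time_days",
    "upstream_tier_count",
    "open_orders_at_supplier",
]
X, y = orders[features], orders["was_delayed"]  # 1 if the order missed its ship date

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Score open orders and alert planners (and customers downstream) about risky ones.
open_orders = pd.read_csv("open_orders.csv")  # hypothetical data set
risk = model.predict_proba(open_orders[features])[:, 1]
print(open_orders.loc[risk > 0.7, "order_id"])
```

In practice the hard part is exactly what César points out: the most informative features, conditions two or three links upstream, are the ones firms rarely observe.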
Okay, so they might order certain pieces from China and they usually arrive in time, but that's because they order 10 or 20 at a time. If they put in an order of 500, all of a sudden the supplier is not going to be able to send them, not because they cannot make them, but because that supplier also has other buyers, and those other commitments become constraints on providing some of the parts. So in that world you could think that, given enough data on the history of the supply chain, you might start training models that are able to provide alerts when a purchase is made that triggers a problem: there is an order that maybe we're not going to be able to fulfill and that is going to cause a delay at the factory. And that information can also be sent to the people whose lives are going to become hell two or three weeks down the line, when they have to deliver the order and cannot do it. So I think that's one place where AI can support supply chains; it's a big, data-intensive business. SAP, the big software company from Germany, maybe the biggest in Europe, is in the supply chain business. But there are other opportunities too. Just to give you a brief idea, my group is working with NIST in the United States, the National Institute of Standards and Technology, in part because, despite the fact that supply chains are so important, we have very limited data about them at a global scale; everybody knows only their own link. We're using machine learning on international trade data to try to infer supply chains, so that we can provide maps we can use to strategize and be resilient. When a natural disaster happens or a shipping route gets interrupted, what are the other places that can provide what we need and that we could pivot towards? To make those strategic decisions you need to have those maps, and I think with AI, and with the data we have, we can start building them. Thank you.

Okay, so we have another question. The question is: what would be the approximate time, in years, for artificial-intelligence-based public utilities to reach economies of scale, where developing countries could also make use of them for an optimal standard of living for the masses at large?

In a sense the question is, when will utopia happen? I don't think utopia is a bad idea, not because it might come true, but because you want to know where you want to go. And in some sense what utopia means changes over time, and people have different ideas of what they would like their world to be like. I do think we can imagine technology solving some of our problems, problems that involve poverty, lack of housing, inadequate sanitation. And one of the places where I am quite hopeful, and which also doesn't get discussed that much, is the use of artificial intelligence and robotics in construction; you don't see it discussed much here.
I think China is probably going to be one of the first to get there, but with prefabricated parts and with large construction machinery, don't you get to a point in the future where you can have crews that are almost autonomous performing construction? Those could be extremely useful in places that have housing shortages, and there are big parts of the world where large stocks of housing are inadequate: built with light materials, or built in places that were not urbanized beforehand, so the sanitation conditions are really bad and you cannot build sewers. Once the house is already there it's hard; it's much easier to build the sewers, the streets, all of those utilities beforehand. So for those large infrastructure projects, which in the United States and Europe we may feel we don't need because the infrastructure is already here, but which other parts of the world do need, this type of technology might make a big difference. And I do think that a world in which construction machinery has a large automated component might unlock possibilities of planning and design that are not available at the moment.

Okay, thank you. So, do we have other questions at the moment? Then I have a question for César, just to follow up on this topic. We saw in the survey that only one person said artificial intelligence is going to transform public policy and governance, but from the discussion we have had so far we can see that it is actually going to have a big impact on governance and economic policy.

So I see a few quite different places where artificial intelligence can affect the public sector. On the one hand, on the executive side, I do see an opportunity for artificial intelligence to have an impact; it's already having some of that impact, and we've done projects in that space. What do I mean by the executive side? On the executive side you basically have budgets that you have to execute in agreement with a public policy, but you have a lot of discretion in how you allocate those budgets. For instance, recently we did a project in a country in Latin America in which we gathered data about health and about hospital infrastructure to try to identify gaps in public investment in hospitals, so that they could better allocate their public investment. This is a simple machine-learning type of approach, but the idea is that when you're dealing at the scale of a country, with so many people and infrastructure so large that no one is able to know it all, you need this type of tool to identify the gaps you might not otherwise be aware of and to make those budget allocations. Similarly, when it comes to innovation, economic development and regional development, Europe now requires all regions to produce a smart specialization strategy every three years, but it doesn't tell them how to do it.
So what is happening now is that, across Europe, many people are starting to use the tools of relatedness and economic complexity to create that smart specialization strategy, because that's a way to take the specialization pattern of a regional economy and use it to anticipate the probability that the region would succeed at a certain activity, and the potential value of that activity. And that is, again, a recommender system like the one Dana was telling us about in the case of Netflix, but applied to regional economic diversification and development. But again, these are not democratic processes; these are executive decisions, in a context where an executive basically has to distribute some resource at a much finer granularity than the level at which they are told how to distribute it. Now, there is another space, which we can talk about later if you want, which would be the use of artificial intelligence in democratic institutions. That's very different from the executive side. The executive side might happen even without people knowing about it; the other one, I think, is much more controversial, but also much more interesting.

Okay, thank you very much. So, Jean-François, let me ask you a question. Oh, do we have a question from the audience? No? Okay, so I have a question for you then. You recently worked on artificial intelligence corrupting our morals. How would that work?

Okay, first, in this case you have to start from the baseline, which is: how do humans corrupt the morals of other humans? We have different frameworks to think about that. You could have people who are role models: you see another human doing something bad, and you imitate that person. You could have another human giving you bad advice that makes you do something evil. Or you could delegate an evil action to another human, and so on. The thing is, we have to transpose that to the world of machines. So imagine that you see a machine doing something that is evil. I guess many people were very outraged by the episode known as Tay, the day a chatbot was deployed on Twitter to interact with people and started repeating the racist and sexist things people were saying online. Many people were outraged and thought that if this bot is saying these things, other people are going to see that and be influenced by it. That would be the role-model issue. But then, of course, an AI might also give you advice that is unethical for some reason, and you would just follow it without realizing. In our paper we basically argued that these two things are unlikely to happen, for behavioral reasons. But the last one, delegating something you know to be evil or unethical to a machine, might be a real threat. That is, people might feel that it's okay to delegate the dirty work to a machine and still feel good about themselves, to keep plausible deniability about what it says about them as a person. That's a very strong motivation for people to act ethically: they don't like to think of themselves as bad persons. So if they have a way to keep plausible deniability about their character by delegating the dirty work to a machine, then that is potentially a serious risk.
So in some sense, if I understand it well, and it's not my area of work, doesn't this contradict what César said about how humans judge machines? Because if there is an accident and the action was produced by a machine, people judge it more harshly than if it had been done by a person. So that seems to contradict the idea that people would choose a machine to do the thing.

In that case, you have to imagine that the outcome is actually good. It's obtained by unethical means, but it's good. Let's say that you delegate your investments to an algorithm, and you suspect the algorithm is using privileged information that it is taking from somewhere, but the outcome is positive: you're making more money. You don't get the harsh judgment that comes with a bad outcome generated by a machine, because the outcome is good; but the means are unethical, and you would not do it yourself, because you would know this is not supposed to happen. If you just suspect that the machine is doing it but you have no evidence, then, you know, fine.

I also wanted to ask: in your opinion, should machines have rights? Because one of the topics in AI ethics is whether robots or machines should have rights or not. If, for example, a self-driving car crashes, what happens with that machine? Do you think the existence of rights for machines would change, for example, the results you obtain in terms of ethics, or not?

I'm not really seeing it in this example of the car crash. Why would we want the machine to have rights? In that case the problem is the responsibility gap, I think, that you're describing: who is responsible for what happened? And it seems a very dangerous idea to say that the machine is responsible, because then the buck stops there; the responsibility just stops at the machine. The car probably doesn't have a bank account to compensate the victims; the car cannot be jailed or punished. So of course we want to say the responsibility should move higher up the chain, to the people who designed the car, the people who put the car on the market, some human at some point. And I think many people are saying that this idea of machines having rights has the dangerous effect of making the responsibility buck stop at the machine.

If I can add a little bit: I agree that machines are very far from having, and probably will never have, rights in the way humans have rights, but there are rights over a lot of things, and I think it's important to look at that nuance. I don't think, for example, that a machine is going to have a right to life anytime soon. But I do believe that, for example, when it comes to copyright, there might be arrangements in which a machine produces something, that something gets used by someone else, and some royalty gets paid back. And if the thing the machine produces is used by another machine as input, you might have chains of copyright that are used to compensate the teams along those chains. So when it comes to rights, there is such a big space of possible rights, property rights vis-à-vis fundamental human rights, that I think we have to discuss them more individually, and that's one of the ways in which you might be able to get more precise answers. Yeah, sure.
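As a thought experiment only, the "chains of copyright" idea César floats above could be pictured as a small provenance graph in which a share of any royalty flows back up to the works a machine-generated output was built on. The works, owners, share, and amounts below are entirely made up for illustration; nothing here describes an existing legal or technical scheme.

```python
# Purely hypothetical sketch: royalties flowing back along a chain of machine-made works.
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class Work:
    owner: str                                   # team credited for this work
    inputs: list = field(default_factory=list)   # upstream works this one builds on

def distribute_royalty(work, amount, upstream_share=0.2, ledger=None):
    """Pay the work's owner, then pass a fixed share of the amount upstream,
    split equally among the works it used as input."""
    if ledger is None:
        ledger = defaultdict(float)
    upstream = amount * upstream_share if work.inputs else 0.0
    ledger[work.owner] += amount - upstream
    for parent in work.inputs:
        distribute_royalty(parent, upstream / len(work.inputs), upstream_share, ledger)
    return ledger

# A model-generated summary built on a machine translation, itself built on a dataset.
dataset = Work(owner="data team")
translation = Work(owner="translation model team", inputs=[dataset])
summary = Work(owner="summary model team", inputs=[translation])

print(dict(distribute_royalty(summary, 100.0)))
# {'summary model team': 80.0, 'translation model team': 16.0, 'data team': 4.0}
```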
So, okay, finally, Dana, I also wanted to ask you: you have worked a lot on algorithmic game theory and pricing. Could you give us some examples of where algorithmic pricing is used?

Yeah, so pricing is another example where we can use artificial intelligence to learn about people. If we think about pricing, we think about a company trying to set an optimal price for its products. Of course, if I'm the company and I want to sell this water, and I know that your willingness to pay for this water is 10 euros, I will say, okay, this water is 10 euros, because I know that is the value you place on the water. The thing is that, first, we don't know people's valuations of products, because even if I ask you, you have no incentive to tell me the truth. And second, even if I knew all these values, I could discriminate. Price discrimination is a very well-studied topic, and it is unfair, and illegal, to discriminate just because I know that you want this water and you are able to pay more. So there are a lot of studies and a lot of work on that. We can model this as an optimization problem, a problem in mechanism design, where you want to maximize the revenue of the company while taking into account equilibrium constraints that capture the fact that consumers are also maximizing their utility. If you know that today the water is 10 euros but tomorrow it will be one euro, you will probably wait and buy it tomorrow. So there are a lot of constraints we have to take into account to determine the optimal pricing policy, and artificial intelligence helps to predict, or to learn, people's willingness to pay, which is one value that is really difficult to learn in economics. Having historical data, and being able to learn from it, allows you to put people into different clusters: people in this cluster may be able to pay this amount, people in that cluster another amount, and the pricing policy is built on that. And as for discrimination, since companies cannot discriminate openly, what they do is discriminate without saying they are discriminating. For example, airlines charge different prices for the same seat; in some sense that is discrimination. It's the same seat, but now I'm calling it business class, so you have to pay three times the value you might have paid yesterday for that same seat. So yes, there are many problems in pricing, and artificial intelligence can help with them, basically by learning how people think and how people value different products.

Okay. And do consumers behave strategically towards a pricing algorithm?

Of course. If I am a consumer and I say, okay, I will use all my knowledge about artificial intelligence and algorithms to follow the same strategy as them, I can learn about their strategy and then anticipate it. This is the case where consumers are completely rational, which is the definition of being rational in economics: I can learn, I can guess what is going to happen in the future, and based on that I will do what is best for me.
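A toy version of the "learn willingness to pay, cluster, then price" pipeline Dana describes might look like the sketch below. The data is simulated, the two segments and the candidate price grid are arbitrary choices, and a real system would face exactly the legal and equilibrium constraints she mentions, which this sketch ignores.

```python
# Toy sketch: estimate willingness-to-pay (WTP) segments, then pick a price per segment.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical estimated WTP (in euros) for 300 consumers, a mix of two segments.
wtp = np.concatenate([rng.normal(4, 1, 200), rng.normal(10, 2, 100)])

segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(wtp.reshape(-1, 1))

def best_price(values, candidate_prices):
    """Pick the candidate price maximising expected revenue:
    price times the share of consumers whose WTP is at least that price."""
    revenues = [p * np.mean(values >= p) for p in candidate_prices]
    return candidate_prices[int(np.argmax(revenues))]

grid = np.linspace(1, 15, 141)
for s in np.unique(segments):
    price = best_price(wtp[segments == s], grid)
    print(f"segment {s}: {np.sum(segments == s)} consumers, posted price of about {price:.2f} euros")
```

The strategic side Dana raises next, consumers who wait for tomorrow's price, is precisely what this static toy version leaves out.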
So, I think, yes, it takes time, but with flight tickets, for example, we can learn from the data, because we know that prices are dynamic and we don't know whether tomorrow the price will be the same or not. But we know, more or less, that on some dates prices are higher, or that if you try to buy a ticket one day before the flight the price will be really high. So if you use these platforms to compare prices across airlines, and you have a good algorithm and a lot of good data, you can also develop an algorithm that maximizes your own utility. Yeah. Okay, thank you very much.

So, I think we have a question from the room. He says... I think this question may already have been answered on Zoom? No? Okay, sorry. Then the question is this: we have all seen how Africa is transforming itself through digitalization. A concrete example is mobile banking, which has allowed the creation of new opportunities and the monetization of applications. How do you think Africa can take advantage of artificial intelligence to boost this digital transformation? Anyone can take the question.

Someone has to bite the bullet, no? So I'll start, and please join me. I agree that Africa did a great job at developing mobile banking, which is something a lot of other places have tried to achieve. In the US I've met all of these entrepreneurs who were developing mobile banking and saying that within five years they would displace all the banks, and it didn't happen; it never caught on. Even the forms of mobile payment that do exist, like Apple Pay, are not as popular as the good old plastic credit card, which still gets used a lot in the United States, and here in Europe as well. So mobile banking in Africa was a big change. Now, when it comes to artificial intelligence, is this an opportunity for Africa? Does it have the same kind of conditions for leapfrogging that mobile banking had? My intuition, and here I'm going out on a limb, is that it might not. In the case of mobile banking there was a population that was unbanked but growing in income, and there was a cheap and accessible technology, the mobile phone, that could be used to provide banking services to that population. When it comes to the development of artificial intelligence, it's not something you can do so easily in that kind of distributed context, where you satisfy a consumer need at low cost the way mobile banking did. A lot of the artificial intelligence we see right now is being produced in very centralized, large efforts. When you look at GPT-3 and these big language models, it's really hard to compete if you're a person working on natural language generation at a university; you're basically unable to compete with GPT-3 and BERT and all of these big models, because you don't have the engineering team and you don't have the compute. But those models are very exportable, meaning that through an API many people can interact with them. The transportation cost is very low: you train them once and many people can use them, and that generates a very different economic geography from the one of mobile banking.
And of course there might be opportunities to develop applications and exploit that very fluid economic geography in the case of Africa. But the geography of creating an organization like a mobile banking company, where the advantage is having local, unbanked clients, and the geography of creating an AI company, where the advantage is not so much on the demand side but on having a real advantage on the supply side, in an economic geography of low transportation costs and high value, are very different. Thank you.

Okay, so, okay. I don't know if I can read it. Can you? Yes. So, Paul asks us to talk about the black box effect. Basically he's saying: if we judge humans based on intentions and machines based on outputs, is that because of the black box effect, because machines cannot explain themselves?

I'm sure Jean-François has a lot to say here. I'm just going to mention one paper that I like a lot, a paper by Berkeley Dietvorst, which I'm sure you're familiar with, about algorithm aversion. Basically, in that paper they give people a certain amount of money that they can decide to invest with a human financial advisor or with a machine financial advisor. In the experiment they get to take the money home, so there's an incentive to make wise investments. What they find is that people prefer the human financial advisor. But then they have a condition in which the human advisor is paired not just with a machine advisor; they also show the picks that the machine advisor made. They're sort of opening the black box: they're not just saying this advisor gives you a 7% return, they're saying this advisor bought Tesla at this price, sold it at that price, and so on. And what they find is that people avoid the machine even more after they see its picks. It's as if people over-interpret the mistakes of the machine, and they don't feel so identified with it. So when you open the black box, at least in that paper, and of course opening the black box in terms of explainability can mean a lot of different things, that's a very narrow example, you see the opposite of what you would expect from that intuition. You don't see people becoming more trusting of the machine; you see people over-interpreting its mistakes and saying, I wouldn't have been so stupid as to buy that at 500 dollars. Maybe you would have sold at 400. Exactly. So that's what I wanted to add.

Yeah, thanks for bringing up that paper; it lets me go straight to the point. I think we have to look at both the black box and the outcome. First, when we say that people are less forgiving of machine mistakes than of human mistakes, you can see the logic, right? We know there is a lot of variance in human performance; we don't expect people to deliver the same thing every day, the way machines do. So when someone makes a mistake, we know that many things could have happened, and we're not quick to conclude that the person is incompetent.
The problem is that when a machine makes a mistake, we don't have the same perception that there is a lot of variance, a lot of reasons why a machine might make a mistake. So it's tempting to think, well, if the machine is making a mistake, that's because it's badly coded. Whereas we don't say, oh, this human made a mistake once, their brain must be badly coded, I'm not going to trust them. We give some slack to humans, and we give less slack to machines. But then there's also the black box problem itself, which is that the thought processes of machines are opaque to us. Again, it's always nice to go back to the baseline and ask: are human thought processes really so transparent to us? I've been a psychologist for 20 years, and for the most part humans are still black boxes to me. I don't want to over-emphasize the understanding we have of how humans decide. But yes, we have at least the illusion that we understand what other people think and how they decide, and we don't have that for machines. That was very striking to me when, for example, there was that highly publicized crash in Arizona, where an Uber vehicle with a safety driver hit and killed a pedestrian. In the days after the crash a lot of information came out: the car perhaps had issues identifying the object, whether it was a human on the road; but then the safety driver was distracted, watching The Voice on her phone during the ride; and the pedestrian was crossing the highway at night, in the dark, outside a crosswalk. So you could see it was a very complicated situation, but what you saw is that for the first few days everyone jumped on the human responsibility, either the safety driver or the pedestrian. And you can understand that: I have no idea how a car thinks; I have a pretty good idea that not watching the road when you're the safety driver is bad, and that crossing a highway at night is not so smart; but the machine, I don't know. So in that case, you would think that giving more information about how the machine perceived the road and so on would change people's minds, and it did, to a certain extent, because there were explanations of the mistakes the machine made. But these explanations are always very imperfect, very crude. Sometimes I compare explaining a machine's thought process to visualizing a complex data set. The figure you make from a complex data set is never the exact truth; it's a way of framing and emphasizing some aspects of the data, and it carries a persuasive component. Sometimes you sacrifice a bit of accuracy, or a bit of granularity, to make your message more compelling. That is what we are condemned to do when we want to explain complex algorithms: we will never get rid of the persuasive component, and we will never be able to give a neutrally framed explanation of what was going on or what is going to happen. And I don't see how we're going to solve this. All our efforts toward explainability are going to hit that wall: if people realize that there is always a persuasive component to the way you explain the machine, they will never completely trust what you're saying.

We have one more question, the one at the top: from a cybersecurity point of view, what kinds of attacks can be expected on artificial-intelligence-based systems, how disruptive could they be, and how could they be prevented?
I'm just a humble psychologist.

That's also very far from my area of expertise, but I've met people who have been working on this along the way, and of course we know about adversarial attacks: attacks in which you change a stop sign a little bit and the machine says it's a duck, or something completely different. The point is that you can fool the machine in a way that you would never be able to fool a human, because the way the machine interprets the input, because of that opacity, is sometimes not very robust. So there are a lot of these adversarial attacks, and I don't think that's a bad thing; I think that's a great thing. I think that's the evolutionary pressure you want on the development of those algorithms, on the training data you're going to use, and so on. A lot of people have been making great advances in developing AI that is more robust, by combining statistical learning models, which can be very good in terms of scalability, speed and performance but which in some corner case suddenly make really stupid mistakes, with AI systems based on symbolic logic, which in some sense you cannot fool the way you might fool a system that was trained when you don't know much about what's going on inside the network. So there are attacks you can carry out by providing input that fools the machine, but there is also a lot of effort to create robust algorithms; some of that involves symbolic logic, and some of it involves mimicking feedback structures that are able to denoise the input as it moves through the layers of a deep learning system. So I think this is a very interesting question, because we are at the point where the technology has been developed, the mistakes are understood, and now we're starting to figure out how to create technology that is robust to those mistakes.

Thank you. So, it's almost 6 p.m. Thank you all for coming. Before we wrap up the discussion, we would like to present our latest IST magazine, with a special focus on artificial intelligence and IST's 10th anniversary. Okay, so we have now come to the end of today's event. Thank you, Jean-François, César and Dana, for coming, for sharing your expertise with us and for answering so many questions. And thank you very much to the audience for being with us and for participating with your questions. If you're interested in our research, please visit our website, the link should appear in the chat right now, and follow us on our social networks, Facebook, Instagram, LinkedIn and Twitter, to stay updated on upcoming events. Thank you all for coming and have a great evening.