Hello everyone, my name is Zain Asher. I'm a news anchor at CNN International, so welcome to Harnessing the Generative AI Revolution. So much to talk about here; this is a fascinating topic. Obviously, generative AI has the capacity to disrupt, revolutionize, upend nearly every industry, every business, every sector. The benefits are clearly overwhelming when you think about the potential for efficiency, productivity, cutting costs, and so on. But of course, so are the risks. Some of the major concerns we're going to be talking about are data privacy, the potential for bias, that sort of thing; the list goes on when it comes to the risks.

So I want to introduce my panel. We've got Michael Schwarz, chief economist at Microsoft. We've also got Nicole Sahin, founder and executive chair of G-P, a global employment platform; Gevorg Mantashyan, deputy minister of high-tech industry of Armenia; and Mihir Shukla, co-founder and chief executive officer of Automation Anywhere.

So Michael, I'm going to start with you, because of course Microsoft and OpenAI have extended their partnership; Microsoft has invested billions of dollars in OpenAI. There's so much to talk about, obviously, around the risks here, ensuring that the technology and the infrastructure around it remain safe. But just talk to us a bit more about how Microsoft's vision and OpenAI's vision overlap.

Well, first of all, I can't speak for OpenAI, only Microsoft. I think both Microsoft and OpenAI want AI to help people achieve more. That's Microsoft's mission, but I'm sure OpenAI wouldn't mind it. I think both companies are really committed to making sure that AI is safe, that AI is used for good and not for bad. We do have to worry a lot about the safety of this technology, just like with any other technology, right? When cars were invented, it was a wonderful invention.
All of us got here through the magic of the internal combustion engine, and yet that's a dangerous technology that kills thousands of people a year. I hope that AI will never, ever become as deadly as the internal combustion engine is, but I'm quite confident that yes, AI will be used by bad actors; yes, it will cause real damage; and yes, we have to be very careful and very vigilant to avoid that by all means possible. We have to put safeguards in place. I think people who worry about AI taking away jobs are paranoid; I don't think people should be too worried about it. It's a good thing when AI makes us more productive. We should be worrying a lot more about AI being used by bad actors to cause damage, because, please remember, breaking is much easier than building. So long before AI could take all your jobs, it could certainly do a lot of damage in the hands of spammers, people who want to manipulate elections, and so on and so forth. And we have to be careful about that.

So, Mihir, to Michael's point about AI being used by bad actors, what is the best way for companies to mitigate that kind of risk?

I think we are learning that. We have more than 5,000 customers, and about 70% of them are already engaged in some kind of generative AI discussion. In many cases, we are putting guardrails in place. Before I talk about that: the potential is amazing, right, across so many areas. But we are learning where it is ready now and where it is not. Take healthcare as an example. Although the outcomes on the clinical side are amazing, we are not yet ready to deploy it for diagnosis, where getting a diagnosis wrong is unacceptable. But on the administrative side, in claims processing, in customer service and sales and marketing, there are safe areas where you can deploy it today and have an enormous impact. We are putting various guardrails on all the AI answers, measuring the outcomes, making sure we are capturing the outcome.
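A guardrail of the kind described here, measuring outcomes and flagging answers that drift too far from what past outcomes looked like, might be sketched like this. This is a minimal illustration only, not Automation Anywhere's actual implementation: the claim amounts are made up, and the z-score metric and threshold are assumptions.

```python
from statistics import mean, stdev

def flag_deviation(history, new_value, k=3.0):
    """Flag an AI-produced outcome that drifts too far from past
    outcomes (a simple z-score guardrail with threshold k)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > k

# e.g. hypothetical claim amounts an AI system approved historically
history = [100, 105, 98, 102, 101, 99, 103]
print(flag_deviation(history, 104))  # within normal range -> False
print(flag_deviation(history, 500))  # far outside range   -> True
```

In practice, "history" would be whatever outcome metric the deployment tracks, and a flagged answer would be routed to a human reviewer rather than acted on automatically.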
So if it deviates too much, we know about it. We are also exploring a possibility with some of the AI-generated content: maybe we tag it, so that just knowing it is AI-generated content helps you understand the pros and cons better, and we leave it to the reader. So there are quite a few areas: how do you protect private information, and things like that? I think it's about knowing where it is ready now, starting to put guardrails in place, and then taking it across various industries and various functions.

So, Nicole, you just heard Michael say that the biggest concern is, of course, bad actors, and that we shouldn't necessarily be worried at this stage about generative AI taking our jobs. Do you agree with that?

Yeah, I mostly agree with that. I do think the jobs will change so much, at a speed that potentially people are not ready for; that's what people are afraid of. But people are always afraid of change. Realistically, we've been using AI for a long time in many other areas of our lives; it's just that the pace is accelerating. Think about spell check, or word processors, and all of these things that have come along to make our lives so much more efficient. We don't generally see companies slowing down their hiring at this point, but things will change, and I think people will just become so incredibly more efficient. Even with engineers, there just aren't enough, so the more productive people can be in their work, the better off we all are. It's not necessarily about AI taking our jobs; it's more that the workload and the type of work will essentially change. I think it's the speed, competence, and efficiency with which people are able to do their work. Things that require research, that work isn't dead, but it becomes maybe 5x more efficient, probably, if not more.

And Gevorg, Italy temporarily banned ChatGPT.
Some might say that was an overreaction, but from a nationwide perspective, there are so many concerns when it comes to privacy and security.

Thank you very much for the question. I am privileged to be the fourth panelist, who can just try to answer on the topic, and I think the most important things were already covered by my colleagues on the panel. On the reaction in Italy, I would prefer my colleagues from Italy to react to that, but I will tell you what we see at this moment from the perspective of the club of states. I think it was partly an echo of the fast development of the product, which is sometimes not easy to put under any control. It is somewhat the monopoly of the state to try to predict the processes that are going on and to ensure security in society. And nobody can debate that the technology is growing really fast, and we don't necessarily understand where it will take us. There is a fear, of course, and we in the ministry, as a high-tech ministry, are in an interesting situation: we want to protect the technology, but we also need to protect society. So the question is whether the technology is harmful for society at this moment, or whether society is already ready. That is a topic for debate: how ready is each society to accept this tool, which, and I don't think it's hard to argue this, could very easily be used to manipulate reality? This will be another acceleration of what we have already seen happening in the media with fake news. Here we are handing over another tool to accelerate fake news, which is a very dangerous direction, I think. And we need to create general protocols for the use of these applications, of AI, in different fields.
And echoing the points on automation in medical technologies: these could serve life-saving purposes, where manipulation can still exist, by the way, but if we use the necessary protocols, we can be successful. It is another question who controls those protocols and how we apply them.

Yeah, you touched on a great point about disinformation, for example. It's already a concern; you can imagine how much more of a concern it will be with generative AI. But Michael, to Gevorg's point, is the technology evolving too quickly? And if it is, how much more of a risk is there for early adopters in particular?

Well, I don't think technology ever evolves too quickly. But too quickly for us to be ready for it? I don't think we are ever ready for a technology when it comes about. That was true with electricity. That was, I'm sure, true with fire. That was certainly true with the technologies of the industrial revolution. So of course we are never ready. But we need to remember one very important thing: when AI makes us more productive, we as mankind ought to be better off, because we are able to produce more stuff with less work, less toil, less usage of resources. So clearly, if we play it right, we'll all be better off for it.

So what does playing it right actually mean in practical terms?

Oh, I think that's a really, really good question. And I think it means many things. First, it means being responsible about how we use the technology. So, for example, any image generated by AI that could be confused with a real image should, of course, be labeled, like what you suggested. In general, I think we should go a long way toward making sure that we label AI output. That's the very minimum. I think there are many other things that we can do to make it safe. And we are doing that, right?
So, for example, today, when you use the new Bing, when you use ChatGPT, the product you are interacting with has enormous amounts of safeguards built in. It's not perfect. It could still tell you things that may be incorrect. It may tell you things that would be damaging if you were to follow them. But by and large, an enormous amount of effort was put in to prevent it from spitting out hateful content, biased content, and so on and so forth. Of course, it's going to reflect many biases that are out there in the corpus of text it's trained on. But nevertheless, a lot of those investments are being made. And in fact, if you were to come inside Microsoft or inside OpenAI and play with the language model in the absence of those safeguards, it would say a lot of things that would be great and on the money, but it would also say a lot of things that would be in various ways unsavory. So with those safeguards, you lose some of the good and remove a lot of the bad.

Sure. I think over time we'll figure out how and when to deploy this in various enterprises; there could be a few bumps along the road, like every technology has. What we're concerned about is the unintended outcomes. Think about the last time we deployed AI at scale: that was for social media engagement. And for whatever reason, we decided that instead of giving people a 360-degree view of a topic, they should hear more about what they already believe in. When we did that, what we did not realize is that it would lead to mental health issues in children, in everybody actually, to election interference, and in some places democracy came under risk. I would argue that it accelerated the polarization of society. Now, who would have thought that a like button would cause all of that? And that is just AI used in engagement. Now imagine if you generate content.
So AI generates content, and then AI engages you. There is a risk of you getting hooked on a drug that is, you know, completely AI-generated. And we are not at a stage, society-wise, where we understand the difference. So those are the concerns on a larger scale. In a business environment, we will put enough guardrails in place. But how would we as a society protect against this, and are we ready for it?

Just one more thing. These large language models, Michael knows this very well, we call them language models, but pictures are a language, sound is a language, music is a language. Almost every form of content that you see around us could be generated with these same generative AI models. Are we ready for it? In three seconds, AI can simulate your voice. There was a case where a "kid" called a parent and asked for money, and apparently it wasn't the kid; it was generative AI extorting money. Are we ready for this? So I think it is in those areas that we need enough guardrails, and some kind of regulation will help in this regard. This is a big and important one. But as Michael said, just like with cars: if we drove fast through every intersection, there would be chaos, and yet, for the most part, the car is better for us. So we'll get this technology there, provided we all commit to it.

So Nicole, how do you think this technology is going to disrupt your business? Nicole runs G-P, a global employment platform. We've talked about some of the risks a little bit, and you said you agree with Michael that it's not going to take all our jobs. But of course it is going to disrupt your business to a certain degree.

One hundred percent. So I'll give some context about what we do, to put my response in context. What we offer is a platform through which any company can hire anyone anywhere without having to deal with legal, tax, or HR issues.
So ultimately, if they want to hire somebody in Hong Kong, let's say, they put that person on our payroll via software that's overlaid on top of our global legal infrastructure. And then they have access to our HR advisors, legal people, and everything.

So you essentially hire the person in the foreign country. Exactly. It's a great business idea, by the way. Thank you.

And there's a lot of AI that will ultimately feed into that. The way we look at it, with that context, is that there's a lot in our knowledge base. Our customer service teams are constantly speaking with our customers, responding to things. All of that will be managed with AI, pulled out of, you know, all the Gong recordings, all the listening to the calls, and automatically fed back to the customers. But there's still a very human element, which is where the people come in; they just become much more efficient at their jobs. So let's say we're employing somebody in Hong Kong for a customer, and that employee is in the hospital. The customer's HR person is in a bit of a panic, trying to figure out what to do. So they come onto the platform to talk to an HR advisor in Hong Kong. As they type in "my employee was in an accident" and our person gets on the phone, what comes up is the insurance: what are the terms of the insurance, where is the employee based, everything about that employee, what is normally done in Hong Kong about health care, all these things that can just be automatically fed to the customer. Now that said, at the end of the day, the HR person is going to want to talk to an employee of ours and say, talk me through how we can help this person who's found him or herself in the hospital. So yes, there's still definitely a role for people.
It's just that they get to do what they're really good at, because they want to help people, not look up insurance information.

But what are you nervous about?

What am I nervous about? I would say it's always about moving quickly enough. There are only so many engineers in the world who have this capability; the numbers are really small, engineers who have experience working with this type of software or who are really high quality. It's always about hiring enough people quickly enough to build what we want to build, and I don't think that's changed. The people we have just become more efficient. So yeah, it's always a race for talent. One thing that is fortunate is that we're able to hire anyone anywhere, so that gives both us and our customers access to people in a better way.

And Gevorg, I understand, obviously, that you can really only speak for Armenia, but from a national perspective, what is the role for regulators in all of this?

At this moment it is very free and liberal to work. And in the position I'm in right now, I'm trying, to be honest, more to listen. I think governments generally need to listen at this moment and be part of the dialogue as well. We know some governments are investing in the technology too, but the private sector is moving very fast here, and this coalition can build the environment we would like to live in. And I really echo the examples given: labeling is very important for identification. It's about trying to manage the risks that follow, not about putting the technology under control, but about giving people who could be vulnerable the chance to be aware. But this can accelerate other reactions as well. And definitely, this is impacting the labor market.
It's all about adapting to the new technology. We like to use the example of the car, but I want to simplify it further: just the pen. You can harm with a pen as well. You can write a lot of harmful content with a pen, and you can distribute it the same way, content very harmful to people who could be vulnerable to it. Is the pen dangerous? It can be, if it is in the hands of people who want to behave dangerously. That's why I would say we are right now absorbing, trying to understand the reality, so as to create an environment where we do not slow down development. The important thing is to proceed on the right ethical values, on the right track, and not open the door for dangerous processes. And the beauty, as in the example of the company, is that it brings opportunity to a lot of people.

But there is one more thing. This summit is called the Growth Summit, and we talk a lot about reskilling and upskilling. Here we definitely need to go to the public and show the advantages of the technology, not just scare people. That is important as well: there are advantages and a lot of opportunities behind the technology, but let's use it in the right manner. And I'm pretty sure the digital platforms will start using marks like the ones we used during COVID-19. A lot of states started using the official COVID-19 link, where you could click through and know the information was reliable. I think that parallel is important when we talk about generative AI, where content can be generated artificially and news can also be generated artificially. We already have the experience. I'm thankful for the COVID experience, because we learned a lot and got prepared for this revolution, which is fairly called a revolution. It's already happening. The only question is whether you are part of the revolution or you're staying behind.
So Michael and Mihir, either one of you can jump in to answer this, just in terms of the best way to scale. Obviously we've spent a lot of time on the risks, and we'll get back to the risks again, but if we're going to talk about the benefits, how do you ensure that you harness them to target a broad base of society, so that it's not just certain industries benefiting from the technology but a broad base of society as well? Who wants to take that? Michael, do you want to take that, or Mihir? Okay. Michael was just having this discussion with different people yesterday. All right.

So I think it's a certainty that AI will not benefit all industries and all people to the same extent. Nothing ever does. Some industries will adopt it a lot faster; some will adopt it a lot more slowly. I don't think we can predict who the biggest winners will be. For example, in some of the experiments we did measuring the impact of AI on developer productivity, people in the control group didn't have access to GitHub Copilot and people in the treatment group did, and there was literally a 50% difference in how quickly the people in the treatment group managed to accomplish the task. And the people who benefited the most were actually developers with less experience. So you could say, well, maybe it will reduce inequality, because it helps the less-skilled person more; that could be. There's also an opposite argument: maybe it's the other way around, because the more basic tasks could perhaps be fully automated by AI, while the things that are more architectural, that require more thinking, large language models couldn't do. Because what large language models do, literally, absolutely literally, is take a string of text and then generate one more word.
After that, they forget everything they did before; they take the string of text plus the one more word they generated. The model doesn't even have a state; it just generates the next word. Literally, that is what it does. So how likely is something that is brilliant at coming up with the next word to replace your job of figuring out your five-year plan, or your one-year plan, or the architecture of something big? I don't think it's very likely. So I think it cuts both ways. We don't know where it will be more useful and where it will be less useful. We don't know the speed of adoption. I like to say that AI changes nothing in the short run and changes everything in the long run, and the reason I know that is that it was true for every single technology that came before. It took decades for people to see significant productivity gains from electricity. It took a pretty long while for smartphones to actually make a difference in the way people do business, and they still haven't fully reached their potential. Same with AI: you're not going to see a big productivity impact from AI until you can rethink your entire process, the way you do business, around AI. When you're just trying to make certain pieces a little more efficient while doing things the old way, there are some gains, but they're going to be relatively modest and relatively slow. It's only when you redesign the process to take maximum advantage of what AI can do, leaving people to do the things they are best at, that you get the biggest impact.

So, one more sort of sound bite about how all of us will benefit from AI. I have really good health insurance, and yet every time I need to see a doctor, especially a specialist, there's a really long wait, and I'm a very healthy person.
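The "one more word" description above is autoregressive generation, which can be sketched as a toy loop. This is purely illustrative: the hypothetical bigram table stands in for a real model, which would score every word in its vocabulary against the whole context rather than looking up only the last word.

```python
def generate(prompt, model, n_tokens):
    """Stateless next-word loop: each step re-reads the full text so far
    and appends exactly one word; no hidden memory carries across steps."""
    words = prompt.split()
    for _ in range(n_tokens):
        context = words                # the entire string generated so far
        nxt = model.get(context[-1])   # toy: condition only on the last word
        if nxt is None:
            break
        words = words + [nxt]          # "string of text plus one more word"
    return " ".join(words)

# A made-up bigram table standing in for a trained model.
bigram = {"the": "cat", "cat": "sat", "sat": "on", "on": "a", "a": "mat"}
print(generate("the", bigram, 5))  # -> "the cat sat on a mat"
```

The one property the sketch preserves is the point being made: nothing persists between steps except the text itself; each iteration starts again from the full string generated so far.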
It still kind of bothers me, and it's a much bigger problem for people with less fancy health insurance who are not in as good health as I am. With AI, a lot of the tasks that doctors do, and we clearly don't have enough medical advice to go around, could really be automated, so that we will all be able to benefit from better health through AI.

I do think that when you start using AI in small ways, when the doctor keeps the process he has and then enhances it with AI, sure, it helps some. But I envision that in the future it will be completely different; it will be redesigned. Your first encounter, your first briefing, would probably be with AI, where you talk about your symptoms and the AI asks you intelligent questions. You know that experience when you call the nurse helpline, and they have a script, and they ask you questions, and you go, why are you asking me about my heart? I just broke my leg. No, no, sorry, do you have a fever? What fever? I just dropped something on my leg. Or calling the airline. Yes. So imagine if AI could help a little bit in this process. You answer a few intelligent questions, and then, based on those questions, the AI tells the doctor: look, this person needs some of these tests, maybe an X-ray, this and this. After he does them, he talks to you, and you have all the information. That would be much better. That would be a different process. How long will it take our healthcare providers to get the hang of it? A while, but all of us will be better off. And there are many, many other applications like that.

I always start with why. Why do we need this? Can we do without it? I think most of the world has a productivity crisis, in my opinion. So many countries have a shrinking working-age population. Not all, there are exceptions, but a large number of countries have a demographic challenge.
If we don't solve it... the way the math works is that in order to meet GDP targets, we have to increase our productivity by 50%. That's the only way we can make the whole social structure work. Now, let me put that in perspective. In the last 60 years, we had the computer, the internet, mobile devices, and every piece of software ever written. Does anybody have an idea how to produce 50% more productivity on top of that? And if we don't, there is a direct correlation between not being able to produce enough and riots on the street, because when there is not enough to go around, that's what happens. So if we want to figure out a way to operate countries and societies, we have to tackle this productivity challenge. And AI and automation are not the only drivers, but they are a significant growth and productivity driver. So that is why we need it.

In addition to that, I'm a firm believer that talent is evenly distributed but opportunity is not. And these tools, whether it was the computer 30 or 40 years ago or generative AI today, are a large democratizing force that makes opportunity available everywhere. They will harness and highlight the human talent that exists everywhere in the world, and we will see an unbelievable amount of creativity come out of it.

We are already seeing the use cases across industries; there are lots of areas with so much friction in our business. Take the example of call centers, where all of us have had the experience of waiting on a call. We now have a next-generation call center actually working with generative AI, handling 80% of the calls completely end to end. So when an email comes in, say to an airline, asking to change something or cancel a flight, generative AI interprets the intent of the message. Then, with a technology like robotic process automation, it executes the intent across various applications.
It takes the output back and sends an email to the customer with a very warm tone, offering options. This entire experience happens in two minutes, where it used to take 20 minutes on a call, and the same goes for countless other questions. Across so many industries, the opportunity to improve customer experience and remove friction is just one among so many other possibilities.

I want to see if anyone has any questions. Right away; she's getting you a mic. Thank you.

This is probably for you, actually. It should be common knowledge that we would not have today's AI, or be talking about AI, if we still had the computing of the '60s, because it is computing power that allows us to get an answer quickly out of the extremely large amount of data the algorithm is parsing. Now, in computer science, expert systems have always been a goal: harnessing the knowledge of a human expert in one domain in order to be able to provide an answer quickly, even if that expert is not available at the time. Then we came to neural networks, where the number of nodes allows getting the proper answer from a pool of data used to train the network's learning algorithms, and so on. Now, in all these activities there is one mantra: garbage in, garbage out. If we take AI, with the computing speed available today, and feed it everything there is on the social networks, no one should be stupefied or surprised to get garbage from the AI. And this is what happened recently in some of the early experiences with ChatGPT. If I take ChatGPT and train it on conspiracy theories, I will get the best conspiracy theories in the universe, not just in the world, because it will have convincing arguments that JFK was killed by the CIA or that the Twin Towers were an inside job; it will convince 90% of the people in the world. So the problem is content curation. Legislators and companies should also look at the scope of the AI.
A recent example is a newly released AI that is able to interpret medical tests and scans and predict that a patient has a tumor where the doctor, in general, says: I don't have enough information. The AI has enough information and is able to say this person has a tumor; the doctor goes and does extensive additional tests, and indeed the person has a tumor. This was released a few weeks ago. So why not have legislation that determines, first, the scope: why am I releasing this AI? And second, the content the AI is fed in order to gain this knowledge. If you ask the tumor AI whether Hitler is alive today, it will say: what are you talking about? I'm looking for tumors; ask me about tumors. Shouldn't that be a solution?

Mihir, is that for you? Well, we can expand it to everyone. Yes. I can start. Yeah, go ahead.

Truly, in all fairness, the topics being raised exist, and that's why we're talking about the importance of the application, and why I mentioned government and the private sector working hand in hand. There is an ethical part as well, in how we use this tool. But the technology is going to grow. Sometimes we want to observe; as I mentioned, I think states right now need to listen a little and be part of the dialogue, to understand where we are going and why. But I will tell you one thing as well. My colleagues say very openly that the application is very important, but there is also artificial general intelligence coming, which we still need to understand, and which is a hacking of humankind, in different senses. And here I think there is one thing we are missing: the emotional part of what this AI product is generating. The history of mankind, the creations, the paintings, everything built and made that shows the existence of humankind: a lot of it was generated not artificially.
It was generated by real personalities, and right now we are creating a reality where a lot of paintings and art, or whatever we will call it, can be artificially created. And the people who were part of that creation may have emotions about it. I'm an economist, not a psychologist, but let the psychologists explain it: broadly speaking, people make decisions either rationally or emotionally. And I think there is a lot of emotion around AI, which is creating these fears and driving a lot of decisions.

Michael, did you want to jump in on that?

So, I like many of the things you say; I don't agree with all of them. I think that if legislatures were to try to legislate the training set for AI, that would be pretty disastrous. Just one little factoid: every company I know of in the AI space, as far as I know, considers the choice of the training set a trade secret. And the reason you keep it a trade secret is that a lot of technology goes into figuring out which of the billions of pages out there should be used as part of the training set and which should not. It's very complicated. So if a Congress were to make those decisions about training sets, good luck to us. That's part of the answer. As for your point that there's a danger AI will convince everybody of something completely erroneous, that's definitely true. The same thing could be said about the worldwide web. In fact, when the worldwide web first appeared, the quality of web pages was actually quite a bit higher than it is today, because nobody was trying to game anything; there was very little commercial component, and people were just saying things that seemed important to them. Later on, you have search engines, you have search-engine optimization, and you have a lot of websites doing a lot of gaming, with very questionable content, and so on and so forth.
The same problem will exist with these algorithms. There will be a lot of spammers generating content in the hope that it gets into an AI model and gets the model to recommend their product to people. That's a real problem, and the companies developing this technology are already thinking about how to fight that kind of spam. So there are definitely technological solutions, and we are very aware of the problem and taking steps to prevent it. I'd add one thing: although the world is aware of only a few such models, there are hundreds of generative AI models at work, with various outcomes. So it is complicated to figure out how to regulate all of them. Yeah. Any other questions? Thank you very much. You all talked about the importance of regulating this type of technology. So two questions. First, which principles would you apply in this regulation? And second, building on the example of the cars: when you go to a different country, you are perfectly aware that you're driving in the UK or in the US or in continental Europe. When you're using AI, you don't know where that AI is being generated. So if the rules are not global, it might be very difficult to apply the same rules. How would you go about solving this? Mihir, do you want to tackle that? Yeah. I think the short answer is we don't know all of it yet; we are trying to figure it out. To put the challenge in perspective: there is a difference between a knife, an AK-47 and a nuclear bomb. They all have very different destructive power and scale. The challenge with this technology is the speed and scale at which it can affect so many things, in ways that we cannot control. The rules and regulations around it are in their infancy. At least what is being discussed right now is to tag things, tag by location, and maybe make sure that what one AI generates is not consumed by another. Otherwise, can you imagine...
One will generate, the other will consume, and they will make each other believe anything they want. So, in specific areas like healthcare or logistics, we distinguish where it is safe to apply and where it is not, with various logging algorithms. In certain cases, all the output we produce in various countries out of generative AI, we keep locally. We monitor all the outputs routinely today. It's early days, but today we monitor it and see if it is deviating too far from the mainstream. There are also significant security concerns: if someone hacked into your generative AI, could they make your entire organization believe something else? So there are new security restrictions and oversight of the generative AI platforms that we deploy. These are just a few of the areas we are exploring, but almost every week we are evolving new guidelines as we deploy. So when it comes to regulation, is it more about where the AI is generated or where it is consumed? Especially with GDPR and the privacy rules in Europe, for example, isn't it more about where it's consumed as opposed to where it's generated? Michael, do you want to take that? Well, I think there is a normative question and there's a legal question, so let me address the normative one. What should our philosophy be about regulating AI? Clearly, we have to regulate it. And my philosophy there is very simple: we should regulate AI in a way where we don't throw the baby out with the bathwater. The regulation should be based not on abstract principles but, as an economist, I like efficiency. First, we shouldn't regulate AI until we see some meaningful harm actually happening, not imaginary scenarios. Once we see that there is real, meaningful harm, then we act. But you would wait until we see harm before we regulate it?
Well, I would say yes, because we could not predict in advance where the real harm would be, what the real problems would be. Take the first time we started requiring driver's licenses: it was after many dozens of people had died in car accidents, and that was the right thing. If you had required driver's licenses when there were only the first two cars on the road, that would have been a big mistake; we would have completely screwed up that regulation. So there has to be at least a little bit of harm so that we can see what the real problem is. Is there a real problem? Did anybody suffer at least $1,000 of damage because of it? Should we be jumping to regulate something on a planet of eight billion people where there's not even $1,000 of damage? Of course not. Once we see real harm, we have to ask ourselves a simple question: could we regulate it in a way where the good things prevented by the regulation are less important and less valuable than the harm we prevent? You don't put a regulation in place to prevent $1,000 worth of harm if that same regulation prevents a million dollars' worth of benefit to people around the world. Just using this common-sense approach to regulation, I think, is critical. Is there a technologist, two fellow technologists here? I agree with Michael that overregulation could hurt innovation. But I think it makes sense, just as ChatGPT is going through its versions, to start looking at some regulations, very simple ones, like just starting to tag things, because regulation also has to evolve. What we learned with social media is that one day, when you realize all the implications, the genie is out of the bottle and you have no idea how to regulate. I think we can't be blindsided that way again. So I think we should start the conversation but not overregulate. Gevor, do you agree with what Michael said? Should we wait until there's harm before we regulate?
I agree on the need to have the driver's license; better to have it than not. But even having the driver's license, and checking it, does not prevent the harm that happens on the road. It is just an example of minimizing it. My disagreement with Michael is different. It is the following: the way we should operate is to try to understand the potential harm, which is arguably already there, and put the regulation in place now, not wait for it to become harmful. To continue the car example, we don't give kids or babies the right to drive, because, given the development of the human body, they would not be ready to react to accidents. These kinds of basic things should be the foundation. The question is very hard, and the short answer is correct: we don't know. In the Council of Europe, we are also holding a discussion at this level with different partners around the world. At the state level, we try to find a way that could be beneficial for the countries and the states, but also not harm society. But the elephant is in the room: of course there is damage here. We should not fall into a damage-control mentality, but try to predict as much as we can. That's why expert-level consultation and discussion, and why this panel, is important: to identify the important message. And, sorry to draw a comparison to COVID, but there was a lot of discussion about whether the mask saves you from being infected. Not necessarily, but it reduces the risk of your being infected. So I think this approach is very applicable to the technology we are discussing right now: to get prepared for the so-called singularity, which can create a decisive difference in the technology. It is a little bit like training, if we can call it that.
Nicole, obviously your industry is a little bit different, but your thoughts on this idea of waiting? Oh yeah, I mean, I agree across the board that some basic fundamentals make sense. I think we're very fortunate that a lot of the businesses creating this generative AI at the forefront are being very thoughtful; the founder of OpenAI is being very ethical and mindful about this. And yet the cat's a little bit out of the bag, really quickly. I think some basic security measures, like let's not give kids the keys to the car, make a lot of sense. In terms of my industry specifically, it goes back to what Jorge asked from the audience: how do you regulate this? Because there's really no functional international body. And I guess I'm a little more concerned about a patchwork: this is for Europe, this is for the United States, this is for China, this is for Japan. We already have that with data privacy, and it's very hodgepodge all over the globe. No, it's not going to work. Ingredients and food labeling, for example. Exactly. Could that be a US model? Yeah. So I think it's quite a tricky problem with no simple answers. I think we have... okay, go ahead. I'd like to agree with the rest of the panelists: whenever we can impose a regulation that causes more good than harm, of course we should impose it. There are innocuous regulations that don't damage innovation, like requiring AI content to be labeled. Of course we should have those things, and in terms of kids' access to AI and so on, I'm completely in agreement. The principle should be that the benefit of the regulation to our society is greater than the cost that our society pays for it. That was my point, and I think the rest of the panel would probably agree.
I think we also agreed that in medical technologies the application is very natural at this stage where we stand. And this will accelerate the future of governance as well. We are not talking about this yet, but it will come in the next decade, maybe not even a decade, just a year. Could I just touch on one topic, re-skilling? Because in all of this, we need to bring the entire world population along with us, and we need to train so many people on these capabilities so that everybody can be part of the digital and AI revolution. Unfortunately, we have no time left. I'm so sorry. He wanted to ask a question, but I don't think we have time. Okay, thank you so much, everyone. Can we give a round of applause to our panelists? Thank you. Thank you so much.