Welcome to the Carnegie Endowment for International Peace and our panel discussion on machine learning and artificial intelligence, a transatlantic discussion. I'm Mike Nelson. I just recently became the director of our program on technology and international affairs, and I'm very glad you could all be here today. Carnegie is a unique place, partly because we have the ability to have global discussions about the most important topics at hand. And today's topic is one that's getting a lot of news. Every week here in Washington, there are at least two or three meetings, conferences, hearings, or panel discussions on artificial intelligence. But this one's going to be different. It's gonna be different because it's gonna be global. It's not just going to focus on consumer applications and personal data. We're also going to look at business applications, which in many cases will be the most profitable. And we're also gonna spend some time on military and intelligence applications, which you may never hear about, but which also could be very important. We have a special group of panelists, and again, I think a unique combination of speakers from both sides of the Atlantic. We have government officials as well as corporate. And I think that is exactly what we need to really get a well-balanced discussion on this topic. I'm going to introduce each panelist and give them just a few minutes, four or five minutes, to give their viewpoints on where artificial intelligence could go and what government's role will be. I'll ask a few questions, and then we're gonna devote most of this discussion to your questions, because that's why you all came here. If you are tweeting, the hashtag is #MLAI, or you can just type the Twitter handle, @CarnegieEndow, for the Carnegie Endowment. So with that, let me introduce our first speaker, Pekka Ala-Pietilä.
He is the former president of Nokia, and more important for today's purposes, he was the chairman of the High-Level Expert Group on Artificial Intelligence that the European Commission convened to help it explore these issues. He's the only one on the panel who's not a geek. Three of us are trained in science or engineering, and that means I guess we're a well-balanced panel as well.

Thank you, and on the definition of geek, I could actually build on that. There are other ways of being a geek than being an engineer, I must say. Very nice to be here. Thank you for the invitation. During my opening, I would address the perspective from Europe's dimension, set a few elements in the backdrop, and then share certain things about the High-Level Expert Group on AI, which is an independent expert group giving advice to the Commission: what we have done, what we have concluded, and what we are working on at the moment. Not from the engineering side, but from the economics side. Robert Solow, Nobel Prize winner in economics, has said that in industrialized nations, economic growth stems one third from increased input of labor and capital, and two thirds from the capability to apply the latest technology. And that sets the backdrop for the importance of digitalization, data, and AI. So the growth will come, not totally, but in great, great part, from advancing these elements. Therefore it's the speed to apply, the speed to learn to apply AI, which is crucial for economies, for businesses, and for governments. On the narrative, when we think about the market, I think one notion we need to understand when we speak about AI is that there is no such thing as a generic way of applying AI. It's business or context specific. You need to understand for what you apply it, and that creates the foundation.
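The two-thirds figure being cited here comes from growth accounting. In the standard Solow decomposition (shown here as background for the reader, not the speaker's own formula), output growth splits into the contributions of capital and labor inputs plus a residual attributed to technological progress:

```latex
% Solow growth accounting for Y = A K^{\alpha} L^{1-\alpha}:
% output growth = input contributions + technology residual.
g_Y \;=\; \alpha\, g_K \;+\; (1-\alpha)\, g_L \;+\; g_A
```

Here \(g_Y, g_K, g_L\) are the growth rates of output, capital, and labor, \(\alpha\) is capital's share of income, and \(g_A\), the "Solow residual," is the part of growth that Solow found accounted for the large majority of US growth per worker.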
So therefore, when we look at the big picture, the global picture, the markets: there's a lot of activity, a lot of discussion on one particular market, which is dominated by US and Chinese companies, and that's the business-to-consumer market, the digital-only one. And for right reasons. But the business-to-business market and the government-to-citizen markets are very big, important markets as well, which have not been paid similar attention. And if we are to believe what McKinsey has said about the size of the business-to-business market, in terms of value created by AI, it's twice as big as the business-to-consumer market. So therefore, when we speak about AI, we should understand the business specificity, we should understand the different markets, and not only the digital-only market, but also the physical product with an extended digital dimension. The business-to-consumer market in, for example, automotive is a very different market than the digital-only one. If I say only one thing on the High-Level Expert Group on AI part: we have two different reports out. One is the Ethics Guidelines for Trustworthy AI, and another one is the competitiveness part. And there are seven requirements, starting from human agency, to technical robustness, to privacy and data governance, to transparency, to fairness, to the well-being of society and environment, and the seventh is accountability, which set the requirements for building AI applications based on the ethical guidelines, which we may discuss more, or may not. We discussed with Lynne earlier today that what the US administration came out with on the 11th of January, just 10 days ago, has laid a great foundation for something where Europe and the US actually can find common ground, to really find ways and methods to work together to create markets which would be based on the same values, the same foundations of respecting rights, as is done in the US and in Europe.
On regulation: I'm not going to cover the competitiveness part; I'm going to cover regulation with only one statement, which might be the topic of today's discussion. Out of 11 takeaways, regulation is one, and a very strong recommendation from the High-Level Expert Group is that regulation should be risk-based. So, risk-based governance for regulation, where the propensity of the risk and then the impact of the risk create the foundation from which any regulatory measures, if any, would then be taken. And it's important to think about whether we need regulation or not; we have many laws which can be applied already. And then, if we need regulation, then for what, when, and how. There are multiple things which will create a rich foundation for finding the wisdom to do the regulation right.

Thank you very much, that provides a nice overview. When we found out that Pekka was in town, the first phone call I made was to Lynne to see if she'd be available to join this panel. And the reason for that is that for the last couple of years, Lynne has been the lead person in the White House Office of Science and Technology Policy on artificial intelligence. She was and still is the Assistant Director for Artificial Intelligence. She was recently also promoted to Deputy Chief Technology Officer, and she's the ideal person to cover this, not only because of your present title, but because of your past title as a professor and a PhD computer scientist. So, Lynne, explain all the things you're doing and explain how you do it, because you're not just doing AI, you're doing biotech and other emerging technologies.

Sure, so thank you, and I'm really pleased to be here today. I'll mostly focus on AI in particular. I think we have so much in common in terms of our values and our mindset with the EU, and it's a very critical and important time in the history of AI, so it's important that we can work together strongly in this area.
I think right now there's a lot of discussion about what's the proper use of AI, and a lot of concerns, naturally, about uses of AI that we don't agree with, that aren't consistent with our values. Certainly authoritarian uses of AI are not what the Western countries want to have duplicated in our own nations. And so the question is, what's the right pathway forward in order to accomplish that? Regulation is certainly a common hot topic these days, certainly with what the EU is planning to release in the next couple of months. And then, of course, as hopefully everyone is aware, the White House also released just a couple of weeks ago a draft guidance document for the regulation of AI. If you haven't found it, I'll put in a plug: go to the Federal Register, search for artificial intelligence, and you'll find the draft memo. It's out for public comment for 60 days, so until March the 13th, and we welcome feedback. And that's an important part of the regulatory process, to engage in a multi-stakeholder approach to make sure we're hearing from civil society, academics, industry, and so forth on what is the right pathway forward. I think there's been a lot written about how the EU and the US perhaps are on different pathways toward regulation. And there are small differences, I think, that we can certainly work through in how we implement. I think the most important thing is to recognize the common shared values about creating trustworthy AI. There's no dispute about this. Look at the president's executive order on artificial intelligence that was signed last February 11th; that establishes the national AI strategy, the American AI Initiative. And it calls very specifically for this process that is just now underway, of releasing a draft memo for the regulatory approach to AI, recognizing that that's an important component of creating trustworthy AI. But it's not the only component of creating trustworthy AI.
It really requires a multi-pronged approach. And so our approach is very consistent with what the EU talks about as well. It's consistent with the OECD AI Principles that we collectively signed on to last year. And that covers a number of issues, such as the importance of R&D. Think about challenges such as: how do we make sure we know how an AI system arrives at a recommendation or a decision? We often wanna ask the question, how did it come up with that recommendation? Well, you can't regulate that right now, to say all AI systems will necessarily declare or explain themselves, because the technology is not there. We need more R&D in order to achieve the kind of AI that has the kinds of characteristics that are consistent with our values. And you can look at this in other areas too: safety and security, robustness, fairness, non-discrimination. There are still some open R&D challenges. And so this is a common theme across what the EU is proposing, what we are proposing, and what was in the OECD AI Principles. Another has to do with technical standards. If you're going to oversee the use of AI, say you want it to be provably fair or provably non-discriminatory, you have to be able to measure it against some standard. Well, what is that standard? It doesn't really exist today. But there are certainly activities happening around the world in professional societies and elsewhere; NIST is doing work in this area to help us be able to answer that question and to develop the international consensus voluntary standards that help us answer the question of, how do we measure a system against these kinds of characteristics that we want? So that's another element of what we're working on, which the EU also talks about, along with other free nations around the world. Another piece is education and workforce. We have to recognize that even the good uses of AI may change the nature of work.
And so what does that mean in terms of opportunities for people to engage in this new era of AI? So in the executive order and the American AI Initiative, we have a number of actions to help look at worker skilling, to look at education and workforce from kindergarten all the way up through high school and college and technical school, and then re-skilling as an adult, getting into this mindset of constantly thinking about lifelong learning. And so, again, this is a common theme across the board, to make sure all people in society can engage in this new era of AI. And another thing that's called out specifically in our executive order is the importance of international collaboration. And so we certainly welcome opportunities like this, where we can look at these common themes, figure out ways that we can address some of the areas that have some differences, and then move forward collectively. Certainly in the regulatory space, I think there are a number of common themes. We very much also agree with a risk-based approach, the idea that not all AI is the same. Some use cases are more challenging than others, and so we need to dig into particular use cases. It's important to have public engagement. It's important to make sure that we're creating trustworthy AI, and that we limit regulatory overreach so that we're not hindering people's ability to use AI for positive benefit. So I think there are so many more things that we agree on than we disagree on that there's a very positive outlook going forward.

Well, thank you very much, Lynne. Our third speaker is Zachary Lemnios. Zach's degrees are in electrical engineering, and he's had the opportunity to work in three of the most exciting places to be an engineer in America. He worked at DARPA. He worked at MIT Lincoln Laboratory outside of Boston. And just recently he joined IBM Research, where he's in charge of all governmental programs, helping governments around the world understand what IBM Research is doing.
He also had one of the toughest jobs in government. When I joined the Clinton Administration, we had the Plum Book. It listed all the cool jobs you could get in the administration. At the same time, there was another book called the Prune Book. His job was in there. He was the Assistant Secretary of Defense for Research and Engineering, which oversees a huge research budget within DoD and requires a lot of talent. So, Zach?

Mike, thanks. You know, it's been a while since I heard about the Plum Book and the Prune Book, in fact. That was the case, and I do recall it. First of all, I wanna thank the Carnegie Endowment for hosting this and for inviting us. I mean, you guys do remarkable work, and this is a topic that needs lots of discussion, and it's not just an engineering discussion. In fact, it's exactly not just an engineering discussion. My background is at the intersection of advanced technology and national security. My entire career has been developing technologies that have an impact on our nation and on our partner nations, NATO and the EU partners. And AI, artificial intelligence, is probably the most impactful technology that will have benefit across that space. We'll talk more about that through the afternoon. I just wanna make a couple of points and then be a little bit provocative so we can get on to some good questions. At IBM, we think of AI as more augmented intelligence than artificial intelligence. I've been on many studies where we start with the role of autonomy: that there's gonna be some agent that comes to a decision, maybe it's a risk-based decision, but it does it through some black magic. Maybe not so much, but maybe so. The systems that we're building, at least within IBM and other companies, we think of as partnered systems. So the user actually has a vote. The user has a vote in transparency and in trust, in data integrity and where that data resides. And that's an important issue; it's a policy issue, but it's also a technical issue.
So we think of AI in those coordinates. We also think, for us, it's absolutely essential and foundational that the data used to train these systems are owned by the user of those systems. And that's also a policy issue, but it's a technical issue as to how you manifest that. How do you manage, how do you think about, derivative insight? Say you and I apply data to an algorithm that maybe Lynne develops and Pekka uses, and it develops an insight or recommendation. Who owns that recommendation? Who has the right to that recommendation? That is, who has the revenue associated with the value proposition for that recommendation? It's a hard question. It's a technical question, but it's also a policy question. And then lastly, and I'm sure this will come up through the afternoon, we at IBM think foundationally about trust and transparency in the systems that we develop and deploy, in the way we develop and deploy them with our clients, and in the way they live through their life cycle. And that requires clear thinking about the data strategy, clear thinking about obsolescence, clarity about what a red team looks like. That is, who can attack these systems? How might they be obfuscated or attacked? And what might the third move look like? The second move is kind of easy. The third, fourth, and fifth moves are a little bit harder. So that's a perspective that I would bring. I would leave the intro with three points. First, it's pretty clear that AI, whether you call it augmented intelligence or artificial intelligence, is moving rapidly into the mainstream. Companies are adopting it. Enterprises are adopting it for real value, not just for curiosity. In fact, everyone in this room is experiencing this in some fashion. And that's happening at a remarkable pace. Second, to my earlier point, trusted and assured AI is absolutely foundational; no one's gonna argue with that.
And the third point: this intersection between industry, government, and academia that we've used to drive all sorts of technologies prior to this will continue to drive technology. In fact, that intersection of industry, government, and academia was the inception of OSTP. It's absolutely essential in this domain, probably more so than in any other domain I've ever seen. And this is a space that isn't just about people pushing bits. It's about people that understand the ethics, and understand how this will be used, how value will be created, and how it will be monetized. So it's a remarkable space. And I'm delighted to be here. We're gonna have a great session.

So let me follow up. You mentioned a couple of specific things. If you were advising the president like Lynne is, or you were advising the Democratic candidates, and you wanted to suggest to them some kind of moonshot for AI, something that would apply artificial intelligence to one of the big problems that government is facing, what could you imagine that being? What areas? Or, if you like, the military analogy would be radar. We developed radar during World War II in a very intense effort. There were a lot of universities that were part of that. Is there something that could really energize people, so that rather than just this vague uneasiness about AI or a focus on consumer applications, we actually thought about something big that could change the world?

Yeah, so Mike, that's an interesting question. So I have a thought. It's not the thought, it's a thought. I would think of where AI could have an impact on unlocking the utilization of healthcare records to help patients, or where it might help financial institutions work in regions where there are large numbers of unbanked individuals, think Africa, or where you could build a moonshot around AI to reduce the shipping cost and shipping time of goods that require lots of coordination.
Think of the value chains involved in all three of those, and the industries involved in each of those. The idea of a moonshot is interesting. DARPA does this often. One example, and I'll leave it to the other panelists as well: the example at DARPA, when I was there at the time, was the first of three robotic challenges, and this was 15 years ago, with 35 teams. Carnegie Mellon, by the way, was one of the teams. Stanford was too, as was MIT. And the first of those events didn't go so well. The best vehicle went seven miles out of 140 miles. But it taught people that this domain was gonna be interesting. And it unleashed a generation, it really was a generation, of students and academic centers to make investments in this robotics area. In fact, Carnegie Mellon University stood up a major robotics initiative in that space, as did MIT and Georgia Tech and others. But it showed people that there was an unsolved problem, and that if you could just figure out how to address it, there'd be a big market. Now, 15, 20 years later, we're starting to see it. We see it in cars that you can buy every day. A lot of the concepts that we developed there took some time, but that was a seed. It was a seed that opened up a new frontier. And I think the same could be done here in health. I think the same could be done in a few other industries.

Well, everyone says the moonshot had an enormous number of spin-offs, from Tang to Teflon. It's the spin-offs that you're betting on. So Lynne, what's your moonshot?

You know, the idea of a single moonshot is a little tough in AI because, for the same reasons, it touches almost everything. And so you could pick a sector, and you could probably come up with a moonshot for every sector that you can think of. I completely agree that healthcare is an area that has enormous potential for helping us better understand disease and treatment options, and also to personalize those treatment options. I think that has huge impact.
If you think about education, also in the personalized domain, AI has a great opportunity to train people individually according to their interests and the directions that they wanna go in, their vision for the kinds of jobs that they would like to have. And an AI system can help personalize that training, giving the right lessons and developing the right skills. And I think that personalized side of things is not so easy today, because AI today works well in kind of the averages. It helps detect common patterns, but not so much the tails. And if you look at individuals, no individual is typically average. Everyone has some uniquenesses. And so I think the next wave of AI, not today's wave, but the next wave that DARPA's investing in, a kind of smarter AI that can deal with more complex situations, is opening up the possibilities for doing more on that personalized side, which I think can revolutionize the world for the individual as opposed to the collective. And then you also think about precision agriculture, for instance: the ability of AI, tied to local weather predictions, to be able to say, this is what the weather's gonna be like tomorrow on your farm, and therefore you need to take these actions in order to maximize the yield of your crop. So a lot of these areas have a common theme in my mind of sort of personalizing. We can't do it today, but we're headed in that direction.

Very good. So Pekka, if you're advising the people with the money in Brussels, who fund all the research, what exciting, high-profile, high-impact moonshot would you suggest that they embrace?

I would be pretty much aligned with what my colleagues here on the panel have said, but to add something else to the list of moonshots, I would say that there are a number of things, of a different kind, which have to do with the environment.
So environmental elements, climate change elements, would touch multiple sectors, and that would be really a moonshot, where AI could actually create an understanding of correlations and find patterns which are so complex that we just don't understand them today. That would be my hope. The second moonshot has to do with trust and trustworthiness. The common theme, the foundation for the ethical guidelines and thinking in the EU, is that AI is a means to an end; it is not an end in itself. And it should advance the well-being of citizens by maximizing the benefits while minimizing the risks. And I think that in so many areas the maximizing of the benefits is already moving, and there are moonshots there, but I would like to go for the minimizing-the-risks part and actually add certain elements on the security side: so that, be it a private person, be it a company, or be it a government, we could create tools which head off the potential malicious use of the information and the data, which is going to explode, while the different backdoors which are open are going to increase. So, finding a solution for that.

Are we allowed to be a little bit provocative?

We encourage that.

Okay, it's my style. Lynne, back to your point of personalization. The three of us could probably answer that question, the question that you posed, given the experience that we have, what we've read, the people that we know, the gaps that we see, our imagination of what's next. What if we had a system that could do that? What if we had a system that could in fact debate with a user: these are the pros and cons of what's on the horizon; here's what I've read in the 100,000 journals that I've consumed this morning; and by the way, here's the logical sequence of how you might answer the question. So the value is in the construct of the answer, not in the answer itself; you'll get to the answer.
But what if we had a system that could actually dialogue with you in such a way? We're building such a system. If you've looked online, and if you haven't, take a look at a project called IBM Project Debater. It's fascinating, absolutely fascinating. The work is centered in our Haifa Research Lab, but of course we've got teams all over the world working on this. And it's a system, and this isn't just an IBM thing, but it's an example of where AI is headed. So the IBM Jeopardy! event was to answer a question in the form of a question, which is a language all unto itself. If you watched last week, my wife and I loved the grand championship, you know, the Greatest of All Time event, the GOAT. Absolutely fascinating. Not quite as good as Tom Brady, but pretty good. So building a system that can act at human scale, at human complexity, and compete with a human and win on a closed-form answer is a hard problem, but it's solvable. This other area of debating is very different. You're posing arguments that don't have closed-form solutions. You're trying to understand the opponent's appetite for risk, and you're trying to build a dialogue that's correct, convincing, and compelling. And we and a few others have started down the journey of building a system that could maybe do that. To me, that's the personalization question. Having a system that could interact with you in a way that says, you know, Lynne, you've got to do this thing at Carnegie, but there are some things you didn't quite read that maybe you might want to look at. It's kind of like Radar O'Reilly of M*A*S*H, right? You forgot to ask the question, but here it is. It seems to me that's where a lot of these systems will go. And the ability to trust those systems, which will be measured by who actually uses them and what the risk-based consequence is, is absolutely essential.
I think you've been too provocative, because I think what I just heard is that you recommended that we have an artificial intelligence system that replaces the three of us, the four of us.

No, no, no, no.

And determines what the moonshot idea should be. But it does drive to an interesting question. I mean, in order to do that right, you'd really have to understand what the citizens want and what would be exciting for them and what would serve the most people. And to do that, you need to have really good, reliable information.

You have to understand the structure of the argument.

Yeah. You have to understand the culture, the environment, the perspective that people come in with, what's missing, what's incorrect. All data is ambiguous, conflicting, in some cases missing; you actually have to understand that.

I call it BFFMUDD. We used to talk about big data. I hate that term. First off, it sounds like Big Oil or big government. But more importantly, we've had big data for 30 or 40 years. We haven't had BFFMUDD. This is a term I used in something I wrote for Bloomberg a few years back, and if you look on Twitter, you can actually find BFFMUDD, it's a hashtag: big, fast, fat, messy, unreliable, distributed data. And that's what we really need to deal with. That's what's different. Information in all these different places, and a lot of it unstructured and unreliable. So my last question for the three of you is, how can the US and the EU collaborate to make sure we can get as much of this BFFMUDD as possible and develop the techniques needed to make sense of it? Because I think AI policy is a misnomer. Data policy, BFFMUDD policy, that's where we should be focused. Any thoughts on where we need to think twice about data policy, particularly with regard to sharing government data sets?

What checks are there in the beginning on what this derivative data, this kind of transfer learning, will be? I don't disagree with you on that. That is crucial.
That's one set of data. But there's the small-data part, particularly on the B2B side, which might be crucial. And there are also no-data solutions, where we need to understand where the data is actually generated and used. So there are multiple ways. That is crucial for many, many applications, and we need to get that going. But it has to do with the rights, the value, the IPRs, and the fact that many times the most valued data is coming from the learning, the first derivative, the second derivative, the third derivative. And, and I agree this is more a business-side than a regulation-side question, how do you make sure that people will understand, and that there are technical tools for ensuring, that there is a kind of fair share of data being shared and used, so you can do the division of labor, or have the Chinese walls where need be, and still use the same data? So in one way, I would think that the whole paradigm of thinking about what the data is, and what kinds of qualities and structures it has, would be great to understand, and then we could find a foundation with entities, the US and Europe, which have the same set of values, to create big markets where we could really take advantage of that idea of yours.

So then how do we facilitate sharing and building?

I think there are multiple threads here, and one thread that you were suggesting is very much the algorithmic thread of being able to create AI systems where it's not just about detecting patterns in data and then kind of spitting things out. It's getting to a level of understanding that we don't have in current systems today. And so you can have all the data in the world, and the system could spit out a match to a face, but it has no clue whatsoever what it's doing. It has no concept of facial recognition.
It's just simply matching patterns in data. Getting to the point of actually understanding what the data is saying, being able to reason through somebody's thought process and say, you're missing these key arguments, is a very different kind of AI algorithm that may need a lot of data, but it may not. And also, getting to Pekka's point about many AI systems: we wanna do research on the kinds of AI systems that don't need as much data, because that is more human-like. Humans can learn very quickly from small numbers of examples. And so we may get around a lot of these concerns about data if we're able to develop new AI techniques that don't require as much data. So I think here and now our AI systems use a lot of data, and we need a lot of data. We need to find ways of sort of democratizing the field so that people can engage with data and the kinds of AI techniques that require a lot of data. But I think we don't wanna get so fixated on that that we forget the fact that we also need to invest in other kinds of AI that can actually get around this need for lots of data.

That does sound like a step beyond artificial intelligence, artificial wisdom: software that knows what it doesn't know, and knows what it needs to learn to know.

That's deep, that's deep. I'd build on the data volume question, because right now that is a bit of a paralysis. Our systems globally require enormous amounts of data to move into a domain that's adjacent. To move into a domain that's two steps away is even harder. To give you an example, I've never been in this conference room before, but I know immediately that it's a conference room, an auditorium. I don't image this room in pixel space. I know it's a room, right? If we had a computer vision system here, it would take an image of every pixel. It would overlap red, green, blue, RGB, and it would formulate a match for what this room is compared to every other room that it saw.
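The brute-force matching just described amounts to a nearest-neighbor lookup in raw pixel space. A minimal sketch of the idea, purely illustrative and not any real IBM system, might look like this:

```python
import numpy as np

def label_by_pixel_match(image, gallery):
    """Label an image by its nearest neighbor in raw RGB pixel space.

    gallery: list of (label, reference_array) pairs, each reference
    having the same shape as `image`. This is the naive approach the
    speaker describes: no concept of "room," just pixel distances.
    """
    best_label, best_dist = None, float("inf")
    for label, ref in gallery:
        # Euclidean distance over every pixel and color channel
        dist = np.linalg.norm(image.astype(float) - ref.astype(float))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Toy example: two tiny 2x2 RGB "scenes" in the gallery
room = np.full((2, 2, 3), 200, dtype=np.uint8)   # uniformly bright scene
cave = np.full((2, 2, 3), 10, dtype=np.uint8)    # uniformly dark scene
gallery = [("conference room", room), ("cave", cave)]

query = np.full((2, 2, 3), 180, dtype=np.uint8)  # new bright scene
print(label_by_pixel_match(query, gallery))      # -> conference room
```

The sketch also makes the speaker's scaling complaint concrete: distance in pixel space only works if the gallery already contains a near-duplicate of every viewpoint and lighting condition, which is exactly the data appetite he is objecting to.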
I've never seen that bottle of water from this angle and in this lighting, but I know it's a bottle of water. So there are things that we do biologically that simplify our computational view of the world. Maybe it's biological, maybe it's genetic; I don't know, and people don't know, we're studying it. But to date, the computational systems that are used to make the simplest of calculations, the simplest of assertions and inferences, require an enormous amount of data, and that simply doesn't scale. So we and others are taking a close look at new types of computing architectures, this whole domain of AI accelerators, right? There's a whole body of work looking at how we optimize computational efficiency for the types of workloads that AI requires. These are things like graph analytics, natural language processing, machine learning: very specific workloads that look quite different from transactional workloads, and those computational fabrics will be quite different. So I absolutely agree. I mean, there's a policy question, which is where you started, and we diverted, which is okay, because we have the ability to divert to the questions that the audience really wants to hear. There's the question of how you technically achieve it, and then the related question: what are the right policy wrappers, and how do you put those together? I don't know how to answer the second; we have some thoughts on the first. Well, now it's time for you to ask questions. We have a microphone back here. Please, please, please give us your name and your affiliation, and then give us a question. We're gonna try to get lots of questions in, so we don't have time for small speeches. So right here in the very front row. Dan Lieberman. If you merge artificial intelligence with robotics, and you establish a factory where the AI does all the administrative, maintenance, and even operational work, which is certainly possible.
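The pixel-space matching the panelist describes can be made concrete with a toy sketch (hypothetical pixel values, pure Python, not any specific vision system): when an image is compared raw channel-by-channel, the same scene under slightly different lighting looks far away in pixel space, which is one reason such systems need so many training examples.

```python
# Toy illustration (hypothetical data): naive pixel-space matching.
# An "image" here is just a flat list of (R, G, B) tuples.

room = [(120, 110, 100), (122, 111, 99), (119, 108, 101), (121, 112, 98)]

def pixel_distance(a, b):
    """Sum of absolute per-channel differences between two images."""
    return sum(abs(ca - cb)
               for pa, pb in zip(a, b)
               for ca, cb in zip(pa, pb))

# The same room under brighter lighting: every channel shifted up by 40.
brighter_room = [tuple(min(c + 40, 255) for c in px) for px in room]

print(pixel_distance(room, room))           # identical views match: 0
print(pixel_distance(room, brighter_room))  # same room, but distance 480
```

A matcher built this way has to see the room under many lightings and angles before it generalizes, whereas a human (or a more semantic representation) abstracts the lighting away immediately; that gap is the data-volume "paralysis" being described.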
You have a factory with huge production and almost no workers. You send that throughout the economy. You're pouring out a lot of goods, but you have no one to buy the goods. You've got a capitalist system which is not generating any capital. So in the end, will AI lead to a more social type of system? Who wants to take that? So I would argue that throughout the modern industrial age, we have continued to develop new kinds of technologies that change the nature of work. AI is doing that, robotics is doing that, but they don't necessarily happen overnight, and we can in many cases see what's happening and notice the importance of education and workforce development so that we can adapt to the development. I'm a roboticist myself, and I know how extraordinarily difficult it is to build robots that have any kind of common-sense reasoning, that can do much more than the same things over and over. So it works on many factory floors because they're very predictable. I think it's going to be extraordinarily difficult to have these kinds of technologies take over more of the creative side, more of the kinds of applications that have a lot of uncertainty and require some reasoning and common sense. Extraordinarily difficult to do that anytime soon. But nevertheless, there are certainly types of jobs that are going to be impacted, and so that's why the education and workforce piece is so critical: that we provide people with the skills they need in order to engage in the jobs of the future. The administration has been pushing the National Council for the American Worker and the Pledge to America's Workers to help with the reskilling of people who are already in the workforce but want to gain the skills that can allow them to engage in these new jobs that are open. It is a collaboration with industry. Industry knows the kinds of jobs that are open.
They know the kinds of skills that they need, and so they can provide opportunities for people to engage in learning those new skills. So again, we recognize that changes will always be happening to the nature of work, but providing people the opportunity to continue to engage through education is key, and I'm not one who believes that technology will be able to do every job in the future. Ironically, it may not be the jobs on the factory floor that are replaced first. It could actually be a lot of routine coding jobs that are replaced by AI. It is a major change. It may come so that we don't see the change in the coming years, and then it accelerates. So short-term impacts are underestimated; longer-term impacts are bigger than we think today. But in many areas, the work which will be completely replaced by AI or robotics is repetitive by its nature, and if we believe what the studies of the different houses of big brains and thinkers are forecasting, probably the bigger issue is that many tasks in our current jobs will be impacted by AI and robotics. So the augmented part will be the bigger portion we need to get a handle on, and that means we need to educate, train, and continuously retrain the workforce to understand how they can take best advantage of AI and robotics and related technologies. In one way, in the history of economics, there is a certain term for the period when a new technology is coming and you don't see a positive impact on the employment side. I forgot the name; there's one particular name for that. We saw it with steam. We have seen it with electricity, and probably we will see it with AI as well. But when you look over that period, there's a great creation of jobs, with a delay, and that we also need to keep in mind. Thank you very much for a very hard question to get things going. Right here, please.
Andrea Kendall-Taylor from the Center for a New American Security. We've talked, I think, a lot about the benefits of AI in healthcare and individualization and things like that, but there are obviously a lot of new challenges that AI is creating, particularly stemming from authoritarian powers, revisionist powers: things like deepfakes and micro-targeting and surveillance equipment, facial recognition, and all of those things. So as you think about what a transatlantic strategy looks like, building on the shared values between the countries, what are some of the things that you would put at the top of the agenda? What are the biggest challenges that you think the United States and Europe need to be focused on to mitigate some of the challenges or risks that are also stemming from these new changes? So from my perspective, I think it's important for those of us in the Western world to recognize that we are in kind of a competition now in terms of how AI is being used. In the Western world, we have values as a society; we don't want to become, say, a surveillance state, for instance. And so one way to approach that is thinking about regulatory approaches. I do think it's very important for the US and the EU to come together and figure out what that right balance is, so that we don't go too far to the extreme of over-regulating AI to the point we can't use it, because then AI developed in other parts of the world that don't have the same constraints will take over; those companies will take over, and then we have no choice but to use AI from companies that are supporting authoritarian regimes. So I think that's an important area for us to come together. Certainly, as Pekka mentioned, we've spent some time this morning also discussing some of these issues, and we recognize that we're very close. And I think it kind of gets into some of the implementation details in terms of what some of those differences are.
I do think that by joining forces, we can ensure that our companies in Europe and the US and other free parts of the world can be strong and can continue to compete in these areas. Some of the areas that perhaps we can look at exploring are areas like technical standards. If we want to make sure that AI reflects the kinds of values that we have in our societies, well, how do we measure that? If we want to, say, have a technical standard for non-discrimination, well, how can we measure that a system is not discriminating? If we want to have a technical standard that says certain AI systems are safe, well, what does it mean to be safe, and how do we measure that? So a lot of the work of NIST here in the US, and I'm sure there's similar work in other parts of the world, is helping to contribute to the R&D and the understanding of how we make these measurements, which allow us to make tangible these desires that come from our values. It turns values into something that's measurable and implementable. I think that's certainly an area where we have a lot of common ideas. I think being able to test and explore, maybe through regulatory sandboxes, is a way that we can also share approaches internationally, so that when we have certain technologies where we're not sure what's the right way to use them, we can explore them in a safe space rather than just banning them outright, and we can learn from them and learn how to take good advantage of them. So those are some thoughts. Exactly. Let me add to that a little bit. Fascinating question. I think the standards and the regulatory approach are something where there's got to be a cooperative effort to embark on together. But let me put an example out that might frame your question a little bit differently, or maybe add to it.
As I look at NATO as a region, and I've had similar discussions on similar panels at NATO in Brussels, one of the challenges that NATO faces is the security uncertainty of its perimeter. A second challenge, of course, is the resources to address that completely. Now, the U.S. Department of Defense, and this administration: we've got a remarkably robust Department of Defense, and of course we spend for it, and it's the most powerful force in the world. And we're, of course, part of NATO and we support NATO, and that tension is still there. So the question is, how do NATO and the EU countries protect that region with fewer personnel? And how do they think about a part of the world that's remarkably dynamic, where the next two moves are not well known? By the way, you could have the same discussion with regard to the Asia-Pacific region. You might have the same discussion with regard to the Middle East. We're in a national security environment today that looks very different from what existed 20 years ago. People today call it the gray zone. In fact, your organization has written a lot about it, and others have written a lot about this. We're not in a region where, with a few exceptions, it's a kinetic-to-kinetic engagement. We're in a part of the world today, a part of history today, where most of this tension is moving up to the brink of engagement but not quite crossing it. So you've got to think about game theoretics. What's my opponent willing to do? And how much risk are they willing to put on the table? That requires a new set of tools, and it requires a new way of thinking about force structure, global security, and how we employ all those tools. And it seems to me, if I look at NATO to the north and NATO to the east, I'd be pretty concerned about that as an opportunity, because it's not a force-on-force engagement. It's an economic, political, and other forms of engagement. Okay, any thoughts on the military and national security consequences?
That's a tough one. I go back to the discussion and Lynne's thought, and just add to that. It starts from the same values. Actually, when you think about what the areas are where we can cooperate, you will have an abundance of fields where it makes absolute sense. And the question is what the order and sequence is, how you will tackle those. In one way, it will be a layered approach, and what might be crucial now is that we do the work of defining the terms in a way that we don't misunderstand or misinterpret things. That we get the terms right, so that the discussion happens through a very clear channel, with the same vocabulary, with the same definitions, and that will create a very effective way of engaging in dialogue. And we have it: it's human-centric, it's fundamental rights, democracy, freedom of speech, which we used to say are simple things and kind of granted, but they are not granted. And therefore you need to start from there and make sure that they are taken care of. Then, when it comes to the other areas, we make sure that the innovation, the broader entities which can come together, ensure that the market, and the way the innovation and the pre-competitive research are done, are competitive and can create products which will then be fantastic products to be used by consumers or businesses. And then the trust element that Lynne covered: those are all elements of the same way of seeing how ethics and trust are done. There are a lot of things we could start with. I think we have time for one more question. Oh my goodness. My name is Minky Song, and I'm an intern with the New America Cybersecurity Initiative. What if IBM succeeds in developing logical, trustworthy, ethical AI? I think that would be an achievement of singularity. Then what will be the value of an emotional, untrustworthy, and unethical human like me? Wow. I thought the first question was hard. So let me give an example.
Everyone in this room, I am certain, has at least one (where's mine?), has at least one cell phone. You absolutely trust it. There are probably three people in this room who understand how this works. You don't even think about the applications. You don't even think about the data. You don't do much review of what you're going to release. You just click and go. When technology becomes mature and is adopted, the technology is invisible. You don't think about it. You just use it. This is a case where, in my view, that will indeed happen, but it's not gonna happen by magic. We've gotta take some time to think carefully about what the policy, regulatory, and ethics issues are. You gotta think about what the values are. You gotta think about what the risks and consequences are of a recommender system providing a voice into a conversation that can have high consequences. And it's a new domain. It really is. And by the way, it's not just a group of circuit designers and system developers that are building these. You know, we bring, and others bring, many disciplines into this field to replicate the domain that that system's gonna be used in: in the training corpus, in the values, in the context. It's quite different. So I think, in answer to your question, as you start seeing these systems evolve into your personal life or into your business life, the technology will be hidden. It really will. What will be present is the value of that technology and the impact it provides. One last example. I have one type of thermostat at home. It's a Nest. I don't think much about the algorithm; I know how the algorithm works. But I think a lot about the cybersecurity of that system, right? I really do care about the cybersecurity element of that, and we've done some work on it at home. The technology piece of this, I think, as exciting as it is to us as technologists, is part of a bigger story.
And I think that's what you were trying to encourage in the dialogue this afternoon. And I'd add as well, you know, the crux of your question is: what's the value of the human being if we have AI that's somehow perfectly trustworthy? At least that's the way I heard it. And certainly, AI is a tool. In my mind, it's not a higher life form that we're developing that is somehow going to take over the meaning of life for people. It's a tool to help humans live a better quality of life, have better security, have meaningful careers. And so I think we always want to keep that front and center: this is a tool to help us. It's not a tool or a higher life form to replace us. When I was in grad school a long time ago (and I'm a roboticist), one of the early questions was: what are the ethics of unplugging a robot? With the idea that somehow you're building something that deserves to be plugged in and have a life. And I've never been of that mindset. This technology is about humans having better lives, and helping humans. It's not an end in itself. So that's a perfect lead-in to my final lightning round. I always like to ask my panelists what their concluding tweet would be. That's a great tweet: AI is a tool, not a replacement. So I challenge the other two panelists to come up with a tweet. And before I do that, I want to thank our friends at the European Union delegation for helping pull this together, introducing us to Pekka, and helping fund the refreshments. And also, more importantly, the people in webcast land, for helping make sure that we could do a good job of getting this to the world. So, a concluding tweet: 280 characters. Exactly, yeah. Or less. Yeah, actually, I met Lynne for the first time today, and I'm surprised, and not surprised; I'm very delighted by how many matters we share the same view on. And that would have been my tweet. So it is: AI is there for advancing the well-being of humans.
And that is done by maximizing the benefits and minimizing the risks. And that's where we have probably a much bigger opportunity than we have realized to create common ground and markets and ways of working to ensure that that happens, based on the values that we have in the US and in Europe. Exactly. Over to you. And so I'd say, I don't tweet, so I don't know; not a lot to go on. I'd say: remarkable progress, but work ahead. And the work ahead isn't just on the technical front. And that's exciting. Everyone in this room, everyone in the audience, should be part of that discussion. I think that was exactly 280 characters. Thank you very much. And thank you for the great questions. Thank you. Great job. Glen, a pleasure.