Welcome to the ITU studio in Geneva. I'm very pleased to be joined in the studio today by Anastasia Lauterbach, who is the author of The Artificial Intelligence Imperative, an international technology strategist, advisor and entrepreneur, and also a non-executive director of three major companies. Welcome to the studio. Thank you so much for coming here. Now, I'd like to start off by talking a little bit about the transforming power of AI in today's society and how this impacts policymakers and regulators.

So, artificial intelligence is one of the most powerful technologies on this planet, so it's not surprising that technical luminaries such as Andrew Ng compare it to electricity. Andrew has been with AI for many, many years. He co-founded Coursera, where you can learn about AI online, and he has just created a fund to invest in artificial intelligence, quite a powerful fund for machine learning startups. He's not very concerned about AI being a force for evil. He compares this to saying, I'm not concerned about overpopulation on Mars, so I'm not concerned about the Terminator coming and destroying all of us. But the point is that artificial intelligence is something tremendously practical. It's getting embedded everywhere. It doesn't matter whether you are in the medical profession, in manufacturing or maybe even in agriculture, you will have to deal with artificial intelligence applications. So, if the Internet has already disrupted approximately 20% of the global economy via e-commerce and frontline, customer- and consumer-facing applications, AI will transform the rest. That means that 80% of the economy is getting transformed by artificial intelligence as we speak. Right now, AI is still not very, very intelligent. Some scientists compare its intelligence to that of a four-year-old. Others are more optimistic and talk about a seven-year-old.
But, for example, Yann LeCun, who leads AI at Facebook, believes that AI has the intelligence of a rat. And still, it disrupts so many businesses. Among the top 10 most valuable companies on this planet, there are five brands which are all about artificial intelligence. And those are the usual suspects: Alphabet, together with Google of course, Apple, Microsoft, Amazon and Facebook. Those companies are so-called full-stack AI companies. What does that mean? They invest in their own semiconductors. They don't go to the Intels or Qualcomms of this world and buy from them; they invest to design their own, to control the experience. They control their own cloud. And, of course, they have vast amounts of resources and a huge developer community and ecosystem to go after applications and services, and they combine R&D with real-time applications and services. Most scientists who are very famous in machine learning and deep learning actually work at those companies and simultaneously at universities, and they bring fresh talent to those companies. So, human resources, human capital, is everything in this world of the future machines. It's very important who you are getting. And it's not surprising that right now many people are thinking about diversity in artificial intelligence research, development and practical applications. The famous Fei-Fei Li, who is a scientist with Google and with Stanford, is a co-founder of a network called AI4ALL. Together with Melinda Gates, they want to attract girls of the age of 14, 15, 16 to the world of deep learning and machine learning. They are attracting those schoolgirls to something which might one day become the next generation of AI practitioners. And that's tremendously important, because AI has the capability to scale everything we are about as humans.
And so, if you have a team of, let's say, only white male developers, or only Chinese male developers, then you will get a data set and algorithms which are wired according to the preferences, habits and thinking processes of those groups. Those data sets and algorithms will miss a huge chunk of the world, and we will get an automation which does not correspond to the whole world. And that might be very dangerous, especially in, for example, financial services or education: who is deciding whether your kid will get admission to a college, or whether you will get admission to a certain social security program? So inequality might emerge just because those teams of developers are not very diverse and they are biased.

Then there is an issue of artificial intelligence which is quite close to what I do in my own work, and this is adversarial attacks on machine learning applications and algorithms. Unfortunately, like every technology, being neutral, AI is getting into the hands of criminals, and this is something regulators really need to be smart about, think about and observe. Right now, you can, for example, hack into a real-time data stream and change the perception of a sensor, so that the sensor sees a stone instead of an animal like a raccoon, or a stone instead of a kid. This is highly dangerous. If you have voice authentication systems, with the power of machine learning you can now fake voices, and you can fake people's irises, to mislead systems, sensors, machines and even people, for example if you are on the phone receiving a call from a robot but you think it's a real person. So there's a huge new world of issues coming at us, and the question is how we mitigate the risks and how we get better.
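The sensor attack described here exploits a general property of learned models: a small, deliberately chosen change to the input can flip the prediction. A minimal sketch of that idea, using a toy linear "sensor" classifier in plain Python (the weights, readings and labels are invented purely for illustration, not taken from any real sensor system):

```python
# Toy "sensor" classifier: score > 0 means "animal", otherwise "stone".
# The weights below are illustrative, not from any real perception model.
w = [0.9, -0.4, 0.3]
b = -0.1

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def predict(x):
    return "animal" if score(x) > 0 else "stone"

def sign(v):
    return (v > 0) - (v < 0)

x = [0.8, 0.2, 0.5]                      # a reading classified as "animal"

# Fast-gradient-style attack: push each feature against the decision,
# in the direction -sign(w), scaled by a small epsilon.
epsilon = 0.5
x_adv = [xi - epsilon * sign(wi) for xi, wi in zip(x, w)]

print(predict(x))      # animal
print(predict(x_adv))  # stone (the same object, slightly perturbed)
```

In deployed deep learning systems the same trick works with far smaller perturbations, because the attacker follows the model's gradient rather than guessing; that is what makes tampering with a real-time data stream so dangerous.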
Last but not least, there is one thing where I believe that regulators need to exercise regulatory humility, and this is around the fact that there is nothing, let's say, intelligent about current machine learning systems. Everything happens by design, and who is in charge of the design? In principle, humans. But there are systems built on deep learning technologies which might be perceived as black-box systems, because human engineers can't reverse engineer them to explain every single step of why a system came to one conclusion or another. What happens then is that regulators sometimes say, well, if that's a black box, then we can't control it, so we just can't allow such systems to be used. But AI is nothing static; there are changes on a weekly basis. For example, NVIDIA, the semiconductor giant which established GPU technology for AI processing, developed a technique which colors certain parts of a data set and of the algorithm's workings, to make it more recognizable to the human eye which pieces of, let's say, the data or code were predominant in making one decision or another. This is, of course, not true reverse engineering, not 100%, but it's a path, and we are on that path. And something which is very close to my heart is that we need to invest in fundamental research to be better at mathematics, which is the base of artificial intelligence. Let's not forget that Google is actually mathematics from the 1930s, and AI is mathematics from the 1940s and 1950s. So things like swarm algorithms and swarm data analysis might lead us to a path where we understand how this black box works, or not. There might be other paths, but that's something which is tremendously important. So, for me, artificial intelligence, in a nutshell, is a force.
Right now it's getting used by the most valuable companies in the world, but it gets embedded into all kinds of businesses and into our society, and of course it has an impact on everything we do, be it education, healthcare, social care or defense. So this is something we are going to live with, and I'm not a huge believer that we will have another AI winter, because we have so many resources and so many people and companies interested in this technology that it will never stop.

Now, that's all fascinating stuff. Over the last couple of days we've had regulators sitting in that chair quaking. They're quite nervous about the future, because they can't really get up to speed and they don't know whether they're ever going to be able to. So I just wanted to find out, from your perspective, what's the answer to this? Do they have to have technologists sitting side by side with them? Are they in a position, basically, to regulate for such a future as we have ahead of us?

Look, I think that those regulators are not alone in being confused and maybe even afraid about what might come and about how to do their job. Corporate bosses are in the same shoes, even if they run multi-billion-dollar companies, and corporate boards are in the same shoes themselves. I quite often observe that people who are so skilled in deciding about mergers and compliance rules and governance don't understand technology, and this is because governance rules were defined as backward-looking rules: something from the past, which was the experience, got more or less projected into the future, and then people said, OK, it happened in the past, we will not allow it to happen again, in government but also in the company, in the corporation. So, having technologists helps. However, you need translators, translating from English to English, from German to German, from French to French: people who can translate technology.
No technologist is a Swiss Army knife, so you need multiple faces around the table to argue through each piece of the technology stack. It's good if someone is, let's say, the most strategic and can combine everything from the semiconductor up to the application layer and say, look, this might impact this group of the population, or that group, or maybe this set of companies. That's very important, but ultimately I believe in teamwork and in an interdisciplinary approach. If you read the AI scientists, you might see that many of them are talking about philosophers, cognitive scientists and neurobiologists who will contribute to the nascent world of AI. Of course, we are already 60 years into this world, but it is still very, very young, and we need the input of multiple groups in society, in industry and in regulatory work to contribute to the world we actually want to have, instead of just complaining and waiting for whatever might happen. If we just lean back, bad things might happen because of the lack of diversity, because of bias within certain companies or even certain countries, and what is good for one country will not necessarily be great for another. So we need platforms for exchange, and I ultimately believe that we need to bring knowledge of artificial intelligence to all groups of society. If a regulator is living somewhere in, let's say, Geneva, he's not just a regulator; he's a member of the community, a father, a husband, a brother.
So maybe it's quite good to reach out to the local school and ask, what do you do around artificial intelligence? And if the answer is nothing, then maybe order some books from Amazon.com, provide those books to the teachers, and have some open discussions: what is this Life 3.0 of Max Tegmark about? Why are physicists and quantum physicists discussing the world of artificial intelligence? How is it that cyber criminals can now attack multiple companies at once? They are not doing this with human force; they are doing this with machine learning algorithms. So our world is interconnected not just because of the connectivity of all this 4G, and now 5G is coming; our world is interconnected because everything has to do with everything, and there are huge interdependencies. I think it's very valuable to learn from other disciplines. For example, in biology, in genomic research, you listen a lot to scientists talking about CRISPR and genetic engineering, and this is something which is huge for the whole of humanity, just as AI is huge for the whole of humanity, and they are dealing with perhaps the same set of issues. If we do CRISPR, what does it mean for our decision making? If, for example, we are parents and we know that our child, who is unborn right now, will be disabled, or we just want this child to be, maybe, Black but have blue eyes? In theory everything is possible, but what should be allowed? Should this decision be made by a single person, or should the community, society, have a say in developing the set of rules? How do they think about this stuff?
And this thinking might be transferred into AI thinking, because once again everything has to do with everything, and our planet is just too small to live in silos on islands: one island for regulatory work for telecom operators, another island for regulatory work for financial services companies. Everything is interrelated, and technology is just a common tool for our society and our humanity to live with. And unfortunately, in this world there will be some companies and some people who will prosper, and some companies and some people who will lose, and for those we need decisions too. Many of my friends in Silicon Valley believe in universal basic income. We are here in Geneva, in Switzerland; you had a referendum on universal basic income a couple of years ago, with a negative result, but the discussion is still to be continued. Maybe companies need something like a universal basic income, whatever the term is, for their employees, if they can't be retrained for what is coming. I don't know, I don't have all the answers, but I'm just giving certain insights and impulses, and pleading for more robust connectivity in our ecosystem, which combines so many industries and so many technologies.

So do you think that this symposium is a good platform for airing these thoughts and sharing them with regulators, as well as all the other people, the industry and so on, who have gathered here?
I think that this group can grow. It's already a very fine, diverse group, looking at the faces, all kinds of skin colors. We might get more diverse in terms of age; we should accept the input of people who are, let's say, below 25 or even above 65, because those are members of our society too, and the technology impacts their lives too. So I believe that as a platform this is a fantastic place, and that it has to evolve while incorporating new nuances of what is going on in the world. Certain themes are always the same: how to fight inequality, what kind of choices individual states will make to actually fight this inequality, how we engage the private sector, how we pursue dialogue. This is nothing new, but technology maybe accelerates certain developments, and I believe that with more technology literacy among regulators there will be a better understanding of how to speak a common language which incorporates elements of technology, and not just, let's say, law and the usual social policy.

Anastasia Lauterbach, thank you very much indeed for being with us today.

Thank you so much for having me.

And we hope to catch up with you again at some stage in the future.

Oh yes, absolutely.

And do check out our videos on the ITU YouTube channel and more of our podcasts on the ITU SoundCloud channel as well. Thank you very much.