It's such an important topic in our view. You know, artificial intelligence is moving around the world; it's really changing, and will increasingly change, how we interact with the world. And the question of ethics is now intersecting with it on good days and colliding with it on bad ones. I can't help but pause for a moment just to reflect on the issues of the world, certainly the issues in Australia and the issues in New Zealand, where I've just spent the last two days, when it comes to technology. Because I think it does speak to us broadly about technology in society, and it in some way frames the issues for artificial intelligence as computing moves forward. The terrorist attack, the tragedy in Christchurch, is obviously something that was a devastating loss to the people and the nation of New Zealand. But it was by no means confined in its impact to the people who live there. It's being discussed around the world, and that's a good thing. It should be. I think it needs to be a learning moment for everyone who works with technology. It's a learning opportunity for everyone who thinks about technology. And I think it's clear as day that, despite the good efforts of some, the technology in place and the safety controls in place did not do enough. I think that's obvious to most people who think about this. And so as we look to the future, it is a time that really calls on the tech sector, in my view, to do more in this space and to recognize that it no longer lives in a world where it can expect to have license to do everything it wants by itself, without government oversight or review or regulation or law. I don't think anybody yet has all the answers in this space, but one of the things that we've noted in the last few days, one of the things that I talked about with government leaders in New Zealand, is the three areas where, at a minimum, we think there is a need to do more. One is to focus on prevention.
Clearly we don't want these kinds of videos uploaded or broadcast in the first place. And there are more advances that can be made, both when it comes to human controls and to technology, to address that. Second, we need to recognize that no matter how good a job everyone does, this is not likely to be the last crisis in the history of the world. And our industry needs to do a better job of coming together and working together in moments of crisis. We've actually developed certain crisis response capabilities for issues like cybersecurity, and we need to apply them when it comes to online safety as well. And third, I think there's an opportunity for all of us to reflect on the need to establish a healthier online environment. I think it's disturbing on many days to see how toxic digital discourse has become. It feels on certain days that the internet is bringing out the worst in humanity, even though we take pride that it also creates many opportunities to bring out the best as well. And while there's a wide gap between hate speech and an armed attack, I think there's an opportunity for all of us to think about how we try to create a better environment online, so that the standards of behavior we insist upon in a civilized world offline are reflected there as well. In short, it really speaks to how challenging the connection between technology and society has become. It speaks to the unintended and to some degree unforeseen consequences of technology, and hence I actually think it speaks directly to the issues we're here to talk about today as well. Because we are rapidly entering a world that's powered by artificial intelligence. As we think about AI, as we talk about AI, I think it's often helpful to start by talking about what the heck it is. If you've come here concentrating in computer science or data science, you can tune out for the next 90 seconds. But for all the rest of us, it is so important, I think, to remind ourselves what we're talking about.
Fundamentally, it's about computers understanding the world, reasoning, and making decisions. Decisions that previously were always made by human beings. And that reflects the advance of computer vision: not just the ability of computers to see, but to understand what they are seeing. The ability of computers to understand speech, not just to hear, but to understand what is being said. The ability of computers to translate between languages. And perhaps most fundamentally of all, the ability of computers to reason, to gain knowledge, and to learn. I think oftentimes people are waiting for the day when, magically, all of a sudden all of this will appear, and we'll say this is the day that AI arrived. But the truth is, it's been arriving all around us already. We see it in a variety of different ways. If you have a new automobile from BMW or somebody else, you may well have an indicator in your car with a light that flashes and an audible signal when the computer, connected to a camera, sees and detects a human being crossing the street. Or if you have an iPhone, you may have an app called Steno. And if you don't, you can easily download it. What Steno does is record everything that is being said, just as a tape recorder has for many decades. But in addition, it uses speech recognition to create a transcript at the very same time, all on the basis of computing. Or if you have an Android phone or an iPhone, you can download the Microsoft Translator app. You can either type in or speak what you want to say in one language, and you can choose to have it translated into a different language. In fact, you can choose to have it translated into 62 human languages, or Klingon, if that's what you're into. It shows how quickly machine learning is moving into language translation.
Or if you use a service like Spotify or Netflix or iTunes, you've probably experienced listening to one thing or watching a show and then getting a recommendation for what you might want to watch or listen to next. That's based on machine learning. It's based on learning what you like, and learning what other people who have similar tastes like as well. In a way, you could ask, why are we having this conversation now? You know, the first academic meeting about artificial intelligence took place at Dartmouth in 1956. So why are we in Canberra in 2019? Well, I think when historians look back at the decade that we're about to complete, they will say that this was the decade when artificial intelligence took flight. And it's really for three reasons. One is we've seen an enormous advance in computational power: not just the ability of computers to move faster and finally get to the point that is really needed for effective machine learning, but they've moved to the cloud as well. Ten years ago, if I had come here in 2009, I don't think we'd even be talking very much about the cloud. It was beginning to enter the conversation, but it wasn't something that people were using to store their own data. Now universities, companies, governments, and NGOs can choose to access computational power in the cloud, meaning they don't have to go buy a server farm themselves to make use of all of this power. Second is this constant doubling of data. Digital data is doubling every two years. More digital data will be created this year than existed on the entire planet a decade ago. And the third is the fundamental advance in what is called deep learning, or neural networks. Since that conference at Dartmouth in 1956, there has been a vibrant debate about which approach to machine learning and artificial intelligence would take off.
Would it be expert systems, where people try to write the rules and then computers reason on the basis of those rules? Or would it be neural networks, which work the way the human brain works, which is mostly by recognizing lots and lots of patterns? It turns out that you can train a computer to recognize a photograph with a cat. You just have to give it about 100,000 photos and tell it which ones are cats and which ones are not. And it learns, yes or no. And it is very similar to the human brain in that regard. But all of this really tees up the fundamental question that now confronts us all, everywhere around the world. With all of this computational power, how do we earn the world's trust? In some ways, when we published a book about this about a year ago, we realized it boiled down to one thing. The question is not only what computers can do; it's what computers should do. And that's when these issues move from the realm of computer and data scientists to every field that is represented at a university. Because it is ultimately all of these disciplines that need to come together to enable us to think through and answer this question. It's what led us at Microsoft to start to think about what we, and so many other people, call the ethics of AI. How do we begin to think about ethical issues for artificial intelligence? Well, in a sense, it required that we start to identify what the ethical issues are and begin to define principles for them. We came up with six. The first is fairness. We believe in fairness in societies around the world. We want to avoid bias. We want to avoid discrimination. And as I'll discuss, that is a real issue for certain fields of artificial intelligence. Without the right training and without the right data, it turns out that computers can be just as biased as people. The second is reliability and safety. An issue that's been front and center here in Australia, and in the news, over the last 24 hours.
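[Editor's illustration] The cat-photo example above can be sketched in miniature. This is a toy illustration, not how production image classifiers are built: each "photo" is reduced to two made-up numeric features, and a simple logistic-regression "neuron" learns the yes-or-no distinction purely from labeled examples, much as the talk describes.

```python
import math
import random

random.seed(0)

# Toy stand-in for "100,000 labeled photos": each "image" is reduced to two
# numeric features. Cat examples cluster around (1, 1), non-cats around (-1, -1).
def make_examples(n):
    data = []
    for _ in range(n):
        is_cat = random.random() < 0.5
        center = 1.0 if is_cat else -1.0
        x = (center + random.gauss(0, 0.5), center + random.gauss(0, 0.5))
        data.append((x, 1 if is_cat else 0))
    return data

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=20, lr=0.1):
    # Stochastic gradient descent on log-loss for a single logistic unit.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            p = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = p - y  # gradient of the log-loss with respect to the logit
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

def accuracy(data, w, b):
    correct = sum(
        1 for (x1, x2), y in data
        if (sigmoid(w[0] * x1 + w[1] * x2 + b) > 0.5) == (y == 1)
    )
    return correct / len(data)

train_set = make_examples(2000)
w, b = train(train_set)
print(f"training accuracy: {accuracy(train_set, w, b):.2f}")
```

Nothing here is told a rule for "cat-ness"; the weights are learned entirely from the labeled examples, which is the pattern-recognition point being made.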
It was really the automobile that gave birth to the legal fields that created the obligation on almost all suppliers to ensure that their products are reliable and safe. The good news is that gives us a lot on which we can build, and we need to build on it. Because when computers start making decisions on their own, we had better ensure that they're operating in a reliable and safe manner. The third area is privacy and security. An issue that has evolved rapidly over the last 40 years as a result of the information age, but that now makes its way into the handling of data sets and people's personal data as well. The fourth is inclusiveness. I always think it's worth reminding ourselves that one of every seven human beings on this planet has some kind of disability, temporary or permanent. It may be a physical disability. It may be a mental disability. But it is vitally important that computers today, and with AI in the future, are designed in a way that makes it possible for everyone to use them and improve their lives. And if we fail to think through these scenarios, we can actually set people's lives backwards. All four of these ethical principles really rely on two others. The first is transparency. You can't expect people to trust technology they're unable to see or learn about or understand. And this is a complicated issue, because it started with people thinking, well, we'll publish algorithms. But in addition to realizing that publication of algorithms exposes trade secrets, what we really came to appreciate was that published algorithms are just not that insightful in answering the questions that most people have. So there's been a whole focus on making AI more explainable and transparent. And then the last one is accountability. In many ways, I actually think it is one of the most fundamental questions for our whole generation of people, because we are the generation that is bringing this intelligence to computers.
And therefore we are the generation that will decide whether computers will remain accountable to people, and whether the people who design these computers remain accountable to the rest of the world. I think it's of fundamental importance that we do so. Now, that really kicked off for us a broad discussion around the world just 14 months ago. I've had the opportunity to talk with people at universities, to talk with computer and data scientists. If anything, the thing that has surprised me the most is how quickly this issue has gone mainstream. It was reflected just a month ago, when we went to Rome at the Vatican's request to talk about this topic with the Pope. That is not something that happens every day. But it shows how important this is. It is on people's minds. And what we're finding is that, as people focus on the general principles, as they should, we're also, not surprisingly, finding that these issues are evolving in the context of some very specific scenarios. And I think it's these specific issues that will in many ways provide early learning around the world for how ethical issues are considered. One issue on the West Coast of the United States has been AI in the military. You know, we're entering an era where artificial intelligence is being infused into weapons in a wide variety of ways. And it's forced us to step back and ask, well, what are the ethical issues for AI and weapons? And what do they mean in practice? Well, it turns out that in many ways weapons systems are quite varied. But the biggest focus has been on what are called autonomous weapons. Or, as some people refer to them, killer robots. The truth is, no one wants to wake up in the morning and find out that machines started a war the night before. So how do we address these issues? Well, in many ways it calls on us to focus on three principles in particular. The first is reliability and safety. We can't afford to have these weapons become unsafe. The second is transparency.
No one can have confidence in their military and their military policy without transparency about how AI is being used in this field. And I think the most important is accountability. It is about humans in the loop, or meaningful human control, so that we don't delegate so much power to AI-based weapons that human beings lose their ability to control them. So this is suddenly rippling around the world. It's made it to the United Nations. It's made it to the major military command centers of countries around the world. There's a vibrant debate about whether every country will apply the same set of standards or whether there'll be a divergence, and, fundamentally, about how we ensure that these kinds of new weapons remain subject to the existing international laws of war. And just as new advances in technology, really since the 1860s, have required that governments and the public come together to think through what they mean, whether it was hollow-point bullets or dynamite or chemical weapons or biological weapons or nuclear weapons, this is a next generation of issues of fundamental importance for arms control, arms management, and public safety. And then there's the other issue that's really taken off that I wanted to talk about a little bit more, because I think it's been interesting and instructive, not just for the issue itself, but for what it means more broadly. And that's AI and facial recognition. I think it's fascinating because, when you step back and think about it, what is one thing that we could all do when we were just infants? We can recognize a face. Infants can differentiate between a mother and a father, a caregiver or a stranger. It's a classic example of the pattern recognition capability embedded in the human mind.
Well, now it turns out that computers are rapidly becoming able to do this as well, because it turns out that you can take all of the facial characteristics that we don't even think about and reduce them to a set of mathematical equations: the distance between your pupils, the shape of your nose, the width of your smile, the jut of your jaw, the cut of your chin. And when you reduce these to mathematical equations, it turns out that computers are getting better and better at recognizing people. There are a great many benefits that will flow from this. Some of it will be a matter of consumer convenience. There's a bank here in Australia that is working with us on a pilot so you can walk up to an automated teller machine and it will recognize your face. You put in your PIN and you get your money. You no longer have to carry around your card. There are far more substantial uses as well. There are some diseases in the world that manifest themselves in facial characteristics that computers are proving more capable of recognizing than some doctors. It is a tool that doctors can deploy to help diagnose people who have serious conditions. But it is also subject to abuse and misuse, and it needs to be dealt with properly. That's why, as you heard, last year we took what was regarded as the unusual step of saying this technology needs to be regulated. One of the things that fascinated me was the reaction of some others in Silicon Valley. They said, well, you guys must be behind if you want regulation. Well, we pointed out that in the United States the National Institute of Standards and Technology has been measuring the performance of facial recognition technologies, the algorithms. More than 40 companies submitted their algorithms, including Microsoft. And so I'm quick to remind people that in fact our technology was at the very top in terms of performance. We don't think this is something that needs to be regulated because we're behind.
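[Editor's illustration] The idea of reducing a face to "a set of mathematical equations" can be sketched as follows. This is a hypothetical toy, not Microsoft's actual facial recognition pipeline: each face becomes a small vector of invented measurements (pupil distance and so on, with made-up numbers), and a probe face is matched to the nearest enrolled template within a distance threshold.

```python
import math

# Hypothetical per-face measurements: (pupil distance, nose ratio,
# smile width, chin length). All values here are invented for illustration.
enrolled = {
    "alice": (62.0, 1.8, 48.0, 12.0),
    "bob": (58.0, 2.1, 52.0, 14.5),
}

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, gallery, threshold=3.0):
    # Find the nearest enrolled template; reject the match if it is
    # farther than the threshold (i.e., "face not recognized").
    name, template = min(gallery.items(), key=lambda kv: distance(probe, kv[1]))
    return name if distance(probe, template) <= threshold else None

# A probe vector close to alice's template is identified as alice;
# a vector far from everyone is rejected.
print(identify((61.5, 1.9, 47.6, 12.3), enrolled))
print(identify((100.0, 9.0, 90.0, 30.0), enrolled))
```

Real systems learn their feature vectors (embeddings) rather than hand-picking measurements, but the match-by-distance structure is the same idea the passage describes.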
We think it needs to be regulated because it's too risky to leave to the tech sector alone. We want to avoid a race to the bottom. Facial recognition is a good example of an AI-based technology, because it will get better with access to more data. So it can be very tempting for companies to do whatever deal comes their way if it will enable them to get access to more data and improve their service faster. But that is the classic recipe for a race to the bottom. And the only way to avoid that kind of race to the bottom is to create a regulatory floor. That, in fact, is what we need to do. So just as I showed you with AI and weapons, we've identified the issues that we believe are pertinent to facial recognition. And we've announced principles that we will apply in our own business in each of these areas. But we've said that there needs to be law as well. And we wanted to share with you a little bit of our thinking. The first issue that we believe needs to be addressed is bias. If you look at the performance of facial recognition services today, they are not as accurate for women as for men. They are not as accurate for people of color as they are for lighter-skinned or Caucasian people. And the reason, fundamentally, is in part the data sets used to train them. It turns out that there are more photographs on the internet of white men than of dark-skinned women. But the interesting thing, as we've thought about it, is that no one really wants to buy a service that's biased. No one wants to use it and have their decisions lead to discrimination. But the market cannot work unless it's well informed. If there's not academic access to these services, if there are not opportunities for third-party testing and comparisons, then you create a recipe for that race to the bottom.
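[Editor's illustration] The link between unbalanced training data and unequal accuracy can be demonstrated in miniature. In this toy sketch (synthetic data and a deliberately crude one-feature threshold classifier, nothing like a real facial recognition system), the model is trained mostly on examples from group A, so the decision threshold it learns fits group A well and group B poorly: the accuracy gap falls straight out of the data imbalance.

```python
import random

random.seed(1)

def sample_group(n_pos, n_neg, pos_mean, neg_mean, sd=0.5):
    # Synthetic one-feature examples: label 1 around pos_mean, label 0 around neg_mean.
    data = [(random.gauss(pos_mean, sd), 1) for _ in range(n_pos)]
    data += [(random.gauss(neg_mean, sd), 0) for _ in range(n_neg)]
    return data

# Group A is heavily over-represented in training, and its feature
# distributions sit higher than group B's.
group_a_train = sample_group(950, 950, pos_mean=2.0, neg_mean=0.0)
group_b_train = sample_group(50, 50, pos_mean=1.0, neg_mean=-1.0)
train_data = group_a_train + group_b_train

def best_threshold(data):
    # Grid-search the single cutoff that minimizes training error.
    candidates = [i / 10 for i in range(-20, 31)]
    def errors(t):
        return sum(1 for x, y in data if (x > t) != (y == 1))
    return min(candidates, key=errors)

def accuracy(data, t):
    return sum(1 for x, y in data if (x > t) == (y == 1)) / len(data)

t = best_threshold(train_data)

# Evaluate on fresh samples from each group: the learned threshold sits
# near group A's optimum, so group B pays the price.
acc_a = accuracy(sample_group(500, 500, 2.0, 0.0), t)
acc_b = accuracy(sample_group(500, 500, 1.0, -1.0), t)
print(f"threshold={t:.1f}  group A accuracy={acc_a:.2f}  group B accuracy={acc_b:.2f}")
```

This is also why third-party testing matters: without per-group evaluation like the last two lines, a single aggregate accuracy number would hide the gap entirely.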
So what we've advocated is legislation that would require companies in this space to share information, to explain how their service works, and to enable third-party testing, as a legal obligation of participating in this market. We've also said that there should be a legal obligation on those that deploy facial recognition to ensure that there are trained human beings who use the results and think, based on their insight and expertise, before they simply follow what a computer spits out, at least when they're making a decision that is going to have a real impact on other people. We think a law that does those things can make a real difference in addressing this part of what in effect is an ethical challenge. The second issue is about privacy. The interesting thing is that we're suddenly entering a world where what one US Supreme Court justice described over a century ago, the right to be let alone, is perhaps at risk in new ways that we never thought about before. Because with ubiquitous cameras, every time you walk into a store, every time you walk into a collection of stores, it's possible for cameras connected to facial recognition services to follow you around: to know everything you bought, everything you picked up, everything you put down, everything you looked at in the last store before you walked into the next. And this too will bring benefits. It will make shopping more convenient. But there are risks as well. And what we're saying is that before public establishments deploy this technology, at a minimum there ought to be some public notice, so that people know, and there ought to be a mechanism, in person or online, for consent. And this is an area where we think there will then be a lot of evolution and regulatory innovation over time. And then the final area that we focus on is what we think of as the democratic freedoms that we all rely on every day. Now, here too, facial recognition will help keep the public safe.
There are many bona fide and important uses of it, including at the border or in airports. But our perspective is that it's really important to strike a balance between safety and democratic freedoms. When you think about it, the right to speak and to assemble has always been fundamental to democracy. But facial recognition deployed by a government can really unleash mass surveillance on a scale that's unprecedented. Now, it may be unprecedented, but that doesn't necessarily mean it's unpredicted. If you go back 70 years to George Orwell, what he postulated in his book 1984 was literally a world where people couldn't come together to organize and express themselves politically, because they were literally followed and heard and seen everywhere they went. Well, this is where we just have to be careful about the future that we're creating. If we don't think these things through in advance, we may find ourselves waking up in some city in some country in the year 2024 and feeling that in fact we're living in a page from 1984. So what should we do? Well, what we've recommended is that the law permit law enforcement to use facial recognition for the ongoing surveillance of a specific individual only in limited circumstances: when either there's a court order, as is required today for a search warrant around the world, or there's an imminent risk of death or serious injury. And we think it's especially important, perhaps you might think ironically, for the countries and governments that protect these freedoms the most to be the fastest to put these precautions in place, because that's the only way we'll build a consensus to then persuade, or at least create some pressure for, other governments to do so as well. One of the interesting things, as we've thought about all of this, is that we've actually tried to bring a new approach to regulation for technology. What we've done is take a page out of software development.
There's a concept in software development, which those of you who concentrate in computer science will recognize, of creating a minimum viable software product. If you want to build a car, you don't build and release a product until you have every piece of the car in place. But what a minimum viable software product involves is the creation of the smallest product needed to achieve a purpose. A skateboard can become a bicycle, which can become a scooter, a moped, a motorcycle, and finally a car. And the virtue of that approach is that you get feedback and you learn faster from real-world experience, and then by the time you build your car, you are probably going to be launching it faster, but more importantly, you're likely to have built a better car, because you'll have gotten all of this feedback. So the question that we've really been thinking about is, can we take this concept from software development and move it to technology regulation? In other words, instead of debating for years and years the biggest, broadest possible law for facial recognition, can we get something going and out the door? Because when we look at all the recommendations I just shared with you, I would be the first to say, and I will guarantee, that ten years from now someone will look at them and say they didn't go far enough, that there were many more things that needed to be done. But our point is, let's do what we know how to do and learn from it. We unveiled our proposal in Washington, D.C. in December of last year. In February, just last month, the Washington State Senate, in the state where Microsoft is based, passed a piece of legislation with those pieces in it by a vote of 46 to 1. And I'm optimistic that it will then make it through the State House of Representatives and be signed into law by our governor by, say, April or May.
So imagine what it means to go from a world where you can say, here's a smaller regulation that makes sense, and say that in December, and by June have it in law. That is just not the way we're typically accustomed to governments dealing with the important technology issues of our day. But we think it's a new capability that needs to be developed so that governments can move faster, whether it's facial recognition or some of the other issues that are garnering headlines at the moment. Ultimately, every aspect of ethics for AI needs a global conversation. That's one of the conclusions we've come to. It will require a global understanding, because fundamentally this is global technology. And one of the things that I think is so important to recognize is that ethics fundamentally rests on philosophy, a human view of the world. And we live in a world of different philosophies, of great philosophies around the world, but differences between them. There are some days when we're dealing with these ethical issues, on this or other questions, and I feel like we're in the middle of a debate between Socrates and Confucius. A debate that the world hasn't resolved, a debate the world's not going to resolve, but a debate that needs to continue with a healthier and healthier dialogue. Because at the end of the day, we're not only going to need strong ethical principles, we're going to need AI ethics reflected in law. That's the only way to ensure that society as a whole is protected from certain uses of technology that we are worried about or would regard as dangerous. I do want to conclude by saying that, despite all of these concerns, I do work for a tech company. We are excited about where this is going. And ultimately it is about ensuring that technology can advance in ways that create benefits, while we address the risks and harms as well. I do want to just share with you a few of the amazing advances where AI is already making people's lives better.
Take the life of someone who's blind, and what it means to give that person something they already have, a smartphone, which of course has a camera and the ability to speak, including into an earpiece, and connect that to AI, so that a person who is blind can start to see in new ways. That's what we've done with an application called Seeing AI. You can see in this short video how it works. A Microsoft research project for people with visual impairments, the app narrates the world around you by turning the visual world into an audible experience. Point your phone's camera, select a channel, and hear a description. The app recognizes saved friends and describes the people around you, including their emotions. It reads text out loud as it comes into view, like on an envelope, and it can scan and read documents like books and letters, guiding you as you position the page: "Top of the page not visible. Hold steady." When looking for something in your pantry or in the store, use the barcode scanner with audio cues to help you find what you want and, when available, hear additional product details. You can even hear descriptions of images in other apps, like Twitter, by importing them into Seeing AI. Finally, explore experimental features like scene descriptions to get a glimpse of the future, such as a description of a young girl throwing a frisbee in the park. Experience the world around you with the Seeing AI app. I think that's one example of how AI actually improves the lives of many people. A second example I love, and I think it speaks to what goes on at universities. In this case, it's Princeton University and the Princeton Geniza Lab, where AI is actually transforming the work of a Near Eastern Studies scholar, Marina Rustow.
A lot of what she does is put together the fragments of documents from the Middle Ages that came from Cairo. Well, it turns out that these documents were torn over the years. They ended up in museums and libraries around the world. They've been digitized. But typically you have to figure out how to put them back together. And you can't rely on the shape of the paper alone, because the paper has been torn or re-torn in some instances. Well, in this case, she was able to use an AI algorithm developed in Israel that focused on the angle of the penmanship and the width of the ink. And in literally the blink of an eye, she was able to take the top half of the document that you saw a moment ago and connect it with the bottom, and by doing so, figure out what had happened and exactly when it had been created. I think that is a fascinating glimpse into how AI will in fact impact every academic field at this and every university. And AI is already going to work to address many of the problems of our time. At Microsoft, we launched a program called AI for Earth. It's investing in grants and technology support and research with people focused on four areas, including across Australia. We've launched a program called AI for Accessibility that is working with startups and universities to build on that application called Seeing AI and take it to other areas. And we've launched a program called AI for Humanitarian Action that is working with the United Nations, with NGOs, with universities, and with others on issues like disaster relief, the needs of children, refugees, and human rights. At the end of the day, as I think about all of these ethical issues, I am reminded of a quotation from one of the most eminent scientists of the last century: Albert Einstein. There was a disarmament conference in Geneva, Switzerland in 1932. Diplomats came together when the world was starting to appreciate that things were going in a difficult direction.
And he talked about what the machine had done for humanity. He talked about all the benefits that it had created. But he also said: just think about how carefree life would have been if the development of the organizing power of man had been able to keep pace with his technical advances. We all know from history that it did not. And it was a disaster for the planet. I think our opportunity and our challenge, for our time, for our generation, and for this century, is to take the steps that will be needed so that humanity's ability to manage technology can keep pace with technology itself. Thank you very much.