Good evening everyone. To our invited speakers, guests, board of trustees, and live stream viewers alike, welcome. My name is Christina Tessier and I'm the Director General here at the Canada Science and Technology Museum. We are delighted to present this evening to you. We are happy to present this evening of Curiosity on Stage with our partners at Google, and we thank our physical and digital audiences for coming tonight to participate in the global conversation about artificial intelligence. Curiosity on Stage is a new program whose goal is to foster dialogue between the public and Canadian innovators: scientists, researchers, makers, artists, and explorers. Experts from every scientific and technological discipline fill our stage with their stories of curiosity, creativity and collaboration. And there is no more pressing topic to discuss through this program than artificial intelligence. Each day the media brings us new stories of its breakthroughs and reflections on its power, and so the term artificial intelligence is one with which most of us are familiar. Despite its ubiquity in the everyday lives of Canadians, AI remains somewhat misunderstood in popular culture. Tonight, we hope to demystify the technology of artificial intelligence for you. Tonight we seek to demystify artificial intelligence and show its potential. We are very happy to have among us world-class leaders in the field of artificial intelligence. We welcome to the stage our moderator, David Usher, Facebook's Joelle Pineau, Google's Pablo Samuel Castro, Element AI's Philippe Beaudoin, and DeepMind's Doina Precup, an amazing group here tonight. Each is an esteemed pioneer in their field and we are very grateful to have them share their expertise with us tonight. Their mission is to improve the world thanks to artificial intelligence, and that is truly fascinating; it reminds us that the future of technology starts today. I will now hand the podium over to David.
I want to thank everyone once more for engaging in this evening's dialogue. As Canadians and as global citizens we are entering the age of the AI revolution, and we hope this evening inspires you with its endless possibilities. And just because I have the mic and my dad's in the audience, I'm going to wish him a happy birthday. Good evening. Good evening. Hi, how are you guys doing tonight? It's like it's a concert, right? So just to give you a little background on how I got to be standing up here for an AI revolution panel: my name is David Usher and I'm a musician by trade, but I am also secretly an uber-geek at night. I run the Human Impact Lab at Concordia University, and I also run an artificial intelligence creative studio called Reimagine AI, where I work with Pablo Samuel Castro on some music artificial intelligence agents. So here's the way the evening is going to run tonight: I'm going to really make it about the panelists and you. I'm going to give a short introduction of all the panelists, and then they're each going to give a 10 to 20 minute talk. After that I'm really going to open it up to you and let you ask questions about what you're excited about, what you're afraid of, what you're interested in. So as they're giving their talks, what I'd really like you to do is think of the questions that you want to ask, and really be brave and ask those questions at the end of their talks. So tonight we have Joelle Pineau, who is the head of the Facebook AI Research lab in Montreal and an associate professor and William Dawson Scholar at McGill University. Take your seat. Pablo Samuel Castro, a research software engineer at Google in Montreal, on Google Brain. Philippe Beaudoin, who is the co-founder of Element AI and currently leads its applied lab and AI for good initiatives. It's a little like a game show, I know.
And Doina Precup, who splits her time between McGill University, where she co-directs the Reasoning and Learning Lab in the School of Computer Science, and DeepMind, whose Montreal office she leads. So I am now going to open up the floor to all of these guys, and don't forget to think of the questions you want to ask after all the talks are done, alright? Thank you. Good evening everyone. Good evening. It's a great pleasure for me to be here with you this evening. If everything goes well, I'm going to try to demystify what artificial intelligence is, the technology we're developing, and potentially some of the promises of this field, and then open up the stage to my collaborators and colleagues, who will talk about very specific areas of AI and their different potential. I think anyone who has been reading the media and following what's going on is not surprised, perhaps, that we are having a panel on AI today. There's been enormous attention put on this technology, its potential, and some of the challenges that come with it. Autonomous driving is one of those areas where we are soon to see a major shift in transportation in general. In many ways the technology which we see deployed in vehicles today, though it seems very sudden, is technology that goes back several years. Twenty years ago, in the 1990s, I was in Pittsburgh doing a PhD in robotics, and at the time researchers, colleagues of mine, were already developing autonomous vehicles, smart cars, and they had a vehicle that was actually able to drive from one coast of the US to the other. This was 1997, and 98% of the driving was done autonomously. That seems very impressive for the time. But if you ask about the percentage of the time that was not done autonomously, when did we really need the driver? It was any time the vehicle wasn't driving in a single lane on the highway. We had some technology, but the gap between where the technology was and where we needed to go to have truly autonomous vehicles was still very large.
If you look at the smart car technology developed and deployed by several companies today, it's actually much better equipped to handle dynamic environments: driving in the city, driving in various conditions. And so that difference from 20 years ago to now is in many ways a reflection of the progress we've seen in AI. It's not so much that the fundamental machines are different; it's really the AI technology that has changed a lot. We're seeing the innovation in AI not just in transportation, of course. Probably many of you have used the likes of Siri and Alexa and similar interfaces, sometimes with funny results, sometimes a little bit frustrating. But the fact that this technology is so widely deployed is actually another sign both of what is possible now and of the road that is still ahead of us in terms of the development of that technology. One of the reasons we're calling this time an AI revolution is actually an analogy to previous revolutions. When you're at a time of great change, it's interesting to look to the past and ask what other times saw such large changes, and perhaps the closest analogy we have is that of the industrial revolution. During that time we saw a great shift in the workforce: humans' mechanical power was suddenly replaced in many different areas by the physical, mechanical power of machines. And what we're seeing today with AI is a replacement of humans' cognitive abilities in several specific tasks by the, not always equivalent, but developing cognitive skills of machines. It's really this shift, propagating across several sectors, applications and domains, that we are discussing today. So back to this question: what is AI? How do we demystify this complex notion?
Human intelligence is a rich combination of various abilities, all intertwined in a very fluid way, allowing us to have beautiful behaviors: creativity, reasoning, appreciation of art and science, the ability to connect with individuals. Machine intelligence so far has focused on sub-problems of this, whether it's natural language understanding, computer vision, reasoning, or building memories. So far, the technology we've developed very much tackles each of these problems separately, developing mathematical models and computer algorithms for each. And while we have computer programs, AI systems, that are now able to translate from French to English, or from Japanese to Farsi, in a relatively robust way, and we also have computer programs that are able to play poker, checkers, and Go in a way that matches the ability of the best humans, these computer programs are actually very different. The models and the algorithms that we build are very much special-purpose, whereas humans seem to have developed the ability to be effective across a really broad range of tasks. And so one of the challenges going forward for the technology is to think about how we can develop new models that are multi-purpose, that can solve a wide range of tasks. Going back a little bit historically, there are a few different eras in the development of AI, and I'll pull out two of them to keep things relatively short. I'll start with the early age, which is what I would call the programmatic era of AI. Essentially, we were programming computers a little bit like we write a recipe. We would write computer code that specifies, in a very clear way, all the steps that the machine needed to take to solve a particular problem. And that was fine for some tasks. It gave great results in planning and automated scheduling. It gave great results in building systems that can play chess. But it was very brittle.
Whenever one of the conditions in the recipe wasn't matched by the conditions in the real world, and we know the real world can be full of uncertainty, the computer program would break, and our AI did not really hold up. More recently, what we're seeing is a category of AI that is fueled by machine learning. The machine, rather than being programmed line by line, is actually shown several examples. Take the example of a machine looking at scans of brains and learning to delineate tumors automatically from the image: it's very hard to write that as a recipe. But if you present the machine with thousands or tens of thousands of images, it can learn to automatically detect the regularities, the patterns that come back again and again, and automatically pick out the features that define the notion of a tumor. Almost all of the progress we've seen in the last few years really comes from this technology of machine learning: the machine being taught by being shown examples. We've seen spectacular progress in this way in our ability to program computers, to teach computers, to analyze images. We have systems that can now analyze images and pick out several objects, identifying what the objects are with greater precision than humans. Computer programs that can run on a laptop are able to automatically differentiate between 125 breeds of dogs straight from images. We're seeing similar progress in speech recognition. The technology for speech recognition was more or less stagnant through the late 1990s and early 2000s, and in recent years we've seen a big improvement in performance. In this case, I'm showing the error, the number of mistakes the machine makes, and what we see from about 2008 to 2012-2015 is this dip: the machine is making fewer and fewer errors. And that is really fueled by machine learning technology. Not just any kind of machine learning: a particular branch of machine learning called deep learning.
It sounds a little bit mysterious. In fact, deep learning is inspired by human brains. Our brains are composed of a collection of biological neurons. These neurons send information to each other in the form of electrical impulses carried along the axon; some of you may remember a little bit of your high school biology. In a computer, the analogue we have is an artificial neuron. You can think of it as a little piece of code that computes a very simple function. And if you connect all of these little pieces of code together, they send information to each other, and they can actually compute very complicated functions and represent complicated concepts. This is what we call an artificial neural network. And when we connect many, many of these neurons, hundreds of thousands of them, and you train them with a lot of data to predict the right function, then you have deep learning. Much of what we do in our lab is to come up with the right configurations of these models, to figure out how to design the architectures, the particular functions that are being computed, such that we can analyze images, understand language, and eventually understand text, speech, any information about the world. Here is an example that I really like, of a machine being trained with a combination of images and text. In this particular case, the task is for the machine to look at the image and predict what the caption for the image should be. On the left, we have a machine-predicted caption: "two pizzas sitting on top of a stove top oven". Not so bad, right? On the right, we have "a group of young people playing a game of Frisbee". This is a sentence that the machine predicted; these are not my sentences. But we're not quite done, right? On the left here, we have "a refrigerator filled with lots of food and drinks". I don't think so. And on the right, we have "a yellow school bus parked in a parking lot". Again, not doing great.
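The picture of "little pieces of code that each compute a simple function" can be made concrete. Here is a minimal illustrative sketch in Python with NumPy, not any production system: one artificial neuron is a weighted sum followed by a simple nonlinearity, and stacking layers of them gives a network that can compute complicated functions. The weights here are random rather than trained, and all the names are made up for this example.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs, passed through
    a simple nonlinearity (tanh squashes the result into (-1, 1))."""
    return np.tanh(np.dot(inputs, weights) + bias)

def layer(inputs, W, b):
    """A layer is just many neurons looking at the same inputs at once."""
    return np.tanh(inputs @ W + b)

# A tiny two-layer network: 3 inputs -> 4 hidden neurons -> 1 output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

x = np.array([0.5, -1.0, 2.0])   # some input (e.g. a few pixel features)
hidden = layer(x, W1, b1)        # each hidden neuron computes a simple function...
output = layer(hidden, W2, b2)   # ...and together they compute a complicated one
```

Training, in this picture, means adjusting `W1`, `b1`, `W2`, `b2` from many labeled examples until the output matches the right answer; with hundreds of thousands of neurons and a lot of data, that is deep learning.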
That is perhaps not surprising if you remember that these machines are trained with a lot of data, and when you see a vehicle that is yellow and has wheels, many times that vehicle actually is a yellow school bus. So what we see is that these machines, these algorithms, these computers, are really learning to replicate the patterns that are in the data. One of the challenges we have is to make sure that this is robust even for the cases we haven't observed frequently in our data set. I will finish by giving you just a taste of how this has impacted robotics. I started with autonomous vehicles; I haven't worked on autonomous vehicles. In fact, in our lab at McGill University we've been developing autonomous wheelchairs, smart wheelchairs, where we take the hardware, physical power wheelchairs produced by standard manufacturers, equip them with several sensors and on-board computers, and give them the ability to navigate in the world. The goal of these particular platforms, which we've developed in collaboration with colleagues in rehabilitation centers in Montreal, is really to provide much more flexibility and autonomy for people who have physical disabilities or mobility disorders. The wheelchair is able to build a map of the environment; it's able to plan a path through this environment, avoid obstacles, and get to a desired goal. This map is actually from the very early days, from about 10 years ago, pre-deep-learning technology. Perhaps you're not used to seeing data in this form, but what you see on the right is actually a map of the first floor of the McConnell Engineering Building at McGill University. All the white space is the empty corridors where the wheelchair can go, and along those white spaces there are dark lines: these are the walls that have been detected by the lasers on board the wheelchair. What you'll notice is that it looks very clean for a university building, right? No students, no obstacles, nothing else.
The hallways are nice and empty. That's because the graduate students went into the lab at 10 o'clock at night and built the map when nobody was around; we weren't able to handle the complexity of humans walking around. In the last five years we've really changed this, and we've had, as a goal, to navigate densely populated hallways. This is from École Polytechnique de Montréal, where we have some of our collaborators. We've done similar experiments in a shopping center, the Alexis Nihon mall in Montreal. The camera images you're seeing are captured from the wheelchair, right alongside the person sitting in the chair. In this case, the wheelchair is being driven autonomously. It learns to circulate, to avoid humans, to avoid the walls, and to get to the end of the corridor without any collisions. This was not done just once; this was done over several experiments over many days, quite reliably. This behavior was achieved with a technique called imitation learning, where a human actually demonstrates the behavior, navigating the wheelchair around the pedestrians, and the wheelchair learns to imitate that particular control strategy. It learns from demonstration how far you need to be to maintain a socially acceptable distance, how fast you can go, and so on and so forth, again using machine learning techniques. Of course, we don't always want to avoid people. In some cases the person sitting in the wheelchair may be accompanied by someone, and they may want to walk together. And so we've actually built other technology, maybe we can turn off the sound here, where a particular robot is able to follow a person. We deployed this technology on the wheelchair, but we also deployed it on some of our field robots, where a robot is navigating in some space, following people in an autonomous manner, to map out a field at the Canadian Space Agency.
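The imitation learning described above, often called behavior cloning, can be sketched as a toy regression. To be clear about the assumptions: the (distance, speed) demonstration pairs below are entirely hypothetical numbers, and an ordinary least-squares fit stands in for the much richer sensors and learned models on the real wheelchair; the point is only the shape of the technique, learning a control rule from human demonstrations.

```python
import numpy as np

# Hypothetical demonstrations: distance to the nearest pedestrian (m)
# paired with the speed the human demonstrator chose (m/s).
demo_states  = np.array([[0.5], [1.0], [2.0], [3.0], [4.0]])
demo_actions = np.array([0.1, 0.3, 0.7, 1.1, 1.5])   # slower when close to people

# Behavior cloning = supervised regression from demonstrated states to actions.
X = np.hstack([demo_states, np.ones((len(demo_states), 1))])  # add a bias column
w, *_ = np.linalg.lstsq(X, demo_actions, rcond=None)

def policy(distance_m):
    """Imitated control: predict the speed the demonstrator would have chosen."""
    return max(0.0, w[0] * distance_m + w[1])
```

The learned policy reproduces the demonstrator's habit of slowing down near people, which is how a socially acceptable distance gets encoded without anyone ever writing it down as a rule.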
So just to finish off, before I pass the mic to some of my colleagues tonight: one of the things that I think we're really expecting to see is how this technology is going to deploy and radiate through several sectors. I only talked about robotics today; some of my friends and colleagues on the panel will talk about some of the other sectors, but I think there are really interesting prospects in many of these fields, which we're going to see in the next few years. It's always important for me, I think, to finish by noting that I'm here today sharing some of this work with you, but in fact there's a huge team of fantastic graduate students sitting in our universities. These are the ones at McGill University, but there are others in Ottawa and several other Canadian cities doing all of this work, and they're the next generation that is going to really help us build this revolution. Thank you. Next up is Philippe Beaudoin. So I'm the one who's supposed to be running for 20 minutes. I'll start my timer here and try to go through all of this a bit more quickly. Today I'm going to talk about AI for good. AI for Good is a lab that we have at Element AI in Montreal, and I'm going to tell you a bit about the idea behind this lab. But first, before I get started, I wanted to show you what I like to call the Maslow pyramid of artificial intelligence. You know the Maslow pyramid of human needs; this is the equivalent, but for our artificially intelligent friends. The podium, please. Sorry? What? Stand at the podium. Oh! It's not... This mic... Is it working? Excellent. Sorry, I like to walk around when I'm talking, but I'm going to try to do this at the podium. So this is the Maslow pyramid of AI needs. When you start doing AI, there are a lot of things you need to do. The first thing is you need to collect data. You need to make sure that you have enough data to train your machine learning system.
Eventually you will need to move that data around and to store it, and all of this takes a lot of time, takes a lot of resources, and can be very costly. Eventually you need to explore your data and transform it into something more meaningful. And then you probably need to label it: you need to have some humans work with you and try to figure out the aspects of the data you don't know about. Because artificial intelligence is about learning something new about your data, and in order to do that, you have to label a couple of examples before you can get started. Only when you've done all of that can you really unleash the power of AI. This is when you can start to learn and optimize and get better and better results. So going through this whole pyramid is very expensive; there's a lot of stuff to do there. We talk about AI as if it were a single thing, but it's essentially just the tip there, and you need to do everything below it. So who has the resources to do all of that? Well, if you think about who's doing it, the pioneers in the field are essentially the big tech companies. And think of the business model that is very often behind the big tech companies. I like to give this quote. This guy, Jeff Hammerbacher, was one of the first engineers at Facebook, and in 2011 he quit Facebook and said: "The best minds of my generation are thinking about how to make people click ads." Okay, that was a bit provocative, but I kind of like it. Of course it's very generic, and it's not really the case; there's a lot of great research happening at Facebook. But the bottom line is we need to have AI in a lot of diverse fields; it cannot be happening only in big tech. At Element AI, we really want to do that in a bunch of different industries. And as Joelle said, it's going to happen in a bunch of different fields. But in particular, we wanted to make sure that there was a way for us to contribute to the greater good, and this is why we started the AI for Good lab.
The idea of the AI for Good lab at Element AI rests on three principles. The first one is that we want to empower the people who are fighting the good fight, the ones who are trying to alleviate human suffering or to have a sustainable impact on the world; we want to do the research that enables that. So research in AI for sustainable development is something we care about a lot at the lab. And finally, we don't want to be like white saviors swooping into Africa and solving the problems there. We really want to work with the local populations, find the local startups and the people who really know the challenges that are going on in these places, and empower them, give them the ability to do what they need to do with artificial intelligence. But more importantly, the thing that's driving us is the idea that we really want to build stuff, right? We want to make things. And I'm emphasizing that because a lot of AI for good is about the ethics of AI: how do we care about privacy, how do we care about security in a world where we have artificial intelligence, where we have personal data sets being used to train systems? These are really important questions, and we are seeing, I guess, today that they're becoming more and more critical. At Element AI, we care about that a lot. But the goal of the AI for Good lab is to make things, you know, to build AI that has impact. And the way we want to do that is by connecting different actors in that big space. We want to find the funding organizations, the philanthropists who care, who believe that AI is the right lever to have an impact on the world. We want to work with NGOs, with government agencies, with local startups, those that are able to identify the right problems and that know the domains where AI can have impact. And finally, at Element, we want to come in and provide the technological support and the research expertise to try to make it all happen.
So the next slide should be all about what we're building, right? We want to build stuff. Unfortunately, we also have this philosophy that we want to finish our thing before we start talking about it. So what I'm going to do instead is tell you about a project that looks very much like what we want to do. And one of the reasons it looks very much like what we want to do is that it was actually the project of Julien Cornebise, whom you see on the left side here, who was at DeepMind and then spent one year working with Amnesty International on this project, and is now the head of our AI for Good lab at Element in London and our director of research. This is a project he worked on last year with Milena Marin at Amnesty International and Daniel Worrall, who's a postdoc at the University of Amsterdam. So before I get started on what this thing is, a quick show of hands: how many of you could tell me the name of this country here? Okay, not too bad. This country is Sudan. It's just south of Egypt, right there in Africa. The region in the south there is called Darfur, and you've probably heard about Darfur, or maybe it rings a bell, or you kind of remember that bad stuff happened there; essentially this project is about that. I know I didn't remember what went on there when Julien told me about this, but it kind of rang a bell, right? But before I get started on the bad stuff that's going on in Darfur, let's look at the country. What does it look like? If you go and take a walk in Darfur, you might encounter one of these huts. This is called a tukul. A tukul is the main type of home in Darfur. It has a thatched roof and mud walls, and it can be circular like this one, or it can be square. And there are a lot of them. Sometimes they're organized in villages; sometimes they're on hills. They can have walls around them. There's a lot of diversity. They can be in the middle of the desert like this and have different kinds of walls.
But there's a similarity to them, and this is where people live. They're organized in villages or cities and things like that; this is what it looks like. Now, going back to what happened in Darfur: since 2003 there's been an ethnic war going on there. That means that in the middle of the day or in the middle of the night, a couple of soldiers could drive by in a jeep and burn your house, burn an entire village. What this means is that it scatters the population to the winds; people don't have any place to live, and it's been going on for a while. Back in 2003 we used to hear about it quite a bit. In those days George Clooney was one of the spokespeople for this cause, and Amnesty International and other NGOs were trying to bring public awareness to it. Now, the issue is that these things slip from our collective mind, and Amnesty is really keen on bringing public awareness back to this. But they also want to build a strong case around the atrocities: to document them, to figure out whether we can actually bring international justice here and make sure we know who the people who did this are. The problem is that it's really hard to get access to Darfur. The borders are closed; it's just hard to get in and gather hard evidence. You can get testimonies from people who come out, but you actually need more than that in order to build a solid case, and this is what Amnesty International wanted to do. So what they decided to do was: let's look at this country from the sky. Let's look at satellite imagery of this country. And what do you do when you have that? Well, you can have a human look at all of it and try to figure out whether there are huts there, whether there are burnt huts. An additional issue is that we don't even know where the villages are. We can look at a map, but maps aren't very precise, and they don't show the precise villages or the locations of the huts. So one of the tasks for the human is to actually figure out where people live.
So you can do all of that manually, and that's what they set out to do. They paid an expert for a couple of months to work on this, and she was able to map a hundred square kilometers of Darfur, around, I think, a village called Jebel Marra. There she found a number of huts; I don't know if she actually found burnt huts, but she covered a hundred square kilometers in a couple of months. This wasn't enough. So what do you do when you need more power? Well, you get more humans, using a technique called crowdsourcing. Crowdsourcing is when you turn to the wisdom of the crowd. Sometimes it's not so clear the crowd has wisdom, but in this case it worked. What they did is they put up this pretty cool website where you could go and get a job to do. You would get a couple of these tiles, Darfur seen from the sky, and you had to label them as containing human habitations or not. You got a little training, then you got your tiles, and you just did that, and at the end maybe you got a little score saying, and I'm going to just move ahead one slide to show this, you get a little score that says: hey, good job, you've labeled a couple of images. Now, the thing with this is it's much better: you can get a lot of people working on it, and in three weeks these people were able to map a thousand square kilometers of Darfur and identify the huts there. But Darfur is 500,000 square kilometers, so it would still have taken a very, very long time. If you look at these images on the left, though, you can see that the landscape seen from the sky is a lot of lines and corners and things like that. This is exactly what a technique in artificial intelligence called convolutional neural networks is really good at identifying. So enter Julien and his machine learning expertise: based on the labeled images from the crowd, he built a convolutional neural network that was able to automatically identify houses, or burnt houses.
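Why are convolutional networks good at lines and corners? The operation at their heart slides a small filter over the image and responds wherever the local pattern matches. Here is a minimal sketch of that one operation, not the Amnesty system: a single hand-set edge filter applied to two toy 8x8 "tiles" invented for this example, whereas a real CNN learns many such filters from the crowd-labeled tiles.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2D convolution: slide the kernel over the image and take a
    weighted sum of the pixels under it at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hand-set vertical-edge filter: responds where brightness changes left-to-right.
edge_filter = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])

# Two toy "satellite tiles": flat desert, and desert with a bright square hut.
desert = np.full((8, 8), 0.2)
hut = desert.copy()
hut[2:6, 2:6] = 1.0

desert_response = np.abs(conv2d(desert, edge_filter)).max()  # no edges, no response
hut_response = np.abs(conv2d(hut, edge_filter)).max()        # hut walls fire strongly
```

Stack learned filters like this, layer after layer, and the network goes from detecting edges to detecting corners, walls, and eventually whole tukuls, which is exactly what made it the right tool for these tiles.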
And this system could be deployed on the remaining square kilometers that hadn't been addressed by humans. The idea here was to very, very quickly label the entire country. The best thing to do at that point is to look at the little video, and I'm hoping someone can start it. Am I supposed to do it myself? Oh, sorry, I messed up. Let's try again. Excellent. So this video shows Julien's system in action. You can see us zooming out; we're in Africa, we're in Sudan, and we're zooming in to the Darfur region. All the little green dots that you see are dots that have been labeled by humans. If you zoom in on one of these dots, I think this is Jebel Marra, the village I was telling you about, you can see that there are human habitations there, and the places that haven't been labeled are trees and places that are not inhabited. And if you look at what the humans have done, well, sometimes they make mistakes. In this case, it's a field, not quite a house, though there are humans nearby. And in this case, it's just a total mistake: the human who labeled it probably got it wrong. So we have that for a large region, for different kinds of places. In this case it's a village bordering a road, and it was correctly identified again. You can look at this, but you can see that most of Darfur isn't mapped. Now, when you turn on the convolutional neural net, you get a mapping of the entire region. All of that was mapped using an artificial intelligence algorithm. And if you zoom in on a region that the deep network thinks is inhabited, you can see it got it right. There are real houses there, and they had never been seen by a human, but the system identified them correctly. Now, the challenge is: will it work in different regions? You can look at the hills instead of the desert and see if it works. And, you know, drumroll. It's exciting, like I've never seen a video before. Are there houses there? Let's look. Zoom, Julien, go.
And yes, it got the houses even in the more hilly region. But what's more interesting is that it got houses that the humans didn't even get, right? The red regions there are houses that are harder to identify, but they are round, circular tukuls. Now, this was what Julien thought was a failure mode: there are no houses near the South Sudan border. But when they looked at what the algorithm saw, well, there actually were houses. We know there's no big village there, but there is a network of criss-crossing roads, and there are houses nearby. So the algorithm got that right, too. Now, the real test is: can the algorithm detect burnt houses? These blue dots are the algorithm trying to figure out where burnt houses should be. And if you zoom in, what you see is that you really do get these houses without roofs, right there, a real example of a village that was burnt to the ground. So this is a good example of artificial intelligence, a CNN, trying to help people fighting the good fight: people trying to document the atrocities going on there and maybe help us collectively put an end to them. So that's all I had for you today. Thanks for listening; I think I managed to do it in less than 20 minutes. Just to leave you with something: if you're interested in helping, as I was saying, the goal of the AI for Good lab is to connect these three different groups of people. So if you know philanthropists who believe that AI is a good lever to help make the world better, you can connect them to us. Thank you very much. Thank you. Next up is Doina. Remember to think of your questions, okay? So thank you, everyone, for coming. I was asked tonight to talk a little bit about artificial intelligence and healthcare, and so I will try to give you a bird's-eye view of the challenges that AI can handle in healthcare, basically through a series of examples.
And I'll start by outlining the big challenge in healthcare. This is according to Don Berwick, who's a former administrator of the Centers for Medicare and Medicaid Services. And his idea was that really there are three different things that the healthcare system tries to do. One, of course, is to figure out what's wrong with the patients and give them the right treatment as well as possible. The second one is to enhance the patient experience and to ensure that the people working in the sector have good conditions. And then finally, there's a third goal, of course, which is to reduce costs, because healthcare is oftentimes the biggest budget item that we all pay for. The interesting thing is that the healthcare system is really, really complicated. You've got hospitals, you've got family doctors, you've got patients who are in their houses doing searches on the internet, you've got pharmacies filling prescriptions. Sometimes you have other kinds of workers, like social workers, that are involved. So there are many different moving pieces. There are many, many sources of information. And that's actually where AI can really step in and do something really good. We have a lot of information that's being collected. It's messy, it's noisy, it's coming from many different sources, and it has different qualities to it. But if we could actually try to make sense of this information, and also try to integrate it across different systems, then we could do really well. So I'm going to show you some examples of healthcare applications of machine learning. And to do that, I'll show you three different broad areas of machine learning and how they can play into this field. The first one is what's called supervised learning. So if you've ever gone to school, school is basically the perfect example of supervised learning.
You have a teacher, the teacher tells you what you need to know, and then you go and take a test, and you pass the test, and everything is good. And so what does that look like for an AI system? Somebody needs to give the AI system examples that have been labeled with the right information. So in this case, we have a system that has to recognize faces in images, and so we had some people who labeled all the faces, and we're going to train the system with images where there are some faces and where there are no faces. And the system can learn how to do this. Now we can do the same sort of thing with medical images. So this is an example from work that we've done at McGill with my colleague, Tal Arbel. In this case, we have patients with brain tumors who have been scanned using MRI. And so a doctor might have labeled certain parts of the brain where there is pathology. The challenging bit here is that pathology is very, very different from patient to patient. Sometimes you have very small regions that actually look like little dots or like noise, but actually they're meaningful, and one needs to detect them. But if you have specialists, they're very willing to provide this kind of data. Of course, the data is never perfect, because people also make mistakes. People sometimes disagree with each other in terms of these labels, but we can take the data and train a system to recognize this pathology. This is another example, from my colleagues at Google, who were looking at retinal scans of diabetic patients. So diabetics sometimes develop a condition called retinopathy, which can lead to blindness in the long run. And there are doctors who are very well trained in recognizing these, but unfortunately, they are few in number. And in certain parts of the world, for example in rural India, there just aren't that many doctors that have this kind of capability of analyzing these images.
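The supervised-learning recipe just described, labeled examples in, a trained predictor out, can be sketched in a few lines. This is a toy illustration with made-up two-feature data standing in for images, not the actual MRI or retina systems, using plain logistic regression trained by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

# a toy labeled data set: each row summarises a (hypothetical) image with
# two features; the label says whether a specialist marked it as pathology
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)),      # label 0: healthy
               rng.normal(3.0, 1.0, (50, 2))])     # label 1: pathology
y = np.array([0] * 50 + [1] * 50)

w, b = np.zeros(2), 0.0
for _ in range(500):                               # gradient descent
    p = 1 / (1 + np.exp(-(X @ w + b)))             # predicted probabilities
    grad = p - y                                   # error signal from the labels
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

accuracy = ((1 / (1 + np.exp(-(X @ w + b))) > 0.5) == y).mean()
```

The whole point of the "teacher" is the `y` vector: every training example comes with the right answer, and learning is just reducing disagreement with it.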
And so they took images of retinas and they trained systems that tried to recognize retinopathy. But actually, they tried to see what would happen if the system would try to predict not just whether the patient had the disease or not, but also certain other information about the patient. For example, is it a male or a female? What's the age of the patient? Do they have high blood pressure, and so on? And so what you see there is some examples of images where the system in each image is predicting something different. Sometimes it's the age, sometimes it's the gender, sometimes it's the blood pressure, sometimes it's even the BMI of the patient. And actually, in all of these cases, prediction accuracy is really, really impressive, close to 90% in most cases. So these images carry a lot of information, and in some cases doctors are actually not able to make these distinctions on their own. But the AI system, if it has enough data to train on, can actually do this. The second thing is, there are some little green dots there on the images; they're a little bit hard to see. But basically they show where the system is looking when it's making these different predictions. And depending on what it's trying to predict, it's actually looking at different things. If it's looking at the health of the eye, it often looks right in the middle. Sometimes when it's trying to predict blood pressure, it will look at the blood vessels in the eye, and so on. So we get a little bit of insight into what information is being used out of these images. This is another example that actually got a lot of the scientific community very interested, and possibly a little bit scared. This is the Nature paper that came out recently on skin cancer detection. And this is a system that essentially is on par with the best people who do this kind of job. And this is a visualization that's also showing, in 2D, the different kinds of examples.
And you can see that the system clusters away different types of skin cancer, and again can do really well on this. Now there's a different kind of learning, which is more complicated, called unsupervised learning. Unsupervised learning is something that you do when you analyze data without having a specific goal in mind. So in this particular case, there are two trajectories there. They're taken from an accelerometer. And they're similar in some ways: both of them go around and around. In some ways they're different: one is kind of spiky, the other one is not. A system can be shown these trajectories and try to infer what's interesting about them, but of course the problem is much less specified. It's really interesting to think about this, though, in the context of healthcare, because sometimes we just have a mass of data and we would like to make sense of this data. And so this is an example from Miotto and colleagues. It's a system called Deep Patient. It uses a deep neural network to analyze electronic health records. So this is information that's been gathered from different patients across the healthcare system, that looks at their hospitalizations and lab tests, but also looks at other information, such as what was done at the family doctor, and it tries to figure out what are the interesting characteristics that group these patients together. Without actually being told, oh, we're trying to predict whether the patient is a diabetic, or whether the patient has a heart condition, or something like this. And the hope here really is that by analyzing lots and lots of data, we can figure out the commonalities among groups of people that we may be able to leverage in order to predict the long-term evolution of their health, and perhaps also to predict efficient ways of treating them if they do develop a condition. Now, there's a third way of doing things, which is called reinforcement learning.
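Before moving on to reinforcement learning, here is a minimal sketch of that unsupervised grouping idea. Plain k-means clustering is only a stand-in (Deep Patient itself uses a deep neural network), and the "patient record" vectors are made up; the point is that the algorithm is never told the groups exist:

```python
import numpy as np

rng = np.random.default_rng(1)

# unlabeled, made-up "patient record" vectors containing two hidden groups;
# no labels are ever given to the algorithm
records = np.vstack([rng.normal(0.0, 0.5, (40, 3)),
                     rng.normal(4.0, 0.5, (40, 3))])

k = 2
centers = np.array([records[0], records[-1]])    # start from two records
for _ in range(10):                              # plain k-means iterations
    dists = np.linalg.norm(records[:, None] - centers[None], axis=2)
    assign = dists.argmin(axis=1)                # nearest-center assignment
    centers = np.array([records[assign == j].mean(axis=0) for j in range(k)])
```

After a few iterations `assign` recovers the two hidden groups purely from the structure of the data, which is exactly the kind of commonality-finding described above.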
This is actually the field in which I spend most of my time doing research, and just to give you an idea of what this is: how many people here have pets? Okay, kids? Okay, significant others? Okay. So sometimes you have these entities, you have to interact with them, and you want them to do things a certain way, okay? And it's kind of hard to communicate, okay? You know, simply being told doesn't work most of the time, right? So supervised learning is kind of out of the question, and unsupervised learning is just hard, right? You just kind of have to leave them to their own devices to figure out what it is that you want. So what do you do? Well, oftentimes, at least what I do with my kids, is we set up some kind of a reward system, right? If you do your chores, then later on you can go and talk to your friends, or something like that. So, harsh, I know. So this is the kind of thing that you also do, of course, in psychology or in animal learning in a lab. So you have a picture there of a little mouse that's trying to figure out how to push some buttons in order to get to a food pellet. So the food pellet is the reward that it gets. And the mouse learns how to do this task pretty quickly and very well. And the way it does it is basically by trying things out at random at the beginning. But once it gets the food pellet, right, that will kind of reinforce the sequence of actions that it did. And the mouse will remember, and will know that this is something that it should do again. So when the mouse is pleasantly surprised by the outcome, whatever it did before that outcome gets highlighted. So we train artificial agents very much in the same way. We have our agents embedded in an environment, observing the state of the environment and taking actions. And they receive rewards. Now, for automated agents, rewards are just numbers, right, positive and negative numbers. And their goal in life is basically to maximize the sum of these numbers.
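That numbers-in, actions-out loop can be sketched with tabular Q-learning, one of the standard reinforcement-learning algorithms. The toy corridor environment, its single reward, and all the constants here are my own invention for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 5                        # a corridor of states 0..4; reward waits at state 4
Q = np.zeros((N, 2))         # value estimates: action 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.1

for episode in range(200):
    s = 0
    while s != N - 1:
        # explore sometimes (and whenever the estimates are tied), else be greedy
        if rng.random() < eps or Q[s, 0] == Q[s, 1]:
            a = int(rng.integers(2))
        else:
            a = int(Q[s].argmax())
        s2 = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s2 == N - 1 else 0.0          # the only positive number around
        # nudge the estimate toward reward plus discounted future value
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)    # after learning, "right" wins in every state
```

Early episodes are trial and error, a random walk; the single delayed reward then propagates back through `Q` until the greedy policy heads straight for it.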
So the higher the numbers, the better. And the interesting thing is that a lot of problems can be phrased in this way. It's a little bit scary in the sense that learning has to happen by trial and error. But at the same time, this involves a designer much less than supervised learning does. So we don't need a human in the loop there, always telling the agent what to do. And interestingly, agents are actually able to cope with receiving rewards with a lot of delay compared to when they did their interesting actions. So this has led to some really interesting results. This is a video of the learning process of AlphaGo. AlphaGo is a system developed at DeepMind that learned to play Go better than any human or machine. It was really exciting when this happened, because of course, for people who have been around for a while like me, Go has been an even bigger challenge than chess, because it's a really, really complicated game that has strategy and tactics and is just very, very hard to solve. And the system learns by reinforcement learning, purely by playing games against itself and by observing whether it's won or lost. So it's a very powerful methodology. Interestingly, this actually learns better than when we also use data that is coming from people. So if we try to leverage labels from people, that does slightly worse. And that's partly because the system, when it trains by itself, is always solving a problem that is just the right level of difficulty. It's playing against itself, so it's playing against a matched opponent. There's always something interesting to learn. Now, a lot of people say, well, reinforcement learning is good for games. What else is it good for? And so the biggest success of reinforcement learning related to the medical field has been its application in modeling the activity of dopamine neurons in the brain.
This is work that was done in the mid-90s and has been replicated many times since, based on recordings from individual dopamine neurons in the brain. So I'll try to explain to you a little bit how this works. So first of all, what's dopamine? Dopamine is a neurotransmitter. It's a substance that comes and washes out all over the brain, and it's very important in, well, it's basically the reward system of the brain. And it's involved in some pathologies, like Parkinson's disease. It's also involved in addiction behavior. But it's also involved in making people feel good. So how does the brain learn by using dopamine? Basically, what this shows is the activity of these dopamine neurons in the case of an animal that's being conditioned with a stimulus and then produces a response. And when the animal is just sitting around and a reward comes out of the blue, what you see in the top picture there is that these dopamine neurons activate. There's a surprising positive reward, and so there's a spike in the activity level of the neurons. Now, what happens after learning, if for example you have an animal that gets a stimulus, like a bell or a light, and then the reward, the food, comes later on? Well, what happens is that the neurons don't light up when the food comes. They light up ahead of time, when the stimulus comes. They light up in anticipation of the reward. The animal knows already that the reward is going to come. And that's the second row. And in fact, if the reward does not come, these neurons get extinguished and their activity dies down. It's sort of like the disappointment of: we were expecting a reward, and it actually did not come. And interestingly, this pattern of activation of the neurons is exactly mimicked by the computations of the reinforcement learning algorithm, shown in the other graphs. So that was very surprising.
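The quantity the algorithm computes is the temporal-difference (TD) prediction error. Here is a minimal sketch, with a made-up five-step trial and made-up constants, that reproduces all three patterns just described: a spike when a reward is unexpected, the signal moving to the cue after conditioning, and a dip when a promised reward is withheld:

```python
import numpy as np

# a minimal temporal-difference (TD) learning sketch of the dopamine story;
# the five-step trial structure and all numbers are invented for illustration
gamma, alpha, T = 1.0, 0.2, 5
V = np.zeros(T + 1)          # learned value of each moment in the trial
                             # (t = 0 is the cue; reward arrives at t = T - 1)

def run_trial(rewarded=True):
    """One conditioning trial; returns the TD error at every step."""
    deltas = np.zeros(T)
    for t in range(T):
        r = 1.0 if (rewarded and t == T - 1) else 0.0
        deltas[t] = r + gamma * V[t + 1] - V[t]   # TD prediction error
        V[t] += alpha * deltas[t]                 # learning update
    return deltas

def cue_surprise():
    # the cue itself arrives unpredictably, so the baseline value is 0:
    # the error at cue onset is just the learned value of the cue moment
    return gamma * V[0]

naive = run_trial()          # surprise spike at reward time: naive[-1] == 1
for _ in range(300):
    run_trial()              # conditioning
trained = run_trial()        # reward now predicted: error at reward time ~ 0,
                             # while cue_surprise() is large
omitted = run_trial(False)   # reward withheld: the final error dips negative
```

Those three traces, positive at the reward before learning, positive at the cue after learning, negative on omission, are exactly the neuron firing patterns described above.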
It's a simple computational explanation for something that was thought to be a very complicated and mysterious brain process. Now, there are actually other things that you can do with these kinds of algorithms. So this is a project at McGill University that we're doing in collaboration with the Neonatology department at the MUHC and with some colleagues in biomedical engineering. And so here we're actually doing patient monitoring in the intensive care unit. In this case, we're working with some very, very small babies that were born premature. They're about the size of my hand. And so whenever they're born, they have to be intubated, because they're not able to breathe on their own. And so they're ventilated through a machine until such time as the doctor decides that they are ready to be extubated. And in general, you don't want to keep them on the machine for too long, because that damages the lungs in the long run. But of course, if you take the machine off too early, you might need to put it back again, because they're small and it's a painful procedure. So what we're doing is gathering data as these babies are intubated, and when the doctors come in and try to see if a baby might be able to breathe on their own, we try to train the system to do a prediction over this time series of data, to see if the baby is actually ready or not. And we're using for this cardiorespiratory signals that are coming from the usual clinical instrumentation. One of the interesting things in this project is that, of course, the ICU is a really, really hectic place, and oftentimes decisions are actually made based on nurses' notes, or based on a very short interaction with a patient, because this is just the logistics of hospital life. But an AI system can be there 24/7. Whenever the sensors are turned on, it can monitor and help out with that. So what's the future like? Oftentimes people fear, oh, AI is going to replace the radiologists and the doctors and so on.
I actually view it as the AI being in a supportive role, where it's used best, and specifically in order to help the people and relieve the burden of some of the tasks that might be repetitive or very hard to carry out in a realistic setting. A lot of people are hoping that AI is actually going to bring about this dream of having personalized healthcare, where we adapt the treatment to the characteristics of each patient. But there are actually lots of interesting technical challenges still to solve. One of these challenges is the fact that AI systems are really good at predicting things, but they're not really good at understanding why things happen. So in the case of, let's say, the retinopathy example, we can predict what the gender is, but it's not that the system really understands which components are actually driving the prediction. We can look a little bit and understand the motivation of the system, but not really anything in depth. So understanding causal mechanisms is actually one of the big open technical questions that a lot of us are interested in. Thank you very much. Pablo. So I'm going to follow Philippe's lead and turn on my timer to make sure I stay within 20 minutes. So good evening, everyone. My name is Pablo. I'm a senior software developer in Google Brain, and I'm going to talk about how I'm investigating using AI for the creative process, and specifically for the musical creative process. So as many of you probably already know, it's really hard to write good music without good structure, and the type of musical structure that I'm talking about can take on many forms. So for instance, you can talk about the harmonic relationships of frequencies that give you things like consonance, or what pitches sound good together, versus dissonance, or what pitches don't sound that good together.
You can talk about musical compositional forms, so for instance sonata form in classical music, which allowed composers like Beethoven to write really remarkable symphonies. And even nowadays, you can talk about the 12-bar blues in blues and jazz for writing those songs. You can talk about the circle of fifths, which is a structure that anybody that's studied music has learned, which tells you about the relationship between different pitches and how they work well together, how scales work together with different harmonies, and, for improvisation, what scales you can use; and this is all, of course, in 12-tone harmony. And speaking of 12 tones, you can talk about Schoenberg's 12-tone system for composition, which some people don't like as much. And of course, these are just a small sampling of the many different structures we have in music. And one that's really popular and well known, at least in classical music, is counterpoint. So counterpoint is a set of rules that describe the relationship between multiple independent voices that are harmonically interrelated. So all of these voices are kind of their own thing, but they still work well together. And it took a long time to develop this, and it certainly culminated in the Baroque period, and probably the composer that most people know who uses counterpoint is Johann Sebastian Bach. So why don't we take an example from Johann Sebastian Bach; this is from one of his chorales. So chorales are a type of composition where you have four voices, soprano, alto, tenor and bass, and they're four independent voices, but they work well together. So what I'm going to play here is the melody from one of Bach's chorales, the soprano line. So it's a nice melody, fairly simple.
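"A set of rules" can sound abstract, but many counterpoint rules are mechanical enough to check in code. Here is a toy checker for one famous prohibition, parallel fifths, where the pitch numbers are MIDI-style values I chose for the example:

```python
def parallel_fifths(upper, lower):
    """Return the time steps where two voices move into consecutive
    perfect fifths, one of the classic counterpoint prohibitions.
    Pitches are MIDI-style numbers (60 = middle C)."""
    hits = []
    for t in range(1, len(upper)):
        fifth_before = (upper[t - 1] - lower[t - 1]) % 12 == 7
        fifth_now = (upper[t] - lower[t]) % 12 == 7
        both_moved = upper[t] != upper[t - 1] and lower[t] != lower[t - 1]
        if fifth_before and fifth_now and both_moved:
            hits.append(t)
    return hits

# C-G moving to D-A: a textbook parallel fifth, flagged at step 1
assert parallel_fifths([67, 69], [60, 62]) == [1]
# C-G moving to C-E: no parallel motion, nothing flagged
assert parallel_fifths([67, 64], [60, 60]) == []
```

Rules with this kind of checkable shape are exactly the regularities a learning system can pick up from examples, as we'll see shortly.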
So then what Bach does to this melody is he adds three more voices underneath: the alto, the tenor and the bass. And he adds them in a way that respects the rules of counterpoint, and by doing so, and of course introducing his art, he does it in a way that creates these very rich harmonies and really beautiful music. So now we can listen to the same soprano melody supported by the three lower voices. Very pretty. So how did Bach do this? He did this by following the rules of counterpoint, and these rules, as I said, took a really long time to come up with; it took about nine centuries to develop them. That's a really long time. Finding these structures is really hard, and as music progresses and as technology progresses and more people get involved in music, it's going to be hard to develop these types of rules quickly enough to catch up with everything. Luckily for us, as you may have inferred already, deep learning is really good at finding underlying structure in hard problems. So if you take an image, deep learning has shown a remarkable talent for being able to identify what object is in that image, and if you take spoken speech, it's able to identify what the person actually said, and then you can translate between different languages in really remarkable ways. You can even take a picture, and deep learning is now able, as Joelle showed, to tell a story about the picture, not just identify what objects are there, but what's actually happening in that picture. So deep learning, in all of these cases and many more, is really able to find the underlying structure that makes these examples interesting. So why not apply the same techniques to music? And this is where Magenta comes in, which is a subteam of Google Brain that lies at the intersection of music and art, machine learning and creativity. So I collaborate with them a lot.
I'm not actually part of Magenta, but I work with them very closely, and we're really involved in keeping everything open source: we release our code open source, we publish in public conferences, and, more importantly, we want to engage with creative people, creative coders, creative musicians, creative artists. And we want to do this in a way that fuels our research, so that we can develop state-of-the-art generative models for artistic and musical creation. So if you want to check out Magenta, the link is there; it's some very cool stuff. And so we're going to talk about one of the projects at Magenta. This was led by Anna Huang, who's an AI resident with us, and it's called CocoNet: counterpoint by convolution. So Philippe already mentioned convolutional neural nets applied to detecting burnt houses in Sudan; Anna applied them to learning counterpoint. The way she did this was by training a convolutional neural net to complete artificially incomplete Bach chorales, and I'm going to specify what I mean by that. So what Anna did is she took the Bach chorale book. There are 371 Bach chorales, so there are a lot of them, but she split them up into bite-sized chunks, like two to four bars. The reason she did this is that her method is quite computationally intensive, so this allowed it to train faster, and it also gives you more training data. So, having these bite-sized chunks, she selected some notes randomly and just deleted them; this is what I mean by artificially incomplete. So she deleted some notes, and then she passed this incomplete snippet into her convolutional neural net and asked the network to complete the Bach chorale.
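That "delete some notes, ask the network to fill them in" setup is easy to sketch. Here is roughly how one such training example might be built, with a made-up piano-roll snippet; the real CocoNet representation and masking scheme differ in the details:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_training_example(pianoroll, drop_fraction=0.5):
    """CocoNet-style data preparation (a sketch): randomly delete notes
    from a chorale snippet; the network is trained to restore them.

    pianoroll: (voices, timesteps) array of MIDI pitches, 0 = rest.
    Returns (masked_input, mask, target)."""
    mask = rng.random(pianoroll.shape) < drop_fraction   # True = deleted
    masked = pianoroll.copy()
    masked[mask] = 0
    return masked, mask, pianoroll

# a made-up 4-voice, 8-step snippet (rows: soprano, alto, tenor, bass)
snippet = np.array([[72, 72, 74, 76, 76, 74, 72, 71],
                    [67, 67, 69, 67, 67, 65, 64, 62],
                    [64, 64, 65, 64, 62, 60, 60, 59],
                    [48, 48, 50, 48, 43, 45, 48, 43]])
x, m, y = make_training_example(snippet)
# the network would see (x, m) and be penalized against y at the masked spots
```

Because every chorale snippet can be masked many different ways, one book of 371 chorales yields a huge number of distinct training examples.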
So the network will do its best: it'll add notes, sometimes right, sometimes wrong, and you compare what the network produced against the ground truth, the original Bach chorale. And obviously, at the beginning, it's going to make some mistakes, and those mistakes you send back to the network, and you ask it to adjust its parameters in a way that makes fewer and fewer mistakes as it goes on. And so you do this over many, many, many examples, and, as has already been mentioned, these networks do learn these underlying patterns. And what's more remarkable is that they're able to generalize to melodies that they've never seen, or harmonies that they've never seen: you can pass in any melody and ask it to complete it in the style of Bach's chorales. So we did this. We passed in the same soprano melody that I played a few slides ago at the top, and we asked CocoNet to complete it, so the three lines at the bottom are what CocoNet produced, and I'll play it for you now. So, pretty good. So one of the things that's remarkable about this is that none of the rules of counterpoint were actually embedded into the model; the model just learned those structures from the training examples. So it sounds good, it sounds harmonious, it doesn't sound random, it doesn't sound dissonant. Obviously, if you start analyzing it in terms of counterpoint rules, you see that there are some mistakes, like these jumps here that you don't really do in counterpoint, but overall it's done pretty well for having never been taught the actual rules of counterpoint. So I'm just going to play them back to back so you can hear them in more contrast. And now CocoNet. So, pretty exciting. This got us really excited. But why are we doing this? I mean, this is really cool stuff, but why do this at all? Are we trying to get rid of musicians?
I hope you can anticipate that the answer to this is a resounding no. I'm a musician myself, so the last thing I'd want to do is get rid of musicians. So what we really want to do is empower musicians. We want to give them new tools, the next generation of tools, to help them create new art: whether it's getting out of a writer's block, or pushing you into new regions that you wouldn't have gone to on your own. When I get into lyrics: if English, for instance, isn't somebody's first language, maybe an intelligent tool can assist them in writing lyrics that they wouldn't have been able to write on their own. So how are we doing this? One of the approaches we're taking is artistic collaborations. So one of my main projects is working with David on developing a suite of tools to assist musicians and songwriters and creators. And when we started this collaboration, we decided to focus first on lyrics. The reason we wanted to do this is that lyrics are obviously something very important to David, and it's also something where we felt there wasn't a whole lot on offer in the field at the moment. So when we started talking, it was clear to both of us that we wanted a tool that was interactive. So we don't want something where you press a button and you get the next top 40 hit; we really want something where the artist retains control, so it's simply going to be a tool that the artist can feed off of and use to write new lyrics. In addition to that, we want something that's going to be relevant. We don't want it to just produce random lyrics where it's all up to you to figure something out; we want it to be relevant to your interests, as well as to the rest of the lyrics that you've already written, so that it's contextually relevant. And we also want it to be adaptive: styles can vary from artist to artist, and even within the same artist from song to song, so we want this tool to be able to adapt to whatever style you're trying to write at the moment. And so the way we
did this is we trained a recurrent neural network, which is a different type of neural network that I'll talk a little bit about in the next slide. We trained it with a public data set we found on Kaggle, which is the top 40 hits over the last six decades. So we passed this data set into our recurrent neural network, trained a model, and then we used this model in our interactive tool to help with lyric writing. So what's a recurrent neural network? You can think of the model as this M, this M square here, and what it's trying to do is predict the next token in a sequence of tokens. These tokens can be words, they can be notes. The inputs that you get are the X's at the bottom, and these are fed into the model, and then the model says, okay, I received X0, I'm going to guess that the next output is going to be Y0. And then it'll compare that against the next input in the training data, and you'll see if it got it right. And while it's doing this, it's updating an internal state. So you can think of this kind of like a memory. So it said, I received X0, I produced Y0, and I'm going to try to remember that. And so in the next step it's going to say, now I'm going to receive an X1, I'm going to predict a Y1, and it continues in this way. So these H's are maintaining a sort of memory, so that when it's predicting Y3, it's predicting Y3 not just because it's seeing X3 in this iteration, but because it's seeing X3, and before that it saw X2, and before that it saw X1, and before that X0. So to make things a little more concrete with text, which is what we're working on, we can imagine training a model over lyrics. So if you start off with the word "this", you feed it to the model, and the model might say, okay, I think the next word is going to be "is". And then you compare it against your data set, and, oh, you got it right, great, good for you. Update your internal state, and now it says, okay, I've seen "this is", and I'm going to guess the next word is "a", and so you got it
right. And now you predict "song": "this is a song". That makes sense. And then it turns out you got it wrong: the next word was "hit". So at this point you adjust internal parameters to try not to make this mistake the next time. But now that it's seen "this is a hit", it's going to say, okay, I think the next word is "song", and it will continue in this fashion. So we do this over many, many lyrics, and the model updates its parameters to try to reduce the number of errors it makes. So we trained this model, and then I generated a bunch of lyrics for David, and here are some samples of the lyrics it produced. So I sent these lyrics to David, and he called me immediately. I picked up my phone, really excited to hear what he had to say, and David said: Pablo, these lyrics are terrible. I can't use any of them. The model doesn't work. What are you going to do? I kind of like the "green ladies" line, but I guess it didn't do much for David. So David was unimpressed. So I went back to the model and started digging into it. There's a kind of black magic to machine learning; it's not all these abstract models where you just throw data at them and they train. There's a lot of work, and Philippe alluded to this, that has to go into tweaking the parameters, tweaking the way you train it, tweaking what data you train it on; that changes the way your model behaves. So I went back, and what we decided to do is use some lyrics from some previous songs of David's; these are the lines in red. And so we would prime the model's internal memory that I was alluding to and ask it to continue the lyric. So we'd give it "give me a sign" and say, okay, now complete this, and it would produce "of the stars, and I want to say that I'm going to be the one". So I sent only the white lyrics, which are what the model produced; these are just a sample of them. I sent a bunch of them to David, and this was actually something he could work with. So David spent a weekend looking over these lyrics, trying to make sense of them. He used some of
them as is; some of them he rewrote himself. But this was very much in the spirit of what we're trying to do. And so David rewrote one of his songs, Sparkle and Shine, and we released it in a video last year and revealed it at a big AI conference. And we considered this a great success as a proof of concept of what we were trying to do in terms of building these interactive musical assistants. So this whole process taught us a lot about the difficulties in developing these models. For this particular case, one of the things we learned pretty quickly is that we were asking too much of the model: we were asking it both to learn English and to write a good song, and that's really challenging for most of us. But on top of that, we were kind of constraining it by giving it this really weird data set, which is just lyrics. This data set has a lot of problems. Some of them: there's really weird spelling, so instead of writing "because", people write "cause", or "love" is written as "LUV". Pop songs are not the most varied in terms of the themes that they talk about; they often end up just talking about love, so it's not great if you really want a broad spectrum of English. And the other thing is that pop songs repeat lines a lot, and the model learns this very quickly: if I just repeat the line you gave me over and over, I'm going to do okay. And that's one of the things we found with this model. If you think about averages, you can almost think, in an abstract sense, about the average pop song. If you just produce the average pop song, most of the time you're going to do okay, and the model ended up doing this: it ended up just producing the average line. Whenever we said, give me a line, it would just say the average line. So if you want to know what the average pop line is over the past six decades, it's "you know that I'm the one", and you can add "baby" at the end and you're still okay. So the model very
quickly learns that if it just gives me this line it's gonna do okay it's gonna it's errors gonna go down up to a point where it's satisfied and so this is one of the challenges we're dealing with we don't want the model to do this we don't just care about it reducing its error we actually wanted to produce interesting lyrics and part of the one of the initial solutions we found is just by trying to change the internal memory by giving it more interesting lines so we're really just beginning this journey we're extremely excited by what we're going to produce we're hoping to release this to the public pretty soon so that all of you can write the next top 40 hits and we're hoping to get more feedback from more artists and people that are excited by these technologies but there's a lot that we want to work on and there's a lot that we can work on but we're hoping that this promotes a new field in terms of interacting agents interacting musical assistants using AI technologies thank you very much and as a special coda we decided that because David rewrote a song we decided that since he's here he should sing it for us hello so the trick has been to find lyrics that push the songwriter outside of their normal comfort zone so I rewrote one of my original songs and this is I think one of the first examples of lyrics written by an AI in collaboration with a human me lay you out under the stars the skies on fire streets are all empty tonight I'm by your side a little more time God won't deny lay you out under the stars tonight sparkle and shine secrets we held close for years they all faith is the last thing we lost just one more night if God doesn't mind secrets begin to unwind sparkle and shine thank you very much ok so now we want to open it up to questions from you guys ok there are two mics down here if you have questions just come down to the mics and we can start off really while you filter down I'm going to go with the first question and I thought I would start with 
the high-level stuff. How far do you think we're going to go? AGI — artificial general intelligence, that is, artificial intelligence that can work at a high level across many verticals and industries, the stuff you see in movies. Are we going to get there? Are we on an exponential path, as we've seen with computer hardware? And if we are, how long is it going to take?

I can start. This idea that you would get an artificial general intelligence that becomes exponentially better is rooted in the idea that you would have an artificial intelligence that figures out, "hey, I can improve myself", and starts improving itself. And it all comes down to a question: do you think you can reprogram your brain?

Well, do I feel like I can reprogram my brain? I personally think that we are going to follow an exponential curve. We can go from the examples of the past to where it's going to go, and we've seen in the past that we really have been on an exponential curve with hardware and software. Seeing as it's purely digital, I see the same path, where we'll have simulated AGI at first — all these technologies simulating what real consciousness is, without being conscious — and then the lines between the two will begin to blur, to where we won't be able to tell the difference. That's my own personal view.

I feel there's a lot of work to do on the software side. The hardware is awesome — our cell phones are better than the supercomputers that took us to the moon, and that's all really good — but on the software side I think we still need to understand some basic principles. Right now we can build really good programs that do some very specific things, like playing Go or labeling cancers and so on, but we don't know how to make one brain that does all of those things. Maybe it's too ambitious to say one brain that goes and masters Go and does this medical stuff, but to make a brain that is reasonably competent at many things — that's something we should strive to achieve, and we don't quite have it yet, although I think we're making some progress understanding the basic principles. And it's interesting to think about not just because we want to build AGI; I think it also gives us insights into the way our own brain works.

I actually am not sure I completely believe in AGI. One of the difficulties I have with fully believing in it is this issue of self-reference. As humans we're able to do self-reference: we can think about ourselves, and think of ourselves thinking about ourselves, and you can go on and on. All of these methods are built on mathematics, and it's well known that mathematics is incomplete — there are certain statements that are neither true nor false in mathematics, and there's a close relationship between this and the idea of self-reference, where at some point it just bottoms out. With systems built on a mechanism that we know has fundamental limits, it's not clear to me that that's enough. Our brains aren't necessarily following the mathematical rules that we use for all of these mechanisms, and maybe there's another formalism we just haven't discovered — it's hard to predict — but within the current mathematical framework that we have, I have some difficulty being a full believer.

I must say, I find it incredibly difficult to make predictions for the long term. I find it really, very challenging; things are changing so quickly. I will add, perhaps, that it may be hard to know when, or if, or how we get there, for several reasons. It may be very hard to know whether these systems are able to have this self-reference, whether they are able to have consciousness, and so on. And at some level, we always have these tests, right? Can you play chess better than a grandmaster? Can you create a chorale that is indistinguishable from the style of Bach? All of these tests, sort of by definition, are very narrow in nature, and I don't know what a good test for general intelligence is. Without a test, as a scientist, it is very hard for me to know whether we have reached it or not. There is no doubt that we have much further to go, and there is no doubt that we are progressing along that curve.

Absolutely. Okay, I would like to open it up to questions now.

Thank you for the presentations — it was really interesting. One of the questions I wanted to ask is: how do you see ethics and AI coming into play in your day-to-day work, and what do you see as the future in that area? Do you see it similar to what is happening with corporate social responsibility, or do you see a bigger, more substantive role?

I can start. We have talked about several applications where ethics arises very quickly; it is an integral part of the projects from day one. One of the areas where it starts is this notion of reinforcement learning, which Doina brought up: how do we set the consequences for a system in order to train it? There are really tough challenges in defining what the right behavior of the system is and how it should be rewarded. In particular, when you think of the use of machine learning for medical systems — if you have a system that is making decisions about the treatment of patients with chronic conditions — you often have many parameters that you have to trade off: the individual's quality of life, their prognosis, the use of their data in training the system, which reaches into privacy, security, and so on. So for many of our projects, more and more, it is an integral part of the discussion that we are having. In some projects there is also a tension between the individual's interests and concerns at the societal level, and that is another discussion that is starting to enter many projects.

It is a really important discussion to have. I think a lot of our organizations are
actually preoccupied with this. There is an umbrella organization called the Partnership on AI, which all of our organizations are part of and support, whose role, in some sense, is to address these questions. But I think ultimately ethics will have to come in much more strongly across the whole of computer science, like it has in medicine, for example. In fact, one way to think about ethics in AI is to think about the example of genetics, which is also a very fraught field, but where people have actually found interesting ways to take the data, make the data public for the greater good, and at the same time protect individuals and put in all kinds of checks and balances. I think in AI we are going to see the same kind of thing, and a lot of different organizations in different parts of the world are thinking about how to set up this kind of framework. I think it is a crucial thing to have in mind for all researchers at all levels.

Our community has been seeing a lot of growing interest in thinking about ethics and morality as we build these things. I can build this thing — should I build this thing? That is something we should all be asking ourselves. But beyond that, on this question of ethics: these mechanisms are amoral; they don't have morals, the technology doesn't. It is how you put it to use that really makes a difference, and how it is allowed to be put to use, and this is where larger organizations like governments can play a big role. But in order to play a big, effective role, they need to be educated about what these technologies are and what they aren't. So these types of events, where we are interacting with people who aren't necessarily researchers in the field, are extremely important for making sure that the general level of understanding of these technologies is enough that good policy can be developed.

If I had to add something: very often we think of the ethics of AI in terms of AGI, and there are a lot of scary things that come to mind when we think about the future of artificial intelligence. If we try to build ethics on that, it becomes very challenging; it becomes very hard to think about. I've heard someone say that it's a bit like trying to do the ethics of mathematics — it's such a wide field, it's so diverse. The important thing to keep in mind is that this technology, artificial intelligence, is opening up new tools, new ways to do things, in a bunch of different industries, for a bunch of different applications. Each time we do that, we should take a close look at what it means in that specific industry, for that specific application. I don't think we're doing that systematically enough, and that's what I would encourage us to do as a society. We've reached a point where technological growth is so fast, and new technologies are coming up so quickly, that we should be a lot more systematic about analyzing their impact on individuals and on society more generally. And this is not limited to AI; it's any technological advance, in my opinion.

Thank you. Hi, how are you doing? I've been listening to a lot of stuff on AI lately, and I'm sure everyone here has — sort of like Alex Benet, because he was here originally at the museum. Fascinating. Are you familiar with the film Do You Trust This Computer, that Elon Musk put out? It's alarmist — 100% alarmist — but it is a consideration: is it possible that AI could become smarter than us? You talk about ethics, and I love AI, I absolutely embrace it — bring it on. But computers are ones and zeros, and ethics runs in the grey. So if a computer becomes smarter than you, how does it learn ethics and morals and culture and make decisions? Like in healthcare: if it's going to get rid of cancer, does it get rid of the host? Decisions like that. I don't think we're that clear.

I think, as Pablo said, these things are amoral; they just mimic the rewards that we give them. So when we build one of these applications, it's really important to ask this question: what do we want this thing to do? The classic example is the paperclip optimizer that starts finding iron everywhere and just doesn't care about anything else, because if the only reward it has is "I created a paperclip, therefore I'm happy", and we give infinite power to these systems, this might happen. This is where we should ask ourselves: should we keep that red button? Because we know we'll never get the rewards perfectly right. How do we build these systems? How do we put the right framework in place so that we can build systems that we can trust and that can evolve? If we go beyond that, I think we're living in that dream of AI that we're afraid of on an existential level — a bit like people were afraid of machines when machines were coming and taking their jobs — and I'm not sure that's the most productive use of our time. But every time we use AI to build something, we should ask ourselves these tough questions — how do we make sure this thing keeps on benefiting society? — and I think that is where we make the best use of our time.

I think there are two aspects to this. One is, when you have young kids and you want them to learn morals and values and so on, you demonstrate the behavior and you demonstrate what's important to you. If you value education, you demonstrate
that. They're like sponges; they will learn it. And I honestly think we should think the same way about AI: we provide the data sets, we provide the reinforcement signal, and we need to model what is acceptable and not acceptable in a similar way — and we need to be patient and put some thought into that. The other aspect is that AI is not going to take over the world by itself; it's going to work with us. There will have to be people in the loop, and there will have to be AIs that can take this information as the system goes along, immediately internalize it, and correct any kind of behavior. So we need to find ways to interface with the system and provide that information right away to correct problems.

I also think there's a lot of emphasis nowadays, with all the hype that AI is getting, on the killer-robot scenario and on whether AI will become smarter than humans. I think we're very far from reaching that anytime soon, and when we reach that point — if we reach that point — the landscape is going to be vastly different from what it is now. So I think it's more important right now to focus our efforts, as Philippe alluded to, on the specific cases where AI is being used, and make sure we're applying AI in those cases in a very ethical and moral way. If we do this as we iterate, and at each step we make sure that we're following the rules of ethics, then this reduces the likelihood that we'll reach a point where the AI just goes rogue and we lose control completely.

I'll add one point, and I'll start with something a little bit technical. You mentioned computers are just a bunch of zeros and ones — but if you give it enough zeros and ones, the computer can actually represent a thousand shades of grey, so it can capture that. Now, one of the difficult questions becomes: what's your favorite shade of grey? Maybe you like a particular dark grey and I like a particular light grey. And when it comes to telling the computer which is the best, the most moral, the most ethical shade of grey — even if we have humans in the loop, even if we're trying to teach it — the biggest challenge may be us, as a society, agreeing on what that right shade of grey is that the computer should prefer. What are the right morals? What are the right ethical principles by which we want these machines to behave? I think for us as a society, as we integrate this technology into our activities, that is going to be the largest challenge. It's not going to be controlling the machine; it's going to be figuring out the right set of principles that we want to program into these machines.

Thank you very much — it's not the machine I was worried about, it's us. But thank you very much.

And just to keep it going a little bit — not talking about a superintelligence taking over the world, but talking about the distance between the research and the application. In the applied world, it's about building tools — tools and algorithms that business actually uses. How do we prevent certain governments, certain bad actors, from co-opting what's being built in the research lab, with good intentions and good control, and putting it into the world to be used in ways it was never intended? The distance between the research and the application is a long way, so maybe we don't need superintelligence for AI to really mess things up.

I think it ultimately boils down to how it's used — it's how humans are using these systems. We have laws, some better than others, but we iterate on those laws, and I think that's where policy can really play a big role, at least for now, in how these technologies are implemented. More education and more awareness of these technologies among non-researchers, and these types of interactions between researchers and the general public, are huge and important.

I think all of us are very active in research, where we are able to think about very abstract mathematical models, but we are also all involved in very practical problems, and as a scientist it's a very rich space to be in — to navigate the purely mathematical, the very applied, and everything in between. One reason this is so rich is that, on the one hand, doing applications gives us an opportunity to see how our theoretical models can actually solve problems, but it also feeds back: when I solve a practical problem, it keeps me very honest about the right assumptions I can make in my mathematical model. So the two sides really feed off each other, and I think it's important that researchers stay involved in practical problems.

There is something about ethics research in computer science that's interesting. If you think about ethics in physics — theoretical physics — they had the atomic bomb. That grounds you as a researcher: when you see you can have that kind of impact on the world, suddenly you become very aware of the importance of integrating ethics very deeply within your field. I'm not entirely sure we have this deep understanding in computer science — the importance of considering ethics as an integral part of our research work. I think people are very well-intentioned, everybody is, but this idea that the impact can be really big is maybe not as present in computer science research as it is in some other fields. So let's not wait until something like that happens; let's try to drive this ourselves.

There's also the ability of very small teams to co-opt what's been built and change what it's being used for, because it is computer programming: you can get a very small team to make the original algorithm or program do something it was never intended to do. It's very simple to do — it's hacking, really.

And really, especially in this context, a lot of the work that we do is open-sourced and accessible to everyone: the code that we develop within the universities is
accessible; in many cases, the code that we develop within the companies is also open-sourced and accessible. We basically also need to be met halfway by social scientists, by ethicists, by policymakers, who have the expertise for designing laws and policies and regulations. Right now, as Pablo was saying, we go out into the world and talk about what the technology is and what it isn't, and where we are now, in the hope that this brings everybody to a better understanding of what kinds of tools AI provides. But at some point I think we would really welcome interaction with government and with policy forums, to discuss how to make sure the technology continues to be used in a good way.

I really like the example of a hammer. A hammer is a very useful tool — we all have a hammer in our house, and you can really hurt yourself with it — but we all know, for the most part, how to use it in a way that does not hurt ourselves or other people. AI is no different: it's a fancy tool, but it is a tool, and we need to understand how to use it properly.

First, I wanted to say thank you very much to the museum and to the panel — this has been phenomenal and very, very accessible, so thank you. I want to pose a fun question, moving away from the ethics. Has this work into understanding how far this technology can go made you think more about what makes us human, and if so, what it is? And if not, are there any other interesting philosophical questions that came out of this? Thank you.

It's a fun question, but it's not an easier one. If you look at a lot of breakthroughs in technology, we often come back to these questions. Think of the first person who saw a car go faster than a human, when their claim to fame was being the fastest human in the world — that's something to face. I'm kind of an amateur Go player myself, and I remember staying up really, really late two years ago or so, when there was the first match between Lee Sedol and AlphaGo. I just remember the face of Lee Sedol as he was figuring out that he was losing. The best Go player in the world had always been a human, and suddenly it wasn't anymore, and the entire world of Go was shaken by that. It was a telling story, because everybody believed computers were still so far out — although it had happened in chess before. And what was fun, and poignant, was to watch the world of Go come to this realization. They eventually came to the view that maybe we play Go for fun, maybe we play Go because we enjoy the challenge it poses us; maybe we can leverage these new tools that are coming to us to try to understand Go better, find beauty in it, and still have fun playing it. That is a question they had to ask themselves. Now, Go is a very precise problem — it's the kind of thing AI can do, and we're really far from being able to do that on imprecise problems — but I think this is a kind of realization we'll have more and more, and eventually, I think, we'll understand better what it means to be human.

I think it's really interesting to think about this idea of what makes us human. In some ways AI actually does help us understand, perhaps a little bit better, how intelligence works: what's important, what the principles are that matter for intelligence, what we actually need, and what is maybe just an artifact of nature or of evolution. But one thing I'll say is that my kids, for example, don't seem to be fazed at all by the fact that their ability to translate individual sentences from English to French is actually worse than Google Translate's. They don't feel inferior in any way; they're just so happy about it. They go and tap it in and it comes out, and it's just a very powerful thing. I think it really depends on the mindset. The generation coming up now has grown up with technology; it's very natural to them and it does not faze them at all. For me it's very strange — I grew up without Google, and I used to go to the library and go through all these books to find information; they don't understand that. I think our understanding of ourselves, and our view of ourselves, is also going to evolve, and I agree with Philippe: we're just going to continue to have fun.

On the idea of being human — and maybe to bring it back a little bit — the concept of intelligence is somewhat ill-defined. We have this abstract notion of intelligence, but intelligence is a whole spectrum. If you're very good at math, does that mean you're intelligent? If you're not good at math, does that mean you're not intelligent, even if you're really good at writing novels? There's a really wide spectrum, and perhaps one of the things we can learn from AI — which, as Joelle alluded to, is very good at very specific tasks — is this: you have a plethora of AI agents, each intelligent in its own way, and maybe what we can learn from that is that we're all human in our own way. These technologies that we're developing can hopefully help us improve our lives, as the individual people we are, in ways we weren't able to before — personalized healthcare, or a personalized game partner, or what have you.

Thank you so much to my panelists. I know that some people still had some really great questions, and I'm sure we could continue to talk about artificial intelligence for days, but I am going to cut it off here, because some people might need to go home and have early mornings tomorrow. So thank you so much for coming tonight, and thank you so much to our amazing panelists — I think we're very lucky to have such an amazing group with us. We are going to continue out in the lobby, where there will be some refreshments, and we have some examples of artificial intelligence games that you can try to play. The panelists aren't leaving yet, so if you did have a burning question, please come and approach them in person, and they'll be happy to talk with you. So thank you, and thank you so much to this amazing group tonight.
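As a coda of a different kind, the workflow Pablo describes — train by predicting the next word, then prime the model with a seed line and ask it to continue — can be sketched in a few lines of code. This is a deliberately tiny stand-in, not the neural network with internal memory used in the actual project: a bigram word counter, with a made-up three-line corpus and invented function names, purely for illustration. Even this toy reproduces the "average pop line" collapse he mentions, because greedy decoding always picks the single likeliest next word.

```python
# Toy sketch (an assumption for illustration, not the model from the talk):
# a bigram "next-word predictor" trained on a tiny invented lyrics corpus.
from collections import defaultdict, Counter

def train(lyrics):
    """Count word -> next-word transitions (a stand-in for learning parameters)."""
    model = defaultdict(Counter)
    for line in lyrics:
        words = line.split()
        for cur, nxt in zip(words, words[1:]):
            model[cur][nxt] += 1
    return model

def generate(model, prime, max_words=8):
    """Greedily continue a primed line.

    Greedy decoding always takes the likeliest next word, so the output
    drifts toward the corpus's 'average' lyric -- the collapse described
    in the talk.
    """
    words = prime.split()
    for _ in range(max_words):
        cur = words[-1]
        if cur not in model:
            break  # no transition seen for this word: stop generating
        words.append(model[cur].most_common(1)[0][0])
    return " ".join(words)

# Invented mini-corpus, skewed toward the "average" line on purpose.
corpus = [
    "you know that i am the one",
    "you know that i am the one baby",
    "give me a sign of the stars",
]
model = train(corpus)
print(generate(model, "you know"))
print(generate(model, "give me"))
```

Sampling from the full next-word distribution instead of taking the argmax softens this collapse, which mirrors the panel's point that tweaking how you train and decode changes how the model behaves.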