Thank you for coming this evening. This is the fourth Purdue Engineering distinguished lecture this semester. This lecture series started in 2018, and the intent is to bring world-renowned scholars and professionals from around the world to Purdue Engineering to spend some time in deep conversations with our students and faculty about the grand challenges in the discipline. As part of the visit they of course meet with a lot of people, but they also give a lecture, which is going to happen now, and they take part in a panel or fireside chat, which just happened, for those of you who were able to make it to that excellent event as well.

So without further ado, I'd like to introduce Dr. Dimitrios Peroulis, the Michael and Katherine Birck Head of the School of Electrical and Computer Engineering as well as the Reilly Professor of Electrical and Computer Engineering, to introduce our speaker for this evening.

Hello, good evening everyone. It's really my special pleasure, and our wonderful honor, to have Ms. Lila Ibrahim with us tonight. Our very own, as I would like to mention, because Ms. Ibrahim not only got her bachelor's degree from the School of Electrical and Computer Engineering, she was also raised in West Lafayette. We also have the pleasure of having her parents here with us tonight, so let's give a round of applause for the whole family. It's really wonderful.

Now, on a more technical note, Ms. Ibrahim is now with DeepMind. In fact, she is their chief operating officer as of April 2019, and she is really taking DeepMind to the next level in managing its next phase of growth. She is actually their first-ever chief operating officer. Before that, she was the chief operating officer of Coursera, where she was responsible for talent, people operations, and business operations in finance, IT, and facilities. Now, before Coursera:
She was also the senior operating partner at Kleiner Perkins, where she focused on the firm's digital and green-tech portfolios until 2015. As if that were not amazing enough, before joining Kleiner Perkins she was with Intel for 18 years, where she spanned technical, marketing, and management positions, all of them at the leadership level, and she also served as chief of staff to Intel chairman Craig Barrett.

Among her many honors, she is a Henry Crown Fellow at the Aspen Institute, an honor bestowed on emerging leaders. I would also like to mention something on the more personal side: outside of work, Lila has established several computer labs at the Lebanese orphanage where her father was raised. She holds, as I already mentioned, a bachelor's degree, and again, it is a huge honor to welcome Lila. Thank you so much.

Thank you all for joining me tonight, and those of you online as well. It really is a great honor to be back at Purdue. As mentioned, I joined DeepMind 18 months ago, and I'm excited to share what I've learned as DeepMind's first chief operating officer, knowing that we're on this journey of learning. That includes my time here at Purdue, where I've spent a very packed visit learning about the exciting things happening on campus.

Today's talk will focus on artificial intelligence. I hope that I leave you with some optimism about what this transformational technology can bring, but also with some awareness of the types of questions and considerations that we have underway at DeepMind. This is a picture of my visit in July.
I'll get out of the way so you can see my cute twin daughters. As mentioned, I was raised in Lafayette, and I was back here over the summer. It just happened to be the Apollo 50th anniversary, and it was so exciting to bring my kids here, to participate in Purdue's activities, and to get them excited about engineering as well.

The reason I wanted to share this picture is not only because it's truly an honor to be here, but also because when I took this job working on AI, this is what I keep in mind. My daughters are growing up with technology that is unprecedented in the way it has impacted all of our lives. So what their future is going to look like, with AI as an ingredient in their lives, is something that's very much on my mind. What's also on my mind is my responsibility as a pioneer in this field, and the type of world I'm creating that my daughters and this generation will live in. So that's just so you understand my own personal motivations for some of the work in AI. It's right there.

So today we'll cover three topics. First, I'll give a brief overview of DeepMind. Just a quick show of hands: how many of you had heard of DeepMind before my visit? Oh, wow, great. Okay. I'll then talk a little bit about the kind of research we're doing, about the responsibilities we have in doing it, and why we have to proceed with the utmost caution, holding ourselves to the highest standards. And in the last section I'll do a little bit of reflection on what we're doing, and then we'll open it up for Q&A. That's just to give you an idea of what to expect tonight.

So first, a quick introduction. And what did I do to prepare for this role? As the first chief operating officer, my responsibilities include research engineering; the games environment in which we train and test our agents; and traditional operations: people, finance, legal.
I have ethics researchers working for me as well, so it's a pretty broad scope, and in some ways I think my time at Purdue really prepared me for this, as did my career.

So maybe before I get into DeepMind, let me take a moment to introduce myself. As I talk a little bit about my background, keep in mind that I was reading on the Purdue website about the Engineering 2020 initiative and the type of work Purdue is really trying to invest in: having Purdue alumni in leadership roles to respond to the global technology, economic, and societal challenges of the 21st century. So I think you've made a good choice by being here.

So, first step on the way: those are pictures of me as a child here in Lafayette. This was a really formative time for me, because I grew up as the dark-haired kid in my neighborhood and in my school. I was different. I was a child of immigrants; English was my second language, and so I didn't know some of the basic terms that were part of American culture. In fact, yesterday I caught up with an elementary school classmate over at the Burke Building, and I remembered listening to American radio and having, what was it, chicken and dumplings for the first time.

So this is my childhood, and what was really transformational about this time was the Midwest values that I grew up with, and really becoming comfortable with being different. One of those pictures up there is from my time as an exchange student in Japan in 1986, where I went from being the foreigner in America to being the American in a city of 40,000 people in Japan. I stuck out a lot, but I learned Japanese, and it really got me comfortable being in an environment I didn't know much about and having to sort through things.

I joined Purdue as an electrical engineer. I was a co-op student at Purdue, and I co-opped with a company called Intel. You've heard of Intel? Okay. Before I went to my co-op interviews, my dad said: don't tell them
You don't like computers. So that was very good advice. I got the co-op job at Intel, and I worked on something called the Pentium processor as my first co-op experience. I then went on to an 18-year career at Intel that included working in eight different roles in three different countries.

I worked on DVD standards in Japan at a time when people didn't think you would watch movies on a computer. I got to help build internet and compute infrastructure in countries and communities that didn't have internet access. There was fear about jobs going away because of internet access and computers, and instead we had to take an approach of involving the local community, to integrate this technology in a way that was deliberate and purposeful and had a positive impact.

My time at Intel was phenomenal, including, as mentioned, serving as chief of staff to the CEO and chairman who made Intel a powerhouse. His background, in order to be CEO of Intel, at the time a 40-billion-dollar company with 85,000 employees, was a PhD in materials science and a lot of on-the-job training. So when you hear my background: I just have a bachelor's degree in electrical engineering and a lot of really diverse experiences.

So I was recruited out of Intel to Kleiner Perkins. This is the venture capital firm that made some of the first investments in companies
You'll have heard of, like Google and Amazon. So I went into a well-established, prominent venture capital firm, and I took my intrapreneurial skills from Intel and applied them in an entrepreneurial environment.

From there, I did diligence on a lot of companies, including Twitter, Coursera, and handfuls of others. After doing the diligence on the company that became Coursera, I went in, before the company was even 40 people, as the first outside business executive, to partner with the two founders, who were both faculty members at Stanford and needed some help in building the company. My generalist skill set and the diversity of experiences I had, founded in technical roots, enabled me to do that job.

I then decided I would take a year off to catch up with life, because it had been pretty busy. About one month into my year off, I got an opportunity to interview with DeepMind in London. Moving from Silicon Valley to London was quite a transformation; I was interviewing in January, with London weather, though the Indiana weather had been beaten out of me by then.

Today, DeepMind is really going to be my focus, but I thought it was important to give you my background, because you'll see that I've done a little bit of a lot of different things. Moving into an executive role at a company that is pioneering, doing something that's never been done before, relies a lot on having people at the top who can help navigate, who are grounded in values, and who understand the impact technology can have on the world. One of the things I've learned in my career is that I have a passion for where technology intersects with societal impact: how can we make technology benefit people, benefit the economy, and create more opportunity?

So I joined DeepMind, and when I first interviewed, I looked at this mission.
Here's our phenomenal mission, which I thought was quite audacious: solve intelligence, and then use that to solve everything else. This is DeepMind's mission. Pretty straightforward, right?

So let me give you a little bit of background on DeepMind. In 2010, DeepMind was founded in London by three people: Demis Hassabis, Shane Legg, and Mustafa Suleyman. Shane and Demis had met at university and had this crazy idea about how to solve intelligence. They got to the same idea from very different routes: one was a child chess prodigy who had run gaming companies and studied neuroscience, and the other came from a math background. The fact that they had aligned was pretty phenomenal. In 2014 they were acquired by Google, but then Google had the big restructuring into Alphabet, so right now DeepMind is a sister company to Google within Alphabet.

The founders were inspired by the idea that if we could understand what intelligence really means, then we could recreate it. And if you can recreate intelligence, could you create a tool that could help solve some of the questions facing society? The underlying aspect was having a positive impact.

Now, what is intelligence? Surely there's got to be some roadmap for this, right? Well, it turns out it's a pretty contentious question, because there's not a clear definition. Here's one from Richard Gregory, one of the most prominent psychologists of our time, who said in 1998: innumerable tests are available for measuring intelligence, yet no one is quite certain of what intelligence is, or even what they're measuring.

So it's really not that clear. A DeepMind research scientist explained it to me this way. She said, give me the name of a friend. I said, D.
She said, okay, let's say D is the representative of human intelligence. If I asked D to take a seat, she would know what a seat is and what it means to sit. And then if I asked D: hey, can you play a game of checkers with me? Can you navigate your way around the Purdue campus? Let's cook dinner together. All of those things you could probably ask D, and D would generally understand and be able to do them. She'd do just fine.

So it's one learning system, one brain, doing a lot of different activities and performing adequately in all of them. Think of that as general intelligence; that's how we think of it within DeepMind. Contrast that with a less intelligent system, for lack of a better example: the Indiana state bird is the cardinal. A cardinal can fly around and navigate Indiana, but ask it to play a game of checkers and you're probably not going to have much luck. So again, the generality aspect, and I'll talk more about that, is how we think of intelligence within DeepMind.

Now, most people hear "artificial intelligence" and think of one of these activities. Who's used Google Translate? Oh yes, great, thank you. Watched a movie on Netflix, or listened to something on Spotify because it was recommended? Okay, or you're typing a text message and there's auto-predict? These are all examples of narrow AI: fed a lot of data and, based on that data, given a specific direction, a specific action. This is really what we think of as narrow AI. But what is DeepMind doing that's different?
I show you this very colorful background, and I think: okay, advanced AI. We want something that can process a large amount of data, different types of information, about the world around us. We want a system that's going to make sense of this data and learn from it. The kind of AI we want to create doesn't need to be told what to do; maybe it doesn't even know how to solve the problem, or even what the problem really is. But we think it could find patterns that humans might take years to spot, or maybe never see at all.

One example: maybe you look up at the stars at night and you wonder, is there a pattern here? Is there something that I don't know? Despite all the brain power that humans have, there are so many problems that we haven't been able to solve, and the idea with this artificial general intelligence is: can it be used as a tool to help us solve some of these problems and spot trends that we can't even imagine?

So the world is full of complex problems. How do we make sense of the universe? What could we do about climate change? How do we think about new materials? Can we prevent disease? Can we manage disease?
As I mentioned, the human brain power within this room, this campus, this state, this country, this world, at this time, has still not solved so many of the challenges facing humanity. So our idea is to build this tool, this advanced artificial intelligence system, that can learn how to complete a broad set of tasks and perform at human level across those different tasks. It could be one of humanity's most useful inventions; at least, that's why we're excited about it. We think artificial intelligence can help scientists, engineers, and others unlock knowledge and answer some of the fundamental questions.

Now, at DeepMind we think about this in two different ways: AI as a science, and AI for science. AI as a science is really asking: how do we think about advanced artificial intelligence, and what can we do to advance what that really means and what it's capable of?

If I were at a product organization, which I have been, and I were developing a product, I would think about: okay, what problem am I trying to solve? Who are my users? How can I get a product prototype, test it, iterate quickly, and get it to market? But we're talking about the fundamental building blocks of intelligence. I bet if you talked to your neighbor and gave each other answers, you'd come up with completely different ideas of what the fundamental building blocks of intelligence are.

So we've taken a very scientific approach at DeepMind. We come up with our hypothesis, and we do this every six months: what's our hypothesis on the research? Then our research engineers help us build it out, we test it, and we pivot based on that. So it's not like there's a super clear roadmap. Instead, we're taking a very long-term, scientific approach to solving this problem. There are three main hypotheses that underpin a lot of what we're doing.
Those are: generality, which I mentioned before, building a system that can learn, adapt, and perform across many different tasks; creativity, because if you want an artificial intelligence system to actually learn on its own, you have to give it the ability to make inferences for itself, and from that comes an element of creativity; and handling the complexities of the real world, because the real world is messy, imperfect, and ill-defined. So these three things, while they seem pretty simple, are the main vectors for our research, and they are still pretty complex in themselves. I'm going to step through an example for each of them.

Has anyone heard of AlphaGo? I appreciate the audience participation, thank you. So Go is a game that's played in Asia. It's over 2,000 years old. Think about that: for 2,000 years, humans have been playing this game. They have been studying it, perfecting it, studying strategies, adapting. And so it became a holy grail of artificial intelligence, because surely no machine could learn how to play Go: there are more possible board configurations than atoms in the universe. It is literally impossible to predict every single move and program it in.

So what happened was, in 2016 we published a Nature paper that described an artificial intelligence system that learned how to play Go on its own. This was about a decade ahead of what people thought was remotely possible, and I really can't underscore enough what a significant scientific breakthrough this was. Fortunately, there's a movie made about it.
So I'm going to show you a quick snippet to give you a sense of what AlphaGo was really about. From the film: "The world's oldest continuously played board game. It is one of the simplest and also most abstract." "Beating a professional player at Go is a long-standing challenge of artificial intelligence." "Everything we've ever tried in AI just falls over when you try the game of Go." "The number of possible configurations of the board is more than the number of atoms in the universe." "AlphaGo found a way to learn how to play Go." "So far AlphaGo has beaten every challenge we've given it, but we won't know its true strength until we play somebody who is at the top of the world, like Lee Sedol." "A battle like no other is about to get underway in South Korea." "Lee Sedol is to Go what Roger Federer is to tennis." "Just the very thought of a machine playing a human is inherently intriguing." "The place is a madhouse." "Welcome to the DeepMind Challenge Match. The whole world is watching. Can Lee Sedol find AlphaGo's weakness?" "Whoa. Is there in fact a weakness?" "The game kind of turned on its axis." "Look at his face. No, it's not a confident face." "It's developing into a very, very dangerous fight." "Whoa, hold the phone. Lee has left the room." "In the end, it is about pride." "I think something went wrong." "AlphaGo made a mistake? It's got to be. That's a miracle." "These ideas that are driving AlphaGo are going to drive our future. This is it, folks."

I really recommend the movie. You wouldn't think a documentary about the game of Go could be so emotionally compelling, but I definitely think it is.

So last year we took the same techniques from AlphaGo and applied them to something we called AlphaZero, an evolution of AlphaGo. It learned how to play Go on its own, and win, and the same for chess, and the same for shogi. What was interesting is that a single algorithm learned to play those three games, and learned to win. But it's not about the winning.
It was about how this happened. It was without human training, without human instruction: the system was basically given the basic rules and told whether it had won or lost, and we used a process that those of you in this field will know as reinforcement learning. It was generalizable, and that's what AlphaZero proved: we could take AlphaGo and generalize that learning to other games.

Now, one thing I found very interesting as a hardware engineer: imagine taking that algorithm, putting it in a time machine, and moving it back to when I was at school at Purdue. It couldn't have run; the compute power wasn't there. So this is part of the reason you're seeing this now: all the data that's available, the compute power, and having the algorithms in hand.

The other thing that was interesting about AlphaGo and AlphaZero was that they became creative. They introduced us to new ways of thinking about very old, well-played games. Part of this is because they weren't trained by a human, so they weren't bound by the human way of playing. There's a move in the movie AlphaGo, move 37, which is quite famous, because it was the moment where everyone thought: oh, AlphaGo did something wrong.
What is this? It shouldn't have played this; it's going to lose. But actually, it turns out that was the moment the whole game changed. What Lee Sedol said afterwards was: I thought AlphaGo was based on probability calculation, and that it was merely a machine, but when I saw this move, I changed my mind; surely AlphaGo is creative. And what he has said since then is that he has rethought many strategies based on his experience playing AlphaGo.

Also, with AlphaZero, the one that played Go and shogi and chess, a couple of players were so fascinated by the moves that they studied AlphaZero intensively for six months and wrote a book about some of the moves it made. So again, it's reimagining the way we look at the strategies of these games.

Okay, so that was generality and creativity. Those were board games, and while they're complex, they're pretty straightforward. Now let's talk about real-time strategy games, because the real world is messy and imperfect and complicated. So we took StarCraft. Any StarCraft players? Okay, a couple. I'm not a StarCraft player myself, so I hope I get this right, and the folks in the audience will set me straight if not. We took StarCraft II because it is one of the most challenging games, and it also has a lot of pros, so if we want to play world-class players, we can do that with StarCraft II.

The way the game works, in my understanding, is that players balance short-term tasks, like constructing buildings or controlling units, with long-term strategies to try to win the game, while managing resources. And it's not turn-based; it's not my turn, then your turn. Things are happening all at the same time, and much of the gameplay map is actually hidden from the other players. It's more complex than chess: players control hundreds of units at any one time.
It's more complex than Go: there are 10 to the 26 possible choices for every move. And players have less information about their opponents than even in a game like poker. Did I get that about right? Okay, so far so good.

So last month we published a paper on an advanced AI system called AlphaStar. AlphaStar was a general system that played and beat pros in a fully unrestricted game of StarCraft II, and it was a world first. I wasn't aware of this, but a lot of universities actually use StarCraft II as a training platform for students in AI, so this was pretty significant. It shows us that general-purpose learning techniques can scale to complex situations like StarCraft II.

So, to wrap up this section: we base general intelligence on these questions. Is it generalizable? Is it creative? Can it deal with the complexity of the real world? What's been exciting about our scientific focus on AI is that we've published ten papers in top-tier academic journals, and you can see the history: from Atari, to Go, to chess and shogi, to now StarCraft II. This is just this week's cover of Nature magazine. We've also published over 600 peer-reviewed papers that are on arXiv. I think all of this shows AI as a research focus; this is the kind of long-term work that DeepMind does.
This is the kind of work with long-term work that deep mind does but What about Applying what we've built here into real-world problems that second part of the mission of AI for science What if we could do things like simulate proteins enough to design better enzymes for catalyzing waste processing or More efficiently sequester carbon from the atmosphere or predict predict mechanisms of common and rare diseases alike or help Researchers accelerate other scientific discoveries So this is kind of where we're taking the algorithms and what we've learned in developing AI and trying to apply it to real-world problems Much in the same way telescopes really gave us information about the universe So I'm going to give you two examples here of using AI for science. The first here is a protein folding It was our first significant milestone in demonstrating how AI could be used for scientific Discovery so every function in our body is for can be traced back to proteins and how the proteins fold Determine what activity and what function they have within our body and predicting how proteins fold Really has been a long-standing biology challenge If you can predict how proteins fold you can also predict how they misfold and that could really give us better insight into things like Alzheimer's or cystic fibrosis and so there's been a long-standing challenge of can you predict protein folding and therefore misfolding in 2018 at the end of last year we introduced some our AI system called alpha fold and It uses the same deep learning techniques and it to try to predict the 3d models of proteins this This is not done. 
It's still very early, but we made significant scientific advances in this space in a relatively short amount of time, given the nature of the challenge. There's still a lot of work to be done here, but it is an example of how we're partnering with the scientific community to try to advance research in some of these areas.

Another area that I'm personally quite passionate about is climate change. Here we have two examples. On the top: we know that wind power has really taken off in the past few years as an important source of carbon-free electricity for reducing carbon emissions. The problem with wind is that it's not reliable; it varies. If you want to generate some amount of power, you can't just say, at a certain time I need this output from the wind farm. So what we did is we partnered with Google, and we took data that's readily available, weather forecasts and historical turbine data, and we were able to predict the wind output 36 hours in advance of actual generation. That allowed us to commit that wind energy to the grid a full day in advance. The net of this was making the wind 20 percent more valuable on the existing infrastructure of a wind farm, just by adding in our algorithms (sorry, this one was not an "Alpha").

Now, that's energy generation. What about energy consumption? This is a picture of a Google data center. Data centers consume about 3 percent of the world's energy, a huge issue. So the idea was: could we make an impact by applying some of our artificial intelligence systems to data centers? We piloted this, and we were able to reduce the power used for cooling specific Google data centers by 30 percent, again on existing infrastructure. It's still early, and we're still applying AI to these problems; it's a small step.
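At its core, the wind-forecasting work is a supervised prediction problem: learn a mapping from forecast weather features to turbine output, then predict a day ahead. Here is a toy sketch of that idea, with made-up numbers and a deliberately simple least-squares fit; the actual system used far richer models and data.

```python
# Toy sketch: predict wind-farm output from forecast wind speed.
# Hypothetical data and a simple least-squares fit; the real system
# combined weather forecasts with historical turbine data and much
# more sophisticated models.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (closed form)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

# Historical (forecast wind speed m/s, observed output MW) pairs -- made up.
history = [(4.0, 9.5), (6.0, 16.2), (8.0, 22.8), (10.0, 29.1), (12.0, 36.0)]
a, b = fit_linear([s for s, _ in history], [p for _, p in history])

# Commit tomorrow's energy a day ahead, from tomorrow's weather forecast.
forecast_speed = 9.0
predicted_mw = a * forecast_speed + b
print(round(predicted_mw, 1))  # about 26 MW for this toy data
```

Being able to commit a prediction like this to the grid a day in advance is exactly what made the same wind infrastructure more valuable.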
It's not just DeepMind working on these problems, but it does show that we can start applying artificial intelligence to help create a better environment.

So that's a little bit about what DeepMind is doing. Now, if you have questions about all of this, if you're kind of conflicted, like, okay, this is really cool, but I have all of these other questions, Lila, are you going to talk about those? Here is where I really wanted to take a step back. We don't think you make the world a better place by accident. You have to be deliberate, you have to be purposeful, you have to ask yourself really uncomfortable questions and have tough conversations about what this means and what implications it could have, in order to pioneer responsibly. As I mentioned, this was really a big decision point for me when I moved to DeepMind.

Now I look at my role and say: well, I'm leading this organization that's doing phenomenal advanced AI work, and yet it's so hard to conceive what the future will look like. I go back to when I was a student here: could I have imagined the kind of work I'd be able to do in taking technology forward, that people would be sitting there with their phones, taking pictures or watching videos? The transformation we've seen is just amazing.

So how do we balance the short-term, mid-term, and long-term risks and implications? I ask myself this all the time, and it's actually been one of the main areas of my focus within DeepMind: figuring out how to operationalize some of these questions within the company. One of the things I do, as I mentioned, is oversee our cross-functional efforts to look at the ethics questions related to our work, and it really requires three things.
Being the non-AI person, I pull people around the table, and the collaboration is key. An example: can we bring a diverse set of stakeholders? So we have engineers around the table, legal, government affairs, people dealing with public engagement, ethics researchers, and we're having a conversation about what implications this research has or could have. We're thoughtful and open about discussing what the challenges around this research could be, even though we may not all have the answers; we feel it's important to have the conversation. And when we make a decision, we recognize that the decision we make today may not be relevant tomorrow, or the next day, or next year. So how do we allow ourselves adaptability?

I'm not going to summarize all the challenges around pioneering responsibly, but I will touch on three very quickly. Safety is a long-term issue: how do we make sure systems act the way we want them to act? Dual use: there are challenges and opportunities in these technologies; how do we make those trade-offs? And bias and fairness: how do we make sure the systems we're developing are actually fair?

So first, let's go into safety. A common question I hear from people, thanks to Hollywood: what about Terminators and those out-of-control robots? Do you have out-of-control robots? We don't. I actually believe this is an unlikely scenario, given the type of research we do. That said, we are interested in AI safety, and this really means: how do you ensure that humans are always in the loop, that humans are part of the process, in case the system malfunctions or operates in a way we didn't intend? This is the field of AI safety.
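One way to see why this matters: if the reward we specify is only a proxy for what we intend, an agent that maximizes it can behave in ways we never wanted. Here is a toy, entirely hypothetical sketch of that failure mode (mine, not any DeepMind or OpenAI system): a point-maximizing agent that never finishes its race.

```python
# Toy reward-misspecification sketch (hypothetical, not any real system):
# the designer wants the agent to reach the finish line, but rewards
# "points". A respawning pickup lets a short-sighted point-maximizer
# loop forever instead of finishing.

FINISH = 4          # reaching cell 4 ends the race (+10 points, once)
PICKUP = 1          # cell 1 holds a pickup worth +3 that respawns

def step_reward(pos):
    if pos == FINISH:
        return 10
    if pos == PICKUP:
        return 3    # awarded again every time the cell is re-entered
    return 0

def greedy_episode(steps=20):
    """Agent moves to the neighboring cell with the higher immediate reward."""
    pos, points, finished = 0, 0, False
    for _ in range(steps):
        candidates = [max(pos - 1, 0), min(pos + 1, FINISH)]
        pos = max(candidates, key=step_reward)
        points += step_reward(pos)
        if pos == FINISH:
            finished = True
            break
    return points, finished

points, finished = greedy_episode()
print(points, finished)  # 30 False: lots of points, race never finished
```

The optimizer did its job; the reward did not express the goal. Keeping the specified reward aligned with the intended behavior, and keeping humans able to understand and intervene, is the point of this safety work.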
There's a lot of work going on here. Here's an example from our colleagues at OpenAI. The game is called CoastRunners, and the goal of the game, as understood by humans, is to finish the boat race quickly, preferably ahead of the other players. That's not what the boat is doing, is it? It keeps crashing into things and racking up points. OpenAI wanted the system to learn to play fairly and learn the game's skills, but as you can see, it's just busy racking up points, because that's how it thinks it's winning. It's an example of why it's important to have safety measures in place: if a system does something unintended, you want to be able to understand it and stop it. And I think that's going to become increasingly important as the problems we tackle become real-world issues. Now, what is our approach here? It's much like designing a car. You don't design a car and then say, oh, let's add a seat belt and some airbags. You design safety in from the start, not as a bolt-on. We take AI safety the same way, and we have a team dedicated to it. Much like software engineers have quality and reliability testing, we take that same approach with our AI safety. But it's more than a technical approach. An engineer might ask, how can we encode values or goals in our AI system? And then the next question comes from one of our ethicists: what goals or values should the AI be aligned with? So we need these ethicists and philosophers all working together, and it goes beyond one company; it takes an industry. We do a lot of work with an organization called the Partnership on AI to come to a shared understanding of what can happen and what we can all do here. Next, dual use. This is an age-old question, but an important one: how do you have technology
that's designed to help society, while also mitigating risks that might undermine it? We believe that anyone developing this technology should be responsible and think about both sides. The upside is easy: a lot of engineers and scientists focus on the opportunity. On the downside, it's tempting to say, "that's not my area of expertise, I'm not going to worry about it," and what we're trying to say is: actually, we need to consider it. We had an example of this with text-to-speech research. There was a great opportunity for this technology with ALS: you could use it to capture someone's voice, so that as the disease advances, they would still have their voice. And yet you worry about someone trying to imitate a voice in a harmful way. So we had a paper we were about to publish, and we did an analysis: should we publish it or not? The conversation we had was: let's go ahead and proceed, because the data we needed to record had to be of high enough quality, and there were a few other things that were genuinely hard to replicate. So we said, for now, let's proceed, and let's also invest in mitigating risks. Are there partners out there looking at ways of mitigating voice imitation? What can we do to fund them and accelerate their research? So that's the approach we took. The other thing I want to highlight here is that a lot of this just takes time: time to have the proper conversations and really think things through. There are no easy answers, and there are no perfect answers, so you have to allow the time, and bring people with different skill sets to the table, to do this. Finally, on fairness: I think this is a pretty well-known issue. It's near-term, and it's very real.
There are examples of this, like higher-paid jobs being offered to men more often than to women; we're propagating a lot of the biases that already exist. We ran an experiment on this with Kinetics, our human action video dataset. AI systems are really good at image classification, and we wanted to see whether, given a video, an AI system could say what kind of human actions are taking place: human-to-human interactions like hugging or shaking hands, or human-to-object interactions like playing an instrument. For each class we had about 600 video clips from YouTube, and because the input was already somewhat biased, the team was worried it would bias the classification. For example, if the videos of shaving a beard or dunking a basketball were mostly of men, then if a woman dunked a basketball, would the system recognize that action? In the end, we did enough research to say that, in fact, the classifier wasn't biased in this way, but we highlighted the issue in the paper we wrote so that we could at least raise awareness. I'm getting the time cue, so I'm trying to speed up so we can answer some of your questions. We are continuing to work in this area; there is no simple answer, but being aware of it matters, and we recently published a paper, a collaboration between one of our computer scientists and an ethicist, on how we might address this in a different way. I'm going to go through this quickly and hope you've gotten the point so far: diversity matters. If we're developing AI systems that are supposed to solve intelligence, and use that to solve other problems, then we need views representative of society around the table. That can be within the organization, and it can be broader, in partnerships with different organizations. I think this is something that is very much on
all of our minds. We're taking a lot of steps in this direction, but we need partnership to help here. I do want to highlight one thing. We had a research scientist who was trying to figure out how to make an AI system receive a jumbled mess of inputs, retain them, and apply that knowledge in different contexts, much like a human brain does. She's a neuroscientist, not a computer scientist, so she broke one of the most important rules in AI and actually tampered with the weights of her system, which is something you normally wouldn't do. Again, this is about bringing diversity into the picture: if she hadn't trusted her instincts and done that, she would not have uncovered some really great advancements in unsupervised learning. So we need people who aren't steeped in these topics experimenting and testing the boundaries, so that we can get this right. We're in uncharted territory, and we acknowledge that. I think it's really important for all of us to be asking: if AI is going to play such an important role in our future, what is our responsibility, and how can we all participate in defining what it could be? Here are a few resources for you. If you're interested in careers, you can go to our website; we have a careers page.
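The fairness experiment described a moment ago, checking whether a classifier's accuracy differs across demographic subgroups within each action class, can be sketched as a simple audit. Everything below is a hypothetical illustration (made-up records and class names), not the actual Kinetics evaluation:

```python
# Minimal subgroup-accuracy audit, in the spirit of the Kinetics fairness
# check described above. All records here are invented for illustration.
from collections import defaultdict

def accuracy_gaps(records, threshold=0.10):
    """records: iterable of (action_class, subgroup, correct) tuples.
    Returns {action_class: gap} for each class whose best and worst
    subgroup accuracies differ by more than `threshold`."""
    hits = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(lambda: defaultdict(int))
    for cls, group, correct in records:
        totals[cls][group] += 1
        hits[cls][group] += int(correct)
    flagged = {}
    for cls in totals:
        accs = [hits[cls][g] / totals[cls][g] for g in totals[cls]]
        gap = max(accs) - min(accs)
        if gap > threshold:
            flagged[cls] = round(gap, 2)
    return flagged

# Hypothetical evaluation records: (class, subgroup, classifier correct?)
records = (
    [("dunking_basketball", "male", True)] * 9
    + [("dunking_basketball", "male", False)]
    + [("dunking_basketball", "female", True),
       ("dunking_basketball", "female", False),
       ("hugging", "male", True),
       ("hugging", "female", True)]
)

print(accuracy_gaps(records))
# "dunking_basketball" is flagged: 90% accuracy on male-subgroup clips
# versus 50% on female-subgroup clips; "hugging" shows no gap.
```

A real audit would of course need far more clips per subgroup for the accuracy estimates to be meaningful, which is itself part of the bias problem when one subgroup is underrepresented in the data.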
I often get asked about internships, which tend to be at the graduate level. If you want to learn more about artificial intelligence and machine learning, Coursera has a lot of classes on its website, including Andrew Ng's course, which now has over two million learners, so there are plenty of opportunities for people of all levels. We've also recently launched a podcast series with Hannah Fry that addresses a wide range of topics and draws on a wide range of expertise; you can find that on our website as well. And there's the movie AlphaGo, which I mentioned. As I said earlier, I wasn't sure about moving to DeepMind and taking on this mission. But if you can imagine how we could use artificial intelligence to actually solve some of these amazing challenges we've been grappling with, that's where I start to think about how AI could be one of the most transformative technologies in our future: not just my future, but my daughters' futures. This is a super exciting time. We're very early in the work we're doing here, and I'm really excited to see not only what DeepMind does, but what the world does, and how Purdue alumni contribute to making this happen. Thank you.

Thank you, Lila, for such an exciting, passionate presentation on generality, creativity, and complexity in the context of DeepMind.
She also covered AI for science, as well as safety, ethics, bias, and the opportunities and challenges for AI in the future. So the floor is open for questions now.

Hi, my name is Isha, and I'm a computer engineering student in Vertically Integrated Projects. My question is with regard to building a generally intelligent system. When you're building a system that's supposed to mimic human intelligence, how does DeepMind define success for such a system? This is a two-part question. If the system mimics human intelligence, or exceeds it, is that defined as success? Or is success when it mimics a pattern very similar to how humans generally perform, so sometimes winning games, sometimes losing games? The other part is: is DeepMind taking into account the hardware constraints of building this kind of system, for example low-power systems to mimic the human brain's constraints? As you mentioned before, AlphaGo works because of all the computational resources available, but it wouldn't work on a low-power system. Could you speak about those kinds of constraints and how you define success?

Excellent questions. On the second one, about compute infrastructure: AlphaZero, which was the successor to AlphaGo, actually used less compute, so we were able to make it more efficient as it continued to learn. Some of those efficiencies are happening in the algorithm, but absolutely, from a hardware infrastructure perspective, we're thinking about what compute capability is required to do this. We have a team dedicated to thinking about energy, and just earlier this week they presented on some of our consumption figures.
So We're not particularly focused on getting it to brain level, but we are thinking about how do we Remove the power consumption or power and compute power required I'm sorry. And the first one was Oh the brain what success look like What is success like like for humans it's often about learning are we learning so Actually, as I said, we kind of look at our research in six month chunks. Did we learn something and the fact that Alpha Star learn how to play Starcraft was like a huge success didn't matter if it won or lost it We were trying to understand it eventually Played itself enough then that and played others that it learned to win But it wasn't as much about the winning it was really about it was continuously learning and improving that Creativity element the generalizable the generalizing generalizing. Sorry And the creative aspects are actually quite important for us. So the how And what the outputs are our ways that we celebrate progress My name is Jonathan Snyder. So one of the biggest concerns I have is the militarization of AI specifically using it for military purposes In weapons or lethal autonomous systems. So what are your thoughts on that? Do you think that can be done safely are the risks too great to ever do that? Should we even do that? I think the challenge here is on a question like that. 
everybody has different opinions. We abide by the AI Principles that Google has put out, which we also take an active role in both defining and implementing, and which include some limitations around how much we do with the military. The way DeepMind commercializes our work is through Google products, which are then subject to the AI Principles, so we have those red lines. Excuse me. From our perspective, and for the kind of work that we do, that is not within our scope.

Hi. My question is: as your reinforcement learning agents become better and better, more and more people will think that the best way to get things done is via reinforcement learning. So what do you think the relationship between traditional scientists and reinforcement learning computer scientists becomes? If I were a biologist and I wanted to look at protein folding, do you think that in the future a biologist might start coding a reinforcement learning agent instead of, I don't know, reading through papers on how things should fold?

We actually have biologists, and we have neuroscientists who don't code. Sometimes when we have a strong opinion about an approach, we realize that many of our other employees don't feel that way, so we worry that we're over-biased in our own beliefs, and we invest in other approaches as well; we want that diversity of opinion. Again, we don't have a roadmap built out, and the building blocks aren't clearly defined. The way we've structured our research is that we have research scientists, not all of whom code, and we have research engineers who partner with them to bring their ideas and experiments to life.
I'm a computer engineer alignment freshman in computer engineering So I've noticed that most if not all deep minds projects are actually analogies of major challenges Going through artificial intelligence So you're first solving a board game and you're solving multiple board games in one algorithm And I'm curious as to have you evaluate as a company the relevance of these analogies So of these tasks to the actual challenges that you're solving So how are you determining that they're actually successful analogies? So one of the founders of the CEO demos hospice who you saw in the video He was a world champ champion chess player as a child as a as a youth and Really believes that kind of games are easy not easy games are good ways to like test To simulate the intelligence because you know if you've won or lost, you know when the game is over So it's it provides a confined system We do other approaches as well that You know things that you can test things like memory in here or planning for Other aspects so what we've tried to do is say what are some of the capabilities that we want to develop? How does that relate to some of the games that are on the market? and And take more of a capability approach So some of these things as you've mentioned are like long-standing challenges and AI But we are taking a broader view of both games and also kind of creating our own games based on the types of tasks that we want to test for general to see how If we're able to do a lot of tasks with the same algorithm Time for a couple of questions Hello, I'm Manu and This question is Somewhat off what you showed on the slide. So what since you are at the top position I want to ask you this so what is your business model and how do you pay your employees and Since you do research It's mainly research. Do you have any pressure from the top level so that you generate revenue? as you showed I Don't I didn't see any bounds like that. 
So you seem to do whatever you feel like doing, without any restrictions to commercialize it. Without that, how do you pay your employees? And if, say, I wanted to start a similar company, what would the business model be? Thank you.

It was a conscious and proactive decision for the company to be acquired by Alphabet, because what that does is secure long-term funding, and it was an agreement from the start. If you think about the founders of Google, Larry Page and Sergey Brin, they thought about organizing the world's information; that's basically AI, right? So they have that long-term view. Alphabet has made a lot of different bets: Waymo for autonomous cars, Loon for internet access, and so on, and DeepMind is seen as a long-term research bet. As I mentioned, we do take some of our research and commercialize it through Google, where we provide internal value. And we get paid in cash and Google stock, and really good beverages and snacks in our kitchenettes. No, seriously, people always ask me what our business model is, and the answer is that we're not a traditional company. We're more like a research institute, a scientific organization, and we think about our work not in terms of product launches and monetization but rather like an Apollo-style program: a long-term, audacious vision, and a plan for how we're going to get there. And again, that's the partnership we have through the acquisition by Alphabet.

Hi. It's hard to imagine a machine or a computer being creative, because you don't usually associate machines with creativity. So how do you even start making a computer be creative?
That's a good philosophical question. My advice is: watch the AlphaGo film, because you will see that move. When you're not telling the system how to do things... we do a lot of things because that's how it's been passed down to us, or we imitate how others do it and how we perceive success. But if you're unconstrained by that, if you can think of new ways of doing things and of getting places, then you might find that same creativity. And I think that's really what we're learning from our work with advanced artificial intelligence systems.

Great, we're out of time. Let's give Lila a big hand, not only for the great talk, but also for the inspiring example she sets through her own life experience. Thank you. Thank you.