We're going to get started now. My name is Mackenzie Smith, and I'm the University Librarian and Vice Provost for Digital Scholarship here at UC Davis. So we're your host tonight. This is the first time that the library has hosted this event, which is in its fourth or fifth installment of the year and is being held every month from now until the end of this academic year. We're really happy to be hosting this in the library because, A, there are a lot of students already here, so it's very easy for you to come and participate in these really interesting events, and B, it's part of our mission to make sure that we're helping students be successful. I think this series is really important in the way it frames the question of how you shape a career, and how you learn about industries and the future of employment in lots of different industries that are really going to be shaping our country and our world in the future. So we're super excited to be hosting this series, and very grateful to the Morris twins, who are really the brains behind this whole effort and have put together this program for you tonight and all the future ones, which I'm sure they will tell you about today. So with that, thank you all for coming. I hope you ate all the food, and I will let Julia explain what is going on.

All right. Hello. Thank you all for coming. For those of you who are not familiar with this series, it's relatively new. The goal of the Future of Work series is to get students to think more critically about the science- and technology-driven problem spaces that are going to demand extra attention in the coming years, and to encourage you all to explore how these problems can be turned into opportunities that can transform your career paths.
For each talk, we'll pair one UC Davis faculty member or administrator with one professional from industry, non-profits, government, or startups, have both interview each other, and then answer the same questions from different perspectives. As the series progresses, we plan to explore things like data, AI, robotics, clean tech, biotech, media governance, you get the drift. Ultimately, we want to embolden you all to look beyond job titles and think more adventurously when it comes to what kinds of problems you want to solve, whether that means working in entirely new industries or helping transform and modernize existing ones. Before we begin, I'd like to very quickly acknowledge some of the people who have made today's event possible. First, thank you to our team of undergraduates: Pamela, Emily, Kai, Jessica, Vivian, and of course my sister, Olivia. This series is currently co-sponsored by the Office of the Provost and the UC Davis Library, and there are many different individuals involved in bringing it to life. Our advisory board, comprising Beth Broom, Mackenzie Smith, Martin Kenney, Mark Faciali, Hemant Bhargava, Colin Milburn, and Dan Flynn, has been instrumental in shaping this series. I'd also like to take a moment to thank Beth Callahan, Debbie Snap, Jessica Nussbaum, Bill Garrity, Lorella Geno, and everyone else at the UC Davis Library who has supported us throughout our partnership. Which brings us to our event today. Mark Nitzberg is an AI scientist, entrepreneur, and consultant to industry and government. He's the executive director of the Center for Human-Compatible Artificial Intelligence at UC Berkeley, head of strategic outreach for the Berkeley AI Research Lab, and a principal at the Cambrian AI think tank network.
He served as a principal at Viaweb, which built the world's first e-commerce platform, and most recently he was the director of computer vision products at Amazon A9 following their acquisition of the Blindsight Corporation, maker of assistive technologies for low vision and active aging, where he was the founding CEO. He is the co-author of the book Solomon's Code: Humanity in a World of Thinking Machines. He began studying AI as a stowaway student at MIT in the AI wave of the early 1980s, and went on to complete his PhD in computer vision and human perception at Harvard. Hemant Bhargava is an expert in technology management and the information technology industry. He also studies the use of IT in clinical health care, and has previously worked on data-driven and analytical decision-making in organizations. He earned his PhD in information systems, operations, and economics from the Wharton School of the University of Pennsylvania, and he currently holds the Jerome and Elsie Suran Chair in Technology Management. He was also the founding director of the MSBA program at the UC Davis Graduate School of Management, and he is currently in the process of creating the Center for X Transformation at UC Davis. After our speakers have finished their 45-minute fireside chat, we'll move into a Q&A. It's powered by anonymous polling, and we encourage you to submit your questions in real time as you learn from our speakers. So, with all that said, please give a warm welcome to Professor Hemant Bhargava and Professor Mark Nitzberg.

All right, Mark, so where do we begin? Let's talk about artificial intelligence, all right, or sophisticated mimicry. Okay. Yeah, so we thought it would be interesting to get a sense of the audience. Just through a very quick show of hands, how many of the students here are from the tech side of campus, which would be computing, math, engineering, and so forth? Can you just give a quick show? All right. Okay.
And do you want to tell me what the other half would be, Mark? Well, how many of you are worried about that half doing something that would limit your lives or make things worse, right? Okay. You know, healthy skepticism, right? So we thought it might be interesting to begin right at the bottom, with a quick sense of what AI is, because it has gone through so much change in the last 40 or 50 years, and I think Mark and I were both fortunate to have begun our professional careers then. I don't know, would you describe that as the stone age of AI, or the medieval age of AI? So maybe you want to kick us off with what AI is, or has been, and then we take it from there.

Artificial intelligence is a great term. It's so evocative, and we tend to anthropomorphize as it is. Back when it was first imagined that you could give a machine instructions, and those instructions would teach it to do what we do, it captured the imagination. The only problem is that when it started, it was a very rudimentary machine that required a lot of attendants feeding it. So the bar for what constituted artificial intelligence was set rather low, and then it just kept moving. Artificial intelligence was defined as whatever you couldn't do at the time. I think the early bar was whether it could play chess, and then it moved from there. At this point the term artificial intelligence is used in several different ways. In Hollywood, and in common parlance at parties and so forth, it really means artificial general intelligence, that term of art that refers to the ability of a machine to perform any cognitive task at a human level. And more than that, that a machine will open its eyes and be conscious. So that's something I'd like to impress upon you: that's really not around the corner.
If you stop any of the professors at the AI lab at Berkeley and say, "I've got a billion dollars, can you make me consciousness?", they'll say no, right? I can't do that, not during my lifetime. And so there's another way in which the term is used. Artificial intelligence is, thanks to recent breakthroughs, an application of machine learning, generally to large amounts of data. And that is, I think, where we get the concept of mimicry. It enables a kind of automated and amplified version of something that we do, and it gives the appearance of human behavior. Today's artificial intelligence can easily be confused for consciousness if you don't really know what you're seeing, but it's much more superficial. It's really statistical inference.

So one of the points I would like to mention, or maybe impress, is that AI of course cannot be separated from computation, or automation more generally, right? As we talk about the other issues we should discuss today, the future of work, ethics, the business of AI, it's really intertwined with the business, ethics, and work impacts of automation in general. And I think for many of you it might be really useful to keep in mind that even these devices we have in our hands today hold computation power which, if you think back to when AI work started 50 or 60 years ago, was not available to any of the AI researchers, neither the computing power nor the amount of data that we're talking about. And so the goals of AI have changed so much, from emulating intelligent humans to producing outcomes that look like they might have come from humans. But also the methods of AI have changed so much, from trying to teach machines intelligent things, logic in particular, and how to reason with uncertainty or how to reason with time.
Those sorts of very general reasoning concepts. Now we're really letting machines learn from massive amounts of data. And so this whole wave of transformation in AI has been accompanied by the fact that we have massive computational power at our disposal, and we've learned how to use that power smartly through parallel computation. And then of course there's this whole thing about big data: learning can occur if you give a machine lots and lots of examples, and those examples are now available to us in digital form, so you can feed them to the machine very quickly. So the goals have changed, the methods have changed, and the scale of AI has increased vastly. That's computation and automation. But then there's also analytics, the ability to build algorithms that are extremely powerful. In particular, when we talk about AI and machine learning, there has been this whole transition toward doing AI by mimicking what happens in the brain with neural networks. The idea of multi-layered neural networks has existed for a really long time, but it was hard to optimize the parameters that go into these networks, and that's what we've been seeing in the last 10 years of progress in deep learning. So one of the things I find very interesting, as we look at how AI is going to impact our society, workplace, business, and so forth, and which we mentioned very briefly, is this: if you look at these 50 or 60 years of AI, which have been through ups and downs and had a few major breakthroughs, are we looking at the next 20 years of AI producing major scientific breakthroughs in machine learning, computation, and so forth? Or are we at an age where we've got a very general-purpose tool through the earlier progress in deep learning?
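The point above, that multi-layer networks are an old idea and the recent progress came from being able to optimize their parameters from examples, can be sketched in a few lines. The following is a minimal illustration, not anything presented in the talk: a tiny two-layer network whose weights are fitted to the XOR function by gradient descent.

```python
# A tiny two-layer neural network trained by gradient descent on XOR,
# a task no single-layer network can learn. Illustrative sketch only.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized parameters of a 2-4-1 network: these are the
# "parameters that go into the network" that must be optimized.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, losses = 1.0, []
for step in range(5000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: gradient of the squared error w.r.t. each parameter.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent update: nudge every parameter downhill.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(losses[0], "->", losses[-1])  # the loss should drop substantially
```

Nothing here is modern "deep" learning at scale; the point is only that the learning rule (follow the gradient of the error) is simple, and what changed in the last decade is the compute and data to apply it to networks millions of times larger.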
Perhaps the time is now to take these concepts of deep learning, combine them with big data and algorithms, and produce AI applications in one area after another. What do you think about that?

Well, there's certainly a variety of opinions. The venture capitalist Kai-Fu Lee wrote a book about this shift from the era of research to the era of applications in AI. His book argues that the research era, which happened mostly in the United States, is over, and that the era of application is now in China; that's why the book is called AI Superpowers, and it makes a very nice point-counterpoint discussion. But I think it's a bit more nuanced. We have seen a breakthrough with deep networks, and that's given us essentially unparalleled image recognition, speech recognition, and the ability to predict what you're likely to buy next, and so forth. But in order to move to the next level of intelligence, we need further breakthroughs. Systems at this point are not actually understanding when you speak; they're transcribing, and then they might have the ability to answer a question, but understanding what a sensible question would be under the circumstances, that kind of thing is beyond them.

Yeah, but to take that point you made about artificial general intelligence versus applying AI in narrow contexts: how much does it matter whether a machine truly understood the sentence, or knew how to ask sensible questions, if it could actually produce sensible answers in new contexts, and keep doing that in newer and newer ways, because it keeps getting fed more examples, more data, through which it can learn how to do things?

It's a good point. There was the experiment in the early 60s by Joseph Weizenbaum of making a sort of psychologist program; it's a very short program, you can now run it in your browser.

Are you referring to ELIZA?
Yeah, ELIZA, right. It was convincing to people, and they wanted to spend a long time talking to it; they would send him out of the room, saying, "I'm still talking to her." And that was a long time ago. So you can give the superficial appearance of this understanding and get quite far.

Yeah, so one example is conversation automation in customer service. Normally, when you make a phone call to a customer service agent, you're talking to a human being, perhaps in India or Indonesia or somewhere, but today we're at a point where many of those calls are being handled by bots, and especially if your customer service interaction is occurring through text rather than voice, there is an incredible role for chatbots. Many of us probably experience this with voice recognition systems all the time, where we interact with computers and we don't quite know that we're talking to a machine. So it gets back to this point that, with massive amounts of data and huge amounts of computing power, machines are able to perform all kinds of tasks. And if you define what intelligence is: people thought natural language and speech were intelligent, and many of you might think that solving a system of a million equations is intelligent; most people would not be able to do that, right?
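The superficial appearance of understanding that ELIZA achieved can be conveyed with a few pattern-matching rules. This is an illustrative sketch in the spirit of Weizenbaum's program, not his original code; the rules and replies are invented for the example.

```python
# A minimal ELIZA-style responder: a handful of regex rules that reflect
# the user's own words back, giving an appearance of understanding
# without any understanding at all.
import re

RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "What makes you feel {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)\?", "Why do you ask that?"),
]

def respond(text: str) -> str:
    text = text.lower().strip(".! ")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # generic fallback keeps the conversation moving

print(respond("I am worried about AI"))
```

As the speakers note, there is no model of the world here at all, only surface pattern matching, yet people in the 1960s wanted to keep talking to it.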
So if we deploy computing power with the right kind of algorithmic machinery, and that's where AI gets really interesting today: we used to think of an algorithm as a set of steps to perform a certain task, and ideally we should be able to trace a result coming out of a computer system back to how it was built and how it was coded. Nowadays, with machine learning, when we feed a machine a lot of examples and it produces certain answers, we cannot quite trace or understand where those answers came from. And yet it satisfies that definition of producing outcomes that look like they came from a human, or sort of the Turing test: I'm talking to a customer service agent, and I don't quite know whether I'm talking to a machine or a real person.

There are a couple of things this makes me think of. First of all, there's a law in California, and this is the first place such a law has been enacted, that if you are a bot and you're trying to sell something or promote something political, then you must declare that you're not a person, on pain of I don't know what; it's a question of whether that's enforceable, but I think that makes a point.

Is that not part of the recent privacy law?

No, that actually was enacted last year, in July. So bots must declare that they're bots if they're trying to sell you something, and I think that's a good concept; we'll see whether it can be enforced. But there is a certain limit to what you can do with this imitation kind of technology. By way of example, if you are a translator, the things that you translate generally refer back to a human experience: you read something in one language, it is grounded in some human experience, and then you express it in the other language. In the case of a translation engine these days, it's statistical, and so it really is a fantastically useful matching system. It matches phrase for phrase, and it gives you
the best phrase under the circumstances, in the context, given the statistical collection of words that appeared before and perhaps after. That was an important intuition, but again, it's the context outside of the words, the human experience, that's missing. When we talk about breakthroughs, one of the necessary breakthroughs is to capture the physics of the world, the way people interact, the likely situations, and so forth. So we do want to appreciate when the machine is giving a good imitation, but we should also understand that it does not know the purpose of an object when it recognizes it. "That's a chair," but it doesn't know what a chair is for or how it's generally used, and that's another step.

So two words you mentioned, "past experiences" and "statistical," I'd love to pick up on, because today's AI is based so much on learning from data that what you feed the machine is obviously going to bias its learning and its outcomes in various ways. There have been so many experiments feeding machines the whole LexisNexis database, for instance, or various corpora of books and other materials. And if you look at any society, let's take the US: we're a country where, if you think of equality, there are so many groups that were not considered equal until 20 years ago, 40 years ago, 60 years ago; think of the right of women to vote, or the right of non-landowners to vote, or decisions based on race. So much of the content being fed to these natural language and other learning systems reflects that. What happens to the ability of the machine to produce results that are contemporary to our life today, when it's learning from a lot of data that, you
know, is not representative of the values we have now?

So the great metaphor that I use is just an amplifier. The machine is a very sophisticated amplifier. It will take on the task at hand, for example determining whether this particular inmate should be released from jail, or whether someone is performing the duties of their job well, and it will follow the same steps that the last 30 people in that job were following, and it will include all of their worst biases, right?

So actually that brings to my mind what happened at Facebook over the last four years. Around four years ago they were accused of having a left-leaning liberal bias, because some people on the team were pushing their human curators; the goal was to promote news items that were popular, and there's a herd effect when you go after popularity and keep pushing it. But they had a little bit of bias there, were called out on it, and decided to make curation more automated, and that led to this whole fiasco of fake news becoming popular. Because it was now algorithmic, it got, to use the term you used, enormously amplified. So I think that's another issue when we start thinking about what kinds of work AI machines can do versus humans. I know you have experience working on these issues at the government level and in other places, and I'd love to hear your perspective on how we bring in some level of ethics and a reflection of our values. In particular, I would draw a contrast between AI work that happens in government, and we were talking for instance about military applications of AI, where there might be very sophisticated machines and drones with enormous autonomy and intelligence in route-finding and chasing certain targets, and yet there is both a human in the loop making the final decision and, moreover, even when there is not a human in the loop, the
decisions of the machine are programmed under a certain policy and legal framework. And then you have the other sector, of private AI-based products and enterprises, where it's really not clear to me that that kind of framework is being followed. At the start we asked who has technical backgrounds and non-technical backgrounds, and very often in businesses it might be the case that a programming team is handed the task of producing some intelligent software, to decide what products or promotions are given to customers, or how a profiling machine might work at an airport, sifting people to one side or the other. Very often these ethical choices are made at the lowest level, by programmers, because the question was never recognized higher up in the institution. So what framework, or what hope, do we have that as more and more AI deployments occur, we don't run into the Facebook fake-news fiascos and other things of that sort that can be very detrimental?

Well, for at least three years there have been a lot of meetings convened by concerned organizations, and we refer to this as hand-wringing: we're worried. The results of those meetings were generally principles, principles by which AI should be developed, deployed, tested, and governed. There are at least 40 sets of principles that came up all over the world, and you can look them up: look at the Asilomar principles from the Future of Life Institute, and the IEEE's principles of ethically aligned design. They're all a good start, but to me they all amount to different ways of saying we should err on the side of virtue and avoid the harms. These are now starting to be transformed, translated into policy. So there's the General Data Protection Regulation that came out a year and some ago in Europe, and then ours in California,
the California Consumer Privacy Act, which is just one month old, and is probably the reason you're seeing a lot of banners at the bottom whenever you go to a website, asking, would you like us to continue selling your data, or would you rather opt out?

So if I may interrupt you on the principles issue: you mentioned earlier that much AI work is moving from being very US-centric to China being one of the leading countries where AI work is done, and I want to connect that to this point about the development of principles, and to what extent principles developed in certain areas might equally apply elsewhere. To give a couple of examples: if you look just at the numbers, of last year's investments in AI, I think over 50% happened in China, with the US at around 40%. And I study technologies and platforms, and in the last 15 years of internet-based companies, if you look at the major platforms that have been built in the US, Google search, YouTube, Facebook, LinkedIn, various others, in almost every case there has been a Chinese imitator, down to the level of copying colors, page designs, even logos. There's a really fascinating story about the Facebook equivalent in China, which some of you here may recognize; I don't know how many of you know about Renren and Kaixin. It's really interesting because Renren was a Facebook copy, which then got copied by Kaixin, and the company that developed that had to pick the domain name kaixin001 because kaixin wasn't available, and then Renren came in and bought up the Kaixin domain. So a copied, identical product design and logos apparently was not a violation of principles or ethics. And more recently we've seen work in stem cell research and other areas where actions have been taken that do not seem to conform to the
same ethical principles. So how do we get there? We can certainly have countries, and maybe even groups of countries, developing these principles, but if they're not going to be followed identically in a cut-throat, competitive business world, then that leads to issues where you may have these sorts of outcomes. Another thing, coming back to the investments in AI, that I find interesting is not just how machines may resolve ethical dilemmas, the famous examples of an autonomous car driving down the street: will it let you die, or will it kill five pedestrians to save you, those sorts of things, but also what we choose to do, what AI machines we choose to build. I find it quite interesting that in the last year, 10% of investments in AI went into autonomous cars and only 5% into medical research. Is that a business imperative? Is it that it's easier to produce breakthroughs in one field versus another? It probably would not agree with most people's ethics that more money should go into autonomous cars, which will end up replacing human drivers, than into medical research that may produce better outcomes. And I'm trying to relate this back to work and jobs: one is going to eliminate some jobs, and the other may actually have the potential to bring more jobs and improve outcomes.

Just to take a hypothetical alternative view: there's a sense in which one could say market forces are driving the investments in autonomous vehicles. There's a market and there are investors; the investors want to multiply their holdings and create value, and what they're likely optimizing for is shareholder value, and so it's better to put it in cars than in health care. And that speaks to the role of government in encouraging us, nudging us
away from our worst instincts and toward our better ones. And that is the seed of the government's role in directing investment at the research level.

Yeah, and I'm glad you brought up Kai-Fu Lee's book, because of this discussion of the US versus China, and what might happen and where it might happen. In China there is a policy for AI, a state policy. Now, it may not be great, it may not be implemented exactly as it was conceived, or it may be implemented more for the benefit of the state than its people, right? But in the US, where it's predominantly private, market-based forces, the danger is that often these market-level investments are inefficient, because a number of companies are trying to build these autonomous cars, each thinking it might be the sole or the first company that builds them and can then create enormous competitive advantage. That's actually not likely to happen, because if we do get autonomous cars, then, for instance, if Uber has this dream of putting millions of autonomous cars on the market and thereby making a lot of profit, that will not materialize, because the price of an autonomously driven ride may fall from what it is today, a dollar per mile, down to 10 cents a mile, and therefore these would have been really bad investments made by the market, right?

I don't have personal experience with the Central Committee in China, but reading their plan, it does look like there's thought put into it, into determining where the resources will go. And I think there was some thought as to how the US should approach its investments in AI; the Office of Science and Technology Policy had some studies in, I think, 2016, and then it sort of died down for a while, waiting for some more policy.

Yeah, and I recall that in the 1990s, when again there was a wave of work in AI, the
US was scared about Japan, because you had the Ministry of International Trade and Industry pushing AI as a major goal and trying to make Japan the superpower, and that obviously did not happen; Japan has fallen way behind. So can we be similarly hopeful that innovation and application work in AI will continue to happen here, or that the US will keep a lead in doing it, because our methods and incentives and markets will still lead to good outcomes?

You know, it's hard to pick a winner, but I would say that I've seen some very impressive advances in robotics coming out of China. And at the same time, being born and raised here, it's hard to forget that the transistor, the integrated circuit, the personal computer, and so much else came out of the United States. But what remains, we'll see.

All right, so I think we are getting the signal to stop, so we'll quit at this point, but I'm sure we'll take questions from the audience.

Right, so this brings us to the question and answer portion of the evening. If you haven't submitted questions yet but would like to, you can just go to slido.com and use the code on the screen, FOW UCD. And if you don't want to ask a question but you see one that you wish you had asked, you can hit the upvote button and it'll bring it closer to the top. So I will start at the very top, and that question is: as businesses continue to use algorithms to personalize content, is there a chance that this could negatively impact a user's thinking patterns or mental schema?

Can I take that one? Sure. I have a very strong opinion about this, because the answer is that there is a 100% chance, because it's already happened. There's a simple AI algorithm called adaptive reinforcement learning, maybe 40 lines of code, that is used by, let's just say, Facebook to determine, out of the thousand posts that your hundred friends have put up since last night,
which three you're going to look at, because you only have a few minutes, right? The choice is sold as a way to assure that you're seeing something that's interesting and important to you: based on your past behavior, it understands your interests, it's got your number, and it gives you those most interesting things. But the fact is that you don't have a fixed set of preferences, and in presenting you with the things that you're most likely to click on, it starts to move you in a certain direction. And it turns out that that direction is the direction of extremes. So that's where we have, as Stuart Russell likes to say, grandma becoming a rabid neo-fascist: it starts off with the things she's interested in, and it just gets a little more interesting to click on that next thing that's a little more extreme, until, at the limit, it's part of what is, I think, threatening to dismantle democracy as we know it.

If I can just expand on that a little bit: you talked about consciousness at the start, and I would twist that into free will. How many of you here would think that you have free will, more than a computer or an AI system? Humans have free will, right? But take what Mark just described: I could be looking at a Facebook post, and of course this algorithm is driving me in certain directions. I'm reading those posts not because I intended to; I'm beginning to lose free will. Moreover, I may end up spending a lot more time than I meant to. Maybe I went to the machine because there was a notification and I wanted to read just that one single item, and then through other things related to it I get pushed to more and more, and then perhaps I get redirected, which happens to me a lot: I find a video of a tennis match from 2017, I'll switch over to YouTube and watch 15 minutes of that, and then YouTube begins taking
over those recommendations so we really wonder at what point have we as humans lost free will because I'm not spending my day or time or that hour the way I wanted to and it's really being controlled by machines somebody's algorithm but ultimately is being controlled by all the data that is driving learning systems before we have a funeral about free will no I think it's true that when I was growing up there was a television and I would turn it on and it would be very interesting with people moving and you know social situations and I wanted to see what happened next and I watched a lot more television than I would have if there were no television and so in that sense I was giving up some free will but when I turned 13 I remember I had a younger brother and I gave my television to my younger brother and that was my poor guy I will ask the question in the number two spot right now just because I think it's so complimentary to what you guys were just talking about which is AI is increasingly everywhere yet we see very little evidence that we are educating people about the technology so what do you think the population needs to understand well I would I would say that it's it's it's very important to understand that that that artificial intelligence you know follows a very long line of of the human endeavor to imagine that we can create something that is alive and that that so far that's never happened and that's that's and it may be quite a long time to never that would happen if the machine is not opening its eyes and and waking up but but a convincing imitation of life sometimes but it's like the mechanical Turk the real the real human part of that of that chatbot that you're texting with is when it goes off the rails and it's handed over to an actual human and then the conversation kind of picks up a little bit and they ask you about the weather and you know it becomes a little more human chatbot's ask you about the weather to you know that's a real yeah chatbot's 
ask you about the weather they absolutely do yeah they are okay being probably you know absolutely yeah but I think you're absolutely right it's not just AI being everywhere you know for centuries we've grown up it's sort of the three hours of literacy but I think we need to add something about AI or automation more generally because if you think of the the problems that occur right the ethical and biases and other problems that occur it's at to some extent we have to exert our rights as consumers as shareholders in the companies that are producing these problems as citizens in general and I draw an analogy with things that you know we've been able to fix over time right tobacco or consumption of coats that were produced from endangered animals for are many of these other things where over time it's the backlash from consumers and citizens and maybe in some cases shareholders that cause companies to stop doing things what was were just most profitable but do things that were more responsible so I think many people talk about having a sort of corporate social responsibility on AI as being one of the needs but I think we really have to bring it back to the other side from the business to the to us right and that's I think that's the education that people as a whole need to be more educated about technology automation AI whatever words you might want to put around that which brings us to our next question which is slightly controversial but don't shoot the messenger how can AI be trusted with ethical decisions when the coders produce code echoing racial and ideological bigotry unchecked and in service of the political state heavy so I would start with what I just said clearly can't be trusted I mean it's a sort of a question that answers itself there yeah I think that that is what we're working on right that that is in fact you know we have a duty and enlightened let's just say corporate leaders you know seek to understand what it means to be responsible digital 
leaders they need to learn what that means and how to how to assure it yeah and I would push back on that I mean they certainly need to but we need to push them and you know just like Nike produce shoes that you know with child labor or you know there's a lot of carpet weaving that occurs because you need nimble fingers with you know children who are eight ten or eleven years old and and when consumers discover that that's what's going on under the hood when they become aware and educated enough to know that they're able to reflect their values onto the businesses and I think that's the issue with AI that when we interact with computer systems and algorithms and have no idea what's going underneath if because we cannot get ourselves educated then we are in no position to stop the companies that take the routes that may lead to these problems so I think it's really incumbent on the population to get more educated and aware and maybe us as university folks to to provide that kind of education we'll in line with that is a question asking what kind of careers exist in regards to working with AI ethics can this be approached from a perspective of law for example with a JD degree well that's a that's an affirmative I encounter legal experts and and and law students pretty much every day who are engaged in this there are there are numerous centers that that are really focused on it and I believe that there are plenty of work to do as as we discover you know what's coming out of Pandora's box here I think I think the future of work would really involve teams of people who know how to build these systems but who are working alongside people from law philosophy arts sociology psychology who can advise them or work with them collaboratively on how to get the right outcomes and it's really even even for someone who's building writing code it's very hard to be working at two levels at the same time and I think that's why it's important to have this kind of work done in teams 
that bring in multiple perspectives and are sort of working in tandem with each other across time that that really would be the way to produce code and systems that behave more responsibly and I would say that seeing the number of people in technology coming to this chat is is encouraging because it means that you may be writing code but you're concerned about the societal implications of what you're doing keeping with that but going a little more broad can you please speak to how artificial intelligence will impact the future of work i.e. how it will shape the landscape of employment more generally for us students well I think I think of artificial intelligence in this context as a kind of automation and automation has come for our jobs in waves for centuries that's that's where you know the the looms made weaving you know obsolete and and so we're we're in another one of these waves and there are different tasks now that that that are new you know newly possible to be automated that weren't possible to automate in the past but that really the way that it's being shaped is in a sense is it's it's turning certain tasks from from you know potential jobs to not really very you know open for as future jobs for for you what I recommend is that you know you learn how to learn and and and assume that you'll be training as part of your job no matter what that job is as it evolves yeah and I wouldn't dare to make a specific prediction about the future but I think the point is to look back historically at what automation has done to jobs and every single time we have this fear that this time automation is going to only kill jobs and not produce other jobs but what has you know if you look at the history every time there is a wave of automation it does kill a whole lot of jobs and then it provides either related employment to some of the people who've lost jobs or it produces jobs in absolutely new categories and that really gets to the point of what people call reskilling 
or retraining and I think it's really really important to keep that in mind so if you can build skills that enable you to learn new things in the future that's really the way to remain relevant to remain employed to remain productive because it's very hard to forecast whether an accountants job is going to be automated or the strawberry picker I was telling Mark about one of my tennis partners who among other things builds machines that robots that cut the caps of strawberries and there are there's another company that builds machines that pick strawberries from the plants right and these are both extremely challenging tasks for machines to do and obviously one impact of that is that it cuts down the need for labor it takes jobs away from the people who are picking and cutting and transporting these fruits and in some cases they may have other jobs because at this point we just I think in agriculture they destroy lose about 25 percent of the crop because there isn't enough labor so that they might actually still be work for them or they might be working packaging and doing other things with strawberries because now there's so much more production of strawberries right but in many cases some percentage of them will lose jobs and be forced to find work that requires a different set of skills and that is again historically has happened every single time so as a national policy I think it's important to keep in mind methods for retraining but at the individual level you know acquire skills that are more foundational and fundamental and can be reused in different industries those skills could be empathy for instance because that maybe one service industry gets automated but then there's another industry where those same skills could be relevant they could be computing and coding you know more technical skills but that's what what I would look for if you want to to look for areas that that that are in a sense automation proof then you really have to go up the chain of 
what makes humans human one of the things that's that's that's really central to what makes us human is is that we have these the ability to empathize we have you know something called mirror neurons and watch someone having an experience we have that same experience ourselves and that's in the primates and then humans it's unique and so yeah and I think in the US in what particular one opportunity would be simply if you look at demographic patterns and how they're being reshaped I had the numbers somewhere but there's the number of people who are 65 and older and similarly the number of people who are 75 and older will be doubling in the next 10 years and that will and and you know we already know today we spent about 20% of national spending on healthcare so if you put those two things together I think you get a science for where a lot of work is going to be and that maybe some of that maybe automation proof brings us to another question and I would like to meet whoever is asking these questions because you Josh have an unusually high number who have made it to the top do you think that it's ethical to advance the field of machine learning when those advances have a significant chance of suppressing the human rights of billions I assume maybe in less developed nations what do you think let me ask the the contra positive do you think it's a good idea to stop advancing machine learning you know because we we think it might have some some effects and in certain ways and I think that that that's a very hard question I have both sides of that a very hard question but but I would I would suggest that machine learning is in this you know in this simple way it's statistical you know statistical method that's been in use for thousands of years you know you take a number of samples and then you predict what the next sample will be and and in a sense it's what we do it's it's it's another part of what makes us human is that we take in a lot of samples and then we draw 
conclusions I mean the idea of bias it's actually just a you know the the the other side of a particular coin that makes it possible for us to draw conclusions about things with limited amount of information and and so the fact that we have this tool that that takes what you know what what we do and and multiplies it I think it's it's I would say it's very hard to to to put that back in the box so you know I think at some level of the question is about should we throttle ourselves in some way not do things that we could do right we certainly make those choices in many areas so if you think of human cloning or animal cloning we make choices not to do certain things even though we are capable of doing them if you think about mining or drilling in you know certain regions we choose to forego those gains even though we could so I think with AI I think there is a similar need to identify certain things where we might want to throttle ourselves as a society and have some global agreement on those kinds of issues the problem I think the challenge with AI unlike many of these other examples is that execution and progress in those other areas occurred over substantially long periods of time and with AI now things can move so rapidly that we as societies and governments don't really have the time to assimilate those changes and identify what are the things that we want to be able to do and what are the things where we want to throttle ourselves and it's so people have to move that at a really high speed I think to come up with the right kinds of policies and regulations you know for a little light reading before bed having trouble sleeping you can look up this this report on on malicious use of artificial intelligence the report is called malicious use of artificial intelligence and and it's essentially you know a group of sophisticated AI researchers and people who care about the topic imagining all the ways in which things could be could be used and the ways in which AI 
could be used for for you know harm and then what we can do you know and how to think about what we can do to to you know minimize that and that that's that you know that's a very responsible approach and I think that's that's what I spend a good bit of my time thinking about can we solve some of these problems with AI so for instance today you know we a lot of people talk about deep fake right good deep fake be prevented with AI are we at a point there well that in particular the idea of a forgery on steroids which is what deep fake is and you know you can make a copy of a signature you can make a copy of a document now you can insert the person's face in a video and create what looks like an original video of a person in a situation where they never work so so that is a bit of an arms race and the problem is that the cost of doing that has plummeted right that was possible 50 years ago but you'd have to edit every frame very carefully and then put it all together and you make this fake and so what's changed is that it's now an app there's an app and it's you know free or very close to it's so the on the other side you you have software that detects fakes and and it's really just a race I think it's something also that you know training and understanding that that's that's a possibility that that's essential artificial intelligence being used to solve problems created by artificial intelligence it's very bad and another question is seriously that's that's actually that's right beyond principles what social infrastructures are needed to prepare us for some of these unintended consequences what role might precautionary principle play in this domain well I see the the principles being discussed and discussed by by by AI researchers who care and and there there is work on on essentially putting putting to work these principles in foundations of a kind of new AI that is essentially guaranteed to be safe and so that's that's one way in which they they can be converted 
they can actually go into the foundations of AI but the the current stage is that you know Europe will be releasing policy and essentially law governing AI in in sectors and then I think the part of that is sector by sector and part of it is you know general code that that applies to for example the use and protection of data in all sectors but then sector by sector and and those laws will be announced in March so so there's the law and and and you know working at the level of technology and right education I agree and do you two think that the rise of machine learning would have even happened if it weren't for collecting data from the public without their consent making money from it I like that that's yeah that's actually not quite even a binary question though to say that anything that was not collected with consent implies that it was collected without consent right because so much of what is out there publicly was I don't think you can really claim that so much much of data that machines are using was necessarily collected without consent but I think if it if you if you somehow limited what data might have been used in the last many years by AI systems that would simply delay the whole thing by a few years I don't think it would change anything fundamentally you know I think the internet gave us free services and we didn't understand 20 years ago when it was possible to look at the yellow pages and get a map and all these things that just started becoming possible right that get news and all in one screen without getting up from your seat that we didn't think that our selections and choices were being archived and that that and ultimately it would turn into a machine that could predict what we want to buy next you know and so it was really a lack of understanding and not it doesn't feel to me like that was collected without our consent it was just collected without anyone really understanding what was coming and this question is more of a yes or no form but do 
you think that the European Union's five-year moratorium on facial recognition technology should be adopted in this country across the board on yeah I don't know how to answer that actually but I think that's going back to my point about identifying certain things that we choose to throttle because facial recognition can have so many different applications and many of them will be good applications I think in healthcare and aging populations that's going to be absolutely useful in in using AI to detect various types of incidents or events that occur so I don't think it would be a good idea to somehow ban it across the board in law enforcement and that's a good you know good proposal I might say yes to that if you have a benevolent government we have time for maybe two or three more questions one of them that's very much in line with what you were just saying is do you believe there should be legal regulations to limit the future of AI and if so what kinds of legal regulations I think there have to be I cannot don't know if I can answer the what kind question but that certainly needs need to be some legal regulations but we first have to develop in the enough understanding to define what they are and you know again thinking of other examples you know how we use nuclear technologies we have rules around that how we use chemical and radioactive tools so with every such technology you can produce great things and you can produce great harm but you've got to understand the harmful possibilities they're becoming aware in the last several years and if there are I think our policy making body should be thinking about this issue I have one particular area that I feel strongly about that should be banned and that is autonomous weapons autonomous lethal weapons are again plummeting in price it's now possible to make something that is you know a drone with a face recognizer with a bullet right that's that's there's a horrifying YouTube video that was released by the future of 
the life Institute called slaughter bots and if you want to be scared you can watch this it takes five minutes and then you won't want to have dinner so I but I believe that that even with a ban a ban is a start but even with a ban you know we just we are in a situation where something extremely powerful is so cheap that's very hard to know where it's being developed and I think your point about the arms race is absolutely the apt way to describe this so you might develop missiles and then missile interceptors and those drones will happen whether they are banned or not and we will need to have things that can prevent them from doing harm and it's an arms race in your opinion what do you think are some of the obstacles that currently stand in the way of AI breakthroughs but I think one obstacle is a business opportunity that we are at a point where we have enough technologies that have not yet been applied and deployed in hundreds of domains so obviously if applications create value there will be a lot of people and money chasing around doing that kind of work and less talent going towards making fundamental breakthroughs that may have thousands of other applications I would put that as number one because if you think of the obstacles historically obstacles to AI computing power data and so forth those have been met we have certain types of logic and reasoning we are not going to invent new types we might discover how to make machines do temporal reasoning and other kinds of things but I think it's really right now the calling need is for making useful applications of AI using the technologies that we've developed over the last few years to say it in a different way we are training the next generation of AI researchers who are going to discover these breakthroughs and their first year in graduate school they have a stack of job offers generally five right Facebook Google and so there's just this siren song of applied AI that that that is a dream from the actual 
research you know the deep research now there are research labs inside the big digital companies but they have the siren song of making money and there you know you can't deny there's some effect on but it's possible that major breakthrough will come from one of the big labs the final question that I'll ask is artificial intelligence is evolving so quickly how do non-technical people keep up I don't want to sound you know let them eat cake but I actually believe that there is so much written about it that that is hype and it doesn't take much actually to to cut through it and try to understand what's really going on and then once you get a grip on that to keep track of what's what's really being developed so I would say there there are you know there are there are some pretty straightforward texts for the general public on AI and then you know you could read scientific American or something and just keep track it's not it's I of course I have a PhD in the field and so maybe I shouldn't be talking yeah but more generally I'd say there are two ways to learn or educated ourselves one is to it's a few people manage to do and discover the meaning of life by sitting under a tree for 20 years but most of us need to look for external inputs and stimulants from books magazines articles newspapers and I think the real challenge to that is we have AI systems preventive preventing us from doing that by pushing us more and more recommendations to watch the next movie or read the next post but really I think it requires making deliberate choices to invest your time to use it productively to learn new things and they don't have to be very technical books or manuals or coding but really to pick out popular articles and sources but read about these these ideas and these well we have a few very short announcements but before we get to that can we please get big round of applause for our two speakers this thing is not working that's what we're just next oh there you go forward there 
So now it's time for the results of the raffle. I've never picked a raffle winner before; do I get to read it, or does Mark have to? All right, we have Samantha Montefere. Do you have to be here to claim the prize? You snooze, you lose, that sort of thing; if a friend has her phone number, you can give it to her. I'm choosing very carefully... ah, Jessica Sanchez. Oh, zero for two.

Before everyone heads out, please, if you have the time, fill out a brief survey at slido.com, same code, FOWUCD. It's quite short, but it would really help us with planning and organizing our upcoming events, of which we have two this quarter. The first is on February 20th, and it will be about the cannabis industry. Yes, you heard that right: cannabis. And the third will be a talk on the future of media, featuring the founder and former president of Fox Television Studios (not Fox News). So thank you.