Welcome, everybody. Can everybody hear me in the back? This is the Rebooting Social Media speaker series. Every Wednesday we meet here for exciting talks about the harms of the socio-technical ecosystem we have and potential approaches to mitigating them. Today we have a very exciting panel, and before introducing the panel I'd love to introduce you to its moderator, Paulo Carvão. Paulo approached us with the idea of talking about AI ethics from an industry perspective, something that I think is more and more needed in the academic space, and I hope that you feel welcome to engage with this debate in a critical and reflexive manner as part of the Q&A after the presentations. Just to say that Paulo is a global technology executive who led large businesses at IBM, where he was on the senior leadership team until last year. Since then he has acted as a strategic advisor on technology and go-to-market issues and as a venture capital investor and investment committee member. During his social impact fellowship at the Harvard Advanced Leadership Initiative (Harvard ALI), Paulo is focusing on the intersection of technology and democracy and on entrepreneurship as a vehicle for social mobility. I ask you to welcome Paulo, and thank you so much for being here.

Thank you. Thank you, Guzo, and welcome, everybody, to our AI ethics and governance panel. I'm excited to have two Harvard alumni here, so welcome back home. As we know, artificial intelligence is more and more a part of our lives today, whether as individuals, citizens, business leaders, academics, lawyers, or researchers. It's part of what we do, and as with any tool, it has a good side and could have a bad side. It has tremendous potential: some say it can add several trillion dollars to the economy through personal and business productivity. But it could also have some negative impacts. So I'm glad that there is ample discussion now within our society about what safeguards we have to put in place and whether and how to regulate the technology and/or its use. There is a widening gap between the leading edge of technology innovation and our ability to either set standards or drive regulation, and I think it is within this gap that we want to have this discussion today. It's also important to have a normative point of view on how we value human creativity and human innovation and how we're going to protect them. This week, needless to say, has been an extremely busy week in this area. It started, I think, at 8 a.m.
on Monday, with the White House issuing a pretty comprehensive executive order in the space, and today and tomorrow the UK is holding its AI Safety Summit, so I'm thankful to both governments for adjusting their calendars to meet our agenda here. As Guzo mentioned in the beginning, I felt it was very important for us also to bring an industry perspective to this topic, because ultimately industry is who develops and deploys this type of technology, either directly or through their clients. Full disclosure, as Guzo said: I was part of IBM for many years until I graduated last year. So I'm looking forward to a discussion with Christina and John.

Just to introduce them a little: Christina Montgomery is the Chief Privacy and Trust Officer at IBM. In that capacity she oversees IBM's privacy program and directs all aspects of IBM's privacy policies. She also chairs the IBM AI Ethics Board. Outside of the company and in the industry, she is a global leader in AI ethics and governance: a member of the US Chamber of Commerce AI Commission; a member of the United States National AI Advisory Committee, which was established last year to advise the President and the National AI Initiative Office; an advisory board member of the Future of Privacy Forum; an advisory council member of the Centre for Information Policy Leadership; and a member of the AI Governance Advisory Board of the International Association of Privacy Professionals. And, as I mentioned, a Harvard Law School graduate.

John Fisk is the director of data protection at Meta, where he focuses on issues around privacy, including fairness and accountability, managing some of the most pressing privacy, data protection, and AI safety challenges in the world today. Before Meta, John served as a vice president at SAP and was also a senior fellow at the Harvard Kennedy School's Mossavar-Rahmani Center for Business and Government, where he explored the implications of identity assurance and safeguards against AI-driven online harms.

I think they bring complementary perspectives. IBM is an iconic company from a B2B perspective; Meta, through Facebook and all of its properties, operates very strongly in the B2C space. I don't want to pigeonhole the two companies into those spaces: I know IBM touches a lot of consumers on a B2B2C basis, and Meta also has a very large B2B business. But I think we can get slightly different perspectives. What we plan to do is have approximately 15 minutes of prepared remarks from each of them, and then this should give us ample time for a dialogue here in the room and also with those of you on the webcast. So, Christina, let's start.

Sure. Can you hear me okay? All right. Fifteen minutes is a long time, so if you have any questions along the way, feel free to interrupt. I thought it might help to give you a little perspective on IBM and what we do, because back when we had a PC business we were in front of everybody on a daily basis from a brand perspective, but we haven't had a B2C business in a while, so we are often not familiar to a lot of people. But critically, I think of IBM as sort of the backbone of the economy in many respects: we support the critical infrastructure of some of the nation's and the world's largest banks, telecommunications companies, airlines, healthcare systems and hospitals, and governments. We have about 4,000 government
and enterprise clients around the globe, we operate in about 175 countries with about 250,000 employees, so we're a big company, responsible, as I said, for supporting a lot of the critical information technology needs of some of the most critical and important industries in the world. Because we're managing such sensitive data and supporting such critical information flows, it's really important to us that trust be the foundation for everything we do. Going back from a privacy perspective, we've had a chief privacy officer at IBM for almost a quarter of a century; I think I'm the fourth chief privacy officer in IBM's history. And it's about five years now since we established principles around AI: that AI should augment, not replace, human intelligence, and that it should be transparent and explainable. We've built out pillars around that, that it be fair, robust, and privacy-preserving as well, and, importantly for us, because our business model is very different from a lot of the other platform companies, that our clients' data belongs to them. We don't have a platform approach; we very much help our enterprise customers be creators of AI solutions and technology.

So we had these principles we established about five years ago, but we knew as a company that we needed a mechanism by which to hold ourselves accountable, to measure and ensure that we were not only articulating those principles but actually building them into our practices. Importantly, we also advocate for policy, and I spend a lot of my time in the policy space advocating for policy that's consistent with those principles. One of the early things we did after adopting the principles was establishing an AI Ethics Board. Our board, by the way, is very cross-disciplinary in nature: it has representation from across the IBM business, and every business unit has a representative, whether it be users of AI solutions in the HR space or the CIO's office, the policy teams, the legal teams, the sales teams. We have a very large, at-scale consulting practice, about a hundred thousand consultants within IBM, so they're represented to give us a customer perspective. Research is represented too: my co-chair of the board is our global AI ethicist, our AI leader from an ethics perspective. We have a project office that supports the board and sits in the chief privacy office team, so we're sort of the facilitator and the enabler, building out the governance program, the education program, all of that around building these principles into practice.

Some of the early things that we did, and this is now four years that we've had this very operationalized: we helped to create and sign the Rome Call for AI Ethics in 2020, which looks, importantly, at what we want to use AI for as a society, at what role humans will play, so big questions around AI. We also established the Technology Ethics Lab with the University of Notre Dame, also in 2020, which is focused purely on socio-technical aspects of AI: practical and business-oriented, but also research. Right now there's a very nice program there related to AI auditing: how do you study that, what does it mean, how do you standardize it? So, thinking about issues that pure AI research labs maybe hadn't made the focus of their research and attention. That's where we've been
paying a lot of our attention. I talked about our principles, practices, and advocacy approach to building out compliance with the principles. It's also been almost four years now since we put out a call for regulation of AI. We called it precision regulation, suggesting and recommending to policymakers that they regulate the use of AI, because the context is so critically important, not the technology itself. You'll never be able to keep up if you start regulating the technology, and the harms that we see play out in the AI ecosystem are very context-based. We put that out in 2020, and we've been following the EU AI Act along the way; that draft has been kicking around now for almost three years, since 2021. We came out immediately supportive of it because of the risk-based approach, and because a lot of the things it calls for for high-risk AI uses are very much the way we were already thinking about it from an IBM perspective: for those higher-risk uses of AI you should make sure you're using quality data, that you have a risk management system, that you're transparent in the use of AI, and the like. So we've been following that debate and thinking about it from the perspective of what our clients will need in order to comply with this regulation, because again, we're an enterprise that provides tools to our clients to create their own AI, and we have our own foundation models.

I'll bring us now to where we are in the generative AI debate, to what has happened in the last year, because you don't often hear IBM in the conversations with the frontier model creators, the ChatGPT and OpenAI kinds of conversations. Again, we've been at this for a really long time; we've been advocating and pushing for a regulatory approach that's very use-case-based. There was not a lot of traction around the world until last year, right? There were conversations happening around the globe, but none getting much traction except in the EU, where that legislation kept progressing. And then you see ChatGPT, and all of a sudden the world wakes up. So what have we been doing? Since generative AI emerged, we felt the first thing we should do is help the world understand what the new incremental risks are and what is really the same. One of the early things the board put out at IBM is a view of the risks and mitigation opportunities for generative AI, focused on the fact that a lot of this is the same things we've been talking about: transparency, explainability, it shouldn't be a black box, etc. But generative AI could uniquely amplify some risks. It could scale a lot more quickly: a large language model capable of drafting phishing emails and sending them out around the world with misinformation, relatively rapidly, in the hands of millions of people, could be misused a lot more readily. It has created new areas of focus for us around IP and copyright, because you're now generating new content, and around hallucinations, those types of things. But a lot of it again comes back to those same basic practices: understanding what you're using AI for, understanding the data that's going into it, having transparency around that data, and having explainability. So very early on, from a technology perspective, we deployed things like AI fact sheets, which provide sort of an ingredient list for an AI model: what data was used to train it, what techniques were used to curate that data, and essentially what the person at the next stage, our clients who are taking that model and training it on their own data, needs to know about what they're starting with, and how they create their own fact sheet.
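To make the ingredient-list idea concrete, here is a minimal sketch of what one record in such a fact sheet might look like. The schema, field names, and example values are illustrative assumptions added for this transcript, not IBM's actual FactSheets format.

from dataclasses import dataclass, field

@dataclass
class ModelFactSheet:
    """One hypothetical 'ingredient list' entry for a trained model."""
    model_name: str
    intended_use: str                     # the context the model was built for
    training_data_sources: list[str]      # where the training data came from
    curation_steps: list[str]             # filtering, deduplication, PII scrubbing, etc.
    evaluation_results: dict[str, float]  # bias/toxicity/quality metrics
    known_limitations: list[str] = field(default_factory=list)

# A downstream client fine-tuning this model would start from this record
# and produce a new fact sheet describing their own data and training step.
sheet = ModelFactSheet(
    model_name="example-base-7b",
    intended_use="enterprise document summarization",
    training_data_sources=["licensed corpus A", "curated web crawl B"],
    curation_steps=["language filtering", "deduplication", "PII scrubbing"],
    evaluation_results={"toxicity_rate": 0.002, "rougeL": 0.41},
    known_limitations=["English only", "weak on numerical claims"],
)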
So what we've done in the space of generative AI is take what I think is a unique platform approach for our clients. We are deploying what we call watsonx, which is a platform that has an AI studio for training AI models, a common data architecture to leverage the data to train models, and then a governance platform that enables lifecycle management. This is one of the great things about being a privacy team within a technology company: a lot of the learnings we have, the workflows we've built, the thinking we've developed, the ethics-by-design approaches in our AI Ethics Board process, have helped inform that governance platform, the workflows, the data capture, all of that. A client can start from either our own foundation models or third-party models, including Llama, in our AI studio; they can pull from Hugging Face, and there are a lot of open-source models in there. They take that starting point and train their own AI on top of it with their own data, which remains their data. And then, really importantly, they have this governance capability that enables them to generate fact sheets and have an auditable workflow, to do the things they will ultimately be required to do by regulation, if they're not already required to do them, particularly if they're using AI in a high-risk context. So that's been our focus, this very holistic approach.

We also reiterated a regulatory point of view recently. There's not a lot to change there; it's the same thing we've been calling for for four years: regulate the use, not the technology itself, and hold creators and deployers of AI accountable. Importantly, and we'll maybe talk to John a little bit about this, we've said: let's not repeat Section 230 with broad immunity. We need to be thinking, at this stage, while we have the time to do it, about how we hold creators and deployers of AI accountable for the technology they're implementing. And then, where we're very much aligned with a Meta and lots of others in academic settings, is this open innovation approach. We are very concerned about the environment happening today, where we're hearing calls for licensing of AI models, the sort of regulatory capture we see happening not just around the models themselves but potentially around the data going in to train the models. If that starts to get locked up in a closed ecosystem, we're not going to benefit from what we firmly believe is a very open, innovative approach. We acquired Red Hat, one of the most well-known companies in the open-source world, maybe six years ago at this point, so they're a part of IBM's portfolio.

When I think about high risk, and I don't know if you're familiar with the EU AI Act, it's defined around safety, health, and human rights: uses that could impact whether somebody gets access to public benefits, to housing, to a loan, to health care; safety applications that are
impacting things like autonomous driving. Those are the types of things we would consider high risk. And there are some areas in the EU AI Act, which you may or may not be familiar with, that are off limits. So the EU AI Act has a classification of high risk, with things like I mentioned, across education and that type of thing as well, but then also things that are off limits, like social scoring mechanisms and live facial recognition, outside of certain exceptions that are still being debated. So anyway, that's our latest point of view.

We have been very focused on driving international alignment; that is so important right now, and there are multiple efforts, and Paulo mentioned a bunch of them earlier, happening just this week: the executive order; the safety commitments for frontier models that a lot of the companies, including Meta and including IBM, committed to in the US; the G7 came out with their own set of safety principles; Canada has a code of conduct. I keep pointing back, as good practice, to things like NIST's risk management framework, and hoping we see more traction in getting that deployed as a standard around the world. That was great thinking on the part of the US, a public-private partnership and collaboration on how you think about risk when you're developing, deploying, and managing models. As much international alignment as possible, I think, will help support innovation while addressing risk, so we're very big proponents of that.

And then, just in general, I individually am very focused on the fact that this is a moment when everybody is paying so much attention to AI, and there is often a rush to jump in and provide some kind of assurances, because everyone's touching it through ChatGPT. I was shocked by how much attention even my family, my parents, my grandparents, are paying to this technology that they never even knew about before. Nobody knew what I did until this year, right? And I think a lot of politicians are hearing from their constituents that they're afraid of scams, afraid of all this stuff. There is very much a danger of regulating for a moment of fear, and that is going to be a problem. All these conversations, and I know a lot of you may disagree with me, around the existential threats to humanity, I think are kind of distracting from what I'm trying to focus on right now, and what IBM is trying to focus on right now, which is the real and practical: let's take care of the fact that we have to improve AI literacy; we have to address the risks that are playing out in real-world uses of AI today; and we need to start imparting some best practices. We need national privacy legislation in the U.S.,
for example. We need some rules around explainability and transparency and data, what it can be used for and what it can't be used for. We need to be focusing on those today. Another reason I'm here this week is the IAPP AI governance conference; that's the International Association of Privacy Professionals, now training, I think, over 2,000 people in the field of AI governance. That is needed: people need to understand what the implications of these technologies are and how they can build AI systems while mitigating those risks. This is touching every industry, every profession, everything you're learning, the way we deliver education, all of it. So we can't just take the approach of, well, let's have an international regulatory body and then we'll lock it down and the only people that can develop it are a handful of companies. That's not going to help us as a society. This is a long-term marathon, not a sprint, and we have to think about it in that context. So that's where I'm spending a lot of my time, and I've probably used my 15 minutes, so I will turn it to John. Thank you.

Thank you, Christina. And I promise the audience on the webcast that we'll repeat the questions; we'll also have a roving mic here in the room so that everybody on the webcast can follow along. So, fantastic. John, welcome back to Harvard. I know that you've been a fellow here at the Kennedy School, where we met, but you also came here for college, right? So welcome back. As we were chatting about: when we talk about all of these risks, in a sense you're a line of defense at Meta, so tell us a little bit more about that.

All right, thank you. And let me just give a shout-out to the Sanskrit and Indian Studies department at Harvard College; I'm a proud grad. Anyway, I thought I'd share some reflections. Unlike Christina, who was talking at an umbrella level about AI across all these industries, my focus of course is online harms, so I'll talk about some of the risks and how we think about risk in that space as AI is developing. By way of introduction: four years at Meta now, in the Office of the Data Protection Officer. The DPO is a function that basically oversees our compliance with European regulations: GDPR, DSA, DMA, the AI Act, etc. So we're kind of down in the weeds on the European compliance needs. AI has always been an important part of our oversight responsibility: Meta has had on the order of 20 to 25 large models deployed for the past decade or so, basically to run everything, to personalize all the services, to manage safety and integrity, things like that. So, like Christina said, ChatGPT has suddenly brought this to the front of everybody's consciousness, but these issues have been around for a while.

First, a general caveat: unfortunately I am not able to represent Meta here in any way, so these are just my own personal views. I've also been told not to share much, or anything, about our internal practices, so forgive me if I have to deflect any hard questions later on. But to give you a little sense of where I'm coming from and how we think about it operationally: you could think of my team as a third line of defense in the compliance model. For those of you who aren't familiar, the first line are the teams, the product management, engineering, product counsel, and policy teams, who work together to bring a new product to launch, and they're ultimately the most
accountable for the safety of a product, or specifically AI products in this example. So that's the first line. The second line basically oversees the first line; it's a governance function, generally, where they make sure that all the right tests have been conducted, that the safety tests pass certain thresholds depending on the risks of the use case, and that all the documentation is up to date, like system cards. Like the fact sheets Christina mentioned, it's the same idea: trying to be as transparent as we can about what goes into and what comes out of the model. And then my team and my role are sort of a third line of defense, as Paulo mentioned. We monitor the first two lines, and by monitoring I mean we do either metric-driven monitoring of how the process is working and how things are performing, or deep-dive audit work, where we go in and explore certain topics of interest to us and then come back to the business with recommendations. And then, more and more, my personal attention, sort of like Christina's, is shifting away from monitoring and toward thinking about risk.

It's pretty obvious to folks, I think, but risk is at the heart of both governance and ethics, right? We can't govern things if we're not clear about what we're trying to prevent, what outcome we don't want. Similarly with ethics: we can think of ethics as balancing benefits and risks for an individual or for different groups of individuals and stakeholders, and asking how we make sure that the balance of risk and benefit is fair. That's a bit simplistic, but that's how I think about it. The point is that risk is central to both ethics and governance.

Within Meta we have a very different taxonomy, but for this talk I'd propose there are three buckets of risk we could think about, with different societal governance requirements for each. The first bucket is the risk that the model or the product doesn't work as it was intended to. Those risks are pretty well known at this point: things like bias, toxicity, hallucinations; the bot failing to conduct a transaction accurately; or the privacy issues that can come about if the training data is inappropriate or the operation isn't managed well. These are things that are understood. When the model doesn't work as intended, accountability is crystal clear: it's on whoever is developing the model; they have to get that right. From my perspective, that makes it the most controlled bucket of risk: the risks are clear and the accountability is clear. We don't have solutions for all these problems yet, but there are a lot of smart people working on them, doing the best they can, so I'm personally pretty confident that within a few years all the big problems we talk about today will be yesterday's news, and there'll be a new set of things.

The second bucket of risk, again thinking holistically from an online harms perspective, is the type of risk where the models and tools may be working as designed, as intended, but are being misused by bad actors. Maybe I personally stare into the abyss of online harms too much, but I see that as a more and more pressing risk. It's things we know about:
misinformation, and its amplification and acceleration with deepfake-type technology; the shaming, the blackmail, the fraud, all the misuses you could think of around that; the societal harms around facial recognition that we're plugged into; and, in general, security breaches getting more and more clever using all these new AI-powered attack vectors. So there's a whole bunch of risk coming from an outside-in perspective: it's not the companies generating the models, it's the misuse of these models and tools that I have to worry about. And to go off on a bit of a tangent, I personally think that within a few years we're going to need a rethink of online safety at a fundamental level. As Paulo mentioned, I spent a year thinking about identity assurance, and I think that's a cornerstone of online safety in the near future. You have to do it in a privacy-protective way, but we kind of have to move to a zero-trust environment, where you assume that every entity in an interaction is a malicious bot unless proven otherwise. So we need to figure out how to make it easy and practical for people to validate certain things about themselves, and also to validate the integrity of content. Anyway, this is a long discussion, and it sounds, Christina, like you're pondering the same things, but it's the pressing issue of the next few years, I think.

Then the third bucket is these longer-term, I don't think of them even as risks, but maybe societal changes that are coming from AI. Basically, when you picture an emotionally intelligent bot with fully human interactive capabilities, verbal, visual, etc., people will fall in love with things like this, literally, and maybe to the detriment of human relationships. Or a super-capable digital assistant who can just take care of everything in your life, from your insurance claims to your taxes to your social calendar: you could see people becoming quite dependent on that. Plus managing your information flows, your medical advice, creating something highly personalized. We all live in personalized bubbles online today already; it could get much deeper and more profound, and the social impact of that is a big question mark, something I worry about. So that third bucket is this future-facing, call it risk if you will, but it's the thing I hope entities like the Berkman Klein Center will give us guidance on, because we are building tools now that begin to drift in that direction. The sooner we can get some safeguards or societal guidance on what's appropriate, what's not, and how we even think about these changes, the better. So that's my personal commentary. I'll leave it there and open it up.

Thank you, John. With a topic like this, so open to questions and different points of view, it's a moderator's dream, because I don't need to think up questions. I already have a few coming in online, but let's check first here in the room if we have any questions, and before asking, just wait for the mic to get to you. We've got one back there, the woman in red.

Thank you for these comments, really interesting. To bring in something I didn't hear discussed: I'm
curious whether you have thoughts on whether proof-of-personhood types of systems, privacy-preserving, hopefully, proof-of-personhood platforms and products, would be one solution, where we can say: let the bots run amok, let a thousand flowers bloom, but when you really want to know you're interacting with a human, we can do that. And it doesn't have to be the existing platforms; there could be a specialized product. Would that solve all the AI challenges?

Take this one, John. I don't think it'll solve all the problems, but I do believe it's a cornerstone that we're going to need to think about setting pretty soon, and there are some very good privacy-protective solutions coming out: self-sovereign identity, if you're familiar with that, or some of the ISO standards that are emerging. They basically allow people to keep control of their identity information and selectively disclose just the relevant pieces in a trusted, secure manner, confirmed by a third party. So there are some models out there that look like they could be scaled. From a platform perspective, it becomes a whole lot easier to manage online harms if you can confirm who people are; as we all know, people pop up in a hundred different identities that may be the same person, or as fake people passing along information. So I think from a platform perspective that's going to be an integral piece. I also think, as bots come online and become more and more autonomous, we're going to need accountability for bots, and they should be identified by who they're serving, explicitly. And with all the youth harms and youth concerns right now, parent-child relationships might be validated as well through these sorts of identity platforms, identity assurance. So I personally think it's sort of inevitable: if you really want a safe internet, that's kind of where we need to go. That's my personal view; I'm certainly speaking blasphemy to many people, but that's my view. What do you think, Christina?

No, I would agree with that. Even today, when you think about how we've been looking at technology at IBM: we have technology that our clients ask us to build that is very realistic chatbot technology, that type of thing, and we've had rules in place that require transparency; we don't want things to look too superhuman. So that's how we, from our own perspective, have been trying to support what our clients want while also adhering to our principles, with disclaimers and those types of things. But this whole idea of self-sovereign identity, and what web 3.0 or 4.0 is going to be: we had a lot of these discussions, thinking back to COVID, around technology solutions. They seem to have gotten quiet now because of all the focus on AI, but they have to come together at some point. And there is a definite conflict, in a sense, between preserving privacy and having the solutions that help you preserve privacy and control your data, like a single ID and that type of thing, that we haven't resolved. I think there's a lot more work that needs to be done on figuring out where we want to be as a society and then figuring out how you bring the technology along.
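To make the selective-disclosure idea concrete, here is a minimal sketch of the pattern John describes, under loudly simplified assumptions: an issuer signs each identity attribute separately, so a holder can prove a single claim, such as being a human, without revealing anything else. The attribute names are hypothetical, and the HMAC below is only a stand-in for a real digital signature (with HMAC the verifier would need the issuer's key; a real scheme such as W3C Verifiable Credentials or the ISO mobile driver's license standard uses public-key signatures and is far more involved).

import hmac, hashlib

ISSUER_KEY = b"demo-issuer-secret"  # stand-in for the issuer's signing key

def sign_attribute(name: str, value: str) -> str:
    # Issuer side: sign one attribute on its own, so it can be disclosed
    # later without dragging the rest of the credential along.
    return hmac.new(ISSUER_KEY, f"{name}={value}".encode(), hashlib.sha256).hexdigest()

def verify_disclosure(name: str, value: str, signature: str) -> bool:
    # Verifier side: check the one disclosed attribute; nothing else
    # about the holder is revealed or needed.
    return hmac.compare_digest(sign_attribute(name, value), signature)

# The holder discloses only personhood, not name, age, or anything else.
proof = sign_attribute("is_human", "true")           # issued once, out of band
print(verify_disclosure("is_human", "true", proof))  # True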
Let me take maybe one or two of the online questions, and then I'll come back to the room. This is hopefully an easy one. We talked about the importance of education and digital literacy in general, so I'm paraphrasing one of Joyce's questions here, which is: given that this topic has some technical edge to it, what resources would you recommend for people to learn more about it? I know IBM recently launched the AI Academy, so if you want to talk a little bit about that: what recommendations would you have for people to learn more about the subject?

I'm sure John has some great ones, but yes, the plug for the AI Academy: that's available to anybody, and I think the first module was released today, or very recently. Good basic AI literacy is so critically important. You see in the executive order efforts from a White House perspective to improve AI literacy across government and across education; a lot of the directives in that order will play out through the agencies over the next couple of months, depending on the time frame they've been tasked with. But basic AI literacy is so critically important right now, and we're trying to do our part in scaling it as a company; I think a lot of the tech companies are. I would point to resources like the AI Academy and our public websites. If you want to learn about AI ethics, we publish a lot on our public website; we try to be as transparent as possible about how we're thinking about these issues. Education is a big piece of our mission at IBM, and in particular on my team.

I would just add, beyond what Christina said: first of all, on the technical side, there's a slew of great YouTube videos; that's where I learned about how the models work, if you're interested in that. And for AI governance as a topic, I actually found the NIST framework, which you referred to, very accessible, very clear, very clean. It's 30 or 40 pages and gives you a really good sense of how large organizations should manage AI, so I thought that was a great resource too.

Let's go back to the room. I think we have one in the middle here, or you have the mic in the back there; let's start with the back.

I'm from the biology department. During President Reagan's administration there was a ban on creating life within the laboratory, because there was steep opposition from religious leaders then, and since then times have changed. Do you anticipate any kind of objections, limitations, or opposition from religious leaders, as well as from the government, depending on which party is in power, to a full-fledged AI system in the future?

So, I mentioned the Rome Call for AI Ethics. In January of this year, that call was republished with signatories from the three Abrahamic religions, in a ceremony which, importantly, basically said that they, and the companies who signed on to it, including IBM and including Microsoft, I don't remember who the others were, commit to use AI to protect people and improve the planet. Those were the fundamental principles: there are some things you're not going to do, and some things you are going to do. It's having that set of ethical rules that the religions agreed on. That being said, you're never going to have a full global standard, right? Different parts of the world have different values-based systems, so there's always going to be disagreement. If you look at something like social scoring, the EU is outlawing the use of AI for social scoring; in China, that's a big part of the AI solutions. These are
just fundamentally different approaches to what we're going to use technology for. But there is some alignment, and I would encourage you to read the Rome Call; I do think it has really nice grounding across three of the major faiths, of course not all, but three of the major faiths.

Do you want to comment on this one? Because there are so many questions online. I'll just say, if your question was about when AI crosses the line into becoming a life form of its own, and whether that is permitted or not: that's an interesting philosophical question, but I don't have any understanding of the religious boundaries on that. It's an interesting idea, though.

This one is from Genevieve here online: a key issue with AI ethics in the industry is the tension between ethics, which requires slowing down, doing extra checks, etc., and speed to market. How do your organizations balance this tension? I think this is core to what we're trying to discuss today.

It's definitely a tension. All I can say is that there's a lot of oversight, and people are pretty careful about weighing the risks of inaction versus the risks of action. We're fairly thoughtful about weighing all the outcomes; we look at vulnerable populations and how they might be impacted by unintended outcomes. So there's always a tension. And by the way, part of our philosophy at Meta is to embrace the open-source community, as Christina mentioned, and part of the spirit of that is that we know these models aren't perfect yet, and because the risks are not yet so high, this is still, in our view, a safe thing to do: release them to the open-source community, let people kick the tires and play with them and poke at them and see what they can do, so that we're all learning together. It's an iterative cycle that way. I can't really answer the question any other way but to say there's a lot of oversight and governance; we test, using all the tests that are mainstream at this point for bias, toxicity, safety, etc. But I'd love your thoughts.

So, I think a lot of it comes down to having that governance process in place in your company and following it, and that's why NIST is a great resource, or the process we talked about with our AI Ethics Board. There's a whole governance process around that which we follow; we put our own foundation models through it. I think that helps to at least ensure you're vetting the issues and signing off at the right levels with respect to how much risk you're going to take. When companies ask me how to get started with AI ethics, I always tell them the first thing you need to do is put a risk management process in place, put a governance process in place. The temperature, turning it up or turning it down, can change, and it changes in crises; it changed for COVID. There may be technologies you would never have thought about bringing to market except during a global pandemic, where you turn the temperature up a little on risk to address the situation. It's a balance, but if you have the process in place, you've got the right starting point.
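As a concrete illustration of the kind of pre-deployment testing both panelists allude to, here is a minimal sketch of a release gate that runs a battery of adversarial prompts through a model and blocks the release if too many outputs are flagged. The function names, thresholds, and stub model are hypothetical placeholders, not either company's actual test suite.

# Run a battery of adversarial prompts and gate the release on the
# observed rate of flagged outputs. All names here are illustrative.
def release_gate(generate, score_toxicity, prompts, max_flag_rate=0.01) -> bool:
    flagged = 0
    for prompt in prompts:
        output = generate(prompt)          # model under test
        if score_toxicity(output) > 0.5:   # per-output classifier score
            flagged += 1
    rate = flagged / len(prompts)
    print(f"flagged {flagged}/{len(prompts)} outputs ({rate:.1%})")
    return rate <= max_flag_rate           # block the release if exceeded

# Example wiring with stand-in stubs for the model and the classifier:
approved = release_gate(
    generate=lambda p: "stub model output",
    score_toxicity=lambda text: 0.0,
    prompts=["adversarial prompt 1", "adversarial prompt 2"],
)
print("release approved" if approved else "release blocked")

The gating threshold is where the "temperature" Christina describes would live: a governance process can tighten or loosen it per use case rather than hard-coding one value.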
Let's come back to the room. I think there was one in the middle; actually, there are three, so let's try to take maybe two or three at a time.

Hi, thanks. My name is Craig, I'm a professor at USC Annenberg, can you hear me okay?, and a visiting scholar here at the Institute for Rebooting Social Media. My interest is around AI and the creator economy. Meta has launched a whole suite of services and helped facilitate what Goldman Sachs says will be a one-trillion-dollar economy built off of social media platforms, and that economy will be deeply affected by these technologies. I wonder what sort of protections are in place for creators alongside all of these incredible suites of tools that Meta has already launched for creators to harness, including the possibility that Meta and other platforms may train their models off of creator content and launch their own competing creators, which would deprive these entrepreneurs of their livelihoods.

A very valid question. I will have to deflect, as (a) I'm not an expert in this area and (b) I'm told not to opine on internal matters. But obviously the copyrights are very important and the rights of the creators are critical; we depend on them as we envision this metaverse economy emerging, so we definitely need to make it win-win for everybody. I'm afraid I'm really not expert in how we're handling this exact question, though.

There was an interesting discussion earlier this week at the Rappaport Forum here at the law school, and I trust this must be online somewhere, where there was still some debate, even among legal scholars, about how to handle copyright and IP, and, by the way, how to do this in the absence of privacy laws in the US. So it's another one of those fascinating topics where there's some normative judgment that we need to apply while we're in the process of creating and inventing regulation and legal jurisprudence around it.

And I would just point you to the comments that a number of companies and organizations, including IBM, just submitted to the Copyright Office, which asked for input into how copyright laws should change. Those should all be on file; I think they were due Monday, so you should be able to read them and see how companies are thinking about it.

Hi everyone, my name is Jahira. I'm also a Harvard alum, and I've worked on developing AI technology from the technical side and also the product management side. I really appreciated the different types of risks, the hierarchy of risks, that were mentioned. I'm curious how both of you see the line between a model that was built without the proper diligence versus a model that is being used by a bad actor, and I would love to hear your thoughts on that.

Well, I'm not quite sure; I'm trying to picture a scenario, but it would definitely be the developer who is accountable, who should be accountable, whether they designed it badly or didn't really think through all the implications. Is this a question about accountability?

I guess, so: we mentioned it could be that the model wasn't built as intended, or that the model was built as intended but has been used by a bad actor. So where is the line between a model that is being used by a bad actor versus a model that didn't go through the proper diligence?

Okay, it's a good question. In our world we're both the developer and the deployer, so we're accountable for both. I don't have a clear line in my head, because I haven't really thought about that question much. Do you?

Yeah, I
mean, any technology, any AI, is a tool; it can be misused. And ultimately the creators of AI can't bear all the responsibility for misuse, or you would never put anything on the market. The same is true of any tool; it's just that AI has risk that could scale more than you know. But think about it with gun manufacturers: there's debate about that, right? It's the same kind of thing: you're misusing it. So I think we have responsibilities as creators, developers, and deployers to build in as many safeguards as possible to prevent misuse. And that's part of the reason for this whole open approach: the more people you have who are testing, who are red-teaming, who are trying to challenge how a model could potentially be misused before you deploy it, the better; that is one way of addressing it, and I do think that's a creator's responsibility. But again, in your own mind you have to balance: are we ever going to not deploy something because anything could be misused? This chair, somebody could pick it up and throw it at somebody; that's just the bottom line. But exercises like red-teaming are focused on exactly that: before you put out a major model, you should have a bunch of people not from your creator team see what harm they could do with it.

And then, Christina, maybe inspired by one of the questions from Jonathan here online: one could also think about this topic from a precision-regulation perspective. You've talked about not necessarily being in favor of licensing, but how do you see this from an industry perspective? This could get closer to regulation that already exists within the context of specific industries, and then potentially hold people accountable to the standards that already exist within those industries.

Yeah, that's exactly right, which is why we've been saying we don't need a single federal regulator, for example. Every regulator understands how the risks of AI deployed within the context of their own regulatory authority are going to play out, and it plays out very differently in the banking context than in the transportation sector than in consumer fairness, which the FTC is very focused on. So I do think having a risk-based approach, imposing from a regulatory perspective more restrictive requirements on creators and deployers of models that are used in high-risk sectors, where more harm can be generated, is where this goes next.

I think we have one here that has been patiently waiting for quite a while. I'm an alum of the Graduate School of Education's risk and prevention program, and I run a mission-driven innovation strategy consultancy. Since IBM works, I believe you said, in 175 countries, and is dedicated, for instance, to education, and you have this academy: I'm wondering if you have initiatives that are pushing out, if you will, into the education systems, K through 12, to help train the next generations to be thinking about this, also as creators and innovators. And country contexts are different, so Chile might be thinking differently about its AI strategy versus Mexico versus Brazil, etc. I'm wondering if you can speak to that.
Yeah, not to the details of it, but we do have a corporate citizenship program that has very much been focused on deploying SkillsBuild, from K through 12 to graduate schools, and there are unique programs tailored to different parts of the world as well. That's something our head of corporate citizenship can speak to the details of, but it's definitely a focus. And one of the things you'll see with AI is this ability for more personalized learning as well, right? AI has the potential to customize learning in ways we never could before, to individualize it, to make it specific to geographies. But this gets into some of the broader societal issues we were talking about before, so it's important.

Paulo, we have two more questions in the room. If they are brief, we can do one after the other, and then the panelists can respond to them, and then maybe you can run one more online. What do you think? Very good, so let's have two brief questions and brief answers, then we wrap up.

Thank you for a very interesting talk. I'm Satoshi Narihara, I'm from Japan, and I'm now a visiting scholar at Harvard. I'd like to ask a question of Christina. As you pointed out, the social context of the use of AI is important; therefore, as you suggested, maybe we should regulate not the technology itself but its use. On the other hand, some experts point out that technology is never neutral: technology lowers or raises barriers. For example, some technologies are privacy-friendly, other technologies are invasive. If that is true, I wonder if we might regulate the technology itself under some circumstances. What do you think about that point? Thank you.

Maybe we take the second question too, just for efficiency. Hi, thank you for coming to this panel. The NIST RMF, or NIST Risk Management Framework, has come up a couple of times, and I'm wondering if that means that industry is encouraging of compliance and auditing.

Well, we're encouraging. I can't speak for all of industry, but I can speak for IBM's position: we are very supportive of the NIST Risk Management Framework, and we've mapped our own governance program to it. AI auditing is something that needs further study. I definitely think there are scenarios, in high-risk contexts, where auditing could be a solution to help build trust, but we're not anywhere near the point where we even know what that would look like, or have standardization around an AI auditing profession for third-party audits. So I think we can't let the regulation and the requirements get ahead of the capabilities to do it in the first place, and we need to be thoughtful about where we require it.
And on the point of regulating the technology: I will admit I'm sort of struggling. I understand there may be scenarios where we say the technology shouldn't be used in a particular context, but I still believe that's a use-based lens rather than regulating the technology itself, because how would you do that? There's a lot of focus now on frontier models and on imposing certain regulations on the highest, most capable models, but that threshold, and this is my personal point of view, I still struggle with the definition of frontier. Somebody just made it up and decided that somehow models with a certain FLOPs size and compute capacity are more dangerous than models below that capacity, and I'm not sure anybody knows why they chose that threshold, other than the fact that no one has a model on the market today at that threshold. So I'm not a fan of regulating technology for technology's sake.

Can I add one comment, just to put a point on that? One of the things Meta did, not the first thing, but a few years ago: we stepped back, realizing we had all these engineers building different models, and there was a long exercise internally to define what a model is and to begin to inventory them. Many things we thought were models are not models, and defining that and beginning to manage them systematically is harder than it sounds in these development environments. And by the way, the models today, the LLMs and the gen-AI-type stuff we use, tomorrow it's going to be something different. They're already working on the next iteration, which is totally different and is going to have a whole different set of issues. So just to reiterate: regulating one type of model or one specific technology is sort of fruitless.

Thank you, John, thank you, Christina. I wanted to go back to a comment I made in the beginning, about the widening gap between the leading edge of technology innovation and our ability as a society to derive standards and to regulate. If you go down the river here to MIT and talk to people like Daniela Rus, she'll talk about liquid neural networks that may have a hundred to a thousand times more capability than the current LLMs. If you were to talk with Yann LeCun, maybe more in his capacity as a professor at NYU than as the chief AI scientist at Meta, he would say that LLMs are a thing of the past and really we should be looking at objective-driven AI. This just may go to show the futility of trying to regulate the technology itself: think about it, if any supposedly perfect regulation had been passed as of, say, September of last year, before ChatGPT, it would be completely obsolete by now. So, while I'm not trying to suggest that we abandon our desire and our hard work to figure out how to regulate some of this, I think there are different approaches, maybe risk-based, maybe industry-based, maybe use-based, that may have to be complementary to just trying to put shackles on the technology itself. I wanted to thank you all for coming today, thank our panelists and the Berkman Klein Center for hosting, and have a great afternoon, everybody. Thank you.