Welcome to the state of generative AI: views from the frontier. We have so much to get through, so I'm just going to do a quick intro for all of our speakers and then jump in. This is Andrew Ng, founder and CEO of deeplearning.ai. We have Pilar Manchon, Senior Director of Research and Strategy at Google AI. We have Sara Hooker, Director at Cohere for AI, and Peter Hallinan, Leader of Responsible AI at AWS AI. Thank you all for coming. Andrew, I want to start with you, and I'm just going to jump right in. We've had such an intense year of AI development, and I'm curious if you can talk a little bit about anything that's surprised you about some of the recent advancements technologically, and whether anything has surprised you about the way we are talking about governing and living with these systems?

AI is a general purpose technology like electricity, and I think we're in the process of identifying a lot of use cases. If I were to ask you what electricity is good for, it's almost hard to answer that question because it's useful for so many things, and AI is like that too. So a lot of work remains with the new generative AI, but also other things like supervised learning, labeling things, that worked well years ago, to figure out the applications. But I would say here's my biggest surprise: I was not expecting that in 2023 I'd be spending this much time trying to convince governments not to outlaw open source software, or not to pass laws that effectively make it impractical for many organizations to keep contributing to open source, because open source is a fundamental building block and it's very democratizing. I think earlier this morning Jeremy Jurgens talked about how we don't want to leave nations behind. Guess what? If some of the lobbying attempts to stifle open source succeed, there will be a lot of losers and a tiny number of winners. Almost everyone in this room, almost all nations, will be losers if those lobbying efforts succeed, and that deeply concerns me.
I'm curious if any of the other panelists agree with Andrew's perspective on open source being outlawed, or if there are other models to think about.

Yeah, I just don't think it's binary. It's a really interesting question. I lead a research team, and we build both these models as well as the next generation of models. As Andrew is saying, we come from a community of AI researchers where open source is just very core to how we've developed our field and to progress within the field. But frankly, I also wrestle with the fact that we're not in a conference center anymore, and our work is used by millions of people around the world. Our lab open sources and publishes; we're actually open sourcing a large multilingual model next year. So I say this as someone who's actively wrestling with these questions, because I think maybe the mistake is that we treat it as open source or not open source. Perhaps more interesting, as a technical conversation, is how do we have responsible release? What does it look like to still give researchers and independent organizations, all of which are necessary for progress, access to all of this, while also making sure that we're developing the technology for model traceability and for data traceability? Because right now, I think the main concern driving these questions of nuance around open source is the reality that these models are being used in ways that are powerful for good, but can also be used in ways that are unintended by the researchers who build them.

Can you just describe what you mean when you say traceability?

Right now, and maybe I'll give an example, because as I mentioned, we're actively grappling with this: we're releasing a multilingual model because most models right now are English first; they serve English. Andrew is completely correct in asking who is left behind.
We're releasing it because it serves many more languages, but when we release this model, it will be weights. And that means that when we release these weights, anyone can just copy them; it's a file, and we essentially lose our ability to track where it's being used. I think there's interesting urgency to technical questions around this. Can we have model signatures, in the sense of being able to trace where models are used? But it is an extremely challenging technical problem, and so I don't want to minimize the amount of work that would be needed to have serious model traceability.

Yeah, Pilar, how does Google think about this? The idea that open source represents, I don't know, a challenge to safety. How are you thinking through that particular question yourself?

Well, we release a lot of open software, we release a lot of models, and we release a lot of tools. So we are active contributors to the community, and we completely support it. But in terms of completely open sourcing some of the models, you have to take into consideration the benefits and the downsides, the tradeoffs. And in this case, like Sara said, it's not a binary decision. It's more about when you release, what you release, and with what level of traceability, control, transparency, and responsibility. So I think that we have to find the right kind of balance, and that's what we are trying to do at Google as well, with an open architecture and open infrastructure that enables you to use not only Google's models but also any open source model, or any model that you have access to, so that people can choose the level of risk that they want to undertake, how they want to work, how much testing is done, and the level of transparency of each of those models.
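Sara's point about losing trackability once weights ship can be made concrete. The crudest possible "model signature" is just a cryptographic hash of the serialized weights file, and it illustrates exactly why the problem is hard: it identifies byte-for-byte copies, but fails the moment anyone fine-tunes, quantizes, or even re-serializes the model. A toy sketch, not any lab's actual mechanism:

```python
import hashlib

def weight_fingerprint(weight_bytes: bytes) -> str:
    """Return a SHA-256 fingerprint of a serialized weights file.

    This is only a naive provenance check: it identifies an exact copy
    of a released file, but fine-tuning or re-serialization changes the
    bytes, so the hash no longer matches. Closing that gap is the hard
    part of real model traceability.
    """
    return hashlib.sha256(weight_bytes).hexdigest()

# An exact copy of the released weights matches the published fingerprint...
released = b"\x00\x01\x02\x03"  # stand-in for a real weights file
assert weight_fingerprint(released) == weight_fingerprint(released)

# ...but even a one-byte change (e.g. after fine-tuning) breaks the match.
modified = released + b"\x04"
assert weight_fingerprint(modified) != weight_fingerprint(released)
```

The asymmetry here is the whole issue she raises: open release makes copying trivial, while any robust signature would have to survive modification, which a plain hash cannot.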
So I think that the answer is a little more complex, but we're dealing not only with the complexity of all the technologies that we're still researching, but with the complexity of releasing them in a safe way, allowing for research, and also making sure that other countries don't fall behind, other communities don't fall behind, and we democratize it. But we have to be careful.

There are a lot of interesting ideas there, but I just wanted to key in on that word, safe. I mean, AI safety, we've been talking a lot about that as a concept, but it's not particularly well defined. I think most of us don't quite know specifically what it means. So I'm curious, what is safe in these scenarios? What is the appropriate level of safety?

So AI does have risks, and I think that if you look at different applications of AI, for example, media, social media, if you build a medical device, if you build a self-driving car, all of those things could cause very significant harm, and those things deserve to be regulated. And I think the problem with a lot of the current regulatory proposals is that rather than regulating at the application layer, they tend to regulate at the technology layer. So for example, the White House executive order from a few weeks ago was starting to propose reporting requirements, and maybe other burdens in the future, basically for anyone who builds a big AI system: you start to have burdens and regulatory requirements. And I think some proposals in Europe have a similar flavor. And the problem with regulating at the technology layer is, we know that if you want more of something, you don't add friction to it. To me, the heart of the question is: do we think the world is better off with more intelligence, human intelligence or artificial intelligence?
Yes, intelligence can be used for nefarious purposes, but I think that as the world became more educated, more intelligent, we became better off, even though there were some nefarious uses of intelligence. So the problem with the current regulatory proposals is that they add friction to the creation of intelligence. In contrast, I think we need good AI laws that say: if you serve a billion users, which, by the way, means you have the capacity for burdensome compliance, then let's have transparency and regulation and safety and auditing. But if the same law places a similar burden on a small startup as on a big company, or on a very small research lab as on a big company, then this is about letting companies climb up and then pull up the ladder behind themselves so that no one else can follow. And that's unfortunately where a lot of proposals are headed.

If I may jump in with something as well, I think that that is super important, and obviously regulating the applications and the domains, in the context of even what was said in the previous panel, is super important, not the technology itself, because it could be used for anything. So that's 100%. But I think it's also very important to think in terms of the users who are able to use the technology but don't understand it deeply enough to know what the collateral impact of what they're doing is. So safety doesn't only mean using it in a safe way, or guarding against people who intend to do bad things with it, but also guarding against the unintended collateral effects of people who do not understand what they're doing well enough to know better.

Peter, I'd love to hear from you.

Yeah, I'd just like to couple together the open source and the safety issues, if you will. So just speaking for AWS, we're a big proponent of open source software. We support the PyTorch framework. We're supporting Llama. And part of that is simply to offer options.
So we don't have the perspective that there's going to be one model to rule them all; there are in fact going to be a variety of base models and a variety of specialized models. But there's a lot to learn about these models still. And when you have open source models available, people can do research. They can explore things, they can learn. And that improves safety across the board. So I think these are highly coupled issues. And yes, one has to strike a balance; more knowledge can be used for good or for bad. But it's better to have a smaller set of known unknowns and a smaller set of unknown unknowns than a larger set, I think. And the open source work contributes to reducing both of those.

I'm curious, Sara, if you can talk a little bit about the broader discussion we're having globally around AI safety and the risks, especially existential risks versus near term risks. That also seems to be a binary conversation. And I'm curious if you could talk about whether that framing is helpful, and how it's shaped the way we all think about AI.

Yeah, maybe you ask me because perhaps I'm a bit grumpy about this. So firstly, the notion of safety. We talk about safety, and we often talk about a lot of desirable concepts like this. We talked about interpretability for ages as if it's a finish line, as if one day we'll say "it's safe, it's interpretable," rather than a spectrum. I think there's a more nuanced divide, which in some ways has created a lot of value-driven splits that have polarized our research community, as people who build this technology, about where we place the emphasis in terms of risk. It's very rare as a researcher that you build something that is used overnight, where research and direct world impact collide. It only happens a few times in history.
So I think what researchers are grappling with is that this technology is being used right now, and the pace of acceleration is felt by, I think, everyone, in a nuanced way. How that's translated, though, is into this divide about whether we focus on longer term risk, which may be harder to measure progress against, because when you talk about something long term that's existential, essentially what you're saying is something devastating that we may not be able to articulate now, but that is a future threat. Or do you focus on the reality of these models being deployed every day? You mentioned, how do users know what they don't know? Or how do you calibrate hallucinations or misinformation? That is something I think we're going to talk about in much more urgent tones, but we're not talking about it enough yet. In many ways, this is for me one of the risks that is most present, and we have to articulate it. And that's why I think we can't treat open source as a binary. We have to acknowledge that open source is really critical, but it amplifies risk, so what should we do? And I think that's a much more interesting conversation to have, because then we can funnel the resources we need to really equip ourselves for what's coming next year, which is that elections are going to be held all over the world, and we don't have traceability tools, and we don't have good ways to implement them. So how do we navigate this divide? What I always try to state is that both existential risks and present risks require better auditing tools at scale. We have large models that are trained on millions of data points and are being used in very different ways. Whether you care about risk which is perhaps more long term, like bio risk or cyber threats, or about things that are very fundamental and present today, like hallucinations, we still need the ability to audit. And that is very difficult. Take red teaming.
If I asked Andrew, what does red teaming mean to you, and then I asked Pilar, what does red teaming mean to you, you might give me totally different answers. How long should it go for? Should it be your friends in a Slack thread? Should it be a dedicated group that does it in an ongoing way for a productionized system? We have to have these crucial, precise conversations even about the reality of how we tackle any risk, and anchor them to the tools we have available. And so that's why I think it's okay if some people in this room feel very strongly about bio risk. I'm not going to try and dissuade you, although I can at lunch. But maybe we have to really care about what's happening right now. I do think what's important is that we have a more precise conversation about the limitations of our current tools, even for present day risks, let alone the longer term risks.

Yeah, you just said a lot of really interesting things, but the one I wanted to get back to is the idea that open source amplifies risk. And Andrew, I was just curious, if that's the case, what is the problem with additional regulation and barriers, if open source technologies amplify risk, if they're more vulnerable to problems?

Actually, I think what Peter said was the opposite: not that open source amplifies risk, but that the transparency actually helps to reduce the risk. Did I interpret it correctly, Peter?

Yes, it's both, right? I mean, you get people doing diverse things. You're not going to have guarantees of watermarked output from open source text-to-image synthesizers, for sure. The ecosystem is more complicated. But on the other hand, you gain a lot of understanding. On the focus on safety, there's so much temptation to focus just on the foundation model. We're basically in a process of experimenting and co-engineering new human workflows with new technologies. It's very hard to put each of these AIs into a single box, right?
Some of them are quite simple, some of them are quite complicated. I think one has to approach this on a use case by use case basis, where use cases are defined extremely narrowly. So narrowly that they'll give anybody in marketing conniptions, right? But face recognition, for example, is not a use case. There are many different applications of face recognition technology. You have to think very carefully: am I trying to do virtual proctoring? Am I trying to look up a found child in a database of missing children? Am I trying to index an actor within a video data set? All of these are different use cases, and they get tuned differently. GenAI has dangled in front of us this beautiful model that can do so many different things, and yet as we deploy it, we need to go back to the basics of narrow use cases. What in this particular situation makes sense? Your question earlier was what is safe enough, right? If you give me a model that does anything, I can't answer your question. But if you give me an application domain, a specific narrow use case, I can answer the question. And more importantly, lots of people, lots of enterprises, lots of individuals are trying these technologies out. You have to scope the challenge, the deployment challenge, the building challenge, to who's actually doing it. If you make it a broad use case, people get stuck. But if it's a narrow use case, then your development team doesn't need to be world class philosophers and ethicists; you can have reasonable people make reasonable decisions about how to do this safely. And that means narrowing in and thinking carefully about risk, which, by the way, is a social decision making process. It's not a turn-the-crank, this-is-the-risk kind of thing. And then really understanding that there is a shared responsibility model, something that has long been understood in security.
For example, AWS has a shared security model where AWS takes care of part and the customer takes care of part. But in ML, this is endemic to the technology. ML is really about statistics; we're rolling out statistics, okay? And once you put privacy in play, the deployer has visibility into their data and the builder does not. The deployer must understand how to test. Testing is not easy, okay? It requires that you introspect, that you think about what's acceptable in your particular use case. It's a time consuming process. It takes a lot of social discourse and discussion, just as risk assessment does. But that's key to this. Anyway, I'll pause there. I get very excited about this stuff.

I think Peter is right. So, the thing about AI, and Peter mentioned the term foundation models: large companies, and increasingly startups, are training these base AI models by, say, reading a trillion words from the internet. And that's a core technology component. Many of you will have used ChatGPT or Bard or other tools like that as a consumer tool. There's one segment that I think is underappreciated, which is that these tools are a fantastic building block for others to write software on top of, not just to use as a consumer tool. So maybe one quick example. In previous lives, I built applications for, say, email routing. A customer sends an email; what department do I route this to? With traditional AI techniques, it might have taken me and very good AI teams something like six months to build that. Thanks to this generation of tools, there are now hundreds of thousands of people who can build in maybe a week what used to take me six months. And this dramatic lowering of the barrier to entry means that there is starting to be, and there will be, a lot more applications out there in the world. And this comes back to the point of AI being a general purpose technology. It's not just ChatGPT and Bard. It's being used in corporations for email routing.
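Andrew's email-routing example hints at why the barrier to entry dropped so sharply: with a foundation model, routing becomes a prompt plus a few lines of glue code rather than a six-month classifier project. A hedged sketch of what that might look like; the model call is stubbed out, and every department name and function here is invented for illustration:

```python
DEPARTMENTS = ["billing", "shipping", "returns", "technical support"]

def routing_prompt(email_text: str) -> str:
    """Build a classification prompt for an LLM-based email router."""
    return (
        "Route the customer email below to exactly one department.\n"
        f"Departments: {', '.join(DEPARTMENTS)}\n"
        "Answer with the department name only.\n\n"
        f"Email: {email_text}"
    )

def route_email(email_text: str, llm) -> str:
    """`llm` is any callable mapping a prompt string to a completion string."""
    answer = llm(routing_prompt(email_text)).strip().lower()
    # Fall back to a human queue if the model answers off-list.
    return answer if answer in DEPARTMENTS else "manual review"

# Stand-in for a real model API, so the sketch runs on its own.
fake_llm = lambda prompt: "billing"
print(route_email("I was charged twice this month.", fake_llm))  # → billing
```

The point is not this particular prompt, but that the hard part (language understanding) now lives inside the model, leaving only thin, testable application code around it.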
It's being used to help with legal documents and, with nascent approaches, to help with healthcare. I've been working with the former CEO of Tinder, Renate Nyborg, on AI applied to relationship mentoring. But there are going to be far more applications than any one of us can probably imagine at this point. And the problem with the regulations and open source is, if you slow down the work on the technology, on the foundation model, you're saying: let's slow down AI for all of these wonderful applications, most of which we have not even imagined yet. As opposed to saying: if you want to use AI to build a medical device, well, we know what the risks are for that; you have to prove your medical device is safe before we put it in a human body. Or if you want to use AI for underwriting, well, we know we don't want underwriting to be biased or unfair, so we know what the risks are; let's really regulate that. And I think that's why I really agree with what Peter is saying: regulating at the technology, foundation model layer is just saying we want less AI, which would damage most of the world. But if we regulate the application layer, then we can be realistic about the risks without slowing the world down.

What is relationship mentoring?

So my team at AI Fund decided to apply AI to relationship coaching. And you might wonder: Andrew, you're an AI guy, what do you know about romantic relationships? And in fact, if you don't believe me, you can ask my wife. My wife will confirm that I know nothing about romance. But I got together with Renate Nyborg, former CEO of Tinder, and my team wound up collaborating with her to build a product that she launched, announced a few weeks ago, called Meeno. It is a romantic relationship mentor to help individuals think through relationships. I think the U.S. Surgeon General has declared loneliness an epidemic in the United States.
It's actually worse for you to be lonely than to, say, smoke 15 cigarettes a day, I think. And so Renate, with a little bit of help from us, is trying to use AI to help with, I think, a really, really important global problem.

May I add something? Because I think what you said was super interesting, and I'm in agreement with you, probably 99%, but...

Talk about the 1%, we like the arguments.

The other 1%. Something that I think we all know is that the legal system, the regulation, and the handling of the collateral impact of everything that we are doing always come far behind what we're doing. And we can all acknowledge that AI is not only accelerating in itself, but accelerating everything. So it's hard to find a domain, a science, an industry, anywhere, where AI is not having some kind of an impact. And if you start thinking about analyzing each of those use cases, and the millions of other use cases that we haven't thought about yet, there is no regulation, there is no law, there is no precedent. And we, the people who are here, struggle to keep up to date with the latest of the latest. And then include the morality, or the ethics, or the values of applying this to that: try to think about a judge, trying to decide whether something is legal or ethical, or whether there is collateral damage. If we run so fast that the rest of society cannot come with us safely, then we're going to create a whole generation of casualties of the collateral, unintended impact of this renaissance and revolution, which is on the one hand wonderful, but on the other hand unprecedented in size and speed. So we do need to take that into consideration. I'm as excited as you are about this renaissance of all these wonderful things that we can do with AI. But at the same time, we have to think about everyone else out there who has to follow, and who has to suffer the consequences of what won't necessarily go all the way right.
I empathize with what you're saying. You make a good point. And I just worry that the difficulty of doing it right is being used as an excuse to do it wrong.

I'm curious, for anyone on the panel who wants to address this, what's to come? We have all talked about agents, AI agents, the idea that you might have a system that interacts with other systems and does things that are potentially helpful for you. For me, it would be reading all the emails from my kids' preschool; it would be very helpful to have an AI agent that does that. But what are your thoughts on the feasibility of those kinds of systems? How quickly can they come? What are the technological challenges that might stand in their way?

I'll just say these systems are here now. If there is something big coming this year, well, it's already been announced, right? The ability to hook LLMs up to stuff and start doing things is just very attractive. Now, I hope, and take this as a clear directive: do not do things like try to steer a power grid, or anything else that's a risky connection, with these. But these will start on the consumer side, right? Just as OpenAI has released recently, there are going to be lots of opportunities to hook these models up to things and have little apps. And notice that in a chat, oh, you're asking for the value of this thing, and then it spins up a little script to write it and do the calculation on the fly; all of that kind of stuff is here now. How good is it? Well, it will get better over time. I raise it because it complicates this business of shared responsibility and testing. The notion of privacy is critical throughout. What happens when someone has signed up to use an orchestrated agent system and they want their data to be theirs, as they should? And yet the system is spinning up little programs to execute various calculations that are needed.
And it's derived the structure of the program from the context of the chat. How is that actually tested and verified, right? We know how to do each piece, but what we're beginning to do is integrate a lot of pieces together. And it just takes care and thought, step by step. I don't think you can turn it off; it's here. And it's partially exciting and partially: test. Please test.

I would say it's here, but it's pretty clunky. Maybe I'll describe the technical problem. You're trying to use large models with the infrastructure of the internet, and the infrastructure of the internet was built in very fractured ways. The whole notion of API design exists because people chose different ways of doing things in different places. What's compelling about this idea of agents is that in some ways we leverage external knowledge all the time, right? Our ability to connect with other humans has been amplified by having the internet, or having a phone, which is probably very close to you wherever you're sitting now; you probably have your phone somewhere close. That's an auxiliary tool of information. The reality, though, is that you have to make all this work with the internet, and it's going to be fractured. So you'll notice people are starting with very particular use cases. I think in the short term, this is going to be the reality, because it will be hard to pivot and create more general agents. I agree completely with Peter on the idea of safety. What does it mean to be in the loop? If an agent conducts a transaction for you, what's the accountability if it goes wrong? That's a very basic example, but there are much more problematic ones. So we have to think about what intervention points look like. I will say that's a medium term problem, but we need to start working on it now.
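One concrete shape those intervention points can take in code: a minimal tool-dispatch loop where the model's chosen action is checked against an allowlist, reversible actions run automatically, and irreversible ones are routed to a human. The model itself is stubbed out here, and every tool name and order ID is invented for illustration:

```python
# Toy backend: a real system would query an order database.
ORDERS = {"A100": "shipped on Nov 2, arriving Nov 8"}

def lookup_order(order_id):
    """Read-only action: safe to automate."""
    return ORDERS.get(order_id, "order not found")

def issue_refund(order_id):
    """Irreversible action: gated behind a human intervention point."""
    return f"refund for {order_id} escalated for human approval"

# Allowlist of tools the agent is permitted to invoke.
TOOLS = {"lookup_order": lookup_order, "issue_refund": issue_refund}

def run_agent_step(action: str, argument: str) -> str:
    """Execute one model-chosen action; reject anything off the allowlist.

    In a real agent, `action` and `argument` would come from an LLM's
    structured output rather than being passed in directly.
    """
    tool = TOOLS.get(action)
    if tool is None:
        return "unknown action"
    return tool(argument)

print(run_agent_step("lookup_order", "A100"))   # database query runs directly
print(run_agent_step("issue_refund", "A100"))   # refund waits for a human
```

The design choice worth noticing is that accountability lives in ordinary, testable code (the allowlist and the escalation path), not in the model.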
On the wider question of what's exciting, for me, what's really interesting is things like multilingual, making these models more robust in different parts of the world, and multimodal. The original vision of AI was: let's impart to machines skills reserved for humans. But the way it was implemented throughout computer science history has been as disparate fields. You would have audio, and computer vision, and language. What's exciting about this moment is that we have the compute power, perhaps, to crudely do multimodal. Right now, our main solution seems to be throwing a lot of compute at it, but it's the first step toward a more nuanced approach. I would also say that, for me, adaptive computation is one of the most interesting ideas, and it's really important, because if you think about it, we're addicted to this formula of bigger is better. Why do we do that? Because we essentially throw a huge model at every single data point the same number of times, and that's not how humans approach our environment. We typically apply more compute capacity to things which are more difficult. We squint if we don't understand something, but we largely ignore things that are easy. This idea of how we can have adaptive compute is, for me, one of the most fundamental questions of how we can avoid this ladder to the moon, where we're trying to use the crude tool of parameter counts to approach more and more intelligent systems.

What's adaptive compute?

It's actually, I think, a few things, some of which are already in production. You can think of a mixture of experts as adaptive compute, but a mixture of experts right now, frankly, is kind of an efficiency solution. It's just there to reduce the total number of FLOPs; it's not truly modular or specialized. If you squint hard, you'll say it's specialized.
You'll say every expert learns a different thing, but the reality, for people who've worked on it, is that we don't have good ways of enforcing specialization. The ideal is that you have different models which are specialized for different things. But adaptive compute is also things like early exit: why do we have to put every example through the whole model the same number of times? It's also things like: what is the critical subset of data points to train on? A lot of our work as a lab has been showing that maybe you don't have to train on all of the internet. Maybe, in fact, it really matters what you pay attention to. For me, this is one of the most interesting directions, because it moves us away from this paradigm of uniformity, where you're treating all data points the same, and you're also applying all weights to all data points.

Andrew, it looks like you want to say something. Go for it.

That makes sense. There's an emerging paradigm in AI called data-centric AI, where the idea is that instead of trying to get as much data as possible, instead of just focusing on big data, you focus on good data, so that you can focus your attention on what's actually most useful to expend your computation on. Maybe I can add some other thoughts myself. Sara mentioned multimodal. Just to make some predictions about upcoming trends: I think we've all seen the text processing revolution. I think the vision, image processing revolution is maybe a couple of years behind the text processing revolution. And images and text do come together with multimodal. But what I'm seeing is that computers are starting to really perceive, or see, the world much better than ever before. And rather than image generation, I'm seeing the breakthrough in image analysis. This will have consequences, for example, for self-driving cars, which will be able to perceive the environment much more accurately than before.
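The early-exit flavor of adaptive compute Sara describes, spending less computation on easy inputs, can be sketched with a toy model. Here "layers" are plain functions and the per-layer classifier is a stand-in for a real network; the numbers and names are invented for illustration:

```python
def forward_with_early_exit(x, layers, classify, threshold=0.9):
    """Run layers one at a time; stop as soon as an intermediate
    classifier is confident enough, instead of always using full depth.

    `layers` is a list of per-layer functions and `classify` maps a
    hidden state to (label, confidence); both are toy stand-ins.
    Returns (label, depth_used).
    """
    for depth, layer in enumerate(layers, start=1):
        x = layer(x)
        label, confidence = classify(x)
        if confidence >= threshold:
            return label, depth          # easy input: exit early
    return label, depth                  # hard input: full depth

# Toy model: each "layer" adds 1; confidence grows with the value.
layers = [lambda v: v + 1] * 5
classify = lambda v: ("positive", min(v / 3, 1.0))

print(forward_with_early_exit(0, layers, classify))   # harder input: 3 layers
print(forward_with_early_exit(2, layers, classify))   # easier input: 1 layer
```

The contrast with the uniform paradigm she criticizes is visible in the return value: compute spent now depends on the input, not just on model size.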
So if you're in a business with a lot of images, I think there could be consequences from this. And then, what else? Agents, just to chat a little more about agents. This is one of the wild west areas of AI research right now, frankly. The term agents is not well defined; people use it in different ways. But there's this concept: right now you can prompt, or tell, a large language model like ChatGPT or Bard what to do, and it does it for you. That's there now. Then there's this idea that we can say, dear AI, help me do market research on the top competitors of this firm, and it will decide by itself the steps to do that: first do a web search for the competitors, then visit each of the competitors' websites, then generate summaries, and go do all those steps. This idea of having a computer figure out a multi-step plan and then carry out the multi-step plan, that's at the heart of the agents concept. And right now, what I'm seeing is fantastic demos that look amazing. But most of us just can't get them to work for most practical commercial things yet, despite the amazing demos. But this is coming. A lot of us have worked on it and paid attention to it. And I think when it becomes more widespread, it will be an exciting breakthrough.

How long until we have agents that can book flights for us? Go for it, Andrew.

I think for verticalized applications, it might be quite easy. In fact, even now, versions of ChatGPT can decide to browse the web, decide when to visit another web page, and whether to scroll down the web page. And even now, one of the biggest application sectors of large language models has been customer operations, customer service representatives.
And so if you go to a website and chat with a customer service representative, these bots are integrated to take action: at some point it has to decide, is it going to issue a refund or not, or make a database query to answer your question about when your order was shipped and when it is going to arrive. So these AI models can start to take some actions by querying databases, or sometimes even something as risky as issuing a refund. You don't want to get that wrong. That is already starting to get there. Interesting. I just want to remind the audience that we are taking questions, which I'll take in a couple of minutes. I have a lot of questions, but I'll just limit myself to one. I'm curious how you see the field continuing to develop over the next five to 10 years. We've talked about agents being both here and also some years away, but what are the other applications, other things that people are working on and trying to push us towards? I have a suggestion for many of you from different businesses, which is that whenever there's a new wave of technology, the media and societal interest tends to be at the technology or tooling layer, because it's fun to talk about this cutting edge. But it turns out that the only way for the tooling layer, for the technology layer, to be successful, like the clouds and the OpenAI API service and so on, is for the applications built on top of them to be even more successful, so that they can generate enough revenue to pay for all these tools that we read about in the media. For whatever reason, in earlier waves of technology innovation as well, a lot of the attention was on the technology layer rather than the application layer, but for this whole ecosystem to be successful, almost by definition, the applications have to generate even more revenue.
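The customer-service pattern described above, where a bot can safely query a database but issuing a refund is riskier, can be sketched as guarded tool dispatch. The tool names, the risk classification, and the human-approval flow are all illustrative assumptions, not any specific product's design:

```python
# Sketch of guarded tool dispatch for a customer-service bot: low-risk
# actions (database lookups) run automatically, while higher-risk ones
# (issuing a refund) are routed to a human for approval first.

LOW_RISK = {"order_status"}

def order_status(order_id):
    return f"order {order_id}: shipped"   # stand-in for a database query

def issue_refund(order_id):
    return f"refund issued for order {order_id}"

ACTIONS = {"order_status": order_status, "issue_refund": issue_refund}

def dispatch(action, order_id, approved_by_human=False):
    """Run low-risk actions directly; require explicit approval otherwise."""
    if action not in LOW_RISK and not approved_by_human:
        return f"action '{action}' queued for human review"
    return ACTIONS[action](order_id)

if __name__ == "__main__":
    print(dispatch("order_status", "A123"))
    print(dispatch("issue_refund", "A123"))
    print(dispatch("issue_refund", "A123", approved_by_human=True))
```

This is one simple way to keep a human in the loop for exactly the actions you don't want to get wrong.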
And I think that's where a lot of the richest opportunities lie: to look at your business, figure out what the specific use cases in your business are, and then go do that. And actually, what some of my friends, Erik Brynjolfsson and others, have done, and my teams do this too, is work with businesses to analyze: if you have 10,000 or 100,000 employees, what are all these people actually doing? And to go through a systematic process of taking jobs, which comprise many different tasks, breaking them down into tasks, and figuring out which of the tasks are amenable to AI augmentation or automation. And I find that when we go through that brainstorming exercise, pretty much every time we find tons of opportunities that end up being exciting for businesses to pursue. Personally, what I'd like to see, rather than where we're going, is more work on human value alignment. It's really easy to understand what that means; we all have a general concept of it, we all have a certain set of values. But the reality is that when you come down to it, your values and my values differ, as do the values in the West and the values in the East. So it's not about human value alignment. It's alignment with a certain set of values that you can be transparent about, that you can provide control over, and that you can hold yourself or the model accountable for. So it's not only about getting the models themselves to be aligned with a certain set of values, but about having control, transparency, accountability, and flexibility, so that we can all have versions of those models, applications of those models, that align with the values we want and that we can feel safe about. And we don't have to agree on all those values. There is a core set of values that most of us, I guess, I hope, agree on, but there are certain things that will differ.
And I think it's extremely important that that happens sooner rather than later, so that the democratization of the usage of these technologies can go further, beyond what we think our values are, into all kinds of communities, geographies, and domains. And the second thing that I'm also super excited about is the application of all this AI to the different fields of science, because we have already seen examples of how AI can help solve, almost overnight, challenges that have stood in different fields for decades or centuries. All of a sudden, something that took five years to do takes you five minutes to do. And as we open up all those tools and we let people just go crazy and do all kinds of experimentation with them, we're going to see an unprecedented number of disruptions and breakthroughs and new ways of seeing the world that are going to change who we are as a society. So I think that's where we're going. I want to take a couple of questions. Does anyone in the crowd have a question? Yeah, please. Thank you so much for a great panel. I'm Landry Signé, Senior Fellow at the Brookings Institution and Executive Director at the Thunderbird School of Global Management in DC. There are a couple of dimensions that I would like you to elaborate on. With generative AI, we have the pacing challenge, the incredible speed of development, and also the coordination challenge, the multiplicity of actors and of the usages that could be made. And we are here discussing AI governance.
How do you think the various stakeholders could work together to address those pacing and coordination challenges, knowing that the public sector's ability to evolve with speed, with velocity, is pretty different from the private sector's, let alone civil society's? And given the diversity of stakeholders, what does participation mean? Because we are also speaking about the imperative of including civil society, but what level of participation would be considered meaningful? Thank you. I'll keep it short. It's tough, but I think education is going to be key. We're teaching Generative AI for Everyone on Coursera, and I think helping everyone have a basic understanding will be important to let all the stakeholders participate. But Peter looks like he's going to say something. Well, I mean, it's a core question and yet it's almost a question that's impossible to answer, right? I think we do what we can. We engage in venues like this to discuss. I think anybody in the field, whether you're a deployer, a builder, or just a user, should be engaged with government as government considers regulation. I think you should get out and try it. There are all sorts of organizations which exist today to facilitate conversations. Speaking for AWS and Amazon, we fund lots of research for third parties. There are just so many different levers that you need to pull to engage people in these discussions, and I don't know that there is any one lever. And there are so many different speeds at which different organizations, whether they're civil sector, private sector, or government, move. So how to steer it all, I don't know. The best you can do is contribute and engage. I'd like to add something as well. The way I think about this is like when you go to the beach and there is a lifeguard or there is no lifeguard.
And if there is no lifeguard because regulation has not made it that far, then you're swimming on that beach at your own risk. So education is important, to understand the risks that you are undertaking as a developer, as a user, as an organization, et cetera. And regulation can at least make it transparent whether there are safeguards of some sort: here and now, are you swimming at your own risk in this particular area? Well, you can also bring your family to the beach to watch out for you, I think. That's how I would think of it. Thanks a lot for a nice discussion. Daniel Dobosch, Swisscom Research Director. Andrew, you nicely made the comparison to electricity. Looking at the history of electricity, people discussed a lot all the risks it would bring, what people would use it for, what people would misuse it for. Same with connectivity: what would people do if they now had information at any given moment? So let me try to bring you a little bit into the future of, I don't know, five, 10, 20 years, and this is my question. Will we sit here in five, 10, 20 years and discuss that the biggest risk is that we made AI critical infrastructure, that it is no longer available, and that our services cannot work without it anymore? Oh, I see massively little risk of us deploying AI and then, for some reason, AI becoming unavailable, unless some really horrible regulation shuts it down.
I feel like AI has risks, and a lot of the things that Sarah, Peter, and Pilar described speak to that. One of the challenges of AI is that it is different from previous technologies, and I think something that Sarah alluded to is that there are different boundary conditions than with earlier technologies, so we don't really know as well exactly when it's going to work and when it's not going to work, which is why the way we manage it and govern it is different. But I can tell you that I work with a lot of product teams that are doing just fine in terms of testing it extensively, deploying responsibly, having a human in the loop until we're confident it is safe. So it's not that AI is harmless and will never do harm, but I think that a lot of the fears about AI are overblown. Anyone else agree with that? A lot of the fears are overblown? I tend to agree in the sense that I always think the best way forward with risk is to allocate resources to the risks that we see every day. I probably disagree with Andrew a little bit in the sense that I do think there are enormous risks that happen even with our models deployed right now, and that we need to allocate resources to them. But I do agree in the sense that we need more scrutiny for domain-sensitive areas. We need to allocate core fundamental research. You know, one of the most promising things I've seen recently is that every country wants to start an AI safety institute, which I think is actually not a bad thing. I think it will funnel needed research and strengthen technical talent within government, which has been notoriously difficult for governments to attract in the West, and I think it's really important that you have technical people informing what the realities are of how these models succeed and where they are brittle.
What I will say, and where we agree, is that for me, there's been a lot of anxiety around long-term existential risk, which feels like, in some ways, something that sometimes displaces conversations about the reality of how these models are deployed. And I always ask, well, how do we measure progress along those axes of existential risk? And we don't have a measure of progress, because there are many possible risks and it's hard to quantify appropriately what the actual probability or likelihood of any of them is. Andrew, do you want to say something? So I've actually spoken with quite a few people about existential risk, and candidly, I don't get it. Many of the arguments are very flaky, vague and fluffy statements, and I can't disprove that AI could wipe us out, any more than I can disprove that radio waves emitted from Earth won't attract aliens to come and wipe us out. But it's so fluffy, I don't know what to do about it, because I can't disprove a negative. And I agree with Sarah: this is a distraction from, frankly, questions like, is there disinformation or misinformation on media or social media? Those are some short-term things where we could pass transparency and safety types of regulations and take action, and this other thing is a huge distraction from that. Oh, by the way, when I speak to US government officials, many of them kind of roll their eyes at it, whereas interestingly, Europe is taking extinction risk more seriously, so there is a divergence. And one of the things I see is that there is a faction in the US government that, tragically, because of real or perceived adversaries potentially having access to open source, would welcome a slowing down of open source.
In contrast, the European push to slow down open source I really don't understand, frankly. I think that if we were to slow down open source, Europe would be one of the places shut out, because of the concentration in the US right now. So I feel like the theory behind slowing down open source in the US is flawed; I don't think it's a good idea. And then I think it's even more obviously not a good idea for Europe, because Europe would be one of the places shut out if some of these laws come to pass. I know I'm pushing it, but I am going to take one more question. Is anyone interested? Yeah, please. Thank you very much. Andrew, you recently tweeted about your son creating a mess with a chocolate cookie that he found in the pantry, and in that tweet you brought out what I think is one of the most important points, which is that it just might be easier to align AI with human values than to align humans with human values. And coming from a country like India, I think that is one of the biggest risks that we see, because even as we speak of AGI et cetera, behind every smart algorithm there is still a smarter human being. Any thoughts on how to fix this problem? Yeah, that's a great point. So what happened, one or two weeks ago, is that my son got into the pantry, stole chocolate, and made a mess. I was slightly annoyed as a parent. And I tweeted that, at that moment, I definitely felt like I had better tools to align AI with human values than I had to align my two-year-old son with human values. And more seriously, I feel like the tools for aligning AI with human values are better than most people think.
They're not perfect, but if you use ChatGPT or Bard and try to get it to give you detailed instructions for committing harm or committing a criminal act, it's actually really difficult to get the AI to do that, because it turns out that if we teach an AI that we want it to be honest, helpful, and harmless, it really tries to do that. And we can tune the numbers in the AI to very directly have it do that. Whereas, how do you convince someone not to invade Ukraine? I don't know how to do that. So I sincerely find that we have better, more powerful tools than the public broadly appreciates for just telling an AI to do what we want. And while it will fail to do so in some corner cases, which tend to get a lot of publicity, AI is probably already safer than most people think. Which is not to say we should not also have, maybe, every country take a view on AI safety and keep investing significantly in it. Thank you all.