Welcome to the World Economic Forum's AI Governance Summit here in San Francisco, and to all of those joining via live stream. If you are following along and want to share on social media, please use the hashtag AI23. I'm Ina Fried, Chief Technology Correspondent at Axios. It's my pleasure to welcome everyone for what I know is going to be a great discussion. As everyone is keenly aware, it's been about a year since ChatGPT changed all of our lives. But I think we're all still trying to figure out what it means and what it's going to mean, and I'm thrilled to be joined by an eclectic panel, as was discussed backstage, but one that I think will give you a few very different glimpses. Again, we're all trying to figure out: what does it mean for me? What does it mean for my job? What does it mean for my community? What does it mean for my culture? I think we have a lot to dig into. Before we begin the discussion, I'd like to invite Jeremy Juergens, the managing director and head of the Forum's Center for the Fourth Industrial Revolution, to deliver a few remarks, and then we'll kick off the discussion. Mr. Juergens. Thank you, Ina. Thank you, everybody, for joining us here. We have over 250 people assembled for the AI Governance Summit. This is the first time the World Economic Forum has convened a group and a summit around a specific initiative, in this case our AI Governance Alliance. We established it earlier this year, following a convening of over 100 top leading AI experts in April.
Out of those discussions, we understood how quickly the technology was moving and the dramatic effect it would have on the economy, on society, on livelihoods. We understood how little, even among the world's top experts, was understood about how we might guide or even control it and put in place the necessary safety mechanisms; about how we would govern it so that we balance the risks with the opportunities; and about what it would take to fully harness and unlock the innovation benefits that might emerge. On the basis of those discussions, we published the Presidio Principles. This attracted attention not only from business leaders but also from policymakers, civil society groups, and academics. And with that, we established the AI Governance Alliance. So today is another step in this journey of helping us understand and go in depth on each of the different domains. There will be a lot of discussion of safety: what is the role, and what are the technical pieces we need to put in place? On governance, we know there have been discussions around the recent announcement out of the US, out of Bletchley Park in the UK, the Hiroshima process, and other groups, so a lot of different jurisdictions are navigating this governance framework. And importantly, how do we not lose sight of the opportunities it presents? Not only in developed countries (the US is in the lead, and China has a very clear role), but how do we make sure those benefits extend to developing countries, and even to other developed countries that may not today have the capacity to put their own models in place and do the necessary training? All of these topics we'll be covering over the next two days.
With that, I'd like to thank all of you for joining us in these discussions, for the contributions you'll make both here today and, in more depth, in the technical discussions tomorrow. I look forward to navigating this new era together. Thank you. Well, thanks. I think you hit on probably the three most important conversations we're all trying to have simultaneously. How do we harness this incredibly powerful, incredibly fast-moving technology? How do we recognize its limitations, its risks, its biases? And then, how do we reshape ourselves as humans? It's somewhat ironic that the technology we're able to most easily communicate with, the technology that finally speaks our language, may require more change from us than learning any programming language ever did. I do want to tap into the expertise of each of the panelists here. Maybe we'll start with Minister Gan. Minister Gan, talk about how you're thinking of it from a public sector perspective. What does it mean? Singapore has obviously been on the leading edge of technology and business in a lot of ways. How are you looking at what we've all witnessed over the last year? Thank you very much. First, let me thank all of you for inviting me to this forum. Singapore is in a unique situation, partly because we have very scarce human capital. AI therefore promises to allow us to augment our human capital, and it seems able to solve many of the challenges of our time. But AI also brings new risks that we will have to manage. Its capability to generate content at speed and at scale creates new risk elements, including, for example, a possibly higher risk of misinformation or disinformation and fake news. It can also be used by those with ill intent. So it is also an area where we will need to think about how we can encourage the appropriate use of this technology.
But if we use it properly, it has the ability to uplift our economies and our societies, and to empower our workforce to do even better. For Singapore, our experience is that we want to embrace AI. Singapore was one of the first nations to roll out a national AI strategy. This was done in 2019, and we continue to invest in building our AI innovation capacity. With the advent of generative AI, it's an even more exciting era: things that seemed impossible in the past are now enabled by generative AI. But it also requires us to pay specific attention to the governance of AI development. Let me share Singapore's experience in three particular areas. First, to fully embrace the potential of generative AI, we need to focus on preparing our businesses and our people. This is very important because it minimizes the risk of widening the digital divide. So we have to help our businesses transform, equip them with digital tools, and provide them with the necessary support to harness the potential of generative AI. At the same time, we need to make sure our workers are able to gain the skills they need to take advantage of generative AI. Based on recent reports on the future of work, Singapore's workers were found to be among the fastest in the world to adopt AI and to pick up the skills necessary to do well in implementing it. So preparing businesses and people is critical for us to be able to harness AI. Secondly, because of the speed at which AI is developing and evolving, we also need to adopt a nimble, flexible, and practical approach to governing AI development. One-size-fits-all rules and regulations are going to be very difficult, because by the time we roll out the rules and regulations, the technology has moved on.
So it's important for us to take a different approach, a nimble and practical one. That is why Singapore has rolled out an initiative called AI Verify. This is a testing tool that allows developers and owners of AI systems to test and demonstrate governance of their AI systems. These are ways we need to continue to evolve and adapt to the new environment. Thirdly, it is also important for us to come together collectively through multilateral, collaborative platforms. Platforms like this allow us to share our views and our experiences so that we can collectively address the challenges of AI and tap into its opportunities. At the end of the day, while we recognize the potential and benefits of generative AI, we must also recognize the risks involved, and it will be important for us to come together as a global community to address this issue squarely and to collectively steer the development of the technology, so that we can advance it for the public good and do so in a safe and secure way. Thank you, Minister Gan. I hope you'll have more opportunities to share where Singapore sees itself and some of the projects you're working on. Dr. Koller, I was hoping that since you have this vast knowledge of the technology space and have been working in AI since before the rest of us realized it was a thing last year, you could speak to something that struck me: AI is not new. The idea of it, and many of the places we want to go, were laid out by researchers and by science fiction for decades, but the technological path that got us where we are today has surprised a lot of people, including a lot of people in the field. What does it mean that it's generative AI that has suddenly come to the forefront, versus some of these other technologies? A lot of other AI and machine learning tries to analyze, tries to predict.
This technology, a lot of people have called it autocomplete on steroids. The results we get from it are amazing, but I assume that by its very nature there are shortcomings to what it can do, given the way it operates. Help explain really quickly how we got here, what it means that this is the technology that broke through, and how we go forward with it. Yeah, so first of all, thank you for inviting me here. It's a pleasure to be here. I'm what you might call an OG AI person. I've been in the space since the early 90s, as the space was coming into being, and I remember chairing an AI conference that was the biggest in the field; it hit 1,000 attendees the year I was the program chair, and we thought that was a big event. That just shows you how quickly things have evolved. What seems clear when you look at the arc of the technology is that we're living in the midst of an exponential transformation, and that's what made it hard, I think, to perceive and to predict the future. When you think about how an exponential curve goes, initially the time it takes for a doubling of capabilities, whatever those capabilities might be, is long, and so you think you're on an ambling, slow, kind of linear curve. Then it suddenly starts to accelerate and you say, whoa, wait a minute, what happened? I think that's where we are now: at an inflection point where we suddenly realize we have been on this exponential curve all along; it's just that we hadn't noticed. That exponential curve has been enabled by the convergence of three tidal waves that have all come together at the same time. The first, which I think is actually the most important, and I'll come back to that in terms of where things are going, is the availability of data.
When I started in the field a long time ago, a large data set was a couple of hundred samples, and you felt fortunate to have that. Now, we've been training literally on the web: text, images, and increasingly speech and video and so on, and it's that amount of data that allows the kinds of models we're seeing to be trained. Without it, all of the methodological innovation would not actually amount to anything. I know, because we were there at the time. The second, enabled by that amount of data, is that more sophisticated, more complicated models can now be trained, and they become increasingly powerful the more data you feed them, because of how the technology has been adapted to the richness of the data. The third, of course, is compute on tap, which we also didn't used to have way back when. Those three have come together to enable the incredible acceleration in progress that we're seeing. Now, as to why generative AI has been the thing that has taken over, I think there are actually two elements to that answer. The first is just that it's so understandable, so relatable to people. If you're actually having a computer talk back to you, it feels amazing in ways that other, less visceral applications don't. I think that's part of what we're seeing, but only part. The other part is the somewhat surprising observation, and I think it was a surprise to pretty much everyone, of the extent to which this autocomplete-on-steroids task, building a model around it that's forced to predict in a very realistic way what the next word is going to be, has forced the model to learn so much world representation, so much reasoning, in order to do that task really, really well.
So I think that is an important lesson learned, and it has certainly carried over into how we think about solving other problems, in terms of the different pieces that need to come together: defining a really high-value proxy task, like autocomplete, combined with enough data to actually make those models learn something that is a meaningful model of a particular domain. And for those of us like myself who work in data-poor domains, where you really need to be thinking all the time about where to get more data, or to generate it (we print data in what I do), the question is how you generate enough to leverage that incredible combination of methods. And I think, Dr. Koller, you hit on one of the things that is really a key part of this, which is what these generative models really accomplish: taking very complicated tasks that used to require a specialized form of communication, whatever that was. For some people that was Photoshop and editing photos; you had to know how to use a very specific tool. Now you can just describe what you want. In that vein, OpenAI had its event last week, where it announced the ability to build custom GPTs. I go to developer conferences all the time, and it's rare that I go home from them and build anything, because in the past they required knowledge I didn't have. I went home from that developer conference and was able to build a couple of really useful tools. What it reminded me is that ChatGPT, in a sense, is the parlor trick. It's the introduction, the tutorial. But the power of these models isn't just reasoning against the whole world; they can be incredibly powerful when you're using that reasoning against a specific set of data.
For me, it was planning my coverage for next year's Olympics, but for companies it's a different matter, and I want to turn to Sebastian, because Salesforce has been at the forefront of how businesses make use of this. Businesses obviously bring a different set of concerns. It's one thing if I go home and use my own data; I'm the IT manager, I'm the CEO, I'm all of those things, so I get to decide. It's really tricky when we start getting into businesses using their data. Salesforce has spent the year rolling out a bunch of tools, a lot of them in limited pilot programs so you can get a lot of feedback. I've been to all the events where you've announced them. In a lot of cases, there seems to be a big emphasis on letting these systems generate a lot of information, but not letting them take actions, leaving that to the human. My suspicion, and you're the chief legal officer, so you can probably tell me if I'm right, is that there are two reasons for that. One, it's a best practice; we don't know these systems that well. But also, it shifts the liability to the person doing it and not the company providing the software. Talk about where we are. What have you been able to do in this first year? What are some of the concerns you're still grappling with? From a company that's been at the forefront, where are things right now in terms of businesses being able to use these tools on their data? Sure, thank you, and delighted to be here with all of you. I suppose I'd come at that a couple of ways. First, to echo your comments, AI fundamentally is not new, right? And in a way, this exponential transformation could not have come at a more important time. We think that businesses need AI now more than ever. Governments, constituents, stakeholders need AI more than ever, and the broader society, we think, needs it. Now, what do we need?
Well, we need the benefits, right? And then it's how you achieve those benefits while taking into account the relevant risks. Look at Salesforce: we decided to make a very significant additional investment in AI research around 2014. Working with our customers, we viewed ourselves as pioneering the AI era around predictive AI. We launched Einstein in 2016, and it has made trillions of predictions: what's the next best action, and so on. To the point you're making, we haven't yet fully deployed what I think you're referring to as autonomous activity, autonomous agents. We do see waves: a predictive era, a generative era, an autonomous era, and then we can talk through what we think the other future areas are. I would note, when we were laying out what we called our Einstein GPT, one concept was: could we let people build skills? What are the skills, the actions, the capabilities that a customer or a developer would want to create using various software or different technologies? More to come on that front. I would also say, if we step back, that at Salesforce, and I think for many companies and institutions and certainly governments, trust has to come first. Our core values: we lead with trust, then customer success, innovation, equality, and sustainability. We think asking the right questions now will enable us to create the future we want to have, rather than the future we may end up with. The question is how we ensure a trust-first motion, so that these concepts of responsibility, of ethics, of legality are built into the use, development, and deployment of what we think are exponential opportunities in this technology. Build it in early.
We do believe that this AI revolution is, or needs to be, a trust revolution. When we designed what we call the trust layer, again listening to our customers, and early on also to regulators and the like, the idea was: how do we make this safe and effective? You mentioned data. When we think about data, it's the quality of data, the integrity of data, the security of data; and when we think about risks, it's mitigating risks, monitoring risks, but also fundamentally understanding the risks. There have been two things that have surprised me as this new phase of generative AI has developed. The first is how quickly we all appreciated and embraced the need for a multi-stakeholder lens on this new technology, this opportunity. It happened very fast: thinking about how you partner across academia, civil society, the private sector, the public sector, and human beings, employees, the workforce. I was surprised how quickly this idea of a multi-stakeholder approach took hold. At Salesforce, it's something we've always thought about through the broader Salesforce ecosystem, and through the idea that business can be the most profound platform for driving positive change. The other element concerns governments and regulation, because I do believe one of the great opportunities for governments will be improving the constituent experience. AI is certainly going to transform how businesses interact with their customers and, in time, how businesses interact with other businesses. But what about all of us just as human beings, in our communities and the like? How could you improve the constituent experience using AI and technology?
I think that's an incredible opportunity for the public sector. So the other surprise, beyond the multi-stakeholder lens and the need to bring in diverse sets of voices, is that we need collaboration between the private sector and the public sector so that we can move really fast but thoughtfully. When you think of the legal landscape more broadly, and the ideal role of law, we need to figure out how to use AI to accelerate the velocity of wise decision-making, not speed for speed's sake. The other concept I'll introduce here: we're all really excited about the moonshots, the opportunities that can arise if we get AI right. Incredible diagnoses for various health conditions, better educational outcomes, and so on down the list. Those are really exciting ideas for me, and things we're also looking at very much at Salesforce. But I want to talk about something that's a little more boring, and I think much more important when you think of scale. I want to talk about the problem of mistakes in society. I want to talk about the problem of inequalities, of people who don't have access to the best of anything, whether that's the best healthcare, the best technology, or the best software. What if we all focus on that? You could look at it through the SDGs, each of those different sets of goals; we believe they should be available to everyone. What if we use AI to raise the floor, not just go for the moonshot? Everyone is going to invest and focus on the moonshots.
What if we think about raising the floor, reducing the impact and the likelihood of mistakes that have horrible effects on people and never really get written about or focused on? Just thinking about this idea of raising the floor, in every potential venue and every potential region, is, I think, a tremendous opportunity. The other thing I would raise, on this issue of accuracy and reliability and ethics and responsibility: we're all very concerned about hallucinations in these models. The area I have just not heard enough about, and I wonder about, is to what extent these hallucinations are a feature rather than a bug. What I mean is, is there an element in which the thing we're excited about, generative AI being able to create new content, which then creates a whole host of new ethical issues and the like, depends on it? We must have accuracy, or at least transparency, about when something is just flat-out wrong. But is the hallucination element part of how these technologies are able to create and generate new ideas? And how do we deal with that? At Salesforce, as we build on the trust layer and other items, we're trying to grapple with some of those opportunities and challenges too. And I think you raise a lot of the issues that we want to dive into, and we will in a second, but before we get into the risks and the benefits, the other element I think is important to address, and Dr. Koller touched on this in referring to an exponential amount of change, is the speed. I think we have to take a moment to talk about the speed with which all of this is progressing, because I would posit that things are actually changing faster than many humans are able to adjust, let alone institutions. How do we deal with a technology that is moving this fast? AI and generative AI are often compared to other big shifts in technology.
We had the computer revolution, we had the internet, we had smartphones. All of these were major shifts and changes, but in covering each of those technology waves, and I've been fortunate enough to cover each of them, the pace of change wasn't actually that great. They were huge shifts, but it took us quite a while to make them, and the technology underlying them wasn't actually changing that fast, in part because it was hardware. I have been amazed, and continue to be amazed, over the past year at just how fast this is moving. I will talk to a company about where they're going, and three months later the amount of progress is dramatic. One example I saw at a recent conference was a slide showing the same prompt in Midjourney a year ago and today. A year ago, it looked like a Chagall my parents might have had: very interesting and artistic, but nothing any of us would call realistic. The same prompt today generates something photorealistic, and it's similar with other types of generative AI. My question, and maybe we'll start with you, Jeremy, since you're managing this for the Forum, though I'm interested in how everyone is dealing with this: are we capable of keeping pace with this rate of change? Because even if we want to get to all the things Sebastian is talking about, we have to, to use his word, not make mistakes as humans in what we allow and don't allow. So I'm curious how all of you are thinking about this. If anyone disagrees that the pace of change is that fast, I'd love to hear it, but how do we deal with technology that's changing this fast? Yeah, I think that's a great question and something we've been focused on quite a bit over the last year. I was playing with Midjourney with my kids before it was mainstream, before the ChatGPT release, and I thought, oh yeah, this is pretty cool, amazing. Don't need Photoshop there.
With the release of ChatGPT last fall, all of a sudden people woke up, and it wasn't that AI was new, right? AI had been ongoing. It reminded me a bit of how companies approached digital transformation. There was a period when every company was trying to go through digital transformation, and at some point it became mainstream; they took it for granted, hired a chief digital officer, and moved on to other things: ESG, sustainability. In the last year, they've all been focused on AI. So there is this high-level element, but to come back to your question: how do you address the speed, not only within organizations but across society as well? Because I think there's a lot of fear that comes with this. We hear about the fear of killer robots and extreme disasters and so on, which is quite exaggerated. I'm actually much more concerned about simply exacerbating the imbalances that are already in the system today: around digital safety, around inclusion, around very simple and basic things, before we even have to worry about what's further down the road. And to address these, the most important element we focus on is inclusion. We talked about multi-stakeholder approaches here, and I can give an example of work we've done in India. We've been looking at how you can use AI for agricultural innovation for smallholder farmers in India. Over half the population of India works in agriculture, so you're talking about roughly double the population of the US. Eighty-five percent of those farmers are managing a little less than two hectares, so relatively small plots. And a lot of them have limited access; a lot of them don't have phones. So the question is, how do you then leverage the technology for their benefit? One of the first places we started was actually bringing the farmers into the discussion.
Through farmers' cooperatives, we brought in startups, we brought in large Indian companies that are at the forefront of agricultural production, we brought in foreign multinationals, we brought in governments, we brought in agronomists, et cetera. Now, that discussion took time, right? We spent over a year on it. But in the process, and Sebastian, you mentioned the word trust, the trust actually came from including a much larger ecosystem in the dialogue. It wasn't just some company coming out and saying, okay, we've got a wonderful solution for you, it'll solve everything. We actually involved the farmers; we involved all the different elements throughout the value chain. I want to press you a little on this, because on the one hand, the opportunity is incredible. I remember, in the cell phone revolution, what it meant just to give subsistence farmers information. These decisions of what crop to plant, when to plant, which market to go to are basically the difference between feeding your family and not. It made a huge difference when you could tell a subsistence farmer in India or in Africa which market to go to. You're going to have to walk a day to sell your crops; if you know which market is going to offer a higher price, again, that's the difference between feeding your family and not. So having more data, and being able to deliver it in a conversational way where the farmer doesn't have to do much more than ask a question, is amazing. But I also want to challenge you: you were meeting for more than a year. I suspect the technology shifted a ton. So how do you have those long-term conversations? How do you build that trust and not have the result be: here's the best thing we could have done a year ago, which has less relevance than you would hope today? Yeah, so on this I maybe would disagree a little bit on the speed. The speed in the lab is happening quite quickly. The speed in deployment takes more time.
We exist in very complex political systems, complex economic systems. And if you come back to this question of healthcare, for example: yes, you could get a diagnosis from ChatGPT. But the question is, how many of you want your diagnosis derived from a combination of Reddit data, Twitter or X data, and so on, versus verified, validated, trusted sources? So we do have agency in this. We have agency in determining when we decide to use the technology, and in making conscious decisions about when we don't and we delegate to someone else. So I think it's important that, as we navigate something that's happening very quickly, we move slowly enough that we're actually making conscious choices and not just allowing ourselves to be pushed along in the rush. We'll actually get more benefits that way. If I come back to the Indian farmers: we ran the first pilot with 7,000 chili farmers in the state of Telangana. We saw improvements in yields and time to market, a reduction in the use of fertilizers, which are an expense, especially in the face of the recent energy crisis, and increased profitability for each farmer. We're now looking to extend that to other states. We didn't say, okay, let's now just roll it out to the whole country, work with the agriculture ministry and 600,000 farm cooperatives. We're taking a step-by-step approach. And because the underlying framework is developed in a way that recognizes the technology will continue to evolve, we'll be able to bring in new developments as they occur. This is also where I think the startups played an important role in those discussions, because they're often moving much more quickly than the large companies, even if they don't necessarily have the capacity to immediately scale up their benefits.
So I'm actually optimistic that if we consciously include a larger ecosystem in there, that we work deliberately, that we recognize the agency of the different individuals, that we can harness the benefits even as we mitigate the risks along the way. Yeah, please. If I can offer a very different perspective: I think technology has great promise. It can solve quite a lot of problems, but in itself it's not a solution; it's just an enabler. So for example, AI can provide answers to how to improve crop yield, what kind of fertilizer is best for this type of crop, and which time of the year is the best time to plant the seeds and to harvest. These are questions where technology and AI can help in finding solutions and answers. But to ensure that the farmers have a better quality of life involves many other factors, including socioeconomic issues and geopolitical issues. And you need to also make sure that, despite having the best crop yield, the logistics and the supply chain are in place. So you need to work on different aspects of the problem to be able to solve the issue, to deliver good outcomes for the individuals on the ground. So I think AI is a good tool, but it needs to be used in combination with all the other measures to be able to deliver outcomes.
And you also brought up a point, Jeremy, that I wanted to bring up with the rest of the group, which is: no, I certainly don't want to rely on typing into ChatGPT for major medical decisions, but I think that's one of the fundamental misunderstandings a lot of people have, that what generative AI means is I go type into ChatGPT and expect a miracle response. I don't necessarily want that, but I do want the doctor I go see, who went to medical school 10 years ago, to have access to a tool that's trained on the really good data, not Reddit, not everything that's been said on the internet, but the medical things. And by the way, spoiler alert, if you read my newsletter tomorrow, this will make more sense. But how powerful is the combination, and maybe start with you, Dr. Koller, since you're in all these fields, both AI and healthcare, how powerful is the combination of this interface, these models, with very specialized data, particularly in healthcare? So I think that's a great question, and I think actually that interplay between a person and the computer is something that we should be leaning more into. There was a recent study by Erik Brynjolfsson that basically showed that you could deploy the technology entirely on its own, or you could deploy the technology in partnership with people. Both of those produce legitimate productivity gains, but when you have the partnership between a computer and a person, you actually have a considerably greater efficiency gain, and I think you're also able to avoid some of those issues of loss of trust and hallucinations and some of the other risks that you run into if you just launch the technology on its own. And in the field that I'm in, you oftentimes hear people say, oh, we have the first entirely end-to-end AI system for X, Y, and Z, and my questions are: A, do you really? What does that even mean? And is that a good thing, even if you did have it?
So that's a place where I would, I think, ask questions. Now, to the other aspect of what you asked, I think there is clearly a need to create specialized agents that are trained on high-quality data, not Reddit, not X, not whatever, where you curate the data and make sure that it's actually representing truth. In some cases, you need to generate data; you need to invest in harmonizing and cleaning your data. I mean, garbage in, garbage out was always a thing in machine learning, but I think as you make the AI more powerful, it gets better and better at anchoring on falsehoods and inaccuracies in your data and amplifying them. And so I think we need to make sure that as we create what I would call an app ecosystem on top of whatever generative AI models are out there, we empower those apps with the right kind of high-quality, curated data. And if you ask me, I would say the value is going to be split quite nicely between the baseline foundation models that create the core representations, whether it's text or images or whatever, and these verticals that were trained in a very thoughtful way, with the right feedback and the right input from humans, on high-quality data in a domain-specific way. So you bring up what I would say is both the power of the technology and its limitation, which is that it's going to make incredible insights, but based on the data it has. And a lot of these systems have been trained on the whole internet. And I think when we talk about the risks, a lot of the risks are tied to the fact that the internet was created, the data was created, by humans. It's processed by machines, but it has all of our biases, it has all of our proclivities to division, and all the problems that plague our modern world are well represented on the internet. How important is it that we, while we're just in the infancy of this, are recognizing those limitations? Where does bias rank?
Maybe, Sebastian, you're dealing with all these risks. Obviously, you've got to prioritize them. Where does bias fit in? Where does income inequality fit in, versus, again, as Jeremy was saying, a lot of talk about the robots taking over? I usually try and suggest people separate those conversations. I think they're both worthy conversations to have, but you can't really talk about these near-term concerns, misinformation, bias, income inequality, when you're saying, oh, but the robots are gonna kill us. So let's have special time devoted to our fears around the robots killing us. I do think the more we talk about that and the more we recognize it as a possibility, the less likely it's gonna be. But I also think we have these other problems that are here, they're present; misinformation's probably at the top of my list. But how do you move forward, and then I want to come to Minister Gan on the same question, how do you recognize these risks, work to address them, and still move forward? You're not gonna get rid of bias, you're not gonna get rid of income inequality. How do we move forward? You talked about lifting the floor; that's certainly one way. I think it's so important that you raise this, because I think this is exactly the kind of conversation we need to spend more time and resources on, right? And focus on these solutions. What you're highlighting is, we think there's sort of a three-pronged lens you can take: trust, responsibility, and impact. You asked me, where do these issues of bias, where do the issues of toxicity, where do these other issues of inequalities fit? They need to rank really high. So what do you do with it?
Well, so to look at Salesforce, the way we looked at it is we said, we need to make sure we're always really shifting left on this trust type of conversation, which means as we think of developing our product solutions, partnering with our customers, whether our customers are private sector, governments, nonprofits, foundations, what have you. Civil society. We'll come back to that in one second. Explain shift left, because it means one thing to developers, and Salesforce is known for shifting left in that sense; talk about what you mean on the development side. So what I mean, and for those more familiar with the software approach you'll see there are some ties to it, is here's what we can't have happen, or what at Salesforce we cannot let happen, and hopefully everyone here says let's not let it happen: that trust is an afterthought, that bias and toxicity issues are an afterthought. That ethics, responsible innovation, and grappling with the ethical and humane use of these technologies is an afterthought that you figure out later, after products have been deployed, developed, and scaled. Shift left means bringing these topics around bias, toxicity, and misuse, the problem of humans misusing the technology, really early into your product development life cycle and your design, whether it's, again, at your organization, your company, your government, or your local municipality. Having cross-functional councils as you're thinking about developing technologies or developing solutions, or determining what are the impacts we're seeking to have. Very, very early, when you're creating your product roadmaps, right, or your impact roadmaps, or your inequality solutions, right at the beginning, and then throughout the early stages, not at the very end, like, oh, by the way, shouldn't we have started to look at these risks, and then play catch-up on it?
So what does that mean for us? We said, okay, let's build toxicity filters, right? Let's grapple with these bias-related topics. Let's think about, when you talk about bridging the old digital divide, sort of the AI divide, how do we very early on also focus on enablement, upskilling, re-skilling? We've been partnering with many folks across the globe around how you use our Trailblazer platform, which is free content that we've developed about how you train and enable people on all sorts of items, right? And making sure that, whether it's at our schools or in our workforces, again, upskilling and re-skilling, people understand what the opportunity is and they can deploy it really fast. We do think there are technological solutions to some of these issues, but it also requires companies to decide, let's build in, right, trust by design, compliance by design, responsibility by design. Again, rather than, oh, someone else will figure it out. One quick point I wanted to also raise: you talked about the farmers. I think this raises an even broader and interesting question, and this relates to what you were highlighting. What are the domains and contexts where, in order to achieve the benefits in a way that is thoughtful about risk, but achieve the benefits for all or in an inclusive way, do we need to have adoption and embrace of the solution? And if you're in a domain or a context or a use case where, to achieve the really important benefits, you must have that embrace and adoption, then you must do what Jeremy just outlined, right? Because it's not just someone from up on high that says, okay, now it will occur. You have to have the human beings, the organizations, those that may question, that may be skeptical, that may doubt, may lack trust.
And that's where you have to work, I think, really early, and have eternal vigilance around making these solutions, this technology, and the like worthy of the people's trust, worthy of enterprises' trust, worthy of the public sector's trust. And it's really hard, because it requires you to grapple with very serious trade-offs. Minister Gan, maybe that's a good time to come back to you, in terms of, you are looking at this from the perspective of: AI is here, how does my country take advantage of it? How do I use it to improve the well-being of my citizens, the economic competitiveness? How do I make sure I'm not introducing a new dependency? I think we've all learned over the last five or six years that who you're getting your technology from matters a lot more than we thought when we believed it was one global economy and it didn't matter geopolitically. I think, you know, things have shifted quite a bit. How are you thinking about where the greatest opportunities are and where you need to be careful? First, let me also address the issue of risk. I think with any new technology, there's always risk, and particularly with technologies like AI or generative AI, there are risks that we are not familiar with, and therefore there's always this fear that we will not be able to manage the risk. But I think what we should do is do our best to identify the risks and to tackle them, to mitigate them. Because even human decision-making processes are risky. We also have human biases, we also have our own experience, and even doctors make mistakes. As a former health minister, I must say that AI actually has great promise in healthcare, for example, in drug discovery. It will help to speed up the process of drug discovery; I believe Dr. Koller is working specifically in that area.
It can be done at a much lower cost and with fewer mistakes, fewer errors. I think this is very important. Another area which Singapore is looking at is how to make use of AI to encourage or to promote public health: the things that you should do, the exercises that you should be undertaking, the types of food that you ought to avoid, and we can make use of AI to customize for the needs of individuals. At the same time, it is also a possibility for us to begin to embark on precision medicine, to provide customized medicines depending on the makeup of individuals. So gone may be the days where we have a generic medicine that is a bit of trial and error, where sometimes it works for you and sometimes it doesn't, whereas precision medicine has the potential of a more targeted therapy. So these are things that we are looking at, and Singapore is also looking at how we can apply AI and generative AI in the manufacturing context. How can I use artificial intelligence to optimize my manufacturing process, to optimize the management of an entire supply chain, particularly in today's world when we are operating globally? Very often it's beyond human capability to manage global operations, and the assistance of AI will allow us to do so in a more efficient and more optimized way. So I think there are many opportunities for us to develop AI capabilities, applications, and use cases. At the same time, we need to be conscious of the risks involved and have a robust governance system, and the system must evolve as the technology evolves. It cannot stay static. And therefore, Singapore has adopted a sandbox concept where we allow the technology to evolve with very light-touch regulation. At the same time, we keep a very close watch on the development of the technology, and we make adjustments to our rules and regulations as we go along.
So this way, we allow the development and evolution of the technology, but at the same time we make sure that its application is as safe as possible. And if we discover a risk, we must be quite prepared to move and to adjust our rules and regulations, our governance system, as quickly as possible. So we do need a flexible, practical, and nimble governance system on the use of AI and the development of AI. Can I just add on top of that? I love the examples that you gave, Minister Gan, about the use of the technology in drug discovery and in healthcare, and I want to use that to highlight a point that I think often gets lost, because the consumer use cases that we all find so immediate and relatable have kind of taken over people's imaginations of what the technology can do. And I think the examples that you highlighted, which are the examples that also speak to me, are use cases where the computer actually does things that a human simply cannot do: assimilating tremendously large amounts of complex information and finding the patterns, whether it's identifying a new therapeutic intervention that a human would never identify, or whether it's being able to finally dissect the complexity of human biology to the point that we can actually identify which patient is going to benefit from which medication versus not, which requires assimilating so much information about human biology, human anatomy, omics, genetics, and imaging, and really creating a diagnostic and therapeutic path that a human clinician would never be able to get to. And I think those use cases often fall by the wayside because people are all about, whoa, diffusion, and we can create beautiful images and no longer need Adobe Photoshop, which is great, but there are use cases that are just beyond the realm of human capabilities.
And I think those are actually going to be the ones that in many ways are going to be the most impactful of all. I agree, and I also think that's where our challenges get pushed even further: how do we as humans manage more and more work that's being done above our capability? Again, there's all this talk about AGI. I don't think any of these technologies are going to replace humans, but to your point, I do think already, and increasingly, they will be able to do things at a pace and scale that humans can't, but also factoring in way more things than a human brain can process. I was struck, I had the opportunity a few weeks ago to ride in an autonomous vehicle with Toyota, but I also had the opportunity to drive one where the human and the car were both playing a role, but I wasn't playing the role that we're used to. So we're used to direct manipulation: I turn the steering wheel and all the wheels go that way, or in the most advanced system, maybe only the rear wheels do. The computer's capable of saying, well, the human driver wants to go left; I know the conditions; I should actually turn the left front wheel this way. Anyway, my point being that there are actually a lot more factors a computer in charge can take into account, which we've simplified away because humans can't handle them. Sebastian, how are you thinking about how we manage responsibility when the work being done is, I don't wanna say above our pay grade, but involves more factors than we can handle? And correct me if I'm wrong, if anyone disagrees, but I think we are gonna move into a world where we are letting computers take action. I think that's the most exciting part, and it's the scariest part. Right now, we're using it to inform human decision-making. It's decision support, but it's gonna be taking action. How do we prepare for that world? Yes.
No, I'm smiling just because, you know, at Salesforce we've had for a long time, and I think this is public information, a dedicated Salesforce Futures capability that we really infuse into all of our discussions around what are all the potential future scenarios that could occur, which ones we think are more or less plausible, and which are the ones we wanna help support, or whose risks we want to mitigate. You know, through the lens of trust, responsibility, and impact. So I think there are three interesting areas that we're focused on, but maybe other organizations and society, you know, could be focused on as well. There's an old line in the business world, but I think it applies to all organizations: culture eats strategy for breakfast. So if we think about the pace of change and velocity of change, and just speed, speed, speed, have you built an organization that is able to respond, that has built the capability of innovation and a culture that can be fast, that can take in inputs, that can adjust, that can pivot, right? That can take in new information and then make adjustments, rather than a culture that is totally dependent on, right, the past. I think number two, on these issues that you raised, how do you think about human capital management, right, and corporate strategy, or government strategy, or non-profit strategy, when you're managing a mix of human beings and what we call, you know, autonomous GPT agents?
You know, when we launch our Einstein GPTs and those different skills and the like, I'll give you a brief little vision. Again, this issue of how you manage this broader kind of workforce where hopefully, again, you've got humans who are amplified and augmented, right, with, you know, Einstein Copilot or whatever it is, partnering with AI to do their jobs better, to have their impacts be more meaningful, and then also dealing with folks that, right, aren't human, right, autonomous agents. But here's sort of the interesting dynamic. Right now we think about how people interact with each other, and it's the idea of, okay, there's a human interacting with a human, and then maybe there's a human interacting with some AI on the other side. Again, our view, our strong principle in our acceptable use policies and the like, is disclosure: be transparent, we should all know, right, when we're interacting with that. But there will come a time where it's not a human interacting with an AI; it's an autonomous agent interacting with an autonomous agent, and then continuing to interact with autonomous agents, and it's all going back and forth, in any potential domain, certainly in business, whether it's in sales, in R&D, in marketing, in procurement, in all of this. We're already, by the way, seeing the first legal contract, it was in the NDA space, you know, negotiated solely by, right, agents on each side, overseen of course, right, by lawyers who hopefully were paying very close attention. But this whole issue of agents interacting with agents, and then the concept, you know, we talk about human in the loop a lot. I think you raised an important point; look at the current general state of the world around AI and these technologies.
It's actually not just human in the loop. Humans are making the decisions with the benefit of the inputs, the AI inputs, the analytics and whatnot, but humans are making the decisions. This is very important, by the way. You could say maybe this is part of trust, responsibility, impact: how do you ensure that it continues to be the case that we have, right, sort of human oversight? But will there come a time where a, hopefully again, trust-first, trust-infused, responsibility-first, ethics-first, responsible-innovation type of decision-making apparatus is actually AI-led? What do we have in a world where AI is making capital allocation decisions? And here, you know, you asked about the significance of the pace of change. If things happen really fast, what happens is that the reality on the ground, the technology, outpaces the law, outpaces the public policy. And in order to achieve the future that we want to have, rather than the one that we may get stuck with, we have to have strong law and sound public policy, but also, because these elements are gonna continue to be driven forward, I think, by the private sector, and to some degree, and perhaps to an increasing degree, depending on how things occur, by the public sector, you have to embed in that a voluntary set of actions, and hopefully multi-stakeholder actions, this sense of responsible guidelines, right, acceptable use and trust, so that even as you protect and safeguard and hopefully even accelerate innovation, you're accelerating responsible innovation and people are really grappling with these issues really early. Jeremy wants to get the last word, so I wanna give it to you real quickly. I just wanna come in, because some of these questions aren't necessarily new, right? If you look at financial services, a lot of you have already used a credit card or a payment app today; there's AI behind that.
If you think of high-frequency trading, we actually have safety mechanisms in place, so when the agents are talking and basically trading against each other, we can say, oh wow, this is actually getting a little bit out of hand. I think what we can do is learn from some of the heavily regulated sectors, like healthcare, like financial services, and say, now knowing what we know, we'll actually start needing to apply this in more areas than we have previously, right? Because AI is being embedded in a lot more domains, we'll need safety switches, we'll need to think about what happens when agents talking to agents get out of control, when we want someone to say, oh, hold on, put a pause on that. Do you wanna do my whole bank account or just the trade for my Netflix subscription, right? There's a big difference in when you want human validation in that system. And the last element I would just touch on is roles. I recommend everybody look at our report from last September: we don't see jobs being displaced so much as specific skills. So we recommend all of you look inside your organizations and think about which specific skills or roles are most ready for automation, and also have that discussion with your team members, with your customers, with your organization. This will actually help us navigate, learning from both the successes and mistakes of the past and actually helping prepare for that better future. Well, we are gonna have to leave it there. Thank you so much to all our panelists for the insights. I know we're gonna have a fascinating couple of days at the conference. If you enjoy discussions like this, quick plug: I do a daily free newsletter for Axios. You can go to aiplus.axios.com. I'm sure we're gonna be talking more about this, not just here, but again in Davos in January, where last year ChatGPT was the talk of everything.
I think this year, how we deal with a world that's been changed by ChatGPT will be at the top of the discussion. I would ask everyone in the room to just stay in your seats. Our next panel is gonna be on the new age of Gen AI governance. It's gonna begin in a couple minutes after we quickly refresh the stage, and then after that session we'll have our morning break around 10:45. Thank you.