Hello. Thank you so much for joining us for the final session of Slush. Backstage we were joking that a year ago, a panel on AI policy on a Friday afternoon would have interested zero people. But this year has been pretty special, thanks to a small AI system called ChatGPT, which I think has really brought AI and AI regulation to the forefront of the conversation. Now, Sandro, you lead policy work at OpenAI and you've had a crazy few weeks; in fact, never a dull moment in Silicon Valley. How has all of this influenced the way you see AI regulation and the need for it?

Thank you, first of all, for having me. I don't remember the last time, and maybe there won't be a future time, that a conversation on AI policy sat between the former Finnish Prime Minister and what I think is karaoke and the closing party that everyone is looking forward to. So I appreciate all of you staying and listening to us for 30 minutes. Indeed, the last year has been pretty special. When, almost exactly a year ago, we were internally talking about a low-key research preview called ChatGPT, meant to give people easier access and just let them play around with our models, I don't think we could have predicted, and we didn't predict, where we are today. But one of the things we had hoped, and indeed it is a conscious choice by OpenAI to do what we call iterative deployment, is that the conversation around AI and AI governance gets better the more people have access to the technology and the more it touches the real world. And the last two or three days here in Helsinki, seeing what all of you are building on top of foundation models and how people are using ChatGPT, I think has changed the conversation. It has also changed the conversation around AI regulation and AI policy, and I think it has changed it for the better.
And we had also hoped, with the tour our CEO did over the summer and the conversations we've had with developers and leaders of governments across the world, that there would be a broader, more inclusive conversation about where we want this technology to go. And I think the last year has shown that the world has really stepped up and is now thinking seriously, across civil society, the private sector, governments, and the people deploying the technology. It feels like we now indeed have a global conversation around AI governance, and we think that's a good thing. That was the hope we had in releasing the products and getting people to just experience what is possible, experience the limitations, and ask better questions. Because I think the questions get better the more you get to play around with the technology and understand what it does and how it works.

Absolutely. Now, Lewis, you're at the heart of European lawmaking in Brussels. What's happening? Could you give us an overview of everything that's coming down the line? What do startups need to know in terms of AI regulation, rules that might apply to them? Give us the overview.

Thanks, Melissa, and thanks to everyone for being here with us. First, I really think that this type of discussion between policymakers, companies, and academics is of key importance, so it's an important thing that Slush is organizing it, and I hope it's the first of many events like this throughout Europe, precisely to bring together policymakers, companies, and investors so that we have a basis for the discussion. As you were saying, I remember when I did my master's thesis on AI. It was already about six years ago, and everyone was talking at that time about how Google's AlphaGo had won the Chinese game of Go three times in a row. Everyone was surprised, and the whole landscape was changing.
And you see today, with the compute power, the data scientists, and the analytics side of development, how much we are advancing the different generative AI systems, with ChatGPT from OpenAI, and what we also saw this morning from Google, and the large language models, and you see how much society will change in the future. So I feel it's very much the speed and scale of the technological transformation we are experiencing that makes policymakers and leaders a little bit worried about how things will change. And that's why you were talking about the global dimension. We have Europe's AI Act, where everyone is waiting for an agreement to be reached, and I hope that happens by March at the latest, because then we have the elections, but at least so that everything is agreed in this policy cycle. We also have the executive order from the U.S., the G7 Hiroshima process, and the Trade and Technology Council, and the President was just at the White House with President Biden discussing precisely that. But I feel we also need to bring two points to the startups that are important to underline. The first is that what we want to do is create a safe regulatory environment that also brings trust: trust for the startups and trust for the investors, so they know what to invest in. Just think about the sharing economy: companies sometimes need to deal with 27 different national legislations, and sometimes even with different municipal regulations, for instance in the home-sharing economy. It's much easier to deal with just one single piece of legislation across the EU. And finally, the other thing I'd like to underline is that we always speak about the defensive side, very much about legislation, norms, values, and principles. But I feel there is also the other side of the coin, which is precisely a more offensive approach.
We really want to create the ecosystem around AI at the European level. We want to bring scientists together and create knowledge transfer between the scientists and the startups. We are also trying to create different innovation valleys in the AI domain across the EU. And as you saw, on funding we are really trying to match ambition and walk the talk, through the EIF, the EIB, the EIC, and even the Digital Europe Programme. So I feel this side of the coin is very much related to norms and regulations, but also very much about trying to create the ecosystem and boost AI development and deployment across the EU.

Great. Now, Peter, you lead one of the biggest private AI labs in Europe, and I know a lot of founders are really anxious about upcoming regulations, saying they might kill AI innovation in Europe altogether. I mean, how does this all make you feel? This became a therapy session. How does it make you feel? Do you think Europe is taking the right approach?

Yeah, I mean, there's a wide range of AI companies in Europe, and there are a wide range of different perspectives on what the right approach is. We were founded in 2017 with a mission to ensure that Europe has an AI flagship. Today, we're a bit more than 300 AI experts, the majority with PhDs in AI-related fields, and we're contributing to a wide range of different industries. We're also building a family of open-source language models covering European languages especially. And with that perspective, we're seeing both the EU AI Act and regulation overall from different angles. Looking at what has happened during the past five years or so, I think there are a lot of good elements in the current state of the EU AI Act. We are now looking at AI not as a technology, or as systems and methods, but rather through applications and use cases with risk categories.
I think that makes a lot of sense. That is how we view the market as well: AI is elevating products, and products sit in different industries, be it social media or chatbots or vacuum cleaners or vehicles. And it's very different to regulate these different kinds of products. So I think that makes a lot of sense. But on the other hand, if you look at the past year, everyone has been discussing foundation models and how those should be regulated, and adding that as an additional category, I think, is very challenging. That's the crucial element now in the last mile of the EU AI Act discussion: how those will eventually be treated. I don't think that's an easy question. And then maybe the last element is that now everyone is talking about regulation. If we look at this from a European perspective, there are a lot of more positive elements to focus on: how do we actually take a united approach to build a leading player, or multiple leading players, in Europe with a European perspective, rather than this somewhat nationalistic perspective where every country is building its own AI initiative or language model and so forth. So I think there's a lot of opportunity to actually join forces and build something here.

Yeah, please.

Can I maybe add just one thing: the last year has also brought a lot of curiosity, a lot of curiosity about the technology and about the different applications. We've seen that in the last two or three days, and every one of us has felt it here. But the thing I also want to say is that the same curiosity exists in Brussels, among policymakers and members of parliament.
So I also want to say that in the conversations we've been having, and many others are having, regulators and elected officials are equally curious, are asking the right questions, and want to try to understand where this shift is going. And I just wanted to say that I think that is fundamentally a good thing. As Peter was saying, the questions we're grappling with aren't easy. But we are really encouraged that, globally, the questions are being asked, and there is an openness to trying to understand the impact on the positive side as well as the risks that emerge from this technology. That's something that is not often said when talking about regulation and policy; it's often seen as an attempt to stop certain things. But I think the first reaction we've seen is an attempt to understand where the world is going, what values matter to us, and what the right regulatory guardrails are. So if I try to sum up the last year in the conversations we've been having, it's curiosity on all sides. I think that's a good starting point, and we've seen policymakers and regulators take that as a starting point as well.

And what are the key questions you would want people to be asking, and maybe what aren't they asking right now? What is your focus in all this?

I think the application piece, the application layer Peter was talking about, is one of the right questions: what are the use cases that Europeans care about, where they have unique insights, unique data sets, a unique understanding of a certain problem, or an issue that certain regions really care about? And how is it that this technology can help us achieve that?
I think the other question, one that Lewis touched on, is about the role of research organizations across Europe. Again, to Peter's point, we have such fantastic universities and researchers all across the continent. So I really think the ingredients are there. The right question that I'm excited about is: how do we get all of this into place so that Europe can really play a leading role? And then of course there are also questions around the more long-term harms and risks that this technology brings, and we've seen, again, policymakers step up to that challenge, for example the UK government, with a lot of partners, hosting a big discussion at the AI Safety Summit at Bletchley Park. What these things mostly do is pose questions, but I think that's a really good starting point for getting to answers that we're all happy with.

Yeah, now, Lewis, I see you nodding. The EU really likes to think of itself as a global regulator, right? But on the flip side, other places like the US or the UK think that's maybe a bit too restrictive, and they don't want to do that in case they hinder innovation. I mean, what do you say to these people? Sell the European vision.

What is always interesting is that if you look at the global stage, there is this term everyone has coined, the Brussels effect: it's usually true that Europe is the first to put norms and regulations into practice, and then a couple of months or a couple of years later, you see governments around the world trying to adapt the EU legislation to their own jurisdictions. We saw that with the GDPR. We see it now with the Digital Services Act, which regulates online platforms, and with the Digital Markets Act, which is precisely there to regulate the online market side.
And the one thing we are always careful about is that we don't want to overregulate, because we really want to create this ecosystem across the EU. But I feel we are living in something a bit like the climate change debate. On the economic side, and we see it here with investors, startups, and the whole economic and manufacturing sector, people are really eager and hungry to have AI deployed in their daily business. But then we also have the other side, the societal part, where people are a little bit afraid because they don't understand how things work. I sometimes like to give this example: imagine that, by some unlucky chance, we have a car accident and tomorrow we are in the hospital, and imagine that a robot is doing the surgery. Will you trust that robot or not? Will you trust a different system? I will trust it if the system is qualified, trusted, and certified by the authorities, not a system where we don't know how it was trained or how it will perform the surgery the day after. So it's very much this environment of trust that we want to bring. But again, as Sandro was saying, we don't want to regulate the technology per se. We are worried about the misuse and the risks of these technologies. Look, for example, at the intersection of AI and democracy: we have the disinformation angle, with AI and the different deepfakes that can affect elections. We also look at the cybersecurity angle: AI can help detect cyber threats, but it can also automate and accelerate the cyber threats we face across our society. So it's very much about the misuse and the risks of the technologies, because what we really want is responsible AI systems, products, and development across the EU that also help deploy the technology. Yeah, great.
Peter, what do you think of what Lewis said? I mean, that's a great vision, right? But does it help you do business in Europe as a startup? What would you want to see more of from regulators?

Yeah, again, you can look at it from the perspective of the EU AI Act and regulatory requirements and restrictions. And I do think we are at quite a decisive point in time now, when the final decisions are being made on how the EU AI Act will eventually tackle foundation models especially. Let me maybe get back to that, and I know it's a difficult topic for Lewis to touch upon given the state of the discussions. But maybe to start on a positive note: if I think about the crucial elements for Europe to succeed in AI or to have a large AI player, there's obviously capital, and I think that's the weakest one, but then there are three elements: data, compute, and talent. As was indicated already, we actually have world-class talent, academic talent especially; that's proven and seen in many rankings. On compute, the EuroHPC supercomputers are among the most powerful in the world; we have both AMD-powered and Nvidia-powered supercomputers, and we've been training on Lumi in Finland, the world's third most powerful and Europe's most powerful supercomputer. And then the final element is data. That's somewhere the public side could play a larger role. Again, it's not necessarily a regulatory matter, but we sit in an environment where we're supposed to build language models, and we have quite many languages that are quite different from each other, including low-resource languages that are quite small. So I think that is an element where the public side and regulators could certainly play a role.
And now, if we think about where we are at the moment with the EU AI Act, and the last mile or last steps that need to be taken, I think it can go wrong in so many different ways. We look at it from our perspective, and our perspective is that we're serving clients in a wide range of industries and need to continue serving clients in a wide range of industries. They're not always using foundation models, or at least not foundation models that pose existential risks. We could have an outcome where we eventually over-regulate and it becomes very difficult to operate, because pre-trained models or foundation models are in use in very many places, and having those as a separate, highly regulated, high-risk category could be cumbersome, especially for small players like Silo. But on the other hand, I do think there's also a risk in not agreeing on a regulation. I would not want to operate in a Europe where every single country has its own regulatory requirements; that would be very cumbersome. The other element is that if we don't agree on regulation, we will have already spent five years in an environment of uncertainty, where you are investing in these technologies as a company in Europe without actually knowing where this is going to end, and I don't think we want to continue for a few more years in such an environment. So I think it would be important to conclude, and over-regulation isn't the only risk here.

Yes, you had a point to add.

I just want to follow up on Peter, because he raised important points. For example, when you say that we don't have the data at the European level: we are trying to create what we call the data spaces across the EU.
The idea is that if we don't have enough data ourselves to train the systems, we can perfectly well share that data, anonymized, obviously, with other entities across the EU. We are doing that, for example, in the automotive industry at the moment, where companies are able to share their different data across the EU, so that individual companies have a large pool of data with which they can train their models. We are also doing the same in the health industry, for example. So data is one point. Then there is compute power, as you were saying, and I want to underline an announcement that President von der Leyen made three months ago: whoever has a startup or a company can have access to the different compute power that we have, the Lumi in Finland and also the Leonardo in Italy. This is at the disposal of the different companies across the EU, so compute power is not a problem anymore at the European level. Then we also have the research and the talent pool, and here I think we can compete with anyone in the world in terms of having the best talent pool. We have excellent research centers in Germany, in France, in Finland, in Portugal, in any of the 27 countries that we have. I think we have a huge talent pool; even the Americans here in the room, I think, will agree with that. And then we reach the final question, which for me is also the key part: the funding and the financing. We know we already cover a lot at the early stage, at the seed capital stage, where we have some institutional actors. But I feel that at the later stage, the growth-stage capital, we still need to do more. And I feel there is that understanding from the authorities, and it's precisely in that area that we need to work more.
And it's related not only to AI but to companies in general, precisely to help them start up and scale up across the EU.

Yeah, Sandro, I mean, coming from a US perspective, how does what Lewis said sound to you?

I'm glad we're touching on the ingredients needed for Europe to really make use of this technology and develop it, because these conversations so often tend to focus on the right regulatory guardrails, the safety procedures, and the best practices, which are also important and which we can learn from. But we've also seen how quickly Europe has been reacting in thinking about the ingredients that exist, and we can all see them. I've not been back at Slush since 2019, but seeing all of the member states represented here, even this event feels a lot more European. On what the right way is to pool these resources and how quickly we can move: one of the things I'm a fan of that the Commission has done is something called the Startup Nations Standard. They haven't sat in Brussels deciding what the right ingredients are; instead, they went out to the member states and the startup communities to ask what concrete things your governments are doing, what programs they are launching that have helped you build your business. They pooled it all together into a standard and have done a little bit of naming and shaming across European member states: who's doing well on this, and who's maybe not doing as well or could do more.
So I think there's also a lot to be said for just looking around Europe and seeing the different approaches that governments are taking. But we sometimes stop at the level of, okay, let's all agree that this has worked in this specific environment, instead of leaning in and taking that to be a European approach. At the same time, there's also sometimes a misperception, to your point, that the grass is always greener on the other side, meaning in the US. I used to be in the financial services space, and there, if you look at the FinTech boom across Europe, to take a break from AI for a second, I think the regulatory framework Europe has built and the digital single market have really made Europe a fantastic place to build and scale FinTech businesses in ways that I don't think the US has matched. So I think it's about looking across at what is working and what isn't, and then implementing it across Europe.

Great. I actually had something to ask you about data. Now, OpenAI, you've already gotten into a bit of trouble with European data regulators about how you use personal data. How does Europe seem as an ecosystem to do business in for you? Does that make it less appealing?

I don't think it does, no. Listen, as I said, we take the iterative deployment approach, and having our technology out there getting used also means that we're engaging with regulators and policymakers. They have questions about how our products are developed, and we think that's a good thing, and we've been engaging with them here in Europe as well. I don't think that necessarily makes it a worse place to do business. We're really excited about the offices we've opened here in Europe and our growth here.
But it is more the uncertainty, which I think Peter and Lewis touched upon, that we sometimes feel given the amount of legislative and regulatory change that is happening, and it's not just one change, to your point. If you're thinking about building AI under different regulatory frameworks, it is a lot to think about, and when we're speaking to startups building on top of our technology, it can lead to a little bit of uncertainty about where we're moving. Thinking about how to address that, or how to communicate better so that startups understand what is and isn't possible, that's the thing I'm more worried about, rather than our own operations in Europe.

Great.

Yeah, I'd really like to comment on what Lewis said about the opportunity we have in Europe, especially around compute and data. There are not only the data spaces initiatives and the EDIC initiative and so forth; there are certainly a number of initiatives out there. Like Sandro was referring to, there are European initiatives, and we talk about Europe, right? But when you look at what is actually happening, the actions eventually taken on the ground are usually quite focused on individual countries, individual initiatives, individual projects: be it a Swedish language model, or be it a Norwegian, Dutch, German, or French one, and so forth. And I think it's the same with having the strongest supercomputers in the world but then spreading the compute power out to so many different players that it doesn't really matter that you have the strongest anymore, because you're splitting it up into so many small pieces.
So what I think is really crucial, if Europe is to compete (I mean, OpenAI has quite deep pockets and is able to make sizable investments in initiatives around language models, for instance), is that we need to do moonshots. I think that also eventually answers the talent question: if we are to keep our academic talent in Europe when they move to industry, they want to do ambitious projects, they want to join those moonshots. So I think we have a lot of resources; let's make sure we distribute them in such a way that we can actually build European moonshots.

Great, we have a couple of minutes left and I want to hear from all of you. Do you have any practical tips? What do startups and founders need to pay attention to, or what could they do now to prepare for what's coming in the regulation space? Why don't we go with Lewis first?

I think one thing that would also help us is for the different startups and investors to reach out to us and point out the bottlenecks and the problems: what could we be doing, and how could we be improving? For example, the advice Peter gave here, I'm taking back home to the Commission, because it's precisely this type of advice that we need to hear from the practitioners, the startups, the investors, and the companies. And I would love to see the different startups we have here partner with different types of companies, including large companies, at a whole new level, and spread AI technology, systems, and products throughout our economy and society in a responsible way; that is always our aim.

Excellent. Okay, Sandro, you have 30 seconds. Reach out, find your...
There are now wonderful startup associations across all of Europe, and so many founders and startups have come together to talk to, influence, reason with, and disagree with their governments. I think they've done a tremendous job at that, but, back to the earlier point, it is still mainly focused on individual issues in member states. If the community comes together and gives Lewis and the Commission a big, long list of things that you want, I can tell you he's happy to receive it and work on it. So organize, get together, reach out to startup associations. They're doing fantastic work, and I think we need more of that.

Perfection. Thank you. And Peter, your top tip.

Yeah, in addition to, I mean, there have been a lot of good tips already on how to tackle regulation and how to interact with regulators and so forth. I'd just like to say that we talk a lot about regulation now, right? And we talk a lot about AI. But my tip to the startup founders here is: build the best software companies, and then eventually elevate those software products with AI. Don't focus too much on AI just for the sake of AI. I think that is actually what we are lacking in Europe in many cases: we're lacking proper product companies that are scaling, and then AI is of course something that can elevate those product companies.

Great, great tips all. Thank you so much for joining this panel, and thank you for joining this conversation. Thank you. Thank you. Enjoy the rest of your Slush. Thank you, guys.