Hello and welcome, everyone, to this panel discussion on open source and artificial intelligence. My name is Jack and I'll be moderating today. Joining us on the panel are Johann Friedrich from IBM, Deborah from Wavestone, Romeo from IBM, and Ibrahim Haddad from the Linux Foundation, who works on AI and data. And to open the event, I'll hand over to our co-host, Timo, a Member of the European Parliament.

Thank you, Jack. I'm pleased to co-host this event together with the Open Forum Europe.
We are all looking forward to our discussion, so I'll try to be as brief as possible. I am a member of the Legal Affairs Committee, and as such I am heavily involved in digital policy files. Notably, my committee has produced three reports on artificial intelligence that were adopted by the plenary last month. We adopted one on intellectual property rights for the development of artificial intelligence, one on the civil liability aspects of artificial intelligence, and one on a framework of ethical aspects of artificial intelligence. The latter is particularly relevant to our discussion today. It complements the Commission's white paper on artificial intelligence from earlier this year, which, as you know, lays down a broad framework for an ecosystem of excellence and an ecosystem of trust. And in order to have this ecosystem of trust, artificial intelligence needs to be based on ethical criteria that reflect our European values. Two things immediately spring to mind in this context, which are also central to the ethics report by the European Parliament: AI needs to be human-centric, and AI needs to be as transparent as possible. Open source provides, from my point of view, exactly this: when the code and data are available to the general public, we achieve a high level of transparency, which ensures not just human oversight but collective human oversight, which is, I think, even better, and an important basis. And finally, this is just the starting point for the discussion as to why open source can be beneficial in shaping artificial intelligence according to our values and expectations. I'm very glad to be discussing with you how we can achieve this from a technical point of view, but also from a policy point of view in the second half of our discussion.

Thank you very much indeed, Timo. Thank you for your introductory remarks. Now we're going to move on to Johann Friedrich, who is the technical relations executive at IBM.
And Johann's going to speak to us a little bit about the sort of lingo that we're going to use, and outline some of the definitions of what we're talking about. So I'll hand the floor over to you.

Yeah, thanks very much, and thank you for organising this. Thanks, Timo, for bringing us all together here. It's absolutely exciting. My name is Johann. I work for IBM, where I'm heading the department for standardisation and technical regulation in Europe. I've been given the pleasure of providing a brief introduction here about what we see as the key terms. And it starts with the last slide, sorry, let me go to the beginning, not sure why it started at the end. Okay. It's about artificial intelligence, and I've put it under the headings of responsible computing, different topics in AI, and open technologies. I've got seven to ten minutes, so I need to be brief and focus on the main things. I always like to start with this pretty old slide here, which outlines the spectrum around cognitive technologies, as we love to call it in IBM, having artificial intelligence here at the top, and it is supposed to be at the top. But then there are a number of other technology topics, lacking a better word: robotics, machine learning, natural language processing, deep learning, predictive analytics, recommendation engines. All these things play into the cognitive technology portfolio and probably need to be considered when we talk about AI in a broader sense. Usually all of this is meant when people talk about AI, and very often there's also a bit of confusion. The next slide tries to limit this to four topics: neural networks, deep learning, machine learning and artificial intelligence. A colleague of mine has put this into the fantastic category of Russian nesting dolls, where you see that one is included in the other.
So at the very bottom you have neural networks, which mimic the human brain through a set of algorithms. Essentially you differentiate four components: inputs, weights, and thresholds or bias. At the next level you have deep learning, which refers to the depth of the layers in the neural network. Usually, if you have more than three layers, that can be considered a deep learning algorithm, right, a neural network of more than three layers. The next level is machine learning. Deep learning is again a subset of machine learning; the way in which they differ is how the algorithm learns. Machine learning models can cluster and classify inputs, just to sketch it out. And then at the very top, the all-encompassing Russian nesting doll, you have AI, which mimics human intelligence and is used to automate and to optimise tasks, like speech and facial recognition or machine translation. What we like to bring up, and what we in IBM believe is important, is to differentiate between the AI technology itself and the use of AI, since many concerns are really addressing the use of AI.

I'm going to stop here. Ibrahim, I think your microphone is open and it's giving us a bit of feedback. Okay. Oh, perfect. Thank you. Much better.

So there are some basic principles on AI, trust and transparency principles. First of all, the purpose of AI is to augment human intelligence, not to replace it. It is to enhance and to extend human capability and potential. Secondly, data and insights belong to their creator. So the client's data is their data, and their insights are their insights. That's also an important transparency and trust principle. And thirdly, new technology, including AI systems, must be transparent and explainable. It must be clear who trains AI systems and what data was used, and data governance policies need to ensure that people understand how an AI system came to a conclusion or a recommendation. The issue of data bias needs to be addressed proactively.
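The nesting described above can be made concrete in a few lines of code. The sketch below is purely illustrative (not from the talk): one artificial neuron built from the components mentioned — inputs, weights, and a threshold or bias — plus the "more than three layers counts as deep" rule of thumb.

```python
def neuron(inputs, weights, bias):
    # One artificial neuron: a weighted sum of the inputs plus a bias,
    # passed through a simple threshold ("step") activation.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

def network_kind(layer_sizes):
    # Rule of thumb from the talk: a neural network with more than
    # three layers can be considered deep learning.
    return "deep" if len(layer_sizes) > 3 else "shallow"

print(neuron([0.5, 1.0], [0.8, -0.2], bias=-0.1))  # 0.4 - 0.2 - 0.1 = 0.1 > 0, so it fires: 1
print(network_kind([4, 8, 8, 8, 1]))               # five layers -> "deep"
```

A real network stacks many such neurons per layer and learns the weights from data; the point here is only how the definitions relate.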
Going a step further: imperatives for artificial intelligence for companies, based on whether they are the provider or the owner, or both, of an AI system. Here I'd like to come up with these five. First, designate a lead AI ethics official. How to use this technology responsibly is new for us all; we want responsible, trustworthy AI, and we need to be accountable, with internal guidance and compliance mechanisms. An AI ethics official should oversee these tasks and be the focal point for this in a company, in an organisation. Second, use different rules for different risks. It is very important to differentiate: a high-level assessment should be done of the potential harm that can be done, of the risk level, and according to this, different rules should be applied; where the risk is high, detailed in-depth assessments must be done. Third, don't hide your AI. That's also very important. Promote transparency through disclosure: make the purpose of an AI system clear to customers, to consumers, to businesses with whom you work, but also inside your organisation. Fourth, if you use it, explain your AI: keep audit trails surrounding the input and training data, and make documentation available. And finally, test your AI for bias. Responsible AI systems are fair and secure, and we always need to check: is there any bias? This is an ongoing process; it's not something that is done once, it needs to be done continuously as AI systems operate. In a similar way, we have seen the High-Level Expert Group on AI working with the European Commission on ethics guidelines for artificial intelligence. They have produced these guidelines, and they list seven key requirements, which you see in the two boxes at the bottom here: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.
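The "test your AI for bias" imperative above usually starts from group-fairness metrics. The following hand-rolled sketch computes disparate impact, one such metric, in miniature; mature toolkits (for example AI Fairness 360, mentioned later in connection with the Linux Foundation) offer many more. All numbers are hypothetical.

```python
def favourable_rate(outcomes, group_mask):
    # Fraction of favourable outcomes (1s) within the selected group.
    selected = [o for o, g in zip(outcomes, group_mask) if g]
    return sum(selected) / len(selected)

def disparate_impact(outcomes, protected):
    # Ratio of favourable-outcome rates: unprivileged group (protected == 1)
    # versus privileged group. A value near 1.0 suggests parity; the common
    # "80% rule" flags anything below 0.8.
    unpriv = favourable_rate(outcomes, [p == 1 for p in protected])
    priv = favourable_rate(outcomes, [p == 0 for p in protected])
    return unpriv / priv

# Hypothetical decisions: 1 = favourable outcome.
outcomes  = [1, 0, 0, 1, 1, 1, 1, 1]
protected = [1, 1, 1, 1, 0, 0, 0, 0]
print(disparate_impact(outcomes, protected))  # 0.5 -> fails the 80% rule
```

Because models drift as data changes, a check like this belongs in a recurring pipeline, echoing the point that bias testing is never done once.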
We believe that standards, open standards and open source play an important role here to create trust and transparency around AI systems. Both standards and open source are part of open technologies, but they are not the same; that's why I have included this slide. It's not specific to AI. Standards provide something like a building plan: methods, metrics, processes, protocols. They are developed collaboratively in standards developing organisations and, as you can see, are not controlled by a single vendor. They may include patented technologies; open standards, typically, are implementable in open source and should be free of patent claims and of licensing claims for patents. Open source is source code, it's software. It's developed collaboratively as well, usually in projects, communities and open source foundations, and we will hear about the Linux Foundation AI later on. They have certain governance models. Open source software is openly available: to distribute, to fork, to do with it whatever you like, and it is typically licensed under an OSI-approved open source license. To illustrate a bit the organisational ecosystem around standardisation and open source: at the top here you have the standardisation environment. On the left-hand side are the recognised international standards bodies, recognised under the WTO agreement: ISO, IEC, ITU. On the middle layer are the European standardisation organisations, which provide standards in support of regulation and policies in Europe and of the harmonised common market. At the bottom left are the national bodies, and towards the right you have the so-called fora and consortia, other global standards bodies. Almost everywhere here, AI standardisation activities are going on. A major focus, and I will briefly go into this in a second, is in JTC 1, the joint technical committee between IEC and ISO.
This is where international IT standardisation takes place, and a subcommittee is in place there, subcommittee 42, working on a number of highly relevant AI standards, also highly relevant in terms of ethics, trust and transparency. And at the bottom, just to give you some open source groups and foundations: you all know the Linux Foundation and Linux Foundation AI, where we will hear later on from Ibrahim about their work; the Eclipse Foundation and the Eclipse IoT group; or Node.js, Kubernetes, many, many others, just to sketch the different kinds of organisations on the landscape. Now, looking at AI activities, there are major activities going on. As I said, ISO/IEC JTC 1 SC 42 is the major international committee where AI work is going on, and IEEE has also done a lot of pioneering work in AI standardisation. ITU I put a bit in brackets: there is some work going on, driven by some members; it's currently evolving, and we need to look into how it matches with the others. On the regional level, we have had a joint focus group between CEN and CENELEC looking at AI for Europe and developing an AI standardisation roadmap for Europe. And you have ETSI, where AI is already considered in several groups. And you have the national level, of course. I just listed some here, there are many others: DIN for Germany, AFNOR, BSI, UNI for Italy, NEN for the Netherlands, or ANSI for the US. Now, coming towards the end of my overview: there are already a number of standards under development that are of high relevance to ethics in the broader sense. I have mentioned JTC 1 SC 42 a lot already, and I just list some of the standards here; it's an extract, it's not complete, but these are some that I believe have high relevance for our discussion today, also for the discussion about European values and responsible AI.
So you have a just-starting project on a management system standard for AI; this is a standard to define processes. How should the processes in your company be? It may go a bit along the lines of what I explained before: have an AI ethics official in place, etc. This project is just starting, so nothing is defined yet, but it addresses these topics. You have an overview of ethical and societal concerns as a technical report, a TR, which starts to sum up the concerns; follow-on work will then be to see what standards are needed and how international standards can address these concerns. You have one about bias in AI systems and AI-aided decision making, which is very important; another on trustworthiness in AI; and one on governance implications of the use of AI by organisations. This probably goes together with the upcoming management system standard, which will also build in part on this governance standard. In the IEC, the International Electrotechnical Commission, mainly responsible for standardisation in the area of electrotechnology and manufacturing automation, you have the Systems Evaluation Group 10, which looks at ethics in autonomous and artificial intelligence applications. So this is very important also from the implementation side, the application side, already addressed at an international level. I already mentioned IEEE, where there is a very broad initiative, the Global Initiative on Ethics of Autonomous and Intelligent Systems, and the P7000 series, in some parts really pioneering work on how technical standardisation can address the topic of AI and ethics. They are also currently starting work on governance of AI systems, and there are several specific activities, for instance about machine learning, but also about autonomous contracting, these kinds of things.
And since I already mentioned the roadmap on European AI standardisation: now, following this roadmap focus group activity, under preparation is the setting up of a joint technical committee between CEN and CENELEC, two of the three European standardisation organisations recognised under law in Europe. And yeah, this is on its way. So, to put it in a nutshell: AI is a broad field, and differentiation of the exact technology and its use is important. Imperatives and guidelines are available regarding a transparent and responsible use of AI. Open technologies, open standards and open source play a major role in addressing societal values and ethical topics around AI, and very concrete and relevant standards are already underway. They do consider European requirements, for instance those coming from the High-Level Expert Group, just to give a flavour here. I'm happy to receive questions later on when we have the discussion session. This concludes my intro words. Thank you very much so far.

Thank you, Johann, for that comprehensive rundown of what we're going to be talking about. Now we'll quickly move on to introduce our other panelists before we start the panel debate. So, Deborah de Jacomor, who's a senior manager at Wavestone, if you can give yourself a little introduction. We can't hear you. Still can't hear you. Okay, just have a little play around, Deborah, and we'll move on to perhaps Ibrahim Haddad, who can give a short introduction of himself while Deborah sees if she can fix her audio. Is this me? Am I unable to hear anyone? Well, at least I can hear you, Jack. Okay, so everyone can hear me. Ibrahim, we can hear you. Let's try Romeo. Let's see if we can get your audio. Okay. Thanks a lot. So for Deborah, I have a tip: just reload your page and give access to your microphone. And Ibrahim, maybe check your voice recorder, just make sure you're plugged into the correct output.
Maybe your output is not from the laptop but from the voice recorder. So, I'm Romeo. I work for IBM in a special department called the Center for Open Source Data and AI Technologies, and I'm the focal point for trusted AI technology there. So I'm responsible for all the open source packages we have donated to the Linux Foundation, which Ibrahim maybe will come to later. Thanks.

Ibrahim, perhaps you can introduce yourself. We've got your microphone now. Awesome, thank you. Hi, everyone. My name is Ibrahim Haddad. I am the Executive Director of the LF AI and Data Foundation. We are a not-for-profit organisation based in the US under the Linux Foundation, and our mission is to support and enable open source development and innovation in the domain of AI, analytics and data. Thank you for having me on this webinar.

Thank you very much indeed. So perhaps we can wait a moment to see if Deborah gets on, but if not, we will press ahead and start our discussions, I think. I can't see her. When she comes back, we will continue. We'll allow her a moment. She did, she has to give access to the microphone now. She joined as a participant. Oh, into the video, yes. At the top right of the screen there's a button, Deborah, you have to click on that to give access to your microphone and share your screen. Now I've lost audio entirely. Ah, Deborah. Hope it's working now. Yes, we can hear you. OK, so give yourself a short introduction. Apologies, everyone. OK, you were waiting for me then. Excellent. We were waiting for you. We have a couple of others. OK, thank you. So thank you, everyone, again. Very glad to be participating in this talk. A little bit about me: for a bit more than ten years I've been working in advisory services to the European institutions, mostly on digital policies, assessing and advising on the impacts of technologies on European legislative initiatives.
I'm currently leading the team running the Commission's Open Source Observatory, OSOR, if you have heard of it. I have also been part of the Task Force on AI and Cybersecurity organised by the Centre for European Policy Studies, CEPS, a think tank in Brussels. And regarding research, I have recently been involved in research on the importance of transparency in AI and how the AI ecosystem can evolve towards an ethical AI by design. So this is my background. I tried to be very quick so that we can get to the discussion. Thank you.

Thank you, Deborah, I really appreciate it. So let's start. The point of the discussion that we want to have is: how can open source and standardisation work in building an ethical AI that is actually used by users? How does this enter the real world? And I think that question probably starts with you, Ibrahim. Perhaps you can launch into this: how can companies and organisations like your own implement ethical AI at the technical level?

Yes, thank you, Jack, for the question. So I think my basic hypothesis is that open source offers major benefits that are very unique to AI. We understand and we all know the benefits of open source in general: the peer review, the fast development cycle, and the networking effect of having a lot of people and organisations collaborating. However, what's really unique about open source as it applies to AI are five different areas: the first is fairness, the second robustness, the third explainability, the fourth lineage, and the fifth open data. I will explain each one very briefly. So, all the work in open source is done in the open. You can see what I'm working on, I can see what you are working on; I can provide feedback on your work, you can provide feedback on mine; and we incorporate work coming from different people with different opinions and different backgrounds. And there is 100% transparency in the work.
And there is meritocracy, meaning the people who are really good at doing something are the ones who end up leading it. And when it comes to AI, this kind of transparency and openness is extremely critical on the path to achieving ethical AI, trusted and responsible AI. From a fairness perspective, we need to have tools, libraries and methods that are open and collectively developed to ensure fairness in models. From a robustness perspective, a very similar approach: we need openly developed tools that allow us to verify robustness, that nothing in terms of models or data has been tampered with. From an explainability perspective, for data and models, or actually both, we need ways to explain, for instance, how the model works. If you are a bank and your whole system for giving loans runs on an AI model, you need to have consistent results and the ability to explain how the system works. For lineage, we need methods that are developed collectively in the open to track the origin of things and how they change over time, whether data or models. And at the bottom of these four layers, we need open data: methods that will allow us to sort, tag, identify and track the governance of all these data sets and ensure privacy and security, as Johann explained earlier. So from an open source perspective, there are huge benefits to society in general in using the open source methodology to get towards a world where we have ethical use of AI, where we have trusted and responsible AI. And one last word on tying this to standardisation. At the Linux Foundation, not long ago actually, earlier this year, in May of 2020, we announced that the JDF, the Joint Development Foundation, which is an umbrella foundation under the Linux Foundation, has been formally approved as an ISO/IEC JTC 1 submitter. And Johann actually touched a little bit on this.
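The lineage point above, tracking the origin of data and models and how they change over time, can be sketched as an append-only, tamper-evident log. This is an illustrative toy (not any particular Linux Foundation project): each entry records a hash of the previous one, so rewriting history breaks the chain.

```python
import hashlib
import json

def entry_hash(record, prev_hash):
    # Deterministic hash of a record plus its predecessor's hash.
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_record(log, record):
    # Append a record, chaining it to the previous entry.
    prev = log[-1]["hash"] if log else "0" * 64
    log.append({"record": record, "prev": prev,
                "hash": entry_hash(record, prev)})
    return log

def verify(log):
    # Recompute every hash; any edit to past records is detected.
    prev = "0" * 64
    for entry in log:
        if entry["prev"] != prev or entry["hash"] != entry_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

log = []
append_record(log, {"event": "dataset v1 ingested", "source": "survey-2020"})
append_record(log, {"event": "model retrained", "dataset": "v1"})
print(verify(log))                              # True
log[0]["record"]["source"] = "something-else"   # tamper with history
print(verify(log))                              # False
```

Real lineage systems add identities, timestamps and distribution, but the core idea, that provenance must be verifiable rather than asserted, is the same.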
So now we have the ability to host projects as part of the JDF, the Joint Development Foundation, and drive them towards standardisation with a lot of ease. So now we have that bridge between open source and standardisation, and really massive benefits in going that way when it comes to ensuring ethical AI. Thank you.

So from that, though, when you're talking about all of that happening between organisations and businesses: perhaps, Romeo, we can ask you how that is then implemented for users, how this becomes a reality for people, perhaps the person taking the bank loan, or the person who is coming into contact with AI on the internet, on a social media platform. Romeo?

Yeah, so using these libraries, it's now possible to give you an explanation of why the AI algorithm came up with a certain decision. So if you are rejected for a bank loan, you can, for example, query the algorithm: what are the reasons, or the key features of my profile, which led to this? And the other thing, which I'm really hoping to see soon for all the recommendation engines, the targeted advertisements and playlists on YouTube and Facebook feeds, is that we actually get a button or something which allows us to see what part of our personality profile led to this ad being shown to us. And all those tools are now under open governance under the Linux Foundation and are openly available. So that's a huge breakthrough, in my opinion.

Perhaps then moving on to Deborah. I mean, it is a big breakthrough to have that connection. How do you see it when it comes to expanding those ideas from that position?

So I think, completing and building on what my colleagues have been saying: there are several ethical dilemmas that we need to tackle, and we are trying to work from both sides, technical and policy, and at a certain point they need to converge. So it's on fairness, on transparency, on collaboration, on trust, accountability and morality.
Certain points there can really be tackled with what I would call almost the business as usual of open practices, the open way of development. The fairness part can be a bit more complex to tackle, because we have bias involved in the development and also in the design of algorithms. Seen from another perspective, transparency can really benefit from open practices, in the way open source has been there for a while already. So I think that is one of the main points of open source practices, beyond the business-as-usual benefits, such as quality, security, easy customisation and lower entry costs. But specifically, open practices can enable experimentation. Even small and medium enterprises now, with all this technology being made open source, face fewer barriers, such as the high licensing fees that could otherwise be there; they could also face limited talent, and these are two things that open practices can really enable: access to skills, and the community itself. It also increases the freedom to innovate, leveraging ideas of the community rather than only a closed pool of resources. And of course, all of this could really build greater trust in the technology, which is fundamentally what open source development values. So the practice would tackle several issues hindering, or raising concerns about, the ethical-AI-by-design approach.

So, leading on from that — I'll come to you, Timo, in a second, but Johann, maybe you can comment on this — because this is the reality of using open source for small players, even individuals, who are using these sorts of systems and software for themselves, to be able to build and change and adapt AI in the way that they want to, or perhaps in a new way that we don't know about yet.
But obviously at this moment, with the expansion of the technical implementation, it is the bigger players like yourselves that are probably leading the way. I wonder if you can talk to that: how can we make sure that the idea of the opportunity of open source can still be broad?

Well, first of all, I would say it's a very good question. By going into an open organisation like the Linux Foundation, you build a community where it's not just the technology providers. They may be leading, okay, because they are the ones who have the expertise, the deep-level knowledge about AI. But by making the code open, by building a community around it, and by allowing your clients, your customers, public sector administrations, whoever, to work closely on the further development of the technology, of the source code, that is already something that has huge potential. It already contributes to trust and transparency, and it has huge potential to collaboratively bring some key technologies forward, to have this mutual collaboration across organisational boundaries, right? Also, as you mentioned, everybody can take it, can play with it, can see what it is, can look into it. I thought it was a very positive statement from the European Commission the other day to say that open source is a key element for achieving digital sovereignty, because it prevents you from working with closed black boxes. It allows you to look at the code, to look at what's going on. And this is important. Yeah, you probably need some technology drivers if you want high-class, state-of-the-art technology, and they need to be ready to contribute it. But by contributing it, by opening up to a group, by saying let's further develop this in open innovation processes,
you create a completely new ecosystem around driving the technology's implementation and the trust and confidence. Thank you, Johann.

So, Timo, perhaps moving on to you, if we can. As a question: where are the ethical difficulties, where are we most likely to find ethical concerns and troubles in the implementation of AI into our societies? What do we actually need to do, when implementing in a technical fashion, to prevent those hurdles or trouble moments that might come up?

Thank you very much. I think I can directly continue from what was already said. It's accountability, it's traceability of decisions, which are very important for individuals, but very hard to understand and to trace. And you may know from my report on the DSA that I am convinced that personalised advertisements are harmful and shouldn't be the model. And I really like the idea of at least having some transparency and accountability: on which information is a decision based? And I think if we have the possibility to trace decisions, we have clarity and we can create some kind of liability. And this is important for end users, for consumers. So this is, I think, what needs to be really clear.

Yeah, I think this is it, and one of the things perhaps we should move on to is talking more about the end users, the people that are actually affected by this. So, Deborah, I know that this is something you've spoken about, these black-box tests, and how you're perhaps concerned about doing tests on AI where we don't really understand what's inside what we're testing. I wonder if you can talk to that a little bit.

Yes. So, I think we are at a point where people generally recognise that black-box AI approaches are unacceptable. I think this is recognised in policy, it's recognised in any piece of work
we see published lately. And, as I think we have heard today already, the data belong to the users, and they need to be informed. So transparency is generally about this: that people are informed about how they are impacted by the algorithm. And this brings us a bit to what is called the trade-off between transparency and the barriers to openness, the so-called transparency paradox. Generating more information about AI may create real benefits, no one disagrees, but it can also create risks, and it is these risks that I believe we need to manage more and more, for the users and the people who are being impacted by these algorithms. Because I think the policy also puts a big emphasis on the level of impact: of course, a house-loan decision is much more impactful than a shopping or consumption ad that is displayed on your screen. But either way, data is used to train these algorithms and decisions are being taken, decisions taken by an algorithm that is using some rules, and it's very important that these rules are in a certain way transparent to whoever is being impacted by the decision in the end. I think one of the papers of Professor Andrea Renda shows that it is not only about giving generic explanations, but also about explaining in detail what the reasons are for which a decision is taken. I don't know if that's clear.

I think you're totally right, because we're in a moment where people are quite aware that AI is affecting their lives, that we are being targeted with certain things, but people aren't necessarily aware of how to access the information about themselves at the moment, or what rights they have.
And perhaps this is something I can ask you about, Ibrahim: how should end users be informed about the processes, while making it clear? Because these are complicated, complex algorithms and difficult data sets used in really complex systems. So how can we make sure that users are informed, while making the information we give them actually useful? Yes, thank you, Jack, for the question. And I think when it comes to how we can do this, we can look at how we're doing things that are similar to informing people about AI. So one of the similar cases that comes to my mind is open source licenses. When companies integrate open source components in their products and services, they need to acknowledge and inform the end user that the phone, the tablet, the TV, the fridge, whatever appliance or device or software you are acquiring, actually includes open source code licensed under the following licenses, and here are your rights and where you can download the code, and so on. So now taking that idea as a parallel to the question you just asked me, I think we can go down that route: if there is an AI system running and providing a certain product or service, we should ideally provide to the end user, similarly to what we do with license compliance, kind of an explanation: this is based on the following algorithm, and here's how this algorithm works. Which brings us directly to the explainability of a given model. We need to be able to explain how a model works. It's really not acceptable to say, oh, this is really complex, you're not going to get it. No, we need to be able to inform people that this model is based on these parameters, and here's how each of these parameters is used, and here's how it functions.
And it might be as easy as providing disclaimers on the various technologies used: where the data is coming from and under which licenses, how the data is being governed, and what the company providing the service or product is doing with respect to the security of this information and the privacy concerns in relation to it. And also how this system is able to ensure there is no bias and that it offers fair service to everybody. And I think once we reach a point where we take all this information and formalize it, where we're able to provide a standardized way to present it and connect it to a standard, then we're able to communicate as companies to the end users of our products in a standardized way: how our system works, based on what technologies, how we're protecting your data privacy and offering security, and how we explain the different models we use. And I think it has to be driven by kind of an acknowledgement, you know, a disclaimer: if you want to use this product or service, with the AI we incorporate in it, here's how it works; by accepting this information, you're able to continue and use the product. Something like that. Yeah, I think this is it, but you mentioned this idea of some sort of standardization of a disclaimer. Romeo, perhaps bringing you in: can you talk about where we are with that, where the standardization of informing users is right now, from your perspective? It's right at the beginning. So we just announced a system which is called FactSheets, which gives you insights on how a model is performing. And we're getting some background noise; maybe you can go on mute, please. So someone is not on mute. Okay, thank you. Because there's a sign that there's some... Okay, now it's better. Thanks. So that's one thing.
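The fact sheets mentioned here are essentially structured metadata that travels with a model. As a hedged illustration, with field names of my own invention rather than any official FactSheets schema, such a disclosure could be represented like this:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelFactSheet:
    """A simple machine-readable disclosure for an AI model.

    Field names are illustrative, not an official schema.
    """
    model_name: str
    version: str
    intended_use: str
    training_data_sources: list = field(default_factory=list)
    data_licenses: list = field(default_factory=list)
    fairness_metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)

sheet = ModelFactSheet(
    model_name="loan-scoring",
    version="1.0",
    intended_use="Pre-screening of consumer loan applications",
    training_data_sources=["internal-applications-2015-2019"],
    data_licenses=["proprietary"],
    fairness_metrics={"disparate_impact": 0.92},
    known_limitations=["Not validated for business loans"],
)
disclosure = asdict(sheet)  # ready to serialize and show to the end user
```

Once such fields are standardized, the same sheet can feed both the end-user disclaimer and any regulator's compliance check.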
So this stuff about explainability, bias detection and adversarial robustness is something the research community is actively researching. Every week we see a new paper, a new algorithm, and we just reimplement those algorithms in our toolkits. For getting to a standardization, I think we need an officially recognized standardization body, which takes the most relevant state-of-the-art measures for those four categories and defines: okay, if this algorithm is making such an important decision, it has to comply with these standards, these are the measures, the implementation is this and that, and the implementation is open source so everybody can check it. And I think that's something we can converge on, and I think it's the legislator who can actually push this forward, to tell standards bodies to come up with a standard for those measures. Do you know what, Romeo, you set us up perfectly to move our discussion, right on schedule, more directly towards the policy angle. I just want to remind everybody that is participating and watching this event that you can keep your questions coming in; keep sending them in the chat box. At the end of the event we will focus on the Q&A and continue the discussion with our panellists there. But let's move on now to a more policy-based discussion, and for that, Timo, I'll probably start with you. Let's start where Romeo left off: the idea of some sort of oversight body. Is that possible at the moment? Would it need to be European? How would that work? How do you see that shaking out? Thank you very much. First of all, sorry that we were accusing you of the background noise, but the platform insists that there's sound from your part, so maybe it's confused. So sorry for that.
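One bias measure a standards body could pin down precisely, of the kind alluded to above, is disparate impact: the ratio of favourable-outcome rates between two groups. A minimal sketch; the group labels are placeholders, and the 0.8 threshold in the comment is the commonly cited four-fifths rule, used here only as an illustration:

```python
def disparate_impact(outcomes, groups,
                     favourable="approved", protected="B", reference="A"):
    """Ratio of favourable-outcome rates: protected group vs reference.

    A value near 1.0 means similar treatment; the widely cited
    four-fifths rule flags ratios below 0.8 as potentially biased.
    """
    def rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(o == favourable for o in selected) / len(selected)
    return rate(protected) / rate(reference)

outcomes = ["approved", "denied", "approved", "approved", "denied", "denied"]
groups   = ["A",        "A",      "A",        "B",        "B",       "B"]
ratio = disparate_impact(outcomes, groups)
# Group B approval rate 1/3 vs group A rate 2/3, so the ratio is 0.5
```

A standard would have to fix exactly this kind of detail: which metric, which threshold, and on which data it is computed, with an open implementation that anyone can audit.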
I see there is no noise from you, so sorry for that. And now back to your question, Jack. Well, I think it's necessary to have oversight, and I would like to see a European entity, a European body. I would even like to see an agency equipped with the possibilities to check compliance with ethical requirements. Unfortunately, in the AI reports which I already mentioned, there was no majority to establish a European agency. The closest we got to a European entity is my report on the Digital Services Act that I mentioned, which proposes a European entity that could be either a network of national authorities or a European agency. There are lessons learned from the implementation of the GDPR: I wouldn't be happy to see many different national responsible bodies, because they interpret rules differently, and maybe sometimes they understand their role differently. I think there's a reason why many platforms choose Ireland as their responsible authority; you could argue that their main aim isn't to protect the data of consumers. But yeah, the agency could then communicate with the standardisation bodies to make that link without having to go via hard policy. So that's why I think this could be a good solution. But again, the majority of the colleagues wasn't convinced of that. I think this will develop once we have an approach by the European Commission, so a legislative proposal; but now we are of course in the stage where we are gathering ideas and drafting our own-initiative reports. I mean, I was looking through a list of all of the EU's agencies, thinking about where this could attach if they didn't create their own agency. And I think the one I thought was most relevant was perhaps the Fundamental Rights Agency, but it still doesn't... I don't know. The Fundamental Rights Agency is heavily understaffed, and what they are doing is writing comparisons and reports on the state of play.
So, unfortunately, their scope is very limited. That's deliberate, but they wouldn't be in a position at the moment to do that. One could think maybe about the Ombudsperson, but this is, yeah, it's not. It doesn't feel like the right structure. Yeah, it just doesn't feel like anywhere that it can fit right now. Exactly. So let's go to you, Jochen; perhaps you can speak to this as a company who understands this sort of policy network and the idea of where national or EU bodies would fit. I wonder whether you can explain how you see it, perhaps talking from the perspective of the white paper on AI that the Commission put out. What would make it easiest for you, policy-wise? Yeah, sure. Let me maybe start as a company: we have a lot of experience in complying with EU technical regulation in many areas, like electromagnetic compatibility, low voltage, the Machinery Directive, the Radio Equipment Directive, all of these. And the framework, the legal framework applied there, is the so-called New Legislative Framework, which is based on the now not-so-new-anymore, but still so-called "new approach": 30 years old, but still very powerful for Europe. The system is very simple; I'll try to put it in simple words. Government makes regulation with essential requirements, including a risk mechanism with different modules, so high risk and low risk. Then industry can develop standards to meet these essential requirements. Based on self-assessment, for low-risk or not-the-highest-risk areas, we can make a statement of conformity, and under this statement industry is allowed to put its products onto the market under the presumption of conformity in Europe. And you have market surveillance from the public sector to make sure that there are no issues, no problems there, that nobody tricks, okay?
I would see that for certain areas. I mean, we first need a discussion on AI: do we need broad regulation, or regulation for high-risk areas? I can imagine that for high-risk areas regulation makes sense; on the broad scale, I'm not sure. Okay, we need to have this discussion. But I could imagine that the same system would work for AI as well: essential requirements laid down by the legislator, and standards with which you can meet them, with self-assessment or, for high-risk areas, compulsory third-party assessment, and market surveillance. And I'm not so sure about an agency; we heard agencies may be understaffed, and agencies may lack the actual experts to assess, etc. And I think this system that works well for product safety, we should try to apply it, because it also allows for innovation, for a lot of innovative potential. The standards always try to include the state of the art, and everybody can participate: you have academia, you have industry with their experts, you have consumers, users, SMEs who participate. And if I may give one example where this worked extremely well, it was at the beginning of the COVID-19 situation, a completely different area, but we had in Europe a shortage of face masks, of personal protective equipment, and with the standards available and the rules clear, this equipment could be brought onto the market very fast: industry knew what to look at, knew how to follow the standards, and the products could be developed. And this is basically the basic power of this New Legislative Framework. And I would see that, before we discuss anything else, we should see how far new technologies like AI may fit into this, define what needs to be regulated, and then see: can this fit under this New Legislative Framework, which has been so instrumental for the common market? Thank you, Jochen.
So let's move on perhaps to some of the more solid policy that we have: the open source software strategy 2020 to 2023. And I wonder if I can ask you how you see this changing or adapting in the current technological landscape. Yes, thank you. So when it comes to open source and AI, and specifically your question: we've seen this happen with many other technological areas. If you look at cloud before AI, and even the financial tech sector, and before that the networking sector, and before that the telecom sector, and so on: these technological things come in waves, and now we're riding the AI wave. And what's really interesting is, when we look at the landscape and see what is really happening in terms of AI in general, we realize that most of the efforts, when it comes to frameworks, libraries and enabling technologies in relation to AI, machine learning, deep learning, NLP, data and models, all of these technologies under AI, all the different categories, are actually happening in open source, because companies realize that the actual value is not the platform or the library that you create; the real value is sitting in the applications you develop and the models. So companies are coming together and collaborating on creating these different building blocks. You can think of them as different Lego pieces. And these different building blocks, we capture them in our open source AI landscape. It is available via the web; the address is landscape dot LFAI dot foundation. And we capture there about 300 key projects that are open source, coming from over 220 different companies. So what I see going forward is a further increase in collaboration, with more and more companies joining this trend of open sourcing their AI, machine learning and data model technologies and collaborating with other companies on these different building pieces.
And then, once these building pieces reach what I consider kind of the value line, anything above that value line will become proprietary. So this is where collaboration starts, and then companies start investing in parallel in their internal R&D to build the differentiation, the applications, and figure out ways to monetize the data and the models that they have. So definitely the future is very positive in terms of collaboration in the space of AI, but there are definitely a lot of concerns in relation to privacy, in relation to security, and in relation to fairness and bias. I mean, all of those issues are things that the European Commission is trying to look into, through the white paper, through the digital strategy, the Digital Services Act. Deborah, you work with them; you advise and consult on these issues. Do you think the European Commission is focused on the right areas? Do you think that open source is a priority for them? Do you think they understand it properly? Do you think the EU is prepared and ready for what needs to happen? That's a trick question, but I think, well, in the several years that I've been working with the Commission on digital policies, open source has definitely gained a lot of traction in the last few years, and this is very clear with the release of the strategy last month. And the strategy makes clear references to the leading model of open source, how this is necessary for platforms and software development, and also makes clear links to artificial intelligence. What I see that is missing, and maybe my colleagues in the audience can confirm, is the spelling out of openness and open software in the artificial intelligence publications. It's very difficult to find a really clear mention of open source, or to see it really spelled out.
And I think this is where the Commission can act, especially now with the paper of the strategy and with what is preached as leading by example: to bring more visibility to the advantages of open software for artificial intelligence. If you look, even in the white paper, we have no reference to, or push towards, open technology. I like the term "open technologies", because beyond open source software, we also have work, on the G-Con act, on open hardware and open standards. This is a point where we are not there yet, but there's definitely good momentum on the push for open software, and I think there is a lot of work across the whole ecosystem to bring these two together. So Timo, perhaps bringing you in: what are you pressing on? When you're looking at these different documents, the strategy and the white paper, etc., what are your key concerns? What do you want changed? What are your priorities to make sure that things are ethical, remain open, remain acceptable? Thank you very much. There was a comment in the chat which I think is very interesting; of course we'll turn to it during the Q&A session, but I think it's perfect for highlighting how complicated the situation is. The user Ben says that free trade agreements are also a growing problem. And this really illustrates how difficult it is, because there are so many areas which could have a positive or negative influence on the development of open source and trust in open source. If I have to bring in one problem, then it's that the strategies at the European level to promote the use of open source software are very weak and not ambitious enough. And in addition, we see that we do not invest enough money in the development of open source in Europe. And I think this could be a unique opportunity for Europe: to fund something like the Open Technology Fund much more intensively.
So I think we should reconsider open source as a very important aspect of our digital sovereignty, which is often quoted, but no one really explains what it is. And I think open source would really fit into this concept, because it would make Europe independent from software development in other parts of the world. Yeah, I mean, there's an acute awareness, I think, around Europe that in a lot of technological jumps we've been half a step behind, perhaps, on the international scene. Romeo, to bring you in on that, from a more practical implementation standpoint: how do you see it? Can Europe lead? Are we too late, perhaps, with others streaking ahead of us on open source technologies? I wonder what your assessment is from working with your people. Yeah, so it's never too late. Of course, the Silicon Valley culture is somehow different, but we see these hotspots now emerging, and I think Berlin is sort of becoming the new Silicon Valley in Germany. I've spoken to many startup founders, and they told me that there is more than enough money for them: if they have a great idea, there are enough EU money pots they can get access to. So that's not the problem. I think what we miss a bit is the coolness factor; everybody wants to go to Silicon Valley. But I think the new shift towards working at home and working distributed globally will help us to just stay where we are. And one concern I have is that most of the open source foundations are actually from the US. And in theory, I've read somewhere, the US government could basically say: okay, we just regulate this; this is now property of the United States and it falls under the export regulations. And then we would have to start from scratch. So we should actively found a central foundation in Europe, which can act something like the Apache Software Foundation.
And what I'm also missing a bit is academic contributions to open source. They just go up to their paper publications, but it doesn't come to maturity. And on the other hand, industry is using open source as some sort of warfare now. It's the same as with closed source products; now it's open source warfare, one open source product against the other. And I think Jochen wants to speak, so I give Jochen the word, of course. I don't want to interrupt you. I just want to say I very much agree. And I know from discussions we have had in the context of Industrie 4.0 in Germany, for instance, that there is still a bit of uncertainty amongst, for instance, universities: what does it mean if their professors, if their staff, contribute to open source? How do they check? I mean, contributing to open source is on the one hand very easy. On the other hand, of course, if you have IP in the background, intellectual property, etc., you want to check a bit; you want not to give away things that you don't want to give away. So organisations need to build some capacity around this. And there is still some uncertainty amongst some organisations in the academic world, but also amongst new players or more traditional industry players who would like to contribute, but who are not sure: what happens if my person contributes code and the code is faulty, but it gets in? Am I then liable? These are questions that people ask about contributing to open source in general, not about AI specifically. The culture is evolving. It may also be a bit of a generation gap: younger people are more ready, they want to contribute, and then they face the grey-haired lawyers in their organisations saying, oh, stop, please, careful, careful. This is something where we might need to help. It may also be something for the legislators, for Timo to take with him: is there something where the legislator needs to produce clarity on the liability questions that come up? Are they valid?
Are they not valid? These kinds of things. So Romeo, sorry, I didn't want to interrupt you. Only one thing to say: as an example, the two most used deep learning frameworks are TensorFlow and PyTorch. TensorFlow is led by Google; PyTorch is led by Facebook. And previously there was another, which was called Theano, that was led by a university. And as I said, the open source warfare is currently at its highest, and we just need to be aware that all the major companies have a high political interest in shaping this open source world. Thank you very much. Let's go to you, Ibrahim, quickly, and then we're going to move on to the questions; we've got quite a few interesting questions coming in. So Ibrahim, if you want to just... Yes, thank you, Jack. I'll just take 30 seconds of interjection. So I actually work with a lot of startup companies in Berlin; to Romeo's point, it's a real hotbed of AI technologies, and a lot of them are focusing on open source. But the reason I wanted to interject here is to mention that we are actually opening a formal office entity in Brussels. So you will have an organisational presence, a formal entity, in the EU coming very, very soon. So that might alleviate some of the concerns that you may have. Thank you. Thank you for... Thank you. Interesting that you're coming; there we go, I didn't know that. So let's start with some of the questions that have come in. And perhaps a nice way of doing this: if you have a response you want to give, raise your hand and I'll give the floor to you. So Marco Batani Auckland has asked: what does the panel have to say about explanations presented to the user? Say model version one explains to the user what he or she needs to do to get a loan. The user complies, comes back three months later to ask for a loan again, and then gets a new rejection from a different model, giving a new explanation about why they haven't got that loan. I think...
I hope I've read that in a clear way. How do we fix this? How do we make sure that there's continuity through the implementation? Anyone want to jump in? Romeo, maybe? I don't know. So I think it's more a question from the legislative perspective, but it boils down to standardisation. If you have a model and it complies with a certain standard, then of course rules change, and if the bank changes its rules, it's the same problem. It's just the frequency at which rules change that is basically causing the problem. So this is something which can be regulated: for example, if you were assessed against version 1.0 of a standard and version 1.1 comes out within the next months, then for a transition period you could still be assessed against version 1.0 instead of 1.1. So I actually had something I wanted to ask you, Timo, which we were talking about just before, on the legislative side: we were talking about how professors and staff might not be able to clearly understand whether what they're doing is legal or not, and there's this sort of grey area. How quickly can the EU act on this? How quickly can legislation and rules come in, from your point of view? You're legislating on these issues; it needs to be done quickly, right? Yes, but I wouldn't expect miracles here, or wonders to happen. So this will definitely take some time. EU legislation can be quick: at the end of the last mandate, we were able to negotiate a new law on single-use plastics within, I think, roughly nine months. But if you take a look at the ePrivacy Regulation, which still isn't finished, and this has gone on for years now, it can take very long. So I think that the general law can be in place within, I don't know, maybe two years, but it's not done then, because then the problem starts: you have to adapt the framework to technical developments, to technical progress, which happens.
And these adaptations need to be done by implementing acts, which can be adopted quite a bit quicker, speedier, than the framework regulation. So yeah, I think two years is a good guess once the Commission starts with the draft. And also, two years sounds like quite a long time, but actually it's not, because if the fights go on for four years, then you even start to enter a new legislative term. I know, so this is the thing with these issues, right: they need to get done. Okay, so we'll go back to the questions. There was actually a question directly for you, Jochen, and I know you have kind of answered it a bit in the chat, but we'll put it to you here: how do you envision market surveillance of the AI system market? I wonder if you can perhaps vocalise some of the answer that you put in the chat box there. Yeah, what I responded is: sure, it's an important question, and of course it means skill-building on the side of the market surveillance authorities. But it should be possible, right? It's a new technology; you need newly skilled people, you need to train them. But it works in areas where complex hardware is being assessed against regulatory requirements. I mentioned electromagnetic compatibility; I think then I mentioned something else, but I would have to scroll now and look for it. Also the Medical Device Regulation: medical equipment is being checked. I mean, the way this works is you have a standard, you implement the standard, you operate under the presumption of conformity, and then there are checks being done by the market surveillance authorities. They pick you out of a random list, or maybe they have you as a suspect, I don't know. Then they ask you to work with them to assess whether you are really compliant. And these are the checks. And for this, you need to build the skills if you talk about AI.
Sometimes the question is brought up that with AI, you need to consider the lifecycle of a product as well, or of a service or technology. But here, again, we have examples under the NLF with the Medical Device Regulation, which also looks at the lifecycle. So this is something to learn from. We are starting to discuss this; I believe policy makers are starting to discuss whether this is feasible. It's a discussion to have: how is this feasible, what skills do you need, how do you build them up? But personally, I believe it's a good system we have in Europe, and we should try to use it as the first priority for those areas where we all agree that some sort of regulation and compliance needs to be done. Thank you very much. Just to say, that question was from Xavier Lario. So Deborah, I've seen that you have a comment you want to make on this. I just want to make a comment. I think the system is slow, but for a good reason: we need to have a good political system in place, and of course regulation evolves much more slowly than technology. But I think in some industries, like financial services, they use implementing acts as a tool to partner with standardisation organisations and industry to define standards. So I am sometimes a bit concerned only if we leave the standards to the market, like Jochen said. I think sometimes it works, but sometimes we can end up with several competing de facto standards or some closed standards. I think that, independently of whether we go for an agency or not, the use of implementing acts can be very useful in this case, so that things can evolve without going through the whole review of regulations, of legislative proposals in general. Yes, I want to add to this. Implementing acts can be challenged much more quickly, though; that doesn't necessarily secure the ecosystem long term, right? I think, and Jochen, Timo, you can intervene, but it can be challenged, and it can also evolve. It's a trade-off.
If you hard-code it in the main text, and this is really known, I had the pleasure to work on a bill on the G-Home: if you hard-code how your system should work in legislation, it took nine years for the systems to be implemented. You can imagine how the technology evolved in nine years. Then you have this hard-coded in the legislation, and you need two more years to go through the entire policy lifecycle. So implementing acts work so that legislation continues to be a very good process and supports our beautiful European system, while we can cope with the evolution of technology. I truly believe that this is one way, but I think, Timo, Jochen, you may have some experience on that. I couldn't agree more. An example where I think we made a mistake is the copyright legislation, where we said this law applies to all platforms except... and then we had a few platforms which were excepted from the scope. There is an exemption in place for GitHub; there's one for eBay; there's one for Dropbox. But what happens if a new service emerges which we don't know yet? It would fall under the scope. That's why I totally agree with you. I guess this Copyright Directive is not going to be reopened within the next 10 or 15 years, and this is really a problem. Implementing acts are a solution to that, as long as the Parliament is involved. From the democratic point of view, what shouldn't happen is that only the Commission and the member states in the Council are entitled to update the rules. So we're just going to take one more quick question before we wrap up. This was from John Favreau, who talks about being able to see code as necessary but not sufficient. The question: for instance, in automated driving, code is rarely open source, but the assessor sees it. The key issue is whether the decision process can be explained, and so far it's too often unclear whether it can meet the safety standard.
And I know, Deborah, you made a quick comment on that, on the transparency paradox as well; perhaps you can explain what you were talking about, and reply to the question. I just said that it's a very good question, because we are really thinking about it too; I think it's a question for most of the people involved in the field. I worked at ENISA on cybersecurity, which touches a bit on this through automated mobility. And the transparency paradox: transparency is positive, for sure, and it is needed; we cannot accept the black box, as we discussed here already. But then we have this discussion on what the right level of transparency is, transparency to whom, and transparency for which purpose. And if we can calibrate these three aspects, we can probably find a trade-off short of full transparency, but it's really a challenge. If someone has the answer to this question, I would be happy to hear it. That's the Pandora's box; we could go on for another hour. Exactly. Okay, so just to wrap the session, if you wouldn't mind, I'll go around each of you, and in about 30 seconds, perhaps present your blue-sky solution: over the next couple of years, how can you see AI developing in an ethical way? Just a really succinct statement, and I'll start with you, perhaps, Ibrahim. Thank you, Jack. So, in 30 seconds: I think one of the challenges I see within that timeframe is the different legislation coming from different countries. As you know, there are maybe over 40 countries today that officially have different kinds of laws in relation to AI. Several countries have appointed ministers of AI, or have appointed something at a similar level, a head of ethical AI or an officer of AI ethics, and so on.
And one of the challenges I see, talking to companies operating in different geographies, the EU, Asia, and especially North America, these three major geographies, is that there is a different understanding and different thinking about what constitutes trusted and responsible AI. For the next couple of years, and maybe even longer, we need to drive towards at least a common base of understanding of what trusted AI is and what needs to be done, what ethical AI is, and how we can achieve and gear towards fair and equitable technology. So it is about trying to bring the different legislations to a common base, and from there perhaps building on specifics for each jurisdiction.

Same sort of question to you, Romeo. What are the most crucial aspects that need to be resolved in the immediate term to make sure the implementation of AI is effective?

Okay. I think we should move to security and privacy by design, not security and privacy by trust. I don't want to have to trust any EU bodies; I don't want to have to trust any companies. It should be implemented in the system. I'll give you one example. There is a paper which has been leaked, from Brussels, 6th of November 2020, number 1243/2/20, a draft Council resolution on encryption. I don't want my WhatsApp messages to be decrypted and read by others. On the other hand, if it can fight child pornography, for example, I might change my mind, but I still don't trust that the EU can implement it in a way that nobody who is not supposed to can read my messages. So if we can move to systems which are private and secure by design rather than by trust, that's the future in my opinion.

An interesting balancing act there, exactly. Deborah, same question to you. What is the most crucial aspect of making sure we have ethical AI systems? You're muted. You're muted. Sorry.
So, plus one to Romeo's comments; I agree 100%, especially on the point that it is very difficult to enforce that systems will be used as they are supposed to be. And this is valid also for border control, et cetera. But if I could make a wish for the next years, I really would like to see open source, open technologies, and open processes spelled out together with AI, so that we can better communicate the importance of openness in AI initiatives, bring it visibility, and actually see "open" written in the papers. Today you can go and Ctrl+F for it in the papers and you don't find it. So that is my wish list: to bring open source and open initiatives to the level of importance they should have for artificial intelligence.

We've got one minute left on the event now. So Jochen, if I can ask you to be extremely brief before we hand over to Timo, what are you thinking?

I'll speak double fast. I agree with all that has been said before. We are at the beginning of a digital decade; I love to call it the digital 20s. AI will play a key role, and data will play a key role. We need to get ready to handle it; we don't need to be afraid of it. We want to set standards, in the broader sense of the word, standards out of Europe. We are on a good path, and open technologies, open source, and open standards will help a lot to increase trust in the technologies and to facilitate and promote their uptake. That's my blue sky. Thank you.

Timo, finally. The short answer would be: everything the previous speakers said, within a legislative act. A little longer: I think users need to know when AI is used and need to know about its implementation, and they need to see it as an added benefit and not as a threat. This requires a solid legal framework and honest contributions from industry.
And I think our discussion today was a good starting point for that. I agree. I will have to wrap it up now. I think we packed quite a lot into quite a short space of time overall. I thank all five of our speakers, Deborah, Romeo, Ibrahim, Jochen and Timo, for speaking so openly and really getting into it. I think we also managed to talk in quite an understandable way about what is quite a dense and difficult topic. Thank you so much to the Open Forum Europe for hosting the event, and thank you everyone for your questions and for watching. If you want to watch a recording of this, I understand it will go up on the Open Forum Europe's YouTube channel by tomorrow. So wishing you all the best. Thanks for joining, and see you soon. Thanks very much. Bye now. Ciao. Bye.