Good afternoon to you all. My name is Joyce O'Connor, and I chair the Digital Futures Group here at the IIEA. You're all very welcome to our webinar today, "Countering Online Disinformation: Actions Taken and Challenges Ahead", with Paolo Cesarini, who is Head of the Media Convergence and Social Media Unit in the European Commission. You're very welcome, Paolo, and thank you so much for taking the time to be here with us in the IIEA; we appreciate it very much indeed. Paolo will speak for 25 minutes or so, and then I will go to the audience for comments or questions, which you can send in through the function at the bottom of your screen. I'd really appreciate it if you gave your name and affiliation when asking a question; that would be very helpful to us. You can also join us on Twitter; our Twitter handle is @IIEA. Paolo, your presentation today is very timely on at least two fronts, if not more. For the general public, discussion of online fake news and disinformation is a feature of our day-to-day lives. But it is also very timely, Paolo, in relation to your own work in the Commission, and of great interest to us here in Ireland. Your colleague Ciarán Kissane of the BAI is, I know, working with you and your colleagues on the assessment of the implementation of the Code of Practice on Disinformation. And Jane Suiter and Eileen Culloty, both from DCU, have published Ireland's country-specific reports. So, as you can see, there is really good interest here. We have also set up a Commission on the Future of Media under Professor Brian MacCraith. So there is a lot of action taking place here, and a lot of involvement of people within the country. Paolo, we know your discussion will focus on the European policy response, with a particular focus, as I've said, on the Code of Practice on Disinformation. This code, which was introduced in 2018, is now under review.
And you, Paolo, will assess its effectiveness and discuss the best future policy options to protect society, and also to protect democracy, from the threat of disinformation. I'd like to give a little background on you, Paolo. You have had a distinguished career, presently in DG Connect at the European Commission as Head of Media Convergence and Social Media. You previously worked in the International Labour Organization as a member of the Legal Service, and in the European Commission's Directorate-General for Competition as head of antitrust for a variety of economic sectors. You have also taught EU competition law at the Universities of Siena and Montpellier. So Paolo, we're really pleased to have you with us today, and we look forward to your presentation. The screen is yours.

Thank you very much, Joyce. It's a pleasure to be with you today, and warm thanks to the IIEA for this invitation. It gives me the opportunity to focus on an issue that is becoming increasingly important and increasingly present in political debates, both at national and at EU level. At the core of this debate there are, in broad strokes, the challenges raised by the digital transformation of our media landscape, and by the increasing power and influence exercised by the largest online platforms. These platforms have become, as everybody knows, a privileged channel of access to information for the vast majority of citizens. But they are also increasingly playing a role in moderating content that is produced by third parties, by users, not by the platforms themselves. With their own sophisticated methods for content selection and distribution, driven by algorithms and other recommender systems, the platforms have been able to shape, and therefore moderate, the distribution of content, and eventually the manner in which public discourse and debate take place.
They have become a sort of public square, a public space where, on one side, a larger number of people have access to a platform that enables them to make their voices heard, but where, on the other side, hostile actors have the possibility to manipulate the debate. There are several vulnerabilities built into platform services, some of which I will mention in a second, that need to be addressed. But before getting into the core of the issues, I would like to recall another important contextual factor. The digital transformation of the media landscape has been accompanied by a profound distress, which the COVID-19 crisis has exacerbated, in the economic sustainability of the media industry at large. We know that the main revenue model for journalism and media draws resources from advertising, and we know that advertising revenues are captured more and more by the large platforms. So this element joins up with the first point I made: the increasing presence and power held by large private online platforms. The other element I would like to flesh out from the outset is that tackling the phenomenon of disinformation online is a daunting task. It is daunting because it is easy to take the slippery slope and envisage measures that in fact moderate free speech and limit freedom of expression. That is why, since 2018, the Commission has taken an approach which acknowledges that disinformation is a multifaceted problem and requires a multidimensional response. Going back to 2018, and in particular to the Action Plan against Disinformation, this complexity was translated into a portfolio of actions at different levels. The need was felt to strengthen, first of all, the institutional capabilities to detect, analyse and expose disinformation campaigns.
As far as the EU institutions are concerned, that applies in particular to strengthening the capabilities of the StratCom Task Forces within the External Action Service, to make them better able to identify and counteract influence operations that are very often driven by foreign actors. On another front, an important workstrand consists in making cooperation between member states much more effective, by enhancing the exchange of information and best practices and possibly devising joint responses. That has translated into the creation of what is called the Rapid Alert System, where representatives and experts from different member states easily communicate relevant information, exchange views and, hopefully, develop good practices for the future. The third strand of action is the work with the platforms. The attempt here is to increase transparency and accountability for the platforms while respecting one fundamental principle, which is their freedom to conduct business, so without interfering with the choices that belong to any business in terms of designing their services and providing value for their users. And the fourth main strand of action is about raising citizens' awareness and thereby increasing societal resilience. Very important here are the policies that need to be boosted to raise the level of education of people, the level of media literacy for all audiences, young generations but also elderly people, to make them more aware of the manner in which information is distributed online nowadays, and therefore to develop critical thinking and a critical approach when they access information online. I will not spend time on another, otherwise very important, element, which is all the actions taken by the Commission to strengthen the resilience of electoral systems as such.
That is, to ensure the integrity of elections against a number of threats, of which disinformation is one, but where cyber attacks on voting infrastructures, transparency in the funding of political parties, and transparency of political communications are also crucial, so as to ensure that the political debate, particularly during electoral campaigns, is transparent and intelligible for all voters. But now let me come back to the point which is closer to my daily work, which is the cooperation and the work with the online platforms. I mentioned before that their services are very diversified, but they all have in common a very important evolution in the ability to sort content, to devise mechanisms that correspond more and more to users' interests. The algorithms that are in place to present content to audiences are therefore increasingly important in shaping the information diet, I would say, of users. I also mentioned the vulnerabilities whereby hostile actors can amplify their narratives, making their voice louder, to use an image, to the detriment of other voices, in a way compressing the space for freedom of expression of others. The techniques designed to manipulate these systems, to increase visibility and the speed at which a disinformation narrative may reach its intended audiences, are now increasingly well known. There are the famous click-baiting strategies, whereby information is designed to appeal to audience attention and to create spaces where the placement of commercial ads produces clicks, and therefore revenues, for the publishers of such news. There is also an increased use of micro-targeting techniques in combination with the sponsoring of certain content.
This is clear for political advertising, but it is also clear for socially engaged or issue-based advertising, where the wider distribution of that type of content, because it is supported by advertising investment, reaches the intended audience through a very finely tuned combination of criteria that appeal to that audience, and may have the effect of strengthening their beliefs, and therefore the narratives that support those beliefs. Another technique is what in English you call astroturfing, which consists in creating the impression in audiences that there is large support for specific stories through artificial manipulation of the media: the creation of fake accounts, the replication of certain stories through bots, but also fake engagements that increase the popularity of a specific piece of content and therefore make it more likely to be visible to a larger audience. These are just a few well-known techniques, but the techniques are fast evolving, and they increasingly involve the combination of automated systems, or direct manipulation of the technologies, with human intervention: the creation and operation of coordinated groups that plan and then execute deliberate disinformation campaigns, with incentives that are in general political but could also be economic. Fourthly, I would like to mention the fact that the algorithms built into these services are very often not understandable for the general public, for the users themselves: why do I see what I see? And I know that what I see is different from what other users see, because there are a number of signals that drive the selection of content algorithmically, taking into account popularity, as I said, but also my own personal preferences.
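To make the amplification mechanism described above more concrete, here is a minimal illustrative sketch, not any platform's actual detection logic, of the simplest signal of coordinated inauthentic behaviour: many distinct accounts posting identical text within a short time window. The function name and the thresholds are hypothetical assumptions for illustration only.

```python
from collections import defaultdict

# Illustrative thresholds (assumptions, not real platform policy values):
# flag a piece of text if at least MIN_ACCOUNTS distinct accounts post it
# within any WINDOW_SECONDS-wide time window.
MIN_ACCOUNTS = 3
WINDOW_SECONDS = 60

def find_coordinated_posts(posts):
    """posts: list of (timestamp_seconds, account_id, text) tuples.

    Returns the list of texts that show the coordinated-posting pattern.
    """
    by_text = defaultdict(list)
    for ts, account, text in posts:
        by_text[text].append((ts, account))

    flagged = []
    for text, events in by_text.items():
        events.sort()  # order each text's posts by time
        # Slide a window starting at each event and count distinct accounts
        # that posted the same text within WINDOW_SECONDS of it.
        for i, (start_ts, _) in enumerate(events):
            accounts = {a for ts, a in events[i:] if ts - start_ts <= WINDOW_SECONDS}
            if len(accounts) >= MIN_ACCOUNTS:
                flagged.append(text)
                break
    return flagged
```

Real detection systems combine many more signals (account age, posting cadence, network structure), but the sketch captures the basic idea that coordination, not content, is what is being detected.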
It is not clear, however, to what extent the authoritativeness of the information sources, the trustworthiness of the media distributed in this way, is equally factored into this algorithmic architecture, which at the end of the day determines what we see, what we know, and the manner in which we interpret the societal debate as it evolves. I would now like to turn to the Code of Practice, because the Code of Practice matches quite well these vulnerabilities and risks, which are inherent to the development of social media technologies and other online systems for the distribution of content over the internet. The Code of Practice is aimed at scrutinising ad placements and making political and issue-based advertising more transparent. It is aimed at inciting the platforms to be proactive, and to report on the actions they take, when they identify inauthentic behaviour, that is, the use of manipulative techniques online that alter the integrity of their own services. The Code of Practice is also very much concerned with the need to empower consumers, to empower users: to make them more aware of the functioning of the content selection algorithms, and also to make them able to easily access authoritative sources, and a diversified number of sources that are equally authoritative. When I say authoritative, I mean sources that follow good ethical and professional journalistic principles, which are, as you know, self-regulated to a certain extent in the case of the press, more regulated through governmental action when it comes to broadcasting, but which in any case have their own framework for trustworthiness. The code also takes into account that there is a fundamental problem of asymmetry of information.
The data that the platforms gather and use concerning their own users is certainly much more vast than the information available to the public authorities that exercise oversight of these systems, or even to the users themselves. So the point here is to engage the platforms in a more constructive cooperation, both with the independent fact-checking community and with the research community, with the scientists who can make sense of the data: who can draw conclusions or devise measurements to understand the importance of the threats posed by disinformation, for instance by designing models that track the distribution, and therefore the propagation, of this disinformation virus, and thereby contribute to raising public awareness and increasing the resilience of our societies.

What are the provisional conclusions that we can draw after a couple of years of operation of the Code of Practice? The Commission published, on the 10th of September, a few weeks ago, an assessment of the Code of Practice. It emphasised that there are undoubtedly important achievements that have materialised thanks to this self-regulatory approach. I could quote figures regarding the number of ads that have been rejected because they were misleading or misrepresented the identity of their authors; figures concerning the number of landing pages identified as systematic purveyors of disinformation that have been demonetised. I could also mention the progress made in the creation of publicly accessible libraries for political ads; not everything there is absolutely perfect, but the progress needs to be acknowledged.
I could also mention figures concerning the number of accounts closed following the detection of manipulative, inauthentic behaviour, and the higher level of transparency that we have today concerning the actors that carry out these organised campaigns by means of astroturfing. All this is certainly laudable. I think it is an encouraging step forward compared to the situation we had back in 2016, when "fake news" became the word of the year, with all the concerns arising around the influence that fake news had on the 2016 presidential elections in the US and on the Brexit referendum. We are in a better place today compared to that time. But we know also that there is still a lot of work to be done. The assessment shows, in the first place, that the implementation of the commitments is not always consistent, and that the commitments themselves are not sufficiently clear to be actionable and subject to meaningful monitoring. The platforms take different strands of action, and their actions are not always comparable. This is coupled with the fact that actions are not taken at the same speed across the Union: some member states are better served than others. Another element is that the terms we use to explain the phenomenon, like disinformation, misinformation, foreign influence operations, coordinated inauthentic behaviour and so on, are still working definitions that probably need to be better defined. And certain terms, like issue-based advertising, have not been defined at all.
A third element worth pointing out is that the code itself has some gaps. The most important one, I think, is that it does not include a set of key performance indicators against which it would be possible for public authorities to monitor the evolution of the threats, and also the effectiveness of the policies put in place by the platforms to tackle those threats. There are issues that remain unanswered: is it legitimate to use micro-targeting techniques for political advertising in the context of a political campaign? Should there be a different regime for commercial ads and for political ads? And, you know, the consequences that the use of personal data for micro-targeting purposes may have; the Cambridge Analytica scandal is still, I think, very vivid in all our minds. Also, the understanding of what counts as a manipulative technique that amplifies content online to the detriment of other content needs to be further elaborated, because these techniques evolve all the time. So there are gaps in the code that it would be important to plug. And finally, the Code of Practice is a self-regulatory measure, and therefore cannot address issues that are becoming increasingly important. The first of these is the inclusiveness of the approach: are all the relevant services really participating in this joint endeavour to apply policies that are designed to be effective in curbing disinformation? There are actors not yet present in the conversation: not only other online platforms, but also other specific services which are becoming increasingly important, like end-to-end messaging services. WhatsApp has played an important role during the COVID crisis in enhancing the possibility for people to spread stories, myths and conspiracy theories.
Also, if you look from the perspective of following the money, and therefore of ensuring that disinformation does not become a business where people can enrich themselves through click-baiting, there are important actors from the tech sector that are still not fully part of this endeavour. So, inclusiveness. The second issue is oversight. Certainly, a top-down approach with a stronger regulatory framework would be necessary to empower the authorities and establish the appropriate mechanisms to exercise oversight of the functioning of these systems, including the auditing of the algorithms that shape the distribution of content online, and including the effectiveness of the mechanisms put in place to avoid the risk of deliberate manipulation, in particular by foreign actors. There are no sanctions, by the way; so what would be the consequence of insufficient or incomplete application of these commitments? Finally, and here I would close the list of shortcomings, it is important to realise that there are important questions around the protection of fundamental rights, which require specific redress mechanisms: to ensure that there are no excessive measures, from the side of the users or from the side of the platforms, that silence people, but at the same time measures that enable people to be held responsible for what they do when they are online. Taking this dimension into account also probably requires a stronger framework. And I conclude, because here we are at the point where the reflection is in full swing in our offices. The Commission has planned three important initiatives. One is the Digital Services Act, which will have, among other things, to modernise the rules for information society services in the context of the review of the e-Commerce Directive.
And certainly there will be considerations relevant also for harmful content, such as disinformation, particularly in terms of setting transparency standards for platforms, that could be addressed in that framework. But there is also another important initiative, which is the European Democracy Action Plan, which will be adopted by the end of the year. Beyond the first proposals for the Digital Services Act, there is an additional, specific aspect that pertains to the defence of democracy against foreign interference and the manipulation of media to steer the debate and blur the boundaries between different opinions, by using conspiracy theories and disinformation as a mechanism to sow distrust and confusion and to increase polarisation in our societies. These elements probably require specific consideration. In the European Democracy Action Plan we have to consider whether, in addition to improvements to the Code of Practice, it would be necessary to provide another layer of legislation that would make these commitments more binding, more compelling, for platforms that play a systemic role in the information space. Third, there is the Media and Audiovisual Action Plan, also scheduled for adoption by the end of the year. We are coming out, or trying to come out, of a coronavirus crisis that has profoundly affected the media ecosystem. Quality journalism and professional journalists are a pillar of our democracies, and the need to provide active and substantial support for independent journalism and independent professional media is certainly felt as a challenge that needs to be addressed. So part of the resources of the recovery fund, as well as the future mainstream programmes under the multiannual financial framework that will kick in in 2021, will have to be devoted to addressing this problem.
The problem concerns the rescue of the industry in the face of the consequences of the crisis, but also, in the longer term, supporting the transformation of the media industry and its adaptation to the challenges of the digital world. I would stop there. I think I took a bit more than 25 minutes, closer to 30, and I'm happy to answer any questions.

Thank you so much for your very insightful and excellent presentation. Your presentation has shown the complexity of the issue: a lot going on, a lot done, as they say, and a lot on the horizon to address these issues. You have certainly got our audience thinking, and I'll go right away to the questions. The first question is from Ciarán Kissane, whom you know, from the BAI. He asks: what needs to be done to make cooperation between member states more effective, and can we expect to see specific proposals in this area presented by the Commission in the context of the proposed Digital Services Act and the Democracy Action Plan, both, as you've mentioned, before the end of the year?

Yes, a very good question. Indeed, this is one of the elements driving the ongoing reflection. The need to have a network of authorities devoting their time and resources to overseeing the platforms is, I would say, a no-brainer; the need is obvious. The question now concerns the identification of the authorities which are best placed in that respect when it comes to disinformation. And here I express my own personal views: I think it would be logical to think of the audiovisual authorities, and of the already existing network of European audiovisual authorities, as the entities best placed to exercise this function.
Why? Essentially because, given the experience gained across audiovisual services, the issues that arise in that context are at the end of the day not very different from the issues that arise in the online digital media context. That would ensure a more coherent and consistent approach for all media which, being increasingly convergent, should not be treated differently or subjected to radically different standards. Then the question is, of course, how to empower such authorities: to expand their capabilities and competences beyond what they do now, the audiovisual services both online and offline, to include also other types of services that play a role in the dissemination of disinformation. There is also another very important issue, which is the oversight of the platforms that is designed to capture the trends and threats inherent to disinformation, and also to assess the validity and effectiveness of the policies and actions taken by those responsible for the diffusion of this content, the online platforms. Should this function be exercised only by public authorities? Or, in order to avoid criticism about the possibility of creating a censorship machine driven by governmental agendas, should there be space for an independent entity that studies the phenomenon and makes analyses with all the guarantees of independence required from a scientific point of view: an oversight driven by a scientific programme, not by a political programme, as a necessary tool that could then advise the actions that public authorities should take in light of this enhanced knowledge of what is going on?
I think that pleads very much for a combination of public authorities, with specific competence and knowledge of the media at large, and a well-structured community of fact-checkers and researchers that looks, from the point of view of society, at the nature of the threats and the effectiveness of the measures. The combination of these two elements, I think, deserves to be very deeply considered. And you give me the opportunity to mention here that the European Digital Media Observatory is trying to build up exactly such a strong community of fact-checkers and researchers, which could act as an advisory board for national audiovisual authorities, so that they take decisions which are not politically tainted but based on evidence.

That's interesting, because you mentioned COVID-19 earlier in your presentation, and we've learned to see the importance not only of individual involvement but mainly of scientific analysis for government and for public authorities, so that independence and trust can be seen by the public as well. I think that's very important to emphasise. Eoin O'Dell would like to follow up on this question, Paolo. He's from the Law School at Trinity College Dublin, and he wants to ask you a question that is maybe unanswerable: how confident are you that the three forthcoming EU initiatives will have a significant positive effect?

Well, to anticipate the effect of measures that still need to be designed is pretty difficult; it is very much crystal-balling. At the same time, you can see that the combination of these three main strands of work should produce results. We are acting to ensure a level playing field for all information society services. We will act to identify the specific threats to democracy posed by the transformation of media into the digital space. We are acting to support professional journalism and professional media.
I think the combination of these three elements should actually bring us out of the difficult situation that the COVID crisis has put in front of us, and I think the future is promising. I should also mention another initiative, the Digital Education Action Plan, which is also in the making and which, from the point of view of increasing societal resilience, devoting resources and creating a community of experts around this important theme, should bring another piece to the puzzle. Through synergetic interaction between these different initiatives, we should be able to have an impact in the future.

You mentioned the European Digital Media Observatory, and I understand they are proposing to set up a series of regional hubs. Is that for the purpose you were talking about? And I'm just wondering, do you think Ireland would be a good place for a regional hub, given the platforms that are here? Would that be a valuable addition, and is that the reason why you are looking at the idea of regional hubs?

Yes. I think this multidisciplinary community of researchers and academics should be seen as a network of networks, in the sense that it is not a centralised entity that can drive a research programme and strategy alone. The reality is that information markets are very much segmented along national boundaries, or perhaps more generally along linguistic boundaries. So the shapes and forms that these disinformation campaigns and conspiracy theories take also differ very much from one area to another. The presence of national or transnational hubs with specific knowledge of local information environments is fundamental. It is important to have one in Ireland, as it is important to have one in the Baltic states or in any other member state that suffers from this phenomenon.
And I think all member states, to a larger or smaller extent, are suffering from this phenomenon. So our idea is to go step by step, with staggered funding programmes, to create this network of regional or national hubs for research on disinformation, connected to a central platform, EDMO, the European Digital Media Observatory, with the European University Institute now acting as coordinator of that central platform. Each hub would act with strong autonomy, but also with the conscious choice to engage in coordinated research programmes to maximise the effect.

And just as an aside, we are having Miguel Poiares Maduro from the Observatory coming to speak to us on the 3rd of November, so our audience might put that in their diary, because we'll follow up on some of the things you've been speaking about. There is also a question here, as a follow-up on this area of networking and cooperation, from Seamus Allen in the IIEA. He asks: Paolo, you mentioned the importance of cooperation between EU member states. Are there any long-standing differences between EU member states on countering disinformation, due to experience or cultural factors, or are most countries mainly on the same page?

The risk of fragmentation is always present. The approaches followed by the member states so far are not aligned. The risk is that, as the problem becomes more and more prominent, all member states will take their own initiatives, and there will be a substantial lack of uniformity in the approach, which would be a serious problem because the internet, as you know, knows no borders. The phenomenon is transnational and cannot be tackled by one member state alone; it requires strong cooperation between member states.
So indeed, one of the challenges that the upcoming initiatives have to address is how to ensure that, when it comes to the regulation of online platforms, we have the right tools in place: tools that ensure minimum harmonisation, but also the necessary flexibility to take account of, on one side, the diversity of the services provided by the platforms and, on the other, the diversity of conditions in different member states. Having said that, I think the need for a set of standards that ensures all member states take their own actions in the same direction is very important. The risk, again, can be seen in other parts of the world, such as Singapore, where, under the cover or pretext of fighting disinformation, the legislator has in fact set out rules that silence the opposition and reduce the diversity of opinions that can be expressed.

That framework you've emphasized is really, really important. We've had a question about empowering citizens from Eileen Culloty of Dublin City University's FuJo Institute. She asks: in terms of empowering citizens to understand algorithms, are accountability audits by experts a better approach than increased transparency, given that algorithms are highly complex?

Yes, I fully agree with this statement. It's a question which is also a statement, and I agree with it, because there is a limit to transparency. First of all, because algorithms are indeed very complex.
And people are not generally engineers, or have no time to devote to a full understanding of the manner in which algorithms work. The risk is that mere transparency about algorithms will not really contribute to enhancing people's resilience and awareness. Perhaps those in the audience who are better informed know, generally speaking, how the algorithms work, how popularity and personalization play a role in the distribution of content on social media services; but this knowledge is probably not available to everybody, and it is important that it should be. There is also another limit to transparency: if too much is disclosed about the manner in which the algorithms function, the risk increases that hostile actors will game the systems. So it's a fine balance.

The focus on accountability is therefore, in my view, very relevant. Accountability leads to a fundamental question: what should be the optimal architecture for these content selection algorithms, in order to ensure that quality content emerges, that disinformation is diluted, and that there is no discrimination between different sources of information that are equally trustworthy? And that leads to the subsequent question, which is being discussed by the member states in the context of the Audiovisual Council: what would be the trustworthiness indicators that could enable a better architecture for algorithms, and also an auditing system for the functioning of these algorithms, based on criteria that are verifiable? With the COVID crisis we have seen some improvements: through Facebook, through Twitter, through YouTube, citizens have been able to access much more prominently content coming from public authorities, from the WHO, and from reputable media. That has happened, and it is a good path to be pursued.
But we need to ensure that there is an oversight body and an auditing system that verify and control the manner in which this prioritization takes place: objective, non-discriminatory criteria that reflect the trustworthiness of the sources, devised on the basis of a dialogue that includes all actors, publishers, broadcasters, and the platforms themselves. The sector has to devise them.

Well, Paolo, thank you very much for that very thorough answer. It's probably a good place, and a positive note, to stop on. On your behalf and our own, I'd like to thank the audience for asking some truly interesting questions and for their participation. The hour is nearly up now, so unfortunately we're going to have to close. I'd like to thank our team here at the IIEA, Lorcan Mullally and Sarah Burke on the production side and Seamus Allen, our digital researcher, for all the work in preparing this webinar. And of course I'd like to thank you, Paolo, very sincerely for a really fascinating presentation on disinformation. I think you've shown the complexity of it, but also, really importantly, you've given an assessment of where we're at and what we now have to do, and of the idea of linking the countries together, with independence, with scientific bodies. We can now see, in some ways, the vehicles that are there to make this happen. As you've mentioned, there is the Digital Services Act, and we have Christine Cannaby coming to speak to us on the Digital Services Act on the 21st of October, so the conversation will continue. There is also the European Democracy Action Plan, which I think offers an enormous amount in terms of empowering citizens, and the Media and Audiovisual Action Plan. I think people will be very interested to hear of your focus there, emphasizing the independence of good journalism and the funding necessary to help transform that industry at a time which has been very challenging for it.
So thank you so much. As I said, the conversation will continue. We really appreciate your contribution, Paolo, and we look forward to seeing you again. Thanks to you, and thanks to everybody who has listened to our webinar with such great attention. I hope you'll stay safe and keep well. Thank you very much. Goodbye to everybody. Bye-bye.