And the talk that is about to begin now is by Christoph Schmon from EFF and Eliška Pírková from Access Now. And they will be talking about what is probably the biggest legislative initiative since the GDPR in Brussels. It's called the Digital Services Act. Over to you two. Eliška, you're muted, I realize. Hello. First of all, thank you very much for having us today. It's a great pleasure to be part of this event. I think for both of us, it's the first time we are actually joining this year, so it's fantastic to be part of this great community. And today we are going to talk about the legislative proposal, still a proposal, which caused a lot of noise all around Europe, but not only in Europe, also beyond. And that's the Digital Services Act legislative package. Today we already know that this legislative package actually consists of two acts, the Digital Services Act and the Digital Markets Act. And both of them will significantly change the regulation of online platforms, with a specific focus on very large online platforms, also often referred to as gatekeepers. So those who actually hold a lot of economic dominance, but also a lot of influence and control over users' rights and the public discourse. So I'm going to start by giving you a quick introduction into what the fuss is about, what the DSA is actually all about, why we are so interested in it, why we keep talking about it, and why this legislation will keep us preoccupied for years to come. The DSA was already announced two years ago as part of the European Union's digital strategy. It was named as one of the actions that the digital strategy would consist of. And it was the promise that the European Commission gave us already at that time: to create a systemic regulation of online platforms that actually places, hopefully, the users and their rights at the center of this upcoming legislation. 
So the promise behind the DSA is that this ad-based internet, and I'm speaking now about AdTech and online targeting, the internet as we knew it, will actually be replaced by something that puts users, users' controls, and users' rights first. So both of these legislative acts, if drafted and implemented right, should actually achieve that goal in the future. Now previously, before the DSA was drafted, there was the so-called e-Commerce Directive in place, which established the basic principles, especially in the field of content governance. I won't go into details on that because I don't want to make this too legalistic, but ultimately the DSA is not supposed to completely replace this legislation but to build on top of it, the legislation that created the ground and the main legal regime in Europe for regulating user-generated content for almost 20 years. So the DSA and the DMA, as legislation, will seek to harmonize the rules addressing problems such as online hate speech and disinformation, but the package also finally puts emphasis on increased, meaningful transparency in online advertising and in the way content is actually distributed across platforms, and it will also develop a specific enforcement mechanism that will be looking into this. Now before I go into the details of the DSA, why it matters, and whether we actually need such a big new legislative reform coming from the European Commission, I just want to unpack for you a little bit what this DSA legislative package actually consists of. 
So as I already mentioned, there are two regulations. A regulation is the strongest legal instrument the European Commission actually has in its hands, and it is supposed to achieve the highest level of harmonization across the member states. We can all imagine how difficult that will be to achieve, especially in the realm of freedom of expression and particular categories of user-generated content, which are so deeply context-dependent. Everything related to content moderation and content curation will mainly be in the realm of the Digital Services Act, and then the second regulation, the Digital Markets Act, will be specifically looking at the dominance of online platforms, economic dominance, the competitive environment for smaller players, and fairness in competition, and it will also establish a list of dos and don'ts for gatekeeper platforms, so the platforms that actually hold that relevant dominance. Based on these new proposals, we know that these platforms are mainly called very large online platforms; this is exactly how the legislation refers to gatekeepers now. Now, one more point that I want to make is that the DSA and DMA were launched on the 15th of December 2020, so it was literally a Christmas present given to the digital rights community by the European Commission, a long anticipated one. The work on the DSA started, however, much earlier. The Electronic Frontier Foundation, as much as Access Now, together with EDRi, were working very hard to come up with priorities and recommendations for what we would like to see enshrined in this legislation, because from the beginning, we understood the importance and the far-reaching consequences this legislation will have, not only inside the European Union, but also beyond. And that brings me to the final introductory point that I want to make before I hand over to Christoph, which is why, and whether, we actually need the DSA. 
We strongly believe that there is a big justification and good reason to actually establish a systemic regulation of online platforms in order to secure users' fundamental rights, to empower them, and also to protect our democratic discourse. And this is due to the fact that for many years, we have witnessed in this space quite bad regulatory practices in platform governance coming from member states, and Christoph will provide very concrete examples in that regard, but also coming from the European Commission itself, mainly the proposed online terrorist content regulation, for instance, or we all remember the story of copyright, which Christoph will discuss a little further. We saw how not only the member states but also the European Commission, or the European Union, in order to establish some order in the digital space, started pushing the state's obligation, and especially the state's positive obligation to protect users' human rights, into the hands of private online platforms, which started replacing state actors and public authorities within the online space. They started assessing the content, the legality of the content, deciding under very short time frames what should stay online and what should go offline, with no public scrutiny or transparency about the practices that they kept deploying, and that they keep deploying to this day. Of course, platforms under the threat of legal liability often had to rely, and still have to rely, on content recognition technologies for removing user-generated content. A typical example would be the French Avia law, which will be mentioned again today during the presentation, and the typical time frames are usually those that extend from one hour to 24 hours, which is extremely short, especially if any user would like to appeal such a decision or seek an effective remedy. 
At the same time, due to the lack of harmonization and the lack of a proper set of responsibilities that should lie in the hands of these online platforms, there was a lack of legal certainty, which would only reinforce the vicious circle of removing more and more online content in order to escape any possible liability. And to this day, due to the lack of transparency, we lack any evidence- or research-based policymaking, because platforms do not want to share with or inform public authorities about what they actually do with content and how they moderate it, and the transparency information that we receive in their transparency reports is usually quantity-oriented instead of quality-oriented. So they focus on how much content is being removed and how fast, which is not enough to create laws that can actually provide more sustainable solutions. And ultimately, as we all agree, the core issue doesn't lie that much with how the content is being moderated but with how content is being distributed across platforms, within the core of their business models, which actually stand on an attention economy and on the way sensational content is often amplified in order to prolong the attention span of users who visit platforms on a regular basis. I'm unpacking quite a few issues here, and this is supposed to be just a quick introductory remark. I will now hand over to Christoph, who will elaborate on all these points a little further. And then we will take a look and unpack a few quite important parts of the DSA that we feel should be prioritized in this debate at the moment. Chris, over to you. Thanks, Eliška, and hello everybody. I'm quite sure that many of you have noticed that there's a growing appetite on the side of the European Union to regulate the internet by using online platforms as their helping hands to monitor and censor what users can say, share, or do online. 
As you see on the slide, the first highlight of this growing appetite was copyright upload filters, which are supposed to spot copyright-protected content online. Thousands of people, old and young, went on the streets to demonstrate for a free internet and against technical measures that would turn the internet into some sort of censorship machine. We made the point then, and we continue making the point now, that upload filters are prone to error, that upload filters cannot understand context, and that they are unaffordable for all but the largest tech companies, which happen to all be based in the United States. But as you well know, policymakers would not listen, and Article 13 of the Copyright Directive was adopted by a small, small margin in the European Parliament, also because some members of the European Parliament had trouble pressing the right buttons. But I think it's important for us to understand that the fight is far from over. The member states must now implement the directive in a way that is not at odds with fundamental rights. And we argue that mandated automated removal technologies are always in conflict with fundamental rights. And this includes data protection rights: it is a data protection right not to be made subject to automated decision-making online if it involves your personal data and if such decision-making has a negative effect. But if we leave all those legal arguments aside, I think the most worrying experience with upload filters is that they had a spillover effect, a spillover effect to other initiatives. Sure, if it works for copyright-protected content, it may well work for other types of content, right? Except that it doesn't. Many now consider it a clever idea that platforms should proactively monitor and check all sorts of user content, be it communications, pictures, or videos, and that they should use filters to take it down or to prevent the re-upload of such content. 
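The context-blindness argument above can be made concrete with a small sketch. This is a hypothetical toy, not any platform's real system: it reduces fingerprint matching to a SHA-256 hash and shows that the decision ignores how the material is used, which is exactly the limitation the speakers criticize.

```python
import hashlib

# Hypothetical fingerprint blocklist: real systems use perceptual
# audio/video fingerprints licensed from rights holders, not file hashes.
BLOCKLIST = {hashlib.sha256(b"protected-song-audio").hexdigest()}

def filter_upload(payload: bytes, context: str) -> str:
    """Return a moderation decision for an uploaded file.

    `context` (e.g. "parody", "news report") is accepted but ignored:
    the match is purely on the media fingerprint, never on how the
    material is used, so lawful uses are blocked all the same.
    """
    fingerprint = hashlib.sha256(payload).hexdigest()
    if fingerprint in BLOCKLIST:
        return "blocked"  # blocked even when the use would be lawful
    return "published"
```

A lawful parody that reuses the protected audio is blocked exactly like an infringing copy, because the filter never sees the context at all.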
An example of such a spillover effect, as Eliška mentioned, is the draft regulation on terrorist-related content. It took a huge joint effort of civil society groups and some members of parliament to reject the worst of the text. There were recent negotiations going on, and at least we managed to get the requirement to use upload filters out, but there is still a 24-hour removal obligation that may nudge platforms to employ those filters nevertheless. And we see that national states in particular are very fond of the idea that platforms, rather than independent judges, should be the new law enforcers. There are now several states in the European Union that have adopted laws that would either oblige or nudge platforms to monitor user speech online. First up was the German NetzDG, which set out systematic duties for platforms. Then we had the French Loi Avia, which copy-pasted the NetzDG and made it worse. And last, we have the Austrian hate speech bill, which is a mix of both the German and the French proposals. They all go much beyond copyrighted content and focus on hate speech and all sorts of content that may be considered problematic or illegal in those respective countries, not necessarily in other countries. And this brings me to the next problem: how do we deal with content that is illegal in one country but legal in another? A recent Court of Justice ruling confirmed that a court of a small state like Austria can order platforms not only to take down defamatory content globally, but also to take down identical and equivalent material using automated technologies. For us, this is a terrible outcome. This will lead to a race to the bottom, where the countries with the least freedom-of-speech-friendly laws can superimpose their laws on every other state in the world. We really believe that all this nonsense has to stop. 
It's time to acknowledge that the internet is a global space, a place of exchange and creativity, a place where civil liberties are supposed to exist. So we are now fighting against all those national initiatives. We had a great first victory when we helped to bring down the French Avia bill, which had imposed a duty on platforms to check and remove potentially illegal content within 24 hours. Before the Conseil constitutionnel, the French constitutional court, we had argued that this would push platforms to constantly check what users post. And if platforms face high fines, of course, they would rather be motivated to block as much contestable content as possible. We made the point that this would be against the Charter of Fundamental Rights, including freedom of information and freedom of expression. And it was a great victory for us that the French constitutional court struck down the Avia bill and followed our argument, as you see on the slide. We also see that there's now a pushback, at least against the update of the German NetzDG, which would have provided new access rights for law enforcement authorities. This and other provisions are considered unconstitutional, and as far as I understand, and perhaps listeners can correct me, the German president has refused to sign the bill. And the Austrian bill is going down a similar pathway. It recently got a red light from Brussels; the Commission considers it in conflict with EU principles, also thanks to a joint effort by epicenter.works. And this shows that something positive is going on, a positive development, a pushback against automated filter technologies. But it's important to understand that those national initiatives are not just purely national attempts to regulate hate speech. It's an attempt by EU member states to make their own bills, as bad as they are, into some sort of prototype for EU-wide legislation, a prototype for the Digital Services Act. 
And as you know, national member states have a say in EU lawmaking. Their voices are represented in the Council of the EU, and the European Commission will be disincentivised to propose anything that would be voted down by the Council. I think that's a nice takeaway from today: lawmaking in national member states is not an isolated event. It's always political; it's always Netzpolitik. The good news is that, as far as the Commission's proposal for the Digital Services Act is concerned, it has not followed in the footsteps of those badly designed and misguided bills. It has respected our input, the input from Access Now, from EFF, from the EDRi network, from academics, and many others, that some key principles should not be removed, like the principle that liability for speech should rest with the speaker. The DSA also gives a red light to general monitoring of user content. And there are no short deadlines in there to remove content that might be legal. Instead, the Commission gives more slack to platforms to take down posts in good faith, which is what we call the EU-style Good Samaritan clause. Looking through the global lens of lawmaking, it's very fascinating to see that whilst the United States is flirting with the idea of moving away from the Good Samaritan principle in Section 230 of the Communications Decency Act, so the idea that platforms can voluntarily remove content without being held liable for it, the European Union flirts with the idea of introducing it, to give more options to platforms to act. That being said, the major difference between the US and the EU is that in Europe, platforms can be held liable the moment they become aware of the illegality of content. And that's an issue, because the Digital Services Act has now introduced a relatively sophisticated system of user notifications, complaint mechanisms, and dispute resolution options, which all lead, or could lead, to such awareness about illegality. 
And it's not quite clear to us how platforms will make use of voluntary measures to remove content. That being said, we think that the Commission's proposal could have been much worse. And the Parliament's reports on the Digital Services Act have demonstrated that the new parliament is a bit better than the old one. They have a lot of respect for fundamental rights. There are many members of parliament who are quite fond of the idea of protecting civil liberties online. But we know that this was only the start, and we know that we need another joint effort to ensure that users are not monitored and not at the mercy of algorithmic decision-making. And I think Eliška is now going to explain a bit more about all this. Thank you. Thank you very much, Chris. So we can now move further and unpack a few relevant provisions for everything that has already been mentioned, mainly in the realm of content moderation and content curation, which ultimately lies at the core of the Digital Services Act. And not to make it too abstract, I also have the printed version of the law here with me. If you look at it, it's quite an impressive piece of work that the European Commission did there. And I have to say that even though it's a great start, it still contains a lot of imperfections. I will try to summarize those for you now, especially in light of our own positions, and I mean civil society's in general, because I believe that on all those points we have a pretty solid agreement among each other: what we were hoping the Digital Services Act would do, what it actually does, and where we see that we will need to continue working very closely, especially with the members of the European Parliament, once the draft enters the European Parliament, which should happen relatively soon. 
So as I already mentioned quite briefly at the beginning, the DSA distinguishes between online platforms, which are defined within the scope of the law, and very large online platforms, which is exactly the scope that all large online gatekeepers fall into. The DSA then specifically distinguishes between the obligations or responsibilities of these actors, assigning some to all of them, including online platforms, and extending some specifically due to the dominance and power these online gatekeepers hold. This is mainly the case when we discuss the requirements for transparency: there is a set of transparency requirements that apply to online platforms, but then there is a specific additional set of transparency requirements for very large online platforms. What the DSA does, and this is the bit which is extremely relevant for content moderation practices, is attempt to establish a harmonized model for a notice-and-action procedure for allegedly illegal content. One of our red lines, before I go into the details on this, was that the DSA should touch only, or try to regulate only, allegedly illegal content and stay away from vaguely defined content categories such as potentially harmful but legal content. There are other ways legislation can tackle such content, mainly through meaningful transparency requirements, accountability, and tackling issues within content recommender systems and algorithmic curation, but we didn't want this specific category of content to be included within the scope of the DSA. This is due to the fact that vaguely defined terms that find their way into legislation always lead to human rights abuse in the future. 
I could give you examples from Europe, such as the concept of online harms in the UK, but as global organizations, both of us actually often see how vague terminology can quickly lead even to over-criminalization of speech or to suppressing dissent. Now, if we go back to the harmonized notice-and-action procedure and what that actually means in practice: as Christoph already mentioned, Europe has a so-called conditional model of intermediary liability, which is provided and established by the initial legal regime, the e-Commerce Directive. Article 14 of the e-Commerce Directive states that unless the platform holds actual knowledge, and according to the wording of the DSA now, it's actual knowledge or awareness of the presence of illegal content on their platform, they cannot be held liable for such content. Now, we had been asking for a harmonized notice-and-action procedure across the EU for a while, precisely because we wanted to see reinforced legal certainty. Lack of legal certainty often translated into over-removal of even legitimate content from the platforms, with no public scrutiny. The DSA does a good job, it's a good starting point, in that it attempts to establish such a harmonized procedure, but it's still lagging behind on many aspects that we consider important in order to strengthen the protection of users' fundamental rights. One of them is, for instance, that the harmonized notice-and-action procedure as envisioned by the DSA is not specifically tailored to different types and categories of user-generated content. And as we know, there are some, or many, categories of content that are deeply context-dependent, linked to the historical and sociopolitical context of the member state in question. 
And due to the reliance on automated measures that are usually context-blind, we are worried that if notice and action doesn't reflect this aspect in any further way, we will end up again with over-removal of content. Probably another huge issue currently lacking in the draft, even though the DSA is trying to create proper enforcement mechanisms, appeal mechanisms for users, and the different alternative dispute settlement options that the draft currently contains, is that there is no possibility for content providers, so a user that uploads the filter, sorry, that was a nice Freudian slip there, for a user that actually uploads the content, to directly use a counter-notification against the notice concerning content that belongs to that user. Nor are platforms obliged to actually send a notification to the user prior to any action being taken against that particular notified content. These are, for us, procedural safeguards for fairness that users should have, and currently they are not reflected in the draft. However, this is a good start. It's something that we were pushing for, but I think there are many more aspects that this notice-and-action procedure will need to contain in order to truly put users first. Now, the notice-and-action procedure is mainly focused on illegal content, but there are places in the draft where potentially harmful content which is still legal, so content that actually violates the terms of service of platforms, is mentioned. So for us, it's not exactly clear at the moment how that will work in practice. That's why we often use this phrase, which is also on the slide: good intentions but imperfect solutions. However, I want to emphasize again that this is just the beginning, and we will still have time and space to work very hard on this. 
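The procedural safeguards described above can be sketched as a small state machine. This is a hypothetical illustration of the flow civil society is asking for, not the DSA's actual text: all names (`Notice`, `process_notice`, the states) are made up for the example, and the two commented safeguards are exactly the ones the speaker says are missing from the draft.

```python
from dataclasses import dataclass
from enum import Enum, auto

class State(Enum):
    NOTIFIED = auto()           # a third party flagged the content
    UPLOADER_INFORMED = auto()  # uploader told before any action is taken
    COUNTER_NOTICE = auto()     # uploader contests; goes to dispute resolution
    REMOVED = auto()

@dataclass
class Notice:
    content_id: str
    reason: str
    state: State = State.NOTIFIED

def process_notice(notice: Notice, uploader_contests: bool) -> Notice:
    # Safeguard 1 (missing from the draft): inform the uploader *before*
    # any restriction is applied to the notified content.
    notice.state = State.UPLOADER_INFORMED
    # Safeguard 2 (missing from the draft): accept a counter-notification
    # and route the dispute to review instead of removing immediately.
    if uploader_contests:
        notice.state = State.COUNTER_NOTICE
    else:
        notice.state = State.REMOVED
    return notice
```

In the draft as described, the flow effectively jumps from NOTIFIED to REMOVED: the two intermediate steps are the fairness safeguards being asked for.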
Another novel aspect that the DSA brings is the already mentioned Good Samaritan clause, and I tend to call it the EU model or EU version of the Good Samaritan clause. The Good Samaritan clause originates in Section 230 of the Communications Decency Act, as Christoph already mentioned, but within the European realm it goes hand in hand with the conditional model of liability, which is preserved within the DSA draft. That was also one of our main asks, to preserve this conditional model of liability, and it's great that this time the European Commission really listened. Why do we consider the Good Samaritan clause important? In the past, such a safeguard wasn't enshrined in the law; it was just somehow vaguely promised by the Commission that if a platform proactively deployed measures to fight against the spread of illegal content, it wouldn't be held liable, without acknowledging that through the use of such so-called proactive measures the platform could in theory gain actual knowledge about the existence of such content on its platform, which would immediately trigger legal liability. This threat of liability often pushed platforms into a corner, so they would rather remove the content very quickly than face more serious consequences later on. That's why we see importance in the Good Samaritan clause, or the European model of the Good Samaritan clause, and we are glad that it's currently part of the draft. One of the biggest disappointments for us, when the DSA finally came out on the 15th of December, was to see that it's still the online platforms that will remain in charge when it comes to assessing the legality of content and deciding what content should actually be restricted and removed from a platform and what should stay available. 
We often emphasize that it's very important that the legality of content is assessed by independent judicial authorities, in line with rule of law principles. We also understand that such a solution creates a big burden on the judicial structures of member states. Many member states see it as a very expensive solution; they don't always want to create a special network of courts or e-courts or other forms of judicial review of illegal or allegedly illegal content. But we still wanted to see more public scrutiny, because for us this is truly just a reaffirmation of the already existing status quo, as at the moment, under many jurisdictions within the EU, and in the EU itself, it's still the online platform that calls the final shots. What is, on the other hand, a positive outcome that we were also pushing hard for are the requirements for meaningful transparency: to understand better what platforms actually do with the individual pieces of content shared on these platforms, and how transparency can then ultimately empower users. Now, I want to emphasize this because this is still an ongoing debate, and we will touch upon those issues in a minute, but we don't see transparency as a silver bullet for issues such as the amplification of potentially harmful content, nor do we think transparency in general will be enough to actually hold platforms accountable. Absolutely not. It will never be enough, but it's a precondition for us to actually seek such solutions in the future. The DSA contains specific requirements for transparency: as I already mentioned, a set of requirements that will be applicable to all online platforms, and then a specific set of requirements on top of it that will be applicable only to very large online platforms, so the online gatekeepers. We appreciate the effort, and we see that the list is very promising, but we still think it could be more ambitious. 
Both EFF and Access Now put forward a specific set of requirements for meaningful transparency in our positions, and so did EDRi and other civil society and digital rights activists in this space. And the final point that I'm going to make concerns the so-called Pandora's box of online targeting and recommender systems. Why do I refer to these two as a Pandora's box? When the European Parliament published its own-initiative reports on the DSA, there were two reports, one tabled by the JURI committee and another by IMCO. Especially the JURI report contained paragraph 17, which calls for better regulation of online targeting and online advertisements, specifically calling for a ban on online targeting, including a phase-out that would then lead to a ban. We supported this paragraph, which in the end was voted for and is part of the final report. Nevertheless, we also understand that the wording of this provision has to be more nuanced in the future. Before I go into the details there, I just want to say that this part never made it into the DSA, so there is no ban on online targeting or online advertisements of any sort, which to us, to some extent, was certainly disappointing. We would specifically call for a much stricter approach when it comes to behavioral targeting as well as cross-site tracking of online users, but unfortunately, as we eventually also heard from Commissioner Vestager, there was simply a lack of will, or maybe too much pressure from lobbyists in Brussels, and this provision never found its way into the final draft of the DSA. That's the current state of play. We will see what we manage to achieve once the DSA enters the European Parliament. And finally, the law also contains a specific provision on recommender systems, so the way content is distributed across platforms and how the data of users is being abused for such distribution and personalization of user-generated content. 
In both cases, whether it's online targeting or recommender systems within the DSA, the DSA goes only as far as transparency requirements and explainability, but it does very little for returning control and empowerment back to the user. So whether a user can opt in or opt out of these algorithmic curation models, and how they can actually optimize them if they decide to do so, all of that is at the moment very much left outside the scope of the DSA, and so is the issue of interoperability, which is definitely one of the key issues currently being discussed and one of the main possible hopes for the future of returning control and empowerment back to the user. And I keep repeating this as a mantra, but it's truly the main driving force behind all our initiatives and the work we do in this field: the user and their fundamental rights. And on that note, I would like to hand back over to Chris, who will explain the issue of interoperability and how to actually empower you as a user and strengthen the protection of fundamental rights further. Chris, it's yours now. Thank you. I think we all know, or feel, that the internet has seen better times. If you look back over the last 20 years, we have seen a transformation going on from an open internet to a more closed one, a monopolization. Big platforms have built entire ecosystems, and it seems that they alone decide who gets to use them. Those platforms have strong network effects that have pushed them into gatekeeper positions, which made it so easy for them to avoid any real competition. This is especially true when we think of social media platforms. This year we celebrate the 20th birthday of the e-Commerce Directive that Eliška mentioned, the internet bill that will now be replaced by the Digital Services Act. And we believe it's a very good time now to think and make a choice. 
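The opt-in/opt-out control over algorithmic curation described above can be sketched in a few lines. This is a hypothetical illustration of the user choice being asked for, not anything in the DSA text: `rank_feed` and its parameters are invented for the example, contrasting engagement-driven ranking with a plain chronological fallback that needs no profiling.

```python
from typing import Callable, Optional

def rank_feed(posts: list, personalised: bool,
              engagement_score: Optional[Callable] = None) -> list:
    """Order a feed either by predicted engagement or by recency."""
    if personalised and engagement_score is not None:
        # Attention-economy ranking: the most "engaging" items first.
        return sorted(posts, key=engagement_score, reverse=True)
    # Opt-out: plain reverse-chronological ordering, no profiling needed.
    return sorted(posts, key=lambda p: p["posted_at"], reverse=True)

posts = [
    {"id": 1, "posted_at": 100, "engagement": 0.9},
    {"id": 2, "posted_at": 300, "engagement": 0.2},
    {"id": 3, "posted_at": 200, "engagement": 0.5},
]
# Personalised: ids ordered by engagement -> [1, 3, 2]
# Opted out:    ids ordered by recency    -> [2, 3, 1]
```

The point of the sketch is that the chronological path exists and is trivial to offer; what the draft lacks is the obligation to let users pick it.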
Should we give even more power to the big platforms that created a lot of the mess in the first place, or should we give the power to the users, give the power back to the people? For us the answer is clear. Big tech companies already employ a wide array of technical measures. They monitor and remove content and disrespect user privacy, and the idea of turning them into an internet police with a special license to censor the speech of users will only solidify their dominance. So we wouldn't like that. What we would like is to put users in charge of their online experience. Users should choose for themselves what kind of content they see and which services they use to talk to their friends and families. And we believe it's time to break up the silos those big platforms have become and to end their dominance over data. One element to achieve this would be to tackle the targeted ads industry, as Eliška mentioned: perhaps to give users an actual right not to be subject to targeted ads, or to give users more choice to decide which content they would like to see or not to see. In the Digital Services Act the Commission went for transparency when it comes to ads and better options for users to decide on recommended content, which is a start. We can work with that. Another important element to achieve user autonomy over data is interoperability. If the European Union really wants to break the power of those data-driven platforms that monopolize the internet, it needs regulation that enables users to be in control of their data. We believe that users should be able to access their data, to download their data, to move and manipulate their data as they see fit. And part of that control is to port data from one place to another. But data portability, which we have under the GDPR, is not good enough, and we see from the GDPR that it's not working in practice.
Users should be able to communicate with friends across platform boundaries, or to follow their favorite content across different platforms without having to create several accounts. To put it in other terms: if you are upset with the absence of privacy on Facebook, or with how content is moderated on Facebook, you should be able to just take your data with you, using portability options, and move to an alternative platform that is a better fit, and this without losing touch with your friends who stay behind, who have not left the incumbent platform. So what we did for the Digital Services Act is to argue for mandatory interoperability options that would force Facebook to maintain APIs that let users on other platforms exchange messages and content with Facebook users. However, if we look into the DSA, we see that the Commission completely missed the mark on interoperability, which is supposed to be dealt with by a related legal act, and now it gets complicated: it's the Digital Markets Act, the DMA, not a beautiful acronym either. The Digital Markets Act wants to tackle certain harmful business practices by those gatekeeper platforms, the very large tech companies that control what are called core services. A core service is a search engine, a social networking service, a messaging service, an operating system, or an online intermediation service. Think of how Amazon controls access to customers for merchants that sell on its platform, or how the Android and iPhone app stores act as choke points in delivering mobile software. There are many things we like in the new proposal on the Digital Markets Act. For example, there's a ban on mixing data in there: the DMA wants to ban gatekeepers from mixing data from data brokers with the data they collect on their customers. Another rule is a ban on cross-tying, the practice of requiring end users to sign up for ancillary services. So you should be able to use Android without having to get a Gmail account, for example.
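As an editor's illustration of the mandatory messaging APIs argued for above, here is a minimal sketch of how a user on a small platform could message a friend on a gatekeeper platform. This is Python, purely illustrative; every class and method name here is invented for the example and does not come from any actual proposal or platform API:

```python
from dataclasses import dataclass


@dataclass
class Message:
    sender: str      # e.g. "alice@smallplatform.example"
    recipient: str   # e.g. "bob@bigplatform.example"
    body: str


class InteropEndpoint:
    """Hypothetical API surface a gatekeeper could be required to
    expose so that smaller platforms can deliver messages to its
    users, without those users needing an account elsewhere."""

    def __init__(self):
        # recipient address -> list of delivered messages
        self.inboxes: dict = {}

    def deliver(self, msg: Message) -> bool:
        # The gatekeeper remains free to run its own spam and
        # abuse checks before accepting a federated message.
        if not msg.body.strip():
            return False
        self.inboxes.setdefault(msg.recipient, []).append(msg)
        return True


# A user who left the big platform still reaches a friend there:
endpoint = InteropEndpoint()
ok = endpoint.deliver(Message(
    sender="alice@smallplatform.example",
    recipient="bob@bigplatform.example",
    body="Hi Bob, I left, but we can still talk!",
))
```

Federated email and XMPP already work on this pattern, which is part of why advocates argue such an obligation is technically feasible.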
We believe that this is all good, but the DMA, like the DSA, is very weak on interoperability. What it does instead is focus on real-time data portability. So instead of having interoperable services, users will only be able to send their data from one service to another, say from Facebook to Diaspora, meaning that you would end up having two accounts instead of one. Or, to quote Cory Doctorow, who spoke yesterday already: users would still be subject to the "sprawling garbage novella of abusive legalese" Facebook laughably calls its terms of service. We believe that this is not good enough. On the last slide you see a quote from Margrethe Vestager, who made a very good statement last month that we need trustworthy services, fair use of data, free speech, and an interoperable internet. We fully agree with that. And in the next months and years we will work to make this actually happen. However, you can imagine it will not be easy. We already see EU member states following a trend that platforms should systematically check for undesirable and illegal content and share those data with enforcement authorities, which is even worse. We see an international trend to move away from the immunity of platforms for user content towards a more active stance by those platforms. And we see that recent terror attacks have fueled the idea that monitoring is a good idea and that end-to-end encryption is a problem. So whatever the result will be, you can bet that the European Union will want to make the Digital Services Act and the Digital Markets Act another export model. So this time we want to get the numbers right in the Parliament and the Council, and we want to help members of parliament to press the right buttons. And for all this, we will need your help, even if it means learning yet another acronym, or several acronyms, after the GDPR. And that's it from our side. We are looking forward to the discussion. Thank you. Okay, thank you, Eliška and Christoph.
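The gap between one-shot portability and real interoperability can be made concrete with a small sketch (Python; the function names are hypothetical, chosen only for this illustration): GDPR-style portability exports a snapshot of your data, and after importing it elsewhere you simply hold two disconnected accounts.

```python
import json


def export_account(profile: dict, posts: list) -> str:
    """Portability in the GDPR sense: dump a snapshot of the
    user's data in a machine-readable format. Nothing links the
    old and new accounts afterwards."""
    return json.dumps({"profile": profile, "posts": posts})


def import_account(dump: str) -> dict:
    """The receiving service re-creates the data from the dump,
    but the user now has a second, disconnected account; friends
    on the old platform cannot reach the new one."""
    return json.loads(dump)


dump = export_account({"name": "alice"}, [{"text": "hello"}])
new_account = import_account(dump)
```

Interoperability, by contrast, would keep the two services talking to each other continuously, which is exactly what this export/import round trip does not provide.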
There are questions from the internet, and the first one is basically: we just had, as you mentioned in your slides, Christoph, the Copyright in the Digital Single Market Directive, with both accountability and liability provisions. You also briefly mentioned TERREG, I think, and even the e-evidence proposal. How do all these proposals relate to each other, especially for a lay person who is not into all the Brussels jargon? I think, Eliška, you raised your hand. Do you want to shoot? More or less unintentionally, but yeah, I kind of did. I can start and then let you, Christoph, step in. Yeah, that's a very, very good question. And this is specifically due to the fact that when you mention especially the online terrorist content regulation, but also the recently proposed interim regulation on child sexual abuse, all of these we call sectoral legislation. So they depart a little bit from the horizontal approach, meaning an approach that tackles all categories of illegal content in one way, instead of going after specific categories, such as online terrorist content, separately. So it's a little bit paradoxical, what is currently happening at the EU level, because on one hand we were promised this systemic regulation that will once and for all establish a harmonized approach to combating illegal content online, and that is specifically the DSA, the Digital Services Act. And at the same time, we still see the European Commission allowing these legislative proposals, harmful to fundamental rights, to happen in these specific sectors, such as the proposed online terrorist content regulation or other legislative acts seeking to somehow regulate specific categories of user-generated content. This is quite puzzling for us as digital rights activists too.
So I would maybe separate the DSA from this for a moment and say that what all of these sectoral legislations have in common is, first of all, that they continue the negative legislative trends that we already described and that we constantly observe in practice, such as shifting more and more responsibility onto online platforms. And at the same time, what is also very interesting, what they have in common is the legal basis that they stand on. And that's a legal basis connected to cooperation within the digital single market, even though they seek to tackle a very particular type of content category, which is manifestly illegal. So logically, if they were to have the appropriate legal ground, it should be something closer to police and judicial cooperation, which we don't see happening in practice, specifically because there is this idea that platforms are best suited to decide how illegal content will be tackled in the online space: they can be the fastest, they can be the most effective, so they should have those main decision-making powers and be forced into taking on those responsibilities, which, however, according to the rule of law principle, should and have to remain in the hands of the state and public authorities, preferably judicial authorities. So I would say they are all bad news for the fundamental rights protection of online users. Civil rights organizations, all of us who are on the call today, fought very hard against the online terrorist content regulation. There was a lot of damage control done, especially with the first report that was tabled by the European Parliament, and also now during the last trilogue, since the negotiations seem to be concluded. And the outcome is not great; it's far from ideal. I'm worried that with other sectoral legislative attempts coming from the European Commission, we might see the same outcome.
It will be very interesting to see how that will then play together with the Digital Services Act, which is trying to do the exact opposite: to actually fix these negative legislative efforts that we see at the EU level with the sectoral legislation, but also with the member states at the national level. I could also mention the European Commission's reaction to some national legislative proposals, but Christoph, I will leave that to you, please step in. I think you explained it perfectly. The only thing I can add here is that if you look at this move from sectoral legislation to horizontal legislation, and now back to sectoral legislation: it's a problem. It's a mess. First, those files are not well coordinated, which creates trouble for legal certainty. It makes it very troublesome for platforms to follow, and it's problematic for us in this space. We are some sort of lobbyists as well, just for the public interest, but if you have to deal with copyright, with terrorist content, with end-to-end encryption, the DSA, the DMA, and 15 other files that will pop up, content category by content category, it's very hard to manage, to have the capacity ready to be early in the debate. And it's so important to be early in the debate to prevent the worst from happening. I think that's a huge challenge for us, something to reflect on in the next days: how can we join forces better, in a more systematic way, in order to really follow up on all those initiatives? That's, for me, a very problematic development. So, in summary, it's a mess. So, it is related, but we can't explain how, because it's such a mess; fair enough. I have another question for you, Eliška. Someone was asking how the proposed Good Samaritan clause works compared to how it currently works in Germany, but I think it's a bit unreasonable to expect everyone to know how it works in Germany.
I would rephrase it as: how does this proposed Good Samaritan clause work compared to how it is now under the e-commerce directive? Thank you very much. Yeah, so a great question again. First, if you put it into the context of EU law, and apologies that I cannot really answer how it compares to the German context; I really don't dare to, I'm not a German lawyer, so I wouldn't like to tread those waters. First of all, there is no Good Samaritan clause per se within the scope of the e-commerce directive. It did not really exist, and I'm using the past tense now because the DSA is trying to change that. So that level of legal certainty was not really there for the platforms. There was the conditional model of liability, which is still preserved within the regulation. But think of the Good Samaritan clause as we know it from Section 230, or let's use that Good Samaritan clause as an example, because the e-commerce directive was actually drafted as a response to the Communications Decency Act, the legislation that put things into motion. So that's the first ultimate point. I explained at the beginning of my presentation what was then happening in the space of combating illegal content at the EU level, and especially I would refer to the communication that the European Commission published, I think, back in 2018, where it actually encouraged and called on online platforms to proactively engage with illegal content and use these proactive measures to seek adequate responses to illegal content. Now, mix that with this conditional model of liability, which is, of course, conditioned on the platform obtaining actual knowledge, and that created the perfect storm that I already explained.
So the platforms knew that they were being pushed by the legislature to seek these active responses to illegal content, often deploying automated measures, but they didn't have any legal certainty or security on their side that, if they did so, they wouldn't end up being held legally liable and facing legal consequences as a result of obtaining actual knowledge through those proactive measures, which were precisely the tool through which they could obtain that knowledge. Now, what the DSA does is simply state, and I think it's Article 6 of the Digital Services Act, if I'm not mistaken, that platforms can use these proactive measures, or continue using tools that seek to provide responses to this type of content, without the fear of being held liable. So it's an article which has, I think, approximately two paragraphs, but it's finally in the legislation, and that means it will help reinforce the level of legal certainty. I would also emphasize that "Good Samaritan" is actually a very unfortunate term in Europe, because it's very much connected to the American legal tradition. But when it's combined with the conditional model of liability and with the prohibition of general monitoring, which is still upheld, and these are the main principles of European intermediary liability law and the regime applicable within the EU, such a safeguard can actually be beneficial. And it won't lead, hopefully, to a blanket immunity for online platforms, or to the idea that platforms will be able to do whatever they want with illegal content without any public scrutiny, because there are other measures, safeguards, and principles in place as part of the conditional model of liability that we have here in Europe.
So I'm sorry, maybe that was a too complicated, legalistic explanation, but this is how these provisions should work in practice. We, of course, have to wait for the implementation of the law and see how that will turn out. But the main purpose is that this legal certainty, which was lacking until now, can finally come into existence, which should help us prevent the over-removal of legitimate speech from online platforms. Okay, thank you. I have two other questions from the internet about interoperability, and I suppose I should look at Christoph for them. The last one I'm going to ask first: would such interoperability make it much more difficult to combat harassment and stalking on the internet? How do you police that kind of misbehavior if it happens across different platforms which are forced to interoperate and also be conduits for such bad behavior? And I'll come to the earlier question after you've answered this one, Christoph. It's a good question. First, to understand our vision of interoperability is to understand that we would like to have it between the platforms that are in power, the large platforms, and smaller platforms; the smaller platforms are the ones that should actually make use of interoperability, so it should not be among the big platforms. Small platforms should be able to connect to the big platforms. Second, we believe it will help and not make things worse, because we now have the problem of hate speech, we now have the problem of lack of privacy, we now have the problem of the attention industry that works with certain pictures put in certain frames to trigger the attention of users, because users don't have a choice of content moderation practice. Users don't have a choice in which kind of content will be shown to them, and users don't have options to control their privacy. The idea of more competitors is exactly that I can move to a space where I'm not harassed or exposed to content that hurts me. At that moment, I get control.
I can choose a provider that gives me those options. And we would like to go even a step further: interoperability is just a start. We believe that if users want to, they should be able to delegate a third-party company, or a piece of third-party software, to interact with a platform on their behalf. So users would have the option to view their newsfeed in a different order, or calibrate their own filters on misinformation. In this sense, interoperability can actually be a great tool to tackle hate speech and those sorts of negative developments. Of course, there's a risk to it. I think the risk comes rather from the data industry side again: we need to take care not to place another data-selling industry on top of the one we already face. But for this, we have options as well to prevent that from happening. But to answer the question: we believe interoperability is actually a tool to escape from the negative developments you mentioned. A critical counter-question from me then: aren't you actually advocating for roll-your-own recommendation engines? Can't you achieve that without interoperability? Sure, it's a fair question, but we don't think an average user can accomplish that quite easily. When we look at the internet through the lens of market competition, we see that it is the dominance of platforms over data that has created those spaces, those walled gardens, where users feel trapped and cannot escape. And there are so many alternative options that cannot get off the ground, because users feel trapped behind and don't have options, even if you have a better moderation system. Of course, you can be creative and use plugins or whatever you see fit, but you need to stay within the platform's barriers. What we would like to enable users to do is to actually leave the walled garden, go to another place, but still stay in touch with friends who have made the choice to remain there.
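The delegation idea sketched above, a third-party client re-ordering the feed and applying the user's own misinformation filter, could look roughly like this (Python; the data shape and function names are invented for the example and do not correspond to any real platform API):

```python
from datetime import datetime

# Hypothetical raw feed, as a platform API might expose it to a
# delegated third-party client.
feed = [
    {"text": "Miracle cure discovered!",
     "ts": datetime(2020, 12, 28, 9, 0), "flagged": True},
    {"text": "Photos from the hike",
     "ts": datetime(2020, 12, 27, 18, 0), "flagged": False},
    {"text": "Congress talk schedule",
     "ts": datetime(2020, 12, 28, 10, 0), "flagged": False},
]


def user_controlled_feed(posts, hide_flagged=True):
    """Act on the user's behalf: drop posts a misinformation
    filter has flagged (if the user opted in), then order the
    rest strictly chronologically, newest first, instead of by
    the platform's engagement-optimized ranking."""
    if hide_flagged:
        posts = [p for p in posts if not p["flagged"]]
    return sorted(posts, key=lambda p: p["ts"], reverse=True)


my_feed = user_controlled_feed(feed)
```

The point of the sketch is that both the ordering rule and the filter live in software the user chose, not in the platform; that is what delegation would buy.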
And I think that's perhaps the difference to what you had in mind. I have a follow-up question, well, another question from the internet, and that is: historically speaking, as soon as the big players get involved in setting standards, they tend to also shape policy by being involved in that. How would that be different in the case of interoperability? The person who asked specifically mentioned that Mastodon probably has fewer such issues, because nobody else was involved in setting that standard. It's an excellent question, because we struggle with the question of standards ourselves. In our policy paper, which contains our recommendations for the European Union to enact certain provisions in the new Digital Services Act, we abstained from asking to establish new standards, like API standards. We believe it's a bad idea to regulate technology like that. What we want is for big platforms to just offer interoperability however they see fit. We don't want a standard that can be either monopolized or lobbied by the big platforms, because then we end up with the standards we already see, which we don't like. But it's a good question, and what we did with our policy principles on interoperability is give food for thought on how we believe the end result should look. But many questions remain, particularly how to get there. I'm sorry, I'm sticking to the topic of interoperability, because most questions are actually about that. One of the other questions is: how do we prevent this from getting messed up like it happened with PSD2? And for the audience that doesn't know about PSD2: PSD2 is a directive that forced banks to open up their APIs to other financial service providers, which is also interoperability, for banking platforms. It comes with all sorts of privacy questions that weren't completely thought through when that legislation came about.
Sorry for having this long-winded interaction with Christoph, but I think it was needed for people who don't know what PSD2 means. It's a good question. Interestingly, we never use PSD2 or the telecommunications legislation, both of which have interoperability options, as negative examples. We always use them as examples so that no one has an excuse to say it's impossible to put interoperability in the law. What is true is that there's a lot of mess around it, and how to avoid the mess is a question of politics again: the question of whether policymakers are actually listening to us or listening to industry lobbyists. So the one who raised the question is absolutely right. There's a huge risk for every topic we talk about, whether it's interoperability, whether it's user control over content, or liability. Everything that we believe should be in a law could be hijacked, could be redesigned in a way that leads to more problems rather than fewer. So indeed, for every policy question we raise, we need to ask ourselves: is it worth the fight? Do we risk opening Pandora's box? Do we make it worse? On that front, we are happy to put pressure on them and show that we are the right people to talk to, and that's perhaps the challenge: how to make tech explicable to policymakers. So those who ask these questions, I think, should help us go to the Parliament and tell MEPs how it's going to work. On that note, a question to both of you. You mentioned citizen involvement, and I prefer the term citizens over users. Would it be helpful to push for amendments in the Parliament, at least for the targeting points you both mentioned before? And if so, how? So I guess I will start, just so Christoph can rest a little. So the question was whether it would be useful to push for those amendments, for amendments that cover targeting of citizens.
So the short answer would be yes, provided that the wording of such an amendment is precise and nuanced. We are still working out our positioning on online targeting, and I think we all know and can name those practices that we don't want to see deployed by platforms, and where we can actually imagine a proper ban on such practices. We have recently published a blog post where we unfold our current way of thinking, how we are brainstorming about this whole issue. As I said, it is especially targeting that uses behavioural data of users, of citizens, or maybe let's say individuals, because states are also obliged to protect the rights of individuals who are not their citizens. So that's definitely one area where we can see, and will be supporting, a ban, and a possible phase-out that will lead to a ban. The same goes for the cross-site tracking of users, given the way users' data are being abused, again as an integral part of the business models of these platforms, and so on and so forth. So that's one of the directions we will definitely be taking, and again we are inviting all of you to help us brainstorm, to assess the different options and directions that we should take into consideration and not forget about. But I personally think this will be one of the main battles when it comes to the DSA, where we will definitely need to be on the same page, harmonise, and join forces, because the DSA gives us a good ground at the moment, but it doesn't go far enough. So yes, definitely, the answer is yes, provided that we have a very nuanced position, so we know what we are asking for and take into consideration also those other aspects that could eventually play out badly in practice. Good intentions are not enough when it comes to the DSA. Thank you.
We're running slightly over time, but I've been told beforehand that it's okay to do so for a few minutes, and there are three questions still open. One of them I will answer myself: basically, have the member states responded? And the answer to that is no, the member states have not taken any position. The two others I think are quite interesting and important from a techie perspective. One is: is there anything you see that might affect current decentralized platforms, like the Fediverse and Mastodon? And the other: will any review of the data protection, sorry, the database protection directive affect meta search engines and interact with this again? Perhaps I jump in and you can take over. First, member states have actually given opinions on the DSA. There have been one or two official submissions, plus joint letters, plus discussions in the Council where the DSA was presented. I have some nice protocols which show the different attitudes of member states towards it. So for us it also means we need to work straight away with the Council to ensure that the package will be good. What was the other question? Yes, I think the answer to that question depends on what lawyers call the material scope of application, whether it would apply to those platform models at all. Perhaps Eliška can help me out here. We have always criticized the e-commerce directive because it was not quite clear how it would relate to, first, non-profit platforms, and many of those alternative platforms are like that, because there was this issue of providing a service against remuneration, and it was not quite clear what that means. Would it apply to Wikipedia if it gets donations? Would it apply to a blogger who has pop-up ads or something like that? So that's, I think, one huge question. And the second question is in how far those new due diligence obligations would force alternative platform governance models to redesign their interfaces. Those are open questions for us.
We have not analyzed that in detail, but we are worried that it would impact not only the large platforms but many others as well. What do you think, Eliška? Yeah, I can only agree. Especially regarding the non-profit question: this is also, or was and has always been, one of our main asks for non-profit organizations, and it's not quite clear how that will play out in practice with the DSA as it stands, because at the moment it speaks about online platforms, and then it speaks about very large online platforms, but what that will mean and how it will impact non-profit organizations, whether these are bloggers or organizations like us, civil rights organizations, remains to be seen. I also know that the European Court of Human Rights, in its jurisprudence, tried to establish some principles for non-profits, starting with Delfi v. Estonia and then with the MTE decision and the Swedish case that followed afterwards, but I'm not sure how well that was actually elaborated later on. This is something we will definitely be looking at and working on further. And regarding the impact on the smaller players, and also the interface idea, this is still something we are wondering about and thinking about how it will turn out in practice, and we are hoping to develop our positioning further on these issues as well, because we actually started working, all of us, on the DSA and on our recommendations already, I think, a year ago, maybe a year and a half ago, when we were working just with the leaks that Politico or other media outlets in Brussels were sharing with us, and we were working with the bits and pieces and trying to put our thinking together.
Now we have the draft, and I think we need to do another round of very detailed thinking about what will ultimately be our position, and which recommendations and amendments in the European Parliament we will be supporting and pushing for. So it's a period of hard work for all of us, not to mention that, as I always say, we stand against the huge lobbying power that is going to be, and is constantly being, exercised by these private actors. But I also have to say that we had very good cooperation with the European Commission throughout the process, and I think I can say on behalf of all of us that we feel the European Commission really listened this time. So more question marks than answers here from me. I think that's okay with me; not knowing something is fine. I think we are definitely out of time. Thank you both for being here, and enjoy the rest of the online congress. Thank you, that wraps up this session for us. Thank you all, thank you very much, bye.