My name is Sebastian Bay and I'm a researcher with the Swedish Defence Research Agency specializing in election security and digital harms. This year, the DEF CON Voting Village is focused on the spread of election-related disinformation and misinformation. I'm going to give a short talk introducing some of my research and highlighting the need for increased cybersecurity at social media companies as part of our combined efforts to strengthen election security.

There are of course several aspects of election security, and traditionally we have focused on the conduct of elections when we talk about cybersecurity. That means safeguarding the IT systems needed to administer elections, from registering voters to counting and tallying the results. This is clear, and this is also something that is needed; we've seen continuous efforts to undermine election systems using IT attacks of different sorts. Recently, however, we've also seen that digital disinformation is an equal or perhaps even larger threat to the conduct of elections. Digital disinformation is used to try to undermine trust in the election process. It is used to try to undermine the will and the ability of people to participate in elections. And it is of course used in attempts to thwart or undermine the political process, often through illegitimate forms of influence. But I will argue today that a huge part of this problem is also a cybersecurity problem, and primarily a cybersecurity problem for social media companies.

Now, if we simplify enough, disinformation can be divided into two separate issues: the issue of content and the issue of inauthentic behavior. Content can be disinformation, it can be legal, and it can be many other things. I've seen during the last few years that regulating and moderating content is difficult, even if we're now seeing better policies in regard to election-related content on social media platforms. But if content is tricky to develop clear policies for, inauthentic behavior and other forms of social media manipulation are far less tricky. They are simply not allowed on any of the major platforms I've studied. You are not allowed to run thousands of fake accounts on any of the large platforms. You're not allowed to manipulate their algorithms. You're not allowed to scrape their content. You're not allowed to hack their systems. Yet we see that this is a large problem. It is a large industry, and it is something that is being done all the time.

This is what a bot farm can look like in real life. These pictures, provided by the Ukrainian security services, show what they argue is a Russian-sponsored bot farm placed inside Ukraine to avoid detection. What we're seeing are SIM cards, various forms of antennas used to run different kinds of proxies, and equipment set up to avoid detection.

Now, the European Union has long underscored the need for social media companies to intensify and demonstrate the effectiveness of their efforts to close fake accounts. This has been part of the Code of Practice on Disinformation, also within an election-related context. The companies have been asked by the European Commission to report regularly on the number of fake accounts closed during the last quarter and during the last year. And we've seen in this reporting that fake accounts continue to be a huge problem, measured in the billions.
It's a huge problem because fake engagement triggers algorithms to spread content to authentic users, as the small sketch below illustrates; fake engagement tricks authentic users into believing that content is more popular than it really is; fake engagement misleads users; fake engagement manipulates democratic conversation; and fake engagement creates a loss of genuine advertising spend.

We have written three reports on the topic of inauthentic behavior on social media, ranging from mapping the black market to developing metrics and methods for assessing the ability of social media companies to counter the abuse of their platforms, in effect trying to assess the level of cybersecurity on these platforms when it comes to preventing inauthentic behavior. If we start by looking into the black market for social media manipulation, which I think is a good basis for understanding the problem, I'm going to give you three main takeaways.

The first is that the scale of the black market infrastructure is extensive. An entire industry has developed not only around providing the manipulation services, but around providing the infrastructure those services need to work. That ranges from fake SIM cards, digital fingerprinting, scripts, and CAPTCHA-solving services, which are used to generate, provide, and maintain fake accounts, to management platforms used by the inauthentic engagement services, which are also sold as software to independent contractors and private companies that wish to run their own campaigns.

Second, seeing this entire industry, we initially thought that this was a black market, and that is also why we labeled the report a black market for social media manipulation. But what we saw is that it's not actually a black market. It is an illegitimate market perhaps, but it's extremely easy to find, and the openness of this industry is still quite striking today. The larger social media manipulation service providers fearlessly promote their services: on their own websites of course, but also in app stores and on the social media platforms themselves. They usually run tutorial accounts on YouTube, Facebook, Instagram, and so on, and you can find them through all major search engines. They even buy ads on search engines to promote their services.

Third, much of this infrastructure seems to be Russian. That doesn't mean it's state sponsored, but many of the primary service providers that resell their services to companies offering manipulation services in the West are Russian, and they seem to be Russian simply because this is a place where the industry has existed for a long time and there is a lot of know-how in this area. But this is also a global industry, and we're seeing more and more fragmentation: if you live in Nigeria, for example, and you want to buy manipulation, you might go to a Nigerian provider, who might use Russian infrastructure but might also use infrastructure from Southeast Asia. We're seeing these companies in all parts of the world, and even though a lot of the services are generated in Southeast Asia or Russia, we're seeing resellers, customer support, and development in the West as well.
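Here is that sketch: a deliberately simplified toy model of engagement-based ranking, written in Python. Everything in it is hypothetical, from the scoring formula and its weights to the numbers; no real platform's ranking algorithm is described in this talk. It only illustrates why inflated engagement counts can translate into inflated reach for authentic users.

```python
# Toy model of engagement-based ranking. Nothing here reflects a real
# platform's algorithm; it only illustrates why inflated counts can
# translate into inflated reach for authentic users.
posts = {
    "organic_post": {"likes": 120, "comments": 8},
    "boosted_post": {"likes": 100, "comments": 5},
}

# An attacker buys 2,000 fake likes and 100 fake comments for one post.
posts["boosted_post"]["likes"] += 2000
posts["boosted_post"]["comments"] += 100

def score(post):
    # Hypothetical weighting: a comment counts five times as much as a like.
    return post["likes"] + 5 * post["comments"]

# Rank posts by score, highest first; the boosted post now comes out on top.
ranking = sorted(posts, key=lambda name: score(posts[name]), reverse=True)
print(ranking)  # ['boosted_post', 'organic_post']
```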
We're trying to assess to what extent social media companies are able to counter the manipulation of their services, how good they are, and how big the differences between the individual companies are. We've done this in two consecutive reports, in 2019 and 2020, and we're now in the process of setting up an experiment for 2021. This, of course, is done to assess the ability of social media companies to combat inauthentic behavior on their platforms.

In 2019 we ran an experiment where we bought engagement over two months on 105 different posts on Facebook, Instagram, Twitter, and YouTube. We bought 54,000 fake engagements from these social media manipulation service providers: 3,000 comments, more than 20,000 likes, and more than 20,000 views, for 300 euros or the equivalent in dollars. We ran the experiment again for six weeks in October and November 2020, in the context of the US election, this time buying engagement on 39 different posts and also including TikTok. We increased the number of engagements bought to 335,000, still spending roughly 300 euros. After buying these fake engagements, we measured how much of it got through, how much of it got blocked, and what happened when we reported it to the social media companies.

Our main takeaway was that the social media companies overall were unable to block the inauthentic engagement we bought. Four weeks after purchase, more than 98% of the bought engagement was still online. Four days after reporting a sample of the inauthentic accounts to the platforms, more than 98% of the accounts reported were still active. Our conclusion last year was that Facebook, Instagram, Twitter, YouTube, and TikTok were still failing to sufficiently combat inauthentic behavior on their platforms, enabling the widespread dissemination of disinformation.

We also looked at several parameters that are very useful for understanding the scope and scale of this problem. For example, we've tracked the cost of manipulating social media platforms from 2018 up until 2020 by creating baskets that contain likes, comments, retweets, and so on. We've seen that a standardized basket got cheaper from 2018 to 2019, and then from 2019 to 2020 prices in general leveled off. But we can also see that there's a difference between the various social media platforms in how the prices have changed. And why is the price interesting? We believe the price is an indicator of how difficult it is to manipulate a platform: the stronger the platform's security, the more difficult it will be to manipulate, and the more it will cost the manipulation service providers. We're not seeing a fundamental change in price, but we are seeing differences between the platforms in how difficult they are to manipulate.

We've also looked at the speed of delivery, that is, how quickly you can manipulate social media platforms. Taking 2020 as an example, we could see that TikTok was not sufficiently effective at countering inauthentic behavior: almost all of the engagement we bought was delivered instantaneously on TikTok. On some of the other platforms, by contrast, about half of the engagement was delivered within 12 hours, and in many cases it took several days before 100% of the bought engagement had been delivered.
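As a rough illustration of how headline figures like these, hours to 50% delivery and the share of engagement still online after four weeks, can be derived from monitoring data, here is a minimal sketch in Python. The data layout, field names, and helper functions are all hypothetical; the actual instrumentation used in the experiments is not described in this talk.

```python
from datetime import datetime

# Hypothetical monitoring data: for each purchased engagement item we record
# when it was ordered, when it first appeared (None if it was blocked), and
# whether it was still visible at the four-week check. All names and values
# are invented for illustration.
purchases = [
    # (platform, ordered_at, delivered_at, still_online_after_4_weeks)
    ("tiktok", datetime(2020, 10, 5, 12, 0), datetime(2020, 10, 5, 12, 1), True),
    ("facebook", datetime(2020, 10, 5, 12, 0), datetime(2020, 10, 7, 9, 30), True),
    ("instagram", datetime(2020, 10, 5, 12, 0), None, False),
]

def delivery_times(rows, platform):
    """Hours from order to delivery, for items that were delivered at all."""
    return sorted(
        (delivered - ordered).total_seconds() / 3600
        for p, ordered, delivered, _ in rows
        if p == platform and delivered is not None
    )

def time_to_fraction(times, fraction):
    """Hours until the given fraction of delivered engagement had appeared."""
    if not times:
        return None
    return times[max(0, int(len(times) * fraction) - 1)]

def persistence_rate(rows, platform):
    """Share of purchased engagement still online at the four-week check."""
    alive = [still for p, _, _, still in rows if p == platform]
    return sum(alive) / len(alive) if alive else None

for platform in ("tiktok", "facebook", "instagram"):
    times = delivery_times(purchases, platform)
    print(platform,
          "hours to 50% delivery:", time_to_fraction(times, 0.5),
          "persistence:", persistence_rate(purchases, platform))
```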
And of course the speed of delivery is an important indicator of how effective the platforms are at countering abuse of their services. Another indicator is the ability of the platforms to remove fake accounts reported to them. Last year when we tested this, several of the platforms didn't remove any of the reported accounts. The most effective platform, Facebook, removed only nine of the 100 fake accounts we reported to them, or 9%.

Overall, our takeaway from these experiments has been that there is a significant difference between the different social media platforms. The amount of money, resources, and human skill directed at trying to combat platform manipulation makes a huge difference, and we can see that when platforms make a concerted effort to change this, it also becomes harder to manipulate them. Facebook has made progress during the last year, for example; they've become much better, even though we assess that Twitter is still the industry leader when it comes to countering abuse of its systems. TikTok, which was a new platform in last year's experiment, scored the worst, but we have reason to believe that they have improved during this year, spending more effort and more resources on the problem. It will be interesting to see, during the rerun of this experiment this year, whether this ranking stays, or whether some of the platforms that we suspect have put more effort into combating this problem also show a significant improvement in our measurements.

Another interesting takeaway is that there can even be a significant difference between platforms owned by the same company. The best example is the significant difference between Facebook and Instagram: Instagram is much easier to manipulate than Facebook is. This is surprising. One would think that two platforms owned by the same company would have equal levels of security, but this isn't the case at all. From creating fake accounts to buying fake engagement, there's a clear difference between these two platforms, and that illustrates that this is, to a large extent, a technical problem. When we ran this experiment two years ago, we even saw on Instagram that when fake accounts that had delivered fake engagement were removed, the fake engagement itself remained, which of course is a technical issue with the platform.

So, to a large extent, combating manipulation of social media platforms is a technical problem that social media companies have to pour financial resources into solving, and that hasn't been done enough before. We have seen improvement from year to year, from 2019 to 2020. But as of the end of last year, when we ran this experiment, the manipulation service providers were still winning by a large margin. That is, you could still effectively and cheaply buy manipulation, it would be speedily delivered onto the social media platforms, and it would remain up for weeks and weeks. To this day, some of the fake engagement that we bought back in 2019 remains active online.

Seeing this, and studying the problem of disinformation and misinformation on social media, we understand that inauthentic behavior is a central component of coordinated inauthentic behavior, that is, the coordinated spread of disinformation. And one important solution for tackling that is enhanced cybersecurity for social media companies, protecting their platforms against technical abuse.
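The Instagram issue just described, engagement that outlives the fake account that produced it, is the kind of defect a simple consistency check can surface, alongside the removal-rate figure mentioned above. The following Python sketch is hypothetical: the data structures, identifiers, and figures are invented for illustration and are not drawn from any platform's API.

```python
# A minimal consistency check for "orphaned" engagement: likes or comments
# that remain visible even though the account that produced them has been
# removed. All identifiers and figures here are hypothetical.
reported_accounts = {"acct_001", "acct_002", "acct_003"}
removed_accounts = {"acct_002"}  # what the platform actually took down

# (engagement_id, author_account, still_visible)
engagements = [
    ("like_1", "acct_001", True),
    ("like_2", "acct_002", True),   # author removed, but the like remains
    ("like_3", "acct_002", False),  # removed along with its author
]

# Share of reported fake accounts the platform actually removed.
removal_rate = len(removed_accounts & reported_accounts) / len(reported_accounts)

# Engagement whose author is gone but which is still being counted.
orphaned = [
    eid for eid, author, visible in engagements
    if visible and author in removed_accounts
]

print(f"removal rate: {removal_rate:.0%}")
print("orphaned engagement:", orphaned)  # non-empty indicates the defect
```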
So in that sense, social media cybersecurity equals election security. The spread of disinformation, the intentional efforts to undermine the will and ability of voters to vote on election day, and the intentional manipulation of political conversations on social media all happen online, and some of it happens using bots and technical manipulation. That can be prevented with additional cybersecurity at the social media companies.

During last year's experiment, we developed a number of recommendations for platforms and for policymakers. Of course, social media platforms need to do more to counter abuse of their services, but we also need to set standards and require reporting from the companies based on more meaningful criteria; today it's very difficult to compare and contrast reports from different social media companies. We also need to increase transparency and enable independent verification of the figures reported by the social media companies, and in the same way we need independent and well-resourced oversight.

We also need to regulate the market for social media manipulation and counter manipulation services to a much larger extent. We've seen some important steps being taken, with social media companies suing manipulation service providers, and we've also seen a number of governments file charges. But still today, most of these core services remain online, and more needs to be done.

We also need to understand that we need a whole-of-industry solution to combat this problem from a cybersecurity standpoint. If we take the case of the Ukrainian security services, we could see that these bot farms heavily rely on SIM cards and other telecommunication equipment. That of course means that regulation of this equipment, of SIM cards, remains a critical component in combating the misuse of social media platforms. And we need to make sure that social media companies spend more effort on preventing the abuse of their services.

So cybersecurity remains, and will continue to be for the coming years, a very critical component of election security, especially in the field of disinformation and misinformation. Thank you.