Hacking voters is easier than hacking ballots. My name is Maurice Turner, and in this talk I will cover who I am, human-based election vulnerabilities, social media analysis, foreign adversaries, and recommendations to help prevent voter hacking. I call myself a public interest technologist. Currently, I am the Cybersecurity Fellow at the Alliance for Securing Democracy at the German Marshall Fund of the United States. I helped develop comprehensive strategies to deter, defend against, and raise the costs on autocratic efforts to undermine and interfere in democratic institutions. My focus is securing critical infrastructure and deterring cyber operation escalation. I have also held positions at the United States Election Assistance Commission, the Center for Democracy and Technology, the United States Senate, and other public sector and private sector organizations in D.C. and California. I hold degrees in public administration and political science, as well as a certificate in cybersecurity strategy.

I got my start in tech probably the same way that some of you did: I broke things around my house and tried to put them back together. Sometimes it worked; other times, not so much. In high school and through college, I built computers and worked at a small medical product development firm, where I grew my skill set from basic repairs to sysadmin work to product design and CAD. It was my first experience being part of a product lifecycle and seeing how decisions made by a small group could literally mean the difference between thousands of patients going home weeks earlier or dying. That lesson has stuck with me throughout my career. I got my start in elections at around the same time. First, I volunteered on local campaigns in Southern California while in college. In 2008, I made the shift to the other side of elections to become a poll worker.
The new DREs were being used, so I knew it was a good opportunity for me to bring my tech know-how to help other poll workers who might not be as comfortable with their digital voting machines. I continued that for nearly 10 years, even after I moved to Virginia. I took a deeper dive into elections after my first DEF CON at the inaugural Voting Village. Learning about election security was eye-opening, to say the least. What keeps me interested in the field is the combination of IT modernization, cybersecurity, policy, and administration, all working toward helping people exercise their constitutional rights.

The U.S. election infrastructure is maturing, but it still has a major weakness: its voters. Malicious domestic and foreign actors capitalize on free-speech protections and inconsistent content moderation enforcement to seed and amplify disinformation and misinformation on social media platforms. The Alliance for Securing Democracy has developed a social media analysis tool to track these overt disinformation messages that are targeting democratic elections in the United States as well as Europe. Authoritarian regimes have long used influence campaigns as a means of controlling public sentiment internally and externally. These regimes have turned to social media platforms to extend their power asymmetrically, but it's not too late to implement technical and policy measures to improve resilience in the election infrastructure in the United States and in other democracies.

U.S. election infrastructure vulnerabilities. Malicious actors are growing adept at using social media platforms to communicate directly with American voters. They have used Facebook and Twitter since at least the 2016 general election. YouTube and Instagram users are also being targeted with visual content that may be more difficult for content moderation teams to identify and track. Snapchat and TikTok are the latest platforms gaining popularity in America.
As you'd expect, the attackers will follow users to continue their efforts at perception hacking. APTs attributed to Russia, China, and Iran targeted hundreds of political organizations with hack-and-leak operations in the 2020 election cycle. The goal was to gain access to personal accounts in order to exfiltrate sensitive data that could be used in future disinformation campaigns or extortion attempts. The vast majority of Americans use at least one of the popular social media platforms. Most of these users are on their preferred platform daily, despite saying that social media is having a mostly negative effect on the way things are going in the U.S. today. What those users are looking for is new content that is engaging, content that malicious actors are happy to create or amplify. The concerns about just how effective algorithmic suggestions are at leading users toward progressively more divisive content are well known. Even the criminals know how to game the algorithms. And for those that don't, disinformation-as-a-service operators are standing by to write articles, make social media posts, and even provide search engine optimization for disinformation campaigns. Technical advances in generative adversarial networks and artificial intelligence have made the creation of believable bot accounts nearly trivial. It's a constant battle to take down these fake accounts as quickly as possible. Facebook reports that it bans billions of accounts a year. But banning accounts is not as simple as you may think. Just ask Twitter. Last month, they verified several fake accounts that had supposedly gone through a more rigorous verification process. I believe that there is greater value in fake accounts when they are used to amplify rather than create disinformation. It's far less risky to boost the reach of a real user's social media post, because all of the platforms value authentic engagement. Sometimes that engagement is prioritized even when it violates policies.
Social media analysis. So to help researchers and journalists reveal some of these state-backed messages, the Alliance for Securing Democracy created the Hamilton dashboard. It can be used to help shed some light on these attempts at using disinformation to subvert democratic institutions. To learn more about how the Hamilton tool was developed, let me introduce you to my colleague, Bret Schafer, media and digital disinformation fellow. Bret is the creator and manager of Hamilton 2.0, an online open-source dashboard tracking the outputs of Russian, Chinese, and Iranian state media outlets, diplomats, and government officials. As an expert in computational propaganda, state-backed information operations, and tech regulation, he has spoken at conferences around the globe and advised numerous governments and international organizations. Prior to joining GMF, he spent more than 10 years in the TV and film industry, including stints at Cartoon Network and as a freelance writer for Warner Brothers. He also worked in Budapest as a radio host and in Berlin as a semi-professional baseball player. Here's Bret with more on Hamilton.

This is the Hamilton 2.0 dashboard, which is an online open-source tool developed by the Alliance for Securing Democracy at the German Marshall Fund that tracks the outputs of Russian, Chinese, and Iranian government officials, state media accounts, and outlets across Twitter, YouTube, and state-sponsored websites, as well as through the official outputs of each country's Ministry of Foreign Affairs. In the coming months, we're gonna be adding in Facebook data as well. So when a user first lands on the dashboard, we provide a summary analysis of the topics and themes that have been promoted the most often by the networks that we track over a given time period. We typically default to the last seven days. I've selected the last 30, but this is customizable, so users can select any date range of interest.
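Functionally, the summary view described here is an aggregation over collected posts within a user-selected date range. Here is a minimal sketch in Python; the account names, field names, and data are invented for illustration and are not Hamilton's actual schema or code, which pulls from the platforms' APIs:

```python
from collections import Counter
from datetime import date

# Hypothetical tweet records; names and hashtags are made up.
tweets = [
    {"account": "mfa_russia", "date": date(2021, 7, 1), "hashtags": ["vaccine", "nato"]},
    {"account": "rt_com",     "date": date(2021, 7, 2), "hashtags": ["nato"]},
    {"account": "cgtn",       "date": date(2021, 6, 1), "hashtags": ["xinjiang"]},
]

def top_hashtags(tweets, start, end, n=10):
    """Count hashtag use across tweets inside a user-selected date range."""
    counts = Counter()
    for t in tweets:
        if start <= t["date"] <= end:
            counts.update(t["hashtags"])
    return counts.most_common(n)

# Only the two July tweets fall inside this window.
print(top_hashtags(tweets, date(2021, 6, 25), date(2021, 7, 25)))
# [('nato', 2), ('vaccine', 1)]
```

The same pattern extends to the dashboard's other summaries (most-mentioned countries, most-shared URLs) by swapping which field is counted.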
I've also selected all three countries that we monitor, but again, if your interest is in a specific country, you can just select that country and drill a bit deeper into that network. So as a user scrolls through the dashboard, it's a great way to orient yourself around the topics and themes that are being promoted the most during the specific time period that has been selected. We also show a snapshot of the accounts that have been the most influential during that time period, as well as the individual tweets that have received the most retweets and likes, and we provide things like the countries most mentioned, the URLs that have been shared the most, as well as the accounts that have been mentioned the most. The dashboard also provides for a little bit deeper analysis, though. So if your interest is less in the topics and narratives being promoted and more in the accounts that are the most influential, a user can select the accounts page, which, for example, will show the accounts that have had the most gains in followers over a specific time period. We again show a little bit more detail here in terms of which accounts have the most followers, the most retweets, the most likes, and if you scroll down, you can look at things like the most retweeted accounts that are outside of the network. So these are the accounts that official state-backed actors are trying to boost through retweets. You can also dig into an analysis of the retweet networks that are sharing specific content. So I'm gonna change to a different tab here, only because things tend to work a little bit slowly when doing a screen share. But I looked at the last 30 days, selected former President Trump as my key phrase of interest, and here it will show me the retweet network. So these are accounts that have tweeted about Trump and then have been retweeted by one or more of the accounts that we follow on the dashboard.
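The retweet network described here is essentially a directed graph from monitored state-backed accounts to the outside accounts they boost. A minimal sketch of that structure, using invented account names and plain standard-library containers rather than Hamilton's actual implementation:

```python
from collections import Counter, defaultdict

# Hypothetical retweet records: (monitored_account, original_author).
retweets = [
    ("mfa_russia", "blogger_a"),
    ("rt_com",     "blogger_a"),
    ("rt_com",     "blogger_b"),
]

def retweet_network(retweets):
    """Adjacency list: monitored account -> set of outside accounts it retweeted."""
    graph = defaultdict(set)
    for monitored, original in retweets:
        graph[monitored].add(original)
    return dict(graph)

def most_boosted(retweets, n=5):
    """Outside accounts ranked by how often monitored accounts retweeted them."""
    return Counter(original for _, original in retweets).most_common(n)

print(most_boosted(retweets))  # [('blogger_a', 2), ('blogger_b', 1)]
```

Filtering the input records by a key phrase first (e.g., tweets mentioning Trump) yields the phrase-specific network shown in the walkthrough.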
As mentioned, we also provide a bit of analysis of YouTube and state-sponsored websites as well. So again, clicking to a new tab: on YouTube, we are tracking four channels that are financed by the Russian and Chinese governments, so RT, RT America, CGTN, and CGTN America. And again, just scrolling through here, you can get a pretty quick snapshot of the content that has received the most views, likes, and dislikes over a specific time period. Again, it's a great way for a user to familiarize themselves with the kind of topics that are being promoted by each network, but we also give the user the ability to do a deeper-dive analysis as well. On state-sponsored websites, we are currently tracking about 35 websites, most of them Russian-backed at this point. But here we'll have every article that has been published through particularly English-language state-sponsored websites, as well as a few cutouts. So these are websites that we know to have connections to Russian military intelligence, for example. Across the dashboard, if you're interested in something that is not appearing in one of the top charts, you can always run a custom search, as I did by running former President Trump with a network analysis, and the dashboard will show you relevant metrics. We have also created a tool that we call Hamilton Search. What it is, essentially, is a Google for state-sponsored outputs. So here I ran President Biden over the last two years, and it will show all of the tweets that have mentioned Biden from Russia; we have China, Iran, and a few other actors as well, along with a global control group that we use. And then, scrolling through here, you can just see the most recent content, but you can also change to what's been retweeted the most often. And this is a great way to get a pretty quick sense of how these networks are talking about a specific individual or how they're discussing a specific topic.
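At its core, a query like the Biden search described here is a keyword filter over the collected posts, with results ordered by recency or by engagement. A minimal sketch, assuming an illustrative record format with made-up text and numbers (not Hamilton's actual schema or ranking logic):

```python
# Hypothetical tweet records; text, counts, and dates are invented.
tweets = [
    {"text": "Biden comments on sanctions", "retweets": 40, "date": "2021-06-01"},
    {"text": "Sports roundup",              "retweets": 90, "date": "2021-06-02"},
    {"text": "Analysis: Biden and NATO",    "retweets": 75, "date": "2021-05-20"},
]

def search(tweets, phrase, order_by="date"):
    """Case-insensitive substring match, sorted newest-first or most-retweeted-first."""
    phrase = phrase.lower()
    hits = [t for t in tweets if phrase in t["text"].lower()]
    return sorted(hits, key=lambda t: t[order_by], reverse=True)

# Most recent first, then the same hits most-retweeted first:
print([t["date"] for t in search(tweets, "biden")])                   # ['2021-06-01', '2021-05-20']
print([t["retweets"] for t in search(tweets, "biden", "retweets")])   # [75, 40]
```

ISO-formatted date strings sort correctly as plain strings, which keeps the recency ordering simple in this sketch.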
So that is the quick four-to-five-minute overview of the Hamilton dashboard. As mentioned, it is an open-source tool that we hope OSINT researchers and others will utilize to get a better sense of how state-backed actors are messaging around topics of interest to audiences around the globe.

Foreign adversaries. Let's start with Russia. Russia has a long pattern of U.S. election interference online. After 2016, most in the U.S. were convinced by the evidence that Russian agents were actively trying to influence public sentiment surrounding divisive issues. In 2018, U.S. Cyber Command actively disrupted the Internet Research Agency, a Russian troll factory, to discourage interference in the midterm elections. In 2020, Russia produced content using a website called Newsroom for American and European Based Citizens and followed American users to new social media platforms like Gab, Telegram, and Parler. Russian interference campaigns are mostly covert. The IRA, and now the website RIA FAN, generate content under the guise of multiple blogging personas. These articles often contain compelling keywords or at-mention popular handles in order to make them more appealing to followers. They are also signal-boosted through linking schemes, with the goal of spreading the misinformation to legitimate news organizations. Russia is also willing to harbor cyber criminals that target critical infrastructure. My biggest concern is the escalation from information-based attacks to attacks on the operational infrastructure of elections. The U.S. intelligence community said that this did not happen in 2020. However, the recent diplomatic and enforcement actions by the Biden administration in response to the spate of ransomware attacks on other critical infrastructure operators show that Russia is openly permitting or even encouraging criminals to operate within its borders. Russia is uniquely positioned to launch a disinformation campaign based on a legitimate attack on election infrastructure.
It would be trivial to leave evidence of a data breach on a county server and then spread claims that that same attack occurred in counties across the nation. The research shows that Russia was quite active in promoting claims of voter fraud. Stories about mail-in ballots and President Trump disputing the election results were spread across social media networks from official and unofficial media accounts. We expect that Russia will continue to evolve its tactics in order to evade content moderation efforts by social media platforms. These tactics include setting up accounts within the target country, renting the accounts of authentic users, and recruiting domestic activists. Here are some examples. As we would expect, there are quite a few that at-mention @realDonaldTrump. For example, 66 tweets between November 1st, 2020, and January 31st, 2021, include the word "fraud" and mention @realDonaldTrump. However, Sputnik affiliates are the only accounts that actually use his handle on Twitter.

Iran. Iran is active in covert and overt campaigns against critical infrastructure. Iran is another country with a long history of digital attacks against the United States and a strong interest in sowing distrust in the democratic process. In September 2012, Iranian hackers directed a DDoS attack against U.S. banks. In March 2018, Iranian hackers crippled Atlanta's city government with the SamSam ransomware attack. In October 2019, an Iranian government hacker group tried to breach Microsoft email accounts associated with journalists, current and former U.S. government officials, and a U.S. presidential campaign. And in June 2020, Google announced that Iran unsuccessfully tried to breach President Trump's reelection campaign email accounts. Iran has experience with successful and sustained information campaigns. Iranian affiliates have operated websites and social media accounts in the open.
These websites and accounts are used to create and amplify pro-Iranian propaganda. The United States has been able to take down some of these websites using legal means. Last October, 92 domain names used to push disinformation by the Revolutionary Guard were seized. In June, the Department of Justice also seized Press TV, Al-Alam, and more than 30 other pro-Iranian domains. Press TV is the Iranian version of Russia's RT, a homegrown news-themed media outlet aimed at influencing English speakers in the West. Social media accounts on Facebook and Twitter are also popular means for spreading disinformation and stealing credentials by Iranian hackers. Twitter removed 238 accounts identified as attempting to interfere with the 2020 election by directly influencing public conversation. Facebook removed nearly 200 Iranian fake accounts that were targeting military personnel and companies in the defense and aerospace industries, primarily in the US. This persistent attack used a combination of social engineering, phishing, and malware in an attempt to conduct espionage operations. One of the limitations of the Hamilton platform today is language support: some of these messages are in English, but most of them are targeting non-English audiences. There's also the issue of what happens to social media posts that link to domains that have been seized and are no longer visible. Here are examples from Iran's efforts. They certainly covered stolen-election fraud campaigns, but largely from an anti-Trump point of view, so most of the coverage was dismissive. Also, as per usual, most of their output isn't in English, so it's targeting that non-American audience.

China. China has a history of industrial espionage and anti-democratic messaging. The Chinese government is notorious for its theft of intellectual property in order to provide an advantage to its domestic business interests. The CCP's industrial espionage apparatus is quite effective.
Compare the new J-31 stealth fighter on the top with the US F-35 stealth fighter on the bottom. President Obama entered into an agreement with President Xi in 2015 to curtail this kind of IP theft. In 2017, Chinese military-sponsored hackers conducted a cyber attack on one of the United States' largest credit reporting agencies, scraping the personal data of around 150 million US and UK customers. China has begun to utilize Wolf Warriors to spread its version of events on Twitter, in Mandarin as well as in English. These high-level trolls will join divisive conversations online, like Black Lives Matter, in order to deflect from domestic human rights issues like the pro-democracy protests in Hong Kong. China is also amplifying the voices of influencers who are motivated by a pro-Beijing, anti-West worldview. And China is doing something that Russia isn't, which is engaging with inauthentic accounts, or at least accounts that are highly suspicious. It's hard to know if these were created by the government or are computer-generated accounts. Either way, Beijing's diplomats have retweeted these accounts hundreds and hundreds of times, and they're included among the top most retweeted accounts. China has a well-established presence on social media platforms as a means of influencing foreign policy and attempting to control the Chinese diaspora. The strategy intensified in 2020 once reports of COVID-19 originating from China began to gain traction globally. Chinese officials themselves started promoting outright fake information. Starting in March 2020, a foreign ministry official shared conspiracy theories tying the virus to a U.S. military research facility in Maryland. Later that month, one of his colleagues shared a video purporting to show Italians singing "Thank you, China!" from their balconies after receiving Chinese medical supplies. Italian fact-checkers determined the video was at least partially doctored.
Another takedown, of over 30,000 Twitter accounts in June 2020, also showed that the Chinese state is still resorting to inauthentic behavior to amplify its propaganda. There's no evidence of China using informational or cyber attacks against United States elections. What is particularly concerning is that China has tremendous cyber capability that it's not using. It's currently ranked second to the U.S. on the Belfer Center's National Cyber Power Index. To date, it appears as if U.S. elections don't fall into the category of being valuable enough to hack. Here are some examples of China's Wolf Warrior diplomacy in action.

Recommendations. Despite these growing concerns about foreign information attacks, there are still some precautions that election officials and state leaders can take to mitigate these risks. Continued training on the basics is near the top of the list. Officials have access to free, state-of-the-art training from several organizations. Learning about strong passwords and 2FA is the foundation for getting officials and leaders onto the social media platforms where the disinformation is being propagated. Having verified accounts on those platforms is an opportunity to counter divisive narratives and report fake accounts. It's not enough for elections to be administered smoothly. Officials need to be able to communicate the integrity of the election to the public. Communication and incident response training can help. Both resources emphasize the importance of having a communication plan that provides timely updates to reduce the space for disinformation to spread. The social media platforms themselves play an obvious role in the spread of misinformation. The evolution and enforcement of their content moderation policies are what lead to message deletions and account bans, but platforms are still struggling to apply these policies in a consistent and equitable manner.
Having verified accounts for officials helps to establish their trustworthiness as an authoritative source. This is the first step in pre-bunking misinformation and reducing the chance that it is amplified. Preventing amplification of false information got a boost from governing agencies this election cycle. CISA's Rumor Control website made headlines in November as a source of reliable election information to counter the common rumors about ballot handling and vote tallying that were circulating in the media. On the other hand, one debate that inexplicably continues is the claim that ballots in Arizona were tampered with. In response, the Maricopa County Elections Department launched the justthefacts.vote website and the "Real Auditors Don't" hashtag campaign to specifically bust myths propagated during the fake audit process. As new and upcoming networks like Snapchat and TikTok or Parler and Gettr grow in popularity, they too will need to go through the same deliberation process of how to manage divisive election content, or whether to allow users to engage in political speech at all. It will be important to include election officials in these conversations to make sure they understand how quickly and effectively voters can get engaged and connected with the authorities that are the source of information on elections. The one area of responsibility for security that lies squarely with the federal government is in deterring attacks by foreign malicious actors. More coalition-based responses are needed against regimes that harbor criminals or themselves conduct operational or information attacks on critical infrastructure. That includes gathering more support for the Paris Call on global cyber norms and further developing the offensive and defensive escalation pathways for when there's an attack in the cyber domain.
It's up to the United States and its NATO allies to leverage foreign policy and military options that can buy time for network defenders and the American people to build resilience against these attacks. ASD will continue to develop the Hamilton dashboard and the Authoritarian Interference Tracker to help uncover subversion attempts against democracies around the world. The current dashboard has specific technical limitations that need to be overcome. The biggest is that it is text-only and restricted to languages that its human coders and automated translation services can understand. That is sufficient for text-based platforms, but it is obviously a much larger problem for visual-first platforms. Video content on YouTube is analyzed by services, including Viderover and Microsoft. At the end of the day, it is still a manual process that has the potential for human error, and its variability is greater than would be acceptable in an academic study. So we urge caution when using or citing categorical data. I'd like to thank you all for joining me today for this talk, and to especially call out the DEF CON Voting Village, the Alliance for Securing Democracy, election officials, voting advocates, security researchers, and most importantly, voters. Thank you.