Welcome to the Ethics Village. I am Andrea Matwyshyn from Penn State, and we are incredibly lucky to have with us today Leisel Bogan from Tech Congress. Welcome, Leisel. Thank you for having me. I'm so excited. Thanks for joining us. So I'm going to not talk at all after this, but if you wouldn't mind, just tell us a little bit about yourself: how you got involved with tech policy, some of the exciting jobs you've had and what you've learned from them, and how someone who's interested in following in your footsteps might also get involved in tech policy work. And then after that, I look forward to your talk. Great, thank you so much. Well, I have been working in technology for quite a while, but I didn't start there. My original intent was to do more global affairs and foreign policy work, and I did some of that. But I began working in new media technologies out of grad school, and then I was able to do a research fellowship at Stanford University, originally on international institutions, but through that research I began work on cybersecurity. I did a number of projects related to cybersecurity and technology. At the same time, when former Secretary of State Condoleezza Rice was coming out of office, she hired me to start a new consulting firm with her. It was a boutique consulting firm, and it had technology clients from Silicon Valley, large international banks, and other organizations that needed help navigating complex challenges in emerging markets all over the world. So I was able to combine some of my economics background with this new technology expertise to help develop strategies for clients. At the same time I was doing research at Stanford in a number of areas, particularly focused on cybersecurity. After that I went to a tech company for a bit, and from there I went back to Stanford.
I was doing a lot of tech policy advising from that perspective because I had worked with senior government officials; the consulting firm was with former National Security Advisor Steve Hadley and former Secretary of Defense Robert Gates, so the firm was Rice Hadley Gates. At Stanford, I was briefing members of Congress on big data issues, privacy, technology, and national security. From there, I went back to the private sector, where I worked on global strategy for cybersecurity at a large professional services firm. Following three years of that work, and recognizing the importance of technology in the entire infrastructure in which we operate as citizens, I was compelled to join Tech Congress, which is a fellowship program that places mid-career technologists on the Hill. The fellowship is called the Congressional Innovation Fellowship, and I was a senior fellow in the office of Senator Mark Warner last year, where I advised and supported on national security issues, cybersecurity, and technology. After that I was hired by Tech Congress to lead a new initiative by the organization called the Congressional Digital Service, which is an attempt to place developers on the Hill to help Congress deal with the current COVID crisis. I've been working in this research space for a long time, and I'm very excited to discuss cybersecurity and disinformation, some of the tactics and methods that are currently in use today, and also the legislative efforts to counter them. So with that, I will share. Oh, how can you get interested? Sorry, I left that out. For those of you who are interested in getting into this space, there are a number of ways you can do that. I think finding mentors is extremely important. I was really fortunate to have incredible mentors who pointed me in directions I may not have thought of myself.
Secondly, I think looking for fellowships or research internships, or other ways to sort of get in on the ground floor, is really important. In terms of technology development, if you don't have tech skills, there are plenty of boot camps you can join. If you don't want to do a formal degree, you can take classes online; there are a lot of free resources that will teach you information security or programming and coding from a variety of different perspectives and disciplines. I'd also suggest getting involved in the information security ecosystem; there are a lot of infosec people on Twitter, on blogs, and at think tanks. I would also encourage you to look into civic tech, which is something I wasn't really familiar with until I was introduced to Tech Congress, but this is a group of technologists who are trying to make technology work better for the public. I think that's a really important initiative and a really important effort. It's something that I am committed to, because I think the last few years have shown us that technology can be an incredible force multiplier for good, but it can also be something that does quite a bit of bad in the world. So belonging to organizations that are really looking at things from an ethical and public service perspective is really important and, I think, incredibly valuable. That would be my advice, and feel free to contact me if you have any other questions. I'm going to share my screen. So we're going to talk about disinformation and security. These are obviously two different disciplines, but I'm going to talk about how they intersect and what we do with them from a legislative perspective. I'll briefly give you a history of disinformation and some of the methods and tactics that are used, and explain why I think we should take more of a security approach than a content approach to the problem.
Then I'll cover some of the global regulatory efforts in use today, and what the current US regulatory approach to these methods and tactics is. So, the history of disinformation is very old; it goes back into the 1800s. Russia monitored all internal and external communications going way back to the tsars. In 19th century Russia, the Ministry of Internal Affairs had a cryptanalytic division that analyzed encrypted, potentially anti-tsar communications using code books that were purchased on the black market. This sort of dark information market existed not just in Russia, but the Russians were quite good at organizing it. Alexander I gave credit to a Russian cryptanalyst for the defeat of Napoleon way back in 1812. Although by the early 1900s Russian cryptography was far superior to that of any other world power, some of the disinformation tactics developed by the Russians after World War One were then gleaned from Russia by the German military. The German military had a division that was dedicated to creating false military plans intended to confuse the enemy, so some people trace disinformation efforts to Nazi Germany. The Russian word for disinformation covered all deception except camouflage; they made an exception for camouflage, but it meant any other type of deception. It quickly became part of their military operations, with the intent to confuse and pollute the opinion-making process in the West. Many Russians believe that the dissolution of the Soviet Union was due to a deliberate disinformation attack perpetrated by the West. And Timothy L. Thomas notes that Russians perceive the Cold War as a war of information in which the West conquered the Soviet Union, so there's this deep sense that information had been weaponized against them and that it was fair play to weaponize it back against the West.
So there are different concepts that we'll be talking about today, and they exist within fairly distinct domains that all sort of have overlapping Venn diagrams (although my Venn diagrams are quite overlapping). Those include influence operations, which are often confused with disinformation. Influence operations are the state-sponsored collection of tactical information about an adversary, as well as the dissemination of propaganda, in pursuit of a competitive advantage over an opponent. So it's very systematic and very strategic. Disinformation is deliberate, usually state-sponsored, false information used to mislead and deceive. Cybersecurity is a completely separate domain: the security discipline that protects networks, devices, and data, as many of you know. But there's also a subset that has become a big part of this discipline, which is what I think Herb Lin and others have called cyber-enabled disinformation: using digitized methods to spread deliberately false information intended to deceive. The regulations and authorities that govern these domains range from Title 50 and the NDAA to proposed international standards and US federal regulations. There are also several other authorities that guide information operations in the intelligence community, in international law, and within the State Department, along with a number of other executive agencies. We're just going to discuss the global and US anti-disinformation regulatory efforts, not the broader frameworks which exist. So, a brief history of Russian and Soviet disinformation: in the 1920s and 1930s the Soviets created an enormous influence and intelligence apparatus across the globe.
They were relatively weak during that time, but they built these organizations, some overt and some covert, throughout the world, and they were able to challenge all the major powers of Europe and the United States, mostly without the knowledge of the United States and most of Europe. They used this apparatus to accomplish a lot in the 1970s and 1980s. They were nearly able to split NATO and Europe in the 1980s; they began those efforts in the last year of the Carter administration and continued them into the Reagan years. When communication and information were digitized, Russia's functional approach to monitoring and manipulating information didn't change. Digitization actually just accelerated it and made it faster and more efficient. In 1981, Reagan established the Active Measures Working Group to fight back against the global Soviet subversion efforts. This was a really enlightened approach to take. The Active Measures Working Group was created as an interagency working group, chaired by the State Department, but it included representatives from what was then called the US Information Agency (we don't have one anymore, but I actually think it's quite an important institution), the CIA, the FBI, the DIA, the DoD, and the Arms Control and Disarmament Agency. All of these groups came together to counter Russian disinformation efforts around the world. They realized that this was a really complex problem and that it required a multi-layered approach. They publicized the Active Measures Working Group and US successes, and it actually received quite a lot of positive response from the US public. Americans really liked this effort, but the Soviet Union continued to successfully distribute disinformation. Some examples of those successes include false claims about the US military that were spread all over the world.
One such story was that the United States military had created AIDS and released it as a weapon. That story gained considerable traction throughout the world. By the end of 1985, articles making those claims had been published in 13 countries. By 1986, one year later, it had reached 50 countries, and by 1987 the story had been published over 40 times in the official Soviet press and was reprinted or rebroadcast in 80 countries in over 30 different languages. The Soviets didn't retract the story until 1987, by which point it had already done its damage all over the world. In January of 1987 the Soviets launched another disinformation campaign (so as soon as they retracted that one, or even earlier, they started a new one), this time aimed at convincing the world that the CIA had perpetrated the November 1978 mass suicide at Jonestown, Guyana. They also ran false allegations of United States development and use of biological weapons, and claims that the United States was importing Latin American children, butchering them, and using their body parts for organ transplants. These stories ran repeatedly in the Soviet press and were picked up worldwide. We'll see how this is similar to events today. So there's the pre-digital Soviet disinformation toolkit and the post-digital toolkit, and they are actually quite similar. In the pre-digital toolkit, they used a lot of forgeries, often to discredit or malign influential figures, particularly those affiliated with the United States Information Agency. A report to Congress in 1987 noted that the Soviet Union had mailed a forged letter to the Washington Post and U.S. News and World Report, who then picked up those stories and ran with them. The document purported to be a letter from U.S. Information Agency official Herbert Romerstein to Senator David F. Durenberger, who was a former chairman of the Senate Select Committee on Intelligence, or SSCI.
The letter, dated April 29, 1986, described an alleged United States Information Agency campaign to spread disinformation about the Chernobyl nuclear power plant disaster. It was designed to discredit the U.S. government and damage our relationships with Europe. They also used a lot of PR firms. Back in the 1980s, the Washington Post reported that Moscow had approached several PR firms in the United States to help represent its narratives to the U.S. public. They relied significantly on amplification: disinformation narratives would be circulated domestically and in fringe publications, and then slowly move toward mainstream publications, like the Washington Post and U.S. News and World Report in the previous examples. There was a lot of media manipulation, the cultivation of reporters, getting reporters to pick up stories, investigate, and report on them. And there were a number of front organizations, established to look like legitimate organizations while actually serving disinformation purposes. Then there's the post-digital toolkit, which is actually very similar. Again, they're utilizing amplification, PR firms, forgeries, media manipulation, and front organizations. As you can see, this clip is from Twitter, and it shows the Russian Ministry of Foreign Affairs accusing a White Helmets worker of being an agent of Britain's MI6. They continued to perpetuate this false information and actually did quite a lot of damage to him. This has some contemporary references. So, coronavirus and COVID-19: some recent examples relate to a Chinese government official starting to circulate the narrative that the United States military brought the epidemic to Wuhan, despite that being debunked by a number of other sources. Reuters reported on disinformation campaigns by Russian state media and pro-Kremlin outlets regarding COVID-19.
And they really pointed out that the aim of the Kremlin was to aggravate the public health crisis in Western countries, in line with the Kremlin's broader strategy of attempting to subvert European societies. Again, it has a long history of this. We also saw other disinformation on platforms like Twitter. This is an example of the Chinese Communist Party capitalizing on the real rise in racism against Asian Americans in the U.S. Apparently there were a lot of videos claiming that a Chinese American brigade had had to be formed because violent white rioters were attacking Chinese Americans in Los Angeles. This was proven false, although obviously there is quite a bit of racism being directed toward Asian Americans, unfortunately. Computational influence and digital disinformation: what do we mean by that? It's kind of a word salad. It's obviously not just Russia; every state actor gets involved in this type of information operation. Russia, China, India, Iran, Pakistan, Saudi Arabia, and Venezuela have all been identified by social media platforms as being engaged in these methods targeted at other states. They're not just doing it to the U.S.; they do it to each other. It's a global effort. The things that the platforms have pointed out are the amplification efforts, where they use artificial amplification and media amplification. Artificial amplification is when you take a bot, or use some sort of scripted algorithm, to make it seem as if something is more popular than it actually is, or has a broader audience than it actually has. Then there's media amplification, where something starts out marginally, as a small story on a blog or website, and eventually is migrated through manipulation into more mainstream media forms. There are bot farms (which I'll go into in a little more detail in a moment), astroturfing, troll farms, dark ads, deep fakes, and fake accounts, either sock puppets or social bots.
Again, the terms and language used to describe this type of behavior are evolving, as this is sort of a new space. And then there's the concept of brigading, which I'll go into in a little more detail in the next slides. So, fake accounts. Those are quite popular on online platforms. In November of 2019, Facebook removed 5.4 billion accounts that it said were fake. It reported that up to 5% of its monthly user base of nearly 2.5 billion were fake accounts. There are a number of sock puppets, which are online identities used for the purpose of deception, so it looks like a real person when it isn't. The CEO of Whole Foods was caught engaging in this behavior in 2007, and the CEO of Hollinger International was found guilty of mail fraud and obstruction of justice as well. Social bots are sometimes described as algorithmic software programs designed to interact with humans, like chatbots, but in this case they often have more autonomy. Those aren't always nefarious; bots and much of this technology can sometimes be very positive, really great. It's just that when it's used with a nefarious or manipulative intent, it becomes problematic. So this is a quick clip of a video made by an American YouTube channel called CleanTV.com, and it was reposted by a Russia-linked Facebook page called Secured Borders, so you see a lot of cross-platform pollination with these as well. Amplification is another area where a lot of studies are going on. Artificial amplification is when you take information or a narrative and use a scripted method to distribute it more widely than it normally would be, often by using fake accounts or bots, which aren't even real accounts.
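Researchers who study social bots often start from simple behavioral heuristics before anything fancier: account age, posting rate, and follow-graph shape. Here is a purely illustrative sketch; the features, thresholds, and weights are invented for the example, not any platform's actual method.

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int          # how long the account has existed
    posts_per_day: float   # average posting rate
    followers: int
    following: int

def bot_score(a: Account) -> float:
    """Crude 0-1 heuristic: very new, hyperactive accounts that follow
    far more users than follow them back look more bot-like."""
    score = 0.0
    if a.age_days < 30:
        score += 0.4            # very new account
    if a.posts_per_day > 50:
        score += 0.4            # superhuman posting rate
    if a.following > 10 * max(a.followers, 1):
        score += 0.2            # lopsided follow graph
    return round(score, 2)

# A fresh, hyperactive account scores high; an older, ordinary one scores low.
suspect = Account(age_days=5, posts_per_day=200, followers=3, following=900)
normal = Account(age_days=1200, posts_per_day=2, followers=150, following=180)
```

Real detection systems combine dozens of such features with trained classifiers, but the key design point survives the simplification: detection keys on behavior, not on the content of what the account says.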
There's media or platform amplification, as I discussed before, where a story migrates from fringe platforms to mainstream media platforms. An example of amplification having really dire consequences is the case of Facebook in Myanmar. The Rohingya Muslims in Myanmar were driven out of the country, or raped and slaughtered by the military, in a genocide that was orchestrated in part through artificial amplification of disinformation. Some of it was done by the military there, creating fake accounts and spreading false information about the Rohingya, and Facebook wasn't able to shut that down before many of them had been slaughtered. So what are bot farms? Bot farms are entities created by scripted behavior that enables machines to do rote operations: to upload or download music, videos, or products online, or to otherwise automatically create a false impression of popularity, or the impression of an audience that may not actually exist. Click farms are people who perform similar rote operations to accomplish a task, like upvoting music or downloading videos, so it's not a script that's running; it's an actual person following direction. This plays a particularly destructive role when used to falsely amplify messages that are destructive, but it's also a false form of approval: you may purchase a product on Amazon thinking that it has X number of positive reviews when many of those reviews might not actually be organic, or a piece of music may seem incredibly popular. Here's an example of that in place: these are all cell phones that are playing music, downloading apps, or uploading music. As you can see, there are actual people connected to those; it's a bunch of cell phones being driven by a scripted behavior that's connected to all of them. You can use troll farms and brigading.
You can use bot farms, and you can also use sock puppets. There's a sort of new term that Oxford researchers have used, "manufactured consensus," meaning fake armies of fake online influencers. That's when it looks like a ton of people agree or think one thing about something, but it's actually not a ton of people; it's a fake army of influencers. Troll farms usually refer to state-sponsored anonymous accounts, either sock puppets or bots, that engage in biased commentary. Brigading is when they identify a target and gang up on them, or attempt to drown out online commentary by sheer volume, through harassment or other methods. Brigading is an online harassment technique, and there's vote brigading as well: massively coordinated online voting, either with scores or approval ratings, that creates a false impression of approval where there maybe isn't one. And it's often tied to what people call perception hacking. Then there's astroturfing. This was a term coined in 1985 by Senator Lloyd Bentsen of Texas, and it occurs both online and offline. It's political activity designed to appear as if it's unsolicited or autonomous political engagement. So people will be paid to express their opinions, paid to show up at a town hall, paid to create the impression of an organic grassroots effort when in fact it's not that. Several American corporations and political campaigns have engaged in astroturfing, and it's often difficult, from an intellectual, academic, or practical perspective, to distinguish it from certain types of advertising. Then there's this concept called dark ads: precisely targeted ads that don't show up organically on a user's timeline or feed. Facebook invented them, but all platforms use them. Following Cambridge Analytica and blowback from the platform's role in the Brexit campaign, Facebook and Twitter developed ad transparency tools that allow all ads run on the platform to be visible.
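The vote brigading described above has a recognizable shape in data: an abnormal burst of votes on one item inside a short time window. A minimal detector sketch follows; the window size and threshold are chosen arbitrarily for illustration, and real platforms use far richer signals (account reputation, IP clustering, and so on).

```python
def flag_vote_bursts(votes, window_s=60, threshold=100):
    """votes: list of (item_id, unix_timestamp) pairs.
    Flag any item that receives more than `threshold` votes inside some
    `window_s`-second window, a crude signature of coordinated brigading."""
    by_item = {}
    for item, ts in votes:
        by_item.setdefault(item, []).append(ts)

    flagged = set()
    for item, stamps in by_item.items():
        stamps.sort()
        lo = 0
        for hi in range(len(stamps)):        # sliding window over timestamps
            while stamps[hi] - stamps[lo] > window_s:
                lo += 1
            if hi - lo + 1 > threshold:
                flagged.add(item)
                break
    return flagged
```

For example, 150 votes on one item within a single minute would be flagged, while the same number of votes trickling in over days would not; the detector, like the bot heuristics, looks at coordination patterns rather than at what is being voted on.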
Targeted online advertising, algorithmic bias, and discrimination are highly problematic, and we have examples of these. Advertisers will create a psychographic profile of someone. As you see here in this slide, you can drill down to the specific type of consumer you want to advertise to, or the political participant you want to target: say, a French- and English-speaking woman aged between 31 and 56, located within a 10-mile radius of Boston, Massachusetts, who works either from home or from a small office in the retail production industry. If you look through this, it drills down to whether they're fit moms or green moms, whether they have grade school kids, whether they live in a condo, down to the exact square feet of that condo, and what they enjoy doing for their activities. You can get even to the types of travel apps, and the specific types of travel, that are on their phone and that they've used. This is a level of specificity that a lot of consumers aren't aware is being used to target ads to them. Then there's the concept of deep fakes and shallow fakes. Deep fakes use artificial intelligence to make fake videos or photos of real people, often using neural networks. There are three main types: generative adversarial networks (GANs), autoencoders, and variational autoencoders (VAEs). Google's DeepMind showed that VAEs could outperform GANs on face generation. Variational autoencoders are capable of both compressing data, like an autoencoder, and synthesizing data, like a GAN; however, GANs generate data in fine granular detail, while images created by VAEs tend to be more blurred. Shallow fakes are doctored videos that are less dramatic, like the Speaker Pelosi video. And what researchers have been telling us is that a lot of the time the hardest part about deep fakes often isn't the images but the voices and the sounds, and getting those to mimic the actual intonations. Here's an example of a deep fake.
At the beginning, before anything, you get together and you read through the script. And so it's like, you know, all these heavyweights like, you know, Ben Stiller, Jack Black, Robert Downey Jr. Everybody at the end is like me, like, you know, like, hey, happy to be here guys. And some other supporting guys, and then Tom Cruise walks in, and even those guys are like, whoa. And he's super stoked to be there. He's just immediately excited when he walks into a room. And so he comes over and he sits next to me, and I think he had been briefed on some of the supporting guys, but he was trying to place me, you know. So he sat down next to me. It's like, I love your work. Thanks. I love your work too. So as you can see, a very impressive manipulation of that media. Some of the technical issues with addressing disinformation, as considered by Congress, are the identity validation piece, financial transactions, VPNs, bot detection technologies, spam detection and sentiment analysis, and network and behavior analysis. I'll just go through these briefly; I think all of them involve constitutional rights. Identity validation, forcing individuals to validate their identity online, has a lot of benefits for a variety of groups. It also has a lot of problems for those who believe in the First Amendment right to anonymity online. Financial transactions are difficult because they're layered on many of these platforms, so it's not always clear who made the payments and where they came from. Another approach members of Congress considered: well, if you just identify where the individual is based, are they in the U.S.? Or are they in, you know, Russia? I've had a number of members say, if I'm engaging with somebody, I want to know if I'm talking to an American or if I'm talking to a Russian operative. Well, the problem with that is that you can't always tell.
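One way to see why you can't always tell: a platform generally only observes the last network hop of a connection. A toy illustration follows; the IP-to-country table is invented (using reserved documentation addresses), and real geolocation relies on large commercial databases that are still fooled by VPNs and proxies.

```python
# Hypothetical geolocation table. 203.0.113.0/24 and 198.51.100.0/24 are
# reserved documentation ranges; the country assignments are invented.
GEO_DB = {
    "203.0.113.7": "RU",    # the operator's real network exit
    "198.51.100.9": "US",   # a commercial VPN exit node
}

def apparent_country(connection_ip: str) -> str:
    """All a platform sees is the IP address of the last hop."""
    return GEO_DB.get(connection_ip, "unknown")

# An operator in Russia tunneling through a U.S. VPN exit appears American:
# the platform attributes the traffic to the VPN exit, not the true origin.
direct = apparent_country("203.0.113.7")    # what the platform would see without the VPN
via_vpn = apparent_country("198.51.100.9")  # what it actually sees through the VPN
```

The same connection looks Russian or American depending solely on which hop the platform observes, which is the attribution problem members of Congress kept running into.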
There's spoofing, you can use VPNs, and people all over the world use these. But bot detection technologies have developed considerably. Depending on how you define a bot, there's an incredible amount of work that has been done in this space, and the techniques are very sophisticated, but at the same time, things still get through. There's also spam detection and sentiment analysis. These also involve constitutional considerations, given the privacy issues involved in sentiment analysis. That's where certain types of language and online behavior can be analyzed to assess whether you are positive toward something or negative. It works really well with some of the targeted advertising. Obviously, network and behavioral analysis are ways to look at the kind of broader infrastructure in which some of this activity is happening. And Congress does consider all of this when thinking about bills and ways to control some of the problems that are created by this behavior online. So, there are a lot of First Amendment issues with addressing disinformation as content, and I think going after the content on these platforms is never the right approach. I think a content-neutral approach is always best, if possible, because of the First Amendment. The First Amendment issues involved in regulating disinformation online include the rights of the users, but also the rights of the companies. The First Amendment protects anonymity, and it protects against compelled speech: the government can't make you say anything. The companies will fight disclosure laws that are often well-intended efforts at transparency; because the government can't compel you to speak about something, they'll fight it. SCOTUS has historically allowed greater regulation of commercial speech, but since the 70s it has been increasingly protective of commercial speech, which has been an interesting development.
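Coming back to sentiment analysis for a moment, the basic idea can be shown with a toy lexicon-based scorer. The word lists here are invented for illustration; production systems use much larger lexicons or trained models, which is exactly what raises the privacy concerns just mentioned when they're run over people's online behavior at scale.

```python
# Toy sentiment lexicon; real systems use thousands of scored terms.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "angry"}

def sentiment(text: str) -> float:
    """Score in [-1, 1]: fraction of sentiment-bearing words that are
    positive minus the fraction that are negative; 0.0 if neutral."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total
```

Even this crude scorer shows why the technique pairs so naturally with targeted advertising: once you can estimate whether someone feels positively or negatively about a topic, you can select which message to show them.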
SCOTUS, the Supreme Court of the United States, has also sometimes upheld laws that regulate speech if the law is part of a larger regulatory scheme focused on conduct and only incidentally burdens speech. It has also allowed regulation of speech in a commercial context where there's a reasonable government interest in compelling factual and uncontroversial information. So, on to addressing disinformation as a security rather than a content problem. Over the history of cybersecurity and infrastructure security, we've developed perimeter-less security. Organizations realized that traditional perimeters for information security, like firewalls and signature-based anti-malware, were becoming less effective, particularly with a more mobile workforce. Traditional cybersecurity defenses leave organizations vulnerable to what many call the Maginot Line problem. The Maginot Line was a work of military engineering genius, a wall that was supposed to protect France. Everyone thought it was impenetrable, but one end relied on a forest that supposedly nobody could get through, and the Germans just went around the wall. In many cases, organizations would design their cybersecurity infrastructure the same way: well, we have the best firewalls, we have the best security, and then an adversary would just go around it, or move laterally within it. So I think that content and speech is a Maginot Line: when we attack content, or narrowly scope methods of deterrence without thinking about the infrastructure or the consequences, state actors can simply go around our defenses. I also think some of the legislation considered to address these problems can often look like the authoritarian laws we have seen in other parts of the world. I'm going to briefly go over those.
So, global regulation of disinformation: 54 countries have introduced legislation to address disinformation online, and 16 of them have enacted or introduced some form of fake news or misinformation laws: France, Belarus, Bangladesh, China, Kenya, Burkina Faso, Malaysia, Cambodia, Vietnam, Thailand, Egypt, Germany, India, Myanmar, Singapore, and the United States. I'm going to go over just a few; I'm not going to make you go through all these countries. France's disinformation law emerged as a number of disinformation efforts were directed toward the French president. In 2018 they passed a law that provides a definition of fake news: inexact allegations or imputations, or news that falsely reports facts, with the aim of changing the sincerity of a vote. It's designed to enact strict rules on the media during electoral campaigns, and more specifically in the three months preceding a vote. They made changes to the draft law in July 2018 to account for satire; as you can see, satire would definitely have fallen under some of those restrictions, so they had to make a change. The legislation gives authorities the power to remove fake content spread via social media and block the sites that publish it, as well as to enforce more financial transparency for sponsored content, in the three months before an election. The bill builds upon an 1881 law that outlaws the dissemination of false news (there's a long history of false news in the world, as well as disinformation). As you can see, it has three major provisions, and it actually borrows from the Honest Ads Act in the United States. What's also interesting is that a lot of legislatures around the world, whether they're democratic or not, will borrow from each other's legislation; in this case they used provisions from the Honest Ads Act to take existing standards for TV and radio and apply them to social media, which actually is a fantastic idea.
It allows political candidates to sue for removal of contested material, but again, because it is a restriction on speech, the whole bill is hotly contested. Singapore has its anti-misinformation law, which criminalizes the dissemination of false information online; it is one of the more comprehensive and aggressive pieces of legislation in the world. It passed 72 to 9 in Singapore's Parliament, and it makes it illegal to spread false statements of fact in Singapore that compromise security, public tranquility (which I find particularly problematic), public safety, and the country's relations with other nations. It punishes people who post false information with heavy fines and even jail time. This law has been condemned by a number of human rights groups and publications for unduly limiting free speech: it lets the government demand publication of corrections alongside allegedly false claims, it outlaws the spread of misinformation on private messaging apps, and it gives the government power to remove false content that undermines public trust, so it increases surveillance tenfold. It is quite comprehensive, but what's also problematic is that I found the law itself utilizes terms that were published by the social media platforms. Language like coordinated inauthentic behavior (CIB) originates with US companies and is then adopted by foreign states in their legislation to regulate behavior, perhaps in ways the social media companies weren't necessarily intending. Turning to US regulatory efforts on disinformation: because of all the constraints I mentioned before, the complexity of the language issues, and the different domains in which these actors operate, there are a number of reasons why it's been harder for the government to legislate on some of this activity. First of all, the social media companies are not the government. 
They're not violating the First Amendment by imposing restrictions on speech; it's when the government forces a social media company to impose those restrictions that there are a lot of legal problems, which Dr. Matwishan could probably elaborate on much better than I could. Some of these provisions are the Countering Foreign Propaganda and Disinformation Act, which was included in the National Defense Authorization Act for 2017; H.R. 5910, the Defending Against Russian Disinformation and Aggression Act; the Honest Ads Act; the Deepfake Report Act; the Malicious Deep Fake Prohibition Act of 2018; and the DEEP FAKES Accountability Act. They turned "deep fakes" into an acronym, which is quite clever. Again, Congress wants to act on some of these bad behaviors that are having negative consequences, and they've been doing a lot more in the last few years, but again, it's quite complex. The Countering Foreign Propaganda bill in the NDAA was introduced by Senator Rob Portman and Senator Murphy. The bill created a grant program for NGOs, think tanks, civil society, and other experts outside of government who are engaged in counter-propaganda-related work. I actually think this could have been more aggressive. The Global Engagement Center under the State Department is great, but I do think we need some form of a U.S. Information Agency, just to take a more holistic approach, so that all executive agencies, like the former working group, come together and consider solutions for this ecosystem in a coordinated manner. Then we had the Honest Ads Act, which as I mentioned was borrowed from by France. It was introduced by Senator Klobuchar, Senator Lindsey Graham, and Senator Warner in 2018, and it helps prevent foreign interference in election advertising. It amends the definition of electioneering communication in the Bipartisan Campaign Reform Act of 2002 to include paid internet and digital advertisements. 
A lot of these problems are a result of old legislation not being updated for new technologies. I encountered this early on in my career at a studio in Hollywood. They had a whole framework for television and music rights, but it was pre-digital, and when we introduced streaming and downloading, all the licenses that applied to television shows, movies, and the music that accompanied them were completely outdated. When we started analyzing that, well, you can't go through and renegotiate every single license. Most of the studios in Hollywood, the one I worked at as well as many others, renegotiated those licenses based on a framework that looked at the new media technology landscape and said, for example, a download is equivalent to a VHS, and streaming is equivalent to a broadcast license. So they were able to expedite some of the licensing in that way. I think a lot of legislation in Congress right now could be expedited if we simply updated some of the older legislation for a new technology environment, because, as you know, technology moves very, very quickly. So yeah, these are the other requirements. The Honest Ads Act also required platforms to make every reasonable effort to ensure that foreign individuals and entities are not purchasing political advertisements in order to influence the American electorate. That seems like a small ask, but again, the purchasing piece of this is quite difficult sometimes online, trying to validate who's buying what and how. The Deepfake Report Act was passed in the Senate by unanimous consent, which means it passes if it is introduced and no one objects. It was introduced by Senators Portman, Schatz, Ernst, Heinrich, Gardner, Peters, Rounds, and Hassan. 
And that required the Department of Homeland Security to publish an annual report on the use of deepfake technologies, which would have to include an assessment of how both foreign governments and domestic groups are using deepfakes and digital forgeries to harm national security and mislead the American people. Now, that's just a report, and Congress will do this a lot: they'll pass something that requires an executive agency to create a report to help inform the public a little bit better. Then there's the Malicious Deep Fake Prohibition Act, introduced by Senator Ben Sasse in 2018. It was supposed to create a new criminal offense related to the creation or distribution of fake electronic media records. It was criticized at the time for being sloppy and poorly thought through. In fairness, these are brand new concepts and ideas, and there may be flaws, but I do think that Congress just needs better resources to help inform their decisions. The Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2019 would require intentionally deceptive content creators to label videos with a digital watermark and a written disclaimer informing viewers that the content has been manipulated. It does provide a legal course of action against content creators, while also giving people who have been portrayed in deepfake videos a private right of action. Some of this was considered unenforceable. Moving on to the Defending Against Russian Disinformation and Aggression Act: this bill was included in a package of bills in the House, the Secure America from Russian Interference Act of 2018, which includes 17 bills to address subversive Russian activities. So when people say Congress isn't doing anything, it's not entirely true; they wrote 17 bills to try to address the problem. 
That package included legislation to codify the State Department's sanctions office, required reporting on Putin's bank accounts, and authorized enhanced NATO cooperation, which I like; cooperation is really important. Now, the 2020 technology legislative agenda. These are just some of the efforts that, early in 2020, we identified as things Congress would try to tackle, and this was before COVID and before another crisis, the social justice crisis, had emerged. That includes trying to enact a federal data privacy framework, with a greater shift toward consumer protection rather than disinformation protection. I suspect the House is still actively considering breaking up the big technology companies, and you may have seen the hearing more recently. We highlighted COPRA, the Consumer Online Privacy Rights Act from Senator Cantwell. It has been torpedoed a few times, and the sticking point remains the private right of action for consumers when platforms misuse our data, which is a non-starter for Republicans. The House version of the Honest Ads Act will continue to be debated. Senator Warner introduced the DETOUR Act and the ACCESS Act, and, with Senator Josh Hawley, the SMART Act. The DETOUR Act is about dark patterns; it was meant to address manipulative design on websites. The ACCESS Act was an interoperability bill; the senator wanted to make it easier to move your data from one platform to another. I think we'll continue to see this movement toward an incremental breakup of large tech companies and/or antitrust reform, but the 2020 legislative agenda has shifted due to COVID, with families trying to work from home and relying so heavily on technology, so that's probably going to continue to shape the public debate. And that is all I have. There we go. Thank you. 
Thank you so much, Lisa, for joining us today and for that amazing talk. The Q&A with you follows after this. Thanks so much.