Welcome back everyone. I'm really excited to introduce this next talk. It's going to be AMITT (I hope I'm pronouncing that right): Adversarial Misinformation Playbooks. A little bit about the speakers. Roger Johnston is a security analyst at Ubisoft Montreal, where he specializes in adversary emulation and threat intelligence. In 2019, he worked closely with the Credibility Coalition's MisinfoSec working group to develop counters for disinformation and to provide tooling to the AMITT community. Today, Roger volunteers with the Cognitive Security Collaborative, where he builds capabilities to bootstrap health communities, provides trainings, and evangelizes the need for greater awareness of disinformation. His recent work at the Cognitive Security Collaborative includes the launch of a MISP sharing community for influence operations. Through the Cognitive Security Collaborative, Roger recently joined the CTI League to counter COVID-19 disinformation. And Sara-Jayne "SJ" Terp is a data nerd with a long history of working on the hardest data problems she can find. Her background includes designing unmanned vehicle systems; transport, intelligence, and disaster data systems, with an emphasis on how humans and autonomous systems work together; developing crowdsourced advocacy tools; managing innovations; teaching data science to Columbia's international development students; designing probabilistic network algorithms; and working as a pyrotechnician and as CTO of the UN's big data team. Her current interests are focused on misinformation mechanisms and counters. She founded Bodacea Light Industries to focus on this, worked with the Global Disinformation Index to create an independent disinformation rating system, and runs a Credibility Coalition working group on the application of information security principles to misinformation. SJ holds degrees in artificial intelligence, pattern analysis, and neural networks. Super excited to welcome these folks. Please give them a virtual round of applause. And yeah, let's get started.

Okay, so hello there. My name is Roger, and today we're going to be talking to you about cognitive security. It's a rapidly growing domain that interacts with both cyber and physical security, and it includes things like information operations and disinformation. Specifically, we want to introduce the tools, techniques, and resources for threat sharing, response, and practical applications. Here's what we're going to be talking about today: what disinformation is to threat intelligence folks, how to run distributed teams for combating disinformation, the standards you'll need to do so, and the tools required to do that. Over to you, SJ.

Okay, so next slide. We have to start by talking about what this thing is. Cognitive security is the thing we're trying to protect. Traditionally there's been cybersecurity (networks, wires, computers), and physical security has become part of that: the idea that you need to protect your physical domains. But there's also this idea of cognitive security: that you're protecting not just the computer networks and the physical domains, but also the mental domains around them. Part of that has been within cybersecurity for a very long time; it's always been there if you look for it. It's the idea that it's a lot easier to mess with people's minds to get at what you want than to mess with the systems.
Now we're seeing it turning up at very large scale: it's a lot easier to mess with people's minds to get what you want, for pretty much everything. So the thing you're seeing a lot of right now is disinformation. You're seeing a lot of disinformation campaigns, everything from healthcare disinformation and misinformation through to 5G conspiracy theories. And there are lots and lots of different motivations for it. People are doing this for money. They're doing it for attention. They're doing it for geopolitical aims. They're doing it for political aims. Lots of reasons behind it.

But focusing back on what it is we're talking about: how we deal with disinformation, how we deal with cognitive security, embedded inside a CTI team. So how do we do threat intelligence when the thing we're trying to protect is cognitive, and one of those threats is disinformation? There are lots and lots of definitions, and it's very easy to get hung up for a very long time on disinformation definitions, so we'll just use one working definition for the moment: the deliberate promotion of false, misleading, or misattributed information. The information you use in a disinformation campaign doesn't have to be false. You can use real information, but give it false attribution, put it in a false context, or use fake groups to amplify it in a fake way. So we have disinformation campaigns, and we're interested in the creation, propagation, and consumption of those. And we're interested in large-scale disinformation. We don't care that my grandmother thinks my favorite color isn't purple. We do care if the intent is to change beliefs in large numbers of people. We do care if the intent is to create confusion. We do care if the intent is to create harm, and we could talk about digital harms for a very long time as well. Next slide.

So originally we started working on AMITT looking at two levels. The first was the geopolitical level. Countries have the DIME model of different ways of affecting each other: diplomatic, information, military, economic. Traditionally a country had borders, and you had to find a way across those borders. Now every single one of these levers gets influenced by information, and anywhere you have information, you can use disinformation to influence it. So now countries influence each other using disinformation, and we've got these large-scale attacks coming country to country. Next slide.

We also got interested in what the business correlate of that was; there's an equivalent of the DIME model for businesses. Each of these things now relies on information, and each of these things can be corrupted by disinformation. But now we're in a different world. COVID has shown a lot of cracks in information societies. We're all at home, we're all relying on information. Not all of us are at home, but a large number of people are relying on online information for what's happening in the world. And there are unholy alliances forming between crowds like the anti-vaxxers and far-right-wing terrorists; there are people looking to make money off the back of these; there are all sorts of motivations and all sorts of squalls of things going on. And this looks very, very similar to the sorts of storms of information you get across networks, just at a human level. So next slide. So what we see, as people respond to this, are these layers of information.
As somebody creating a disinformation event, quite often you will be running these longer-scale campaigns. 5G is a very long-term thing; people are working on anti-5G. Anti-vaxx is a very long-term thing. But inside that you'll see incidents. The Stafford Act incident just took a few days: there was a rumor that the Stafford Act was going to be invoked and people were going to be locked down at home. "5G causes COVID" came up, appeared, went away again. And we keep seeing incident after incident. People generally run on stories. Narratives are the stories we tell ourselves about who we are, what's happening in the world around us, and who we're not. We have this idea of in-groups: who we belong to and who we don't belong to. You can use narratives to strengthen beliefs in what's happening. You can use narratives to strengthen beliefs in who you are. You can use narratives to confuse people. And you can use that to weaken whole populations. But as a data scientist, as a responder, what you tend to see are the artifacts. You don't see the campaigns, the incidents, and the narratives; you have to work out what those are from the messages, from the text, from the images, from the groups, from the accounts, from the social media, from the websites, and work your way back up this pyramid. So what we see in practice is that we track incidents, we look for narratives that tie those incidents together, but we work it all out from artifacts. There's a correlate here again with traditional TI. But this is where we come from. So next slide.
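A minimal sketch of that campaign-incident-narrative-artifact pyramid as a data model (illustrative only, not the team's actual tooling; the names and the example reuse the talk's own examples):

```python
# Sketch: modeling the pyramid responders work back up, bottom to top.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Artifact:
    """An observable: a post, image, account, or URL."""
    kind: str   # e.g. "tweet", "image", "account", "url"
    value: str

@dataclass
class Narrative:
    """A story tying artifacts together, e.g. '5G causes COVID'."""
    summary: str
    artifacts: List[Artifact] = field(default_factory=list)

@dataclass
class Incident:
    """A short-lived event, e.g. the Stafford Act rumor."""
    name: str
    narratives: List[Narrative] = field(default_factory=list)

@dataclass
class Campaign:
    """A long-running effort, e.g. anti-5G or anti-vaxx."""
    name: str
    incidents: List[Incident] = field(default_factory=list)

# Responders work bottom-up: collect artifacts, infer the narratives,
# group narratives into incidents, then tie incidents into campaigns.
stafford = Incident(
    name="Stafford Act lockdown rumor",
    narratives=[Narrative(
        summary="A national lockdown is imminent",
        artifacts=[Artifact("tweet", "copy-pasted 'friend at FEMA' text")],
    )],
)
```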
As with everything else, it's people, process, technology, culture, in every system. And this all starts with people. Everything starts with people. There are people generating this; they are using people, both as endpoints but also as carriers; and there are people responding to this. So next slide.

So we built CogSec Collab. It came out of the MisinfoSec working group we spoke about last year. CogSec Collab is a nonprofit that was built to bring together all the different people who could start building the CTI response: to create the resources you would need, create the processes, create the tools to start defending this cognitive domain, and to do this in a similar way to other CTI. And then COVID-19 happened. We had this great idea to build an American equivalent of the Lithuanian elves, which are groups that do volunteer disinformation response in Eastern Europe, and we were going to spend a whole year just setting it all up. And then COVID. So we found ourselves running and supporting community deployments around the world. So next slide.

And luckily, some of us have been here before. Ten years ago, I was lucky enough to be part of the crisis mapping community. I was one of the original leads, setting up processes and working out how to build data responses, so that the maps and datasets that people responding to disasters needed got into their hands during those disasters. And it was a very, very similar problem. There was data in lots of different places, there was social media, there was disinformation, there were information flows. There were volunteers who wanted to help, there were people who had tools, and it wasn't joined up. So we have a similar thing happening now. Now we have four teams that we're part of: the COVID-19 activation, which is a team that's working on information and has a disinformation feed over to us; COVID-19 Disinformation, which was set up by myself and a fellow former crisis mapper, who is now over at the Atlantic Council, and that's a research team which also has a disinformation feed and collection; CogSec Collab, providing the back-end process and tools support; and the disinformation team inside the CTI League, which we were lucky enough to be asked to set up. So we're going to talk about this. And as we talk about this, we're two humans here, two people talking, but we're talking about what a team is doing, what a team is still doing and still building. And I just want to take a second and say how immensely proud I am of all the people who've come in, brought their skills, brought themselves, and just stepped up and started building this thing. It's immensely impressive. Okay, next slide.

So that's the people building. There's no point building a pretty view of what's going on if you can't actually act. My friend Pablo has a saying for this, which I've completely forgotten at the moment, but it's basically something like "yeah, and then?" You've got to think about who can actually act at the other side. So you have the ISACs, which are the critical sectors to feed out to, and the ISAO and ISAC system; we've been part of helping set up the Cognitive Security ISAO. One of the reasons we picked the tools that we did was because the existing networks already use those tools, so we can feed straight in. We have a set of groups over to the left that we can connect to, and being part of the CTI League lets us connect to more people who can respond. Quite often, when you hear talks about disinformation, there's this assumption that, well, you can educate people, and the platforms can take stuff off their sites, and then there's a pause. There's an awful lot more that can be done, with an awful lot more responders, and we're going to talk about that at the end of this talk. But just a sense that there are more people who can get involved and are getting involved. Okay, next slide.

Let's get to the process bit. This is the tech setup we're using at the moment, and don't get hung up on what's in it; this is just the basic toolset we're using. Again, MISP and TheHive match the tools that other people are using. The Slacks, well, everyone uses Slack, so hell, we're building Slack bots all over the place. But the idea is that we've got alerts coming through, we've got collection coming through, and we analyze in tools that other people understand within the CTI world, so it's not a surprise to anybody. And we're going from alert to action. Next slide. Again, this is for reading later, but the sense of it is that you're flowing through this: you're not just collecting because you can make something pretty. When we collect, when we decide to start an incident report, we do it in the hope that we can make some difference. So: tell people you're doing this, make sure it's actually going to do something or has the potential to do something useful, start putting it somewhere other people can find it, and make sure it goes somewhere. So this is the plan. Next slide.
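A minimal sketch of that alert-to-action step (not the team's actual pipeline; the server URL, API key, and case fields are all assumptions), turning an incoming disinformation alert into a TheHive case with the thehive4py client:

```python
# Sketch: one incoming alert -> one TheHive case for the response team.
# Assumes a reachable TheHive instance and thehive4py (1.x API).
from typing import List
from thehive4py.api import TheHiveApi
from thehive4py.models import Case

THEHIVE_URL = "http://localhost:9000"  # assumption: local TheHive
API_KEY = "changeme"                   # assumption: your API key

def open_incident_case(summary: str, artifact_urls: List[str]) -> None:
    api = TheHiveApi(THEHIVE_URL, API_KEY)
    case = Case(
        title=f"Disinfo incident: {summary}",
        description="Artifacts:\n" + "\n".join(artifact_urls),
        tags=["disinformation", "amitt"],
        tlp=2,  # TLP:AMBER: share within the response community only
    )
    response = api.create_case(case)
    response.raise_for_status()  # thehive4py returns a requests Response

open_incident_case(
    "Stafford Act lockdown rumor",
    ["https://twitter.com/example/status/123"],  # placeholder artifact
)
```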
So let's look back at that collection part. That was the alert, how you start this; now let's look at how we actually get stuff into the systems. So next slide. We spent last year building out AMITT, and I'm sure we bored some of you to tears talking about how we built a disinformation version of the ATT&CK framework: Adversarial Misinformation and Influence Tactics and Techniques. The idea is that we needed a framework to understand how the bad guys would organize a disinformation incident, so it's at the incident level. If we can map the activities that they do and the techniques they use, we can then map counters to those techniques: ways to deal with those techniques. So we've got the framework, we've been building out the process, we're building out those counters, and we're building out tools we can use. Originally, we built AMITT as the MisinfoSec working group under the Credibility Coalition. We've now moved it over to MITRE, who are taking it on, which is wonderful, because then they get AMITT and ATT&CK together. But for the moment, we're running it under CogSec Collab, because we're using it. So next slide.

This is what it looks like in practice. We built this by looking at every type of stage-based framework that might be useful. We looked at the ATT&CK framework and other infosec frameworks, and we couldn't get any of those to quite fit disinformation. We looked at advertising technology, the funnel frameworks that are used to describe how people move through ad tech. Those are really useful for describing how people get radicalized, and there are parts of those in AMITT. In fact, there's a stage missing off the back of this, which is the monitoring and evaluation stage: you go through a campaign, and then you check how well it worked and adjust. So there are pieces all the way through this. There's also a task list behind AMITT that you don't see on the front end, but these are the techniques. Just like ATT&CK, there are tactic stages, and there are techniques within each stage that you see happening. And now I'm going to hand it over to Roger, because he's been quiet for way too long.

Okay, thanks, SJ. So we have the AMITT framework, and we have some ideas for talking about disinformation in general, but we need to make that practical. We need to actually apply it and be able to do work with it. So this is an example probably most people are familiar with. Plandemic is a conspiracy theory video that recently came out and makes some pretty dubious claims about the nature of COVID-19. And it's an interesting example to look at, because it doesn't just resonate with the normal conspiracy crowds; it really threatened to become mainstream, for a number of reasons, in part due to the high production quality and some of the specific techniques it used, which we'll get to in the next slide. But when we look at this example, when we look at this influence operation, we want to be able to deconstruct it, specifically enumerate the individual techniques, so that we can go and counter them. In this case, Plandemic relies very heavily on an appeal to authority: it uses the technique "use fake experts." Dr. Judy Mikovits is in fact a discredited doctor, but the message resonates because it suspends the viewer's disbelief. A person who would perhaps otherwise not take the message seriously thinks to themselves, well, this is a doctor, so perhaps we should entertain this thought. And that's pretty interesting. We can then go and counter this technique by focusing on things like her credentials, the individuals involved, whether the science debunks this, whether there's previous research that says this is false. But that doesn't necessarily matter that much. The damage is already done.
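A rough illustration of that deconstruction step (a sketch only; the technique IDs below are placeholders, not real AMITT catalogue entries):

```python
# Sketch: enumerate the techniques observed in an incident, then look
# up counters per technique rather than per incident.
plandemic = {
    "incident": "Plandemic video",
    "techniques": [
        {"id": "T0xxx", "name": "Use fake experts",
         "evidence": "discredited researcher presented as an authority"},
        {"id": "T0xxx", "name": "High-production-value content",
         "evidence": "documentary-style footage and editing"},
    ],
}

# Counters map to techniques, so they transfer to the next incident
# that reuses the same technique.
counters = {
    "Use fake experts": [
        "surface the expert's actual credentials and history",
        "amplify debunks from recognized authorities",
    ],
}

for technique in plandemic["techniques"]:
    for counter in counters.get(technique["name"], []):
        print(f'{technique["name"]}: {counter}')
```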
And we see this with fake experts in other fields, such as anti-vax science, climate change, or whatever. This technique, using a fake expert to push out a narrative, is effective because it's a human story. It plays into this narrative of a lone researcher who is out there trying to defend herself against the powers that be, the political and scientific elites that would crush this theory. And that's not about the science; that's about the human story.

So, moving on. For each of these techniques that we're looking at, whether it's Plandemic and these appeals to fake experts or the use of bots to amplify a message, we need to be able to work with these ideas quickly and map them out, so that we can model these behaviors and model them as a progression over time. And so one of the first tools that we turned to was the MITRE ATT&CK matrix and the ATT&CK Navigator, and we retrofitted the Navigator to work with AMITT. So now we're able to just click through the UI, create layers for red and blue team planning to model out what the attacker is doing, bootstrap our ideas for how to counter them, and then export this into other tools such as MISP or threat intel reports, however we want to use it. MITRE recently upgraded the ATT&CK matrix so that it supports sub-techniques, which is really interesting. Previously, you had something like PowerShell as a very clunky kind of technique; lots of things would be contained in it. The idea we have now is to go the same route, using sub-techniques to refine things further and get a little more granular in what we're representing here. So that's in the works. Okay.

Okay, this would be me again. So that was the techniques and tactics and how we represent those. We're also looking at how we fit things like narratives into MISP specifically. And we've just started looking at how narratives grow and die, because narratives also appear and disappear and attach to different groups in different ways. So we've looked at the CMU list of COVID-19 narratives, and we're looking at other lists from other groups. We're also seeing narratives being combined. COVID is interesting because we're seeing COVID narratives combining with conspiracy narratives from elsewhere. So there's a whole bunch of interesting work from storytelling to be brought in here. I'm just putting this in as a placeholder: we're on this, but there's work to do. If anyone's interested in playing with this, you can play. Next one. Fun. Oh, shit. Okay. Yeah. Okay, so I guess we're gonna race through this.

So a lot of the work we did in the last year was really to bootstrap our set of tools, and putting everything in STIX was a requirement. MISP can use STIX, most TIPs do use STIX, and it's a standard for the ISAOs and ISACs, importantly. So we've added some new objects; we have new SDOs and SROs in the works to further extend the language that we can use with AMITT. But one of the major things we recently concentrated on was MISP. We did a lot of work to get AMITT included and working natively in MISP, so that we're able to tag events with the appropriate AMITT techniques. And, importantly, we recently set up a new MISP sharing community specifically for disinformation. So we're working with friends in NATO, Canada's RRM, and other partners to start sharing and collaborating on information operations in a dedicated channel.
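A minimal sketch of that event-tagging step with the PyMISP client (the server URL, API key, and exact AMITT tag name are assumptions; check the galaxy and taxonomy names on your own MISP instance):

```python
# Sketch: push a disinformation incident into MISP and tag it with the
# AMITT technique observed. Requires: pip install pymisp
from pymisp import PyMISP, MISPEvent

misp = PyMISP("https://misp.example.org", "YOUR_API_KEY", ssl=True)

event = MISPEvent()
event.info = "Plandemic video amplification"
event.distribution = 1  # this community only

# The indicators alone aren't the point; the relationships are.
event.add_attribute("url", "https://example.com/plandemic-mirror")
event.add_attribute("twitter-id", "example_amplifier_handle")

# Tag with the observed technique (tag string assumed, not canonical).
event.add_tag('misinfosec:amitt="use-fake-experts"')

misp.add_event(event, pythonify=True)
```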
So yeah, AMITT and MISP: we'll be working to set up tags and new objects that are specific to information operations. And the DFRLab Dichotomies of Disinformation is a set of tags specifically for talking about the nature of an operation and its relationship to states or political actors or whatever. And this is why we're doing it, this is why we want to use MISP: really, it's about showing the story that's happening. It's not just collecting the indicators. We don't really care to just have a set of URLs or a set of Twitter handles or posts. What we actually want to do is represent how those relate to each other and how they're being used. Does a certain Twitter handle cross-post to a set of blogs, and how does that relate to domains that are owned by some party, or whatever? As we collaborate in the MISP community, hopefully we can flesh out and begin to enrich these models and push those out to a wider distribution group. Okay, next slide.

Analysis. Next slide. So this is about making sense of what we found. Graph analysis is big in what we do: super-spreaders, the people who are actively spreading disinformation; finding the origins of rumors, which seems to track back to money an awful lot; finding new artifacts and tracking those movements over time. This is basically about how disinformation spreads over time, getting a handle on that, and being able to block it. Next slide. Image analysis, looking for similar images; we haven't done much looking at shallowfakes, and we haven't seen a lot of that yet. Next slide. And some early work on narrative detection. Next slide again.
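A minimal sketch of that graph-analysis idea (the edge data is made up): build a directed share graph of who amplified whom, then rank likely super-spreaders with networkx:

```python
# Sketch: rank super-spreaders in an amplification graph.
# Requires: pip install networkx
import networkx as nx

# (source, amplifier) pairs, e.g. observed retweets or cross-posts.
shares = [
    ("origin_blog", "account_a"), ("origin_blog", "account_b"),
    ("account_a", "account_c"), ("account_a", "account_d"),
    ("account_b", "account_d"),
]

g = nx.DiGraph(shares)

# High out-degree: this node's content gets amplified by many others.
out_degree = dict(g.out_degree())
# PageRank on the reversed graph weights downstream reach, so the
# origins of widely re-shared content score highest.
reach = nx.pagerank(g.reverse())

for node in sorted(g, key=lambda n: reach[n], reverse=True):
    print(f"{node}: amplified by {out_degree[node]} accounts, "
          f"reach score {reach[node]:.3f}")
```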
And then to action. Next slide. So this is the last section, looking at what we can actually do, and it breaks down into four different parts. There are actions that already exist, like taking down the botnets and educating people; and then there are tactic-level actions, technique-level actions, and doctrine-level actions. The doctrine-based actions are, I think, actually some of the most interesting stuff. Over to you, Roger.

Okay, so yeah, we spent last year looking at countermeasures, thinking of things that we can do. Ultimately, the way we organized those into buckets was based on JP 3-13 effects: actions by which we can deny, disrupt, degrade, destroy, and so on. We want to have some type of effect on adversaries' capabilities. So we looked at those, we organized them under that model, and then we started to put them into meta-categories, which means: can we cluster a set of disruption activities in such a way that it really belongs in its own bucket? Maybe adding friction to a platform to make it harder to use sockpuppets, for example. And once we have these, we can then start applying them to the individual tactics and techniques that are in the AMITT framework. What you're looking at right now is simply an account of our current progress on which techniques we're able to counter. And that's how we started, but that's not really how we finished; it's not what we're doing anymore. SJ and Pablo and Greck had a conversation at some point about the nature of disinformation operations, and what came out of that, essentially, was the idea that influence operations have some essential requirements. You need resources; it takes time, money, media infrastructure. And not everything there is universally scarce, but time, at least, is universally scarce.

And so with these critical elements, we can apply those to our countermeasures, or think of our countermeasures in reference to those critical elements, and ask questions about how we can disrupt them. So for example, when we're looking at resource exhaustion, things like money and audience: are there ways that we can waste an adversary's money, or make an operation so expensive that it's no longer worth it? Or, in reference to execution, or time: can we waste their time so that they're not able to achieve their goals in the required timeframe? An example of that being the 2020 election: if you're trying to meddle in the election, it only makes sense to do your actions prior to people voting. Afterwards, it doesn't matter. So if we can throw a wrench in that or slow it down, that's a specific element that we can apply effects to for that purpose. But to do that, you kind of need to know a little bit about who you're dealing with. You can't just go and speculate. You actually have to model who your adversary is, or do some research to figure out how they're resourced and what their limitations are. This was a super interesting example, one of my recent favorites: Double Deceit. The nature of this operation is that it's essentially a dozen or so young people sitting around a table with smartphones, posting all day long, in Ghana. So it's a very low-resourced operation. There are one or two managers, a bunch of kids, very little technology, very little administrative oversight or management as far as we're aware, and no automation, no bot capabilities. It was just people typing posts into their phones. So how do you disrupt that? What can we do? Well, there are a number of ways. One of the things that came to mind, that we looked at, was disrupting their ability to understand their metrics, to know whether or not they're actually being effective. If we can engage the adversary in such a way that we mislead them as to whether their messaging is successful, then given that they're a low-resource actor who may not be able to see the bigger picture, we might be able to push them in a direction that's counter to their goals. So we take their time, waste their money, keep them busy doing whatever, but ultimately aim for their failure. And so we started to model this in our playbooks, which is what we have on the screen right now. We break this down into individual response actions, a particular thing we want to do, like de-platforming an account. And then we bundle those up into a bigger playbook that's situation-appropriate or actor-appropriate, based on the critical elements we want to attack and the effects we want to have on those elements. So that's the core idea of us building playbooks, and I think that's where most of our time is going right now.
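A minimal sketch of that playbook structure (illustrative only; the action names, technique labels, and actor profile are made up):

```python
# Sketch: individual response actions, keyed by the critical element
# they attack and the intended JP 3-13 style effect, bundled into an
# actor-appropriate playbook.
from dataclasses import dataclass
from typing import List

@dataclass
class ResponseAction:
    name: str              # e.g. "de-platform amplifier account"
    counters: str          # AMITT technique it counters (label assumed)
    critical_element: str  # "money", "time", "people", "infrastructure"
    effect: str            # "deny", "disrupt", "degrade", "destroy"

@dataclass
class Playbook:
    """A situation- or actor-appropriate bundle of response actions."""
    actor_profile: str
    actions: List[ResponseAction]

low_resource_troll_farm = Playbook(
    actor_profile="small manual posting operation, no automation",
    actions=[
        ResponseAction("pollute their engagement metrics",
                       counters="amplify via inauthentic accounts",
                       critical_element="time", effect="degrade"),
        ResponseAction("de-platform the manager accounts",
                       counters="coordinated posting",
                       critical_element="people", effect="deny"),
    ],
)
```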
I know we're short on time and I wish I could get into this a little more, but yeah, I'll hand it over to you, and I'll just say thanks so much for listening to us today. Thank you. So back to you, Esther. Oh no, just thank you very much. Oh, sorry about that: I was talking while muted. Great with technology. So thank you so much, folks. This was amazing. I know that MISP is a really popular subject among our participants, and so is this way of working with it and its application to, you know, the world situation at the moment. So I'm just going to attempt to show the questions. So let's go right to it.

How do you account for an irrational party, or one that is strongly biased by beliefs over facts? Ah, this is the whole working-on-emotions-rather-than-facts thing, I think. A lot of disinformation (remember I said that it doesn't have to be false) does actually play with emotion; you can play in that space. A lot of really good disinformation works on network effects, so you're looking at who's connected to a person and how you get information to those connections. Quite often the counters can be very similar to the problem. One thing that has become very apparent with COVID is that countering disinformation isn't just about removing the disinformation. It's not just taking out the problem. It's also a twin thing: you have to produce better information sources. Part of the reason disinformation has become so good is that there are information deserts. There are not those strong sources of clear, good information that everybody recognizes and goes to; especially, unfortunately, there are some countries where the state would normally do that and doesn't have that state representation. So yes, some of this is to produce or amplify better information sources, and the platforms putting up links to better information sources (every time you look up COVID you'll see the WHO marker on some of them) has been a good counter. I'm not sure if that's enough, Roger. Yeah, so I think one thing to add to that is that there isn't always a counter. When we're looking at disinformation and misinformation, there isn't always a counter, and that's just something we need to bear in mind. If somebody has a deeply entrenched irrational belief, like "bleach is good for you," that's going to be really difficult to steer them out of. And so I don't know if attacking that head-on is always a solution; rather, try to maybe steer them to other sources, or distract them with something less harmful to other folks. Yeah. Well, nudge.

Awesome. All right, so the next question is: do you attempt to do attribution? If so, how successfully? Sometimes we try to track back to source. I have tracked back to source where there have been private individuals, and not attributed. So sometimes the best thing to do is to not give attribution. Sometimes it's not helpful, and you don't want to be doxing somebody. Also, attribution is hard. There are countries and organizations who work hard to look like somebody else. So sometimes what you're trying to track is not necessarily attribution but motivation. Why is somebody doing this thing? Why is somebody suddenly getting excited about MH17 in the middle of an anti-vax conversation? Or the base question is: can we stop this or slow it down? It's not: who did this? I think the one thing we can look to in that respect, though, and SJ, you're touching on it, is the narrative analysis. We can pool the narratives that we suspect an actor is producing and ask questions. Who does this benefit? Does this narrative help actor X, yes or no? That won't necessarily tell us who that person is, but it can at least hint at their motives, and that might be as good as we can get in some cases. That makes sense. All right, next question then. You mentioned money is often behind the source of the rumor. Can you trace your way to an idea of what kind of interests this money has, such as government, corporation, mercenary? Oh, in COVID there's an awful lot of people who are just selling books or selling talks.
A lot of essentially private individuals selling their wares, and we've seen everything up to and including t-shirts out there. There's also some big money behind some of the protests and stuff, but more often than not it's been small money. A large part of it is just ad revenue, just clicks, right? Clicks and merchandise. Yeah, that's kind of sad. All right, another question. Do you know of tools using AI in combination to have a situational big picture, or would there be no way for such things to be efficient? We actually, as part of CogSec Collab, have a data science subgroup. I'm a data scientist, so we kind of play with AI. The narrative stuff we skipped through: we did some clustering analysis. We basically scraped EUvsDisinfo, scraped out their text, and did clustering analysis on that text looking for narratives (see the sketch below). So yes, we can, and yes, I think it would be useful. It's just on the to-do list at the moment. Maybe someone watching wants to get on it. Yeah, one of the things I saw, it's not really AI, it's more NLP: MITRE did a project, I think it was called TRAM, for parsing CTI reports and then matching the sentences or words or whatever to the MITRE ATT&CK matrix. That's something that I've been eyeballing, but it just requires a lot of initial manual parsing of those reports to build your dataset. But if anyone wants to help me, then, you know, we could make that happen. Yeah, I mean, I also gave a talk at AI for Good with a wish list in the back of it. So there's a list there if you want to go play. Awesome. All right, I think we have time for one last one. Have you found a way to create a common operating picture, possibly using an AI dashboard, that can show real-time events with the possible events that caused them? Any thoughts? That's not a bad thing to aim at. I mean, we've got TheHive with the cases, we've got MISP with the situations in there. We're not too far off just dashboarding it, but it's about making it make sense. Yeah, everyone go sign up. All right, well, thanks so much. This was really a pleasure, and yeah, have an excellent evening. Everyone who's in the audience, please give a virtual round of applause for these folks, and also a special shout-out to Roger's extremely, I don't know, spacecraft kind of battle-station situation. All right. Thanks so much. Thank you.
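A minimal sketch of the narrative-clustering approach mentioned in the Q&A above (the claim texts here are made up; a real run would use scraped EUvsDisinfo claims):

```python
# Sketch: TF-IDF vectors over claim texts, clustered with k-means to
# surface candidate narratives. Requires: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

claims = [
    "5G towers spread the coronavirus",
    "5G radiation weakens the immune system",
    "The coronavirus was engineered in a lab",
    "The virus escaped from a laboratory",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(claims)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Print the top terms per cluster as a rough narrative label.
terms = vectorizer.get_feature_names_out()
for i, center in enumerate(km.cluster_centers_):
    top = center.argsort()[::-1][:3]
    print(f"cluster {i}:", ", ".join(terms[j] for j in top))
```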