I'll start. I'm just giving the background, the other work we've done around this. Back at the beginning, a lot of our work was on how you apply information security principles and practices to disinformation and other online harms. We framed this as there being three layers of security. You have physical security: breaking into stuff, stealing the box. Cyber security: the networks and endpoints of machines that we know and love. And then this third part, which is cognitive security: the endpoints are human brains, their beliefs, our emotions, senses of belonging, senses of community, and the networks between those communities, ranging from small communities up to country level. And we're looking here at a set of online harms that includes misinformation, disinformation, malinformation, and hate speech, primarily disinformation. Misinformation is false information that might not be intentionally spread, but it's getting out there and it's causing problems. Examples: a lot of the COVID narratives, the fake cures, the "Black people can't catch COVID" type stuff. Disinformation tends to be more deliberate. It's someone actively trying to mess with those beliefs, emotions, senses of belonging, and other human things, and generally you've got somebody creating it. The information doesn't have to be false. The falsehood could be in things like fake amplification; it could be in things like fake emotional responses. So it's a much more structural thing. As part of this, we built frameworks. The AMITT frameworks are how you talk about behaviors within this, but also how you talk about the whole object space and share rapidly. So next slide. Next slide, please. Well, I'm just going to talk. One of the first things we've done this year is frame it differently. We've moved people up from looking at just objects to talking about narratives, to talking about incidents and how they fit within a larger framework. Now we're talking about it as a risk management problem. In risk assessment, you're looking at things like attack surfaces, you're patching, you're making response plans; it works the same in disinformation. So: managing the risks. Part of the problem is that this stuff is everywhere. Everything's on fire. If everything is on fire, where do you put your effort? You have a set of resources that you can use, but how do you understand what the problem is? What are the attack surfaces? What are the vulns? How do you patch? And when we talk about risks, we're talking about: how bad is it? How big is it? How far has it got? Who is it affecting, and when? Standard risk management. Next slide, please. And the way we've been doing this is by starting to look at the landscapes involved. We are always working in an information landscape, and we're also working in a security landscape. How are people looking for information? How are they sharing information with each other? It's not the same thing. You'll find people looking on Facebook and sharing on WhatsApp: different platforms, different styles. Where are the information sources? Where are the trusted sources of information, but also where are the injected sources of information? And the voids. A void is a space where somebody is looking for information, but that information doesn't exist in that space. It might be URLs, it might be hashtags, it might be groups, but those are ripe for disinformation. So this is the first landscape.
We start gardening that landscape. The risk landscape: this is the one that we generally understand. We're talking about the how-bad, what these things are. Who is in this space? What are the motivations? Where is stuff coming from? A lot of the time we're tracking back to the source of a disinformation narrative or artifact. What are the effects it has? Narratives are the stories we tell ourselves: the identity narratives of who am I, in-group, out-group, who do I belong to, who don't I belong to. And there are crossovers. For instance, you see the combination of COVID and 5G narratives becoming COVID-5G, and we've had this giant snowball of narratives in the past year. Tactics and techniques: what are the behaviors that people are engaging in as part of this information creation? And the artifacts, the things you can poke at: messages, images, groups, accounts, and the links between them. But the landscape people usually forget is the response landscape. Who is actually out there doing something, and what are they doing? Who is monitoring the problem, but also who is mitigating it and countering it, and is there coordination between them? We're looking for things like existing policies, we're looking for technologies, but generally we're looking for: what are the groups in this space, and how do they hang together? Next slide, please. And one thing we've been doing is adapting the idea of SOCs, Security Operations Centers, to become cognitive security operations centers. So next slide, please. What does this look like when you're building a SOC, or part of a SOC, for disinformation? We look at the actors. Disinformation actors we've basically broken down into the big persistent actors, the advanced persistent manipulators; the service providers, since there's a growing industry for disinformation as a service; and then the one-off opportunists: people selling t-shirts on the back of a campaign, conspiracy groups, attention-seeking individuals. And next slide, please. That's the creators, but we have a similar set of actors for the response. We're looking at disinformation SOCs, and we're looking at what it means to be a disinformation ISAO: a large organization whose point is to connect together different efforts, sending out alerts, coordinating responses. What are the large actors? You have the platforms, anybody for whom disinformation is a major problem within their space. You also get event-specific groups, war rooms set up for things like elections in different countries: groups coming together, responding, breaking up again. That's SOC-sized. But within that space you've also got a mid-range, the size of a desk in a SOC. You might have a disinformation desk inside an existing SOC, so we've been working out what that looks like. How does it connect? How do they talk to each other? It might also sit on its own; most at the moment are on their own. And we're looking at existing examples of teams, journalists, academics, independent researchers, working out how to do this as a small group not connected out to the infosec SOCs. And then you get to individuals again: individuals in organizations, individuals on their own. So, different scales. Next slide, please. And one of the things we've been doing in this space is making a list of everybody we know who responds. There are lists already out there, from CMU, the Credibility Coalition, and other providers. And we took the lists we could find.
And we've been looking for other actors, looking at them country by country. You can also do this vertical by vertical, area by area: water, nuclear, the usual ISAC groups. But if you look at a country's response, you can look at what the country's issues are, what types of disinformation and what types of techniques are being used, and then see if that matches the types of responses that are there. It's no surprise, really, that most responses are fact-checking, but disinformation is somewhat bigger than that. So next slide, please. And looking at what they actually do all day: generally, you've got three parts to this. We've been looking at this as risk mitigation. What are the things you can do before you're in the middle of a disinformation campaign and responding to it? How do you secure your systems? Last year, we red-teamed every week; we work on simulations, looking from the bad guys' point of view at the problems we're likely to have, and then patching them. How do you build resilience? Most of those moves are about educating people, which makes your system more resilient. Compliance is for later; we think there will be compliance, the same way you get infosec compliance. We're just keeping that on the list. Enablement: before you can do a lot of this, you have to do the foundation work, and a lot of that is people-based. You train people. You train infosec people about disinformation response; you train disinformation people about how infosec responses and SOCs work. Coordination: there's always a coordination piece. And then there's the data engineering and the information frameworks needed to share information rapidly, which is what we're actually talking about today. And then the real-time part. Most people see the real-time "shit's on fire, let's go do something about it," and that, again, is discovery: how do you find out you've got a problem? How do you investigate it? How do you respond to it? Behind that is much longer-term risk intelligence, investigation, attribution type stuff. Next slide, please. I am not going to go heavily into this slide. This is just four different configurations. One is the ISAO-sized one, where the whole point of the organization is to act as a very large cognitive SOC. To the left is what we think we're going to see in big platforms, war rooms, et cetera: you're likely to get a cognitive SOC large enough to stand on its own, outside the infosec SOC, which has to talk to that SOC but also to legal, comms, and the business units that can respond, as well as to other organizations, other ISAOs. Mid-range, in the middle of this, is what we think is actually going to happen in most businesses: you'll get a desk, a disinformation desk, that is embedded within and talks to an infosec SOC. I've just talked through the ideas of disinformation risk, cognitive security, and cognitive security operations centers. So now I'm going to pass over to Roger, who's going to talk about the plumbing, the way we actually get these groups to talk to each other and work with each other. Roger, over to you. Great. Thanks for the awesome introduction, SJ. And thanks also to NorthSec for hosting us. I'm really happy to be back this year to talk about the AMITT framework and our updates. Part of the enablement problem that SJ mentioned is having a common lexicon and models to rapidly share alerts with. The high-level entities that we model are shown here.
At the top, we have the disinformation creators, who are creating longer-term campaigns, for example to destabilize French politics. Below that are the incidents they create, which can be viewed as short bursts of messages around a specific topic or event. Below that, the narratives, which are the level most disinformation works at: the stories we tell ourselves about who we are, who we belong to, who we don't belong to, and what's happening in the world around us. And at the bottom, as responders, what we generally see are the artifacts: the messages, the images, the accounts, and the relationships and groups between these messages. This pyramid is fine as a high-level description, but we need to look a little deeper before we can start to model the entities and relationships of a disinformation campaign in a way we can share. And so over the past few years, we have adopted and adapted principles, processes, and tools from information security. This image is a STIX diagram; STIX is a message format used by ISACs and other infosec bodies to share threat data. The pyramid layers that we saw in the last slide are outlined here in pink. The campaign objects include threat actors and campaigns. Below that, the incident objects have the techniques used by both the incident creators and the defenders. And below that are the artifacts, which include observations, accounts, and other objects that we can use to identify these activities. This is another STIX example. This one shows links between narrative-based and socio-technical models, in this case Camille François's Actor-Behavior-Content (ABC) model, which categorizes entities into deception vectors. Now, whether we're using STIX or MISP or some other method, the important idea is that we can model the entities and relationships of a disinformation campaign, much like we do in information security. And with the AMITT frameworks, we're specifically interested in the behaviors, or TTPs, because if we can model the behaviors of information operations, then we can start to build higher-quality analysis of how these operations progress, their resource constraints, and ultimately the responses we'll need to counter individual behaviors. So this is the AMITT Red framework. It's the disinformation version of MITRE ATT&CK, which we built to model the stages and techniques in disinformation creation. At the top, the first row is the sequence of operational phases. Information operations start with initial planning; they transition to the preparation phase, where we actually prepare to execute the plan; then to execution, where we're performing the actions in our plan to achieve some desired effect; and finally evaluation, where we measure whether we did the right things and did those things well. Below this, in blue, each operational phase is split into tactic stages, which organize the techniques, in gray, that enable you to complete each phase. And so using this matrix of TTPs, we can visualize the techniques used in an information operation. But most importantly, we can model the sequence of techniques used to achieve some effect on a target audience.
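To make that kind of entity-and-relationship modeling concrete, here is a minimal sketch using the stix2 Python library. The actor, campaign, technique ID, and indicator domain are all invented for illustration, and AMITT narrative objects (which aren't standard STIX and would need custom extensions) are omitted.

```python
# Minimal sketch: a disinformation campaign's entities and relationships
# as STIX objects, assuming the stix2 Python library. All names and IDs
# below are illustrative placeholders, not data from a real incident.
from stix2 import (AttackPattern, Bundle, Campaign, Indicator,
                   Relationship, ThreatActor)

actor = ThreatActor(name="Example Persistent Manipulator",
                    threat_actor_types=["nation-state"])
campaign = Campaign(name="Example election influence campaign")
technique = AttackPattern(
    name="Create fake social media profiles",
    external_references=[{"source_name": "AMITT",
                          "external_id": "T0007"}],  # illustrative ID
)
artifact = Indicator(  # an observable artifact: an imposter news domain
    name="Imposter news site",
    pattern="[domain-name:value = 'examp1e-news.com']",
    pattern_type="stix",
    valid_from="2021-01-01T00:00:00Z",
)

bundle = Bundle(
    actor, campaign, technique, artifact,
    Relationship(campaign, "attributed-to", actor),
    Relationship(campaign, "uses", technique),
    Relationship(artifact, "indicates", campaign),
)
print(bundle.serialize(pretty=True))  # JSON that SOCs and ISACs can share
```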
And we've made the AMITT framework available in a number of tools, and because AMITT is deliberately similar to ATT&CK, you can use most ATT&CK-compatible tools with AMITT. Each technique in the AMITT Red framework has a page like this, which describes how it works and the incidents in which we've seen it being used. This technique describes amplifying a message, or adding credibility to it, by targeting high-influence people and organizations. If we can bait a person into responding by appealing to them somehow, we gain some influence and credibility with their social network. We see this a lot with conspiracy theories, with, I don't know, digital currencies, or any place where celebrity is used to give credibility to some idea, even when that idea on its own may not have much value. And over the past year, we've been working on improving the fidelity of the AMITT framework by adding sub-techniques. This update is the result of a huge amount of work with groups in the disinformation community, collecting recommendations to augment the AMITT framework. And we're really proud to be showing this development model for the first time at a conference, so you heard it here first at NorthSec. Like MITRE ATT&CK's, the AMITT sub-techniques describe a specific implementation of a technique. So where before we had a technique like social media account creation, we can now describe in greater detail how those accounts might be created or acquired or stolen or whatever. And this is actually really valuable, because sub-techniques give us greater precision when we're describing information operation techniques. And for responders, precision is important because it makes it easier to communicate what you're working with, the exact techniques observed during an incident, but also to respond with appropriate, tailored countermeasures for that technique. AMITT sub-techniques are already compatible with most ATT&CK tooling, such as the ATT&CK Navigator shown here. We'll continue to refine the sub-technique framework, but expect a public release sometime in the near future. So I mentioned that AMITT sub-techniques describe a specific implementation of a technique. Here at the top, in dark green, we have a content development technique called cheap fakes, which is the deceptive alteration of media using low-tech methods. And below it, in light green, are the sub-techniques, or implementations, of that technique, such as digital alteration or selective editing. So for responders, when they say that some photo or video is fake, we want to equip them with a proper language that describes exactly how that was accomplished, and eventually with processes and tooling that make it easier to respond to that particular problem. Anyways, not all AMITT techniques have or need sub-techniques, but many do. All the boxes on the image I showed with a gray drop-down currently have sub-techniques, and we're going to continue to develop them as we build up to a release. But for now, you can find all of our work on the Cognitive Security Collaborative GitHub page.
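A rough sketch of what that parent/sub-technique structure looks like in data, using the cheap-fakes example above; the dotted numbering follows ATT&CK's convention, but the IDs here are hypothetical placeholders.

```python
# Sketch: ATT&CK-style parent/sub-technique numbering applied to an
# AMITT technique. The IDs below are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class Technique:
    tid: str                      # "T0086" (parent) or "T0086.001" (sub)
    name: str
    subtechniques: list = field(default_factory=list)

cheap_fakes = Technique("T0086", "Develop cheap fakes", [
    Technique("T0086.001", "Digital alteration"),
    Technique("T0086.002", "Selective editing"),
])

def describe(tech: Technique) -> None:
    """Print a technique and the specific implementations beneath it."""
    print(tech.tid, tech.name)
    for sub in tech.subtechniques:
        print(f"  {sub.tid} {sub.name}")

describe(cheap_fakes)
```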
Okay. So the other thing we need to talk about is countermeasures, and the AMITT countermeasure framework that we built. Before I get into that, we'll just do a quick recap of effects. This slide shows the effects summarized from JP 3-13, which is the US government's Joint Chiefs information operations manual. When we talk about the effects of a countermeasure, what we mean is the effect it has on a target, and each of our mitigations and countermeasures will produce at least one of the effects shown here. The effects range from denial, which is completely stopping or denying an adversary capability, to deception, which diverts or deceives an adversary into a less desirable state, to deterrence, which discourages them from maybe taking an action at all. Disinformation countermeasures span the full range of these effects. But there's a second axis we also need to consider when looking at counters, which is shown here as the Sliding Scale of Cyber Security by Rob Lee. On the left of the scale, we have the planning and maintenance of systems. These are architectural considerations: things like education and media literacy, or public policy around information operations. And on the other end of the spectrum, we have the offensive capabilities, which include stuff like legal countermeasures or bot takedowns or whatever. That distinction is actually really important for us, because not all actors are permitted to respond in the same way, or have the resources needed to form appropriate mitigations. And not all countermeasures are intended to be reactive; you can't go and educate a population after things have already collapsed, for example. So what did we do with all of our countermeasures, and how do we deal with this problem? Currently we have 140 countermeasures and mitigations, and over the last year we've been tweaking and organizing this list to refine their properties and how they affect the techniques of the AMITT framework. This table shows, for each AMITT tactic stage, the total number of effects which can be applied to it. So, for example, if we're trying to counter an influence operation and we're specifically interested in the Develop People tactic stage, which is the stage at which accounts and personas and people are developed, this table shows us that there are seven deny counters, four disrupt counters, and so on available to us to use as counter moves at that point. And this table, along with others, is useful for visualizing our mitigations, and they're all available on GitHub with the rest of our work. In GitHub, each countermeasure has a page with its description, and the page lists the individual techniques and tactic stages it can be applied to. We're working on fleshing out the actor and sector information, which will help responders understand who's able to act on particular counters, as well as completing some of the summary information that will make these more accessible to a wider audience. We're also working on adding resource requirements for each counter, which will help responders assess resource allocation for counter-disinformation campaigns. You can expect to see more updates from us on that in the near future. So this is the AMITT Blue framework. This is the new framework where we've collected and organized all of the counters and mitigations for each AMITT tactic stage. Like the AMITT Red framework, the Blue framework uses the same operational phases at the top, in purple and red, and the same tactic stages below, in blue. The AMITT Blue framework is complementary to the AMITT Red framework: where the Red framework tells you what moves are available when executing an information operation, the Blue framework tells you the corresponding counter moves and mitigations that can be used to defend against one.
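A toy sketch of the red-to-blue lookup this enables: given an observed technique, list candidate counters together with their JP 3-13-style effects, optionally filtered by the effects a given responder is allowed to produce. The mapping entries here are invented placeholders; the real data lives in the AMITT Blue repository.

```python
# Sketch: look up candidate counters (and their intended effects) for an
# observed AMITT technique. The mapping is an invented placeholder; the
# real countermeasure data lives in the AMITT Blue repository.
COUNTERS_BY_TECHNIQUE = {
    "T0007": [  # e.g. fake social media account creation
        {"counter": "Platform identity verification", "effect": "deny"},
        {"counter": "Report inauthentic account clusters", "effect": "disrupt"},
        {"counter": "Rate-limit new account activity", "effect": "degrade"},
    ],
}

def counter_moves(technique_id, allowed_effects=None):
    """Return counters for a technique, filtered by permitted effects."""
    moves = COUNTERS_BY_TECHNIQUE.get(technique_id, [])
    if allowed_effects is not None:
        moves = [m for m in moves if m["effect"] in allowed_effects]
    return moves

# A responder with no takedown powers, limited to disrupt/degrade moves:
for move in counter_moves("T0007", allowed_effects={"disrupt", "degrade"}):
    print(move["effect"], "->", move["counter"])
```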
And it's worth pointing out that some of the most effective mitigation strategies, or at least those with the greatest reach and impact across the AMITT Red framework, are the countermeasures and mitigations that happen left of boom, at the planning and preparation phases. That means things like privacy regulation, media literacy education, and public policy are not just important but necessary for effective disinformation response. We're using both the AMITT Red and Blue frameworks in the real world, partnering with organizations and community groups to identify and respond to disinformation. These are counter-disinformation groups like RealityTeam.org, who use targeted counter-messaging and metrics to push back disinformation narratives. And as we do this, we're learning from each other and updating our models to ensure that the techniques and countermeasures we describe both reflect real-world operations and are useful to the community. Like the AMITT Red framework, the AMITT Blue framework for countermeasures is an actively developed project that's open source and available online on the Cognitive Security Collaborative GitHub page. All of our work is a community effort, and we welcome folks from all backgrounds to come and get involved, especially with disinformation and disinformation response. It's super important that we have a multidisciplinary group of individuals who can think about countermeasures: who can use them, and when, under what circumstances. So we have these two frameworks; we have offensive and defensive capabilities. But there's another layer here that we need to think about when we're discussing each move and each counter move, which is the concept of resource allocation. One of the greatest challenges we face in cognitive security response is being fast enough to make a difference in time. It's far less effective to put out messaging warning of violent extremism after a target audience has already been radicalized than if you were able to reach them before and de-escalate the situation. Similarly, the window of opportunity to protect the integrity of an election ends once the votes are finalized. So we need to be fast when we're identifying these events, and do something about them before they have their intended effect on the target and before our window to act closes. And if we want to be fast and effective, we need to think about how we allocate resources, on both the attacker and defender sides. All actors have limited resources, limited infrastructure, and limited time and capabilities with which to create or execute their campaigns. Countermeasures deplete adversary resources in one form or another, either by increasing the time to complete some technique, or the monetary cost, and so on. Our countermeasures don't always need to completely shut down an adversary's capacity to act. An adversary may still retain the ability to create botnets or deepfakes, or to backstop their troll accounts. But by making those techniques expensive, or time-consuming, or prone to failure, or just difficult, we can reduce the likelihood of a technique being used against us, or reduce the pool of actors who are capable or willing to field it. Anyways, countermeasures themselves consume resources, and that needs consideration too.
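One way to reason about that trade-off is a toy cost model: a counter is worth fielding when the recurring cost it imposes on the adversary outweighs what it costs us to run. This is a sketch of the idea only, with invented numbers; it is not an AMITT artifact.

```python
# Toy cost model for prioritizing counters: prefer moves that impose
# more recurring cost on the adversary than they cost us to operate.
# All figures below are invented for illustration.

def net_cost_shift(defender_cost, attacker_cost_added, uses_per_month):
    """Positive result: the counter shifts more cost onto the adversary
    each month than it costs the defender to run."""
    return attacker_cost_added * uses_per_month - defender_cost

candidates = [
    # (name, our monthly cost, cost added per adversary use, uses/month)
    ("Account takedowns",       5000, 200,  40),
    ("Pre-bunking messaging",   3000, 500,  10),
    ("Media literacy program", 20000,  50, 400),
]

ranked = sorted(candidates,
                key=lambda c: net_cost_shift(c[1], c[2], c[3]),
                reverse=True)
for name, ours, theirs, uses in ranked:
    print(f"{name}: net shift {net_cost_shift(ours, theirs, uses):+d}")
```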
And if our strategy is simply to be reactive and put out fires as they come, we're going to spend a huge amount of our time and resources just keeping our house from burning down, perhaps without increasing the cost for our adversaries to carry on their attacks. So I'll end my section just by saying that we need to engage public policy makers to implement left-of-boom counter-disinformation strategies and to build resilience against influence, rather than simply responding to threats as they come along, or those threats will continue, and continue to get worse. So, SJ, back to you. Thank you. Next slide, please. So, practically, I use a lot of these techniques at different scales: at business scale, at region scale, at country scale. And this is how it looks in practice. So next slide, please. Those three landscapes: the information landscape, the risk landscape, and the response landscape. For a country's information landscape, you're looking at the types of media, you're looking at those sharing and seeking behaviors, and you're looking for those information voids. What is the space that we're working in? How do you garden that space? Next slide, please. The threat landscape. You're looking for things like the motivations you can find behind the disinformation around you. In this case, there wasn't much country-to-country; there were an awful lot of internal motivations. Who is apparently sourcing a lot of these things? Who are the actors in this, not just the points of delivery, but also things like the influencers in your network? What types of activity? This was a faith-based community, so there was manipulation of that. A lot of discrediting of the election process, because this was election-based, which is different from, say, medical-based. And discrediting of journalists: harassment of journalists and hate speech towards them is quite often rolled up inside campaigns. Risk severity: what are the bad things that could happen? Where, on which platforms, online and offline, do these things seem to appear from, and what sort of routes does the information take through the ecosystem? Hijacked narratives are a biggie. WhatsApp is used a lot: sourcing through WhatsApp, starting there, people looking at Facebook, then forwarding it out through WhatsApp. Some interesting loops between online media and offline media; for instance, things being pushed onto social media, getting into the radio network, and then being pushed back from there into word of mouth and social media. There are all sorts of loops in this. Next slide, please. So, looking at the behaviors in there: this is the AMITT Red for this area, and these are the types of behaviors seen. Distorting facts, just on the WhatsApp and Facebook platforms. The reason we put the platforms up is because it helps with understanding who can actually act on these responses; they very often have very different rules, different ways of working, different modalities and abilities to influence. But, you know, imposter news sites: finding dozens of imposters of well-known news sites, beautifully taken down. And a lot of conspiracy. You can look at this and get a quick visual on what's happening in the space. But behind that, every single one of those techniques has a list of counter-techniques, so then you can start putting up a list of counter-techniques, the people or other groups that can respond, and how they can respond. And we're working on how effective those counters are and what reach they have.
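A glance-able grid like that can be generated mechanically: since AMITT keeps ATT&CK's data model, one option is to emit an ATT&CK Navigator-style layer file for the observed techniques. A minimal sketch follows, with placeholder technique IDs and a hypothetical AMITT-loaded Navigator instance assumed.

```python
# Sketch: emit an ATT&CK Navigator-style layer highlighting the AMITT
# techniques observed in an area. Technique IDs are placeholders, and an
# AMITT-loaded Navigator instance is assumed.
import json

observed = ["T0007", "T0023", "T0044"]  # e.g. fake accounts, distort facts

layer = {
    "name": "Observed AMITT Red techniques",
    "versions": {"layer": "4.2"},
    "domain": "amitt",
    "techniques": [
        {"techniqueID": tid, "score": 1, "comment": "seen in this region"}
        for tid in observed
    ],
}

with open("observed_layer.json", "w") as fh:
    json.dump(layer, fh, indent=2)  # load this file in Navigator's UI
```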
Next slide, please. On the other side, we're looking at the response landscape: everybody who could be a stakeholder in this response, or contribute to a response, in these four areas. Risk reduction: that first set of things you can do ahead of the bigger problem. Lots of media literacy, lots of influence literacy, some information landscaping, so some void hunting, some repeat hunting, a few other moves. Monitoring: who's monitoring what? In this case, there were lots and lots of different tip lines, so there was lots of monitoring, but not necessarily connected analysis. So who is doing that very first tier (and I will show this in a second), the triage of "there is stuff coming in; which of it should we pay attention to?", passing on to the second tier, which is "let's analyze where this has come from and how widespread it is; let's find the artifacts, find the narratives, and work out what mitigations we can apply tactically"? Tier three is the longer-term things: creating reports on situations so that you can go fix your landscape again. And tier four is the coordination holding it all together. Quite often you'll find different groups doing different ones of these, and connecting them up could actually gain you quite a bit. Response: we see a lot of messaging-based responses. Pre-bunking is that red-team move where you work ahead of the narratives coming at you, anticipating what they might be, and place messaging ahead of time. Making sure that you have strong messaging, strong amplification of the information you want to get out: things like where the polling stations are, what the election dates are, ways to vote. Just get those out and amplify them. Counter-narratives: once something has happened, you've got these moves. You can debunk it, which is pointing at the elephant: by pointing at the disinformation you could actually amplify it. Or you can build counter-narratives, which is something Reality Team does: narratives that are stronger; you push those out and they stick. Actions: site removals, group and account removals, and other actions beyond that. Also important in response is reach. It's not just what you did and whether it was effective, but how effective: how far did it get? If there is word of mouth in different languages, did you make your response move out to word of mouth in those languages? Did you get to the radio stations and break those loops? Next slide, please. So this is the AMITT Blue for the responder behaviors. Again, you can just glance at this and get a sense of where the activities are: a lot on the planning side, some on the application side, not a lot in the middle. But this is probably appropriate for where they are at the moment. Things that were missing were things like a shared fact-check database, which would just make things more efficient, and dialogues, government-to-group dialogue, that tier-four connection. But you also see things that were good and already happening, like influencer training and codes, media literacy training coming out, this sense of gardening the landscape, in this case the news landscape. So there's another piece to this. So next slide, please. And that's how you make it all work together. This is the internal slide that we're not talking about in detail today, because we could do an entire talk just on this.
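A minimal sketch of that tiering as a routing rule for incoming reports; the tier numbering follows the talk, but the fields and thresholds are invented for illustration.

```python
# Sketch: route incoming tips through the response tiers described above.
# The severity/spread fields and thresholds are invented placeholders.
from dataclasses import dataclass

@dataclass
class Tip:
    source: str    # e.g. "tip line", "platform", "social media scan"
    severity: int  # 0-10: how harmful if it spreads
    spread: int    # 0-10: how far it has already traveled

def route(tip: Tip) -> str:
    """Tier 1 triages; tier 2 analyzes and mitigates tactically; tier 3
    writes the longer-term landscape reports; tier 4 coordinates."""
    if tip.severity < 3 and tip.spread < 3:
        return "close at tier 1"            # logged, not worth attention
    return "ticket to tier 2"               # trace source, find artifacts

print(route(Tip(source="tip line", severity=8, spread=4)))
```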
But I've talked about tiers, and I've talked about triage. So what does the triage group do? Again, this is modeled on an ordinary SOC. You have a tier that's scanning your systems and taking in information, things like those tip lines, things from the platforms, things from social media, triaging that and starting tickets to then pass to tier two, which is analysis and remediation, so those first tactical responses; then tier three, that deeper work. All of this is backed up by knowledge of both the disinformation environment, so the threat environment, and also knowledge of the information environment. Quite often you're working in a vertical, so you need to know about the politics of the area, or you need to know your industry, or you need to know the marketing around you. So you need that double store; in fact, it becomes a triple store in a second. And underneath this is that tier four, keeping it together, keeping connected out to business units and responders, and the main object there is the crisis plan. Please, please, please: if you're responsible for this, just make a crisis plan; work out who you're going to need to talk to. Even if you never use it, it'll make me feel better. And just to round this off at the end, this is what, so next slide, please, this is what it looks like in practice for this specific unit. It's tagging the disinformation space, the threat space, with AMITT labels, the behaviors in it; doing the same with the response space; checking that they match up; and looking at collaboration methods. This is how you get groups to collaborate. The people in blue are the ones who are within your system, using your system components, your system stores. But you also have touch points for people who are running their own systems, so that you have access, you have that coordination, you have that way to work together. And a lot of the importance of that is: how do you build for surge? If you just build a system for where you are right now, it'll only be good for where you are right now. If disinformation hits this area, it needs to have a system that'll adapt to that. And part of that adaptation is looking at these pieces and working out which of them you can supplement the humans with machine learning on. So you can add in things like triage support: you can do analysis on the triage, and there'll be the cut-and-dried stuff and then the stuff you want the humans to look at. You can automate some of the channel search. If you have a disinformation data store, you can use it for triage checking, for channel search checking, and for automated suggestion of new information, and so on and so forth. You can automate some of it, but you're always going to have the humans. So how do you build a system where the humans work together? And next slide, and we're done. There is a next slide. Yes. Thank you. Thank you. Thanks so much.