[The host's introduction was garbled in the auto-transcription; it welcomes attendees to the CIM Charity and Social Marketing Group webinar and introduces the speaker, James Patrick.] I was an intelligence analyst, and for the last few years I've been building up a company which specializes in publicly available electronic information, but done ethically and done right. So, what is PAEI? Well, it's publicly available electronic information, where public is defined as information published in HTML, PDF or other common formats on the internet, which can be gathered lawfully and ethically by a person or algorithm using the internet, or via an application programming interface (API) and the queries that are made through it to third-party platform providers. I can't stress enough the lawfully and ethically piece, and this does warrant a little bit of explanation. Lawfully means that you are complying with platform terms of service when you're accessing information, so you can't just go and scrape information from Facebook. You have to do it through the authorized protocols and channels. The same goes for Twitter and the same goes for everything else. And ethically is something that I introduced into the thinking around this after the Cambridge Analytica scandal. 
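That lawful-and-ethical constraint can be made concrete. Below is a minimal Python sketch, with invented field names and records (not any real platform's API schema): the collector accepts records already obtained through an authorised API rather than scraping, and strips personally identifying fields before anything is stored or analysed.

```python
# Hedged sketch: the field names and records here are illustrative
# placeholders, not a real platform's API schema.
PII_FIELDS = {"author_name", "author_handle", "email", "location", "user_id"}

def strip_pii(record: dict) -> dict:
    """Keep only non-identifying fields from an API record."""
    return {k: v for k, v in record.items() if k not in PII_FIELDS}

def collect(records):
    """Simulate lawful collection: accept records already obtained via an
    authorised API, and retain only text and engagement data."""
    return [strip_pii(r) for r in records]

posts = [
    {"text": "coronavirus spreading fast", "likes": 120, "author_name": "Jane Doe"},
    {"text": "wash your hands", "likes": 45, "user_id": "u123"},
]
clean = collect(posts)
```

The point of the sketch is the order of operations: identity fields never reach storage, so the downstream discourse analysis works on aggregate content only.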
And by ethically, for me, what that means is actually ensuring that the information that we get doesn't contain individual private information, personally identifiable information (PII). Because you don't actually need it to get a really good understanding of discourse or a topic or a population or things that people care about. You don't need to intrude into somebody's private life or individual psychology. When we talk about ethically, what we mean is actually using all this information in a way which is socially beneficial and not destructive. So, the types of projects using PAEI. I've used it for, and we use it for quite a lot, communications and engagement strategy development. Who do we need to speak to? Where can we get hold of them? How do we speak to them? How do they take in their information? Health equity needs assessments, which is something that we do quite a lot. Prevention strategies, and this could be reduction of serious youth violence or mitigating the impacts of dis- and misinformation on elections. And then social movement design. So this is, if you're a grassroots group, how are you actually going to get your message across effectively, in particular if you've not got a huge amount of money? Interventions, which actually loops back into the prevention strategies, but it's also how can we reach people with a health intervention or something else beneficial? How can we relay information about where people can be vaccinated, for example? Campaigns, I think that's fairly self-explanatory, and evaluations. Some of the interesting stuff that's being done around this is that we're actually combining PAEI work with traditional research, so we're getting a really hyper-honest view of what people genuinely think. And if there's something that is universally true, it's that if you give people a mask, they will tell you the truth. And social media is a really powerful tool for that because it provides everyone with a mask, effectively. 
So the first thing that I want to talk about is self-isolating communities. And this is quite a potent example of precisely what it is that we're talking about. Back on the 12th of February 2020, I picked up on something that was clear in the social data and wrote to the WHO. What I picked up on was that there was this coronavirus outbreak which was spreading rapidly from Wuhan, and it had been designated a new name. But at the time, the WHO were actually using these internal terms, nCoV-2019 and 2019-nCoV, whereas the general discussion amongst everyone in the world across social media referred to it as coronavirus. So the WHO was actually excluding itself from public discourse at a time when it was needed the most. So I highlighted this and wrote to them and showed them, just really simply, the scale of that conversation, which was huge. We looked at Facebook data: there were just 75 posts that we picked out that had 49.5 million interactions on them. Of those, 48 million were talking about coronavirus, while the WHO had trapped itself into speaking to about 750,000 people talking in these obscure medical names. What they actually had done is they'd segmented themselves off and put themselves in a little box where they were just having a conversation with themselves around what to call this thing. And the conversation kept moving ahead of their ability to get information out as quickly as possible. So we wrote to them, gave them some advice, and we used timelines and some other data to show them where that conversation was happening, when it was happening, but also who was talking about it. And we picked up some really interesting facts in there, like one of them being that, at the time, one of the biggest places where the conversation around coronavirus was taking place was Spanish-language US outlets run by Russia. 
And we just gave them some simple advice based on what we were seeing, which was: if you adapt and use Spanish language to communicate using this specific term, you're going to increase your reach substantially. Now, I can't imagine that during a public health crisis anyone thought for a minute, because I certainly didn't, that they were going to reply to us or adopt the advice. But a day after sending them the report, we actually got a reply saying that they had adopted the advice and put it in place and were going to be using coronavirus. And it just rapidly accelerated their ability to communicate with the global population. So that is the very practical thin end of the PAEI wedge, if you like. Behind it sit these key principles, which I've developed over the last couple of years doing large projects, in particular across Europe, where what we've been looking at is mis- and disinformation, the impact on elections and the impact on populations of pushing conspiracy theories or creating dissent or all of these other things. And I've actually given names to a couple of these things so that it's easier for people to understand them. Those names are disinfonomics and emotioneering. And these are really key principles, in particular if you find yourself dealing with, say, for example, anti-vax or certain types of extreme behaviours and political ideologies. So emotioneering, effectively, is controlling trust and distrust. That's all it is. That's all it means. And the way that it works is psychological. Brains absorb information from a range of sources to create a balanced input, which tells us about the world, tells us what's going on and allows us to go about our business. But when a crisis situation occurs, or we feel at risk, or we get frustrated (because not everyone is directly exposed to risk, the brain increases the weight it puts on everyday frustrations), it can trigger the fight, flight or freeze response. 
Now, in extremis, I've got commendations hanging on my wall here for tackling armed suspects and bringing people out of fires and dealing with all manner of horrible situations, and that does involve that reflex directly. But as I said, you can layer it down, and some people can have their fight, flight or freeze reflex triggered by a news story or by the washing machine not working properly. So it's that psychology of a crisis situation which is really key. And to help us through, what our brains do is restrict the amount of information that we actually take in, and they kind of filter it so that our brains can make focused decisions about how to keep us safe from the perceived harm. What also happens during that process is that biases or fears or notions tend to get amplified as well, to provoke a stronger response. So what people who create and spread disinformation do is either amplify feelings of crisis, threat or frustration in your day-to-day life, and they can do that by using bots to create a swell around certain news stories, or they can hype the risk of war, or they can hype or amplify a conversation about something which is not strictly relevant, like the release of somebody's emails, and use it to destroy opponents. They can also actually create and manufacture crisis situations, especially at the hostile foreign state level, or they can prolong them, so they can actually draw them out. And the other thing that they do is control the information which relates to that crisis situation. So they flood the media with certain stories, or flood social media with certain stories, which is what troll farms are used for. They are used to distort public perception of an issue. And this is, of course, incredibly powerful, not least because we are connected 24/7 in a way that we've never been before. And all of that information is suddenly on tap, so it's flowing through everyone all the time, in our palms, in our pockets. It's pinging through the night. 
It's pinging through the day. Now, disinfonomics is revenue-driven amplification. Disinformation itself is the creation, sharing or amplification of deliberately false information, and disinformation actors use these alternative media outlets and social media outlets as a direct access route, because it is far easier in a way that's never existed before. If you think about it, in the past, when we were thinking about things like this, propaganda effectively, they would fire flyers out of a cannon. You don't need to do that because you can fire a piece of information at the whole world without leaving your living room. The other key aspect to this is that these outlets and social media platforms, because of lax regulation, are not bound by the same standards as the traditional press or some of those other operators. And the action is actually combined with a strategic objective, so you can geographically identify an audience and target them directly. An example of that would be negatively impacting the ability of a government to control COVID-19 spread by creating myths about mask wearing, or getting people to behave in certain ways which evade public health measures, which can actually directly put pressure on public services. So you could treat it as a critical infrastructure attack. The problem that has come and grown from this is that the fact-checking industry, which has sprung up over the last few years, is actually creating this self-perpetuating cycle of amplification. And it's had the effect of removing the need for disinformation actors to do as much heavy lifting, so they don't need to be in the message boards anymore. They don't need to be operating the troll farms as extensively as they did, or using bots as they once had to, to extend their reach and game the algorithms of social media. 
In fact, all they do now is rely on fact-checking services to go and find obscure pieces of information, which are then turned into reports, which are circulated, which turn into press releases, which go to the mainstream media. And in part, this process is driven by the collapsing circulation of the mainstream media as well. So they're constantly hunting for digital content because it's cheaper. And what it means is that disinformation has got a direct access route into traditional media, which it never had before. And this has just come about from a fundamental misunderstanding of the types of people that are implementing disinformation programmes. So it's actually a cycle. At the very top, the people are seeking out the disinformation to create their products, their outputs, their reports, to go and ask for people to crowdfund their activities. They create those reports, they're circulated, they go out into a group of people. That's then turned into revenue generation by a second layer, which is the media, who are obviously looking for circulation. And as they see the revenue stats go up, they think, blimey, this is a really good way for us to make money. So they go and actively seek more, and you just get this perpetuating loop of disinformation getting bigger and bigger. So as I say, by misunderstanding the nature of this problem and focusing on a really narrow solution, which is correcting information, which doesn't work because once people have made their mind up, they've made their mind up anyway, we've actually made it easier for dissident groups and extremists and hostile foreign states to do their work. It's unintended consequences, which is one of those things: if you open Pandora's box, you should always be afraid of what's going to come out of it. So in practice, when you look at somewhere that disinfonomics and emotioneering have been happening, you can actually see how it sets public health campaigns up to fail. 
So we'll take a good example here, which is this well-known trope for vaccine dissuasion about the presence of gelatin in vaccines. And actually, because of the way that the algorithms work, and because of the way that the backlinks work, and because of the ways that people are using SEO to spread disinformation as well, if you Google it, instead of just being taken to the NHS or reliable health services, you're also taken to a fact check and directly to the piece of disinformation, which actually backlinks you out to the source material for it as well. So what we're doing is actually connecting people with disinformation while also asking people for donations. It's just a really, really powerful example of how that stuff works. When we look at how we can analyse PAEI to actually do some of the lifting for us, to come back from this cliff edge, it's a really rapid process. And you can actually use it to get from a huge amount of data to targeted messaging design really quickly. That's something I've been working on for the last couple of years, and it's just impactful and it's effective and it's fast. So in this example here, what we actually did is start off with a whole view of France, and we were looking at lockdown reluctance. So PAEI isn't just social data. It's also anything that you can map, anything which you can access. So it could be ONS statistics on deaths or suicides. It could be the Index of Multiple Deprivation. You can layer it, you can map it, you can geographically understand a population, you can add demographic data to it. So in this case, we mapped out lockdown hesitancy or reluctance, and in 24 hours actually brought this massive volume of millions and millions of tweets and social media posts into alignment with that geographical case data. And we were able to completely understand when that audience was reacting to stuff, what hashtags they were using, what keywords. 
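The layering step described above, bringing social post volume into alignment with geographical case data, can be sketched in a few lines of Python. The region names and figures are invented for illustration:

```python
from collections import Counter

# Hypothetical inputs: geotagged posts and official case statistics per region.
posts = [
    {"region": "Ile-de-France", "text": "#lockdown again?"},
    {"region": "Ile-de-France", "text": "staying home"},
    {"region": "Provence", "text": "#lockdown no thanks"},
]
cases_per_region = {"Ile-de-France": 5400, "Provence": 1200}

# Layer the two sources: post volume alongside case counts, per region.
volume = Counter(p["region"] for p in posts)
layered = {
    region: {"posts": volume.get(region, 0), "cases": cases}
    for region, cases in cases_per_region.items()
}
```

At scale the same join runs over millions of records and finer geofenced areas, but the principle is identical: any mappable dataset can be keyed to the same geography and laid over the social data.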
Keywords are important because if you understand the keywords in a conversation, you understand that if you construct your content from that lexicon of popular keywords, the algorithm is actually going to lift your content to the front of the queue. So it's almost like SEO in the way that people game that as well. Again, you can geographically prioritise, because there's a lot of geofencing data which is available. You can extract device usage: are people using Android devices or iPhones? You can see what domains they're sharing, so you can actually make decisions around where to focus PR efforts, but also understand what kinds of information they digest, so preferences. When you've got a massive volume of data, what you can also do is take the colour scheme. So you can actually start to build on these principles of neural efficiency and present information to people in a way they're already familiar with. And you can actually structure it and lay it out so certain types of wording appear at certain points in an infographic, and calls to action are positioned in a certain place, and there are conventions around the number of words to use. And that's really informative, because it means that you're cutting through and using a lot of pre-existing psychology to do some of the lifting work in terms of getting your message across. So psychology is your friend and foe. If you get it right, it's really powerful; if you get it wrong, you are out of the loop. Just very quickly, conscious versus subconscious. Our conscious mind is processing only a minute fraction of what our unconscious is processing. In fact, about 99.99% of it occurs without us even knowing about it. And we make tens of thousands of decisions every day, whether we know it or not. So people have been studied. There are people who have had different types of injury to the brain which have stopped them from processing emotions or make it more difficult for them to make choices. 
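Building that lexicon of popular keywords from a corpus of posts is, at its simplest, a frequency count. A hedged sketch with invented example posts (real work would run over millions of records and filter stop words properly):

```python
import re
from collections import Counter

# Invented example posts standing in for a large geofenced extract.
posts = [
    "No more #lockdown please, vaccines work",
    "#lockdown fatigue is real, vaccines soon",
    "Get your vaccines booked today",
]

# Tokenise (keeping hashtags), drop very short words, count frequencies.
words = Counter(
    w for text in posts
    for w in re.findall(r"#?\w+", text.lower())
    if len(w) > 3
)
top_keywords = [w for w, _ in words.most_common(3)]
```

Content built from `top_keywords` is then speaking the audience's own lexicon, which is the mechanism the talk describes for getting lifted by the platform algorithms.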
But that's a small part of the population. And for the most part, it's the unconscious that's doing the lifting, which is why understanding how to structure and colour information breaks down those barriers much more quickly. There are two core elements of psychology which are really impactful, in particular in designing social campaigns. On the one side, you've got mood recall, which is our emotional link to a type of information or a sight or a sound or a smell. So if we present people with familiar information and we know that they're reacting to it in a certain way, we can predict the response to it. And collective common sense is an influencer of group behaviour. So effectively, we hear or see something which catches our attention and makes us curious. We compare it to other information that we already know. We share it. We attach evaluation to it as a group. And then we define common sense, and that defines our conversations. So using PAEI analysis, we can actually see what information people are comparing against. We can see what information they are hearing, seeing and sharing. We can see how it's being valued. So that allows us to work to redefine that collective common sense for a positive outcome. We can capture emotion from everything which is publicly available, as I said, without going down to the level of an individual. And while people say measuring emotions is notoriously difficult, since the explosion of social media there have been massive studies which have collected huge amounts of data on emotional responses. And there are now Python packages and algorithms which can be deployed against subsets of tweets and Facebook posts to give you a really balanced view of nine core emotions, positive and negative, and sentiment, of course, so that you can understand and get to grips with how people are responding to information and what emotion certain types of information are relaying. So we turn this into actionable insights. We digest it. We break it down. 
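Lexicon-based emotion scoring of the kind those Python packages perform can be illustrated with a toy version. The lexicon entries below are invented for the example; a real analysis would use a published emotion lexicon covering the full set of core emotions:

```python
from collections import Counter

# Toy emotion lexicon (invented entries for illustration only; real work
# would use an established, published emotion lexicon).
LEXICON = {
    "scared": "fear", "worried": "fear",
    "angry": "anger", "furious": "anger",
    "hope": "joy", "relieved": "joy",
}

def emotion_profile(posts):
    """Aggregate emotion counts across a corpus of posts."""
    counts = Counter()
    for text in posts:
        for word in text.lower().split():
            if word in LEXICON:
                counts[LEXICON[word]] += 1
    return counts

profile = emotion_profile([
    "I am scared and worried about this",
    "furious at the new rules",
    "some hope at last",
])
```

Because the scoring runs over aggregates of posts rather than profiling individuals, it stays within the ethical constraint described earlier: you get a population-level emotional read without touching anyone's private life.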
We use AI analysis. And we make some really, really exciting discoveries using this stuff. So emotion analytics is just an incredible tool. Action equals ownership. And this is all about trust capital. You can implement behavioral change campaigns with expert partners. You can use the principles of emotioneering to identify your intervention points. You can centre your strategic communications around psychology to harness mood recall and collective common sense. And a key lesson to learn from disinfonomics is that, actually, if you occupy the information space first, what happens is that if disinformation subsequently springs up, it ends up amplifying you. So you're flipping that entire system on its head and evading all of the nonsense which stems from the fact-checking environment. Now, trust capital is really key. And this is to do with messengers, networks and their personal relationships. And the most valuable to you is obviously earned. So personal trust is organic and it's subjective, and it's much more difficult to convince a person that their personal truth is wrong. Which is why, if you look at Facebook family products, for example, they're a really logical place to put campaigns, because they have access to an older demographic. But that older demographic tends to inform the younger demographic via Facebook family products, which include WhatsApp and Instagram. And that's how that actually works to define and redefine collective common sense. I want to give you a quick case study now. NHS England identified there was a problem in a certain area, which was to do with vaccine hesitancy. And they attributed it to cultural reasons and the prevalence of certain vaccine myths. And they'd done some intervention work and done some restructuring around it, some training. People were concerned about vaccine safety. There was a lack of information. They were worried about infertility. And there was this key sort of lack of trust in authority and government. 
But what they'd done is they'd kind of referred to this stock bag of ways of fixing it. And they didn't understand the Eastern European community in this area at all, despite the fact that it was quite significantly large. So, actually, we investigated this using PAEI. And we went out and looked at not just English-language or commonplace social media websites, but social media websites which exist across the world and in different languages, because the tech that we use facilitates access to that. And we were able to uncover some really exciting findings. So, for example, with the Polish community, one of the most commonly spread nationalist tropes is that the Jews are trying to take over the world. And this was actually being peddled as the claim that they were spreading mutations of the virus through vaccines. And then secondary to that was the introduction of microchips to monitor people. But then, in the local community based in the UK, this was obviously filtering down through trust networks, but there was also this huge barrier to trust arising from the media rhetoric around Brexit and the way that people were being treated and some of the racist abuse which was taking place on the street. And it had reduced people's trust in all government agencies, which obviously included the health service. So, there were these stories that perpetuated about the safety of the AstraZeneca vaccine. The final sort of thing that came out was that there had actually been an intervention by the Catholic Church against the AZ and Johnson & Johnson vaccines, saying that they were morally questionable. And that had a huge impact on trust. And you could see this in that conversation. 
So, the things that we picked up, if we look at them thematically, the common themes were: they believed that there was a causal link between vaccines and thrombosis, that vaccines contained aborted fetal cells, that mutations were emerging from the vaccines, that the coronavirus test was a medical experiment, and that vaccines were being used to sterilise communities. And this is an example of how they tapped into mood recall, because in the Roma community there were forced sterilisations within living memory, and this had been tweaked and targeted by disinformation actors. So, we actually went through and did a full extract of PAEI. We dissected this audience. We extracted the way that they digested information. We extracted the colour schemes which are familiar to them. And we used it to design some outputs. So, we knew that the 'do it for us, do it for the NHS' narrative wouldn't work. We knew that presenting fact checks wouldn't work. And we knew that it was best to avoid discussions about vaccine efficacy and problems with vaccines, to sort of separate things out and disconnect from the trust arguments and the pre-existing disinformation, to avoid amplifying it. So, when the concepts were built out, we could use certain types of authority figure, in strong, clipped language, using these colour schemes, to present things in certain ways. There's massive trust in the Eastern European community in doctors in white coats, so we could use those authority figures to actually tell the stories about how to engage with these processes and what's actually true, and use a variety of community messages and family-centred messaging to actually take people straight through to booking in for their vaccinations. We're at 30 minutes. There's quite a lot of information I've bombarded you with, and I am more than happy to answer any questions that anybody has. That's brilliant. Thanks very much, James. That was a really insightful and thought-provoking presentation. 
So, we're now going to have a short Q&A session. So, let's see. First question is: you say we get information from balanced sources, but on social media, is it balanced information when we're fed what the algorithms show us? When you do the extract through developer portals, through the API, you don't deal with the algorithms. You deal with information in its totality. So, you actually capture everything. If you're looking at it front-facing, so just searching social media manually, your individual algorithm is going to give you a tailored result. And what we have found as well is that some of the traditional social listening services which exist, because they seem to be linked to personal accounts or business accounts in some places, also produce distorted results. Okay. Next question. Recent studies on how we absorb information tell us that the range of sources we listen to is only those that conform to the way we see the world, and that crisis accelerates this. But don't you think that achieving a balanced and truthful social discourse is a battle we can't win at all? I mean, you can win that. It's very easy for people to sort of wave a white flag or just go, there's no way that I can compete with this, but in almost every circumstance where we've gone in and seen what people's existing understanding is, there's actually a huge knowledge gap, which is what the real barrier is. There seems to be a lack of distinction a lot of the time between social media and private messaging as well. So there's a huge spectrum of stuff, and actually it's not just about understanding what's going on on Twitter or what's going on on Facebook. When I do an analysis, you look at every conceivable platform, every conceivable distortion, and that can be from Mumsnet to Gab. You capture all of it in volume data, and then you can actually dissect and cut through. You can win. 
You just have to understand the audience, the segments within those audiences, and actually where you can communicate with them. Next question. Are there any common themes on why disinformation is generated and spread? I guess you could look at that internationally, as we have quite a few international members on the call today. Are there any trends that you've identified? I think people get bogged down when they talk about disinformation, looking at individual disinformation topics, and they forget that it's strategic. One of the key things is understanding who it benefits. Some disinformation relating to vaccine hesitancy comes from people who market and sell nonsense remedies or vitamin pills. So you've got an objective, you've got a strategy, you've got a delivery. For other people, you might have a hostile foreign state behind some disinformation, and actually what you need to do is deconstruct the purpose and the actual outcomes. So it could be putting pressure on public services. It could be causing money to be spent in certain ways. It could be causing dissent and protest, which puts people in conflict with authority. It's all about understanding not just that little piece of disinformation, but what it is for, and it always comes down to one of two things, which is either control or cash. You actually have to deconstruct each problem individually to understand which the root cause is before you can move to fixing it. Next question is: how could this area be regulated so that organisations and parties aren't benefiting from these troll farms? And also, is it regulated already, or is it something which is fuelled by planted PR? Right, so regulation of all of this is messy, and it's because of the international nature of it. So when I, for example, look at what data we can get legally, we are dealing with civil law in the UK. 
We're dealing with every aspect of everything from the Regulation of Investigatory Powers Act to GDPR to the Data Protection Act to the Equality Act to everything else, but those are different everywhere. So trying to come up with a unified regulatory framework for social media platforms, some of which are set up by hostile actors and have all their structural arrangements buried, is really, really tricky. There has been some great work since Cambridge Analytica, within platforms and outside them. Facebook, for example, has completely restricted its API, so you can no longer get personal data out of it, and actually, if you have people who are trying to do unethical scraping, the platform will identify them, ban them, and it will do things like block IP addresses too. So technologically there's some great stuff happening, and obviously we've got the online harms legislation coming through here, and a lot of work being done on that in Europe and a lot of stuff being done on that in America. They're changing the California privacy law around children too. Germany has done some amazing work on online abuse, which has actually led Twitter to partition its algorithm and some of its databases to keep people away. There's tons going on, but it's complicated. I don't think we've ever had to confront as a global community the cross-border challenges which something like this presents, and it's caught us completely unawares, because I think in many ways there's been a mindset of thinking the only things that we can really treat internationally are trade and weapons, and this kind of snuck under the radar, and it's become so big that it's very, very hard to put back in the box. Next question: someone comments that this is a really interesting subject. Could this be used in prevention-led messaging for wider conditions with the same principles, and also, are there any challenges in certain age groups? Okay, so you can use it anywhere. 
In terms of places where we have used it and where we are using it, it can be anything from vaccine hesitancy to breastfeeding to reduction of knife crime; across the board, you can use it. There is no restriction on that. And people don't use the internet in the same way. They don't even use the same platforms in the same way, so it's really tricky, and actually part of the big job of doing this, and actually doing this for a living, is understanding the demographics of all of these platforms as they come and go. There are some generalised rules, so you can say that older users tend to use Facebook, millennials tend to look to things like Instagram and Reddit, and younger kids use things like Discord and TikTok, but those are hypergeneralisations. It's much, much more complicated than that, and there are divisions and subdivisions within each platform as well as to what different users will use it for, so it's complicated. Next question, on the subject of vaccinations: which options work best to get people in the UK vaccinated, in relation to the options shared towards the end of your presentation? So that is actually hypervariable by geography. When we do these analyses, we actually break them down into two component pieces. The first is geographical or geostatistical analysis, which looks at things like vaccination rates, health inequalities, deprivation, demographics. So we identify areas, and behaviours within those areas, which are really clear, so you kind of start to drag out the opportunities in parts of the community. And then you actually look at the PAEI in geofenced terms to determine what those different behaviours are going to be. And actually, in every area that we've done, there are variations, and there are variations in the way that people respond to colour, to information, to structure, to static content, to video, to the length of time that they will watch things, to the types of outlet they will read. It's hugely variable. 
But I think it's quite an important piece of work to start being able to chart that, because I think we tend to overgeneralise and go, oh well, the anti-vaccination problem is specifically these three things, and therefore that must be true nationally. And in fact, it is just not. OK, thank you. One person has commented: is it all just social media data? How does this differ? You have got a wealth of publicly available information. You've got forums, you've got news media comments, you've got anything which is publicly available in terms of vaccination, population, demographics; you've got articles, you've got blogs. It's vastly different. There is a much richer picture of the population than I think we perhaps traditionally think about. OK, and I think we've probably got time for one last question. How quick is it to do a PAEI analysis? It varies by topic, but say, for example, you wanted to do a PAEI analysis of social media in the immediate aftermath of a serious violent event. That can be complete within six hours, and you can implement legacy monitoring for that. If you wanted to do a detailed piece about a whole CCG or something like that, then you would be talking seven to ten days to do it justice, because what you're actually doing is taking in every piece of available data and turning that into a complete report, actionable insights and a creative design strategy and principles. OK, and I think that's all we've got time for now. So that's been great. Thanks very much, James, for your presentation. We've had some really good questions there from our viewers, and it's a shame we don't have more time, as we could go on all day. In fact, one comment is that you could make a whole course out of the subject. So hopefully there are some useful tips for people to take away and some food for thought. Sadly, that's all the time we have for our webinar today. 
I'd like to say a big thank you to James for his excellent presentation, and to the CIM Charity and Social Marketing Group for organising the event. Our next webinar, 'Strong brands have multiple personalities: how do you identify yours?', with Richard Jillingwater, will be on Thursday 3 March at 1pm. You'll find further details listed on the events page on the CIM website, and you'll be able to register for the session there. So, on behalf of CIM, that just leaves me to thank our speaker, James Patrick, once again for a great presentation, and to say thank you to you for joining us today. We do hope that you enjoyed the session, and we look forward to welcoming you again to our webinars in the near future. Take care, everybody. Goodbye.