My name is Henrietta Wilson. I'm really pleased to be here today. I do freelance research on weapons of mass destruction disarmament, and I'm delighted to have the chance to work with Dan Plesch and Olamide Samuel from the SOAS team on this. I need to give special thanks to all the SCRAP volunteers who have helped with this, and a special shout-out to Annan Sarria, Julia Alfden Brinker and Ruth Innes Dottir. So thank you all very much for your help making this happen. I'm delighted that we've got some fantastic panellists today. We have Jamie Withorn from the James Martin Center for Nonproliferation Studies, Eric Toler from Bellingcat, and Richard Guthrie from CBW Events: a really exciting set of insights and experiences from different open source projects.

Before I hand over to them, I'm going to give an overview of what these webinars are about and fill you in on some of the themes from previous webinars. We had two webinars in July, and when I finish my opening remarks I will post the links to the recordings, which are on the SOAS YouTube channel. But to recap some important definitions: by open source verification, we mean any tracking that uses publicly available tools and information. Some of this, as we saw in the July webinars, is what you might call high tech (there's a complication in using these words): it involves satellites and AI and automated processing tools. And some of it isn't; some of it is much more people-driven, looking at information from traditional media sources like newspapers or broadcasts. But underpinning all of this is the internet. The internet enables access both to the information itself and to the tools to process that information, and it also facilitates communications, enabling communities of practitioners in different parts of the world to come together and track things in remote spots. So this open source work is happening in many different places and in many different ways. I've been talking to a whole bunch of people: there are people in traditional media organisations like the New York Times, there are people in research groups in think-tank sorts of places or universities, there are practitioners working on the ground in isolated ways, and governments are doing this sort of work too.

So one aim for these webinars was to capture some of that variety and difference, and also to start developing conversations around the possibilities and the challenges of this sort of work. As an example, in our July webinars we featured people who have been monitoring political violence, small arms flows through Africa, radioactive materials in Somalia, and signs of illicit nuclear proliferation. We also had analysts offering general insights about the relevance of this work and the contributions it might make to global governance structures and other systems. I'm now just going to share my screen. One thing I wanted to flag in these opening comments is that, through the webinars we've had so far and through some of my wider research conversations, some common themes are emerging, which I might actually classify as questions. It's unclear to what extent these are relevant or universal considerations.
It feels very much to me as though, in principle, the new technologies I quickly overviewed do make possible some element of global tracking, whether that's tracking disease or shipping flows or all sorts of other sectors. New technologies are making a difference to the possible scale of the monitoring work going on. But there are some massive challenges in thinking about global systems. It's clear that there's a really uneven playing field: very different access to the technologies you need, to the skills you need to make those technologies work, and to the social and political structures that enable you to do this. There are also big differences in attitudes and practices in how people conduct this work, including attitudes towards privacy protection and attitudes towards law on who owns data. And there's a sense that, even though lots of these technologies and methods enhance the visibility of different practices and artefacts, not everything is visible; there are still areas where there can be murkiness. On the other hand, despite those challenges, it's also clear that verification is often pivotal to the implementation of global governance structures, including international treaties designed to regulate different weapons systems. Even if the visibility the existing technologies provide is imperfect, it's still better to have some sort of enhanced visibility, and the promise is that it could feed into supporting different structures. It also seems that technologies can have a role above and beyond their technological component: when there's real friction in political conversations or negotiations on a multilateral front, conversations around technologies and their possibilities can really stimulate wider political engagement. So those are just a couple of thoughts from me; it's very much work in progress, but I wanted to frame this event in terms of some of the things I've been finding out.

I'll just briefly say a bit about how this event is going to work. We're going to have three short talks, from Jamie Withorn, Eric Toler and Richard Guthrie. Throughout those, I invite everybody to post comments or questions via the chat; I'll be monitoring it throughout the session and putting the questions and comments to the panellists. The event is being recorded and will be published later on, and the presentation part of the event will finish at 3pm. After that, in response to feedback from previous events, we're leaving this space open for half an hour for an optional breakout discussion. Some people, I know, will need to leave at that point, so I will flag up three o'clock as the moment we change mode and people can leave if they want to; but please do feel free to stay. So thank you very much for being here, everybody, and I'm now going to hand over to Jamie for her talk. Thank you.

Yes, so I'm just going to share my screen. All right. Hi everybody. My name is Jamie Withorn. I'm a research assistant at the James Martin Center for Nonproliferation Studies, and today I'll be speaking specifically about open source information and how it relates to nonproliferation issues. Before we begin, I'd like to define some terms. I know Henrietta led with some definitions; however, I think it's best for me to describe to you all how I approach open source information analysis.
The first thing is that I view open source information as a research framework and a methodology, as opposed to a specific type of source. So when I speak about open source information, I'm really referring to a process of data collection and analysis in the open source realm. To that end, it's also important to define data and information. Data is the smallest building block of a fact, and information is layered or structured data that can help paint a better picture. For most of this, even if I use the term or acronym OSINT, which stands for open source intelligence, I'm really referring to information. Why? Because intelligence is a very specific term that refers to an assessment of potential adversary behavior: what an adversary might do and what consequences that might have. And while some of that is covered in general nonproliferation open source analysis, a large part of it is really just information: who's doing what, and how. Next I'll also define the term evidence, as I think it's sometimes thrown around loosely. Evidence refers to information that is admissible in court, and throughout my research process I often try to refrain from using this word, because I don't necessarily think that what I'm finding would be admissible in court. So all that to say: when I say OSINT, I'm generally speaking about a process, and I'm generally speaking about information.

In the open source process, I think it's really important to differentiate between techniques and methodologies versus tools. What I mean by that is that tools are single-use applications: for example, you can go to a website that will pull the user information from one particular platform. While that can be useful, I think it's much better to understand how that tool is functioning, and what methodology or technique the tool is applying, in order to apply it to different case studies. For instance, a lot of the principles of satellite imagery analysis are founded in GIS and the earth sciences, and understanding those underlying methodologies so you can apply them to a number of different frameworks is a really integral step, I think.

Next, I'll briefly describe the process of open source analysis. The first step is exploring the resources: seeing what's publicly available, and what is available to you. The next step is collecting. In collection you should cast a wide net and gather basically everything that you think might, in some way, shape or form, begin to answer your question. The next step is processing: going through your data and information, making it understandable to humans, and helping it begin to tell its story. Then comes analysis, where you go through your structured information and work out how it begins to answer your question, or what questions still remain after you've conducted this entire process. And the final step is reporting, publishing or presenting your work.
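As a rough illustration of how those five stages fit together, here is a minimal sketch in Python. It is not a description of any real CNS tooling; the function names and the `Record` shape are placeholders for whatever an analyst's actual pipeline would use.

```python
from dataclasses import dataclass, field


@dataclass
class Record:
    """One collected item: a post, an image, a document, a data row."""
    source: str            # where it came from (URL, feed, archive)
    raw: str               # the unprocessed content
    notes: dict = field(default_factory=dict)


def explore(question: str) -> list[str]:
    """Step 1: survey which public sources could bear on the question."""
    # In practice: imagery providers, corporate registries, transport
    # feeds, state media, social platforms relevant to the question.
    return ["satellite imagery", "corporate registries", "AIS feeds"]


def collect(sources: list[str]) -> list[Record]:
    """Step 2: cast a wide net; keep anything plausibly relevant."""
    return [Record(source=s, raw="...") for s in sources]


def process(records: list[Record]) -> list[Record]:
    """Step 3: clean, translate, geotag, and structure the data."""
    for r in records:
        r.notes["processed"] = True
    return records


def analyze(records: list[Record], question: str) -> dict:
    """Step 4: weigh the structured information against the question."""
    return {"question": question, "supported": None, "open_questions": []}


def report(findings: dict) -> None:
    """Step 5: publish or present, with the ethical checks discussed next."""
    print(findings)


if __name__ == "__main__":
    q = "Is facility X linked to proliferation-sensitive trade?"
    report(analyze(process(collect(explore(q))), q))
```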
There are ethical considerations in that final step, especially in the nonproliferation realm: you don't want to accidentally create a WMD shopping list for bad actors, so understanding how and where to publish your findings is important. Throughout this entire process it also helps to have peer review, to make sure others follow and agree with your chain of analysis, or to help you realise where you could push your hypothesis a bit further.

Next I'll speak more specifically about nonproliferation applications. I know this is called 'Verification in the Age of Google Earth', but satellite imagery, which is what Google Earth is, is really only one type or form of open source information. There are a bunch of different kinds of open source information that, when pieced together, can help paint a better picture. Satellite imagery is one of those: imagery taken from satellites. There's also ground imagery, imagery quite literally taken from the ground by actors who then publish those pictures to social media. That can be useful in, for example, helping to establish weapons dimensions, which is a really interesting aspect of potential nonproliferation applications of ground imagery. Corporate information is also super important: understanding corporate entities and their networks, who they're related to, who they're trading with, what they're importing, what they're exporting, how they're doing it, and who owns the entity helps assess nonproliferation mainly in terms of export controls and sanctions, but it can also paint a better picture of potential proliferation concerns in the corporate realm. Next is transportation data, which I think is often overlooked in open source analysis. It is extremely beneficial to understand how items of potential proliferation concern move from one country to another. Here I've listed planes and boats, which are the main transportation mechanisms for proliferation-related goods, and it can be really useful to understand how these carriers transmit data; I'll go through a case study on vessels specifically in a second. Lastly, I think it's important to include text, because while it's perhaps not as fun as satellite imagery, it provides very important foundational context and understanding of what's going on and where, what the primary sources say, and how comparing different sources lets you suss out what is fact and what is more an author's assessment.

Next I'll go into two case studies very briefly. The first is on North Korea and sanctions evasion tactics. North Korea is sanctioned by the United Nations, which limits its ability to import or export coal and oil. In order to evade these sanctions, North Korea often relies on vessels, on boats, and tries to spoof the data in a couple of ways. The first is more traditional AIS spoofing. AIS refers to Automatic Identification System transponders: essentially, the data that tells other boats where a vessel is, which is really important to have switched on so as not to crash. In AIS spoofing, the boat's unique identifiers, its IMO number or MMSI, are manipulated so that it looks like a different boat.
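Two of the evasion patterns described here and in the examples that follow lend themselves to simple automated checks: broadcast identifiers that contradict a vessel registry, and long silences in a vessel's track (the "lack of data is data" point discussed next). A minimal sketch, assuming a made-up registry and a simplified track format; real AIS feeds are much messier than this:

```python
from datetime import datetime, timedelta

# Hypothetical registry of known vessels: MMSI -> (IMO number, name).
# The identifiers below are placeholders, not real vessel data.
REGISTRY = {
    "440000001": ("IMO0000001", "EXAMPLE TANKER"),
}


def check_identity(mmsi: str, imo: str, name: str) -> list[str]:
    """Flag broadcast identifiers that contradict the registry."""
    flags = []
    known = REGISTRY.get(mmsi)
    if known is None:
        flags.append(f"unknown MMSI {mmsi}")
    elif (imo, name.upper()) != known:
        flags.append(
            f"MMSI {mmsi} broadcasting as {name!r}/{imo}; "
            f"registry says {known[1]!r}/{known[0]}"
        )
    return flags


def check_gaps(track: list[tuple[datetime, float, float]],
               max_gap: timedelta = timedelta(hours=6)) -> list[str]:
    """Flag long silences between position reports; a lack of data
    can be information in itself (a possible dark period).
    `track` is (timestamp, lat, lon) tuples, assumed sorted by time."""
    flags = []
    for (t0, _, _), (t1, _, _) in zip(track, track[1:]):
        if t1 - t0 > max_gap:
            flags.append(f"AIS silent from {t0:%Y-%m-%d %H:%M} "
                         f"to {t1:%Y-%m-%d %H:%M} UTC")
    return flags
```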
So sometimes North Korea will spoof it and say, in effect: oh, this is not my vessel, it's actually a Fijian vessel going into the East China Sea, when in reality it is a North Korean tanker going to conduct a ship-to-ship transfer. So that's one pattern: understanding how a vessel is manipulating, or spoofing, its AIS. Another way North Korea will do this is simply turning off its AIS, because a lack of data is sometimes data, or information, in and of itself. For example, my colleagues did a fantastic study on the Tae Yang, a vessel that was going into port at Nampo in North Korea; once it was past the barrage, it just turned off its AIS. Using satellite imagery, they were able to identify the vessel and spot it loading coal. So again, that speaks to the importance of relying on more than one source, or one type, of open source information.

Next I'll speak very briefly about a traditional geolocation case study I conducted. Back in 2019, the US Department of State released a report on chemical weapons compliance concerns in Myanmar. In the report they released one picture, giving the image provider and the date, and all it said was 'near Tombow'. Using those three pieces of information, the location near Tombow, the date of the picture, and the provider of the image, I went to Image Hunter, found the area of interest, overlaid that onto Google Earth, and was able to find this exact facility. What this does is help us better understand compliance, not necessarily from an enforcement angle but through a monitoring lens: we can assess whether there is activity at these buildings, whether there are vehicles, whether there is smoke, whether the facilities look like they could produce chemical weapons if the State Department says so, or whether these could be models for chemical weapons development in smaller states. So again, it provides a lot of information through a monitoring and future-analysis lens.
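To make the overlay step of that workflow concrete, here is a minimal sketch that writes a Google Earth GroundOverlay KML for an imagery footprint, so a preview image can be draped over the basemap and compared with what's on the ground. The corner coordinates and file names are placeholders, not the actual site from the Myanmar case:

```python
KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <GroundOverlay>
    <name>{name}</name>
    <Icon><href>{image}</href></Icon>
    <LatLonBox>
      <north>{north}</north><south>{south}</south>
      <east>{east}</east><west>{west}</west>
    </LatLonBox>
  </GroundOverlay>
</kml>
"""


def write_overlay(path, name, image, north, south, east, west):
    """Write a KML file that Google Earth opens as a draped image overlay."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(KML_TEMPLATE.format(name=name, image=image, north=north,
                                    south=south, east=east, west=west))


# Placeholder footprint: substitute the corner coordinates reported by
# the imagery provider for the scene of interest.
write_overlay("overlay.kml", "area of interest", "preview.jpg",
              north=20.01, south=19.99, east=96.01, west=95.99)
```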
I know I'm running short on time, so I'll wrap up by talking briefly about the future of open source intelligence, going back to Henrietta's opening. I think that as emerging technologies continue to advance, so too will open source information analysis: the ability to go through large amounts of big data faster and more efficiently, and yield better results, will also apply to open source information and streamline those analyses. I also think that, as emerging tech continues to improve and evolve, there are going to be new mediums and new forms of data that help open source analysis paint a more complete picture. By new mediums or forms I mean everything from TikToks and memes, to using radio transponders on boats instead of AIS, to using SAR, synthetic aperture radar, instead of optical satellite imagery. That's one positive we can look forward to in the open source realm. I mentioned this earlier, but I think ethics, a code of conduct, a standard operating procedure and due-diligence process in open source analysis as it pertains to nonproliferation, is particularly important to develop. Again, you don't want to be publishing sensitive material, like how to make a nuclear weapon. That's not good. So understanding the ramifications of publication, I think, will be a useful discipline. With that I will wrap up, and I look forward to your questions. Thank you.

Jamie, thank you very, very much. What an amazing amount you fit into a short time. I'm really fascinated by the range of information you gave us about what you're doing, the things that you're using and how you think about them. And also this sense that part of your overall project is consideration of how you disseminate things: you're not just generating findings for the sake of it; there's a real mindfulness of a political purpose in there. I'll be really interested to hear you talk more about that at some stage. I'm also really interested in the example you gave of how your open source work was able to spot North Korean spoofing of systems it was meant to be complying with. Why I particularly want to pick you up on that is that in the first webinar there was a conversation about how misinformation and disinformation make all sorts of tracking and monitoring much harder, because you have the extra job of distinguishing real from unreal information. The example you gave showed that open source can be really powerful at identifying exactly those sorts of things. Do you think that insight is generalizable in any way, or is that a bit of a reach?

I think it's generalizable, but I don't want to overstate our analysis. Even though we were able to catch this discrepancy, we always try to caveat it by saying that the vessel may be doing this, because we don't know for sure: there could be multiple factors in the waters that are affecting discrepancies in the AIS transmission data. It can also be data aggregation errors on the side of your supplier, whoever is providing you with AIS information, and that can also be a little misleading. So while we are pretty confident in the assessment that these two vessels happened to be in the same place at the same time and were potentially using the same identifiers, it's really important not to overstate that. We're not recommending in any capacity that this vessel be censured; rather, we're saying this is something that could be of value to continue to monitor in future analysis.

I see, and there's an understanding there that maybe your findings need to be understood in a certain context too, so that's really helpful to hear. I'm going to pause my other questions for you, and thank you very much, Jamie. I'm now going to invite Eric Toler from Bellingcat to give his presentation, and we'll come back to Jamie later as well.

Thank you. So, I have my screen up now; you can see it fine, right? Yep, thank you. Great. Good deal. So, I'm with Bellingcat. My name is Eric; I'm in Kansas City, over in the US. Bellingcat is an organization that focuses on digital evidence, investigative raw materials, I guess you could say, if I want to keep my metaphors going with the mining thing. I'm going to talk a little bit today, in my short ten minutes, about a case study and a broader theme of digital investigation regarding incidents in the war in the Donbass, which is a fancy way of saying eastern Ukraine. The war is still going, though it doesn't really get any headlines anymore; it broke out in spring 2014 and it's still going today.
The most intense periods were in the summer of 2014 and the early winter of 2015, but it's still simmering. The most infamous and horrible incident within the war was the downing of MH17, the Malaysian airliner that was shot down on July 17, 2014 over eastern Ukraine, killing 298 people. The majority of those who died were Dutch, because it was an Amsterdam to Kuala Lumpur flight, and the trial is going on right now in The Hague. And this is a very, well, not nice and neat, it's actually very messy, but a very illustrative case study of looking at new forms of digital materials and, I want to be a little careful with the words I use here, digital evidence that can possibly be used for the current court case; the word choice can be very loaded depending on the context. It's a very interesting case because, compared to something like the Lockerbie bombing back in the 80s over the United Kingdom, there wasn't a whole lot to work with there: the plane blew up in the sky over Scotland, there was a long trial, and eventually some people were charged and later released. With MH17 we have everything. We have dash cam videos, one of which you're watching right now to the side. We have photographs, videos and witness accounts from the participants, people who were actually involved in the convoy transporting the weapon (that's my nonproliferation tie-in: the weapon that eventually shot down the plane), and a lot of information from locals as well about what they saw: the missile trail, the convoys, the missile launcher moving through town centres, and all sorts of other horrible things.

So I'm going to give a brief outline of the process of collecting this evidence, verifying it to make sure it's real, and analyzing it. User-generated content is very, very important here; user-generated content is a really fancy way of saying photos and videos from people. Things are uploaded to YouTube, Twitter, Facebook and Instagram, but also to regional social networks like VKontakte and Odnoklassniki, which are Russian-language networks very popular in eastern Ukraine. Here are a couple of photos. One was taken by a journalist, or a stringer, who was driving by; on the right you can see this big green monster on the back of a truck. That's the missile launcher that downed the plane. On the left you see the same big green monster parked along the road, with a minivan and the back of a white car visible. These two photos look like there's not a whole lot going on, but we could spend a two-hour webinar just talking about these two photos and all the little tiny details you can extract from them. For example, in the left photo, it's a three-lane road, because you can see there's a car in each of the lanes and they're all facing the same direction, which means it's probably a one-way road, and you see a green area behind it. So if we extrapolate the urban landscape, it's a three-lane one-way road heading one way, which means on the other side of those trees you see behind the big green monster there is probably another three-lane road heading the opposite direction.
It's not like a parkway or boulevard or whatever the urban architecture, city planning term is, and there are not a whole lot of these throughout eastern Ukraine, which is, I guess, a developmentally regressed area because of the war and lack of infrastructure. So you can find this area relatively quickly just by cruising up and down Google Earth and Google Maps, based on that one little fact alone. This is user-generated content: the photos were just taken by people, who then sent them to the investigation or uploaded them online. Again, a big, big departure from the Lockerbie case and some other cases too. Both of these photos were taken in eastern Ukraine, about seven hours before the shoot-down and near its actual location. But this thing didn't appear out of thin air. It was in Russia before it was in Ukraine, and there are also videos and photos from normal people in Russia; Ukraine doesn't have a monopoly on dash cam videos and cell phone cameras, Russia has just as many. There was a gigantic convoy that moved from the Russian city of Kursk (if you know anything about World War Two, you know about Kursk) down to the Ukrainian border, and eventually across the border to the eventual launch site from which the Malaysian airliner was downed. The convoy took about three days to get from Kursk down to the border, and during that time there were about 20 or so photos and videos uploaded to Instagram, YouTube, Russian social networks, and local and regional groups, showing this convoy. Again, people weren't saying, oh my God, look, this thing is going to shoot down a plane. It's really just: you're driving to work and you see this convoy of 20 tanks drive by. You go home, you upload it to YouTube, you get 500 views on the video and you feel good about yourself, or whatever. No one was uploading these thinking, I'm a spy, I'm reporting enemy troop movements. No: you're going to work, you see something weird, so you upload a video of it, the same way people upload videos from those Ring doorbell cams of funny things that happen outside.

This has now turned into an actual case with the Joint Investigation Team, the Dutch-led team that's doing the criminal investigation for the trial over the downing. Only four suspects have been charged currently, but there's a lot more brewing in the background. And a lot of the eventual conclusions about who's guilty came from this user-generated content: photos and videos from the ground that helped identify the exact weapon that was used, because every missile launcher is unique in its own way. It has certain dents and scratch marks and paint splotches, and even the wiring of the cables has a certain alignment, which you can cross-reference between the material you see in Ukraine and the same sorts of photos and videos from ordinary citizens in Russia, and from the participants of the war too. On the top right you see a bunch of lads with their arms hanging out, posing in front of a missile launcher. These are Russian conscripts and contract soldiers in their early 20s.
Just like in the US: maybe you're from a small town, you join the military, you go all around the world and blow stuff up, then you post photos online to make your friends back home jealous of all the cool things you're doing. They did the same with this missile launcher, which gave us really good investigative material. And at the bottom, if you're really interested in the case, you'll recognize some of the faces of people who were also involved in the war, from Russian military intelligence and the military itself. So this goes towards the democratization of information. Again, that's a buzzword people always use, but here it's shown in a very concrete case: it's not just people on the ground interviewing witnesses, gathering information and doing traditional criminal investigations who can piece together what happened. If we had this during the Lockerbie case, or during whatever big-name event you want to think of, there could be more information out there, and of course possibly more disinformation and misinformation too. We've seen this with 9/11, with all the truthers and Loose Change and all that, and how they misinterpret information. So it's not entirely a good thing, because more information also means more misinterpretations of information. But in this case specifically, MH17, it was a very good demonstration of how all this information from the ground, provided by people with just a 3G internet connection, allows us to piece together exactly who is guilty of the deaths of these 298 people.

I think I'm probably about out of time, but I'm sending these slides to everyone. If you're really interested in this kind of stuff, if you're a nerd for eastern Ukraine and information spaces and all that, I put together a brief survey of the information landscape of the war: if you want to gather information about the war, where do you go? Where do normal people who live in the conflict zone post? This, for example, is from Instagram: a soldier was sitting with a gun on a bus. In the US this is normal, but in eastern Ukraine apparently it is not. A guy was sitting with a gun holstered on the bus, and people were talking about it, like, oh my God, this guy has a gun on the bus, and everyone's like, it's fine, he's a soldier, it's okay. And maybe, if there's later a shootout with this guy involved, we have this photo and video. There's crowdsourced information on Telegram as well, and YouTube videos uploaded by the participants of the war, and by the observers of it too. If you're really interested in this topic, I've provided a brief survey at the end of the slides; we'd need about two hours to get through all of it, on how to conduct research on a modern war, the only European war conducted with a massive amount of internet available, and how the information landscape is completely different from pretty much every war we've seen outside of Syria and Libya. I think that's 10 minutes, right? I want to make sure I don't go way over time.

Sorry, I'm just unmuting in a particularly uncomfortable way. Thank you very much.
Eric has also shared his slides with me, and we'll be posting them to participants afterwards; I'm really excited to go through them. I really enjoyed your presentation, Eric, in the sense that you showed us the opportunities available in these sorts of projects from start to finish, in the project in Ukraine: you've got some information, and you showed us the sorts of information, and that was all fascinating. What it made me wonder was how you decide what data flows to follow, how you decide there's something to look at. And, appreciating that this might need a long answer or a pithy one: do you have any understanding of the things you might be missing, or is it a matter of spontaneous, serendipitous occurrences?

There are two different questions here. One is the selection of what to research in the first place; the second is, once you do decide to research something, what do you gather and what do you ignore? The selection in the first place is really driven by interest, and also, once you do this for a while, you get a sixth sense for what can be researched and what can't: an incident somewhere with massive spread of internet connection and internet literacy, where people will actually be sharing photos and videos and witness accounts of an event, versus somewhere where it's a little bit tougher. Take the example of the Navalny poisoning everyone's talking about in the news now. That's not so fertile for investigation, just because it's a guy who ingested poison at some point, and we don't really know when or where, and you can't really see poison in an Instagram photo, right? But you can see a giant missile launcher moving through the middle of a town. And once you do decide to research something, it's really just about collecting as much as you possibly can, and doing it as quickly as you can too. We saw this with the downing of the Ukrainian airliner over Tehran. Believe it or not, that was in 2020, in January, I think; it feels like it was five years ago, but it was technically this year. When we were collecting information about that, witness accounts, videos and photos, we did it as fast as we possibly could, because, as we saw with the MH17 case, people would post a witness account to a local Telegram group or a local group on VK, which is a Russian-language network, thinking the only people reading it are other locals; people are much more honest and forthcoming when they talk in local and hyper-local groups, even neighborhood groups. And then once they realize that me, some guy sitting in Kansas City, is looking at their message, they realize, oh, I probably shouldn't have this out in the open, and they may delete things. So the real challenge is not so much being choosy and selective and picky about what you research, but collecting as much as you possibly can in the early stages, before stuff disappears into the ether of the internet, before it can be archived or saved. That's more often the challenge. No collection is completely comprehensive unless you have, like, the NSA's pipeline of the whole internet. But for everyone else,
you have to just be very quick and go to where the content is. Again, it depends on knowing the area. I don't know anything about Iran, I don't know Farsi, so I wasn't the best researcher for that; but I do know Russian, I do know Russian social networks and how the Russian internet operates, so I knew where to go right away. So it depends on your specific area-studies knowledge, and also on just innately knowing: if I were a person living in this place, where would I go to share a photo or video? You can't just go to facebook.com slash whatever and type it in; it doesn't work like that. So a lot of this needs actual expertise, and not just wide data mining.

Yeah, this is really interesting, Eric, and I think you've just given a perfect link to Richard. Although I don't know exactly what Richard Guthrie is going to be talking about, I have talked to him about this issue of how monitoring and tracking often rely on understanding the context of the thing you're looking at. So thanks very much, Eric. I'm going to hand over now to Richard Guthrie from CBW Events.

Thank you very much, Henrietta. Thank you for organising this, and thank you to all those also helping out, those visible and behind the scenes. I've only got a short time to speak, so I'll dive straight in. The purpose of this presentation is to draw out some of the lessons from past uses of open source information, and as Henrietta said, context is key. Importantly, there are areas in which open source information enables clear assessments, not only to evaluate a situation itself directly, but to enable effective assessments of the analysis made by others. This is the key point: information always has a context, and not only does any particular piece of information need to be evaluated within its initial context, the context itself needs to be evaluated. This is where intelligence activities have often fallen short, whether state-run and operating on closed information sources, or those outside of government analysing open source inputs. In the history of state intelligence operations in the realm of WMD, there are numerous occasions when insufficiently rigorous analysis has been done of the many individual pieces of information within their respective contexts, especially where the arena of information analysis rubs up against the arena of high-level politics. There are a number of clear examples, such as the US missile gap assessments of the late 1950s and early 1960s, the Soviet Operation RYAN analysis of when, not if, there would be a Western invasion, or the Western assessments of Soviet chemical weapons stocks in the 1980s. The one I will examine for this brief discussion is the yellow rain allegations. I've chosen this example because the analysis of the source information by the government making the allegations can be compared with the analysis by allied governments and with the analysis of non-governmental analysts at the time. Those non-governmental analysts were utilising open source information. Key amongst them were Matthew Meselson of Harvard University and Julian Perry Robinson of Sussex University. Julian is sadly no longer with us, having succumbed to COVID-19 in April. I worked with both for many years, but the events I will be describing date from before my work with them.
In the late 1970s there were allegations of the use, as weapons, of toxins produced by fungi or moulds. The broad term for these toxins is mycotoxins. It's important to note that a mycotoxin is a poisonous substance made by a living thing, and therefore falls into the overlap between chemical and biological weapons. However, the international law in place at the time on the use of such weapons, the 1925 Geneva Protocol, covered the whole spectrum of lethal CBW. There's a key moment in the yellow rain allegations: 13 September 1981, 39 years ago on Sunday. US Secretary of State Alexander Haig spoke in Berlin to the local press association. His words included, and I'll quote: 'For some time now the international community has been alarmed by continuing reports that the Soviet Union and its allies have been using lethal chemical weapons in Laos, Kampuchea and Afghanistan.' He went on to say, and I quote again: 'We have now found physical evidence from Southeast Asia which has been analyzed and found to contain abnormally high levels of three potent mycotoxins, poisonous substances not indigenous to the region, and which are highly toxic to man and animals.' End of quote. The timing was important. Haig was scheduled to meet Soviet Foreign Minister Andrei Gromyko soon after, and there was a lot of pressure on both countries to make progress on arms control. The Haig announcement made any prospect of moving forward on any form of arms control agreement impossible in the near future; indeed, if such allegations were true, there could be no such agreement.

The physical evidence that Haig referred to took the form of samples of a yellowish powder. US official sources indicated that high levels of mycotoxins had been found in these samples, although it was later clarified that high levels had only been found in the analysis by one laboratory. When these samples were looked at under the microscope, they were found to be mostly composed of pollen. This immediately raised questions as to what role the pollen might play. Contemporary reporting indicated that an Australian government laboratory did not have confidence in the veracity of the samples that had been supplied by the US. That did not imply that the US itself had tampered with the samples, as the US had acquired them through a variety of routes. Contemporary accounts also indicated scepticism from the UK laboratory at Porton Down that the samples contained anything in the types or quantities expected to cause illness in humans. This was confirmed in the UK Parliament by a defence minister in 1986, although he added the following caveat, and I quote again: 'The absence of positive results is not necessarily incompatible with positive findings from other samples. While our results are negative, the MOD's view is that, from epidemiological evidence, chemical warfare attacks probably did take place in Southeast Asia, although we cannot identify the chemical warfare agent, nor do we know who the supplier might be.' End of quote.

So what was the epidemiology? The first thing to note is that the local epidemiological details were sparse. Much of the epidemiological information was open, and so could be carefully examined by non-governmental analysts. What information was available was inadequate to use as a baseline to understand the prevalence of relevant medical conditions that might naturally occur in the particular areas.
Another major gap was the lack of useful information that could provide a time sequence relating the onset of relevant medical conditions to the appearance of the yellow powder spots on leaves that were the hallmark of the alleged attacks with yellow rain. This led to further epidemiological work by parts of the US government, but that effort failed to find convincing evidence in support of the allegations. There were other oddities in the claims. On closer examination, the pollen grains in yellow rain samples were shown to have been processed in some way: the contents had been removed, leaving behind the husks. No clear explanation was given by those supporting the allegations of use as to why such processing would be needed to make a weapon. All species of pollen identified in the samples were local to the area, but each yellow spot contained a different mix. If the spots in the same area had all been sprayed out of the same tank on the same aircraft, there would be an expectation of greater similarity between the spots. So perhaps the spots were each processed slightly differently from each other in some unknown way.

So was there an alternative explanation? The processing of the pollen was indistinguishable from the processing pollen undergoes in the digestive systems of many insects. Biologists were able to establish that, contrary to Haig's claims, the mycotoxins that had been detected were indeed indigenous. Moreover, they were identified in rotting materials such as insect droppings. In particular, one species of Fusarium fungus, the toxin of which the US had claimed as a yellow rain constituent, was shown specifically to be able to grow on insect droppings. This line of thinking offered a simple explanation for the variations in pollen content between separate spots in the same area: if the spots were insect droppings, separate spots would be different, as their composition depended on the selection of pollen each insect had ingested. But if these were insect droppings, surely local people would have seen the droppings being deposited by the insects? Thomas Seeley, a honeybee expert from Yale University, was able to show from open sources that the yellow rain samples were indistinguishable in particle size, colour and general appearance from the droppings of bees native to Southeast Asia. It turns out that these bees have very clean habits: periodically they leave the hive collectively and, as a swarm, go to the toilet at a height of tens of metres and at some distance from the hive. At the height they fly, they are not visible from the ground. Thus the spots appear on the leaves from what, viewed from the ground, appears to be an unknown source. The pieces of the puzzle therefore all fitted together, and the conclusion could be drawn that yellow rain was not a Soviet toxin weapon but a natural phenomenon of collective bee defecation.

So what lessons can be drawn from this? The first is that understanding the context of all of the pieces of information was key. Distinguishing a deliberate event from a natural occurrence may require understanding the context of a variety of aspects, and many of those aspects are covered by open source materials available for study. As an aside, this remains true today in distinguishing naturally occurring disease from deliberate acts, and it was possible to use more recently developed techniques to have high confidence very early on that COVID-19 was not an example of a deliberate disease.
In yellow rain, one of the key contexts examined was the question of what a real attack would look like. This could then be compared with the evidence on the ground, and indeed it did not match. So this is the key lesson from the yellow rain allegations: are there credible alternative explanations for the phenomena or evidence that is observed? There have been many cases in history where such credible alternative explanations to dubious claims could be derived from open sources. And I'll leave it there, since I'm just bumping up against my 10 minutes. Thank you, Henrietta.

No, thank you, Richard, really interesting, and it feels like there's so much overlap in the things coming out of the three talks. It felt to me very much that what you were giving us, Richard, was a historical example of a slam-dunk moment, where non-governmental researchers demonstrated that a widely held interpretation was incorrect. And that feels not dissimilar to the examples that Jamie and Eric gave us from their work. So my question to you, Richard, before I open it up to other people: I wonder if you could reflect on other contextual issues. You pointed out that in doing these sorts of analyses context is very important, and Eric mentioned similar themes: that it was very important that he understood the specifics of the Russian language and how social media is used in that particular local context. And you pointed out similarly, even though the technology is different, that you need to understand where the information has come from. Do you think it's also true to say that the context of how you write and how you disseminate the information is very important? Because I know that Julian Perry Robinson and Matthew Meselson have had enormous impact on various decision-making. Jamie mentioned that she was mindful of how and where she publishes things, and I wonder if you could reflect on the impact of the yellow rain study.

Yes, there's a whole set of questions to unpack there, which would probably fill several hours of seminars. But yes, the communication of what you have found is key, because you need to put the key contextual information there. It's also a bringing together of the different areas: from the scientific and technical, to the legal (what is this supposed to be in compliance with?), to the political (what are the implications of certain results?). And you have to present your information in such a way that people who are primarily only comfortable in one of those areas, say a scientist or a political figure, can understand the implications of what you found in all of the other areas. That can be extremely difficult because, in summarizing things, you can introduce incoherence: shortening a narrative in order to make it more understandable can potentially lose some key information. But it is really very important to look back at something like yellow rain and realize that, in all of the discussions held since on things like investigating unusual outbreaks of disease, the lessons from the yellow rain work have to an extent been learned. And it is really impressive for a group of non-governmental people to have had such an influence on governmental processes.
And it's very sad to me that sometimes those things are forgotten, because one of the interesting things about the story of chemical and biological weapons is that the real success has been in reducing their salience, reducing their usefulness on the international stage as symbols of power. We've now moved forward some decades, and people coming into the field now assume these things have much less significance, because that reduction in salience has been successful. But actually there are some really key lessons from the chemical and biological areas that are really important for global security issues now.

That's very interesting, isn't it? Because that opens up a whole other set of things about how the potential for detecting illicit or dubious artefacts, the monitoring work, might feed into the narratives for how the world thinks about them. So, thank you. I'm going to start asking some questions of the whole panel. We've got about five to six minutes before I let people go at three and leave the space open for an optional 30 minutes of informal chat. I've learned such a lot from all three projects, and I think this sense of the specificity of information is key to much of what you've been talking about. Eric, I'm going to start with you, because Alan Hill has posted a comment for you. He says: collecting the information as soon as possible is critical, before it is taken down or removed. Which may be connected to something I was thinking about when you were talking: what vulnerabilities are there in relying on user-generated content?

I mean, there are always vulnerabilities, because user-generated content is often seen as inherently less reliable than other forms of information; the other forms being things like government-provided information or anonymous sources. But often user-generated content is just as reliable, if not more so, than those other forms, because of the verification process for it. Again, this is a week-long webinar on its own, but there is a verification process for photographs and videos, like Jamie mentioned earlier with geolocation, for determining the location of a photo or video. That process is not super complicated. It's often not a binary yes/no, verified/not verified, but you can definitely get to degrees of confidence that are practically a binary yes/no. And once you go through the verification process to determine the originality of a piece of content, its location and the time it was taken, then it's contextualized and you can use it. A lot of the time this is pretty cut and dried: if you have multiple photos and videos from multiple sources showing the same military convoy, then you have a pretty good idea that it's real, unless you have the greatest psyop of all time, with all these different people fabricating the same fake thing; that doesn't happen. Though some people say the CIA maybe is competent enough for that, but I don't think they are.
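A toy illustration of how 'degrees of confidence' become practically a binary yes or no: if several genuinely independent sources each have even a modest chance of being reliable, the probability that every one of them is wrong collapses quickly. Independence is the crucial assumption, which is exactly what the 'greatest psyop of all time' scenario would violate; the numbers below are illustrative only.

```python
def combined_confidence(source_reliabilities: list[float]) -> float:
    """Probability that at least one corroborating source is genuine,
    assuming (crucially) that the sources err independently."""
    p_all_wrong = 1.0
    for p in source_reliabilities:
        p_all_wrong *= (1.0 - p)
    return 1.0 - p_all_wrong


# Five independent videos of the same convoy, each only 70% reliable
# on its own, leave about a 0.2% chance that all of them are wrong.
print(combined_confidence([0.7] * 5))  # ~0.9976
```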
But the retention of information is a tricky thing, because there are different levels and types of retention uncertainty. If you're trying to maintain information that can then be used in something like the International Criminal Court, the standards for evidence are much higher than if you're just publishing an investigation on a news website or something else, because it has to meet legal standards and be something the prosecution or defense team can vet and go through as well. And there are methods for this. The Human Rights Center at UC Berkeley is on the cutting edge of a lot of the work on setting standards and methodologies for this retention of evidence for things like criminal prosecutions at the ICC. And a lot of other institutions, UC Berkeley obviously, but also the UN and organisations like Human Rights Watch, are also trying to create methodologies and common practices for this. So it's not as simple as just taking a screenshot, because a screenshot is the crudest and most unreliable form of retention of a photo or video. You have to be able to archive and maintain material in the highest resolution, and keep it, put it on a third-party archive site like archive.org or archive.today, with the metadata that can be preserved from the original upload. So it's a complicated question that is often more in the legal realm than the investigative realm. But depending on exactly what your goal is, whether it's just to shed light on something with a report you publish, versus getting someone thrown in the clink with this as the evidence for it, you'll have different practices for archiving and data retention.
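As a sketch of what going one step beyond screenshots can look like in practice: request a Wayback Machine snapshot of each source URL, and log a content hash and timestamp for every file saved, so later copies can be checked against the original bytes. This illustrates the idea only; it is not the Berkeley procedure mentioned above, and the save endpoint is a best-effort public service:

```python
import hashlib
import urllib.request
from datetime import datetime, timezone


def snapshot(url: str) -> None:
    """Ask the Wayback Machine to capture the page (best effort)."""
    urllib.request.urlopen("https://web.archive.org/save/" + url, timeout=60)


def log_download(path: str, source_url: str,
                 logfile: str = "evidence_log.tsv") -> None:
    """Record when a file was saved, from where, and its SHA-256 hash,
    so later copies can be checked against the original bytes."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    stamp = datetime.now(timezone.utc).isoformat()
    with open(logfile, "a", encoding="utf-8") as log:
        log.write(f"{stamp}\t{source_url}\t{path}\t{digest}\n")
```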
Thank you, Eric. With all of these things, as you've said many times, I feel we could talk for much longer about each of them. I'm going to give Jamie and Richard both a chance to comment before we get to three. One minute each, sorry guys; and of course you're very welcome to stay. Eric, you mentioned there's work going on to think about common standards of evidence, about where you set the bar for what counts, and I think that links really neatly to what Jamie said earlier: that the more this work increases, the more pressure there is to develop standard practices, codes of conduct almost, though there are obviously different loaded ways of saying that. So I'm interested to know whether Jamie and Richard would like to reflect any more on that, and then we'll wind down after three. Jamie, would you like to say anything?

Yeah, absolutely. I think Eric mentioned it in his talk as well: the democratisation of data, right? As more people gain access to the field, which I believe will continue to happen as technology and access improve, there is simply going to be more data and more people conducting analysis on these types of open source information. And particularly when it touches items of national security, including nonproliferation, it is important that people aren't just going with their first gut instinct and saying that something is something when it's not. For example, I remember one time I was looking at a factory in China and I was convinced they were producing a chemical, but then I went on Baidu, which is a Chinese mapping service, and it was actually a furniture factory. So making sure things are verified, and understanding that there's space to be wrong and that people are inevitably going to be wrong at times, is really important to assessing these processes and to making sure that when people say things, those things are verified, going back to that question of verification. I think that a due-diligence process, a set methodology, some sort of code of behavior, particularly as it relates to national security interests, so as to curb misinformation and disinformation, will be incredibly valuable as data continues its democratization.

Yeah, thank you. Very interesting, and it makes perfect sense. I like that phrase: it's important to have the space to be wrong. You wonder if this isn't something that open source, non-governmental work can really contribute to higher-level policy-making processes. If we had longer, Jamie, and if you are staying after three o'clock, I'd be interested to talk to you at another time about the democratization of data, because it seems to me that's another vulnerability point: it's very patchy how that's going, and it's splitting up into different systems, so maybe open source work will need to accommodate a whole bunch of different practices, and it might get harder. Yeah, thank you. Richard, I'm going to invite you to say something quickly.

Yes, I'll just follow up with a few points. That last point by Jamie is really important. It's like learning a new musical instrument: you can't be a virtuoso the first time you pick up an instrument, and there has to be a space in which people can almost say, I'm playing around with this, I'm rehearsing, without people saying, well, you can't play that instrument properly. That's really important. Then just a couple of very quick bullet points. One of the things that's really important, and that hasn't properly come out, about this democratization of information: we have a spectrum, and it's hard to find the right terminology, from what you might call serious analysts, who are trying very carefully to gather information they can be rigorous with, to what one might call the conspiracy nut, who has picked out one piece of information out of context and is making a claim on it. And what is really needed in the world is to push people up that spectrum towards the more rigorous analysis. And then two very quick final points. One of the key things that's come out of, I think, all of the statements is that you have to know the ordinary in order to understand what is out of the ordinary. This is the contextual point that Jamie and Eric both drew on when they were showing some distinction. And the second, the last one, is about false information. One of the earliest tasks I had in my career was to see whether the newly agreed Conventional Armed Forces in Europe Treaty verification arrangements could be spoofed: could people hide key bits of military equipment under the verification arrangements? And the best sources I had, in chatting with people to work out ways of potentially doing it, were magicians, whose life is about creating a false presentation of things to hide some other aspect. There is a lot we can still learn from those sorts of people to work out, when we're looking at a picture, whether there are things that could be hidden behind it, whether somebody is trying to misdirect or mislead us. Thank you, Richard.
Yes, fantastic points, that sense of hidden in plain sight, or hidden in hidden sight. Very interesting. So it's now two minutes past three. When we originally planned this we thought about an hour's worth of presentations, but in our last webinars people wanted longer to talk about the issues. I invite anybody who needs to leave to leave now. Then I'm going to talk just briefly about some things I've learned, and maybe get some conversations going on that basis. And just to say, there's no pressure on participants to feel that their questions need to be well formed; I think we're really interested in exploring all sorts of different thoughts people have about this area.

So what I've learned today from three excellent presentations is a sense that it's really important to be careful about how you use language. It became evident in our pre-meeting chat that I was using the words "methodologies" and "tools" rather unspecifically and not carefully enough, and Jamie, you really helped by clarifying that. I've also got this sense that non-governmental tracking through publicly available information can demonstrate all sorts of things, which can, as Richard said, feed through into political processes. And as we found in previous webinars and in Eric and Jamie's examples, it can also help empower other processes, domestically or internationally. One thing that was very interesting to me throughout the three talks is a kind of potential invitation, as I outlined in my opening comments: there's a question mark over the extent to which these activities might be joined up, could be joined up, or whether it's even desirable to think about joining them up. I recognise completely that having lots of independent activity distributed in different places can be enormously beneficial, because it gives a range of flexible tools and the sort of peer review processes that Jamie outlined. Eric mentioned also that it can be very helpful to have redundancy in where you get data from and in the number of people looking at the same data. So given the benefits of individual independent projects, is it helpful to think about joining up? And if it were, what would that look like? Is there a sense of working towards a collective understanding, and Eric and Jamie have both given indications about this, of what evidence counts, and what levels of authentication count around the world, so that, across Richard's spectrum of different users, agreed standards make it easier for people to understand what's being said? So I'm opening this out to anybody who would like to comment further; any of the speakers, if you've got more to say, please do jump in.

Nothing from me. Yeah, okay, thank you. Jamie or Richard, do you have anything to say about the possibilities of joining up, something you'd like to see happen, or not see happen?

I think specifically in the nonproliferation realm, data set collection and data set creation are going to be incredibly important. In order to apply these emerging technology applications, artificial intelligence and data analysis and so on, you need big data sets. And the thing about working in nonproliferation is that oftentimes a lot of these data sets aren't publicly available.
And that can be useful for nonproliferation, in that, you know, we don't have large amounts of information about WMD just floating around. However, in order to improve analysis, it would be beneficial to have better data sets. So I think partnership, to an extent, and data sharing, crowdsourcing and collaboration, is super beneficial, but it's also important to remember that not everybody is going to have the same goals in their research. Understanding the underlying motivations for conducting research on a nonproliferation-related area of interest is incredibly important to keep in mind as you continue to develop those partnerships. And I just wanted to add something slightly unrelated, going back to Eric's earlier point on crowdsourcing and publicly available or user-generated data, as opposed to state-generated data. I focus mainly on North Korea, and our main data sets for North Korea are North Korean state media, which is a lot of propaganda. So it's kind of the opposite of Eric's problem: I have very minimal information, but it's information that North Korea wants me to see. So I have to ask myself, why do they want me to see this, and what do they want me to gather from it? That's an important distinction too: sometimes the available information isn't just "hey, I saw this cool thing", it's pretty targeted. So understanding, again, the origins of and the motivations behind that information, and any potential interest in the research, is important.

Great, and that brings us back to this sense that the context of the data you work with is very important. Maybe there's an idea that anybody can do this, that if you have the internet and the right tools, you could do this, but actually it's more finely tuned than that. To do it well, you need to be embedded in what you're looking at. Eric mentioned that over time you develop a sixth sense about things that are going to be useful, and that can't happen without some effort. I'm really interested by that point, Jamie, that not everybody has the same goals or expectations, or, and I may be putting this word in your mouth, values, when they're doing open source analysis. And that's also true of the people putting out the data: there might be different values loaded in there. Do any of the panellists, or anybody, see conflicts between some of those things? If your goal is greater transparency in building different systems for regulating things, does that conflict with ideas about privacy or ownership of information? That's just within one person, and I can see there might be a conflict there. But between states that may be interested in developing regulations, would there be conflicts between understandings of verification, or of the benefits of transparency? It feels as though there are different understandings of what verification can or should be doing. I don't know if anybody would like to come in on that.

Yeah, I would. I mean, there are, and it's not just about individual privacy; it's about what it means for security. When I was working with VERTIC in the 1990s,
I had three rules for verification, the second of which was that greater transparency should enhance security, not diminish it. Simply having transparency into another side's military operations may actually expose weaknesses that people will then want to exploit, and so you make the situation more dangerous. So there are some really key aspects to that: you have to test what information it is that's being exchanged. And Jamie made the point about not making a shopping list. One of the things I have to be very careful about in some of my work, in setting out some of the history of chemical and biological weapons programmes when you're trying to illustrate things to look out for for potential proliferation, because I do things like train export control officers from different countries around the world, is that you don't want to give away so much information that it allows people to make their own weapons. And here's the difficulty: people go, "all this information is 50 years old", and I go, "yes, but this information about how to disseminate this material from 50 years ago would still be bloody effective, pardon my language, if somebody used it today". There's another thing about the exchange of information that is really important, which we haven't quite touched on, and that is the assumptions that individuals can make. We're all prone to it. In looking at the Syrian chemical weapons programme, one of the assumptions most Western analysts make is that the Syrian perspective on the utility of chemical weapons is the same as the perspective Western powers talk about. And indeed there are some similarities, but it's also clear, when you go through some of the information and speak to people who left Syria, and I did interview a number of Syrian opposition figures at the beginning of the war, people who left before the civil war had really broken out, that their assumptions about both the military and political utility of chemical weapons were very different from those of a lot of Western analysts. So it's really important to exchange information and exchange analysis, so that somebody else can say, "well, have you made an assumption here? How do you know that what you've got from your experience actually relates to the experiences of the people who are doing this action, or preparing this material, or organising this production site?" Because that can really make a difference to the integrity of your analysis. You have to be open to challenge, because all of these things are subject to some uncertainty: to cultural backgrounds, to political backgrounds, to financial backgrounds, to the available resources to do something. And so corners are cut compared with what people expect to happen. That's a really important lesson of the Iraqi chemical weapons programme. People said, "we will know that they're doing certain things because we'll see the protective kit that people use to produce certain materials", and actually their health and safety standards were just so different from those of people who had been involved in producing chemical weapons in the West and in the Soviet Union. So those key assumptions: you just have to be ready to be challenged on them.

Very interesting.
It feels to me again that you've identified something that open source research done by non-governmental groups might find easier than negotiated verification regimes. Right back from the first webinar, there's always been a sense that this work can't replace the big structures, the big international treaties, but there is scope for thinking it could support them. Yes.

Yes, sorry, can I just follow up on that? There's a very key point there. If I have an ambition, it's that chemical and biological weapons are never used. Now, there are a range of governance areas here. The key measure of success is that the damn things aren't used; a second measure of success, slightly lower down the hierarchy, is that nobody has them. But the key question is: does it matter whether the cause of that success is an international legal instrument, or the adoption of cultural norms, or accepted levels of behaviour? I think it was Sverre Lodgaard, the Norwegian, who once said that it doesn't matter how the norms are constituted if the norms are followed, or words to that effect; I think he phrased it slightly differently. And actually, in stopping what we might call bad behaviour, certainly in the chemical and biological field, it's about this collective set of actions, not just international legal instruments, for which perhaps you can specify a level of compliance, but also other norms and other aspects of behaviour, where the compliance questions are a little bit more vague. How do you evaluate those? That raises a whole set of questions. But if you've got people looking at that more cultural level, I think it reinforces the international legal instruments.

Yes, I would agree, and I'd be very interested in what Jamie and Eric think about that. I do wonder if it once again comes back to Jamie's feeling that the more this happens, the more important it is that people follow certain standards. I have a question from Tom Hickey: can anybody recommend good books, courses or guides on OSINT techniques or methodologies? I don't know if anybody wants to answer that immediately. Jamie, I'll go after you.

Okay, no worries. I was going to point them to Bellingcat too, so I'll let you speak on that. Open Source Intelligence Techniques by Michael Bazzell: it's fantastic, and it's my one go-to textbook on this. My organisation, the James Martin Center for Nonproliferation Studies, also does a lot of OSINT publication work focusing on satellite imagery. We partner with the Nuclear Threat Initiative, NTI, on NTI.org, and we have some fantastic case studies on how to better use satellite imagery, that kind of thing. So check those out.

Thank you. Sorry, I was a little bit late because I was actually grabbing my copy of Bazzell's book from behind me. Yeah, it's good; it's like 40 bucks on Amazon, and I promise we're not getting sponsorship money for this. He re-releases it every year. It's really good desk material, a literal desk reference that I actually keep on the desk behind me, and it's like 500, 600 pages. It's not the most comprehensive one, but it's probably the broadest: it's as wide as the ocean, not quite as deep, but a little bit deeper than you'd expect. And also, I put into the chat a Google doc that we keep up. It's a little bit out of date, but it's mostly good.
This is the short link, bit.ly/bcat-tools; I have it in the chat if everyone wants to see it. It redirects to a Google doc that we keep, 20 or 30 pages of different tools and sites that you can use for whatever purpose. It has an image verification part, it has a WHOIS data part, there's a data visualisation section, all that. We keep it up to date, and it's crowdsourced: if you open it, you'll see there are always about 60 or 70 people who have it open in a tab. Just bookmark it and keep it. If you're looking at a particular topic, like "I want to track this plane, or this naval vessel, but I have no idea where to start", it's a good overview of the different tools that are available. We also mark on there whether a tool costs any money, because half of these do, varying between something like a $5 subscription and a $5,000 licence. So yep, I second Michael Bazzell's book, the seventh edition of which is out right now for 40 bucks on Amazon, and also the toolkit that we have on Google docs.

Some really top tips. On a related but slightly cheeky question: if you were a novice in this area and you were browsing courses, and we've got your definite recommendations now, but I'm aware there are hundreds of courses out there, are there any red flags that people should avoid?

Yeah, I'll answer this one. I'm not going to give too many recommendations, because Bellingcat runs training courses, so I'm going to step back and not recommend our own. But Jamie's already recommended you. Okay, good, okay, great. We do offer courses; it's a few hundred dollars for the workshops. But don't spend thousands and thousands of dollars on a course unless it's at an actual university over a whole semester. I'm trying very specifically not to trash anybody, but if you spend thousands and thousands of dollars on a single course, you're probably spending too much money, just because almost all of the investigative techniques you'd learn are freely available, with guides, extensive guides, out there. We have a bunch of free guides, case studies and walkthroughs on Bellingcat, which I've linked already. And I would really, really recommend just trying to find a niche that you're especially interested in, whatever it is: North Korean weapons development, or investigations into the war in the Donbass with its different information environments, whatever interests you. Because whatever it is, there's probably a community of people already investigating it, and probably on Twitter; that's kind of the main gathering place for this sort of stuff, though some people are on Reddit, some people are on Discord, and there are a few other places too. But stop: don't spend an ungodly amount of money, like three months of rent, on an investigative course without first making sure that the same content isn't available to you for free. And I say this as somebody at Bellingcat, where a big part of our budget is offering these online and in-person workshops, so maybe I'm going against our own business model. Know what you're getting out of it, because a lot of the stuff that's offered for money is also offered up for free.
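Since the toolkit Eric mentions has sections for image verification, one small illustration of a typical first step is reading whatever EXIF metadata survives in an image file. This is a generic sketch using the Pillow library, not a tool from the Bellingcat doc itself; note that most social media platforms strip this metadata on upload, which is part of why preserving the original upload matters.

```python
from PIL import Image, ExifTags

def read_exif(path: str) -> dict:
    """Return whatever EXIF metadata survives in an image file.
    Camera model, timestamp, and sometimes GPS coordinates can all
    appear here, but most social platforms strip EXIF on upload,
    so an empty result is itself informative."""
    exif = Image.open(path).getexif()
    # Map numeric EXIF tag IDs to human-readable names.
    return {ExifTags.TAGS.get(tag_id, tag_id): value
            for tag_id, value in exif.items()}

if __name__ == "__main__":
    # "photo.jpg" is a placeholder path for illustration.
    for name, value in read_exif("photo.jpg").items():
        print(f"{name}: {value}")
```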
And if you do want to pay for one, see if you can get your employer to pay for it. If you're paying out of your own bank account, think really hard; but if you can get your employer to pay, then all the better. All the caveats go away and you should definitely sign up if your boss will pay for it.

Thank you, that's really helpful, and I think that's a really good steer for anybody who wants to get involved: start with some free stuff. So I'm going to come back to the main body of some of the things you've talked to me about. Eric and Jamie, your live projects: I'm really interested to find out what counts as success, if that's a meaningful question in any way. Richard gave an example of a non-governmental project that fed into different systems, and we've had various examples over the webinars of things resulting in court cases. But do you have a single notion of success, and if so, what is it?

I have the standard nonproliferation answer, which is that success for me is every day that a nuclear weapon doesn't go off. Right, so that's a very basic answer. But getting into more specific OSINT successes: I used to think, "oh, if I find this North Korean missile, or this tunnel entrance, that'll be a success". Now I'm looking less at the really exciting things like that and more at: "hey, this building is interesting, no one's talked about it before; do other people in my North Korea watcher community also think it's interesting?" If the answer is yes, and I'm able to explore it more and come up with new hypotheses that maybe haven't been considered before, I count that as a success. So again, it's not me making mind-shattering observations every single day; rather, I'm slowly making progress and slowly uncovering things that maybe haven't been talked about in this field.

That sounds fantastic to me. You're doing it for the world, in a sense; you're providing visibility. But to bring it back to earlier in your presentation, there was a sense that you think very carefully about where you publish and what you publish. Who would you ideally like to read your publications, and what would you like them to do? Again, maybe this is an unanswerable question, I'm not sure.

Sure. So my goal for the people reading my work is the people who are making legislation, or making the more formal arms control and nonproliferation agreements, treaties or resolutions. Ideally, if I can get them to read something and understand that it's potentially a concern, or a pattern of behaviour that might turn into a concern, then they're able to apply more diplomatic pressure and have larger international consequences than I alone can have as a researcher at an NGO. So ultimately my goal is to get those others interested and curious about the work.

Right, very interesting. Again, this joins up with the sorts of things Richard was saying. And from that point of view, this could perhaps be part of a conversation about joining up, because from the things everybody's mentioned, there's a complexity about understanding results from OSINT data.
And if there were a community of researchers able to communicate that collectively, or, as Richard and I have discussed in the past, training for diplomats and decision-makers in how to understand this information properly, that might feed into ideas of an emerging discipline or set of practices. Yeah. Eric, do you want to come in on what counts as success for you?

Sure. Yep, there are a couple. It depends on the exact investigation and project, but I think there are two things. One thing that should not be underestimated, especially with online investigation, is that there's almost a gamification to this work. When you're hunting for, you know, missiles and things like that, the stakes are a little bit higher, right, preventing a nuclear strike or whatever. But a lot of the work we do is much lower stakes, and has more of a crowdsourced and community element to it. Even with the very serious things, like the downing of the Ukrainian airliner over Tehran mentioned earlier, there was an element of gamification: you had everyone on Twitter simultaneously trying to find content around the event, and, for the videos and photos that were found, trying to locate exactly where they were taken so as to reconstruct the flight path, all that stuff. It was almost competitive to a degree. So when you talk about incentives, I think that's something that can be underestimated: the work being done on these large events is not all done by a single journalistic entity; it's done across a large spread of people, random people who are just enthusiasts or amateurs or whatever, and often they're the ones who find the interesting stuff. And also, going off the point you made earlier, a lot of the work we publish at Bellingcat is very methodologically focused, very focused on methodology and process. We lay the process as bare as we can, showing what the steps are and how the sausage is made, and all the other bad metaphors for this, and how we reached a conclusion. We often care more about showing the methodology, and exactly what steps we took, than about the actual result. We'll sometimes publish investigations that are incomplete: an investigation that never found the conclusion we wanted, that never got from point A to point B. In the New York Times or the BBC that would never get published, because why would you read an incomplete investigation where they never found what they were looking for? For us, it's there simply to lay bare the process: this is the methodology, this is the process, these are where the missteps were, and if we had succeeded, this is where the answer would have been found. So for us, success is often not just "we figured it out, we solved it, we identified the bad guy or whatever"; it's also showing what the process is in a very clear and interesting way.
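As a small worked aside on the geolocation effort Eric describes around the Tehran airliner videos: once two sightings have been placed on the map, the great-circle distance and initial bearing between the camera positions are what constrain the reconstructed flight path. The sketch below uses the standard haversine and bearing formulas; the coordinates are invented placeholders, not data from the actual case.

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (km) and initial bearing (degrees)
    between two geolocated points, via the haversine formula."""
    R = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)

    # Haversine formula for the central angle between the two points.
    a = (math.sin(dlat / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    distance = 2 * R * math.asin(math.sqrt(a))

    # Initial compass bearing from point 1 toward point 2.
    y = math.sin(dlon) * math.cos(p2)
    x = (math.cos(p1) * math.sin(p2)
         - math.sin(p1) * math.cos(p2) * math.cos(dlon))
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return distance, bearing

if __name__ == "__main__":
    # Hypothetical positions of two geolocated videos (not real data).
    km, deg = distance_and_bearing(35.69, 51.10, 35.77, 50.99)
    print(f"{km:.1f} km apart, initial bearing {deg:.0f} degrees")
```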
So other people, whether they're journalists or amateurs or human rights investigators, will be able to learn from the process, and it becomes more of a pedagogical investigation than a results-based investigation.

Great, thank you. So you've given us a different sense there: part of Bellingcat's rationale is the sort of fundamental science of this, how to do the work, what counts as good work, how you could do your own thing. So that's yet another spin; very, very interesting. And this word "gamification": yes, I think it's come up before, this sense that it becomes a treasure hunt, that you're motivating a whole bunch of people around the world. Very interesting. We're running out of time again, so thank you. Richard, would you like to come in with some final comments?

Just to follow up: it was fascinating, almost, the cultural assumptions of the previous speakers on measures of success. I mentioned the three rules of verification I used to have in the 1990s; the third one was that a good verification system should be cheap to comply with but expensive to cheat on. And actually one of the things I see about open source work is that by being able to illustrate bad behaviour, or behaviour that's considered bad, you actually make it more expensive and more difficult for some of these people to do it, because they end up having to take on extra effort to avoid what you can see. And that, to me, is a really core reason for doing some of these things. And just a last quick comment about training. In some ways, good training is not even about the handling of open source information itself; it's about understanding the context. If we take Jamie's work, and the team at Monterey's work: when they identify a development in North Korea, it is not simply that they can handle high-quality imagery, it is that they can put that imagery into context. And that's the really important point to me. So for anybody in this seminar: if you decide you have an interest in a particular area, actually understanding the context of that area, and identifying the questions that might be raised, the inconsistencies in policies, or in the activities people have to undertake to carry out some policy you disagree with, then once you've identified those questions, that also helps you identify the sorts of tools you want to seek out to answer the questions raised by those inconsistencies.

Right, yes, absolutely interesting. I've just had a message saying we're about to be booted out, so although we could go on for much longer, I know everybody's got lives as well, and it's time to close. Thank you to all the panellists, to Jamie Withorn, to Eric Toller and to Richard Guthrie, and thank you to everybody who's come. I'm sorry we didn't get to Dan's last question about different ways to think about scaling up; our session has timed out, but we'll start with that one next time. So thank you very much, everybody, and I hope you have a great rest of your day. See you soon. Bye bye. Bye guys.