Hello, everyone. My name is Philippa Lentzos. I'm a senior lecturer in War Studies and co-director of the Centre for Science and Security Studies. With me is Dr Hassan Elbahtimy, also a senior lecturer in War Studies and co-director of the Centre. Welcome to this webinar, hosted by our centre as part of the War Studies at 60 seminar series. We are delighted to be joined by so many people: students, staff, alumni, colleagues from outside King's, and others interested in science and technology and international security. Thank you all for coming. The War Studies at 60 series celebrates 60 years of War Studies at King's by exploring key issues in security and defence. More details are available on our web pages, which Lizzie should be popping into the chat for you soon. There's also a very fabulous commemorative publication that has been produced about the department at 60, and if you've not had a chance to flick through it, do take the opportunity. The details of that publication should be posted into the chat as well. Our aim today is to take an in-depth look at the big question of how emerging technologies are shaping and reshaping the security landscape. As many of you will be aware, through the 20th century the pursuit of national and international security was largely synonymous with the use of science and technology to design and deploy powerful weapons: from the battleships and chemical weapons of the First World War, to the radar systems and atomic bombs of the Second World War, to the intercontinental ballistic missiles and nuclear arsenals of the Cold War. Scientists and engineers transformed the nature of warfare and, by extension, global politics. Science and technology not only remain crucial to contemporary national and international security, but emerging technologies are advancing at extraordinary speed, with unprecedented and far-reaching impacts on present as well as future conflicts and warfare. Digital technologies like machine learning and artificial intelligence are emerging alongside nanotechnologies, genomic technologies and cutting-edge biotechnologies, as well as additive manufacturing, space technologies, and the list goes on. All of these new technologies are refashioning a lot of old questions and raising new ones about the links between technology, society and global order. I'll pass over here to my colleague Hassan, who will continue with our introduction to the session. Hassan, please.

Thanks a lot, Philippa, for starting us off. Hello and welcome, everyone. Once more, we're very delighted today to host three guest speakers to share their perspectives on how emerging technologies are influencing global security, drawing on insights from their work and research. Collectively, we hope the panel can trigger reflection and discussion on how to approach an ever-changing technological landscape and its implications for how we define, but also address, security challenges. What is the balance of risks and opportunities offered by new and emerging technologies? How do we engage with the concept of disruptive technology, with the connotations it holds of fast and radical change, and assess its potential impacts on the global order? How can we think about effective means to control, or dare I say tame, technological advances? These and others will be some of the questions that we discuss in today's panel. Each of our three guest speakers will speak for 15 minutes.
This will be followed by a question and answer session with our audience. We have a large number of attendees today and we will endeavour to address all the questions within the time that we have, so please make sure to put your questions in early, in writing, through the Q&A channel, not the chat channel. You'll find the Q&A channel at the bottom of your Zoom window. These will be collected and put to the panel for comments and responses, and we aim to end our event at 8pm GMT. Please note that the panel is recorded and will be made available on our YouTube channel. I encourage you all to keep an eye on our social media channels. We're using the Twitter hashtags #EmergingTech and #WarStudies60, and you can venture even further by checking the Twitter handles for the War Studies department and the Centre for Science and Security Studies for additional content.

All right, our first panellist is Sean Ekins. Sean is founder and CEO of Collaborations Pharmaceuticals, a drug discovery company focused on using machine learning approaches for rare and neglected disease drug discovery. He graduated from the University of Aberdeen, receiving his PhD in clinical pharmacology and a DSc in science. He has authored or co-authored more than 300 peer-reviewed papers and edited five books on different aspects of drug discovery research using computational approaches. Thank you so much, Sean, for joining us today, and the floor is yours.

Thank you very much, Hassan and Philippa, for the very kind introduction today. I'm just going to share my slides. My presentation today I've titled "Uncovering the dark side of AI-powered drug discovery". As a little bit of an introduction, I'm the CEO of a small pharmaceutical company, and it is literally one of hundreds that are using AI and machine learning to do drug discovery. This image lists many of the companies globally that are using AI in all different areas of drug discovery, and we've seen a dramatic increase in the number of companies using these technologies over the last few years. Obviously this did not come out of nowhere: industries such as pharmaceuticals, consumer products and others have been using artificial intelligence technologies for decades. But what we have seen is more integration and more powerful technologies being used, as well as an increase in the data that has become available, and obviously more powerful hardware as well. I think this is what's now driving the integration towards more of a design-make-test cycle, whereas in the past we would have used these technologies individually for assessing molecules in terms of their properties or toxicity, or even environmental impact, for example. We're at the point now where there are a number of groups, academic and also companies, that are focused on creating chemistry in a box. This is also known as autonomous synthesis or robot chemistry, and this slide gives an idea of some of the different groups that are approaching this. So the types of algorithms we use for drug discovery are being integrated alongside the technologies to actually physically synthesize molecules. You can imagine this as a totally integrated loop where the scientist is minimally involved; minimally invasive chemistry, you could call it. And so this is where these technologies are heading.
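To make the design-make-test idea concrete, here is a toy, self-contained sketch of such a closed loop. Every component is a deliberately trivial stand-in (random numbers in place of real generative models, robots and assays); none of the function names correspond to an actual platform or API, and the whole thing is only meant to show the shape of the loop Sean is describing.

```python
# Toy sketch of a "design-make-test" loop with generative design, property
# prediction and automated synthesis chained together. All components are
# hypothetical stand-ins; the structure, not the content, is the point.

import random

def generate_candidates(leads, n=100):
    """Stand-in for a generative model proposing variants of current leads."""
    return [f"{lead}-v{i}" for lead in leads for i in range(n // len(leads))]

def predict_score(molecule):
    """Stand-in for ML property/toxicity models scoring a virtual molecule."""
    return random.random()

def synthesize_and_assay(molecule):
    """Stand-in for robotic synthesis plus an automated assay readout."""
    return random.random()

def design_make_test(seed_leads, rounds=3, batch_size=5):
    leads = list(seed_leads)
    for _ in range(rounds):
        proposals = generate_candidates(leads, n=100)              # DESIGN
        proposals.sort(key=predict_score, reverse=True)
        shortlist = proposals[:batch_size]
        results = {m: synthesize_and_assay(m) for m in shortlist}  # MAKE + TEST
        leads = sorted(results, key=results.get, reverse=True)[:2] # feed back
    return leads

print(design_make_test(["seed_A", "seed_B"]))
```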
And our little company is really focused on using the data sets that we generate ourselves, or that we can curate from the literature, to build models in different areas that we can then apply to assist in the drug discovery process. These could be for optimizing molecules to improve their properties, avoiding certain toxicities, as well as other types of endpoints. In the process of building these models we've obviously had to develop machine learning approaches, building our own software that we can then use internally as well as with external companies, and so we do a lot of fee-for-service work. Hopefully this gives you a little bit of an idea of my background. Where we put a lot of emphasis is primarily on building models for toxicities: trying to predict from a molecule's structure whether a new molecule is going to be toxic, whether it's going to hit a particular target, for example, or whether it's going to show some acute toxicity in an animal model. Over the years we've been able to put together lots of different computational approaches and publish them, and this slide just shows some of those examples, heavily focused on the area of acetylcholinesterase, which is both a drug target for diseases like Alzheimer's, but also a target that molecules may hit to have a desired effect as a pesticide, for example. What we've been doing over the last few years is integrating these machine learning technologies with other tools for designing molecules in the computer. This is called de novo generative design, and this schematic gives you an idea of how we can use a database of molecules, for example from a widely available database like ChEMBL, to train a machine learning algorithm, and then use our models for various properties to optimize the molecules that come out of it. An example that we've used to highlight the potential here is a recent publication on a natural product called ibogaine, where traditional medicinal chemists were trying to come up with new molecules that remove some of the liabilities of the chemical, such as cardiac toxicity, but retain some of the biological activities. In the process of this work, which was published in Nature last year by Cameron et al., they identified an analog called tabernanthalog. We thought it would be an interesting challenge to see whether our computational approach would be able to replicate the traditional methods the medicinal chemists would use. So we integrated our computational models and performed an approach called multiple-parameter optimization to see if we could replicate this. What we found was that we could very rapidly produce thousands of analog molecules that were very structurally similar to ibogaine, including tabernanthalog, shown here on the graphic on the top right, as well as many structurally different molecules, shown on the bottom middle of the slide. So we get quite a broad coverage of chemical space. This next slide just makes the point that we trained the model on this large database, ChEMBL, and were able to look at desired activities against the 5-HT2A receptor, as well as undesired activities. All of these represent machine learning models that are input into our design model, and we can generate not only the molecule of interest, tabernanthalog, but many other analogs with very similar properties.
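As a toy illustration of the kind of structure-based property or toxicity model that feeds into a design workflow like the one just described, the sketch below trains a random-forest classifier on Morgan fingerprints computed with RDKit. The SMILES strings and the "toxic"/"non-toxic" labels are invented purely for illustration and have no scientific meaning; the point is only the mechanics of going from a molecular structure to a predicted label.

```python
# Toy structure-to-toxicity classifier: molecules are featurised as Morgan
# fingerprints (RDKit) and a random forest (scikit-learn) predicts a binary
# label. SMILES and labels below are invented for demonstration only.

import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def featurise(smiles, n_bits=1024):
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=n_bits)
    arr = np.zeros((n_bits,))
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

# Hypothetical training set: (SMILES, toxic? 1/0). Labels are made up.
training = [
    ("CCO", 0), ("CC(=O)OC1=CC=CC=C1C(=O)O", 0),
    ("CN1C=NC2=C1C(=O)N(C(=O)N2C)C", 0),
    ("c1ccccc1", 1), ("ClCCl", 1), ("C1=CC=C(C=C1)[N+](=O)[O-]", 1),
]
X = np.array([featurise(s) for s, _ in training])
y = np.array([label for _, label in training])

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Score a new, unseen structure (nicotine, again purely illustrative).
query = "CN1CCCC1c1cccnc1"
prob_toxic = model.predict_proba(featurise(query).reshape(1, -1))[0, 1]
print(f"Predicted probability of toxicity for {query}: {prob_toxic:.2f}")
```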
So in total we were able to produce over 100,000 molecules, and tabernanthalog was in the top 50 of those molecules. This provides an example where a computer can come up with molecules that chemists would make themselves, obviously traditionally making them physically. That brought me up to the point where I was introduced to the field that ultimately got me this invitation today. I was invited to talk at a conference called the Spiez CONVERGENCE conference, and they were interested in getting a presentation from me on how these AI and machine learning technologies could potentially be misused. This is not something that I'd ever really considered previously, even though I'd worked on lots of infectious diseases and I'd worked on acetylcholinesterase inhibitors. The really interesting aspect of this for me was that I'd not really thought about the ethical underpinnings of what we were doing. We had a very short period of time to put together a presentation, because I left it to the last minute, and it gave me some time to think about the challenges we face in recognizing whether a molecule can be used for good or for evil. This slide will hopefully give you an idea of just how close molecules that are useful as pesticides are to chemical weapons, which ultimately look very similar. Can you decide which side of this slide shows pesticides and which shows chemical weapons? Unless you have an understanding of this field, you would probably have no idea; in fact, pesticides are on the left-hand side and chemical weapons are shown on the right-hand side. So my challenge was: could we ultimately use the models that we build, but flip them around, and instead of trying to find molecules that are not toxic, identify molecules that are more toxic? This slide is what I presented at the conference, where we used the generative approaches that I showed previously to try to make molecules like VX. What we are showing in the image on the right-hand side is a visualization of the molecules that were generated. We were able to create over 40,000 virtual molecules in just six hours. The molecules in this blue color are molecules with known LD50s, and VX is shown towards the bottom here as a purple dot. The salmon-colored molecules are those that were created by the software. So we were able to generate thousands of molecules, many of which were known analogues of VX, which we found by comparing to databases. A very large proportion of these molecules were actually predicted to be even more toxic than VX and to have potency against acetylcholinesterase as well. This was quite alarming: that we could create so many molecules virtually. We never went to the point of actually making any of these molecules. But in the process of the work we had basically come up with a design cookbook for how to do this in the computer, how one could just take these technologies. We could take the data sets that are available in the literature from different databases; the software for generative design and machine learning is all open source; and there are many other tools to predict the synthesis routes of molecules, called retrosynthesis software, that are either commercial or open source. Obviously, the next stage would be to design the molecule's synthesis route, and then to have someone make the molecules.
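To illustrate the "flipping" Sean describes, here is a purely illustrative sketch of a multi-parameter optimisation (MPO) score of the general kind used to rank generated molecules. The predictor outputs are toy numbers standing in for trained models (e.g. predicted target activity and predicted lethality); the only point is that the same pipeline rewards rather than penalises toxicity when a single term in the objective is inverted.

```python
# Illustrative sketch: an MPO score that normally penalises predicted
# toxicity can be inverted so the same generative pipeline rewards it.
# Numbers and molecule names are hypothetical.

def mpo_score(pred_activity, pred_toxicity, toxicity_weight=1.0, invert_toxicity=False):
    """Combine predicted properties into a single 0-1 design score.

    pred_activity   -- predicted potency against the desired target (0-1)
    pred_toxicity   -- predicted toxicity/lethality (0-1, higher = more toxic)
    invert_toxicity -- False: toxicity penalised (normal drug discovery);
                       True:  toxicity rewarded (the 'flipped' misuse case)
    """
    toxicity_term = pred_toxicity if invert_toxicity else (1.0 - pred_toxicity)
    return (pred_activity + toxicity_weight * toxicity_term) / (1.0 + toxicity_weight)


candidates = {  # hypothetical generated molecules with model predictions
    "analog_A": {"activity": 0.9, "toxicity": 0.20},
    "analog_B": {"activity": 0.7, "toxicity": 0.95},
}

for mode in (False, True):
    ranked = sorted(
        candidates,
        key=lambda name: mpo_score(candidates[name]["activity"],
                                   candidates[name]["toxicity"],
                                   invert_toxicity=mode),
        reverse=True,
    )
    print("inverted" if mode else "normal", "objective ->", ranked)
```

Run as written, the "normal" objective ranks the low-toxicity analog first, while the "inverted" objective pushes the highly toxic one to the top: the models are unchanged, only the sign of one term differs.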
And there are literally hundreds of CROs around the world that can do custom synthesis, and as far as we know there are really no checks and balances on whether someone makes a pesticide or whether someone synthesizes a chemical weapon. Clearly there are many other ingredients to this cookbook that we haven't exposed in this presentation, for obvious reasons. And as I mentioned at the outset, we're moving towards the point now where you may not even need a chemist in the future; robot automated synthesis approaches may be able to derive these types of molecules quite readily as well. So we could get to the point where these tools could produce these types of molecules without a human in the loop. So there are implications. We've had several months since we did this test case to think about this further. We did not explore how to make any of the molecules, the synthesis itself, but the technologies are out there that could potentially be used to circumvent the need for the precursor molecules that are ultimately used to make known chemical weapons, and that's something that perhaps hasn't been considered. We didn't try to make anything else besides VX, but these types of approaches could be used for literally any molecule. And I think, now that these technologies are out there as open source, the tools could be used by literally anyone with minimal knowledge of chemistry to do something very similar to what we did. This may hint that we need to somehow lock down the data sets or the tools that we're making available, and that would obviously be important to prevent malicious use. There is precedent for locking down technologies such as GPT-3, which was previously locked down through an API to prevent abuse; more recently, in November, this API-based lock-and-key mechanism was relaxed. So this may be a way forward, where people who are genuine could request an API key and then freely use the models. So, in summary, hopefully I've highlighted in this brief presentation the potential to use these AI and machine learning technologies to create massive numbers of chemicals that are synthetically reasonable. Obviously we would use them ordinarily for drugs and consumer products, etc., and the advantage of these tools is that instead of having to physically make all of the molecules, we can make the ones that look the best and score the best. These tools can be plugged into the automated synthesis hardware that is being created, and the next stage will be for other groups to demonstrate that they can be integrated in this way. But we have to think about the unintended consequences of the technologies that we're making available, whether that's for making chemical weapons or illicit drugs. There's clearly potential here with technologies that we have to in some way control, and this could have really quite large consequences for the industry. Currently we're seeing a lot of visibility for AI alongside the pharma industry, and it probably just takes one example of this kind of misuse for there to be a large reputational risk for the industry. Without being too alarmist, I think this highlights the potential dark side of the technology, and so it's critical that we keep humans in the loop. We can obviously erase what we've done, in the sense that these were virtual molecules; we never actually physically made anything. And I think this really points now to having more discussion at conferences.
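As a minimal sketch of the API-key gating Sean mentions (analogous to how access to GPT-3 was initially mediated through a keyed API), the snippet below exposes a prediction model behind a web endpoint that refuses requests without an approved key. The key store and the predict() stub are hypothetical; a real deployment would add user vetting, auditing, rate limiting and key revocation.

```python
# Minimal illustrative API-key gate for a property-prediction model.
# The model stays on the server; only predictions are returned, and only
# to callers presenting a key that has been issued to a vetted user.

from flask import Flask, request, jsonify, abort

app = Flask(__name__)

APPROVED_KEYS = {"demo-key-123"}  # toy example of keys issued to vetted users


def predict(smiles: str) -> dict:
    """Stand-in for a proprietary property-prediction model."""
    return {"smiles": smiles, "predicted_score": 0.42}


@app.route("/predict", methods=["POST"])
def gated_predict():
    api_key = request.headers.get("X-API-Key", "")
    if api_key not in APPROVED_KEYS:
        abort(401, description="Missing or unrecognised API key")
    payload = request.get_json(force=True)
    return jsonify(predict(payload.get("smiles", "")))


if __name__ == "__main__":
    app.run(port=5000)
```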
This is not something that I've previously seen discussed in the sphere that I normally inhabit, and I think we need to bring more ethical guidelines into this area. Lastly, I would just like to say that this potential for misapplication of technologies may not be so far-fetched. It got me thinking about how technologies have been portrayed in popular movies over the years, and it's not too far of a jump to think that maybe in the future James Bond may be battling an AI-generated chemical or biological weapon. And with that I would just like to acknowledge Fabio Urbina from our group, who generated this example, and Philippa and Cedric, who have been instrumental in making me think a lot more about this and its implications. And obviously I'd like to thank the NIH, who have funded us for applying and developing our technologies for healthcare-related applications. Thank you.

Thank you so much for your presentation, Sean, and for getting us off to such a terrific start. When we were putting this panel together, which is on science and security, we thought what better way, or what more fitting way, to start us off than from the perspective of a scientist, and you certainly delivered. Thank you so much for this really engaging and very personal story about how you first got involved in security issues and became aware of the dark side, going from AI-produced drugs to AI-produced biochemical weapons. I had the privilege of being in the room when you first told this group of fairly senior security and chemical security experts about this experiment, and I still remember the jaw-drop everybody had when you said, yeah, actually, we flipped the switch and within just a few minutes we had a whole range of toxic chemicals, some even worse than VX. So thank you so much for that, Sean, and we'll come back with more questions after the next two speakers.

Our next speaker is Kathleen Vogel. Kathleen is Professor and Interim Director in the School for the Future of Innovation in Society, very wonderfully grandly titled, at Arizona State University. Kathleen holds a PhD in physical chemistry from Princeton University. She has served previously in the US Department of State as a Jefferson Science Fellow and also as a William C. Foster Fellow. I've known Kathleen for many years; she is a wonderful person, an extremely bright scholar, and incredibly generous. She's been doing really wonderful, innovative research for years, especially on the intelligence side and the life science side, and more recently on the AI side as well. She's been a fellow at the Turing Institute just up the road. And I thought I would give a very, very brief plug, Kathleen, for your very recent endeavour: this triple volume of the Nonproliferation Review in honour of Ray Zilinskas, an absolute giant in chemical and biological weapons, who many of us on the call will have known personally or known of, and who sadly passed away just last week. It's, as I said, a triple volume, a stupendous amount of effort pulling all these authors together, so thank you so much for your efforts on that, Kathleen, and for your continued innovation and keeping us all pushing at the boundaries of our research. I'm so excited you're here and very much looking forward to what you've got to say. So with that, over to you. Thanks for joining us.
Thanks so much, Philippa, that's a nice introduction, and I appreciate you also showing that special issue. There were so many folks involved in the contributions, and it was a wonderful tribute to Ray Zilinskas, so that was just a great project to be involved with. And thank you so much for letting me be a part of this very special event, the 60th anniversary. What's interesting is that I think my talk is really going to complement what Sean talked about earlier; we didn't coordinate our talks at all. What I'm going to talk about today actually pairs very nicely with some of the things Sean was talking about: it's going to focus on the life science, big data and AI nexus, and how we think about that. To what extent is it shaping or reshaping the security landscape now and into the future? My talk is going to focus more on how we think about the social dimensions of these technologies and, as Sean mentioned at the end of his talk, the human in the loop: how do we actually critically and deeply think about that, even with the development and use of these technologies? As a general gist of my talk today, the main theme I'm going to put out there is the common quote, "the more things change, the more they stay the same", or rather, I would say, the more we need to consider past wisdom on how to assess the threat from emerging technologies. Take, for example, things like tacit knowledge or know-how: the hands-on way of making science and technology work in practice. Over the past 20 years, and I think Sean hints at this a little bit in his talk, there have been many aspects of biological or even biochemical work that have become much easier to execute. Some things have become automated; some things have become routinized and standardized; but not everything. So the question, I think, remains: what has become standardized, what has become automated, and what has not, in a given scientific and technological process? I think those are still key questions, even with these new technologies coming on the scene, and I don't think we have a really great handle on that in terms of assessing how it then maps onto threats. What I think is still very interesting is what exactly is required for technology transfer, thinking not only about the technical dimensions of that but also the social dimensions. These are still things that we don't often examine carefully in present debates about threats related to emerging technologies. And even when new technologies come along and things become automated or routinized, new skills and often new kinds of know-how or tacit knowledge connect to those new technologies. For example, as things become automated: how do you operate that piece of machinery, or that new tool or technique? If something breaks down or something goes wrong, how do you fix it, or figure out how to fix it, or work out what went wrong to get it back on track? If an experiment goes awry, how do you troubleshoot it to figure out what went wrong? And, connected to that, what kinds of skills or disciplines are required to get those things to work in practice?
Does it require one person who can do these things, or does it require a team of different kinds of people with different kinds of expertise? These are all questions that still remain when we look at the biotech landscape and how we might think about security threats into the future, and I'm going to spend more time parsing some of those questions out more carefully and more deeply. So with emerging technologies, I would still ask some of the same questions that I have asked of past technologies, and then really probe what is different, while also looking at what is the same, rather than assume that everything is changing with the march of time and of technology. I think it's also really pertinent to examine the continued hype around developments in the life sciences and emerging biotechnologies, and to think about how the stories and narratives that we see and experience play into our own risk perceptions about bio threats, or policymakers' perceptions about bio threats. What are the stories we tell ourselves and others about bio threats? How and why do these stories matter? Are they grounded in reality or empirical data? If so, how and in what ways? If not, how is it that these bio threats still retain currency in different ways? Who or what might be shaping how these narratives are told or how they circulate, and for what purposes? These are questions we still need to ask, even with emerging technologies, and in particular in the biotechnology space. In my own work, one area I've been really interested in looking at, and I think this touches on some of what Sean mentioned in his talk, is this interest, which has been growing for a few years now, in the digitalization of biology: how biological information is increasingly digitalized, and how biological skills are becoming more automated to enable the sharing of knowledge, practices and lab skills around the globe. There has been a lot of public and policy attention on how this avalanche of new biomedical and life science big data, coming from genome sequencing, a variety of different databases, electronic medical records and other sources, is going to usher in a new era of precision medicine that will reap a lot of different public health benefits. And in tandem with that, you also see an emerging focus on the potential nefarious acquisition and use of this data by different kinds of actors, and I think this has been talked about quite a bit in different intelligence, law enforcement, academic and think tank settings. Some have highlighted how China-based hacking groups have been responsible for, or strongly implicated in, several hacking incidents in the United States involving biomedical big data. Others have observed how Chinese hackers have attempted to obtain data from, for example, clinical trials and scientific research studies, as well as intellectual property involving medical devices. It is still very unclear what specific motives are behind some of these attacks, for example whether they are for purely economic or industrial gain, or whether there might be a darker purpose, for example to aid China's growing security apparatus.
In June 2021, the Biden administration signed a new executive order regarding the threat posed by China to US information technology systems and digital data. You can see that these developments point, over time, to a growing concern over China as a US strategic competitor and national security threat in this nexus of data and information technology. I think it is clear, if you look at what has been happening in China, that China has sought to increase its biotech capability over the past 15 years; there have been very specific government directives along this front, and the Chinese government has made biomedical big data a national priority, launching a billion-dollar initiative in 2016 to develop this area. So when we look at these Chinese biomedical hacking attempts, this could be an effort to get biomedical innovation on the quick and cheap; it could be something China is trying to do. If that is the concern, I think we still need to be asking the questions: how and in what ways has China actually been able to use this data for economic or security gain? How easy or difficult has it been for China to accomplish these goals? What opportunities or challenges might China face in its biomedical hacking aims? And ultimately, the thing I'm most interested in is: if you want to know the answers to those questions, how might we more accurately assess them? What kinds of data and what kinds of analysis do we need to get at those very serious policy questions? I would say that when I look so far at different studies or focal points on the China threat related to biomedical big data, what I tend to see is a focus on discrete pieces of information, the data itself: for example, things like genomic data, or patient data like electronic health records, that may be acquired or passed on. And then there tends to be an assumption that if China has this data, it's only a matter of time (usually the assumption is that it's soon) before China overtakes the United States and becomes the new powerhouse in the biomedical and biotechnology arena. What's interesting, as I reflect back on this discourse, is that it's very similar to other examples we've seen from the 1990s to the present, in which different intelligence and policy officials have pointed, I would say wrongly, to how advances in biology and biotechnology are going to lead to new and growing security threats. In past examples the focus was more on access to materials: access to viruses, bacteria, toxins, pathogens, synthesized pieces of DNA, for example. Or the focus has been on the acquisition of new kinds of biological techniques and technologies: PCR, synthetic biology techniques, genome editing, cloud labs. You've seen this a lot in past discourse. Or you see a lot of concern about the published materials and methods sections of scientific papers, and that information getting out and being acquired by someone, by China for example, to do harm.
I've argued previously that this focus on the material aspects of technology is a very limited way to think about science and technology: it focuses only on the material aspects and doesn't really consider the social requirements and conditions that enable science and technology to work in practice, in the real world. I've long argued that if we don't consider these social dimensions of science and technology, we're going to come out with a very erroneous way of thinking about what it takes for a state or non-state actor to develop S&T capabilities for harm. And there are serious consequences if we don't really think about these social dimensions. I would point to a number of different flawed assessments, from the Soviet and Iraqi bioweapons assessments in the past, to the overhyped bioterrorism threat since the 1990s, and various biosecurity concerns since that time, where this focus only on the material or purely technical aspects of biology or biotechnology has led to assumptions about threats that don't really match up with reality. In the current concern over biomedical big data, I see the same focus again, and that concerns me and makes me think we need to look at this a little more carefully. In terms of other scholarship, other folks doing work in this space, there are really interesting bioinformatics and big data researchers, several in the UK as well as in Europe, who have tried to emphasize an alternative understanding and framework for how we think about these biomedical data threats. This research focuses much more on the methods and assumptions involved in using biomedical big data for discovery, the socio-technical challenges involved in the extraction of knowledge from digital infrastructures, and the implications of choices in data curation for the outputs and uses of science and technology. This work is consistent with a broader range of bioinformatics scholarship on the challenges of working with heterogeneous biomedical big data: it is not a trivial task to harness these data for other useful or nefarious applications. There are often errors associated with some of this data, and there are other data quality or compatibility issues that require substantial data curation and preparation before the data can actually be used in practice. The crux of this scholarship is the challenges involved in creating, transferring and using data for the production of knowledge that can lead to any type of biomedical application, whether we're talking about a positive, beneficial application or a more nefarious one. So the key point here is that understanding big data, and I would extend this to understanding the security threats of biomedical big data, always needs to be related to the social context of science and technology, and the social context of how we generate, use and transfer biomedical big data. It means asking fundamental questions not only about the data but about the who: who has collected and curated the data? What are their skills and expertise? How did they collect the data, how did they save the data, under what conditions and in what context did they do so, and how did they store the data?
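As a purely hypothetical illustration of the curation burden just described, the snippet below shows two invented "biomedical" record sets that nominally hold the same information but disagree on identifiers, units, coding and completeness, and so cannot simply be pooled and mined without explicit harmonisation work. All of the data and column names are made up for illustration.

```python
# Hypothetical illustration of why heterogeneous biomedical records need
# substantial curation before they can be used together. All values invented.

import pandas as pd

site_a = pd.DataFrame({
    "patient_id": ["A-001", "A-002", "A-003"],
    "glucose_mg_dl": [92, None, 110],                 # mg/dL, with a missing value
    "diagnosis": ["T2D", "t2d", "Type 2 Diabetes"],   # inconsistent coding
})

site_b = pd.DataFrame({
    "subject": ["B-17", "B-18"],
    "glucose_mmol_l": [5.4, 7.9],                     # same measurement, different units
})

# Harmonisation steps a curator must make explicit before any analysis:
site_b = site_b.rename(columns={"subject": "patient_id"})
site_b["glucose_mg_dl"] = site_b["glucose_mmol_l"] * 18.0   # mmol/L -> mg/dL
site_a["diagnosis"] = site_a["diagnosis"].str.upper().replace({"TYPE 2 DIABETES": "T2D"})

merged = pd.concat(
    [site_a[["patient_id", "glucose_mg_dl", "diagnosis"]],
     site_b[["patient_id", "glucose_mg_dl"]]],
    ignore_index=True,
)
print(merged)
```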
And I think these same questions could also be asked of those who might be on the receiving end of biomedical big data, and then we can ask a further question: what is required to translate the received data to work in a new context and to be used in a practical application? Data scientists are well aware that more data is creating ever more complex data ecosystems to curate, manage and navigate. Whether biomedical big data translates into the touted benefits argued for by precision medicine advocates, or whether it translates into more and varied kinds of security threats, really depends on the social sense-making processes involved in working with this data. So we need more understanding of the social context of working with biomedical big data to get a more grounded understanding of whether the state or non-state actors we're concerned about could actually use this data in practice. Again, I think we need to look more carefully at the social dimensions of biomedical big data. To better inform decision-makers about these kinds of security threats, we need new kinds of research questions and new resources to enable research agendas focused on these socio-technical dimensions of biomedical big data. This would really enable us to parse out more carefully how, and under what conditions, certain actors of concern could utilize biomedical big data to pose economic or security threats to the US or to other countries. So I do think we need to spend more time and attention on this increasing digitalization of biology, but we need to put more emphasis on the social context of developing, working with, using and transferring this kind of data, and on what that looks like. Just to close, one other issue that I wanted to throw in for conversation, if folks are interested in talking about it, actually connects to the special issue that Philippa showed at the very beginning of my introduction. For me this relates to an issue I have become more concerned about in recent years: the pipeline of the next generation of biosecurity experts, and really trying to cultivate that, right across the globe. As Philippa mentioned with the passing of Ray Zilinskas, and I would add over the past few years, the biosecurity community has lost several leading biosecurity experts through death and through retirement. There's a lot of knowledge passing from this field, and we need to find more ways to retain and maintain not only the lessons of these past experts, and how to impart those to future generations, but also to bring diverse new generations of folks into this field. So this is something I'm also concerned about as we look at the future of the security landscape: how do we increase the pipeline, how do we find more diverse folks to enter the field, and how do we enable them to meet, interact, learn from one another and build the kinds of networks that can help us in advance of anticipated as well as unanticipated biological events that we might need to think about in the future? I'm going to end on that. I'm curious whether folks want to talk about the pipeline issue; I would definitely welcome conversations around that. So thank you.
Great, many thanks, Kathleen, for your talk, and for the reminder to venture beyond just the material dimensions of technology and to think about the many social questions that frame how we think about biomedical big data, and technology more broadly, and how that can sometimes be overlooked. A reminder to everyone to start sharing their questions; we've already received a few, and we're taking them on the Q&A channel. You can access it using the lower bar of your Zoom window. We will collect these and then bring them to the panellists for discussion. Before we do that, let me introduce our third panellist, Dr Tim Stevens. Tim is a senior lecturer in global security in the Department of War Studies and head of the Cyber Security Research Group. His research focuses on the politics and geopolitics of cyber security, and a plug for his current project: he is currently writing a book on the international political economy of cyber risk. Tim is a fellow at the Research Institute for Sociotechnical Cyber Security and at the Conservatoire national des arts et métiers in Paris. Thank you so much, Tim, for joining us, and over to you.

Thanks for that, and thanks also to Sean and Kathleen, who touched on quite a few issues that I was going to mention, and I'll continue to do so during this short presentation. It's interesting, I think, sometimes to reflect on the fact that when we think about cyber security, we still talk about things like the internet as emerging technologies. We've had general-purpose computers now for at least 80 years. We've had the internet for 50 years, at least in its initial, provisional forms, and the web is 30 years old. And cyber security, this thing that's meant to be securing those computer networks and systems, how, in what sense, is it emerging? Well, it very clearly is, in a couple of very important ways. The first is that we haven't worked out how to secure it. The second is that this informational substrate for the world is growing in complexity and sophistication all the time. And if you layer on top of that the usual suspects, like big data analytics, blockchain, AI, quantum and so on, you've got this informational universe, if you like, that cyber security is attempting in some way to secure, obviously so as to facilitate the types of research we've just heard about. So this is a constantly evolving, changing, very dynamic field of practice, obviously, but also of policy, and therefore of research, and I sometimes reflect on how difficult it can be in teaching to keep up with things that have happened literally yesterday. There's a building crisis in Ukraine, for example, which I'll return to later. But cyber security has come out of a very specific place: traditionally it was a bit of information security. Securing data; keeping proprietary data or secret information secure from prying eyes, keeping it confidential; making sure that data has some integrity; and making sure that it can be accessed by the people who are meant to access it. But cyber security is something much bigger than that, I think, and even though you'll hear people say it's basically exactly the same as information security, I disagree. I think the way it has been couched in policy and strategy for quite a few years now tells us that it's not just about information security.
It's not just about that protective function that infosec, information security, has traditionally had; it's something much more proactive, something that tries to intervene in the world in very different ways through using and exploiting computer networks and systems. We can think of it being used as a vehicle for certain forms of warfare. It's certainly a platform for intelligence and for espionage, both strategic espionage and industrial espionage. And cybercrime is a massive problem: criminals have found computer networks, and of course the internet is the single best-known information system, a fantastic way of generating revenue. Terrorists also use it for various ends, as do a whole range of other actors, but they're all trying to exploit this informational substrate for their own strategic, economic or political gain. So this is having some interesting effects on business as usual at the national level and of course also at the international level, which is really where my comments are going to sit. I just wanted to draw attention to three main ideas that sprang to mind earlier; there are lots more, and these are pitched at quite a high level, so hopefully we'll get into some of them in the Q&A. The first idea is that cyber, and not everyone likes that term, but this is how people are referring to it in policy, really is destabilizing some traditional binaries in how we think about international life, and indeed about national security as well. The first is that this is not an environment in which the state operates on its own. There's a whole range of non-state actors involved in this environment; I mentioned criminals a moment ago, and terrorists, but it's also you and me. We are in some sense agents of security: we're constantly being asked to make sure we've patched our systems, that we don't download that file, that we don't go to that website, and so on. We're agents of security too. But of course there's also the huge influence of the private sector, which quite often works very closely with governments, with academia and with civil society, working in various ways to understand this environment and to promote forms of safe practice, or privacy through privacy activism, and so on. And the private sector, as I just mentioned: these are the people who own and operate infrastructure, the people who sell us our products, the people who sell and maintain products for government. They are hugely influential actors in this space. The public-private division is further complicated: you have various actors from each side of that putative divide very much blurring that binary, with hybrid arrangements, public-private partnerships, all sorts of cooperation and coordination, and the private sector very much wrapped up in the idea that it can provide solutions to a lot of our cyber security problems. The whole inside-outside binary, when we think about the state, is also further blurred. The global internet is exactly that: it's global; its material infrastructure extends into space, under the oceans and across every land mass there is. The idea that you can somehow remain inside a national shell and do cyber security has long gone, because everything is so interconnected, and I'll return to that theme in a moment.
And as I said, this other binary, between protection and exploitation, has also disappeared, because quite often it's the same technology being used for both of those efforts: dual use, or perhaps even multi-use, of the same technologies. And at the international level, when we think in the Department of War Studies about the differences between war and peace, we all know, as people who work in this field, that it's not quite that simple, but cyber really complicates it, because of all sorts of issues around anonymity, attribution, plausible deniability and so on. Lucas Kello at Oxford has this fabulous word, "unpeace", to describe the state of play that is very much characterised by the use of cyber capabilities. So it's destabilising those binaries. The second main theme is these issues of dependence and interdependence. As we've just heard from Kathleen and from Sean: Sean mentioned securing AI against malicious use, and Kathleen was talking about securing intellectual property from Chinese state actors who might wish to steal IP and potentially translate or convert it into something of use to the Chinese project. Computer networks and systems are themselves a critical infrastructure. They are an enabling infrastructure, something that supports a greater endeavour, and computer networks are doing this across multiple economic sectors. There is a dependence on these digital infrastructures, and this is partly why cyber security is so important: they have to be protected and secured from a variety of actors, and also of course from accidents and from disrepair. Things happen in the global networks that mean people notice when they can't access Facebook; sometimes that's just because systems were misconfigured, or somebody pulled out the wrong plug or flipped the wrong switch. Things happen, and trying to develop resilience to those types of events is part of what cyber security is all about. So cyber security is a form of security, but it's also a condition of other forms of security. You can think of it as being a guarantor, in some ways, of national security and of economic security: you cannot do innovation in a high-tech environment unless you're able to secure your data, so it has a role there too. Even in nuclear security: we often hear about the cyber-nuclear nexus and what happens if nuclear command and control systems are hacked in various ways, and there's a lot of work out there. People are very concerned about space security: some internet traffic, not the majority by any means, involves satellites, and of course those satellites are themselves information systems that have to be secured, and so on and so forth, right down to the level of the individual when we think about human security. The pandemic has actually shown us this very well. It has also shown us quite how good, in some respects, cyber security is, but it has presented a whole new range of targets, some of which we've just heard about, for example around vaccine research. And as people have switched to working online, hybrid working arrangements like this one have to be secured so that they can't be hacked or subverted as we go. So if you ask what cyber security is for, what it secures, you get a very complex answer; it's not just about computer systems. And as Kathleen rightly said, and I agree with this entirely and it informs my own work,
there's a social aspect to this as well that's hugely important, alongside the technical one. So that's the dependence angle. The interdependence angle is the transnational aspect I mentioned earlier, about infrastructure spanning continents and so on. What that means is not just that these infrastructures are extended across space and time, as it were; they're also interconnected, which means that things that happen in one place can very rapidly happen in another. So if you take down an asset, for example, in one country, and you don't quite know what you're doing, there can be unintended effects and perhaps blowback on your own territory. That has happened in the past, for example with Russian cyber operations. And that points to the fact that when we think about the risk to these environments, it's not just organizational risk for the institution, the organization, the firm, the company; it actually extends across jurisdictions and is no respecter, if you like, of particular local political flavours or cultural aspects. This is a systemic, infrastructural risk that we need to pay attention to. Exploiting these networks can be both a source of security for a particular country and also a source of insecurity and potential instability. We don't really know how to avoid, although we've been quite lucky so far in avoiding, cascading failures in tightly connected digital networks and the critical infrastructures that rely upon them. We don't quite know how escalation works when two countries are trying to duke it out or explore one another's networks. We don't quite know how to approach crisis management in this domain, and the unfolding test case at the moment, of course, is Ukraine, where we're seeing Russia (we presume it's Russia, based on analysis) gradually ratcheting up cyber operations against Ukraine, potentially softening up both the civilian population and the government for a potential incursion at some point. And then there's the third issue I wanted to raise, which is that this is fully political. This is not just a technical proposition, and as I said, I agree with Kathleen in that social respect. As much as the internet and information systems are a fabulous tool for innovation, exploration, discovery, communication and so on, and for economic gain, they are also a military and intelligence domain. There's a strategic benefit in investing in and developing institutions and capabilities, some of which may be characterized as offensive in nature rather than just defensive. And these have been becoming, rapidly and for some years now, an issue not just for national policy but for international policy, law and regulation, and also for global governance: how do we govern this space? It's very, very difficult to do so. One of the key battlegrounds here is about developing norms for responsible state behaviour, where you essentially have competing diplomatic tracks at the United Nations, one sponsored by the US, effectively, and the other by Russia and China. It's very divided, very ideological, and some people are calling it an aspect of the new Cold War, but it has been going on since the late 90s. And who determines, for example, the technical protocols that make the internet and other technologies work is very important; what's at stake there, essentially, is whether we bake democracy into our technology or found new technological standards on authoritarian ideals.
That is what's at stake in some of those discussions. So, just to conclude, my personal view here is that cyber, if we want to call it that, is potentially quite destabilizing, certainly to the way we think about international order. But it could be worse. We've yet to see anyone die directly through cyber means, for example, but that should not be the sole criterion for evaluating an emerging technology or a set of emerging technologies. There's no direct loss of life on a regular basis from nuclear weapons either; it doesn't mean there isn't a problem set around nuclear weapons, and there certainly is a problem set around cyber: social, economic, political, technical, cognitive, psychological, the list goes on. One of the issues I'd finally raise in respect of cyber is epistemic: this is the infrastructure that facilitates misinformation and disinformation, breeds uncertainty, and engenders an erosion of trust in institutions and in infrastructures. What do these do to political stability, in turn, at both national and international levels? We've seen this in the last five years or so in numerous examples. And I guess the meta issue I will close on, Philippa, is whether this hyperconnectivity that is facilitated by, and integral to, the very nature of the information systems we've built is actually good for the human species at all. There are huge opportunities for cooperation, some of which we've heard about in terms of biomedicine in the last hour or so, but it is also generating, and in some ways facilitating, some form of global cognitive malaise. I don't wish to come off as a Luddite here, but I do think this is an issue that's really worthy of consideration, and it speaks again to the point Kathleen raised, which is that when we think about computer networks and systems and information, we have to think about the social as much as the technical. I guess that's my question, but thank you very much.

Thank you so much, Tim, that was really fantastic. Thank you also for stepping up for the home team; it's really fantastic to be able to showcase some of our own scholars in the department at these seminars and events that we're hosting. I really enjoyed your talk, Tim, and I continue to learn from you, as cyber is, as you know, an area I'm not so familiar with, and you presented it very clearly; it was really very enjoyable to listen to. We've come to the point where we are now asking you, the audience, to ask questions. What are your major concerns, questions, things that you took away from the presentations we've had? Please do use the Q&A function; we've already had a fair few questions in there and we'll get to those in a minute. Please do also join the conversation on Twitter using the #EmergingTech and #WarStudies60 hashtags. From my own perspective, one of the things that spoke to me in the conversations was really about the new actors and the new networks that are gaining currency in this space, in the security landscape now, as compared to the 20th century.
Just on a personal level, I mean, a year ago, Sean, I don't think you were even imagining rubbing shoulders with this kind of security crowd that you now have, right? So there are new actors even in that sense, but also, more broadly, the rise of different kinds of actors. We heard repeatedly through the presentations about the private sector; of course, another reason we wanted to invite Sean is the recognition, and a signal, that there are these new actors who need to be part of our conversations and to have a seat at our tables. And lastly, you mentioned hackers: many hackers are of course state-based or state-supported, but there are also others who hack for no other reason than just to see if they can. We have similar issues in the bio space, where they're called biohackers, or amateur scientists, who try things not necessarily with malign intent, but they are new actors who, with these emerging technologies, can have quite powerful consequences through some of the work that they're doing. So we're seeing all these new actors come in, and I'd be very keen to hear thoughts from the panel on how we can enrol these actors in our project of safeguarding national security, of safeguarding international security; how we can enrol them in disarmament and in arms control. Tim, you mentioned they could also provide solutions: we mustn't only look at how they make things more difficult for us; there are also solutions that could come from emerging technologies. Are there new levers of control? Sean, you mentioned reputational risk as a big factor for private industry; can that be used as a way of managing some of these risks? So my very broad question to the panel would be along the lines of: how can we think about effective means to manage technological advances when we have these new actors, and how can we enrol them? And I hope, you know, your big message, Kathleen, about the social dimension, bringing this down to how we create linkages, relationships and networks between groups of people to build trust, to build confidence that what people are doing is above board; I think all of that is really, really important. So those were just some of my rambling thoughts as I was listening to these presentations. I'll hand over to Hassan, who will also share some of his thoughts with us, and then we'll turn to the panellists for some initial reactions, and then we'll turn to the Q&A, and there's lots in there at this point. So, Hassan, over to you.

Many thanks, Philippa, and thanks to the panel. I want to ask the panel a slightly tongue-in-cheek question, about not so much the technologies themselves; I just want to put the spotlight on our own understanding and appreciation of risk more generally. The technologies that have been discussed on this panel clearly create risks, for sure; Sean's presentation was very clear in demonstrating that. But Kathleen and Tim also asked the question: what has actually really changed, and what has remained the same? And I wonder how we can appraise our tolerance of risk in this context. Has it actually decreased somehow? Do you think that our tolerance of risk has changed in relation to these new and emerging technologies?
And would the consequence therefore be to push a control agenda to the fore of our discussions when we're talking about these technologies? So that's my question to the panel, and over to you; perhaps we can take responses in the order of the speakers, so starting with Sean and then moving to Kathleen and Tim.

Yeah, I'm probably not the best person to ask about risk tolerance, having not even thought about the risk of these technologies previously, and I've been working in the field for over 25 or 26 years. Obviously, using a computer, I would not really have thought of any risk in using it to do what I do. But now I am very aware of the risk of these technologies, of the risk of even publishing any of the datasets and models that we put out there. So my risk tolerance, I think, has changed dramatically. And obviously the reputational risk, as I highlighted, matters because there is so much money going into this area; that was my main concern, the reputation of all of these companies if one of them goes awry. That's where I'm coming from, I guess, so it's very narrow thinking; I wasn't thinking about the big picture globally. This industry could potentially do itself a big harm if some of these datasets and models get out there and someone misuses them.

There are lots of different things I could comment on; let me see what I can pick up. From my perspective, in terms of thinking about the risks, there's a different kind of risk when you think about a technology in isolation and abstraction versus what it actually takes to get a technology or scientific technique to work in practice. Oftentimes the in-practice part of doing science or developing technology involves so many things, so much troubleshooting, so much painful work to get something to work, and that's often not part of these conversations. But it is something to think about, and if you consider some of those challenges, it tends to give you a different perspective on the risks. That's not to say that the risks go away, or that we shouldn't be thinking about some of these technologies or new developments, but you also have to think in a way that is grounded in empirics, and oftentimes in the security space there's little of that. For example, one of the things I was looking at in Sean's presentation was some of the scientific papers he showed: you see the list of co-authors on those papers, and I'd be very curious to understand how that entire team was involved in producing that knowledge. That is how you can better understand what is a threat and what is a risk: by trying to get a sense of what is required on the people dimension, the teamwork dimension, the troubleshooting dimension. When you start getting at those, it helps you parse out what could genuinely be a risk that is close at hand, versus, say, another type of risk that would require an entire 20- or 30-person team with very specific skills to accomplish, which is a very different kind of risk. So from my perspective, I think we need to do more to parse out
what is the more quick-and-easy kind of work that might pose risks, versus other kinds of risks that would require teams of people and much more complicated, sustained, state-level efforts. I'll also comment briefly on one of Philippa's points about these new actors and what we can do to engage them. Not that they were perfect, but the synthetic biology community is an interesting one to look at: from the beginning, when some of that work was starting, there was attention to the security concerns, and they were very open to being engaged by security scholars and law enforcement folks and to being involved in security conversations. But it also took the security communities reaching out to them and finding ways to constructively engage with those communities. You see something similar with iGEM, the International Genetically Engineered Machine competition, where folks were engaged very early on in trying to get people to think about some of the ethical or security dimensions of the work. So there is always opportunity; it shows that there are ways, and there have been successful ways, to engage different communities and new actors, and we probably need to be doing more of that. However, that requires resources and personnel, so think of it from a policy perspective too: if you want law enforcement or the intelligence community to engage, you're going to have to provide resources, human or financial, to allow that to happen, and the same on the scholarly side. Those are just some comments I would have.

Yeah, so two enormous questions there. As I mentioned earlier, I'm trying to write a book on cyber risk, on digital risk, and I don't have a settled answer to your precise question. When you think about tolerance of risk, though, I think the point is that a risk doesn't exist until we identify something as a risk. So at the moment, in a deeply connected world in which science and technology writ large, and global communication and global media, are facts of life, we have greater knowledge, or at least we think we have greater knowledge, of risk; we understand that there are more risks, and we are concerned about them. In a sense, I think, given the amount of risk that does exist in the contemporary world, we have a greater tolerance towards it, or at least we're very selective about what we think is riskier than other things. Most of us indulge in risky behaviours, but at the high-level political collective we're indulging in some incredibly risky behaviours whose outcomes, in terms of species death and ecological destruction, we pretty much know, and yet we do it. Is that tolerance of risk? It sounds a bit like it, doesn't it? Kathleen raised an interesting point about the empirics, and it's really challenging, because if security is a way of securing what is known, risk is a way of managing the unknown. Risk is very forward-looking in that respect: to model the future we have to use data from the past, in a very actuarial sense. So there are always different strains and dynamics operating across this time-space of risk. On the question of new actors:
It's a really live conversation. In cybersecurity there has been talk for quite a number of years about the cyber skills gap, by which is meant people in science, technology, engineering and mathematics: we don't have enough of them. We don't have enough computer scientists, mathematicians, modellers, network engineers, you name it, so the argument goes that we need to pump all the money into STEM. I don't think that's necessarily the right approach. I think we need a much more diverse workforce to try and work through the issues, because they're not just technical, and I think we can probably agree on that on this call. That doesn't mean we don't need engineers and computer scientists; we do. But we also need, in the universities, to encourage a much more diverse set of skills, perspectives and disciplines, and that needs to be translated into government policy and actually into funding, because the UK is exceptionally poor on this at the moment, despite the fact that there are lots of people like me agitating for a much more diverse workforce, and for a respect for things that are outside STEM, because you cannot properly serve a society unless the people doing the work reflect what that society looks like. And it's not just intellectual diversity either; there's a whole range of protected characteristics that need greater representation in the cybersecurity workforce too, including in government.

Great, many thanks to the panel. We are actually 45 minutes past the hour, so we've got 15 minutes left, and we've got plenty of questions from our audience; thank you so much for showing interest and providing interesting questions. Let's see how many rounds of questions we can take in the remaining 15 minutes. I'll try my best to condense a few of the questions and then put them to the panel, and I'd appreciate quick responses so we can get through as many as possible. I'll start the first round, and then Philippa perhaps can take us to a second round if there is more time. There are a lot of questions about the different profiles of actors: about the private sector driving technological innovation, about consumer- and demand-driven companies, and also about rogue states and terrorists. I'm grouping some questions together here, but they all point to different actor profiles and the questions those raise for innovation and risk, but also for control. Although some of these questions are tied to certain technologies, I think we can put them to the panel in general, about whatever technology they're specifically interested in. So basically: how do the private sector, consumer- and demand-driven companies, rogue states and terrorists figure in terms of technological innovation and risk, but also in terms of control? And then there is an interesting and very broad question that I'd like to put to the panel about what you consider the most dangerous emerging technologies at the moment; I'm interested to see what you think of that, or perhaps of a different way of framing that question. Right, over to you, and let's keep the same order, so Sean, then Kathleen, then Tim.

I'm very wary now of even saying what I would consider dangerous technologies, because I feel like I've become a bit more paranoid about what I put out there; I've maybe already said too much. We're all aware of what technologies are out there,
and I think they all could potentially be misused, whether that's AI or whether that's CRISPR. Some of the things that we think of as relatively innocuous could have potentially devastating implications if they're misapplied. I would never have thought that computational technologies alone could have such a potential impact, and obviously on their own they are not going to kill you; that piece of code isn't going to kill you, but it's what comes out of that code that's the next step. So that's maybe how we have to think about these other technologies too: they may be just a stepping stone to the ultimate weapon. So I would not want to single out any particular technology, whether that's biological, AI or chemical; I think they could all be equally bad.

Yeah, that's a tough question. Similar to Sean, I'm a little reluctant to say one is worse than another. From my perspective, and this ties back to your other comments about different kinds of actors and how they might connect to innovation or technology, sometimes we focus too much on the technology itself. Again, this gets back to my main argument, which is to focus on the social, and to think about kinds of innovation that are not technological. For example, when I think about 9/11 and what happened there, when you talk to terrorism scholars they say it was an organizational innovation that allowed and facilitated it; it wasn't a technological innovation. I would say we probably need to be thinking more about those kinds of social innovation even when we think about technology. For example, if we think a particular technology might be used for harm, what would be the organizational innovation that might facilitate that? Or, if there's something that truly does remove humans from the loop, we should look at that more particularly and think about it holistically: oftentimes people think, okay, it removes the human from the loop to get something to work, but what if that particular piece of technology goes down; will a person know how to fix it, and how do we think about that aspect when we talk about innovation as well? So those are just some thoughts that come to mind. I also would be reluctant to name the most dangerous, but from my perspective I would say we don't really have a good handle on that, because we haven't really done the assessments that would allow us to construct a spectrum of what is easier or more difficult to do and what would be more of a concern with respect to particular non-state actors. I've long argued we need a sort of map where you really parse this out and say: what is a technological innovation that is simple and at hand and would allow for harm, versus something that's much more sophisticated and complex? And I don't think we have that now.

Yeah, it's not a fair question to ask someone who consumes too much science fiction anyway, but I think the most dangerous technologies are the ones we already have. I would say pretty much anything based on an internal combustion engine has proven to be absolutely lethal to life on earth already, so we've got a big problem to sort out right now.
But I do think that when we're thinking about emerging tech, pretty much all of it converges on information, and it's that manipulation of information which, in an abstract yet also quite concrete sense, is actually what's going on here. There's a reason we've been calling ourselves knowledge economies and information societies since the invention of the internet in the 60s and 70s: because it's real and it's really, really important. And for all the things we've heard about AI, bio and chemical innovations and so on, some quasi-sentient grey goo escaping through a lapse of laboratory safety is about as likely as a deliberate intervention by a malicious actor from outside, and either would come from something going wrong with the code of life, because we've made it. We've got protocols and safeguards and so on, but it seems to me that if there's a technology that's going to be really problematic in the future, it's probably going to come from there; even nuclear weapons are about information, ultimately. So that's my particular bent, I think, but yes, it's a very difficult question to answer, and it's a stumbling job trying to get there.

There are another few questions in the chat. One is about whether nation states need to do more to manage the threat posed by the globalization of emerging technology. And there was another: is there plausible room for redefining what a biological weapon is after 2021, presumably meaning since the pandemic? On that one, we have a great set of scholars from King's who have argued exactly that point, tying it to disinformation and saying: if you are using disinformation in a public health crisis, and you end up at a point where people don't know what to believe, or they don't trust their doctors, they take the wrong medicines or refuse to take medicines and get sick, or violence is incited against medical staff, resulting in casualties, then in effect you are developing a different kind of biological weapon; it's a different way of thinking about biological weapons. But what I really wanted to come back to was this point about disinformation and the manipulation of information, because it seems to me that that is one of the cross-cutting issues: it cuts across these different emerging technologies and affects all of them. How do you think disinformation in your areas is affecting the shape of how the technologies are emerging, but also their impact on the security landscape? And related to that, disinformation is part of creating the crisis of trust that we're seeing in science and in authoritative institutions; how can we start rebuilding that trust? And my very final question to you is really: are you hopeful about the future, or are you not? So again, I hope I've served you a little platter of possible questions that you can pick at, or not, as you will. We've got five minutes. I'll leave the floor open; would one of you like to jump in, so we don't pick on poor Sean to start us off every time? Kathleen or Tim, would either of you like to have an initial go at your final word?

I'm happy to take a couple of brief interventions. The first is on disinformation and so on in cyber, and what's actually happened there.
Before we got to calling everything disinformation, around about 2016 in the US we were calling it cyber-enabled information operations, which tells you pretty much all you need to know about the role of computers here: they are the platform, the infrastructure, that enables this stuff to be disseminated on a global scale. The problem with that, of course, is that people then assume there are technical solutions to the problems of misinformation and disinformation, and those are very few and far between, actually, because the problems aren't derived from technical problems in the first place. These are social issues; obviously they're co-constructed in various complex ways, but don't think that there's a technical solution to what are essentially social and political problems, because there isn't, not a purely technical one anyway. And what we're seeing there, I guess, is that where modernity came partly out of an erosion of religious authority, we're now seeing, in part, an erosion of secular authority. That's really problematic, because we thought we were doing quite well as high moderns and it turns out we weren't; maybe we were being a bit complacent. But what are we going to replace this with, and how are we going to shore up trust? Are we going into a different kind of regime of knowledge and understanding? I don't know whether I'm optimistic or pessimistic about that, but you happen to be talking to somebody who published a book about pessimism about three years ago, so I think that tells you what you need to know. But Gramsci was right: pessimism of the intellect, optimism of the will. There's a lot of truth in that, so as an academic, being a sceptic and slightly pessimistic is not necessarily a bad thing, though you wouldn't want to live your whole life in that frame of mind.

Thank you, Tim. Kathleen, Sean, do either of you have any points you'd like to come in on, any final words?

I would say, getting back to misinformation and disinformation: if we're building models that enable us to, say, design molecules, they're very dependent on the quality of the data that goes into building the model. A lot of that data has already gone through peer review, but you could imagine taking data from databases where some of it had been corrupted in some way. The models you build would then no longer be reliable; they would no longer point you in the right direction, so it would be almost like a compass losing its ability to tell you which way you're going. I think that might be a challenge going forward: if the databases that we use for good are corrupted in some way, the models built on them will stop working. That's the main way I can think of disinformation impacting my very narrow view of how we use AI, and obviously that would then affect how these tools could be turned to nefarious uses too.
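Sean's compass analogy is, in essence, a data-poisoning concern: corrupt the records a model learns from and its predictions drift. A minimal sketch of that idea, assuming a purely synthetic stand-in dataset and an off-the-shelf scikit-learn classifier rather than any real drug-discovery model or database, might compare held-out accuracy before and after flipping a fraction of the training labels:

# Illustrative sketch only: synthetic data standing in for a curated molecular
# property database; not any real drug-discovery pipeline or model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for molecular descriptors (X) and activity labels (y).
X, y = make_classification(n_samples=2000, n_features=50, n_informative=15,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

def train_and_score(labels):
    # Fit a simple classifier on the given training labels, report held-out accuracy.
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Simulate a corrupted database by flipping 30% of the training labels.
corrupted = y_train.copy()
flip = rng.choice(len(corrupted), size=int(0.3 * len(corrupted)), replace=False)
corrupted[flip] = 1 - corrupted[flip]

print(f"accuracy with clean labels:     {train_and_score(y_train):.3f}")
print(f"accuracy with corrupted labels: {train_and_score(corrupted):.3f}")

The exact numbers are immaterial; the point is the directional effect Sean describes: a model trained on corrupted records is measurably less reliable, and the degradation grows with the fraction of corrupted entries.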
Just to jump in, I do think this disinformation issue is fascinating, and looking at the security landscape, I think we're still going to have to grapple with it in the years to come, in all kinds of different dimensions, from what Sean mentions, errors or corruptions in the information held in databases, through to more nefarious types. I think it's going to be a persistent problem. But I will say I am hopeful for the future, in part because there are so many smart people working on these problems, including the panelists and others, and so I have hope that, even if we're not going to be able to avoid or prevent every bad thing from happening, there are ways to do what we can to better protect ourselves. So I actually am hopeful.

Thank you, Kathleen, and thank you for letting us end on a high note. A very warm thank you to all of our speakers; it's been really wonderful to hear your thoughts and perspectives, which help us to think further and critically, so thank you so much for speaking. Thank you also to Lizzie and the wonderful comms team here at War Studies. I'll quickly hand over to Hassan to close out the webinar.

Thank you so much. We are at 8pm on the dot. Thank you so much to our panelists for sharing their thoughts, to my colleague and co-director of the Center for Science and Security Studies, Philippa, and, behind the scenes, to Lizzie from the War Studies department, who made all this technically possible, no pun intended. I also want to thank our audience for contributing excellent questions to the discussion. Please note that this event was part of the series celebrating the 60th anniversary of the War Studies department; let me flag one upcoming event on how we navigate crises in the international order, and another on health security and how we respond to growing threats to global health security. Please check our website for more information on both events. Thank you all for joining us today, and good evening to you all. Thank you. Bye bye, everyone. Thank you.