So, good afternoon. Welcome to the 28th Military Writers Symposium. We are extremely glad that you are here in person, and for those of you watching online, wherever you may be, we are extremely proud to have you join us today. So, this is the last official event of our two-day session. And I must say that this year has been incredibly powerful across multiple dimensions, from talking about artificial intelligence and its nexus with robotics, to the Colby Award presentation, to interacting with a student of Norwich and also a faculty member. It's just been an incredible time. And the great aspect of the technology is that not only have we been able to broadcast this out live, but we're able to record it and use it later on. So, one of the unique attributes of the Military Writers Symposium is the focus on student research. And you're going to hear from three of our student researchers here in a little bit. But before we do that, one of the things that we've discussed over the course of these past two days is the importance of cultural intelligence, and also the importance of recognizing that when we talk about anything related to artificial intelligence and robotics, it is not an English-centric conversation. We're talking about a subject in which certain nation-states are leading the narrative, and in which other nation-states will receive the outcome of that narrative. And so, one of the ways in which we want to internationalize our time together is to give a couple of minutes to student voices, some of them in English and some of them in their native language. And so, Drakshan, if you could come forward, we're going to ask Drakshan to share a little bit of her thoughts on the Military Writers Symposium and the intersection of our topic, and also how it relates to her time here at Norwich.
And we've asked her to do a little bit of that in her native tongue and then some in English. And then we're going to turn it over to the panel. But without further ado, Drakshan, you have the podium. Welcome and good afternoon, everyone. I'm Drakshan Farhad, a senior English major at Norwich University, and also a second lieutenant for the international section in the Corps of Cadets. So, Bismillahirrahmanirrahim. "In the name of the merciful and kind God" is what we say at the beginning of every speech we present back in Afghanistan. It is something that for years I have not done, but today it felt like the right thing to do, and you will know why later. I was asked to present on this day weeks ago. I kept thinking of different topics that I could reflect on as part of my speech. I came up with several scripts, but none of them provided a convincing enough idea to contribute to the dialogue in the manner I wanted them to. Yesterday, after attending several sessions and panels of the symposium, I noticed different aspects of artificial intelligence and the rise of robotics. I heard about the future of robotics and the challenges of implementing these robots in different environments. And, most importantly for me, one of the phrases that I heard over and over again was that something that was once science fiction is now a reality, or the future. And the country names I heard the most were the United States, China, and Russia: names that enjoy the privilege of creating weapons of mass destruction and fulfilling their ambitions at the expense of the third-world countries where these weapons are mostly used. And it is a very pressing matter, because I grew up in Afghanistan and I know the effects of these weapons and what they do to humankind.
One of the questions I asked yesterday during one of the sessions was how the countries most affected by AI have a say in the decision-making processes that go into creating these systems. The short response was: honestly, they're not even at the table, and their perspective is, most of the time, missing. And it was very obvious, because of questions like: okay, these weapons are created, but what do we know about the ethical concerns around using them? On what grounds should they be used in third-world countries, in countries like Afghanistan? Are they going to be intelligent enough to make decisions based on human intelligence and emotional intelligence, to be able to tell who is the enemy? And the good thing is that this symposium and this institution have entertained the idea of starting a dialogue, a dialogue in which we need to ask these questions, because this is a global matter, and it is affecting countries that are not necessarily part of the process of creating such things. So if you are here to learn something, one of the things you need to learn is to ask those questions and say: what is the scope of how these systems can affect other countries? Which countries would be affected the most, what is the level of destruction, and is it needed? So today, as one person, as someone who has this podium and this chance to speak on behalf of so many Afghans who might never get this chance, I want to say Bismillahirrahmanirrahim, in the name of the merciful and kind God, to start a dialogue, to start heading in a direction where we see this as a global matter and not just something limited to a symposium or to the United States. Thank you. Thank you, Drakshan.
So as you've deduced over the past couple of days, one of the things that we emphasize at the symposium is bringing in subject matter experts who can advance our understanding of the theme at hand. We are interested in student engagement, from the 40-some students who lined up last night to ask questions, to the other students over the past couple of days who have engaged with our authors formally through questions, but also out in Mac and around campus. But something that you may not know is that there are students who are awarded fellowships to conduct research on the theme each year. One of those is endowed; it's called the Schultz Fellowship, and it is given by the Schultz family, from the class of 1960, which has been part of the symposium and has supported it from the onset. The family has endowed a fellowship, and we can gratefully say that it has grown to the degree where this year we're able to fund two Schultz Fellows. The other is the Peace and War Research Fellowship. These students conduct funded research over the course of the summer. They are given a really open brief for how they want to proceed. They can travel overseas, they can build something, they can paint something, they can write about something, they can do interviews, or they can engineer some sort of product. The point here is that the Military Writers Symposium is interdisciplinary, and we want students from multiple disciplines to be involved and to do research, but we also want to give them a platform, and that's what we're doing today. So it is my privilege and honor to introduce one of my colleagues, Dr. Steve Sodergren, who is also the only Colby Award winner from Norwich University. He's a Civil War historian, and he's also the Chair of the Department of History and Political Science.
It's my honor and privilege to turn the podium over to him as he takes care of the panel and introduces our student research fellows. So Dr. Sodergren, cheers. Thank you very much, Professor Morris. Podiums aren't built for people of my height, so forgive me if I stoop. It is my pleasure to stand before you today to introduce these outstanding Norwich students in what is the final panel, the final discussion, of this year's symposium, which has been, I think, a tremendous success. Throughout the past two days, we have been talking about the future, the future, the future; forgive the phrase, but this is the future. These are the students who are going to be shaping not just the technologies at our disposal, but the manner in which we use them. So these are the voices that we need to be hearing right now, and it is my pleasure to introduce them. And once again, like many others, I want to thank Professor Morris, I want to thank Megan Liptak, I want to thank Yang Moku, I want to thank all of those who helped make this year's symposium such a wonderful success. What we'll do is I'm going to read off the biographies of all the student presenters, and then we're going to do one presentation after another and save time at the end for questions that I will moderate. From left to right here, you see our three students. The first, to my left, is Elena Latino, who is one of the recipients of the 2022 Richard S. Schultz Class of 1960 Symposium Fellowships. Elena is from Atkinson, New Hampshire. She is currently a junior at Norwich University, studying computer security and information assurance with a concentration in digital forensics. Although relatively new to the field, her summer research presentation on AI forensics has helped her engage with experts in both artificial intelligence and digital forensics. This has opened new doors for her and sparked an interest in her future.
Over the summer, Elena had the chance to study abroad through Norwich's Maymester. The immersive class on cyber surveillance allowed her to explore new areas of computer security in Germany. Elena has a passion for digital forensics, but on the side she also enjoys surfing when she is home for the summer, as well as playing club field hockey while at school. To Elena's left is the other recipient of this year's Richard S. Schultz Class of 1960 Symposium Fellowship, Gabriel Williams. Gabriel is a Norwich senior from Suffolk, Virginia. He attended Hampton Roads Academy, where he was captain of the track and field team, discovering early on that he thoroughly enjoyed the field of government and politics. Gabriel chose to attend Norwich University as a political science major, planning to work in the government sector or intelligence community upon graduation. At Norwich, Gabriel co-founded the Norwich University Boxing Program and made history this past year as part of the first Norwich boxing team ever to compete in the National Collegiate Boxing Association. As a member of the Corps of Cadets, Gabriel thoroughly enjoys working with the Rook class; he served on cadre staff in his junior year and is an officer in the cadet training company this year. Outside of Norwich, Gabriel has had internship and contracting experiences with the Department of State and the Department of Defense. Finally, to my far left is the recipient of the 2022 Military Writers Symposium Research Fellowship, Wesley Dewey. Wesley is a student at Norwich, class of 2023. He believes in personal and professional growth, hard work, and furthering the great legacy that Norwich University holds. Wesley is studying for his Bachelor of Science in Marketing Management. He has also spent time playing for the Norwich University eSports program, and spends time outside of class with friends and family or in the gym. Just before we begin, a round of applause for these wonderful scholars.
We cannot heap too much praise upon them, but to begin, I'd like to hand the podium over to our first speaker of the afternoon, Elena Latino. Hello, as you just heard, my name is Elena Latino and I'm a junior here at Norwich University, so I can skip the intro that you just heard. Over the summer, I did research on the current uses of artificial intelligence and on how and why there's a need for the subfield of AI forensics, specifically in warfare. My central conclusion from this research was that, because of the rapid and widespread adoption of various AI-embedded systems and their complexity, there is now a need for forensic expertise and forensic tools for commercial and military applications. To reach this understanding, I interviewed many different experts, from professors here at Norwich to professors at other universities, as well as people working in the private sector and for the government. I read and reviewed many peer-reviewed sources as well as books, I attended two AI conferences, and I watched many informational videos and listened to podcasts. Through all this, I was able to form my opinion, which I will share today. So to get started, I would like to explain the field of digital forensics itself. As my digital forensics teacher, Professor Adkins, explains, digital forensics is the intersection of criminology, computer science, and law. The field of digital forensics came about when computer-related incidents began to occur. These incidents brought about the need for scientific and legally acceptable findings, which I'll talk more about in a minute. Within the field, it became apparent that different technological incidents required different tools. For example, network forensics and email forensics, although they may overlap at some points, have differences that require different solutions. The same will go for the new field of AI forensics. As I mentioned before, these findings must be legally acceptable.
According to the United States Federal Rules of Evidence, for scientific evidence to be used in court it must meet certain tests. These tests come from the Daubert Standard, as you can see here: the methods need to be tested, with known potential error rates, and subject to peer review or publication. So now, with an understanding of digital forensics, we can move on and discuss artificial intelligence. In simple terms, artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind. However, as you might have learned from attending the conference over the past two days, the field of artificial intelligence is in no way simple. AI consists of different layers, as shown, such as machine learning, deep learning, and many more complex integrations of these methods. Although AI might be complex, there are many common uses you might be familiar with. Virtual assistants on your phone, such as Siri or Google Assistant, utilize machine learning algorithms to gather the information you request. Ads and recommendations on streaming platforms might seem targeted toward you; well, that's because they are. Through deep learning algorithms, content can be personalized. Facial recognition, surveillance, and self-driving vehicles are widely popular in both the commercial and military worlds. Think about the car you drive. How many of your vehicles have lane-assistance features or automatic braking? All right, so a few. Well, you can thank artificial intelligence for those features. How about a plane that you've flown on recently? Has anyone flown on a plane recently? Well, according to an expert I spoke with, Dr. Haig, humans are responsible for only approximately three to ten minutes of a flight; everything else is done by AI. So next time you take a flight, you might want to think about the fact that the pilot really isn't doing much.
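To make the "machine learning" layer Elena describes concrete, here is a minimal sketch, with invented toy data, of a perceptron that learns a decision rule from examples rather than being explicitly programmed; nothing here comes from her research, it is purely illustrative:

```python
# Minimal perceptron: learns a linear decision rule from labeled examples,
# mimicking a tiny piece of the "decision-making" Elena describes.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and bias for a binary decision from (features, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1       # nudge weights toward the correct answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Toy task: output 1 only when both feature values are high (an AND-like rule).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

The point of the sketch is that the rule is never written down by the programmer: it emerges from the examples, which is also why (as the later discussion of explainability notes) the learned weights can be hard to interpret.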
The use of AI in warfare is very expansive. As I mentioned before, it is used in unmanned vehicles, including aerial, ground, and water vehicles. UAVs, unmanned aerial vehicles, specifically are being used in over 90 of the militaries around the world, and within that group, 16 of those countries have armed drones. Surveillance, navigation, smart munitions, and cyber attacks are a few more areas where AI is being used. Many of these advancements are helping to improve efficiency and safety in war. However, no system is 100% effective; everything is prone to failure. And this brings me to my next point: AI issues. Data poisoning, adversarial attacks, deception, complex environments, and unexplainable decisions are just some of the issues that have arisen, and will continue to arise, from the use of artificial intelligence. One issue I believe to be highly concerning is unexplainable decisions. As you can see from the graphic, today's AI is not explainable; many decisions do not have an understandable rationale. The goal is to get to the point where these algorithms are explainable; however, according to the experts I heard from, it may prove impossible to achieve this. My paper goes more into depth on this part of the research. Another issue is complex environments. For example, in 2019, the driver of a Tesla Model 3 turned on Autopilot; ten seconds later, the vehicle drove into a semi-truck that crossed in front of it, killing the driver. And this is just one of several scenarios of this happening. Although we try to test for every scenario, for example a semi-truck crossing in front of the driver, mistakes can still be made. So although this happened in the commercial world, issues like this can and will happen in war as well. And with that, I conclude that AI is in heavy use in both the civilian and military sectors, in all different areas.
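To make the data-poisoning issue concrete, here is a hedged toy sketch (the scenario, labels, and numbers are invented for illustration, not drawn from Elena's paper) in which a handful of maliciously mislabeled training points shifts a simple nearest-centroid classifier enough to flip a prediction:

```python
# Toy data poisoning: mislabeled injected samples drag a class centroid,
# flipping the classification of a point near the decision boundary.

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def classify(x, centroids):
    """Assign x the label of the nearest class centroid."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Clean training data: "friend" clusters near (0, 0), "threat" near (10, 10).
friend = [(0, 0), (1, 0), (0, 1), (1, 1)]
threat = [(10, 10), (9, 10), (10, 9), (9, 9)]

clean = {"friend": centroid(friend), "threat": centroid(threat)}
query = (4, 4)  # closer to the friend cluster
print(classify(query, clean))      # friend

# Poisoning: an attacker injects far-away points falsely labeled "friend",
# dragging the friend centroid toward the threat region.
poisoned_friend = friend + [(20, 20)] * 4
poisoned = {"friend": centroid(poisoned_friend), "threat": centroid(threat)}
print(classify(query, poisoned))   # threat
```

A forensic examiner asking "why did this system misclassify?" would need exactly the kind of tested, repeatable methods the Daubert discussion calls for in order to trace the failure back to the corrupted training data.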
AI-enabled systems can be attacked, they can be confused, and they can also be inadequately trained. So when failures occur in these systems, forensic evaluations will be needed. Just like in any other digital forensics field, these evaluations will depend on tested theories and methods. But where we are today, we are lacking the forensic specialists and the forensic tools that are needed to help this field grow. So although there may be a lack of AI forensics experts today, I hope to have sparked an interest in even one person to become an AI forensics expert tomorrow. Thank you. Excellent work. Let me step in here and set up our next slide presentation and introduce Gabriel Williams for his presentation. First and foremost, I just want to say thank you to everyone who's here in the auditorium today. It's a pleasure to be here speaking with you. My project today is on artificial intelligence and the electromagnetic spectrum, specifically focusing on electromagnetic warfare, how the intersection of the two is creating some new capabilities on the field of battle, and why there's a need to educate the warfighter on them. So first and foremost, let's have a conversation about what the electromagnetic spectrum is. Many people don't know; many people confuse the electromagnetic spectrum with cyberspace, and to be quite frank, the definition is very simple. The electromagnetic spectrum is a series of frequencies that vary in wavelength, right? Within that, you have the radio spectrum, which is what we're talking on right now: speakers, microphones, this right here. You have microwaves, which carry more complex components that house data links: transmissions between aircraft, tanks, ships, missile systems. You have visible light as well, and you have gamma rays and X-rays. So everyone in this room, and everyone in the military environment, is affected by the electromagnetic spectrum. For instance, who has a cell phone?
Raise your hand. Okay, 4G, 5G, 6G, all spectrum. Who has talked on a radio before? Raise your hand. A few people: spectrum operations. Who has flown on an airplane? In some regard, raise your hand. There's a hint of spectrum operations within that, right? So it's important to understand that the electromagnetic spectrum has a key impact on not just civilian life, but also military life and military operations. Within the military, the electromagnetic spectrum is utilized for more or less three core tasks. The first one we talk about is characterization. When we discuss the operational environment, the field of battle between ourselves and the adversary, one of the first important things we have to do is characterize and understand what that battlefield looks like. Whether that be different enemy units on the ground, tanks, missile systems, weapon systems, whatever they are, we have to identify them, identify their capabilities, and compare and contrast them with ours. One of the core concepts here is command, control, communications, computers, intelligence, surveillance, and reconnaissance, C4ISR. These are some of the systems and capabilities that the United States military, and our near-peer and, unfortunately, one day maybe peer adversaries, use to characterize and understand the battlefield in which our warfighters operate. And the electromagnetic spectrum underlies all of that. Another subset is more focused on direct action: jamming, jamming radio frequencies, jamming communications between soldiers on the ground, jamming communications between satellites and soldiers on the ground, jamming communications between two different naval vessels. These are what we call direct actions. And then within that we have destruction: using high-powered laser beams to destroy communication sensors and arrays, or other critical infrastructure nodes of the military apparatus. These things are very key and important, right?
These are things that concern our colleagues who are bound to be warfighters, and the people who support the warfighting operation. The third task is deception: reducing the electromagnetic footprint that friendly forces have on the battlefield. A key example of this would be stealth technology in the United States Air Force, reducing the footprint of a B-2 bomber or an F-22 on a radar system. That all depends on the electromagnetic spectrum to actually achieve the goal. These are things that are often forgotten about. Moving forward: to accomplish some of those utilization tasks, the DOD and our NATO partners really focus on electromagnetic warfare. This is the core concept the Department of Defense focuses on to achieve those core utilization tasks, and electromagnetic warfare is comprised of three different subsets. You have electromagnetic protection, electromagnetic attack, and electromagnetic support, each of them mapped directly to those core concepts. Protection is all about protecting: protecting our systems, protecting our planes, our warfighters, what have you, from enemy surveillance and detection. Attack means directed-energy attacks on enemy systems: denying or degrading their ability to access the spectrum, or to eliminate our capability to operate. And then electromagnetic support is all about, again, sensing and characterizing the environment, understanding the battlefield in which our warfighters exist, complementing signals intelligence at a very high rate. Now, I gather that within the first five or seven minutes, many of you may be asking: this is an artificial intelligence conference, so why am I speaking about spectrum operations and electromagnetic warfare, where many people may not see the overlap?
But as we move forward into an age where there are more sophisticated technologies and more sophisticated threats to our nation, we have to begin to think at an interdisciplinary level. We have to understand where there can be overlap, to meet the needs that the nation has and fight the threats that are here now and that will be here in the future. And that brings us to this intersection, the overlap. All right, let me see if this, there we go. The overlap, right? Oftentimes in AI circles, when we talk about artificial intelligence, the OODA loop gets thrown around: observe, orient, decide, act. It's the framework within which members of the artificial intelligence community build cognitive systems from a deep learning or machine learning perspective. The OODA loop maps directly onto electromagnetic warfare. Observing, orienting, and deciding are all about electromagnetic support: understanding and characterizing the environment our warfighters exist in. And then acting, carrying out that decision, comes with electromagnetic attack. What do we have to do? What are our options? So there is an overlap in these things. In my experience working in the DOD and in the Department of State, I've been to many AI-oriented conferences, and I've worked at the JAIC, the Joint Artificial Intelligence Center, for a couple of rotations. These are the conversations that our key leaders are having: building interdisciplinary solutions to complex problems in the modern age. It's very important. For me, my contribution to this conversation, this important topic, is thinking about how to distill this concept of how we protect the warfighter, or how we protect our civilians.
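The OODA-to-EW mapping Gabriel describes can be sketched in code. The following is purely illustrative (every function name, threshold, and the toy threat library are invented, not DoD systems): observe and orient play the role of electromagnetic support, sensing and characterizing emitters, while act plays the role of electromagnetic attack:

```python
# Toy OODA loop over invented electromagnetic-environment data.

def observe(environment):
    # EM support: collect emissions strong enough for our (toy) sensor to see.
    return [e for e in environment if e["power_db"] > -90]

def orient(contacts):
    # EM support: characterize each emitter against a toy threat library.
    threat_bands = {"x_band": "fire-control radar", "vhf": "comms"}
    return [{**c, "type": threat_bands.get(c["band"], "unknown")} for c in contacts]

def decide(characterized):
    # Choose a response; here, jam any fire-control radar we identified.
    return [c for c in characterized if c["type"] == "fire-control radar"]

def act(targets):
    # EM attack: emit jamming against the selected emitters.
    return [f"jam {t['band']} @ {t['freq_ghz']} GHz" for t in targets]

environment = [
    {"band": "x_band", "freq_ghz": 9.5, "power_db": -60},
    {"band": "vhf", "freq_ghz": 0.15, "power_db": -70},
    {"band": "x_band", "freq_ghz": 10.0, "power_db": -120},  # below sensor floor
]

actions = act(decide(orient(observe(environment))))
print(actions)  # ['jam x_band @ 9.5 GHz']
```

In a real cognitive-EW system each of these steps would be a learned model rather than a lookup table; the point of the sketch is only the pipeline shape, and it mirrors the human-machine teaming point made later: the loop proposes an action, it does not remove the operator's decision.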
How do we distill that concept down to the undergraduate academic level? The focal point of my research was to conduct a study in an informational manner: to understand these two topics and how they correlate, and then build out a pathway to bridge this concept to academia. That's where my certificate pilot program comes into play. I've been in coordination with the DOD and NATO forces for the last five months about building a certificate program that would enable students at the undergraduate level to take courses and understand the electromagnetic spectrum and how it impacts their particular career field, whether that be cyber, electrical engineering, spectrum management, anything in science and technology, or ROTC. These are concepts that I've been working on with the DOD for some time now, and we've made excellent progress. This is my contribution to that effort: bringing awareness of the spectrum and how it coordinates with artificial intelligence to change the future that we exist in. That's all I have. Thank you so much. All right, thank you, Gabriel. Our last speaker of the afternoon is Wesley Dewey. How's it going, everybody? My name is Wesley, and this summer I focused on studying the weaponization of echo chambers, more specifically the weaponization of echo chambers using artificial intelligence. And so to start things off, I just want to define what an echo chamber is. An echo chamber is an environment where a person only encounters opinions or perspectives that align with their own. On social media, or online in general, companies use artificial intelligence, commonly referred to as "the algorithm," to market to users and personalize the online experience, and this results in the creation of these echo chambers. So if you were, for example, searching for a lawnmower online, you would most likely be shown ads down the road for other lawn-care equipment, as well as search results relating to it.
And as I said, the benefit of that is more efficient marketing for companies, as well as a more personalized experience for the user. However, the downside, again, is that it increases the creation of echo chambers and increases polarization as a result. Cognitive bias is, psychologically, at the root of why echo chambers happen. It is defined as a natural pattern of thought, in response to certain stimuli, that produces illogical conclusions. These are simply errors in our brains, and they have affected everybody since the dawn of time. Confirmation bias, more specifically, is the cognitive bias at the center of why echo chambers happen. It is our brain's tendency to gravitate toward information or data that supports predetermined ideas. So if you already believe something, you will automatically reject evidence that contradicts your belief. If you ever hear something that goes totally against your beliefs and you feel that little emotional flash of anger, that's where it comes from. Echo chambers have already had a pretty major impact on the world, more specifically intentional echo chambers created using artificial intelligence. The four examples that I focused on in my research are the Capitol riot of 2021, the Syrian White Helmets, the Philippines election, and the terrorist organization ISIS. In all of these examples, the algorithm was used to create echo chambers to manipulate groups of people, either into inciting violence or into electing corrupt politicians, et cetera. And with these echo chambers online, we've also seen a great increase in polarization within politics. I followed a study that looked at Facebook, Twitter, Gab, and Reddit, which are all social platforms. What was found was that on Facebook and Twitter, there was an increased amount of polarity wherever there was a greater amount of echo chambers.
And on Gab and Reddit, as the number of echo chambers increased, each platform went its own way: Gab became more radically right-wing, while Reddit became more radically left-wing. What this study did was compare polarity between the left and right wings with the presence of echo chambers, and what it found was that users online tend to prefer information adhering to their worldviews, to ignore dissenting information, and to form polarized groups around shared narratives. All right, and so in conclusion, people don't recognize when they're caught in echo chambers because they feel liberated online. The big difference between echo chambers that occur naturally and echo chambers that occur because of algorithms is that online you feel like you're making all of your own decisions. You feel like you're doing all of your own research, or that you have full control over what you're looking at, when in reality there's artificial intelligence behind the scenes pushing you in one direction or another. And this is dangerous, because the internet gives unlimited reach for people to get caught in these echo chambers. It's as if a cult, which is a historically famous and generalized example of echo chambers working in the real world, could reach out to anybody susceptible to its ideas, across the world, all at once. And the people most at risk from these algorithm-driven echo chambers are the people who are least skeptical of the information they receive online. So the greatest way to battle this issue is to educate all countries and all students on how to do proper research, and to educate people to be willing to see both sides of every story, or to understand things from multiple perspectives.
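The feedback loop Wesley describes, where the algorithm boosts whatever gets clicked and the feed narrows as a result, can be sketched in a few lines. This is a hypothetical toy simulation (the weights, the user's preference values, and the update rule are invented for illustration, not taken from the study he cites):

```python
# Toy echo-chamber dynamics: an engagement-driven recommender plus a user
# with a slight initial lean toward topic A. Clicks reinforce the topic
# weights, so the feed drifts toward A over repeated rounds.

import random

def recommend(weights, k=10):
    """Sample k items in proportion to the current topic weights."""
    topics = list(weights)
    return random.choices(topics, weights=[weights[t] for t in topics], k=k)

def simulate(rounds=30, seed=0):
    random.seed(seed)
    weights = {"A": 1.0, "B": 1.0}     # the algorithm starts neutral
    preference = {"A": 0.6, "B": 0.4}  # the user's slight initial lean
    for _ in range(rounds):
        for topic in recommend(weights):
            if random.random() < preference[topic]:  # the user clicks
                weights[topic] += 1.0                # the algorithm reinforces
    return weights["A"] / (weights["A"] + weights["B"])

print(f"share of feed weight on topic A: {simulate():.0%}")
```

Starting from a 60/40 lean, the rich-get-richer update drives topic A's share of the feed well past half, which is the mechanism behind the "you feel like you're making all of your own decisions" observation: each individual click is freely chosen, but the menu of choices narrows every round.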
And lastly, I just want to say thank you to the Peace and War Committee for allowing me to take on this research. I want to thank my mentor, Professor Bosley, for pushing me in this direction. And I want to thank everybody for coming out today. Thank you. Thank you, Wesley. We have about 10 to 15 minutes left, and this is open for questions. So please, I encourage people in the audience: questions about any of the research topics that you've heard, or the implications of some of what you've heard. Please feel free to come down to one of our two microphones at the front here. While you think about your questions and move to the front, I guess I'll start off here. First of all, congratulations to the panelists. It's all interesting, and particularly for an old historian like myself, enlightening material. You all called for research; you all called for more education in your fields. Where do you think the impetus for this really has to come from? I can see both commercial and military applications, just as with AI, for each of your research fields, perhaps more so for some than others. But where does the push for this have to come from? Does it have to come out of the private sector? Does it have to come out of the public sector? Where do the solutions, the research, and the education for this have to emerge from? I open it up to anybody. Okay, this is on. Yeah, it's an interesting question, a very brilliant one: the concept of where the advocacy for this comes from. I think it has to be both, if not all; you have to think about it from a holistic approach. So it needs to be a partnership between the private, public, and industry sectors to really push this, including academia as well. We're talking about building an educational framework, or advocacy for a certain topic, and especially for something like AI, which could revolutionize the way the world operates and the way we as people operate and interact with each other.
It's something that has to be a multilateral approach, a partnership between all the different groups, to get that done. That would be the best way to push for advocacy, and that means resources, training, tools, and monitoring as well. We have a line of students here, so far be it from me to take up the podium for too long. Please, your first question. So my first question is kind of directed toward Mr. Williams over here. You spoke about the electromagnetic spectrum when it comes to AI. So my question is about risk: when introducing AI and technology to the battlefield, do you believe that fratricide is going to be a more prominent risk because of hacking, and the ability to use the electromagnetic spectrum to interfere with radio frequencies and waves? And how can we prevent our AI from being hacked and turned against us if we do introduce it into the battlefield? That's a loaded question, thank you. I think we have to unpack that a little bit. So when we talk about integrating AI, AI is a broad term, right? And people create many different fantasies and concepts when they define AI. There's the concept of human-machine teaming, meaning a robot and an infantryman working together on the battlefield, right? Or it could be as simple as a Siri- or Alexa-like platform helping a human being understand something. I think to prevent the situation we're talking about, an enemy taking advantage of a cognitive system aiding human beings in a battlefield operation, we have to first and foremost create AI-enabled cognitive systems that are responsible and governable, right? Systems that we have oversight over, with an underlying verification process, so we can verify the decisions that the machine learning algorithm comes to at the end of the day, right?
And when you think about responsible, ethical, or governable AI, these are the protocol and policy issues that haven't really been worked out yet. So you have a very valid question. I think that has to be the first step: making sure that we create governable and responsible AI. The second step, when you talk about a cognitive system in spectrum operations, or cognitive EW as it's often referred to, threads along the subject of human-machine teaming: that cognitive system isn't taking over the duties that a human operator or warfighter would perform. It's simply supplementing and aiding, right? It's enabling that warfighter to have a greater decision advantage on the battlefield compared to our adversaries, because it's able to perform the same cognitive functions a human being can, but in a fraction of the time. And when you look at something like the electromagnetic spectrum, one of the slides I had up there shows cyberspace as being six components, the six bullets up there, but there's a whole vast spectrum outside of cyberspace. That's a lot for a human operator to process. That's why we need cognitive systems to aid us in that decision-making. But to your point, bottom line up front: you have to have governable, responsible AI that is protected at a fundamental level, and when we integrate it into the warfighting scheme, we have to make sure that the level at which we integrate cognitive systems doesn't override the responsibilities of a human operator; it just provides a decision advantage, an aid, right? It supplements; it does not take over. Thank you. Next question, please. Hello, my name is Dan. I have a question for Elena. I'm also a CSIA major. You mentioned that in warfare AI is used for cyber attacks, and my question is: are you more focused on defensive or offensive cyber, and which of the two should we focus on more, or make stronger, for the future?
So when it comes to warfare, cyber attacks have to be considered on both fronts. You need to make sure your systems are protected, because many adversaries will attack you. So sorry, could you just repeat the second part of the question again? What could we focus on more in the future, and how could we make it stronger, for either defense or offense? I think it depends on what your country wants to focus on. In many ways it is important to focus on both aspects. You need a strong defense to have a strong offense, and a strong offense to have a strong defense. So in reality you need to focus on both. Although if your country is more prone to attack, then it might be necessary to really focus on defending against attacks. However, you don't want to be the one getting attacked, so really honing in on defense is also important. Thank you. My question is for Second Lieutenant Williams. With our military becoming more reliant on artificial intelligence and robotics, what training or new technologies does our military need in order to be defended against electromagnetic warfare, since we're more reliant on technologies that are weak against it? So just to clarify, you're asking what training we have at our disposal to make sure of that? Well, I'm asking: if our military is going to be using it more in the future, weapons that are weak to EMP, like electromagnetic pulses, or, like you said, lasers and such, what would we be doing to make sure that we're better defended or able to combat these offensive weapons? Right, excellent question. So that brings up a key point, right? As we move into this more sophisticated age of warfare and introduce some of these new technologies, AI and spectrum operations, we have to understand where we stand in terms of our superiority in these regions. And to be frank, our superiority is evaporating and eroding at a very quick pace.
Our key adversaries, the Chinese and the Russians, have outdone us in these fields over the last five years in terms of investment in training and research and in actually developing operational systems. So to make sure that the systems we're developing in this realm remain safe and verifiable for our own usage, there are a couple of components. One, we have to make sure that our warfighters are properly trained to actually understand the impact of the spectrum and how it correlates to their operational duties on the field of battle, right? So we have to bring awareness like this, make sure our operators are aware of what the spectrum is, right? And two, we have to coordinate and essentially codify some plans in the background to ensure that these systems don't fall into enemy hands. And that's more of a technical question in terms of how we secure components of our electromagnetic operational space, right? That's the electronic protect region. And what that looks like: we can do a couple of things. We can do spoofing, we can do radar jamming, we can deny the enemy's usage of the spectrum, so they have no ability to touch us when we conduct spectrum operations, and we maintain freedom of maneuver and access in that realm. So to keep our defenses up, we have to deny the enemy's ability to access the spectrum. So the first step: make sure our warfighters are properly trained on the spectrum and how it affects their battlefield capabilities. And the second step is making sure that we continue to deny the enemy's ability to access the spectrum. As long as we deny their ability to access the spectrum, we maintain safety for our systems, and we maintain superiority in that realm as well. Thank you, sir. Thank you. Off to my right here. Hello, everybody. Cadet Kransen, class of 2023. Thanks to all the students for providing this awesome research.
Really inspiring for everybody in here. My question is mainly for Elena, but any of the panelists can jump in. You talk a lot about the accidents that can happen with AI, considering them as kind of independent of human error in a way. I was curious whether any of the experts you talked to commented on who gets held accountable when these accidents happen, whether in the military or under US law. Who is accountable for the accidents that occur, and what are the legal implications of that, both in the US and in what NATO is coming up with for the future? Yeah, so that gets very much into an ethical dilemma. And although I did some research on the ethics behind it, I did not fully dive into it. However, I do know that NATO specifically has been trying and trying to pass certain regulations, specifically on LAWS, lethal autonomous weapon systems. And there's really a stalemate in where they're going with it. From my perspective, at least from what I understand, not much has come out that really allows us to put blame where we're supposed to put it. Same even here in America, when it comes to a lot of the incidents that happen with Tesla. They kind of deflect the blame, saying that they just need to do more training of their systems. But in reality, most people don't bring it to court, because Tesla probably finds a way to influence them not to. And if it were brought to court, there are many implications that really haven't been decided yet. Thank you, that's very interesting. Yes, sir. My question is for Second Lieutenant Williams. I was wondering: based on your research, you've developed a curriculum for a certificate. Is there a plan to implement that curriculum here at Norwich, and what does the timeline for that look like? So in short, yes, there is a plan to implement that curriculum here at Norwich University.
I've been coordinating with the Peace and War Center and the Norwich University Applied Research Institutes to lay the groundwork for integrating that curriculum. But where we run into timeline issues is that we have to wait on DOD and NATO counterparts. So I work as an attaché in the Chief Information Office, where I work for one of the directors of Spectrum Enterprise and Policy, and I'm on an Allied and Coalition Partners Working Group. The problem is that this whole thing just got stood up in 2019. So 2019 was when the electromagnetic spectrum cross-functional team was established; we completed the study in 2021, and we've been moving forward since then with building out an educational framework. So we've identified the competency models. We understand where the gaps are in training and proficiency within our warfighters. But now comes the hard work, like you said, which is building out that curriculum in coordination with our NATO partners, right? So we're moving through that process of essentially collecting all the curriculum managers within the individual services, actually mapping out curriculum between the Navy, Air Force, Marine Corps, and Army, and then coordinating and codifying that curriculum with our NATO partners within the respective branches and fields. And once we put that together within the DOD working group and sign off on it, then we'll be able to push it out to academia and industry. So the timeline for that is looking like about six to nine months, right? And we're about month two into that. So I would say stay tuned. Hopefully before I graduate I can speak more on the matter, but right now the hard work begins; essentially I just read and write on it all day. Yeah. Does that answer your question? Yes, thank you. No problem. More students helping to design their own curriculum. Outstanding. I think that's outstanding. Absolutely. Mr. Bassett, please.
Good afternoon. This question is primarily for Mr. Dewey, on your research regarding the creation of echo chambers. You noted the utilization of AI in both positive and negative ways, and that echo chambers can be a negative outcome: AI can be utilized to create echo chambers, which has an effect on society. However, has your research taken a look at foreign influence in artificially supporting or creating echo chambers that may lead to increased political polarization? The most prevalent example of this is Russian influence in, say, the 2016 or 2020 election, creating echo chambers around Trump's election and how he lost the election. And could you provide, if your research has covered it, any recommendations for, say, some future politicians in the room here regarding what policy can be implemented to help combat that artificial tampering, not only through increased education and knowledge of the individual's agency, but also what the government can do to step in and intervene? Right. So I think because of the nature of how they come about, and as I had stated, because they come from a place, the internet, where you shouldn't necessarily censor to a certain degree or try to control how people use it, because that will lead to civil unrest (North Korea, for example, is very well known for that), I don't think there's a great way to combat it from a government perspective. I believe that with the right education, there wouldn't be a need for that, if the general population were able to look at information and decide on their own whether or not it's coming from a reputable source, or decide to research further against the point they're reading versus researching further to support the point they're reading.
I also believe that it comes down a lot to companies, because it's companies like Google and Facebook, for example, that build these algorithms and implement them in society, rather than a government. I believe that if the way the algorithms work were changed to a certain degree, maybe specifically regarding politics, it could have a really positive influence on elections. But no, I don't believe that governments should have, I guess, a stand or an opinion when it comes to it. I don't think they should be able to influence how people use the internet or what people see. And obviously my research does say that they do so in a negative way. But at the same time, yeah, the only way to really combat that is better education surrounding the subject. And it is a very new thing; even the research on it usually doesn't date back past 2016. So I think we're still very early into it, and hopefully, as it becomes more apparent and as the people most affected by it start to realize it, we'll see, I guess, an improvement. Thank you for your question. Thank you. Thank you. We have time for one or two more questions, and we have one or two more people, so go ahead. Good afternoon. My question is in regard to Mr. Williams' presentation. We've seen, as per the other presentations, that AI and automated technology have been increasingly integrated into armed forces around the world. As this integration continues, do you foresee EM jamming and overloading weaponry at some point overtaking conventional weaponry in terms of importance on the battlefield? These are excellent questions. I'm deep in thought right now. Do I foresee, it's electromagnetic weapons, essentially? Correct. Like EM jamming, EM pulses, laser beams, you know, the whole thing. I don't think there'll be a point in time where they overtake the usage of conventional weapons, right? Conventional weapons have a certain place in the multi-domain operational environment, right?
You know, missiles are very effective at destroying buildings and destroying hard targets, things that are tangible, things that we can see, right? They're also very effective at making people disappear. These are hard, tangible targets, right? But what I would say, and caveat that with, is that as we move forward into the future, the number of targets that aren't necessarily tangible will increase, right? So we may enter an era where electromagnetic weapons are of greater importance than they are today, but I don't believe they'll ever overtake conventional weapons. I think maneuvering in the electromagnetic spectrum and utilizing those weapons to our advantage is meant to circumvent conventional weapons, right? To a certain degree. But that doesn't take away the importance of conventional weapons. They still remain king, you know, top of the deck, but electromagnetic weapons in the future may be able to, well, they can now, disable conventional weapons, right? Talking about, you know, missile tracking systems, right? Infrared navigation systems within missile systems, right? Electromagnetic weapons can make that go away, can disable it, right? Render them crippled, essentially, right? So they're meant to circumvent conventional weapons and render them somewhat less effective, right? But that doesn't take away the importance of a conventional weapon on the battlefield. It serves a very specific purpose, and that purpose will remain as long as we have physical hard targets. Does that answer your question? Yes, it does. Thank you. Thank you. Thank you. And our last question will be from our first questioner. I don't think I got your name the first time. My name is Annalise Hughes. My second question is directed more toward Elena, on AI and forensics.
I know that in your presentation you mentioned that AI forensics relies on having validated and tested methods, but you also mentioned the Tesla incident, expressing that AI has this unpredictable aspect to it. So there must be a significant challenge in finding methods and tests for all these unpredictable outcomes, even with code. So my question for you is: what do you believe is stunting the growth of the AI forensics field? Would you say it's a lack of in-depth research and security when it comes to figuring out these outcomes and possibilities, even with code, or do you think it's a lack of interest among people, or something else? Thank you. That's a great question. To start off, I just want to clarify: digital forensics has the needed theories, including the Daubert standard. AI forensics really hasn't gotten there yet, but hopefully one day it will be at the point where we will have the tools needed to produce forensically sound evidence. However, I do think it's a lack of both. As you mentioned, there are definitely a lot of people interested in artificial intelligence as a whole, but not a lot of people specifically thinking about the failures that might, and will, occur as AI gets used as heavily as it is today and will continue to be used. So I think, one, there is a lack of interest in the field, and I also believe that with the issues already occurring, including unexplainable AI, it is hard to get there. And I think people are scared because of how hard this task might be to take on, but just because something is hard doesn't mean it shouldn't happen. So because of the AI issues and also because of a certain lack of interest, yes, this is kind of a hard field to tackle, but I do think it's necessary. Thank you. Thank you. And thank you all for your excellent questions. It's at this point that I'd like to ask for one more round of applause for these stellar students.
If you have anything on your mind, I'm sure the students would be willing to engage in some discussion afterward. Apart from that, I thank you for your attendance today. Have a wonderful afternoon.