To go to our next panel, "In what new domains will conflict occur?", which I'm going to moderate. We have Scott Stapp, Vice President of Capabilities and All Domain Integration, which he'll explain what that means, at Northrop Grumman, and we also have Lauren Zabierek, Senior Advisor at the Cybersecurity and Infrastructure Security Agency. Both of them are veterans of the U.S. Air Force, and we look forward to their comments. Let me start with Lauren. So the question here we're trying to answer is: in what new domains will conflict occur? And obviously in cyber, in the sense that conflict has been going on there for some period of time. There was a time, I think, when even Leon Panetta used the phrase a potential cyber 9/11. Is that plausible?

Thank you for the question, and it's a real pleasure to be here. I think what we've seen over the last, we'll say, decade, decade and a half at this point, is that we're in this environment of endemic cyber conflict. It's ambient. We're lucky in that nothing totally destructive has happened at this point. But I think most experts would probably say this concept of a cyber 9/11 is probably a bit of hyperbole and probably doesn't really fit the nature of cyber very well. The way I like to think about cyber is not necessarily as a place but more like a tool. It's a tool in the toolkit for adversaries, as well as criminals and other users, for their gain. So adversaries or criminals will use it to get what they are after, whether it's diplomatic, military, financial, et cetera. So I think it's the wrong analogy. But that is not to say that real harms are not being caused by the malicious use of cyber.

Scott, at Northrop, talk us through what a plausible scenario would be for some kind of space-based conflict, presumably with China. When you think about your job, is it mostly about China?
I wouldn't say at Northrop; I'd say the department is very clearly focused on China. Secretary Kendall from the Air Force was down at AFA today talking about just that, and everybody's probably heard where he goes: China, China, China. That is definitely a focus for the Air Force. As a defense contractor, Northrop Grumman's focus is whatever our customers' focus is. I think the hope currently is that there will not be a conflict in space, but you can't presume that won't happen. It is interesting that the 1967 Outer Space Treaty, which said, hey, space is going to be a peaceful domain, isn't that long ago. And that was when space was really not that accessible. Look at just the last 15 years and how much more accessible space has become, whether for commercial players or for our adversaries. We were talking earlier that it's very much like the sea domain. If you go back 1,500 years, there wasn't conflict on the oceans, right? It wasn't until you had much more proliferated access to the sea, lines of communication, commerce, everything else, that you started to see conflict in that domain. As space really starts to become an economic lever, as it starts to be involved in other domains of fighting, actually supporting the terrestrial domain, you run the chance that you're going to see a conflict in space. So you have to look at how you actually address that. It used to be very expensive to get to space; the rocket itself cost as much as or more than whatever you were putting up. That is no longer the case. Launch to space is actually very economical now. So you can start proliferating larger numbers of satellites rather than having them all clustered into a single capability. And I think you're going to see this go the same way it has in other domains.
A question for you, Lauren. CISA, your agency, has, I think, 2,400 employees or something. But obviously the problem is much bigger than anything you can do as a single agency; you're highly reliant on the private sector to do what it's supposed to do. And if you try to protect everything, you're going to end up protecting nothing. So in the hierarchy of things that your agency considers critical infrastructure, what are they? And how do you protect them, given that you're, in a sense, exhorting people to do the right thing? You can't fine them, I don't think, for doing the wrong thing. Your tools are more persuasion.

Yeah. So you mentioned critical infrastructure. I think a lot of people know that there are 16 sectors, and CISA is actually overseeing a number of those.

I don't think a lot of people do know there are 16. You may know that. So for the audience, what are the top things you're trying to protect?

Well, some of the most vulnerable are the water and wastewater sector, the health care sector, K through 12. Those have been some of our director Jen Easterly's priorities over the last year or so. But also things like manufacturing and transportation, really the things that we rely on in our everyday lives. And if those things suffer particular harms... I'll say, for today, my flight was delayed three hours because of an alleged software upgrade, right?

An upgrade. I love that. Not a downgrade.

Allegedly. Obviously a minor inconvenience to me, but what happens in the aggregate? That's a huge thing. Or let's look at Colonial Pipeline in our energy sector. I think the attack on that particular entity showed a lot of people the nature of the harm that could happen, because it really impacted everyday people.
They weren't able to get gas, right? So, to your point, we do rely on the private sector to hopefully do the right thing. Now, I will say that we try to give them what tools we can. But at the end of the day, the organizations within the critical infrastructure sectors, with the exception of maybe the financial sector and some parts of the transportation sector, simply don't have the resources to prepare or defend themselves against very well-resourced, very sophisticated actors. And so what my team at CISA is trying to do, and this is also a high priority for the director, is really drive this initiative called Secure by Design. That is in line with pillar three of the National Cybersecurity Strategy from the Office of the National Cyber Director, which states that we need to start moving the responsibility for security from all of us, the people who are not as well equipped to deal with it, to the manufacturers, the organizations that can, from the beginning, from the design stage, really try to make their products as secure and safe as possible.

Just to clarify: right now, when there's a so-called zero day, or sort of a back door, you have to patch it yourself. You get a message from Apple that there's something out there. So what you're trying to do is put that responsibility back on the software developers, et cetera, so that we're not always just repatching or missing the patches.

Exactly. Think of patching as what we refer to as a soft cost, right? Left of boom. You're investing money and time in patching these pieces of software. And it's not just one; organizations have an entire stack of software. It takes a lot of time and resources to go through that. And then inevitably, maybe you miss a patch, like we were talking about before with Equifax.
That's a problem, right? And in the aggregate that results in residual business risk, which then really adds up to this huge national security delta. And so what we're saying is: hey, manufacturers, there are things that you can do, if you think of a vulnerability as a defect, to test for defects the way other manufacturers that have embraced quality by design do.

You mentioned Equifax. For those who don't recall, the Chinese, and this is public record, took, I think it was 147 million records, basically records on nearly half the population of the United States, and stole them. One approach to this, as you were talking about, is that litigation by people who are affected might actually be better than just you saying you should do the right thing.

Well, traditionally, software and, probably to an extent, hardware manufacturers have really shielded themselves from liability, through the contracts you get when you download software that basically say: we are not liable, you're licensing the software. So traditionally that's been an issue. I think the courts may be starting to think a little bit differently. And of course, the National Cybersecurity Strategy from the Office of the National Cyber Director talked about looking at the liability issue too. That aspect, regulation, falls outside of CISA's purview. But that might be coming.

Scott, how is AI changing the defense and space business in general?

That's a great question. How is it changing everything in general? I think one of the things you're going to see in the DoD is that the DoD typically is looking for predictable results whenever it does anything. The one thing with the military is you train, train, train.
You want everything to be reflexive in nature. You want to understand what that outcome looks like. And they struggle with that; again, I was a tester at one point during my career, and you want very predictable results. When you look at predictive AI, there is a ton of opportunity space within the DoD. When you start looking at cybersecurity, when you start putting AI into computer systems that can look for anomalous behaviors and strange things that have not occurred before, whether it's insider threat or intrusions, AI does an amazing job at that, right? When you start looking, from a DoD perspective, through imagery or signals and have an AI diligently go through it, it can do that in a much faster and cleaner way. When you start talking about generative AI, where systems are going to start making decisions on their own, you already see on the commercial side that people are very reluctant about how fast they want to push that. On the military and DoD side, I think you're going to see that go even slower. Because it's different if you're dealing with a business that makes a bad decision and you lose money; in the DoD, it's about lives. And if AI makes a decision to do something that costs lives, civilian or unintended, it will not go well. So I think the department is going to go very slowly on generative AI.

Well, I'm old enough to remember that the US Air Force always said there would always be a human in the loop when it came to the kill chain. When the drone program really took off in 2008, 2009, that was sort of a mantra. Now, the Chinese obviously have autonomous drones and swarms that are governed by AI; they don't seem to really care about that issue. And there have been a lot of public reports recently about AI-powered wingmen, right?
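The anomaly detection Scott describes can be sketched in miniature. This is a deliberately simple illustration, not a real intrusion-detection system: the hosts, the failed-login counts, and the z-score threshold are all invented, and production tools use far richer models than a fleet-wide baseline.

```python
import statistics

def flag_anomalies(event_counts, threshold=3.0):
    """Flag hosts whose daily event count deviates strongly from the fleet baseline.

    Toy stand-in for "AI that looks for anomalous behavior": learn what normal
    looks like across the fleet, then surface statistical outliers for an analyst.
    """
    mean = statistics.fmean(event_counts.values())
    stdev = statistics.stdev(event_counts.values())
    return {
        host: count
        for host, count in event_counts.items()
        if stdev > 0 and abs(count - mean) / stdev > threshold
    }

# Hypothetical telemetry: failed-login counts per host over one day.
counts = {f"host-{i}": 20 + (i % 5) for i in range(50)}
counts["host-99"] = 400  # a compromised host hammering logins
print(flag_anomalies(counts))  # only host-99 stands out
```

The point of the sketch is the shape of the problem: no rule says "400 logins is bad"; the system flags whatever is far from the learned baseline, which is why it can catch behaviors "that have not occurred" before.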
So what is that? How do you keep the human in the loop? I understand how you might. But if the Chinese have already passed that, do you put yourself at a disadvantage by trying to maintain that human in the loop?

Yes, you do. And I think the difference there is much more cultural than anything else. When you look at some of our adversaries, their risk calculus is much higher. Their value on human life is different than ours, and so is how they look at that problem set. For us, when you look at autonomous systems, I think we're going to have to get used to having what we call man on the loop, not in the loop. Things are going to happen way too fast for a human to be in the loop on critical decisions, and in some cases I think we're going to have to have fully autonomous systems. If you were to look at a very large incoming raid of missiles, a battle manager, an operator, can't make the decisions about how he parses weapons against what's coming in; he's going to have to turn that over to a system that automatically makes those decisions. But that, again, I would say, is more predictive AI. It will just do the mathematical calculations very fast, and as things change, it will adapt. Any time it's going to make a critical decision that can have larger, what I call strategic, consequences, you're going to have a man on the loop, basically hitting a verify and making sure those decisions are made correctly.

Lauren, do you share that view?

I actually want to take that and bridge it a little bit with the cyber aspect. We've talked a little bit about China's cyber capabilities and the theft of a lot of data, especially what we were talking about earlier.
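The "parsing weapons against what's coming in" problem Scott mentions is, at bottom, fast allocation arithmetic. Here is a toy greedy sketch of that idea, not any real battle-management algorithm: the threat priorities, the single-shot kill probability, and the shot limits are all invented for illustration.

```python
import heapq

def assign_interceptors(threats, pk, shots_per_threat=2, inventory=4):
    """Greedy shot allocation: repeatedly take the shot with the largest
    marginal reduction in expected leak-through.

    `threats` maps threat id -> priority weight; `pk` is the assumed
    single-shot probability of kill. The k-th shot at a threat is worth
    value * (1-pk)**(k-1) * pk, so later shots at the same threat count less.
    """
    heap = [(-value * pk, tid, 1) for tid, value in threats.items()]
    heapq.heapify(heap)
    plan = {tid: 0 for tid in threats}
    while inventory > 0 and heap:
        neg_gain, tid, shot = heapq.heappop(heap)
        plan[tid] += 1
        inventory -= 1
        if shot < shots_per_threat:
            # The next shot at this threat is worth less by a factor (1 - pk).
            heapq.heappush(heap, (neg_gain * (1 - pk), tid, shot + 1))
    return plan

raid = {"T1": 1.0, "T2": 0.9, "T3": 0.4}  # hypothetical threat priorities
print(assign_interceptors(raid, pk=0.8))  # {'T1': 2, 'T2': 1, 'T3': 1}
```

With four interceptors against three threats, the greedy rule doubles up on the highest-priority threat and singles the others, the kind of decision that is trivial to compute by machine but hard for an operator to do under time pressure at raid scale.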
A couple of years ago, and I think he probably still continues to talk about this, FBI Director Wray talked about how this intellectual property theft represents one of, if not the, largest transfers of wealth in human history. And if you're an American adult, more likely than not your data has been stolen by China. So to bridge that with this concept: why is that data, why is that wealth, being stolen? To develop these particular capabilities, right? And not only from a data aspect, but from the systems themselves as well. So I think that's an important bridge there.

I'd add one thing. I think this does get into norms of behavior. There's a whole bunch of things you could do with generative AI; we are actually holding ourselves back from doing some of those things. But look at some of the things that have occurred recently, just making the news: Chinese cruisers cutting in front of US cruisers. That is not normal. That is not what we consider norms of operations at sea. And what they're doing is changing the norm. When you see the Russians or the Chinese buzzing aircraft, cutting in front of them, we call that unsafe operations. But the question for us is: are they changing the norm? Are we going to have to figure out how to adapt to a new norm? Because it's unlikely they're going to come backwards and adapt to our norm. That is going to cause tension over time about how fast we are willing to change, and to look at the culture and norms we have in military operations and adapt. Because otherwise what you've done is cede the advantage.

And the odds of an accident seem to be going up pretty highly, right?
Whether it's a pilot accident or a ship accident or what have you. Lauren, we talked about the hierarchy of all the things you're trying to defend. What's the hierarchy of the threats? Obviously you have these malicious groups doing ransomware, with names like REvil, which is a great kind of name. And then you have states. In your job, what's the hierarchy of states involved in targeting the United States, and what's the hierarchy of non-states?

I don't know if I can put them into a hierarchy. You have the group of state actors that are typically implicated in cyber attacks: Russia, China, North Korea, and to an extent Iran. And then of course you have a number of non-state actors, criminal groups. I don't think you can say which one is worse or anything like that. They have different capabilities and different interests, but there is still harm being caused to real people. I mentioned the Colonial Pipeline incident. We hear about ransomware attacks every day, right? For businesses, I think the cost reported in 2022 was in the tens of billions of dollars. But let's also look at livelihoods. There was a ransomware attack on a hospital in 2019, and this is of course not the only ransomware attack on a hospital; there are many. We might not have heard of all of them because they haven't been required to report, and perhaps they were able to recover within days, weeks, maybe months. But there was an attack in 2019 that led to the death of a baby girl. So real harms are being incurred through cyber attacks. To me, it's not necessarily state versus non-state. It's, again, a tool being used for whatever gain, but also there is collateral damage.
Speaking of collateral damage, there are a lot of satellites in the domain that you're mostly focused on, which is space. Elon Musk, I think, controls 4,000 of them. Are you concerned that there are too many of these satellites in low orbit, that we're setting up a problem that goes beyond some future conflict with China?

I will tell you, I personally am not. And the reason is that space is super big. We had talked about this earlier: go look at any FAA map and look at how many airplanes are in the sky at any one time, all operating between 20,000 and 40,000 feet. Sometimes you look at the map and it's just populated, right? It's just crazy. Satellites, by contrast, are operating between about 200 nautical miles and 22,000 nautical miles, across multiple different orbits, all sorts of stuff. There was an analysis I had done in the department with CAPE, because we were looking at how closely you need to track different objects to do conjunction analysis, and we're constantly moving things around because there are large error cones around those objects. What we found is there are about a million pieces of debris up there of a size that can destroy anything. And this is not from us; this is natural debris, micrometeoroids, all those kinds of things. A million-plus pieces, all operating around our satellites, yet we aren't seeing major impacts. Given the number of satellites we have, the likelihood that you'll have a conjunction, and again you may over time, is lower than what we're seeing currently, I think, in the air systems. And again, it's a big space.

I want to open it up to questions. If anybody has a question from the audience, we have a mic.
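Scott's "space is super big" argument can be put in rough numbers with a kinetic-gas back-of-the-envelope estimate. Every figure below is a round-number assumption for illustration (this is not the CAPE analysis he mentions): a million lethal debris objects spread through a 400-1,000 km LEO shell, a 10 m² effective collision cross-section, and a 10 km/s typical relative speed.

```python
import math

R_EARTH = 6_371e3                        # Earth radius, m
shell_low, shell_high = 400e3, 1_000e3   # assumed LEO shell bounds, m
n_debris = 1e6                           # assumed lethal debris count (order of magnitude)
cross_section = 10.0                     # assumed effective collision area, m^2
v_rel = 10e3                             # assumed typical relative speed, m/s

# Volume of the spherical shell, then mean debris density within it.
volume = 4 / 3 * math.pi * ((R_EARTH + shell_high) ** 3 - (R_EARTH + shell_low) ** 3)
density = n_debris / volume              # objects per m^3

# Kinetic-gas collision rate: density * cross-section * relative speed.
collisions_per_sec = density * cross_section * v_rel
per_year = collisions_per_sec * 365.25 * 24 * 3600
print(f"~{per_year:.4f} expected lethal conjunctions per satellite-year")
```

Under these assumptions the answer comes out on the order of one in a hundred per satellite-year, i.e. a single satellite would statistically go on the order of a century between lethal encounters, which is the intuition behind "a million pieces up there, yet we aren't seeing major impacts."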
[First part of audience question inaudible] ...except for Rooster. Who would eventually be perceived to be by the adversary. And they're two different things. I'm just curious as to what your thoughts might be on that.

All right. Is that for Lauren or me? Because I will tell you, from my perspective, I think that's a major problem. You're right, what the Chinese are looking to do in an adversarial or military conflict is to change our perception of what occurs. And to me, this is where it gets interesting in the AI world. Everybody's seen, with deep learning algorithms, where you see a picture and an AI algorithm will tell you it's a giraffe. You change a couple, three pixels; you still see it as a giraffe, and it calls it a polar bear, right? So the question is the adversary's ability to go in, if they have our data and are smart about how we're using algorithms on it, and actually modify it so that the algorithms read that data completely differently. We're not going to have humans doing this; we're going to have machines doing it. That is a real problem and a real threat for us. And figuring it out, which gets back to secure by design or zero trust, understanding when people are intruding and what they're doing, is probably the most critical piece. Because from the department side, you're going to have to assume people are in your networks. You're just going to have to assume it.

Yeah, and I'll just say, from CISA's point of view, where we are trying to buy down that risk for the nation, part of that is making sure that our systems, our data, and our devices are secure. Because, as I was saying before, it's about real people. It's about harms to people, not just data. But I think you just really explained the potential harms of not securing that data.

Colonel Willie, do you have a question?
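The giraffe-to-polar-bear flip Scott describes is an adversarial perturbation. Here is a minimal sketch of the mechanic using a made-up linear classifier (the weights, bias, and "image" are all invented; real attacks such as FGSM target deep networks, but the principle is the same: nudge each input a tiny, imperceptible amount in the direction that most changes the model's output).

```python
def classify(pixels, weights, bias):
    # Toy linear "giraffe vs. polar bear" model: positive score means giraffe.
    score = sum(w * p for w, p in zip(weights, pixels)) + bias
    return "giraffe" if score > 0 else "polar bear"

def perturb(pixels, weights, eps):
    # FGSM-style step: move each pixel slightly against the model's gradient.
    # For a linear model, the gradient with respect to the input is just `weights`.
    return [p - eps * (1 if w > 0 else -1) for p, w in zip(pixels, weights)]

weights, bias = [0.3, -0.2, 0.5, 0.1], -0.43   # invented model parameters
image = [0.6, 0.4, 0.7, 0.2]                   # invented pixel intensities in [0, 1]

print(classify(image, weights, bias))                          # giraffe
print(classify(perturb(image, weights, 0.05), weights, bias))  # polar bear
```

Each pixel moves by only 0.05 on a 0-to-1 scale, far too little for a human to notice, yet the label flips, because the model sits near a decision boundary that the attacker can locate and cross deliberately. That is exactly why stolen knowledge of "how we're using algorithms" on our own data is dangerous.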
I just wanted to add that the last question came from Lieutenant General Bob Schmidle, who is also associated with ASU and was instrumental in setting up Cyber Command. And this is Colonel Dennis Willie, our first Chief of Staff of the Army fellow, who now works for Space Command.

Thank you, Peter. I know that you're running close on time. With respect to the space conjunction problem, you're right, there haven't been a lot of public discussions about breakups and things like that, but the number of objects that we do track has almost doubled. And you talked about the economic aspect earlier. What we're witnessing is de facto real estate by altitude being occupied by these different constellations, SpaceX and Starlink first, and all the other ones the FCC approves. So, in the world of conflict at the economic level, any thoughts about the creation of this de facto real estate, when no sovereign country is supposed to have real estate in space? Thanks, Peter, for the time.

I think that's a great question. As we start to look at Starlink or Kuiper or any of these constellations that have just massively proliferated, I think that does start to create some issues, especially if they are not assigned distinctly different altitude regimes, those kinds of things. But when you start talking about economics: who owns the Moon, right? I just heard that somebody declared they own the Moon, right? China, others. It's going to get very interesting over time in a new domain, and it's like the law of the sea. At some point we decided that 12 nautical miles was the territorial limit: inside is sovereign, and the rest is international. There has not been that kind of settled view on how we're doing space, and the issue gets back to what we were talking about, what we can do in cyber.
Yes, we could go in and hack all sorts of stuff in cyber; you could hack hospitals and other things. But we have this thing called the law of armed conflict, and we tell ourselves there are certain things our value systems will not let us do. With the Outer Space Treaty, we said space is for peaceful purposes and we're not going to have conflict there. Well, China's not a signatory to that, and a lot of these... So, do we handcuff ourselves? And as norms start to change, how do you address what that new world looks like? If you want to say, no, we need to have some international discussions about who owns what pieces of space, you're back to the UN. No slight to the UN, but how fast that works and how effective it is, is a big question mark. Which gets back to: the likelihood of conflict in space grows because you can't develop a common set of norms, and you get into a dispute of no, that's mine, no, that's mine. And the next thing you know, they're at it.

I want to thank our panelists very much. Thank you, Lauren. Thank you, Scott.

Thanks, Peter. Appreciate it. Thank you.