Welcome to the U.S. Naval War College lecture of opportunity. Joining us today is Lieutenant General Michael Groen. Lieutenant General Groen is the recently confirmed director of the Joint Artificial Intelligence Center. As a member of the JAIC team, he leads the transformation of U.S. joint warfighting and deployment processes through the integration of artificial intelligence. Prior to coming to the JAIC, he was at the National Security Agency, where he served as the Deputy Chief of Computer Network Operations, leading that premier computer network exploitation organization. He has also served the Chairman of the Joint Chiefs. He has an interesting bio; I encourage you to go online and look at it on the JAIC website. He has several advanced degrees, but the most important degree among all of them is his U.S. Naval War College degree. General Groen, we welcome you back to the Naval War College and look forward to your insights about the JAIC.

Great. Thanks, Dr. Creely. It really is great to be virtually back. I guess virtual gives me the advantage of not having to feel the January wind blowing through my uniform as I'm trying to get to class. There's a lot to be said for this virtual stuff. Thanks for having me. I remember clearly that once you pass the Christmas and New Year's holidays, you can suddenly see graduation on the horizon. Everything becomes a lot more real, and you realize how much work you have to do before you're ready to leave Newport. So I wish you all the best. I know you're getting danger close now, and I appreciate the time you're spending with us today.

Okay, thank you. Let's get right into some of the questions we have looked at and thought about at the War College.
A number of people have submitted questions, and later on we will take questions from the chat for Q&A near the end. So, starting with the basics of the JAIC: what should a War College student who's graduating in 2021 know about artificial intelligence? How important is this at this point in time?

Yeah, great question to get us started. And I think it's critical, obviously. I do think there's a magic mix of skill sets required for leadership in the military and in the Pentagon today. It's fed by, I think, three things that all of us need to be experts in, and if we're not, then we have to study hard to get there. The first is the graduate education you're getting at the Naval War College and the other service colleges, which allows you to think outside the lines and figure out where those lines are and why they're there. The second piece is warfighting understanding. Especially as you become an O5, an O6, an O7, you have to understand deeply how warfighting happens in the department, how we fight as organizations and how we fight as an institution. If you take those two things and add technical literacy, now you've got the mix that really is going to make somebody successful in 21st-century warfighting, starting now. We struggle with this balance every day: we have tech-literate people, we have great thinkers, we have warfighting experts. We need those skill sets in one person.

So, what does a graduate in 2021 need to think about? I would say, first, the tempo of warfighting is in for dramatic acceleration, and that has a huge impact not only on the things that we do and the equipment we buy, but on how we think about our operations. I always think about Yamamoto in 1941: "I'm afraid we've only awakened a sleeping giant." And that was true.
And over the course of years, U.S. industrial capacity defeated dictators on multiple continents. But here's the thing: today, a threat actor can have effects almost instantaneously, at intercontinental ranges, at hypersonic speeds, in multiple domains: cyber attacks, space denial. All of these things are real artifacts of the modern warfighting age. And to be able to respond in that age, industrial capacities, or industrial capabilities, are not going to be sufficient. You have to be thinking about how you can succeed in a digital environment.

So, artificial intelligence itself is a general-purpose technology. Its value is really measured in everything it touches. It's hard to talk about artificial intelligence without talking about the applications of artificial intelligence. And so you have to understand those applications and how they're being transformed by commercial technology. AI is largely about predictions and decision-making, certainly in the state of the art today. It takes what you know from past experience and from structured problems, and it helps you understand what's likely to come next. "Prediction machines" is a moniker that's been attached to artificial intelligence.

Here's what I will tell all of you: AI is not something that will happen in the future. AI exists today. It exists in the world we live in. When you leave the office, or you leave the Pentagon, and you pick up your cell phone, you are immersed in a modern digital environment that runs at great speed because it's enabled by artificial intelligence. But on the flip side, when you turn your phone off and walk into the Pentagon, or walk into the SCIF, or walk into your CP, suddenly you're back in the industrial age. Artificial intelligence has been integrated into defense to a much lesser degree than it has into the rest of our lives.
And that's partly because of what I would call a tech inversion. Historically, the Department of Defense would create capabilities, and that technology would trickle out into the commercial environment. Today we have an inversion of that: very advanced digital technology is driving our social and commercial environments, yet that technology hasn't trickled in to the Department of Defense. It's not a lack of technology that's our problem, obviously, because the technology exists everywhere else. Our problem, I think, is two things: imagination and implementation. So when you get back out to the fleet, or back out to wherever you're going, looking at problems through a technologically informed lens and a warfighting lens is going to be the difference maker for your units and your services.

Okay. I've heard several people talk about the issue of imagination, that is, expanding people's capacity for thinking, and education is going to help with that. And implementation is obviously about dealing with a bureaucracy: how do you move things within the bureaucratic government given the speed of artificial intelligence? Now, recently you've spoken, and I've read in Defense One, about transformation. What do you mean by transformation? What are the means of transforming the Department of Defense, and what is the impact of artificial intelligence on future military operations?

Yeah, great question, because of the distinction: why would we call this a transformation? I think of it in the same way I think about the transformation to industrial-age warfare, which really came to a head in World War I. If you, like me, were binge-watching World War I documentaries on Netflix during the height of the COVID months, you saw lancers riding into battle against machine guns, massed artillery fire, barbed wire, chemical agents. How could that be, right?
How could it be that a force or a commander who lived in the industrial age, who was very familiar with all of its artifacts and technology, still thought that lancers had a place on the battlefield? Literally, men with wooden sticks with iron spikes on the end, riding into machine guns. This is where the realities of industrial-age warfare, all of the industrial-age artifacts these people were very familiar with in their daily lives, suddenly became magnified into a complete transformation of warfare, into something that, honestly, we still think about in much the same way today.

So now think about where we are today. Think about all of the information-age artifacts that surround every component of our lives. Hundreds of times a day we touch an artificial intelligence agent; we just don't necessarily realize it. So how do you not just field a piece of equipment or field an AI into the Department of Defense, but transform the Department of Defense? I think about transformation in three broad areas. One is warfighting. Clearly our warfighting is still industrially derived; in places we've incorporated the artifacts of the information age, but we haven't yet achieved information-age warfare. So warfighting is by far our number one priority, but there are two other broad enterprises we should talk about. One is the support enterprises: things like the Defense Logistics Agency and the logistics enterprise, the Defense Intelligence Agency and the intelligence enterprise, the Health Agency, or pick your favorite agency or activity in the department. There are 28 of them, I believe, in addition to the services. And the third category is business practices. The Department of Defense: five services, 28 agencies, about 3 million people, about $700 billion a year.
If there's one place on this planet that needs to be transformed using artificial intelligence, it's here. Everything from bringing fires quickly to bear on a target, to matching unmatched receipt transactions in the comptroller's office, and everything in between requires broad transformation. This is not something we're going to poke at and call done. It can't be that way. The entire world has transformed around us, and we're behind the power curve. So when I talk about transformation, that's what I'm talking about. And specifically, I think the officers in the war colleges today have a generational opportunity to transform the department in ways it's never been done before. Being the individual, the leader, who can combine military expertise, warfighting expertise, and technical familiarity and skills is really going to be a valuable proposition going forward from this point.

Yeah, I agree with you. I know last year DOD came out with the AI ethics principles; I worked with the Defense Innovation Board on that as well, and we made some input. With respect to ethics and AI, is the intention to eventually have AI capable of moral agency, including moral deliberation, acting autonomously on those deliberations, being able to evaluate and defend its actions in moral terms to other moral agents, and being held responsible accordingly?

Yeah, important question. So, moral agency. We are certainly nowhere near moral agency; we can't even see it from where we are. It is a question that we have to continue to consider. Honestly, from where I sit, I don't know that we would ever go there. By definition, this requires the ability not only to understand what's legal and what's illegal, but what's right and what's wrong. And we don't even allow human agency in that space in many areas.
This is why we have commanders, for example, and why the example of being a commander is so important: you are the person with moral agency over your unit. You are the person with moral agency over the technology that you use. So I cannot foresee a day when we do that. That's not to say it couldn't change; this technology moves fast, and as it integrates across the rest of our lives, maybe people will become more comfortable with it. To me, that's really a question of philosophy. But that doesn't mean we can say, oh good, thank God we don't have to worry about that anymore. No, actually the integration of artificial intelligence under human moral agency brings with it a plethora of moral and ethical arguments that we have to think through.

Probably closer to the barn: what kind of tasks can we delegate to an AI system? What kind of decisions are we willing to make from an AI recommender? The vast preponderance of use cases for artificial intelligence in the department today are decision support: making decisions faster, making them data-driven. The next biggest tranche is man-machine teaming. So when you think about where the department's focus is (and when I say the department, I mean all the services, the agencies that have their own AI efforts, the JAIC of course, and others), our focus right now is enabling humans to make better decisions and make them faster. This is things like integrated fires, command and control, and intelligence, so that you can understand what's going on around you, make sense of it quickly, make good decisions, and then execute. In that case, moral agency remains clearly with the commander.
And that's where most of our work is now. Some of the technical work involves things like autonomous drones and swarms, loyal-wingman kinds of concepts. Most of those are still conceptual, but we're doing a lot of good work to figure out how people and machines can operate together as a team much more effectively. And obviously we're all very comfortable with machines with some degree of autonomy today, controlled under human judgment, especially where that kind of support is contemplated: our cruise missiles, our laser-guided bombs, precision-guided munitions, close-in weapon systems on ships, THAAD, air defense capabilities. So many of these have automated components, but they still remain under human moral agency.

Yeah. Well, that's important. That's exactly what we do with the Ethics and Emerging Military Technology graduate program at the War College: we take the philosophy of technology and the philosophy of ethics and apply them to technology, and our students wrestle with these hard questions that are not easy. So we try to make a contribution in that area. Looking a little bit ahead, do you intend to participate in discussions with other top AI experts and collaborate with think tanks to mitigate the high-stakes risks associated with AI?

Yeah, so I guess first I would say that the JAIC is not a policy shop. We are an implementation organization. And this is a distinction that we try to make across the department, because we have great partnerships with organizations like DARPA, with the research and engineering organizations, and with the service labs. A lot of our R&D enterprises are looking carefully at artificial intelligence and artificial intelligence applications. Our niche is to the right of that.
We are transitioning things from the research and development environment and turning them into real implemented capabilities. So instead of think tanks, we have do tanks, right? We are focused on getting things accomplished. But again, that doesn't excuse us from the set of ethical questions. Your ethical radar has got to be on all the time, especially because of the high-consequence missions that the Department of Defense operates that nobody else does. Life-and-death decisions, even at scale, are things that every military officer has got to take very closely to heart, with a really sound moral sense to think them through.

One thing that does strike me about some of the think tank publications (of course we read them, and we use them to adjust our moral compass) is that there are a lot of "coulds" in there. You know what I mean? There's a lot of hype in that space. We've gone through periods where the hype was associated with the technology and what it could accomplish. I think there's now a lot of hype associated with "oh my God, this kind of thing could happen." And when I read that kind of stuff, I think of a couple of different things. Like I said before, no officer is off the hook for having a strong moral and ethical foundation, especially when it applies to the use of technology. But there are moral imperatives, there are technical or legal imperatives like the law of armed conflict, and then I would say there are comparatives. When I say comparatives, I mean that when we talk about what could happen or what dangers could be present, we always have to compare that to the dangers we have today. Take collateral damage assessment. We do collateral damage assessment today with humans; it's part of our targeting process. And actually, I'm a recovering targeteer.
I'm really proud of the way the U.S. military does this. We pay really close attention. We do everything we can to minimize the loss of life, sometimes going to extraordinary lengths. It's really motivating to be part of an institution with that kind of moral grounding. But machines can do collateral damage estimation much better than humans can. We've built algorithms for things like typhoon damage assessment. One of the first things you want to understand in typhoon relief is, for this block of homes, should we just bulldoze them or can they be repaired? We've got AI tools now that we've developed to help responders do that kind of work, and they take the place of lots of people with clipboards walking down wet streets. That's the kind of thing we can put into place.

So let's go back to the moral questions about targeting and collateral damage assessment. If machines can do that much better than humans can, at what point is it immoral to have humans doing it and moral to have a machine do it? You know what I mean? You get into these moral and ethical boundaries that twist on you. Here's another one: dirty and dangerous. Today we put young men and women up against steel. We're still rushing squad leaders and squads of Marines up against entrenched enemies. At what point does that become immoral? At what point does it become more moral to have an autonomous weapon system, or a remote-controlled weapon system, actually take the line of fire? At what point do you make the decision that, yeah, you know what, it's better for me to not put humans into that space at all?
And then there are arguments like: could unmanned platforms in the Pacific create strategic escalation? You have to ask yourself, from a moral agency standpoint, is it more acceptable to have a strategic escalation risk, or is it more acceptable to have humans in the line of fire on a ship in the South China Sea, for example, within range of a host of DF-21s? Those are the kinds of things our officers have got to be able to reconcile.

And I would just add this. Today, the real existential moral agency questions are not things most of us are going to deal with, but there are real moral and ethical questions that still have to be decided when you start talking about real applications of AI in the department today. When you're building an AI algorithm, for example, especially one that's forward deployed, you have to worry about latency, because of your bandwidth limitations. And there's generally an engineering trade-off between latency and accuracy: the more bandwidth you have, and the more time your model has to think about a problem, the better the solution it's going to give you. Now somebody has to make that decision on the technical side. Do I design my algorithm so it can make decisions quickly in a low-latency environment? Or do I design my AI so that it can take all the time it wants but gives me a better answer? That's a moral question, and if military leaders don't engage on that sort of question, then it's left to an engineer somewhere. And I love our engineers; they're great Americans. But you know what, I don't know that I'm comfortable leaving a decision with moral impact to the engineering team, you know what I mean?
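The latency-versus-accuracy trade-off described here can be sketched as a simple selection policy. Everything below is illustrative only: the model names, latency figures, and accuracy numbers are invented for the example, not drawn from any JAIC or DOD system.

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    name: str
    latency_ms: float   # expected inference time per input (hypothetical)
    accuracy: float     # validation accuracy, 0..1 (hypothetical)

# Hypothetical candidate models: numbers chosen purely for illustration.
CANDIDATES = [
    ModelProfile("small-fast", latency_ms=15, accuracy=0.82),
    ModelProfile("medium", latency_ms=60, accuracy=0.90),
    ModelProfile("large-accurate", latency_ms=400, accuracy=0.95),
]

def select_model(latency_budget_ms: float) -> ModelProfile:
    """Pick the most accurate model that still fits the latency budget.

    The budget itself encodes the leader's decision: a tight budget
    (low-bandwidth, forward-deployed) trades accuracy for speed.
    """
    feasible = [m for m in CANDIDATES if m.latency_ms <= latency_budget_ms]
    if not feasible:
        # Nothing fits the budget: fall back to the fastest model.
        return min(CANDIDATES, key=lambda m: m.latency_ms)
    return max(feasible, key=lambda m: m.accuracy)

print(select_model(100).name)   # medium
print(select_model(1000).name)  # large-accurate
```

The point of the sketch is that the `latency_budget_ms` parameter is exactly the kind of number an engineer will pick by default if a commander doesn't.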
So that's the kind of moral question our folks have to think about. Here's another one: training data. We train our machine learning algorithms on training data. Well, if I train a machine learning algorithm on data that I collected in the Middle East, say, and now I'm going to use that algorithm in the Pacific or in Europe, I don't necessarily know what that performance is going to look like. I don't know if there will be second- or third-order effects on that machine's decision-making. So the moral requirement to ensure the responsible application of your AIs to the situations they're being exposed to falls on commanders. It falls on the military leaders who are part of this decision-making process.

Ethics is becoming more and more prominent in decision-making, from the operator at the enlisted level all the way to the four-stars and even the secretaries in the Pentagon. The next question I would ask you is this: partnerships with the tech and corporate sector are obviously critical to ensuring DOD can benefit from artificial intelligence. What are the challenges that private sector partners cite to working with DOD? I've heard some of these in working with the private sector and talking with them about these challenges.

Yeah, I think there was maybe some tension some years ago. Honestly, today, I don't see it. Today, it's not an issue for us. AI vendors are beating down our doors wanting to work with the Department of Defense. There are a couple of different reasons for that, I think. One is the small, aggressive, innovative companies. The AI space is a really competitive space now; almost every company is an AI company in one way, shape, or form.
There's this vertical market segmentation where what used to be broad AI companies have now established niches. The AI market is so segmented that you have specialized companies in all these different areas, and that makes it a really competitive environment. So I think that might be one aspect. We've also seen the emergence of specifically defense-oriented AI companies. There are some AI companies that have been formed, some very successful ones that are really great to work with, specifically to work on defense problems, because they're driven by values. There are many great Americans who don't serve in uniform but who want to be part of defending America, and that patriotism and moral fiber is there in the tech industry just like it is anywhere else. We have sound relationships with all of the major vendors, all the names that would roll off the tip of your tongue. So we've got those vendors on board; we've got relationships across the industry.

Our focus now in that space is to get more small, innovative companies into the Department of Defense business. The defense industrial complex is not really designed for small companies; it's designed for major defense contractors. So we use our major defense contractors as primes, and we help them bring in small, innovative subcontractors. And in this latest appropriations bill, in the NDAA, we were authorized acquisition authority within the JAIC. So now we can reach directly to some of those small, innovative companies and create mechanisms that make it easier for them to work with the Department of Defense.
So I think we've got a good relationship with the tech industry. We've got an increasing set of capabilities that we can access, everything from commercial cloud computing all the way to algorithm development. And I think one of the features of our relationship with the industry is that if you're a company, you can kind of pick how close to the fire you want to be. It ranges from running commercial cloud environments, like some of our big companies do, with defense businesses operating on the commercial cloud (which is fairly benign: you're sitting next to a commercial entity on the cloud), all the way to, no kidding, helping to automate or provide some degree of autonomy in a weapon system, and everything in between. So there's a broad range where AI companies can pick where they want to be on that spectrum. We do a lot of humanitarian assistance and disaster relief, for example, and most companies are very comfortable working with the Department of Defense on that kind of activity. So just to sum it up, I think we have a good relationship with the tech industry now.

That's good. And getting that acquisition authority is quite important; it gives you a lot more flexibility to respond to immediate needs. The next question: with the push to deepen systems integration to enhance battlespace awareness in operational units, describe the role AI will play in the cybersecurity protecting the decision-maker, sensor, and shooter network.

Yeah, so great question.
I think the recent exposure of foreign intelligence services getting into a lot of networks should be a wake-up call to all of us. AI is certainly not immune from that kind of interference. Here's how I like to think of it: AI can be a really critical component of cybersecurity. There's really no better tool to watch the activity on the net than a machine. Humans can do it, but machines can do it much better, whether you're talking about detecting patterns in data or whatever it is; AI can play a key role in cybersecurity. But the flip side is that cybersecurity has to play a key role in artificial intelligence, because of the risk of corruption of your data, the risk of corruption of your algorithms, and the risk of adversarial AI that is specifically trying to defeat your artificial intelligence capability. Like any emerging tech environment, you see this race between capability and counter-capability and counter-counter-capability. As the threats mature and manifest, you see the tech industry mature and manifest as well. So we in the JAIC, for example, have a very well-connected research arm that watches academic research, and commercial research where we can access it, to make sure that we are aware of and following the maturity and emergence of risk in our space, and our ability to defend against it. We've taken that sensitivity and created very secure environments for our AI data, our training data, and our algorithms.
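The point that a machine watches network activity better than a human can be illustrated with a minimal anomaly-detection sketch. The hosts, connection counts, and the z-score threshold below are all invented for illustration; real network defense tooling is far richer than this.

```python
import statistics

def flag_anomalies(baseline: list, observed: dict, threshold: float = 3.0) -> list:
    """Flag hosts whose activity is a statistical outlier vs. the baseline.

    baseline:  historical per-interval connection counts for normal traffic
    observed:  current connection count per host
    threshold: how many standard deviations from the mean counts as anomalous
    """
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    flagged = []
    for host, count in observed.items():
        z = (count - mean) / stdev  # standardized deviation from normal
        if abs(z) > threshold:
            flagged.append(host)
    return flagged

# Illustrative data only: normal traffic hovers around 100 connections.
history = [98, 102, 97, 101, 105, 99, 100, 103]
current = {"host-a": 101.0, "host-b": 480.0, "host-c": 99.0}
print(flag_anomalies(history, current))  # ['host-b']
```

A human scanning logs might eventually notice host-b; a machine applies the same check to every host, every interval, without fatigue, which is the asymmetry being described.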
And in the department, one of the things our officers will have to balance is that you want to be able to share data quickly among services, for example, or algorithms among services or other components, so that you can leverage the technology that already exists. Yet every time you share, you expose new risks. So building the right kind of enclaves, the right kind of containerization, so that you can have trusted code that can move from one service to another, or from one place in the department to another, takes a lot of our time now. We spend a lot of time thinking about how we secure this AI enterprise.

Okay. I know the JAIC has engaged with NATO, and now you're facilitating a forum called the AI Partnership for Defense with a host of partner nations. Can you discuss how the JAIC is coordinating with allies on various AI initiatives?

Yeah, absolutely. I think there are a couple of points here. One I would start with is that the U.S. has been at the forefront, and again, I'm very proud of this, as all of us are who wear a uniform or serve our department. The U.S. Department of Defense has been forward-leaning in establishing ethical baselines. We've done it with autonomy, for example, in DOD Directive 3000.09, and I'm sure that's a topic of discussion. We laid down a set of principles for the use of autonomy, what we're willing to do and what we're not willing to do as a military, and that's a powerful statement. We've done the same sort of thing with AI ethical principles. The U.S. DOD was a leader in actually publishing a set of principles to guide the development of our AI. And it stands in stark contrast to what we observe of AI development in some of our threat nations, clearly.
That's actually a very strong attraction for countries who think the same way we do. General Shanahan, my predecessor, did some work with NATO a year or so ago, and what came of it was that 13 nations, including the United States, put their heads together and said, you know what, we actually want to think together and work together on this. So we've established the AI Partnership for Defense with those 13 nations. As a matter of fact, we have our next summit in two or three weeks, with all of those nations coming together virtually. And this is not just for us to throw ethical principles at each other. We are now actually thinking through the elements of developing an AI capability. When we talk in a couple of weeks, we're going to talk about data strategies and data protection. We're going to talk about algorithm protection and algorithm testing and evaluation. Some of these things have applied ethical implications for developing technology. They want to be right where we are, and we want to be right where they are, quite frankly. In many cases, there are very mature models of AI development and integration among our partners, just like we have here, and we want to take advantage of that. So I'm really looking forward to our session in a couple of weeks. We meet about quarterly. This is not the old-school conference where you get people into the United States and the Americans talk for a day and then it's done. That's not what this is. This is a real partnership. We're working together, we're learning from them, they're learning from us, and we're sharing across that community. So I'm really excited about it, if you can tell.
Because I think it's a really powerful message and it gives us strength, right? It makes our efforts stronger, and it creates an awful lot of synergy in the relationships across different cultures, regulations, and rules. Over the next five years, what will be some noticeable impacts of DOD's employment of AI? How will we know your time as director was successful? Well, I guess they say if the band's playing when you come into office and the band's playing when you leave office, then you did pretty well. But I am such a believer in the necessity for transformation in the department that I think anything we do has to be measured against that transformational goal. Right now, each of the services has an AI development effort; some of them are very mature. Some of the defense agencies have AI efforts at different levels of maturity as well. And there are many areas in the department where there's no AI at all, and there needs to be. So when we look at that environment as the JAIC, and we think through what broad transformation looks like, how we really get the department to change, our focus over the last couple of months has become broad enablement. We want to help people who are on the cusp of using AI make that leap. People who have AI already but haven't implemented it at scale, we want to help them implement it at scale. The JAIC's primary function is enablement: we want to make other people successful. If we can make other people successful to the degree that we start to see recognizable scale across the department, then I will be very happy, because that gives us a baseline to build from. One of the tools we're building to enable that broad enablement is something called the Joint Common Foundation.
So here we're working with multiple parties: closely with the Air Force, and closely with the department's Chief Data Officer and the Office of the Secretary of Defense, which uses a system called Advana for data-driven decision-making and data-driven briefings. Just a note: the Secretary's office, or the Deputy Secretary's office, is in a really good place here. They don't use PowerPoint in their briefings; they use live data. If you want to brief something and show your numbers, you show your numbers, not a PowerPoint presentation of your numbers. You show the numbers that are there every day. That creates an environment not of episodic knowledge, hey, let me go run a survey or ask a question, but of constant knowledge: if you need to access the data, it's there for you. That's the kind of environment we're trying to build across the department. So if we can get a Joint Common Foundation that enables AI development, that can securely store data and algorithms, and that contains the tools people can use to develop algorithms, that will be another really great success for the JAIC in a few years. The third thing I would say, and this is important: the Department of Defense is famous for its segmentation and its stovepiping, right? Given a challenge, every service will go do its own thing. Every agency will go do its own thing. Sometimes individuals within services do their own things. That's okay in an industrial-age environment where you're building technology that might drive one particular weapon system or something like that. But when you're trying to stitch together an integrated warfighting capability across all the services and all the domains, that stovepiping is not acceptable. It cannot work. And to move beyond stovepiping, you have to have a degree of governance.
And I'm not talking about the dead hand of governance that slows everything down. I'm talking about creating capabilities that are so good, so virtuous, that everybody wants to be part of them. That's the direction we're taking. We're trying to build really solid enterprise-level capabilities that any AI owner would be foolish to turn their back on. If we can collectively, working with the services and the agencies, pull together an integrated capability, then I think we will really have something to be proud of, and then we will start to see real transformation. Right. Well, a year ago in April, three and a half million gallons of milk were being poured down the drain, produce was being plowed under, and livestock were being euthanized because the farmers had no place to send them. Yet at the same time, there were people standing in line, hungry, looking for food. At that time, I called my good friend who is on this Zoom, Dr. Molly Jahn of the Jahn Research Group, former Deputy Secretary of Agriculture, asking: what can we do about this? And the first group we called was the JAIC, to help with food insecurity, national security, and mitigating and redirecting that waste. And of course you have Project Salus, named for the Roman goddess of safety and well-being. In what ways was AI used to mitigate the disruption in the supply chain and food distribution during COVID-19? You all just happened to set up Food Source USA, but it's a bigger problem than just that. Yeah, exactly. So Salus came about in support of NORTHCOM, which has the domestic disaster-response responsibility. It's always good, for the students with a couple of months left, to get to know the whole pantheon of Greek and Roman gods, because you'll see all of those names; it's an OPSEC issue, perhaps. But Salus was a great opportunity for us.
This was when we had several hurricanes in succession and then the COVID crisis on top of that. NORTHCOM had some real planning challenges: what mechanisms do we use to track where potential shortages are? If you can track demand and you can track supply, then you can estimate future shortages, for example. So in that environment we developed a number of algorithms, working with NORTHCOM and others, things like forecasting demand for critical food supplies or other relief supplies, medical supplies, PPE, that sort of thing. At the same time, we were working on algorithms to help the California firefighters fight wildfires. That one is just a fantastic example to me, because it's so easy, and it's a great example of imagination. The way firefighters have dealt with massive wildfires is that at the end of the day, all of the command posts and the chiefs get together, they look at a map, and they draw a line. And when I say draw a line, I mean literally: some dude is looking at a picture of a fire and trying to figure out where that is on the map. Then they reproduce that map and rush it out to all of the fire planners and the commanders so the force can continue to fight the fire. Why in the world would we do it like that? Why don't we use imagery to automatically map the fire line and make that data available through an app that a firefighter can have on his or her tablet? And so that's what we did. We worked with the National Guard to do some overhead sensing, image the fires, autonomously identify where the fire line is with an AI, give it a true geo-reference, and then make that data available to firefighters. It's not magic.
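The fire-line idea above can be illustrated with a toy sketch: flag the "hot" pixels of an overhead image, then convert their pixel coordinates to geographic coordinates with the image's geotransform. Everything here is invented for illustration (the thermal values, the threshold, the geotransform numbers); a real pipeline would use orthorectified imagery and a trained model rather than a simple threshold.

```python
def pixel_to_lonlat(col, row, geotransform):
    """Apply a simple affine geotransform (GDAL-style 6-tuple) to pixel coords."""
    x0, dx, rx, y0, ry, dy = geotransform
    lon = x0 + col * dx + row * rx
    lat = y0 + col * ry + row * dy
    return lon, lat

def fire_line(image, threshold, geotransform):
    """Return geo-referenced points for every pixel hotter than the threshold."""
    points = []
    for row, scanline in enumerate(image):
        for col, value in enumerate(scanline):
            if value >= threshold:
                points.append(pixel_to_lonlat(col, row, geotransform))
    return points

# 3x3 toy "thermal" image; origin at (-120.0, 39.0), 0.01-degree pixels
image = [[10, 10, 90],
         [10, 95, 92],
         [10, 10, 10]]
gt = (-120.0, 0.01, 0.0, 39.0, 0.0, -0.01)
print(fire_line(image, 90, gt))  # three hot pixels mapped to lon/lat pairs
```

The output is exactly what the general describes wanting on a firefighter's tablet: not a hand-drawn line on a paper map, but true coordinates that any mapping app can display.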
And this is why I say the challenge of implementing AI in the department is largely one of imagination. We have to imagine the use of these technologies in all of our problems. It's easy to go from firefighting to warfighting; you can see the parallels. And it's easy to go from the distribution of COVID supplies to the distribution of logistics on a theater battlefield, for example. There's so much goodness here. Honestly, there's a lot of focus from the outside on warfighting AI: hey, you need to build swarming UASs and that sort of thing. All of that's really important. But for everybody who's listening: whatever your warfighting function is, whatever your process is, AI can do enormous things. If you're a comptroller, we still have people looking at physical spreadsheets, looking for dollar signs, trying to match this spreadsheet with that spreadsheet. No company in the United States of any scale does that. So why is that okay in the Department of Defense? It's not. In all of these warfighting areas, there's an opportunity to just take a whiteboard and think through how AI might facilitate your warfighting function, whatever it is. We've actually started doing these as a precursor to AI readiness evaluations: imagination sessions, where we take a service or a component or a particular warfighting function, sit down with a blank sheet of paper, and say, okay, let's talk through from garrison to deployment, to employment in a combat environment, to integrating support from other warfighting functions to win the battle, and coming home. We're asking folks to think through their processes so they can actually map those processes. AI starts with commanders and leaders who can think through their processes and recognize, hey, I could make this process much better.
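The comptroller example, matching one spreadsheet against another by eye, is exactly the kind of chore a few lines of code can automate. A toy sketch (the column names, accounts, and amounts are all invented for illustration):

```python
import csv
import io

# Two tiny "spreadsheets" as CSV text; in practice these would be files.
LEDGER_A = "account,amount\ntravel,1200\nsupplies,300\nfuel,450\n"
LEDGER_B = "account,amount\ntravel,1200\nsupplies,275\nfuel,450\n"

def load(text):
    """Parse CSV text into a mapping of account name -> amount."""
    return {row["account"]: float(row["amount"])
            for row in csv.DictReader(io.StringIO(text))}

def reconcile(a, b):
    """Return {account: (amount_a, amount_b)} wherever the two ledgers disagree."""
    mismatches = {}
    for account in a.keys() | b.keys():
        va, vb = a.get(account), b.get(account)
        if va != vb:
            mismatches[account] = (va, vb)
    return mismatches

print(reconcile(load(LEDGER_A), load(LEDGER_B)))  # {'supplies': (300.0, 275.0)}
```

Instead of a person scanning rows for dollar signs, the discrepancies surface themselves, and the human spends time on the one account that actually disagrees.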
I could make a much better decision if I knew X. If you can map your process to the degree that you can say, I would make a better decision if I knew X, holy cow, now all you have to do is go get X. And X is usually readily available, or you can make it readily available: through an app, or in a broadcast that goes into a command and control system. All of these technologies are there. I would just encourage everybody: when you go back to the fleet, look at all your problems, look at all the things that you do, and say, you know what, I wonder how Amazon would do that. I wonder how Uber would do that. Pick your AI company, pick your successful commercial environment: how would they do that? That is the key to opening the door to, oh, I see how we should do this process; we should do it like this. If you do that, you will be driving transformation in the department. The next question is AI education and training, and I'd like to emphasize the education aspect. Too much of our training is click-the-button-and-move-on. How do we get to a much deeper, richer AI education, and ethics education, in DOD, and how will it be implemented? Because I think it is absolutely critical that we move in that direction with full force and plenty of funding. Yes, I agree, and I would ask you to tell Congress the same thing. We've gotten a start here on the AI education challenge, and as we dipped our toes in the water some months ago, we found the same challenges across the digital workforce at large, because our network shops need advanced-technology grads too. So there are lots of people involved. We have a thing called the Defense Digital Service; I don't know how many of you may have heard of it. They're a kind of IT troubleshooter, if you will, though they're more than just IT.
They can come in, look at how you're addressing a technological problem, and give you some help: hey, you might want to leverage this technology instead of that one, for example. So there are lots of these little disparate elements across the department. We're trying to build an AI education pipeline specifically, and we have a strategy for that which we're starting to execute. But importantly, I would say this up front: we recognize that it's not just about the JAIC producing an AI training course, if you will. It has to be more than that, as your question implies. So we're working with partners across the department. OSD has an office whose job it is to run schoolhouses, education pipelines, and training. Every service has its own approach to technical education and technical training. What we want to do is work together. We have an executive steering group for AI in the department, and we use that ESG to figure out what best practice is and what makes sense for AI training and education. The solution set ranges from making some online courses available, to integrating AI into the service academies, to creating a separate academy for tech skills, or something like that. There's a wide range of options that get put on the table. We've approached this with our education pilot. We recognize that in the AI space there are a lot of different roles: some people build AIs, some people use AIs, others lead organizations that do either of those things. Some play a very technical role; some play a policy and strategy role. So we're building a customized pilot program for each of these archetypes, and we just did our first one a few months ago.
And what we did, because I like the idea in your question of, well, I clicked through a bunch of buttons and so now I'm trained: with our AI education pilot, we first linked together several courses from Coursera, Udemy, and a couple of other online training venues that we could make available to a cadre of students. Then we interspersed those courses with in-person training from expertise that we either have in the JAIC or could bring into the JAIC, to give a specific course on, say, how this type of algorithm works, or adversarial AI, or data standards, that sort of thing. And then at the end, you'll like this. The JAIC is an interesting organization because we've got a CTO who's a serial entrepreneur, and lots of people with entrepreneurial experience who have come to the JAIC because they want to serve, which is a story in its own right. We made the student teams going through the course develop an AI proposal and pitch it, as if they were looking for an investor, for a Series A investment, or whatever prize we could come up with. We made students work in teams, we made them put their money where their mouth is, and we assessed them and gave them feedback on their ideas. It was fun; it was a really good course. So I think we'll stick to something like that as we go forward. Right, I look forward to seeing more from that development. As a Marine leading a unique and flat organization such as the JAIC, what has surprised you? Yeah, that is a really great question. We could talk for an hour on this, because leadership of organizations in the Department of Defense is a fascinating study in and of itself. I've been fortunate to have a very varied experience, working with lots of different types of units.
All of us join the military and are trained and guided by a hierarchical model, a very industrial-age model built for mass formations and that sort of thing. So you judge yourself on the commands you get selected for, on how big your command is, on how many people are under your thumb, or whatever. I think after a while you come to a level of maturity where you're thinking, okay, I'm serving. So what is the nature of my service? How can I serve best? Can I serve best in an environment with large numbers? In an environment with lots of dollar signs, because I'm really good at accounting, or whatever your skill set is? Do I serve best in an engineering capacity? This is a really important thing for every military officer: to think through service. What does it mean to you? If you think about that, then, one, you'll be happier, because you'll understand that everybody on this call has committed their professional career, some of us for a really long time, to service: to the nation, to the Constitution. You can never let that stray too far from your heart, because that's what matters here. It's not the organization, it's not the numbers; it's you and your service. I think it's really important for every officer to do that introspection, and the reason I bring this up is that your time at the War College is a great time to do it. Sit outside the War College in the withering wind and think about your service, and it will make you stronger in a lot of different ways. So, leading tech organizations is different. And if the transformation of the department means we transform some of those industrial-age bureaucracies and organizational models, nobody should be threatened by that.
Leading technical organizations is different, but in a way it's not. The JAIC is an interesting place to work; I describe it as running a unicorn farm, and I say that in the most affectionate terms for any of my JAIC teammates who are listening. The JAIC is full of fantastic performers, brilliant people, who largely are doing this because they want to do this. They want to serve. Many of them could make a lot more money just going across the street to Amazon Web Services or somebody else. So they're doing it for all the right reasons, and they bring just enormous skill sets. For a plain old Marine like me, this becomes a challenge, or it could become a challenge. You have to lead with both humility and confidence. In a tech organization, you have to know what you don't know, accept that, and value the expertise of the people you have. But you have to lead with confidence too, because your role is to drive the organization to its objectives; that's why you are there. So you need both the humility and the confidence as you go through this. And I will tell you, unicorns, just like sailors, just like Marines, just like soldiers, want good leadership. They want to be led. As a leader, especially at the grade you're coming out of the War College at, your people, however many and whoever they are, will crave good leaders. So be one. Hold that purpose in your mind, especially while you're at the War College, before you get back to your next job. Think about it. Commit yourself. Be a good leader. That's why you're here. That validates your service, and it's what your sailors, your Marines, your soldiers, your airmen, your guardians will require of you.
And so, obviously, I'm fairly passionate about this. I think every military officer should be: this is the nature of the service. This is why we wear this uniform and go to places like the War College. So I'll leave it at that. I think it was Teddy Roosevelt who said the best prize life has to offer is the chance to work hard at work worth doing. If you're in a tech organization, you're going to work hard. If you're transforming the department, or transforming your service, it's work worth doing. So I challenge you all: do it, and do it with a smile on your face, because it's fun if you're doing it right. It's obviously a calling.