innovation avoid being online. Thank you for assuming this. As we all know, the United States has the finest, best-prepared military in the world. It ain't perfect, it can always be better, and hopefully we can add something to help those who devote their lives to keeping this country safe to do an even better job, and bring some experience from our different backgrounds. Innovation isn't only about new and futuristic technologies; it's often the culture that we really should worry about. When we had discussions like the one at dinner last night, it was really the culture, much more than one kind of plane versus another, or whether we should shoot here or shoot there. And I think successful organizations are really about people. Our responsibility as managers is to get them to work together, get them the resources they need to do their jobs, and discipline them if they don't, all that sort of stuff. And I've always thought that the difference between the military and the private sector isn't as great as we always complain. People still worry about their careers, they still worry about wanting to be respected, and all of those human resources and human relations things apply just as much in the military as in the private sector, and maybe even more, because in the private sector you can reward people a little more easily than you can in the military, where it's a very structured progression as you move up. You can fudge things in the private sector with titles and where the offices are and things like that, and incent or discipline people as you want. We've just got to make people more agile, and hopefully some of the things we're going to talk about today will do that. We have two studies that we're going to have to approve today. I've talked enough about bureaucracy here. Okay, why don't you go ahead. Thank you. Thank you, sir. Welcome, everyone. Thank you for joining us today for the Defense Innovation Board public meeting. My name is Dr. Marina Geroto. I'm the Executive Director and Designated Federal Officer for the Defense Innovation Board. Today's meeting is being live streamed and recorded to allow members of the public to attend the meeting virtually now and to view it later. Thank you to the Defense Media Agency for providing their expert support, and to the Defense Innovation Board team who worked to bring this meeting to you today. The board will now convene its public session. First, I would like to make some procedural remarks. The board is a discretionary, independent advisory board operating under the Federal Advisory Committee Act and the Government in the Sunshine Act. Today's meeting was announced in the Federal Register Notice posted on January 23rd. There have been no significant changes to the meeting's agenda as posted in the Federal Register Notice. The public was invited to submit written comments for the board members to consider. We received 77 public comments in advance of today's meeting, which we will review during the meeting. As a reminder, these are comments to the board, not a question and answer session. These comments have been posted on the Defense Innovation Board website, which is innovation.defense.gov. And now I'd like to turn it back over to our chair, Mr. Bloomberg, for his remarks. We're ready for two reports, so why don't we jump right in? Do you want to go first? I knew exactly what you were going to say. Leap into the fray, as you always do, I've found.
Thanks, Mike, and thanks for the opportunity to talk about the board's study on lowering barriers to innovation. First, I really do want to thank the board staff, and especially the experts and professionals inside and outside of the department, for the time, insight, and perspectives they shared with us as we tried to identify tangible steps that could be taken to lower these barriers to innovation. You know, creating advantage and mission outcomes in a changing world requires innovation in three areas. You have to have new technologies and capabilities. You have to consider different operational approaches. And you have to consider how you get things into use, the procedures and policies that make that work. This study focused on that last area, because for all the strength of our systems and approaches, it is our installed base that over time has become a challenge to achieving what we want. Now, what was most heartening about this is that there was no meaningful difference amongst all the experts we talked to about the outcome we wanted to achieve. All of them wanted procedures and processes that allow solutions to be put to use faster, easier, and in a way more mindful of the imperatives of all elements of our acquisition ecosystem, from the creators in the private sector to the operators in the field. So our goal was to identify actionable recommendations that can be immediately implemented and will have a real, tangible, recognizable impact on how quickly, how efficiently, and how user-friendly we can do these things. We looked at seven domains of innovation, from security to acquisition to information technology and human capital, and I will very briefly describe some of the recommendations. The first starts with leadership. Someone has to believe that it is their responsibility to actually make use happen at a relevant speed and in a way that allows all to participate. The second is security. We need to reform how we grant access to those who have already been cleared, and how we allow access to SCIFs as a DOD-wide asset, so that the work the department wants to do can engage all who have the ability to participate. We need to establish direct reciprocity for any software-as-a-service product that has an authority to operate across every cloud we have; we can't afford to spend a year getting something into use each time we think about using it somewhere new. We need contracts that are mission-capability-oriented and outcome-driven. We have to maximize competition through the use of IP and data sharing agreements, and you'll hear more about data in the next study. We need to adopt industry-aligned proposal processes; we can't keep forcing people with a round peg to bang against a square hole, we have to align those two things for speed. We need to purchase enterprise software through a single entity so we get both broader efficiency and effectiveness. And we need to eliminate the requirement for non-traditional vendors to prove commercial viability before they can bid. None of these require new authority. They just require action. But I will say that we were not cavalier in thinking that it is easy, nor are we dilettantes in thinking that we can give up the needs for security or repeatability or transparency or fairness, because that's what our system is based on.
But action in these areas, and innovation in the process, will, I think, help innovation. I'll come back with questions. We have two guest speakers: Dr. Mark Livingston, joining remotely via Zoom, or whatever it is, Microsoft Teams, which doesn't work very well. That's okay. The people at Zoom have been desperate to get their employees to come back to work. You can't make this up. Literally, they've sent out a number of messages internally. Anyway, he's the Assistant Director of Personnel Security at the Defense Counterintelligence and Security Agency. Thank you for joining us, Mark. And here in person, Frank Kelly, retired Brigadier General. Thank you for your service. And you've still got to tell me about that other airplane you flew; I'll talk to you later, Ron, about it. He works in close partnership with senior leaders in the Defense Department and has a lot of experience. So, Mark, you want to start by telling us what you're doing? Yes, sir. So I'm with DCSA. We perform four distinct national security missions. We work together to protect the nation's data, our critical technology, and our supply chains from our adversaries. We do that with four elements: personnel security, which is clearances; industrial security, which is facility clearances; counterintelligence and insider threat, looking at adversary actions against our infrastructure; and lastly, security training. What I wanted to talk about today is three quick things. One is the reciprocity issue, which has been around since President Reagan, and we're still talking about it today. If we want to use innovation to lower barriers, one of the things we're thinking about is, why can't your clearance be on your CAC card? So when you go to a new unit or a new IC element, you simply check in and it's there. Right now, we're spending one to three days best case, seven to ten days worst case, just to get people ready to go to work. And it's the same government, the same intel community, so we're trying to expedite that. The second point I wanted to make is our investigative model. It was born in 1947. It's not broke, but I think it sure needs to be fixed. And what I mean by that is we're using what worked in the past, and I think we're stuck there. At one point, three or four years ago, we had a backlog of almost 700,000 cases. Today we've got it to a manageable level, but we stopped innovating after we got to that 200,000. I think we need to be more aggressive, and that leads to my third point, AI. If we're going to be successful in the future, I think we're going to have to embrace AI and ML. A smart friend of mine recently told me that the internet was a game changer that spun off Google, Facebook, Twitter, X, and all the other things that came with it. AI is that third rail, and we need to embrace it. I think there's an opportunity here, where we have background investigations across the entire federal government. We do about 95% of those. A lot of that could be automated. You're still going to have to have the human element there, but the bottom line is we need to be leaning forward and take some risks. The problem is, in the government, once we've had a success, we tend to stop. It's kind of like putting brakes on a car. Some people think they're there so you can stop the car; some people think they're there so you can go faster. When we talk about AI, we need to go faster and we need to be more proficient. We need to be aggressive. We can't wait for a crisis to drive us to act on AI, and that's my pitch this morning.
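To make the automation point concrete, here is a minimal sketch, with entirely hypothetical field names and thresholds, of how a vetting pipeline might route low-risk background investigation records to automated adjudication while keeping the human element for everything else. It illustrates the idea only, not DCSA's actual process.

from dataclasses import dataclass

@dataclass
class CaseRecord:
    # Hypothetical fields; a real investigation record has far more.
    case_id: str
    prior_clearance: bool
    derogatory_flags: int
    years_since_last_investigation: float

def triage(case: CaseRecord) -> str:
    # Route clearly low-risk cases to automation, everything else to a human (illustrative thresholds).
    low_risk = (case.prior_clearance
                and case.derogatory_flags == 0
                and case.years_since_last_investigation < 5)
    return "auto" if low_risk else "human"

if __name__ == "__main__":
    sample = CaseRecord("C-001", prior_clearance=True, derogatory_flags=0,
                        years_since_last_investigation=2.5)
    print(sample.case_id, triage(sample))  # prints: C-001 auto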
I've got lots more I'd like to talk about, but I wanted to keep it under five. This is an exciting time to be in security. We've done more in five years than we've done in the last 70, and I think the next five are going to be even better if we are aggressive with AI. I'll stop there. I know I got going there, but this is good stuff. I think we're going to make big changes, so I'm ready for it. Hopefully. Okay, Frank, you want to add some? So I'm here because Marina told me to be here, and I'm not sure if I'm here as the Vice President of DAU, or for having covered the unmanned systems desk within the Navy, or as a retired Marine. In my panic in terms of preparing for today, I came up with a few big things: change, transformation, innovation, and readiness, and, something that I'm not quite equipped to do, I tried to come up with some unifying theory for all of those things. I'm reminded of Drucker, Peter Drucker, who I never met in my life but whose name I'll drop anyway, and I kind of put together two things. Drucker told us it's all about culture, and we don't teach people how to change culture in any training I've ever had. And Joe Dunford said it, I should say General Dunford: it's all about engaged leadership. I think if you have those two ingredients, you have everything you need. By the way, I remember years ago, in a job I had down in Orlando, Florida, I had to meet the governor of Florida with a bunch of other three-stars, including one Admiral Mullen might remember, Admiral Vitali, who was an absolutely fantastic person for me to work for down there. Meeting Governor Crist was a which-of-these-is-not-like-the-others moment, and when I come to something like this, that would be me. I am not like all the people who are super smart at this table. One of the things we've done at DAU when we addressed change is we adopted and went to school on the Kotter Leading Change model, and that has worked very well for us. It emphasizes the fact that you start with urgency and you lead into action, which is exactly your point, that you've got to implement this stuff. There are six other steps in between that start with urgency and finish with action. On the transformation piece that we're doing at DAU, again, we still use that Kotter model, but John Kotter, and I would recommend somebody here picking up the phone and talking to John Kotter at Harvard, said, hey, I got a lot right in the mid-90s with my Leading Change book, but there are some things that need to be adjusted in the 21st century, and he wrote another book called Accelerate. What he endorses there is a dual system: retain the hierarchy, the stuff that keeps the lights on and keeps things like the bureaucracy functioning, that's the hierarchical side. The other side is the network model, and this is where you can flush out innovation at the grassroots level and give people the freedom to try and experiment, and the big job for leadership there is to let it happen. That led us to think about innovation. Now, the DepSecDef had an innovation ecosystem model. Marina was still working at DAU at that time, and DAU embarked on an effort to explore how we promote innovation. She formulated an innovation competency model. I also like to think about that as a readiness model that leaders can employ to make sure that their organizations are ready for innovation.
But it's these two sides, the hierarchy side and the network side, that help foster and promote innovation. The last thing that I want to say is readiness, and General Neller, when he was the Commandant, used to talk about reps and sets. You need to give people the opportunity to practice. The worst time to expect to get innovation out of people is when you need it, when the panic button is pressed, and you don't have that culture, and they've never experienced raising their hand and suggesting what could be next. Lastly, we need to do more work, I believe, with industry, and I don't mean that to sound like it's an easy thing. A great place to start, I think, are places like NDIA, NTSA, the National Training and Simulation Association, and we just recently did an effort with AUVSI, the Association for Unmanned Vehicle Systems International. To wit, one thing that I know is working: we did an effort out in San Diego in response to the Replicator effort, where we had industry, government, some academia, but we had the fleet there, and that was the most important thing. It was Third Fleet out in San Diego, and at one point I couldn't really figure out who the industry folks were and who the government folks were; unless they were in uniform or had a good haircut, I'm not so sure I could have figured out who the government folks were. They were all working together to think about ways to actually get things like Replicator into action. I've seen some great evidence on the Marine Corps side of the house where great leadership really, really matters. Brigadier General Walsh, he's the commander of Marine Corps Systems Command right now, and for him, innovation, getting out of the way of your smart people, is an expectation. I had a chance to visit one of his commands out at Camp Pendleton, MCTSSA, under the leadership of Craig Clarkson. The guy is running a brilliant show out there, and all he did was escort me around to talk to his captains, lieutenants, staff sergeants, and sergeants about the great work they were doing in the technical field. Here's my acid test. I spent 32 years in the Marine Corps. I got a brief that was highly technical; I could barely hang on, and I thought I was a pretty smart guy. He gave me the brief, brilliant, and I asked him, what's your MOS, expecting him to be a commo or an intel officer. He was a grunt who knew how to write code, and I said, wow, I'm not so sure I ever would have expected to see that in the Marine Corps. There's a software factory that the Marine Corps is tapping into today. We've got grunts out in the field writing code on the fly. It's a great new world, so from my perspective, just like Mr. Livingston out there, I think it's a very exciting time to be part of the acquisition environment and this marriage of strong leadership with culture change to help bring about these changes. I'm starting to read a book. I'm only up to page 200 out of 700. It's called The Rules of the Game, and it is about Jutland and the battlecruisers and battleships, and how the British Admiralty went from sail to coal, and the signaling from flags to eventually Morse code and radio, and how they went through all of this innovation stuff in real time. There's not much difference from what we're doing now; it's just what the issues are. But innovation's tough, and there are lots of excuses why not. Thank you for your service and everything you've done. We have to vote on your study.
I'm going to take a vote, and I will read the names of each person, and they can vote. I will vote with you, incidentally, just to make you comfortable. Mac. Jill. Gilda. Will. Ryan. Charles is not here today. Mary. Mike Mullen. Reid Hoffman. And I will vote aye as well. That's a wrap on the first study, and now, Ryan, we're turning to your recommendations. So thank you, Mike. We set out with Building the DOD Data Economy really to understand data centricity across the department. We set out with a purpose of figuring out how we accelerate the use of data and AI in our decision engines across the department. We had over 50 engagements across the department, industry, and academia, so we definitely want to thank all of the experts for all of the time and all of the insights. It was really refreshing to see that there is great work happening, but we have more to do. Some of the key blockers that we identified in doing this study really came down to a few key things. One, data, and I'm going to use that word loosely here to mean data and AI. Data innovation is happening in the department, but it's happening in silos. It's not broad-based. There's great work happening, but we have to break down those silos. The second is that industry, the DOD industrial base's tech, has accelerated and has really outpaced the department on a number of levels, right? And so that shows us that there's opportunity there. There's opportunity within our base, but we really have more to do inside our four walls. And the third is that the CDAO is an essential organization. It is critical that the CDAO is empowered to really become the DOD data economy leader, and that the service-level CDOs and the combatant command CDAOs are positioned in the right way. Today there are challenges and inconsistencies in how they're utilized. But when we look at all of this, one of the things that comes to light is that there's a huge opportunity, a huge opportunity for us to unlock the value of data and AI at a scale that has never been seen before. And that means, in order for us to do that, it has to be both important and urgent, right? Some of us see it that way, and some of us don't. As I go through some of the recommendations, I encourage you to read the report; I'm not going to go through all of it here. But what I will say is that the recommendations can be categorized in three categories: people, process, and technology. On the process side, one of our recommendations is to include FY25 NDAA recommendations that create a requirement for all DOD contracts to actually prioritize data rights and data interoperability, so that we can get data out of our platforms and systems and share it securely across the enterprise where we find value or where we are able to leverage AI to create value. The second is really about changing the value opportunities of data in our contracts. We see this in industry already, and we want to be able to include data incentives in those contracts so that the industrial base can really help us unlock the value of data. We can align those incentives, but internally we also have to incentivize data sharing and access within the department.
And this really connects to Sue's comments on how we classify data, how we share data across echelons, and how we actually bring that to life. On the people side, this is where the report lays out some recommendations, both for the CDAO at the DOD level, making sure they're empowered and properly resourced, that type of thing, but also to make sure that the CDAOs at the service level and the combatant commands don't have additional jobs, that this is important enough that the CDAO role is the job. Why? Because we will be able to unlock value and create an advantage that we're just on the cusp of. And then third, from a people perspective, this is not just about the CDAO. It's about the broader workforce, the broader force. We have to empower the digital natives, the data experts within the department, the innovators, as General Kelly just said, the grunt that knows how to code. We have to build that data literacy, that AI acumen, and empower them to be innovative. Give them that safe space so that they can innovate and take the great ideas and scale them. And then from a technology perspective, we have to enable things like API-first technology in our technical requirements, so that we are able to get access to the data in our systems and platforms in a machine-readable way and don't have to reverse engineer the logic, if you will, from platforms in order to get our data out. And finally, we have to really embrace a data-as-a-product strategy and focus across the department. When you take all of these recommendations together, this can really create meaningful change in the department. We see the potential of creating that change and driving that change, and we have clear evidence of where it's working well in the department today. So I encourage you to read the report and really think deeply about your role as a leader across the department, wherever you may be, and how data and AI can help you do your job better, faster, and more efficiently. Before we go to our guest speaker, I would like to open it up to my colleagues for any comments on the report. If you read the newspapers, all of the issues really center around where you get the data. The business of collecting the data and distributing it are being separated, and it's not clear what business model would still let you make the money to collect it if you're not controlling the distribution. So you're going to see a lot of battles over that. And those companies that have data that is not available to anybody else will have a real advantage over those whose only data is the same thing everybody else has. I will say you did a great job, I think, in working to ensure that the most relevant of the private sector practices are included in this report. There are a lot of different private sector companies working on this, so there's a lot to be learned no matter how smart the Pentagon is; nobody can corner all of the good ideas. I thought it would be useful at this time to hear from the Department's Chief Digital and Artificial Intelligence Officer, Craig Martell. Craig, you want to fill us in? Thank you, Mayor Bloomberg. And Reid, I hadn't seen you in a while; I just wanted to say hi. So I want to say that I'm a strong believer in this report. In fact, most of what's in this report is essentially the marching orders that we've been following internally.
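To give the API-first recommendation a concrete shape: a minimal sketch, assuming a hypothetical readiness dataset and using FastAPI purely as an example stack, of publishing data behind a typed, machine-readable endpoint whose schema travels with it, so a consumer never has to reverse engineer a platform's internal logic to get the data out.

# Illustrative only: a hypothetical "readiness" data product served API-first.
from fastapi import FastAPI
from pydantic import BaseModel

class ReadinessRecord(BaseModel):
    unit_id: str                 # hypothetical identifier
    as_of_date: str              # ISO-8601 date string
    mission_capable_rate: float  # 0.0 to 1.0

app = FastAPI(title="Readiness data product (illustrative)")

# In a real system this would query the authoritative store; here it is a stub.
SAMPLE = [ReadinessRecord(unit_id="UNIT-A", as_of_date="2024-01-31", mission_capable_rate=0.87)]

@app.get("/readiness", response_model=list[ReadinessRecord])
def get_readiness() -> list[ReadinessRecord]:
    # The response schema is published automatically at /openapi.json,
    # so consumers get machine-readable data plus its description.
    return SAMPLE

The point is the pattern, not the framework: the data and its schema are exposed together, so sharing across the enterprise does not depend on the platform owner's internal formats.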
When I first got to the department a year and a half ago, I asked, what's the way to build while allowing for short-term wins? It's extremely important in our organization that we allow for the short-term wins while building sustainable change and sustainable value, and striking that balance is really what CDAO has been shooting for. On the sustainable part, we looked hard at what was broken. We have a hierarchy of needs, at the top of which is AI, and what we saw was a bunch of things being delivered at the top that brought AI value, but the lower parts of the hierarchy, which I'll mention in a second, were stovepiped, owned by a particular vendor. So if you wanted to move that value somewhere else in the department, you had to rebuild the same stovepipe, and that sort of stovepiping has gotten us to a lot of the pathological things we have today. So we worked really hard to think about what would be the right thing to do, and the right thing to do is to allow those individual wins to continue to be built, but to recognize that they're tech debt. So if I build a win right now, and it's a particular win for INDOPACOM, that's great. We're going to build that. We're going to allow that to go forward. We're going to encourage that. We're going to recognize the tech debt that's there. And we're going to recognize the ways we need to tweak contracts going forward, the ways we need to make contracts even more robustly different than they are now, so that we can get to the state we want to get to down the road. And what is that state? We think of that state as a hierarchy of needs. I think I'm just going to be echoing most of the report, and I'm glad to do that because I was really impressed with what your team did, Mayor Bloomberg. That hierarchy of needs has at the bottom quality data. And what do I mean by quality data? Well, first of all, it has to be discoverable. Think about the data in the Department of Defense. It's distributed everywhere, and not only is it distributed everywhere, it's not distributed in any accessible way. It might literally be in a machine under someone's desk. So the ability to discover that it's there is extremely difficult. Then there's the ability to know who owns that data, so you can actually go talk to the person to figure out how you can get that data and what value that data can bring to you. And finally, how can you access it? So accessibility is extremely important. So we drove forward this notion of a data mesh, and I want to be clear about what we mean by a data mesh, because there are too many terms right now: data fabric, data mesh, data lake, data warehouse. I want to differentiate data lakes and data warehouses on one side from data fabrics and data meshes on the other, because they are two very different ways to think about it. A data warehouse, as the name says, sucks it all into one location, and it sits there, warehoused. Data should not be warehoused. Data should be used. Data has value when it's a product. Data has value when it has customers that are delivering value to their customers. So there's the idea of treating it as a warehouseable thing, or, as we had said in the department, that data is an asset. It is an asset. But if you think about an asset incorrectly, an asset becomes a fungible thing. Data is not fungible. Different data has different value.
And data isn't something you lock away. An asset is something you want to lock away, such that the more you have of it, the better off you are. Well, how much data you have is not the metric that decides the value of that data; it's how many people use your data. So, as the report pointed out, we've been driving really hard on this: we see data as a product. In a data mesh mentality, the data is distributed. The data is closest to its owners, because the owners are the ones that know the most about that data. The data has centralized discoverability. So we're taking on, as CDAO's role, building that centralized discoverability tool. There's some data in the Air Force, there's some data in the Navy, there's some data in the Army, and together we can build the right dashboard that talks about readiness, for example. Knowing where that data is and how to find it, that's going to be in the centralized tool that CDAO is building. And the ability for the owners of that data to put an API over it, to make it accessible, those are also tools that we're bringing to the table. So we work with the CDOs at each of the components in the department to learn how to take their data, make it accessible, publish it, and put a schema around it so you know what's in that data. And then, very strongly, I agree with this idea of data as a product. So we're also teaching the department how to treat data as a product, how to treat data as having customers, how to put on a customer point of view, so that the more your customers use it, the more the data has value. And along that line, we actually want to push really hard to promote people and praise people and reward people the more their data is used by customers, not the more data they gather. I think that's one of the pathologies I saw early on: people treated the data as an asset, and they treated gathering it as the value, and that's actually not the value. The next layer on top of our hierarchy of needs, we're calling analytics and metrics. You know, this hierarchy of needs is really a polemical device. It's not a device where we fix the bottom before we get to the top. It's really a device that says, here's the way we should think, for any product that we build, about the layers you need. So first you need quality data, then you need the ability to analyze the data, to look at that data, to see what that data tells you. So we have a strong infrastructure that we're building out for analytics dashboards, and we've pushed really hard at metrics. This is a really fascinating one. I think the last time I talked to the board, we talked about this briefly. In the last year and a half, we moved away from roughly 70% effort-based metrics, and anybody who's been in the department knows many people count their value by the number of meetings they go to, not the value of those meetings. So we've made a real push from effort-based metrics to outcome-based metrics, and I'll tell you in a moment how we're trying really hard to drive that. And finally, at the top of that, we have AI. Now, the JAIC was one of the constituents of the CDAO, and when the JAIC started in 2018, they were really thinking about what point solutions they could build to bring immediate value. I immediately thought that was a slight mistake. And the reason I think it's a mistake is that a centralized organization saying, we're OSD, we're here to help, no one likes to hear that.
That's like saying, we're the government, we're here to help. So a centralized organization producing point solutions for particular AI problems is not scalable. So what we've switched to is what we're calling AI scaffolding. I 100% agree with you, Mayor Bloomberg: the real value in building AI is in the private sector. The private sector knows how to do this. There's no way that we as a department are going to have that value. For the years I worked for Reid at LinkedIn, they were so far ahead of other teams, and now Microsoft has continued that and Google has continued that. All of these remarkable companies are so far ahead; there's no way we should even try to play the same ballgame. What we should do, though, is show up with a clear definition of our problem, with analytics and metrics. We should show up owning the data that we labeled, because that is our IP. We shouldn't leave the labeling of the data, the description of the data, the value of the data and what it means to the warfighter to the private sector, because we're going to understand what the warfighter values significantly better than a vendor will. So we see our role as producing tools both on the left and the right of the model. The model building itself is going to be an industrial task, and it's almost a commodity at this point. Model building is almost a commodity, modulo the current battles with generative AI, but that's going to get commoditized as well. So on the left of the model, we've worked really hard at data labeling as a service. Taking a video stream, for example, and saying, this is a truck, this is a school bus, this is a truck, this is a school bus. Now, in our use cases, which I can't go into over this call, we get very fine-grained about what each object is in a video stream, and how fine-grained we get takes an expertise that only a soldier or a warfighter is going to have. So we're building out tools that make it as easy as possible for us to own the labeling of the data, the telling the system what matters in that data. And then the building of the model, 100%, that belongs to industry. We're also building tools, we have something called SunNet, which allows multiple vendors to come to the same data and compete their models, so that we can pick the best model for a particular problem. So that's part of our AI scaffolding on the left of the model. On the right of the model, we're thinking very hard, and it's really difficult in government, it's much easier in industry, about what model monitoring looks like. So let's just level set. For me, when I say AI, I basically mean statistics at scale, and what that means is we count the past in order to predict the future. If you look at almost any AI tool that you use, it ends up not working very well if the deployed situation isn't sufficiently similar to the data gathering situation. You gather data in one scenario, and then you deploy it in one slightly different. Well, that's what fine-tuning is for, right? Because deploying it in that different scenario didn't quite get you the value you thought you'd get, because the deployed scenario is different from where you gathered the data. Well, sometimes in war, or even in business cases, it works really well when you start off. But the world changes, and the model over time has less and less value to you.
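One hedged way to picture the model monitoring being described here, assuming the system logs its output scores: compare the distribution of recent inferences against the distribution seen at deployment and flag a review when they diverge. The population stability index below is just one common drift measure, used as an illustration rather than a statement of how CDAO actually does it.

import numpy as np

def population_stability_index(baseline: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    # PSI between two score distributions; larger values mean more drift (illustrative).
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline) + 1e-6
    new_pct = np.histogram(recent, bins=edges)[0] / len(recent) + 1e-6
    return float(np.sum((new_pct - base_pct) * np.log(new_pct / base_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    deployed = rng.normal(0.60, 0.10, 5_000)  # model scores at deployment time
    today = rng.normal(0.45, 0.15, 5_000)     # model scores now; the world has changed
    psi = population_stability_index(deployed, today)
    # A common rule of thumb: PSI above roughly 0.25 suggests a retrain or a closer look.
    print(f"PSI = {psi:.3f}", "-> review/retrain" if psi > 0.25 else "-> stable")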
Right now, we spend hundreds of millions of dollars on models, we ship them, and we no longer evaluate the value of that model going forward. So we're thinking really hard about what it means to do model monitoring on the right-hand side, so that we can say, oh, it's time to retrain your model, or it's time to look at your model, because I see that the distribution of inferences that you are making has now changed pretty significantly. So, quality data: I almost completely echoed the report there. We 100% agree that that's the right way to drive forward. On top of that, analytics and metrics, and on top of that, AI in the form of AI scaffolding. Just one more quick thing; I don't want to take up too much of your time. We've chosen two marquee use cases to drive this forward, because we're not going to be able to just say, magically, please give us quality data. That's not going to work. That wouldn't work in an organization the size of Google; it's definitely not going to work in an organization the size of the Department of Defense. And by the way, we're not going to try to get all of the data. That's boiling the ocean. That won't work. We're looking, metaphorically, for the 10% of the data that's going to bring 60% of the value, and I say metaphorically because we have no idea what the denominator is, so I don't know if it's actually 10%. But we're looking for the biggest low-hanging-fruit chunk of data that's going to bring the most value and drive a virtuous cycle. So the first use case is a dashboard on the deputy secretary's desk and a dashboard on the secretary's desk. These dashboards now present a view of the department for the key metrics that the secretary and the deputy secretary care about, and each of those is data-driven. So we've worked with the reports to the deputy secretary and we've asked, how do you measure the quality of your work? How do you know when you're successful? That's how we drove that flip from effort-based to outcome-based metrics. And then my team just dove in and was embedded with all of the PSAs, and we said, okay, where does that data come from? Then we went to the services and found that data, and then we went into the components within the services and found that data. And for the key metrics that the deputy wants to see and the key metrics that the secretary wants to see, we have found the right data, we have built the data flows, we have assigned data ownership, and every meeting that the deputy secretary now has with the PSAs starts with a review of the metrics and whether they're red, yellow, or green. And this is a massive change in thinking from when I got here a year and a half ago. So that was our first use case, and we think that's the right marquee use case, because it really drives the need to get to the data all the way down to the bottom in order to get to executive decision making. And when we push down to the bottom to get to that data, that's when we institute these data mesh principles. That's when we teach the folks, that's when we're teaching them to fish, that's when we're teaching them how to be a data product manager, for example, and how to use an API to make the data accessible and how to get it in the catalog. All right, so that's use case number one. I only have 10 minutes; I'm rushing it all in there.
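A minimal sketch of the two ideas that close out this use case, with made-up names and thresholds throughout: a catalog entry that makes a data product discoverable (owner, endpoint, schema) and a rollup that turns an outcome metric into the red, yellow, or green status a dashboard like the deputy secretary's would show. It is an illustration of the pattern, not the department's actual tooling.

from dataclasses import dataclass, field

@dataclass
class DataProduct:
    # Discoverability metadata for a catalog entry; all values here are illustrative.
    name: str
    owner: str          # who to talk to about the data
    endpoint: str       # where an API call can reach it
    schema: dict = field(default_factory=dict)

def status(value: float, green_at: float, yellow_at: float) -> str:
    # Map an outcome metric to a dashboard color; thresholds are hypothetical.
    if value >= green_at:
        return "green"
    if value >= yellow_at:
        return "yellow"
    return "red"

if __name__ == "__main__":
    readiness = DataProduct(
        name="unit_readiness",
        owner="Service CDO office (example)",
        endpoint="https://example.mil/api/readiness",
        schema={"unit_id": "str", "mission_capable_rate": "float"},
    )
    print(readiness.name, "->", status(0.87, green_at=0.90, yellow_at=0.75))  # prints: unit_readiness -> yellow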
I'll pause in a second for questions, sorry. The second use case, which I think is the right warfighter use case, is CJADC2. If you think about what CJADC2 is, it's combined joint all-domain command and control. The key there is command and control. If you look at the press from a year or two ago, JADC2 was a product; JADC2 was a thing that we were going to buy. That's not true. Command and control, Admiral Mullen, correct me here, because I am not a warfighter, but command and control has happened from the beginning of human history, right? Command and control is just a thing that warfighting leaders must do. It just happens that in the modern world, we can do it significantly differently and significantly more effectively. So we see CJADC2 as a data flow issue: getting the right data from the sensors to combatant commanders, and then the right decisions back down to where they need to go. And so we're building a data integration layer for the combatant commanders. Now, I'm going to be really honest with you. The phrase data integration layer has worked really well in DMAGs, and it has worked really well when we go and talk to the combatant commanders, because it sounds like something they can understand. They're going to plug their devices into this layer and the data is going to flow. But all it really is is an extension of these data mesh principles: the right data a combatant commander needs is going to be available, we can build apps on top of that data, and those apps will allow for command and control. And once a command and control decision is made, it can be pushed down to the right levels. Now, for this use case, we're focusing first on INDOPACOM, but we have just had massive successes. I can only say it this way here, but I'm happy to brief anybody about the specifics; we're actually having a DV day next week. But it's not just INDOPACOM. One of the largest frustrations is the inability to do cross-combatant-command coordination. AOR boundaries, area of responsibility boundaries, were very hard boundaries with respect to the way the data flowed, and things move around in the world and cross those boundaries. We're now at the point where we have leave-behind capability for combatant commanders, so that CENTCOM can know what's happening in INDOPACOM and can know what's happening in AFRICOM as things flow between those areas of responsibility. So these are our two marquee use cases. They have immediate value for the department, and we have leave-behind capabilities. And we are incurring tech debt, in the sense that Palantir is a big helper here, Anduril is a big helper here; all the major players are helping us deliver this value, but we're just having them do it in a non-stovepiped way so that we can build out this underlying data layer. Now, building out the underlying data layer so it's as strong as it was presented in the report, that's going to take years, and that's going to take years of leadership that drives us in that direction. The trick is to make sure that every time we deliver immediate value, because we always have to deliver immediate warfighter value, we're doing it in a way that contributes to the long-term stability and the long-term sustainability. I'll pause there. I hope I didn't run over. Craig, could you repeat the question from Mike Mullen? What was it? Oh, I was just going to say, C2. It was C2, Mike. Yeah, we've had command and control from the start of time.
That was really the issue, and that's not insignificant in terms of what you're talking about, Craig: my ability to command and control in a way where everybody understands clearly what's going on across lots of domains, lots of services, lots of warfighting requirements. It occurs to me that the security people are going to have a nightmare of a time looking at the AI software and seeing whether it is secure in itself. As you pointed out, it's most likely to be developed outside, in the private sector. There's no way the military could possibly keep up with the number of different organizations and people out there creating. And yet, if you look at the AI software, it's not clear that anybody could look at the code and decide what's really happening in there. One of the things about AI, we always talk about how nobody knows how it does it. So what kind of security issues are we going to face, and how do we deal with that? Yeah, that is just an absolutely great question. I think we just need to shift the way we think slightly. So I'll give you an example. When I'm driving my car, I choose to use adaptive cruise. Why do I choose to use adaptive cruise? Because I've practiced with it, in military terms, I've trained with it, over and over and over again. I have a justified confidence in how it behaves. Even though I can't look inside what's going on with my adaptive cruise system, because it's making millions of decisions per second across too many sensors, and there would be no way to interpret it if I could, I do understand the statistical envelope around it, the behavioral envelope in a statistical way. It's very likely to do X in this context. It's very likely to do Y in this context. And just like any other technology, the military is excellent at training, training, training, and developing a justified confidence in the system. So I think the right way to think about it going forward is not something analogous to a person in the loop, and it's not something analogous to explainability. If you look at our responsible AI principles, we don't have explainability as one of them. We have traceability, which is: do we understand why this input, statistically speaking, gave us this output? So you have to think about it like a complex system, where you're evaluating its behavioral patterns, not necessarily what's going on inside. And I don't think that's very different from any other technology. Let's pick a really old technology. Periodically, guns misfire. Periodically, guns explode in a soldier's face. Periodically, missile guidance or torpedo guidance, and that feedback loop has been around for a long time, goes awry. So what we're really saying is, do we have a justified confidence in the system, in the way the system behaves, because of the training, such that a commander feels confident in making a decision to use that tool? And just to close the loop there: the car I drive is a Subaru, and there are other parts of it I don't trust. I don't trust the lane keep assist, because it bounces back and forth between the lanes, so I do not have justified confidence in that. I always turn it off, and I would not take the responsibility of using it. But I do take the responsibility of using the adaptive cruise, because it is my fault if I hit another car.
It's this sort of mentality that we have to switch to. And I would even argue that for most complex software systems, even if we can look at the code, when you run it en masse, we don't really know; it's very hard for us to trace exactly what happened, and we really look at the behavioral pattern of the device itself. So I'm confident that's the right way to move forward, and we have a T&E team that's building this out and working with the DOD T&E community on exactly this. People that work in security have to be 100% secure. Their jobs are on the line. We expect that when they say it's secure, they mean 100% secure. And yet AI software hallucinates unpredictably, and you're never going to get that. So how do you square the Senate and Congress people who will be screaming bloody murder: what do you mean you let that cat out of the bag? Your job is to keep it in there under all circumstances. And we'll have Ryan explain how this is going to happen afterwards with your department here. But what do you do there? Yeah, so let's talk about generative AI, particularly hallucination. The last time we met, I had a very strong opinion about the concerns we should have about hallucination, and the way I see it is this. My PhD is in natural language processing. I am unbelievably excited about the state of the science right now with respect to AI. This is the best it's ever been in my career. I'm super excited. But the state of the science is here, and the state of the promises, or the hype, is here, and there's a gap in between. And we take it as CDAO's responsibility to characterize that gap. What do I mean by characterizing the gap? Well, we're having a symposium in February. You're all invited. We're bringing all of industry together. The whole community is coming together to actually ask the question: how do we decide when a generative AI model is sufficient for a particular use case? So we really believe this is use-case driven. For example, think about the autonomous vehicle maturity model. There's level one, level two, level three, level four, level five. What was the value of that maturity model? Well, it allowed industry and consumers to talk the same language. It allowed industry to make offerings that they knew another company wasn't going to out-hype them on, because there were objective criteria that said level three, level four, level five. And it allowed us to do things like catch and ask questions of Cruise. Was Cruise really level five? Well, you know, they had some folks interacting, so it's not level five, but is it level four? It allowed us to have a rational conversation about the technology being offered and about the demand signal. So what we've done is first gather over 200 use cases across the department. We're categorizing those now, and we're going to present them in February. That will be the demand signal for the department; we'll have levels of use cases. And then we're working with industry, and we've had over 33 submissions, from Google, from Microsoft, Microsoft's actually was the most robust, from all of the major companies, arguing this is what a maturity model should look like. And we want to be able to map those together, so that we could say, you know, Admiral Mullen, you might ask for something; last time we talked, there were some demands you had from when you were serving. Well, great. I would then be able to say, sir, that is a level four demand, but the technology is only at level two.
So you can't have that, but here are some level two things; would these be effective for you? If we can build out that maturity model, we can start to rationalize that gap. And then we can say to Congress, well, Congress, the industry's at level three, and here's some really strong demand for level four and level five. Let's fund that. Let's see if we can get the research to move the technology up. But we're not there yet. So, as the gatekeeper here, I'm not going to allow the department to buy purported level four solutions if there aren't level four solutions available. So we're really excited about this symposium, exactly for this reason: to characterize this gap. And I just want to echo something I said: industry has jumped in with both feet on this. We talked with everyone, and when I say everyone, I mean the deputy and I talked with Satya, the secretary and I talked with Sundar; we talked with everyone. I had a conflict with Amazon, but I don't anymore, so we're about to go out to Amazon. And everyone says, we would love a characterization like this, because it essentially allows for a more rational marketplace. So the response we've gotten, and the energy that industry has put into helping us build this maturity model, is very exciting. I think you've got a lot of momentum, right, following this, trying to build the internet for the military from before there was even a cloud in this department. This is one area where leadership after leadership has kept momentum and kept putting the next step in. I think you've done a lot to take the focus off of the AI widgets at the end and really build the required infrastructure and interoperability. My recommendation to you is don't lose sight of the economy in the data economy, and that while you're also thinking about maturity models, there needs to be a value model that gets distilled into contractual language that should be standard. So when the industry that owns the platform sees others doing things with the data, they need to benefit from it. If they're not benefiting financially and that language isn't in the contract, there are just too many ways a platform owner can make it difficult to get the 10% of the data you want. So my recommendation is don't lose sight of the economy, and equally, be aggressive in rewarding industry when they give you the access to the data that you want, so that monetizing data from their platform becomes the equivalent of the app store for them. Maybe it's a different set of companies making money directly on the contract, but they get a percentage, so it replaces the cash flows they would normally get from repairing old systems. Now they can repair less of the old system and benefit from predictable cash flow from their data monetized by others. I know we've discussed that, but I think it's so, so important that the money is aligned with the intentions. Well, number one, thank you for the energy you've already given me, just diving in and helping me, like, by default. Hey, Craig, just quickly, and I know I'll probably get cut off here, but the building's pretty famous for ignoring COCOMs, and actually for ignoring CDAOs or whatever other specialty we've had historically. It seems to me, just listening to you, that in a way you may have overcome that.
And I'm curious as to how you see that, because typically the services will not pay a lot of attention to COCOMs, and where they can resist what's going on on the third deck, they'll do that as well. So each service has got a CDAO as well. How is that all coming together? And then how do you connect that to the COCOMs, where in our hearts we really want to support them, but from the building that's just really difficult in this area? So yeah, I'll take your question first and then I'll go back to the other question. Kathleen Hicks is the start of this, right? Her leadership in preparing the ground before my coming was remarkable. I actually believed that my biggest job when I arrived would be to generate demand signal. That was not true at all. Whatever happened prior to me, the demand signal to be data driven was unbelievably strong. It was just a question of how. So my job became, let me show you how to do it, which is a significantly easier job than actually generating the demand signal. So there was already really strong demand signal, and I just believe that every component realizes the weakness they're at by not understanding their own data, on top of Dr. Hicks just driving our being data driven and the value of our being data driven. And actually, I think it goes back much further. When was the start of Advana? I forget which deputy secretary; I apologize for that. But Advana started, I believe, when you were in the building, sir. So the drive toward having our data so that we can do the right thing, so we can measure where we are, that began way before I arrived. Now, I read your concern as: how long will this runway last for me? When will that goodwill peter out? That's my real concern. And there, I believe it's really about effective governance. So we have a CDAO council. Every CDO across the department comes to that council, where we surface the problems they're facing. And sure, we push down policy. But if my sole job is to write memos, I will be ignored. So what we really do is try to help people solve problems that help them win their fights right away. We absolutely have to build the right long-term solution, but we're only going to be successful with the right long-term solution by helping with immediate problems now. That's my hope. What you just said keeps me up at night, and what keeps me up at night is that I have momentum now. I'm delivering value. Our pacing adversary is giving us the kind of pressure that makes everybody focus on INDOPACOM, so that really helps me focus attention. But if the world changes, how do I maintain this going forward? That's a fear. Reid, you want to add anything? Look, I think the comments and the discussions are very good. As Will was saying earlier, the focus on infrastructure and the continuation of things, and the question of how we essentially leverage what is amazing data for the DOD mission, is, I think, the important one. And then the focus on the commercial side, because there's obviously this whole network ecosystem of investment going into commercial, and making sure those parallels are right. So I think all the discussion is extremely good. The only thing I would add is that within the industry context we are continuing to see innovation that works on the order of magnitude of a small number of months, releasing more large-scale models and different ways of doing it.
I think it's going to hit drug discovery and a bunch of other things probably within a small number of years. So also, keep consulting with the various industry players, which we have been, for that expertise, given the pace of it. And given that, you know, I think the discussion has been good and robust. So, Reid, just a quick comment there. There's no way we could keep up with that pace, and so we're not trying. What we're trying to do is make sure that, regardless of the innovation industry brings to the table, we have quality data; not having quality data would be a mistake, right? No matter what, we need to have quality data to be ready for that. Plus, that's our IP. We should own our own IP, and so there's sort of a business case for it as well. But if we don't have that data ready, we are going to be more stovepiped than ever. So we keep our eye on it, and we have very strong industry connections. As I said, everybody has been remarkably, fully engaged. And as people come to us and say, here's a solution, again, I am not stopping the deployment of any solution. The only thing I'm doing differently is noting that it incurs tech debt and that we're going to have to eventually transition it to the new way of thinking. And as long as I can make that the habit, and, to go back to Dr. Roper's question, we can think about contracting that allows for that transition, both paying back the tech debt and changing the way the economy is done, I think we're going to have something sustainable. But I say this in a hand-waving way; all of these things are really hard from an economy perspective. Well, I think the answer is something like: instead of your owning the stovepipe and being able to resell us the stovepipe, how about you own whatever IP you put into generating that data mesh, and you get to monetize that. Maybe we pay you per API call, maybe we change the contract so it is actually beneficial to do a per-API-call model, and we require you to open it up to other vendors. Well, then you have two markets: you have the market of selling me the app, and you have the market of selling the data to other app builders, right? That might not be the right answer, I don't know, but this is the kind of discussion we're having, in part driven by the conversations that we have here. Well, you know that your students have an interest in going into AI and trying to address the things that Craig's been talking about. Absolutely. AI is so pervasive now, and I'll speak louder: AI is so pervasive now that students, regardless of what they're studying, want to be part of understanding how to use AI appropriately. And at Olin, where I am right now, we are actually infusing it throughout the entire curriculum, but also helping the students understand how to use it responsibly. So yes, I think the interest is only going to increase. And I was curious, to Craig's point, from a people perspective: how do you make sure, from a training point of view, that people are always adequately prepared to embrace concepts like data mesh and AI scaffolding? What's your recommendation to make sure we're prepared to use the best frameworks? Yeah, that's an awesome question. One of the things I did when I first got here was build out a data management team, sorry, a data talent management team. We have a couple of threads about the way we're thinking. You asked, how do I ensure it? I can't ensure it yet. We're trying to figure that out.
So let me just tell you the threads that we're thinking about. Number one, stop thinking about hire-to-retire within the government. Industry produces massive value knowing that their people are going to stay 18 months to two years. They've learned how to manage that churn; we need to as well. During that 18 months to two years, how about we help transform careers? Here's another thread we're thinking about. How about we go to HBCUs, for example, or, just in general, the universities that Silicon Valley doesn't by default recruit from, and we say: come work with us, we'll pay for your education; come work with us, we'll transform you so that industry is going to want you. We'll give you some of the hardest problems on the planet and we'll give you some of the best experience, so on the other end, industry will come knocking. Now, imagine I can create a pipeline of data-literate, diverse candidates. I'm pretty sure industry is going to be knocking on that door. So that's a pipeline we've been thinking about. We've also been thinking about how we change JPME, and anybody who's had to go through JPME (I haven't), joint professional military education, knows that it gets overloaded; everything gets thrown in there. But we really have to think about, when someone's an O3 or an O4, what do they need to know about managing technology like this? And since every leader is going to have to go through PME throughout their career, how do we ensure that that's the case? We are also the functional community manager for 11 new job titles: data engineers, data scientists, all of the new titles that we need to actually drive this data-driven transformation. CDAO is the functional community manager. And so the first things we're doing are very boring things, like just recoding jobs. We're saying, that's actually not a software engineering job, that's a data engineering job. And once we get those recoded, which is a monumental task because we have to drive the whole department to do it, we can at least characterize where the gap is, what's missing. But those are longer-term efforts. For short-term wins, we have embedded talent teams in every combatant command. We call them AIDA, the AI and Data Accelerator. Those AIDA teams are in every combatant command, and the CDO of the combatant command gets to task them. We pay for them, but the CDO gets to task them. And we specifically tell the team, your job is to support the CDO for mission commander success. So whatever data they need, even just building a data pipeline or building an AI app, at least at the combatant commands we are seeding the talent to start to build the muscle of leadership depending upon this talent. If we have leadership demanding this talent, we'll have billets, people will come in the door, and we'll do our best to fill them. But we have to have that demand signal as well as the supply. So I don't have it all figured out. Your question was, how do we ensure it? I don't know the answer to that yet, but these are the ways that we're tackling it.

Mayor, you get the last word before Ryan, I hope, is going to recommend that we vote. I vote yes, by the way. It's a good report. And if you want to add something else, a lot of this stuff is right down your alley: how does the private sector work with the military?

Well, since the first time we met, Craig, you've made, you know, leaps and bounds of progress, it appears.
And it feels like the data economy study that Ryan really spearheaded fits so well. I love the idea of the dashboard that is rolling out, and I love the idea that you had the symposium in February. And he knows this very well: the innovation in artificial intelligence in the United States of America is off the charts. In my career as an investor, I've never seen this many companies with valuations in excess of a trillion dollars. It's never happened before. Most have their founders still involved, innovating for this big prize of artificial intelligence, with the capital to put behind the infrastructure, because it is the most expensive, rapidly changing innovation we've ever seen from a funding perspective. And if there's anything we've learned about innovation and technology, it's that when you have great leaders who are founders, who don't want to lose the next battle, great stuff can be created. Reid brought up that if we can create new drugs with this kind of speed, which we'll see; Reid, you said in years, and I'll just say single digits looks good, sooner rather than later. If we can do that, we can do a lot of things in a lot of areas. And the fact that you're bringing those companies together, and you're hopefully going to create new bidding and new business practices that are faster, better, cheaper, more efficient, can unleash a lot more innovation. So Mark and Craig and Reid, you may add to that. But Mark and Craig, thank you very much.

Ryan, you do, I assume, recommend that we vote on this? Yes. And I will give you the first vote. Absolutely, aye. Aye, too. Aye. And Gilda? Aye. Will? Aye. Mary? Aye. Mike? Aye. Pete Hoffman? Aye. And I will vote last; I also vote aye. Oh, I'm sorry, Matt, I just jumped over you. I apologize. Okay. I can't do this without your vote. Aye. Thank you, everybody. That concludes the second study for the day. And thank you, Tim. Thanks, Craig. Thanks, Mark. Thank you to our speakers.

And yes, Mr. Chair, now we're shifting over to the public comments. As we mentioned earlier, we received 77 comments. We grouped them into areas of feedback and inquiries on previous or current studies. So we have Nancy asking, how is the DIB going to lower barriers to entry for startups and small businesses? And I would recommend to Nancy to read the study as soon as it's published on our website. We also have Timothy, who is asking, how can I support the studies on how we optimize innovation with allies and partners? That leads into one of our next studies. We also have new study topic suggestions, including recommendations on data and analytics, and we've covered a lot of that in our current study on building a data economy program; I'd encourage those readers to review the data economy study, which will also be published on our website later today. And we also have some DIB procedure inquiries. Amanda would like to know what metrics we use for impact when considering these studies and recommendations, and how we know they worked. That's a great question, because we want to make sure that our studies are not only recommending but also being implemented. That's why we have speakers here and representatives from the stakeholders across the department: to make sure that, before our studies are published, our stakeholders are engaged early and often so that we have that buy-in.
So when we publish the studies, we can then turn them loose and have the stakeholders embrace, implement, adopt, and adapt the recommendations into their practice. With that, Mr. Chair, I turn it over to you.

Thank you. The last item on the public agenda is the board's work going forward. We are starting two new studies. The first covers how we are innovating with our allies and partners, and Charles Phillips is going to coordinate that effort. The second stems from a previous recommendation made by this board, and its focus is to align incentives across the acquisition and tech adoption pipelines. That's going to be coordinated by Admiral Mike Mullen. Both of these topics have come up a lot in our conversations with people across the Defense Department, and hopefully we can provide some valuable insights in the coming months. Any member want to add anything before the end of the public part of our meeting? Okay, well, that concludes our public meeting today. I want to thank Secretary Lloyd Austin, Deputy Secretary Kathleen Hicks, Under Secretary Heidi Shyu, and the entire Defense Innovation Board team for organizing such a productive day. And let me remind everybody to put it on your calendar: you can tune in to our next public meeting on Wednesday, April 17th. So thanks again, and we'll see you all in the spring, when hopefully it will be a little bit warmer across the country. All the best, everyone.

Thank you, Mr. Chair.