Hello, and thank you for joining us. I'd also like to thank Albert for all the wonderful work he's done putting this together. He's really been the glue that's helped make this conference happen over the last several years, so please join me in thanking him as well. OK, now that we've got the human out of the loop, let's talk about artificial intelligence, or, as our friends at Google refer to it, machine intelligence. AI has actually been around for quite some time. In fact, the promise of AI has been held out for quite some time, most recently in the 90s, when it was massively oversold; at the time, the technology just wasn't there to deliver on the hype. Today, some of the figures involved are well known. China is investing a significant amount of money; they say it's $150 billion. Russia's Vladimir Putin has said that whoever achieves AI (which really has four levels: level one is Siri, level four is Skynet) will own the world. Recently, a major AI research conference had to move, I believe from New Orleans to San Francisco, because it had been scheduled over Chinese New Year, and without the reschedule the event would have had almost no major research attendees. So this is a problem, not just from a national security perspective but also from an American supremacy perspective, if you care about investment in next-generation technology. As Albert has already mentioned, I'm joined here by three individuals with a distinct knowledge base on artificial intelligence, and you can read more about them in the bios in the booklets you've been given.
But maybe to start out, Mike: with a variety of countries digging into the artificial intelligence space, not just in terms of basic research but also applied research and building it into their military programs, are we in a new artificial intelligence arms race, from your perspective?

Well, there might be an artificial intelligence arms race, but we're not yet in it. The National Defense Strategy released in January calls AI out as one of the key modernization areas, for exactly the reasons you just cited: you quoted Russia's Putin and quoted a budget figure for China. Our adversaries, and they are adversaries, understand very well the possible future utility of machine learning. And I think it's time we did as well.

Great. Ivana, China and Russia have national AI strategies. The United Arab Emirates and France have released, or are just releasing, theirs. Where is our AI strategy? And as a person who runs an artificial intelligence company yourself, what more can the commercial space be doing to support people like Pat and Mike?

So in 2016, President Obama actually released a national AI R&D strategy. It's about 30-some pages; you can find it on the White House website. It really goes into why we need to focus on AI. But one thing it does not do is talk about implementation. So we can talk about how the commercial sector within the US may actually be winning the AI war, but if the technology is not being used, what's the point of having it? I think that's something we really want to work on. And as someone working in the AI space, a lot of times when we're engaging with policymakers, even senior military officials, about AI, we basically have to educate them first: this is AI, this is how it relates to machine learning, this is where we're at, here are the pros and cons, and all these things.
And the more time we spend educating them is time we're not spending trying to advance AI, or thinking about how we can actually implement it in our strategic vision.

Interesting. So Pat, as the CTO of one of the largest defense corporations out there, building enterprise-level systems: from your perspective, can we afford to be a fast follower in this space?

As with every piece of technology, you can't lead every part of the technology capability, but you can choose to lead in the application of artificial intelligence and machine learning to the defense space. And I think that's the decision we have to make; we have to be committed to it. There's a whole stack of technologies, all the way from how to organize and curate large sets of data, down to the fundamental hardware resources that allow you to either train or do inference in real time. All these things contribute to the possibility of taking AI and integrating it into our future weapons systems. That's where we have to make a decision. This is the strategy Mike and Ivana have been talking about: we've got to commit to it and start. I mean, we've had neural nets in classifiers for years. I saw them back in the 90s, and they didn't work all that well, for a number of reasons. But we're seeing now, led by advances in commercial industry, that there's great promise in critical areas of the defense space where artificial intelligence and machine learning could have a dramatic impact on what we need to do. Machines can do certain things much better than humans can. And if we can get enough of that reasoning into the machine stack, I think we can make the Defense Department's mission much simpler.
To follow up on that, Ivana, running a private-sector company: what would you like to see come out of either the Department of Defense or heritage industrial players like Pat's company in terms of enabling the insertion of innovative technology? Everyone talks about innovation, much like they talk about AI, as if it's a cereal box to be purchased at the supermarket. But from my perspective, I still struggle to understand what the actual mechanisms are, or could be, especially when we have an acquisition system whose timelines are badly out of step with Moore's law.

I think at a systems level it's interesting, because I was at Andrews last week for the Strategic Multilayer Assessment, and a couple of the O-6s and generals who were there were talking about this interesting thing, which is that since the 1960s, we have won battles, and to a certain extent we have won wars, but we certainly have not won any strategic victories. I think that's something JSOC has also said publicly. And one of the reasons we're not actually winning any strategic victories is that the monitoring and evaluation system we have is fundamentally biased, because we want to say that the things we created work. Well, AI is there, in part, to take that human bias out. So that's an almost fundamental shift that I think DOD and a lot of the industrial base can really help with: how can we actually think about using AI to do something like that? And then on a micro level, it is about the acquisition process. In AI, there's new technology coming out every four to six months. A typical acquisition process takes at least eight to twelve months if you're lucky, so by the time something lands in the end user's hands, it's three generations behind. How can we actually reform that?

Yeah, that's a really interesting point.
And think about some of the promise in machine learning and artificial intelligence: you're going to make an acquisition decision based on what you think best complies with a set of requirements. But you're going to have machines that can continue to learn and improve their performance over time. So you make an acquisition decision, and then you've got to live with it, and then it turns out that the machine you decided was not good enough may be better in six months because of some improvement that happened as a result of it being trained better.

Sure. Mike, I don't know if you wanted to answer any of those. I've got another good one for you.

Well, let me talk for a moment about the acquisition system. Please. Both Ivana and Pat have mentioned it. There's a bit of a reality, a cold-water bath, that we in the national security community need to immerse ourselves in. And that is that in the world we have today, we can either maintain our process or we can maintain preeminence, but we probably can't do both. We need to be able to move inside the decision loop of our adversaries. And our adversaries are not burdened by the acquisition system we have, which has grown up really since post-World War II days, starting with the development of the ICBM and so on, and has just gotten more and more protracted. From my couple of times as a corporate CEO, I'm well aware of the difference in decision-making processes. As a CEO, I have to please my board, and I have to find a way to please the market, all right? But in making decisions about which chip to buy or which piece of software to buy, I don't have to be fair. I don't have to give all parties equal access. I don't have to entertain protests. I don't have to pick the lowest bidder. I'm responsible for picking whatever I think will advance my corporation's interests.
I'd better guess right, or I might not be the CEO for very long, but that's the criterion. Our acquisition system was built for a period in which, first of all, American preeminence was not really questioned. We were striving to contain the Soviet Union; they weren't striving to contain us, they were striving to break out. China is on the rise, but they're on the rise against our established position. So everything we've done in the seven decades since the end of World War II has been done in a context in which American preeminence was not questioned, and we had the luxury of time to make decisions as if others couldn't catch up. Now we know, or we should know, that they can. We can either devote ourselves to maintaining the structure we have, or we can devote ourselves to remaining on top. That's the choice we face. If the consideration for national security officials is "I have to figure out a way to be fair to everybody," rather than picking who I think can do the best job and living with the consequences, then we're always going to be behind, in this and so many other things. If I have to keep my accounting system balanced to the penny, as many of us in this room live every day, if I have to maintain all of that process, then we probably cannot stay ahead of our adversaries.

It reminds me of a quote attributed to Winston Churchill: you can always count on the Americans to do the right thing, but only after they've exhausted all other options. And Churchill was half American; his mother was American. So I think he knew us well. Truer words were never spoken. This is back to you, Mike, picking up a comment that was mentioned earlier today. In late 2016, a couple of F-18s launched a swarm of 103 Perdix micro-drones, and shortly thereafter China responded with a swarm of 119 drones.
So one of the things that concerns me personally is less who has more drones in the air and more the ability of artificial intelligence to enable those platforms, however big or small they are. So Mike, from your perspective, how should these data points (the number of drones in the air, the number of unmanned or remotely piloted vehicles out there, logistics capabilities) inform our future investment priorities? And should those priorities change from today?

Well, I think swarming drone attacks, which have recently become possible and are in fact occurring in theater in the Middle East, give an example where AI can be very helpful. Certainly human-directed weapons systems can deal with one or two or a few drones if they see them coming. But can they deal with 103? If they can deal with 103, which I doubt, can they deal with 1,000? You reach a point where it is inevitable that the response to such attacks must be automated. If it's going to be automated, then it will have to be reasoned. And if it is going to be automated and reasoned, now you're talking about AI. In fact, the ability to go after such swarms is a classic instance of the traveling salesman problem: there's no practical, provably optimal scheme for doing it; you have to settle for a pretty good scheme. But failing to implement a pretty good scheme means you're probably going to lose, because the enemy will also have a pretty good scheme for attacking. We have to think our way through those kinds of issues. That, to me, is an early instance of what will prove to be a classic example of the military utility of AI.

So this is a perfect tie-in. Ivana, Deputy Secretary Bob Work, in one of his last speeches, said surprise is going to be endemic, because a lot of weapon-system advances we aren't going to see until we fight them.
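Mike's point, that swarm engagement is a traveling-salesman-style problem where a "pretty good scheme" beats holding out for a provably optimal one, can be made concrete with a minimal sketch. Everything below (the function name, the coordinates, the greedy rule) is an illustrative assumption, not anything presented on the panel:

```python
import math

def greedy_intercept_order(start, drones):
    """Greedy nearest-neighbor ordering of intercept points.

    A 'pretty good scheme' for sequencing engagements against a swarm:
    always fly to the closest remaining target. Not provably optimal,
    but cheap to compute and usually close enough.
    """
    order = []
    position = start
    remaining = list(drones)
    while remaining:
        # Pick the nearest remaining drone to the current position.
        nearest = min(remaining, key=lambda d: math.dist(position, d))
        order.append(nearest)
        remaining.remove(nearest)
        position = nearest
    return order

# Example: an interceptor at the origin facing five incoming drones.
swarm = [(4.0, 1.0), (1.0, 1.0), (2.0, 5.0), (0.5, 3.0), (6.0, 2.0)]
plan = greedy_intercept_order((0.0, 0.0), swarm)
```

Nearest-neighbor ordering is not optimal in general, but it runs in quadratic time and scales to swarms far larger than a human operator could sequence by hand, which is the panel's point about automation.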
And if they have AI that's better than ours, that's going to be a bad day. So, picking up on what Mike was just talking about: how can DOD work with the industrial base, as well as leading commercial players, to counter what he calls endemic surprise? How can we get ahead of not just the investments but also the employment of a lot of these technologies? And to what extent does morality, as Americans, play into this, given that other countries seem not to be as encumbered by it? Or are we too moral?

Well, first of all, I don't think I would agree with the endemic surprise. I think the markers are out there; we just have to know where to look, right? For a lot of folks, the Russian influence operations, the way they're using bots, the informational warfare they used against the West, against us, was a surprise. But if you were actually tracking it, from the anti-vaxxers onward, it was not a surprise at all where they were coming from. Can you repeat the last part of the question? Are we too moral? Oh, yeah. Not sure of the connection there, but OK. I mean, AI is technology, so I'll talk about technology in the broader sense: technology is neither good nor bad; it's all about how you use it. It's not going to change someone's intent; it's just going to catalyze it and help that person realize their goal faster. It could be bots, it could be drones, it could be whatever. And it's really hard to control how someone's going to use technology. We developed quadcopters, and people used them to take photos and videos of their weddings. But you could also be Daesh and use them to create your own mesh network in Mosul. So yes, in the US we're probably a little too concerned about how we can control how someone uses our technology, because we don't want to open up this can of worms.
But, I mean, I understand that side of the argument. Eventually, though, someone else is going to open that can of worms, and they're going to own it. So I feel it's much better for us to get ahead of it and then think: OK, now that I'm five steps ahead of everyone else, maybe I can control it, maybe not (there's a bigger chance that I might not), but at least I can have some kind of say instead of no say.

How can large defense corporations enable both Mike's intent as the CTO of the Department of Defense and people like Ivana and others out there in the commercial space who, candidly, and we talked about this before, don't really have a desire to even engage with the US government? They're not interested in developing a DCAA-compliant system, or in being subject to, or even understanding, the FAR. What can Northrop, as an example, do in this space from a technology perspective?

Keep them far away from the FAR, I guess. No, it turns out that I think there's a very useful space that allows, and, to Mike's earlier point, encourages, the following: there's an economic motive for US companies to do well, and there's a national security motive that says we want our national security apparatus to do well and to protect our ability to have that economic capability. At the fundamentals of the technology, it's very useful for us to collaborate, together and broadly, with members of the more commercially defined technology space. And quite often we find ourselves in a really nice position: we're selling to our defense and security customers, while commercial firms probably aren't, and don't understand how to do that.
We can be the vector for that technology to our customer base, and yet we can collaborate in ways that drive the underlying technology to be better for the broader economic and defense and security missions. So I think there's quite a natural matching at the technology level. We also happen to have some very interesting and demanding problems that allow us to stretch the underlying technology and discover its limitations or its benefits in critical areas. So there are a lot of reasons why we can and should work together in a symbiotic way to drive the economic and security needs of our country. And we're delighted to do that in our space; I suspect our peers do similarly.

And I guess that's the follow-up question, for both you and Ivana. We've enjoyed a richness of technology here in the United States that, quite understandably, has given us a not-invented-here syndrome, where we tend to view not just technologies but the concepts of employment of those technologies as existing solely here in the United States. Given the development of emerging technologies, especially AI, what's your perspective on what you're seeing, in technologies and concepts of employment, coming out of places not named the United States of America?

Yeah, my view, especially with things like AI, is that there's been a huge democratization of the underlying technology, and it's going to allow achievements, new applications, and new discoveries to be made very broadly across the world. I think we have to be present in that global technology commons. We have to understand it. We have to use it. Because if we don't, and there's better technology somewhere else, we're going to be disadvantaged. We don't ever want to enter a fight where our weapons systems, our machines, are at a disadvantage.
So we want to understand that technology very broadly and see how we can make use of it inside our own procurement. Ivana?

I would agree with that. But at the same time, part of me also looks at the other side, which is this: let's just boil AI down to algorithms. They're very different types of algorithms, but they're algorithms, which means anyone with a laptop can really compute them and create an AI. That's the scary part. At the same time, if you want to do really, really cool AI, you need a lot of computing power, which brings me to China. Before China banned bitcoin mining, they were mining probably almost half of all bitcoin. Think about how much computing power they have just hidden everywhere. They can redirect that into AI and start to build supercomputers to crunch all the data. And we know for a fact that China collects a lot of data from everybody. So that's a concern.

Yeah, I remember growing up reading stories about China's attempts to seed the clouds to make it rain in certain parts of the country, if any of you remember that as well. And I heard recently that apparently they're now doing it using artificial intelligence for micro-targeting, to be, I guess, much more successful. It's kind of a random segue, but Mike, given what Ivana and Pat have said: you've had a distinguished career holding a number of different positions in multiple agencies and companies. When we talk about AI, well, I've ridden the wave of big data, if you remember back when that was cool, and of cybersecurity, another monolithic term for a whole number of things. Now we're in the middle of the latest AI craze.
From your experience, in your current or past assignments, what have you seen that could work as a useful mechanism or process for helping onshore some of this advanced artificial intelligence technology that people keep saying they want?

Well, let me comment on that in the context of Pat's and Ivana's remarks. Pat said we've seen the democratization of AI, and Ivana commented that if you have a laptop, you can do AI. I might rephrase the same thing and say it is a field with low capital barriers to entry. Yeah, a whole bunch of computers is a great idea, but if you've got one and you're clever, you can do something. And who knows what the invention will be that really takes off? Sergey Brin and Larry Page didn't start with rooms full of computers. Einstein worked out relativity using nothing more than algebra and an inquisitive mind about the nature of the universe. That sounds pretty straightforward. Right. So in endeavors that are inherently low in capital intensity, it's really difficult to say who is going to make the next major advance, where it will come from, or how powerful it can become. To that extent, when you talk about onshoring such things: yes, it's nice to have lots of compute power and all that stuff, but what we really need is a climate that attracts the best minds. A fear of mine has been that since 9/11, we have really clamped down on a number of the ways in which the United States used to be attractive to the best and brightest. I think we still are; a country that has people clamoring to get in is a better arrangement than a country people are trying to get out of. But with regard to our immigration policies, we're not always doing the kinds of things that would cause people to want to come here, get a great education, and stay here. Sure.
My native field is space and rocketry and things like that. Those are highly capital-intensive systems; AI is not. And the rules of how you progress are very different between those systems. I think we should take notice of that.

I'm really glad you said that. Realizing that you're not the P&R, a question I'd love to ask you, and the genesis of this is the saying that you know you're avoiding innovation theater when the budget for innovation comes out of the CTO's budget instead of the chief marketing officer's budget. Along those lines, what's the incentive for defense personnel and the corresponding civilians to have basic AI literacy? Maybe not go to a General Assembly class and learn how to code in Python, but at least have basic technical literacy around things such as AI. And can one be promoted without it? From a CTO's perspective, what's your view?

Well, I'm hardly just a department CTO. That's one of my legal responsibilities, but I also have line organizations; missile defense and DARPA both report to my office. DARPA, of course, spans the waterfront: their whole mission in life is to prevent technological surprise for the United States and to cause surprise for others, across a wide range of activities. So I personally think we can't lose sight of the fact that there are many dimensions of national security. We have to add a new one without losing the others (or, in my case right now, without losing our audio-visual). Yeah. So this is a discipline we have to add on. I don't have a problem with people being promoted to high levels without a working knowledge of AI. What I want to see is that working knowledge, or even great expertise, in AI will be sufficient for people to be promoted. I don't want to get to a place where you can't get anywhere unless you're AI literate.
That might not be that useful to people who are trying to help us develop ever-quieter submarines, for example. So let's not get head-up-and-locked on the shiny penny. We need to add AI to our quiver. I think all of us have been very clear about that: it needs to be a new arrow in the quiver, but it's not the only arrow.

So as a follow-up, and any of you can take this one. What concerns me about artificial intelligence, the "this time is different" monologue, is that what it really enables is hyperwar. We talked about hybrid warfare with the little green men and the return of core warfare. But what concerns me is the ability of a fairly democratized technology, with low capital costs and low business barriers to entry, to be employed in a way that expedites adversarial decision-making, as well as the adversary's ability to employ that decision, whatever it is, on the battlefield. So what I'm curious about is: is artificial intelligence the same hype, or non-hype, as cybersecurity, or big data, or blockchain, or any of these other shiny pennies? Is this really a game changer for how we think about basic and applied R&D, and how we orient our force structure, things like that?

I'll start. Please. I won't use a lot of time. My answer is: how can you know? You can't afford to put yourself in a position where you don't know, because a reasoned technical individual would say there could be something there. We're at its infancy; we don't have a mature adult in front of us in AI, we have an infant. But we can reasonably conclude that there might be some real advantages, and we can't let others be the only ones to mine those advantages, particularly in a world where, if the great powers all have nuclear weapons, it's really hard to envision serious kinetic strikes on each other's infrastructure.
You can envision air battles and sea battles, but it's a whole different level to imagine China striking our mainland, or we theirs, or any other nuclear power's. So given that, and given the desire of different cultures and nations to prevail over others, which I don't see going away, much as it might be nice, they will seek other means to do so. And cyber attacks and AI and other possibilities will naturally occur. I've often called attention to the dependence of the US economy on the GPS timing signal. Now it has become popular to point that out; I like to think I was one of the more paranoid people for spotting it earlier. But irrespective of that, you have to ask yourself: if half the US economy goes down because we don't have the GPS timing signal to allow encrypted financial transactions to take place, then in what sense can the national security establishment say it's defending the nation? Maybe no bricks and mortar were pounded into dust, but two days without electricity and New York goes feral. In an advanced society, the number of different ways to be vulnerable increases greatly. And AI and cyber and some of these other newer realms offer our adversaries possibilities to exploit that. We must see to it that we cannot be surprised. Let me stop there.

Yeah, and while you two are talking, if you can also weave in a good point Mike brought up around escalation dynamics: you have a technology that can easily ratchet up the potency of the strategic corporal, so to speak, whether they're in the Donetsk Basin or somewhere else.

I mean, information warfare and information operations is one of the places. And speaking of hyperwarfare, I think it's a very good example, because there's a website online, I forget the name, but it's essentially WordPress for Twitter bots.
In 10 minutes, you can create your own Twitter bot. Anyone can do it. It's slightly terrifying to think about. Bots by themselves aren't bad; a lot of corporations use bots on Twitter. When you write, hey, American Airlines, my flight was delayed, blah, blah, blah, the first line of response you get from American Airlines on Twitter is a bot. So it's not bad in itself. But the fact that someone can create a Twitter bot in 10 minutes and use it for whatever purpose they want is very concerning. I teach at NATO, and I always say this: information warfare can sometimes be seen as even more dangerous than the arms race, because there's no mutually assured destruction. What is the consequence of the entire internet being flooded by bots? A, 70 percent of us probably would not even know they were bots. B, we don't have an alternative to the internet, at least until quantum internet is invented and stable enough; we have to use it. And then, going back to the hype about AI: I think there is a lot of hype, but I also think the hype is there because a lot of people don't understand what AI is. Everyone's talking about it as if they know what it is, but a lot of them don't. And that very abstract way of talking about it actually undermines the time we could be spending on real AI problems.

Interesting. Pat? Just one other thing to add; these are really terrific thoughts. There's this tendency to just black-box AI: AI is this big black-box computer, you throw in a bunch of training data, you calculate a bunch of weights, you run it, and you get magic answers out. Well, that's really not a great way of evaluating either your own or your adversary's decision-making process, or of creating effects. When you untangle the hype around AI, it's an important tool.
And whether our adversaries have it or not, I think we need to have that arrow in our quiver to support the decision-making processes, the pulse-to-pulse fight that happens in the electronics regime. You talk about these bots; these bots are essentially another form of swarm. And we've got to have machines that can carry out and execute the things we need them to do in the battle space. That's where I think we need to take artificial intelligence and machine learning for our customers.

Great. Well, we have about five minutes left for questions. So, people with the mics, we have one there. We have a question right up here in front, please. Please give your name and identify yourself, too.

Thank you. Patrick Tucker from Defense One. For Dr. Griffin: you mentioned a moment ago how you were ahead of the cool kids on urgency regarding US GPS reliance, particularly across the economy and not just for defense. As you put together the ten or so areas where, as you've said, you're going to be looking to make investments for the future of research and development for the Defense Department, what are some of the key emerging threat areas, ones that have been left out of the larger conversation, that you're going to be paying particular attention to?

Well, the priorities for the research and engineering secretariat are not secret, and we didn't just think them up. If you read the National Defense Strategy, the unclassified version, it calls out ten or a dozen areas where the United States really needs to modernize its national security establishment. AI is one of those, and that's what we've spent this panel talking about. But another is quantum computing.
Well, I'm on a mission to change that to quantum science, because the potential engineering applications of quantum science, now 100 or so years after the scientific invention of the discipline, include but go well beyond computing. Hypersonics: I think any open publication in the defense arena will provide you with ample evidence that China now has a significantly advanced offensive hypersonic capability that the United States needs to have and does not yet. Directed-energy weapons: as a result of the urgency of the Middle East wars, something like a decade ago, we started scaling back our investments in directed energy. Well, even with the world's best AI, let's assume perfect AI, countering a swarm of drones might be difficult to do with physical bullets. I might need directed energy to be able to shoot what I can target, as one example. The microelectronics industry was very worrisome, and I believe properly so, to those who crafted the National Defense Strategy. It used to be that on purely economic and performance grounds alone, leaving aside trust issues, United States microelectronics vendors led the world. People wanted to buy US microelectronics because it was the best stuff. That's not so anymore. We need to fix that. So those are a few of the areas, and I have a few more. Let me let someone else have a turn.

Questions? There, in the front, please. I'm Matt Ryan with the Council for Emerging National Security Affairs. In the lead-up to World War II, there was a usability gap, specifically with respect to technologies like aircraft. Dr. Singer mentioned this earlier, but is there going to be a usability gap when it comes to AI and the human capability to remain in the loop, or is it just a matter of computing power when we decide who's going to have that bad day? Go for it, Ivana. The short answer, since I see the stop sign, is yes, there will be a gap.
And it's something that was actually highlighted in the 2016 strategy: we need to retrain the workforce to work directly with semi-autonomous systems, or AI in general. Right now it's an ad hoc process that people just learn on the fly; I don't think there's any comprehensive training.

Pat, any last thoughts? This idea of a gap that AI will either fill, or that will threaten us with overmatch: I think this is a decision we have to make as a nation. If we believe that artificial intelligence and machine learning have that level of impact, then it's up to us to take the proper steps to recognize that impact, to provide the investment, the encouragement, and the national infrastructure to allow it to blossom, and to take advantage of it.

Well, big things I learned: number one, the customer service I talked to apparently isn't so much customer-focused; number two, most of the brave souls who follow me on Twitter are apparently Twitter bots; and number three, I would sure like to see more engagement and interaction between representatives such as yourselves. I feel most of the discussion, not just around AI but around technology in general, could use more of this and less of people just wearing suits, so to speak. So please join me in thanking our panelists. Thank you. Thank you.