The U.S. Naval War College is the Navy's home of thought. Established in 1884, NWC has become the strategic and intellectual center of naval sea power. The following Issues in National Security lecture is designed to offer scholarly lectures to all participants. We hope you enjoy this upcoming discussion and future lectures.

Well, good afternoon and welcome to the sixth Issues in National Security lecture, being held here in the virtual world. I'm John Jackson, and I'll be the master of ceremonies for today's event. To give us a warm kickoff, I'd like to turn the screen over to Admiral Chatfield. Ma'am.

Hello, good evening. It's nice to see you all here today. I want to thank our community, our families, our Naval War College Foundation, and all who have dialed in to join us for our Issues in National Security lecture. My husband, David Scoville, is here with me, and we are delighted to join you in this lecture series this evening.

Thank you very much, Admiral, and David, good to have you with us once again. As we've said in the past, this series was originally arranged to provide a glimpse of the Naval War College academic program to the spouses and significant others of our students. But we've expanded it significantly, and it is now open to Naval War College Foundation members, members of the greater Newport community, and, virtually, folks from all around the world. So it's a great pleasure to have you here. And to look ahead a little bit: the next event is gonna be on Tuesday, the 1st of December, and we're gonna feature a lively discussion entitled Machines versus Humans with Professor Tim Schultz. So make sure you dial back in if you can. As in the past, the event will be in three parts today. We'll have the scholarly lecture, we'll have a question and answer period, and then we'll have the family discussion group meeting, which is primarily designed for members of the local Newport community, and we'll be hearing from a guest speaker in that event. So let's move on to the main event. While the speaker is speaking, if you'd like to ask questions, use the chat function that Zoom provides for us. We'll take a look at the questions and pass them on to the speaker after he's completed his remarks. I'm very pleased to introduce our speaker, Associate Professor Tom Creely. Tom is the founder and current director of the Ethics and Emerging Military Technology graduate certificate program here at the Naval War College. Additionally, he conducts ethics and technology research for the Department of Defense Joint Artificial Intelligence Center, the Defense Innovation Board, the National Security Commission on Artificial Intelligence, and the U.S. Cyberspace Solarium Commission. He is an affiliate of the Jahn Research Group at the University of Wisconsin and is an adjunct faculty member at Brown University. He has also served on a NATO Science and Technology Organization technical team. I'm very pleased to pass the digital baton to a friend and colleague, Professor Tom Creely. On to you, Tom.

Thank you, John. It is good to be with you today. Good afternoon. I want to thank John for his introduction. He is one of the important people in the Ethics and Emerging Military Technology program, teaching unmanned systems, which we look at critically. This lecture is going to equip you with some knowledge, history, and tools that you can use. Ethics of technology is a priority with the Department of Defense. A year ago, DOD issued its Artificial Intelligence Ethical Principles.
Now, those principles had been worked on for about two years by the Defense Innovation Board, but it has become a priority in DOD. Even the National Security Commission on Artificial Intelligence is recommending AI and ethics education as a priority for the service academies and the war colleges. The Naval War College has taken the lead and made it a priority since 2016 with the creation of the Ethics and Emerging Military Technology graduate certificate program, and it is the first of its kind anywhere in a military academic institution.

In the last lecture, Dr. Holmes spoke on the importance of sea power and China. It is significant for our national security. We must have a strong position in the seaways, maintaining peace and the exercise of trade across the continents. We also must prepare for conventional naval conflict with carriers, ships, tanks, and planes. However, our peripheral vision must expand to include emerging disruptive technologies. There is a threat derived from multiple disruptive technologies acting simultaneously to undermine our American society, culture, values, and economic system. It's discreet, ambiguous, and yet intentional. It is a long, slow, and patient strategy by adversaries. When I lived in China for three years, one of my Chinese colleagues said, Chaps (I was also a chaplain at the time), you're only 237 years old at this point. He said, we're over 3,000 years old. We're patient. We take the long view. So this is a test of the American will that cannot be neglected.

The ethics of technology is a rapidly developing new discipline of academic study and scholarly research. We do that at the Naval War College with our EMT program and the special students who are selected, the brightest minds, to do some hard thinking. And a fundamental question that we like to ask our students when they come into the class is: what does it mean to be human in an age of technology? You see the mural on the right; it is about a three-story mural at the University of Turin, Italy. You see the thinker, representing humankind, holding the power cord, contemplating the power that is in his hand.

Why does ethics of technology matter? I invite you to continue to think about what it means to be human in this age of technology as we go through the presentation. We develop technology at an exponential rate, to the extent that our ethics cannot keep up. Legislation is light-years behind. Therefore, ethics must prevail by engaging with technology. Human interaction with technology has increased exponentially. I don't know many people who suffer from technophobia; not that many people go without an iPhone or a tablet or a computer. Technophobia is the fear of technology. In the industrial age, those who feared it were known as the Luddites, people who feared the loss of jobs and the loss of skills to automation in the age of mass production. So technology is a defining feature of the human condition. Most of us suffer from technophilia, that is, the love of technology. We embrace it as a society without questioning its shortcomings, its pitfalls, its vulnerabilities, or the ethical dilemmas that it creates. Certainly the medical and engineering professions have robust ethics programs to keep us safe in medical research and engineering fields. Yet many technologies, especially startups, are developed without ethical consideration in this day and time.
Now, the concepts and academic discipline of ethics of technology began in the 1950s, as scholars began to examine the second industrial age, in which the internal combustion engine, the proliferation of electricity, transnational transportation, the assembly line, and other technologies led to many social consequences. People started moving from the farm into the cities and the suburbs, working in the plants and the factories. So you can see that the impact technology has on society reaches much further than we really imagine.

In this presentation, we're gonna talk about the philosophy of applied ethics and give you some tools, and also the philosophy of technology and some history of technology. Dr. Tim Schultz, my colleague in the program, will be presenting some of this next week at a much deeper level. Then I'll talk a bit about the Ethics and Emerging Military Technology graduate certificate program, in which our students are engaged in deep research, and then about where the future of ethics of technology is headed.

So, back to the slides. We ask the question: what is ethics? Aristotle said we're all philosophers, and we're all ethicists. But a definition that I've come up with, one that I think is appropriate for this setting, is this: ethics is what one ought to do with respect to others in terms of virtues, duty, benefits to society, and fairness, as well as the dignity and worth of each individual, each person. Some see ethics as "you cannot," from the legal officer's perspective. Some see it as "thou shalt not," from the chaplain and the clergy. In other words, some people see ethics as constraining rather than freeing. But as I've developed over the years, I see ethics as a freeing opportunity. Ethics is much more than black and white, yes or no, right or wrong. Ethics is a collision of values that compete in a complex situation. We can find those collisions in our own families and our relationships, as well as in emerging technologies. These are the gray areas in which there is no clear answer, and we have to struggle. Our values have to be challenged, turned inside out, twisted and turned, in order to know what we truly believe and what kind of difference we can make as humans in the midst of technology.

Ethics is risk mitigation. It is taking the right risks for the right reasons. To expand ethics from this narrow conception, we teach that applied ethics enables a person to make the best decision possible when there are conflicting and competing values. Understanding the terms of duty, greatest good or consequences, virtue, justice, and rights, and using a decision-making framework, empowers a person to make the best decision with due diligence. An ethical dilemma occurs when there are competing values in the situation. Technology provides plenty of complex ethical dilemmas in artificial intelligence, neurotechnology, biotechnology, social media, information, and other technologies. Ethics is methodically taking the right risks for the right reasons for a higher good. Therefore, it is essential that ethics education, more than training, be a continuous program throughout the military. There's a need for a constant feedback loop and learning that provides offensive strategy as well as defensive strategy in protection of our country, our national security, and our values. Ethics frameworks help mitigate the risks, and using the different lenses of duty, greatest good, virtue, justice, and rights provides a foundation for moral reasoning. So let's take a quick look at these ethical principles. First, there's deontology, that is, duty.
Immanuel Kant was the thinker behind duty, deontology. This refers to the rules and the laws and the regulations. We're certainly familiar with rules and laws and regulations, being in the military, but Kant sees duty as being about the intention to uphold the law and one's duties. A concept he derived was the categorical imperative: act only according to that maxim whereby you can at the same time will that it should become a universal law. So we have a duty to serve our country. We take the oath of office; these are important. But we also find that utilitarian values can conflict with duty.

Utilitarianism was developed in the thought of John Stuart Mill; it is the greatest good for the greatest number of people. How do we find the balance of goods over harms? And then how does that conflict with doing one's duty?

Virtue is character. Aristotle and St. Thomas Aquinas were the original figures in this area. It focuses on people's actions and moral character. This is the habituation of honesty, courage, generosity, compassion, fidelity, fairness, self-control, and prudence. It is about aspiring to develop good habits. It is aspirational toward that higher good, having moral autonomy as individuals and being accountable and responsible.

Justice is giving each person what he or she deserves, treating people equally unless they differ in ways that are relevant to the situation in which they are involved. John Rawls, with A Theory of Justice, is one of the seminal thinkers. We hear a lot about justice, and people want justice, but it's important to know what it truly means.

And then rights: a right is a justified claim on others. I have a right to be left alone, the right to privacy, the right not to be killed, rights that protect our human freedoms. These have a grounding in the Bill of Rights and the Constitution.

So these are five lenses; there are other lenses used in examining ethical situations. Now we're going to look at a framework which you can use. In fact, you can use it with your family members and your children, and teach them these steps along with the definitions that I gave you. The ethical decision-making framework is a system that helps you order your thinking critically to examine the dilemma that comes up.

Step one: first impressions. Note those carefully. Knowing the way your thinking inclines is the first, necessary step toward balancing it. We all have biases, and when we look at things at the very beginning, we get an impression through the lenses that we look at life with. Sometimes it requires us to step back and have a clear review of what we are examining. Don't allow emotions to override your cognitive abilities and thinking processes. Those first impressions may or may not be correct.

Step two: do you have the relevant facts of who, what, when, and where? It is important to verify the information, making sure that it is correct and that the sources are reliable. Also think about avoiding hearsay, third-hand information, unsubstantiated claims, and gossip, because those can adversely affect your decision-making process.

Step three: what are the various opinions on the issues and arguments? The position that directly opposes your first impression is often the most helpful one to consider. What values are in conflict in the midst of this? What moral principles are colliding that you can measure the situation against? In other words, you want to utilize the utilitarian, duty, justice, virtue, and other principles and apply them through the lenses.
Step four: don't feel obligated to your early ideas. The more fully and non-prejudicially you explore the issue, the better your judgment is likely to be. What are the pros and the cons of the alternatives? It's important to consult with other people, because you don't have to make ethical decisions in a vacuum. You can consult with an ethicist, somebody who has a background in ethics, or someone with an area of expertise related to the issue or the problem.

And step five: make a choice. Stand on your own two feet and make a choice. A lot of times that requires courage; there is no moral courage without a cost in that decision process. It is important that you be able to explain the line of reasoning that led you to your conclusion and be able to defend it. Did I make the right decision? Are my motives for the right reasons? And did I bring resolution to the problem or the situation of ethical conflicts and competing values? We don't always know what the outcome is going to be, but at least we have a process by which we can operate.
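As a way to make those five steps concrete, here is a minimal sketch in Python of the framework as a due-diligence checklist. The class, fields, and method names are illustrative assumptions of mine, not an official Naval War College tool:

```python
from dataclasses import dataclass, field

# The five lenses named earlier: duty, greatest good, virtue, justice, rights.
LENSES = ("duty", "greatest good", "virtue", "justice", "rights")

@dataclass
class EthicalDilemma:
    description: str
    first_impression: str = ""                        # step 1: note it, don't trust it
    facts: list = field(default_factory=list)         # step 2: who, what, when, where
    viewpoints: list = field(default_factory=list)    # step 3: include opposing views
    alternatives: list = field(default_factory=list)  # step 4: pros and cons, consult others
    decision: str = ""                                # step 5: the choice made

    def due_diligence_done(self) -> bool:
        """Have steps 1 through 4 actually been worked?"""
        return bool(self.first_impression and self.facts
                    and self.viewpoints and self.alternatives)

    def decide(self, choice: str, reasoning: str) -> str:
        # Step 5: stand on your own two feet, and be able to explain
        # and defend the line of reasoning that led to the conclusion.
        if not self.due_diligence_done():
            raise ValueError("Work steps 1 through 4 before making the choice.")
        self.decision = choice
        return f"Decision: {choice}. Reasoning: {reasoning}"
```

The point is the ordering discipline, not the data structure: each alternative gathered in step four can be examined under every lens in LENSES before step five is allowed to run.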
So now that we've talked about the ethics of technology, we want to look a little bit at the history of technology. We'll look at the Middle Ages forward. Romano Guardini, in his book The End of the Modern World: techne is the craft of human ingenuity and skill. Skilled craftsmen developed tools for utility, and those tools interacted with nature: for instance, the sail with the wind, the plow with the earth. There was collaboration with the earth. Prior to the Middle Ages and the Renaissance, technique was the craft of making a tool by hand, one at a time; homo faber, man the maker, was the skilled artisan. The Middle Ages, the Renaissance period, and the Age of Enlightenment changed the focus from humankind's interaction with nature to conquering it. With the age of the Renaissance, man began to separate himself from nature and to produce for the masses. Out of this came the printed Bible of the Christian and Hebrew scriptures, looms and the milling of materials and fabrics, and other inventions. This led into the Age of Enlightenment, the European cultural, artistic, political, philosophical, and economic rebirth after the Middle Ages. And it led to the machine creating mass production, which further helped separate humankind from nature. The Industrial Revolution came along in England, and in the Second Industrial Revolution, manufacturing and mass production were key. Certainly in more modern times, we have seen the military-industrial complex emerge from World War II, which has created tremendous opportunities and jobs and development of technologies.

So technology has given humans tremendous power, yet our ethics have not kept up with its rapid development. In fact, there is not merely a gap between ethics and technology; there is an abyss between our ethics and the exponential development of technology. And it's getting faster and faster. With the focus on technology ever increasing, humankind moves further and further away from a transcendent God, from nature, and, as we look at current technology, we're even moving away from one another. Mass man conforms to technology.

So what is technology? Is it a cell phone? Is it a tablet? Or is it a computer? We've seen two definitions of technology. The French philosopher Jacques Ellul says that technology is the totality of methods rationally arrived at and having absolute efficiency for a given stage of development in every field of human activity. And Peter Engelmeyer says technology is the inner ideal of all purposeful action, which springs from utilitarian designs and drives. These are broad definitions. I can remember when we first started the EMT program and we asked students, what is the definition of technology? And they raised their phones and tablets as they were sitting there. So it is much larger than we think of it; it also comes as processes and our ability to think, the productivity of thinking.

A question I ask you: is technology value-laden, or is it value-neutral? Well, it can be both. Technology can be value-neutral if it is just the hardware, the computer just sitting there, because it can only function by what we code it to do and what we type into it. But it can be value-laden, and we see the values that are in algorithms and artificial intelligence. What would the internet look like today had it been created by the Chinese or another government rather than the United States? A colleague of mine who teaches in California, in the midst of Silicon Valley, had a major software developer call four years ago and say: we have some of the best coders in the world, but they have one problem. They don't know ethics. And they're writing code that reflects biases, code that reflects their values and not necessarily the values we want to see or the values of our customers. A toy example of what that looks like follows below. So in creating and directing the Ethics of Emerging Military Technology graduate program, it is our goal to narrow that abyss by expanding ethical capacity and strengthening ethical agility for these complex emerging dilemmas.

The philosophers Jacques Ellul and Langdon Winner said that technology is pursued for its own sake without regard to human need. And Ian Barbour, who wrote a book on the ethics of technology, classifies technology in three categories: liberator, threat, and instrument of power. As liberator, our standard of living has exploded. We enjoy a lot of conveniences, especially with Christmas coming up and shopping off Amazon; we have so many choices to make. And as we are live here, it provides the instant communications that we're currently experiencing. Certainly it has brought the world closer by communications and also by global travel. Technology is also a threat, because in mass society we lose individuality. Efficiency for productivity places humans as a cog in the machine. What is our role? What is our function, to do the repetitive type of work? It is also a threat because adversaries can exploit the technology. And third, technology is an instrument of power. Control of technology gives power over another: technocrats with moral superiority deciding how technology should be imposed on our social lives. It's beginning to define life. It's beginning to define values. And it's also addictive: the addictiveness of the internet, the addictiveness of some of the drugs that have been made. So technology has a tremendous power, and we need to be reminded of these three categories that Barbour has given us.

So what are emerging technologies? The definition is technologies being developed that hold a realistic potential not only to become a reality but to become socially and economically relevant within the foreseeable future. Artificial intelligence, neuro, bio, nano, information, cyber, social media, the internet, 3D printing, robotics, and unmanned systems are some of those technologies that are emerging, that are affecting the economy, that are affecting social issues, and we have to be much more attentive to them.
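To see how a "neutral" piece of code ends up value-laden, as in the Silicon Valley anecdote above, consider this toy scoring function. The function, its features, and its weights are hypothetical, invented purely for illustration:

```python
# A toy illustration (not from the lecture) of value-laden code:
# every weight and feature below is a value judgment by the coder,
# whether or not it was ever examined as one.

def resume_score(years_experience: float,
                 employment_gap_years: float,
                 elite_school: bool) -> float:
    score = 2.0 * years_experience       # choice: experience is what matters
    score -= 3.0 * employment_gap_years  # choice: gaps are penalized, which
                                         # quietly disadvantages caregivers
                                         # and veterans
    if elite_school:
        score += 5.0                     # choice: pedigree is rewarded,
                                         # encoding the coder's social values
    return score

# Two similarly capable candidates diverge purely on the encoded values:
print(resume_score(10, 0, True))   # 25.0
print(resume_score(10, 3, False))  # 11.0
```

Nothing in the function is malicious, and it runs exactly as coded; the ethics live entirely in which features and weights the coder chose.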
With respect to national security, why develop these disruptive technologies? Perhaps if we did not develop disruptive technologies, our adversaries wouldn't either. But I seriously doubt that our adversaries, who are in pursuit of dominant global power, would remain passive. In global competition, having a competitive advantage is important. As President Putin of Russia said, whoever wins the artificial intelligence race will rule the world. We see China as a primary threat in artificial intelligence and quantum computing.

Now, if an adversary uses cyber for a ransomware attack, locking a computer and demanding a ransom, what is going to be the response? Do we pay the ransom, or do we demand that the system be released? We have seen numerous hacks recently on hospitals in the midst of COVID. This definitely affects the care of people in an adverse way, but adversaries see this as an opportunity. The city of Atlanta was shut down by a hack. They refused to pay the ransom, and getting the technological experts to correct it cost a whole lot more than paying the ransom would have, but they stood by their principle of not giving in to hackers. So we can see that hacking and artificial intelligence can be detrimental, or certainly a risk, to our national security. A year ago this month, as I mentioned earlier, the DOD AI ethical principles were implemented, because we recognize that we must start speaking nation to nation, government to government, leaders to leaders, in sharing ideas and concepts of governance of artificial intelligence. The European Union Parliament is further ahead of the United States in ethics of technology, and in governance in particular.

So let's take a little closer look at artificial intelligence. The computer analyzes and processes digital information. It has the ability for reasoning, problem solving, and learning. Its uses are unlimited: it can predict behavior and job performance; it can track your health in real time. AI is a powerful tool in decision making, providing speed of execution, less bias, operational ability, and accuracy. The brain operates on about 20 watts, but an AI system can draw on 10 million watts. So using AI is important in the decision-making process, because it can process a lot more data in a very short amount of time to enable the commander to make decisions.

The term AI has been misused. Artificial General Intelligence, AGI, is human-level or higher intelligence that encompasses machine learning from neural networks. Narrow AI describes systems designed for a specific purpose or function: the robots that get your Amazon order, your home security system, or maybe the iRobot that is vacuuming your floor. We have a way of thinking that AI is way out there, in Terminator or Ex Machina, the film with Ava as a humanoid sentient being, but we're not that far along, and it would take an awfully long time to get that far. Nevertheless, AI is getting smarter and smarter as it learns and as it develops. One of the questions we have to deal with is: how do we deal with the biases that come into AI? One of our students this past year developed a model for the Joint Artificial Intelligence Center examining a decision-making process and pulling out biases through testing and evaluation. And that has to happen consistently, time and time again, as nuances occur in running the algorithms.
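As one minimal sketch of what such testing and evaluation can look like, here is a simple fairness check (demographic parity); this metric is my assumption for illustration, not necessarily what the student's JAIC model used:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Share of favorable outcomes (prediction == 1) per group, and the
    spread between the best- and worst-treated groups."""
    favorable = defaultdict(int)
    total = defaultdict(int)
    for pred, grp in zip(predictions, groups):
        total[grp] += 1
        favorable[grp] += pred
    rates = {g: favorable[g] / total[g] for g in total}
    return max(rates.values()) - min(rates.values()), rates

# Re-run after every model or data change ("time and time again as
# nuances occur"), and investigate any gap above a chosen threshold.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)               # {'A': 0.75, 'B': 0.25}
print(f"gap = {gap:.2f}")  # gap = 0.50 -> flag for review
```

A check like this catches only one kind of bias; the lecture's larger point stands, that the evaluation has to be a continuous loop rather than a one-time test.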
The question we ask is: how do we ethically respond to our adversary's offensive use of artificial intelligence against us? What moral authority does the commander have, and where does artificial intelligence authority end? In our AI ethical principles, the commander is always in the loop. The commander always makes the decision. The commander has the moral authority. Therefore, the commander has to be better prepared in ethics to make these decisions, as well as in using what the artificial intelligence provides.

Some of you are aware of Project Maven. It was a project to look at imagery and improve drone strikes. It was designed specifically to narrow the target to the person and to reduce or eliminate the collateral damage, the collateral kill. But about 3,000 employees at Google did not like this idea. They did not like the fact that Google was participating in a killing machine, and they protested to the company, demanding that it drop the contract; they effectively forced leadership to abandon it. Certainly, another company picked that work up. But the question I have is: by what critical thinking measure did the Google employees analyze the competing ethical values? Did they use the five lenses we mentioned earlier? Did they look at it through the framework in order to make a rational decision, including the national security aspects of their product? I doubt it. I think their decision was initiated by the power of social media that triggered emotions. And that is the ethic of emotivism, where, as a company, decisions are made out of emotion. Other companies have had the same response, as recently as a year ago this past summer. That is a threat, when employees can demand that contracts be canceled and leadership follows through. It requires that we have a closer relationship with technology companies as we move forward in this development.

Neurotechnology: these are companies involved in deep brain research for mind-reading purposes, brain function enhancement, and neuroweapons. You may remember that in Cuba and in China, a number of diplomatic personnel and their families suffered brain damage due to some type of technology that was used as a weapon against them. There is the example of a soldier who has been injured in combat: can a chip in his brain help him recover the use of his limbs or his prosthetics? And there is also soldier enhancement. When we enhance soldiers with different skills and abilities for conflict and combat, what are the consequences when that service member retires or gets out of the service? How do we handle that technology? What are his or her rights with respect to the human enhancements they have? These are just some of the questions in neurotechnology that are examined.

Biotechnology: well, we all ought to have some expertise in this, because we are experiencing the COVID-19 pandemic, and it poses a threat to national security, as we have witnessed for the past 10 months. I remember, about four years ago, a former assistant surgeon general came and spoke to our EMT cohort. He said: rats, bats, and fleas carry disease. He said: a pandemic is coming, I can guarantee you that, but we're not prepared for it. And now we see that that was the truth. He also pointed out that we had to shut the borders down, and some of the students gave pushback. He said: look at me. I'm Pakistani and I'm Muslim.
We have to shut the borders down, because we do not know what is coming across that border. We don't know what diseases are coming across, the highly communicable diseases. And so biotechnology opens a broad path of tremendous ethical issues that we look at. Just last week on Showtime, I watched Citizen Bio, which was quite intriguing. It's about biohackers experimenting with CRISPR-Cas9 gene-editing technology in their homes, garages, and basements. They lack protocols, ethics, oversight, and accountability. It's quite interesting to see this happen. In fact, you can go on the internet and buy your own CRISPR gene-editing kit for a few hundred dollars. That is a threat to national security.

The last one I want to cover is social media. Facebook had 2.7 billion active users in the second quarter of 2020. They collect everything you do: every keystroke, every purchase. Everything that happens is collected, and it's turned around and sold. You and I are the products of social media, for others to make a lot of money. An interesting program that was on Netflix recently, The Social Dilemma, featured Tristan Harris, who was a design ethicist at Google, and talked about the unethical activities, how it was designed to be addictive, to keep us scrolling through the pages, and then our feelings are hurt because we don't have enough likes on a post. The thing about social media is that everybody has a megaphone and a platform to speak. In fact, one of the experts who has come in to speak to us, Ethan Zuckerman, has just published a book called Mistrust. It is about the social media insurrection against traditional institutions. It's quite enlightening. Yuval Noah Harari, the Israeli scholar and strategist, says that we're moving from surveillance to data surveillance, where everything is collected. So you can see the challenges that we face with these technologies in ethics and technology.

Now, the EMT program: to date, we have had a total of 29 people complete it, and they continue to research these hard questions as to what the ethical implications are. The program is designed to bring in subject matter experts such as Matthias Scheutz, Dr. Molly Jahn, George Lucas, and Wendell Wallach, who are on the front line of ethical issues in technology. We use the word futurizing around the war college all the time, but no one had a definition for it, so we developed a one-day course, an exercise in what it means to futurize. We typically like to look at ethics in terms of what has happened in the past, but our program looks at the likelihood of what might occur in the future. We also have a zeitgeist project. The zeitgeist is the spirit of the age: technology impacts history in a period of time through the arts, economics, politics, values, policy, and other domains, and that does play a role in our decision making. And we also work with collaborators in this program. Here are some of the titles that have been written on in the past. We collaborate with Lincoln Labs, Brown University, MIT, Carnegie Mellon, and the Joint Artificial Intelligence Center. You might ask: so what is the future of the ethics of technology? Well, it has a bright future.
In fact, it is so important that Blackstone CEO Stephen Schwarzman pledged $188 million to Oxford University last year to stand up a center for the humanities and technology, with a focus on the ethics of artificial intelligence. Initially, he gave $5 million to Harvard University for the same purpose. And this past August, IBM contributed $20 million to establish the Notre Dame-IBM Technology Ethics Lab at the University of Notre Dame. I have begun to get calls from companies who are trying to get contracts with the JAIC and are looking for people to help with their ethics. It has become a major concern, not only for the military but also for American business. So I want to invite you to be proponents of the ethics of technology: let's talk about it, spread the word about it. I thank you for this opportunity to be with you today, and I turn it back over to Professor Jackson.

Well, Tom, thank you very much for a thoughtful discussion. I did not appreciate being told my brain is so much slower than an artificial intelligence, but I kind of knew that already. We don't have time for a lot of questions, but let's go with one of the big ones. If we had an artificial intelligence so sophisticated that it was charged with protecting the Earth, would it potentially decide that humanity was the worst thing happening to the Earth and attempt to purge us?

Well, we are the biggest polluters of the Earth. But that is getting into the human-thinking level of AI, and I think we're a long way off from that. I know Wendell Wallach, in his book Moral Machines, says that machines will eventually supersede humans in ethics, because, one, we're emotional; two, we have biases; we also change our minds; and we're slow, as was indicated earlier. That would eliminate a lot of jobs. It would also put the machine in power, as stated earlier; we would have a technopoly over society. I think we're a long way off from that. But at the same time, we have to know what our adversaries are doing and what their objectives are. Because sometimes when we look at how we treat the Earth, conservation, sustainability, our adversaries don't necessarily have that same view. And so it's important to futurize about how they're thinking and what they're doing. This is all about smarts. It is about brain power in solving these problems and complexities.

Very good, thank you. When we talk about ethics of any sort, military or otherwise, it requires people to look internally, to be self-reflective about what they feel are the appropriate ethical decisions and processes. Do people naturally want to be reflective, or is it something that we have to encourage them to do?

I think that we have to encourage people to be reflective, to think. In our program, in EL730 Ethics of Technology, they're required to reflect on the presentations and their readings. Not a repeat of what was said, but where did the discussions and the questions take that individual in their writing? And we find quite profound thoughts from the men and women in the program. Reflective learning is powerful, and it's part of the process when we go through that ethical framework with those values. Because we have to think about: what do I believe? We lead from what we believe. Our ethics and values are reflected in how we lead. We see that in commanders, and we see that in other leaders. And do we want to follow that person? So we have to set high bars for ourselves in ethical thinking and in reflection on our own values.
And certainly, if we don't make any mistakes, we have nothing to reflect upon, because, hey, we're all right. But if we learn from those mistakes, hopefully we are not punished to the point that we bow out of making decisions, that we lose courage, that we are overcome by fear. It takes a courageous person to reflect on themselves, to understand who they are and why they make the decisions they make. So reflection is important on many different levels, from the very personal, soul level to the ethics of technology in the broad, global sense.

Well, the algorithms that are used by these AI systems are ultimately input by human beings. Do the humans inadvertently add their own bias? And once the AI is utilizing machine learning, have we seen instances where the machines have become racist because of what they have read on the internet?

Good question. I don't have all the answers to that, because some of this is far beyond me; it would take some of my younger, smarter students to engage it. But I would say absolutely, human bias enters into those algorithms, whether it's intentional or unintentional. That is why testing and evaluation and ethics learning is a constant cycle, going around and around. And in machine learning, a system can definitely develop bias because of what it sees on social media and what it reads; that has been proven as well. Again, it goes back to testing and evaluation and making those adjustments. So it's not the case that artificial intelligence is foolproof and that we can depend upon it. That's one of the interesting facts: people trust technology more than they do humans. Professor Matthias Scheutz at Tufts University, who works on robotics, has done some great research in this area. People are willing to give up freedom for a sense of security, even though there's not a real threat.

A question about the use of autonomous weapons systems: do we believe that these systems can be designed to ensure that the issues of distinction, proportionality, and accountability can be built into a system? And can an autonomous weapon be truly autonomous? Can or should it be?

No, I don't think it should be. I think that the commander needs to be in the loop, because I personally believe that we as people have the moral autonomy and responsibility to make those decisions. We certainly use artificial intelligence to help make the decision, condensing the information and sorting through billions and billions of pieces of information that aid in it. But in the end, the commander has to assume that responsibility and be in the loop. A group, about a year or two ago, was given an AI problem, and three lieutenant commanders sat there and said that they would go to the rules of engagement. An Army War College ethics professor and I were sitting together, and we told them: you have to make a decision. There is not a rule of engagement for every moral dilemma that you're going to face in conflict. And they had the hardest time, because they were fearful of making decisions on their own two feet. Again, this reinforces the need for ethics education in engaging technology.

Very good, Tom. I think we've about exhausted our time here. Do you have any closing comments you'd like to offer before we wrap up?

Well, I want you to know that the men and women in the EMT graduate program are making a difference.
Their research contributes to national security policy, and it is also publishable professional research. I like to say, and I'm proud of these men and women, that they are the ones who made this program what it is, because of the hard work that they have done. And it's getting recognition from people in Washington, the Wall Street Journal, and other think tanks and institutions. So your support for this in any way possible is greatly appreciated. Thank you for this opportunity to share with you about the ethics of technology. Thank you.

Thank you very much. I think, once again, the Naval War College is looked upon as a source for scholarship and new ideas and creative thinking, and the EMT program certainly supports that effort. So thank you, Tom. I appreciate it.