After the success of the Future Warfighting Symposium in its first two years, I talked with colleagues here at the college and recognized the obvious. Over 20% of our student body is made up of international officers, and yet here we were talking about emerging technologies and space and cyber warfare only amongst ourselves. So this year what we're implementing is the recognition that we're in this together. And the epitome of this recognition is in the next events this afternoon, both in this afternoon's keynote from General Ryan and then, in the final evolution, a panel discussion with four former chiefs of navy, from the Indian Navy, the Royal Norwegian Navy, the Japanese Maritime Self-Defense Force, and the Colombian Navy. So as I introduce our guest, I wanted to give you this sense of context about why we're all here today.

But now it gives me great pleasure to introduce Major General Mick Ryan, who has served as Commander of the Australian Defense College since January 2018. After graduating from the Royal Military College in 1989, he took command of a battalion group in East Timor in 2000. He was a planner for the development of the first Australian Defense Force Network-Centric Warfare Roadmap in 2005. He served as the Deputy J-3 for Multinational Security Transition Command Iraq in Baghdad. From 2006 to 2007 he commanded the 1st Combat Engineer Regiment. He also commanded the first Reconstruction Task Force in Southern Afghanistan from August 2006 to April 2007, and was awarded the Order of Australia for the command of this task force. In 2008 he served in Army Headquarters on the Adaptive Army Strategic Reform Initiative and as the Military Assistant to the Chief of Army. He then came to the United States, where from 2010 to 2011 Major General Ryan worked in the Pakistan-Afghanistan Coordination Cell on the U.S. Joint Staff.
And during this time he also led the Joint Staff effort for the President's Afghanistan-Pakistan Annual Review for the NSC. In January 2013 he was appointed Director General Strategic Plans in Army Headquarters, where he was responsible for the Army's contribution to the Defense White Paper and Force Structure Review. Major General Ryan has a bachelor's degree in Asian Studies from the University of New England and is a graduate of the Australian Defense Force School of Languages. He's a distinguished graduate of the United States Marine Corps Command and Staff College and a graduate of the Marine Corps School of Advanced Warfighting. In 2012 he graduated with distinction from Johns Hopkins University's School of Advanced International Studies, earning a master's degree in International Public Policy. From 2014 to 2016 Major General Ryan commanded the Darwin-based First Brigade, the Australian Army's oldest and most operationally experienced combat formation. And from February 2016 until October 2017 he led the education, training, and doctrine efforts of the Army as Director General Training and Doctrine. Please join me in welcoming our guest, Major General Mick Ryan.

Thanks, Mike. That's a very nice bio. I'm not a Hemsworth, just so everyone's aware of that. After Peter Singer's comment this morning, I have no relationship to any of the Hemsworth boys, if that isn't already very, very obvious to you. So Mike's asked me to talk about leadership in the cognitive age, and I'll talk a bit about that. As Commander of the College back home, I've recently been very invested in that most important six inches on the battlefield, which is between your ears. So what I'll talk about is how we as leaders might think about generating an intellectual edge over the next couple of decades, and how technology might influence that.
Now, I was recently in Silicon Valley and in Seattle, only about six weeks ago, looking at a bunch of technology firms, and it was really interesting. They all have very different approaches to the world and very different cultures, and they're rolling out some truly groundbreaking technologies. The sentence that stuck with me the most is that we will never again think or work as slowly as we do today. Now, I don't actually think that's an exaggeration. Indeed, it might underestimate the impact of the acceleration of change in the environment we're seeing around us. I know this has been a theme this morning, and I'll talk about the acceleration of change, but more as a driver rather than as a discussion point in itself.

I think it's true to say the world now sits at the edge of a precipice where humans and machines will collaborate in a much more symbiotic way. The rapidly evolving capabilities of artificial intelligence hold the promise of better and faster decision making by military leaders. The human councils of previous eras will likely be replaced with AI decision support tools, and frankly, in as little as a decade I don't think it will be possible to generate advantage at most levels of the military, and of other human endeavours, without assistance from some form of AI. I think this will be a really interesting topic to talk about during Q&A. It therefore behoves us as military leaders to anticipate what this means for our organisations, for our ideas and for the development of those who follow us, lest our institutions join the long queue of military forces whose failure to anticipate change has seen them suffer catastrophe.

Now, in an excellent book on military innovation, Dima Adamsky describes how a military institution needs to figure out the tools of war, the hardware, and anticipate their application, the software. The task with regard to software will be much more demanding, as Dima says, and a cultural approach will be indispensable to it.
This domain of military software, the concepts, innovative structures and processes, and intellectual preparation of military leaders, is where wars can be won and lost before a shot is fired. Insufficient attention in this area results in a capability gap in military organisations that may be difficult to perceive from the outside but is critical in preparing for and conducting military operations. Now, this gap, which I describe as a military software gap, can result in a failure of imagination, a failure to anticipate, and a failure to learn and adapt, which is the famous Cohen and Gooch prescription of military failures. It's a gap that has caused military failure and catastrophe from antiquity through to modern times, and it's a gap that, unfortunately, is entirely owned by military leaders.

Therefore, I'd like to explore in my talk this afternoon, and in the Q&A session, how military forces, as well as the broader national security community, might apply knowledge of advanced technologies to build an evolved intellectual edge and prevent the formation of this software gap; in particular, how the application of AI might assist our organisations to build this intellectual edge in our strategic as well as tactical leaders. But before I get to that, I do want to build the case for why we need to do this, and it comes down to a very simple driver: speed.

Now, changes in the global environment in geopolitics, demographics and technology are occurring against the backdrop of what Klaus Schwab has described as a fourth industrial revolution. This is a revolution that's underpinned by connectivity, biotechnology and silicon-based technologies that include artificial intelligence. Adding to the complexity for national security planners, the reemergence of China as a powerful player has driven a reassessment of national security policies in many nations. But these developments have many historical precedents.
The first industrial revolution led to a proliferation of technology and manufacturing on a scale not witnessed before. It was followed by another industrial revolution from the late 1800s into the early 1900s, which resulted in motor cars, airplanes, wireless communications, assembly lines and widespread electrification. Some have characterised this second revolution as the greatest technological discontinuity in history. And the last three decades of the 20th century, which witnessed the birth of space travel and the explosion of cheap computing and connectivity, have been described as the digital revolution.

Now, what does distinguish the current era from its predecessors is the pace of change, and I know we've discussed this a little bit this morning. Max Boot has written that innovation has been speeding up: it took over 200 years for the gunpowder revolution to come to fruition, 150 years for the first industrial revolution, 40 years for the second industrial revolution and 30 years for the digital revolution. Keeping up with the pace of change is getting harder, and the risks of getting left behind are rising.

Now, this acceleration is also a theme in the 2017 US National Intelligence Council report on global trends. As this report notes, artificial intelligence and robotics have the potential to increase the pace of technological change beyond any past experience, and may be outpacing the ability of economies, societies and individuals to adapt. This is also demonstrated by the record of patent registrations in the United States over the period of these revolutions. During the first industrial revolution, around 10,000 patents were registered. By the end of the second industrial revolution, at the start of the First World War, the United States had registered one million patents. Over the three decades of the digital revolution, nearly 3.5 million patents were registered. This acceleration is also apparent in patents for artificial intelligence.
Between 2006 and 2011, AI patent publications grew on average by 8% per year. Then between 2012 and 2017 they increased by 28% per year on average. And while over 340,000 such patents have been published since 1960, 53% of these have been published only since 2013.

Now, the pace of changing technology is well known to all of you in this audience. We've discussed it this morning, and you're well aware of it in the broader society. As one author has recently noted, transformative technology is as old as the sundial. There are so many examples that I could use to illustrate this point, but one really stands out for me, and that's the iPhone. The iPhone 6S, released in 2015, could process information about 120 million times faster than the mainframe computer that guided the Apollo 11 astronauts to the moon. This is pretty amazing. But just two years later, the iPhone X had two to three times the speed of the iPhone 6S. That means the increment in power in just the two years after 2015 was twice as large as that in the previous 46 years.

Now, it's not just in technology where we're seeing this huge acceleration in change. It's occurring in areas such as urbanisation. The movement of people towards cities has accelerated in the last 40 years, particularly in less developed regions, and the share of the global population living in urban areas has increased from a third in 1960 to 47%. Between 1960 and 1980 there was a 5.5% increase in the percentage of the world's population living in urban areas. Over the next 20 years this percentage increased by 7.4%, and over the period from 2000 to 2020 we expect the urban population of the world will increase by 9.5%. So the increase in the pace of change is not just a technological phenomenon; it is a demographic and societal one. And of course, for us the most pressing area of acceleration is in military activities.
Many of us in this audience have witnessed over the course of our careers the profound changes in the pace at which we must undertake operations, the increasing speed with which we have to adapt between mission sets, and the speed at which the media and our higher headquarters are able to gain visibility of military actions at almost every echelon. The renowned academic Michael O'Hanlon has recently written that technological change of relevance to military innovation may be faster and more consequential in the next 20 years than it has proven to be even in the last 20. Notably, it is entirely possible that the ongoing rapid pace of computer innovation may make the next two decades more revolutionary than the last two, according to Michael. And as General Dunford recently stated, the accelerated speed of war means the ability to recover from early missteps is greatly reduced.

Now, there is a pretty simple reason why the pace of change is different in this industrial revolution. At heart, it is about physics. Previous industrial revolutions were based on building things more quickly and more cheaply, but it would be impossible to double the quantity of physical production each year. World information flows, however, are currently doubling each year. Electrons can sidestep the physical constraints that have slowed down automation and globalization in other industries such as agriculture and manufacturing. And that is why the current era is different to previous technological explosions. We are now changing at what might be described as inhuman speed.

Now, this speed has a range of implications for national leaders and policy makers, as well as community and business leaders. It's destroying and creating industries at a pace we have not seen before. It is transforming workforces. Job destruction is part of every new business model that embraces AI and machine learning, and this will flow on to our institutions.
But perhaps the most profound implication is that, regardless of industry, the generation of a competitive advantage is becoming much more difficult. And when advantage is generated, it is likely to be more fleeting than it was in previous eras. Rita McGrath has written that we now exist in an era of transient advantage, and that successful institutions must spark continuous change and avoid the rigidity that leads to failure. It is through this lens, that of constantly evolving sources of advantage, that nations will still need to develop and pursue strategies that harness all aspects of national capacity, including their military power. And a key element will be the development of an evolved intellectual edge.

Now, military leaders by our nature seek to generate advantage over potential adversaries. Historically, there have been four key sources of this advantage: geographic, technological, quantitative and intellectual. Geography has long played a central role in building a competitive advantage. Whether it is the island isolation that allowed England to build its globe-spanning empire in the 18th and 19th centuries, the vast expanses of fertile land of North America with oceans to its east and west, or the isolation of the resource-rich continent of Australia, nations have leveraged geography to assist in defending themselves. However, the advantages of geography have declined. With the hyper-connectivity of the contemporary world, long-range sea and air transport capabilities, and the ability of individuals to move almost at will to any point on the globe, geography no longer guarantees sovereignty. Space and cyber activities are also largely free from geographic constraints.

Now, a second source of historic advantage has been technology. From Greek fire to crossbows, tanks to jet aircraft, the Enigma machine to contemporary high-capacity computing, military institutions throughout history have sought a competitive edge through better technology than the adversary's.
Long the Western solution to military challenges, advanced technology now generates a smaller potential advantage than in previous centuries. As recent publications such as the 2018 United States National Defense Strategy Commission report have described, the technological edge that has been the preserve of Western military institutions for several centuries has declined. Complicating this situation, as I've previously described, where nations do generate a technological advantage, it is likely to be more transient.

Now, a third source of historical military advantage has been mass. Generating a larger force than an adversary has long been the aspiration of us in the military. Whether it is the capacity to generate forces to achieve local overmatch at the tactical level, or to build a large force capable of operations in many different parts of the globe, mass has played a crucial role in historical military success. This conception of mass does not just include the number of people in uniform, although this was generally useful until the 20th century. As the two world wars demonstrated, successful military mobilization had to also include efficient mass industrial mobilization. And as shown by the United States and the USSR, the capacity to mobilize large numbers of people, and to ensure industry can keep them adequately equipped, fed and supplied, became the acme of military skill in the first half of the 20th century. Unfortunately, though, most Western nations no longer possess the capacity for large-scale industrial mobilization to build military hardware, and they are likely to possess numerically smaller forces than potential adversaries.

Therefore, Western military organizations face challenges to the three macro sources of traditional military advantage. We must turn to a greater investment in the remaining source of advantage: the intellectual edge.
It not only provides a source of strength and addresses the software gap, but can also be used as the binding agent that combines other marginal sources of strength into a greater whole. Now, this clever use of military force within a smart use of all aspects of national power is built on the possession of the best ideas, applied in tactics, operational concepts, strategy and organization. I would highlight, too, that this is not an argument to eschew other sources of advantage. That would be really dumb. My argument, however, is that it is only in democratic nations, where we can nurture the capacity to explore the full range of options on any given topic, that the intellectual edge can find its full manifestation and be completely exploited by military institutions.

Now, I think this intellectual edge manifests in two different but interconnected ways: the individual and the institutional. The intellectual edge for an individual is the capacity for a person to creatively outthink and outplay potential adversaries. It's founded on the broadest range of training, education and experience that can be provided by institutions, as well as a personal dedication to continuous learning over a long period of time. Increasingly, this intellectual edge for an individual will be underpinned by cognitive support through human-artificial intelligence teaming. Described as System 3 by Dr Frank Hoffman, this still nascent field in the collaborative application of biological and machine intelligence will increasingly be core to the development of our future leaders.

The second manifestation of the intellectual edge is institutional. While the intellectual edge in individuals is vitally important, so too is a collective, institution-wide intellectual edge.
This comprises an organisation's capacity to effectively harness the disparate and diverse intellects of its individuals to solve complex institutional problems in the short, medium and long term. This institutional intellectual edge must be applied to the challenges of force design, operational concepts, the integration of kinetic and non-kinetic activities, and personnel development and talent management. And this institutional manifestation demands excellent leadership.

Therefore, nations are challenged in the changing security environment to build, sustain and adapt the intellectual edge in individuals and at various levels in organisations. Now, this isn't a new challenge. What does compound the challenge, however, is the historically unprecedented speed of change in the environment that I discussed earlier. It demands the capacity to rapidly think through challenges and develop counters, and to do this on a continuous basis across the spectrum of tactical to strategic activities. Given the enormous complexity of this problem, enhancing biological sources of the intellectual edge with silicon-based intelligence appears to offer one pathway to an enhanced advantage for nations in the 21st century. This AI support will augment the creative and contextual abilities of humans, not displace them. One recent article has proposed that a human coup d'oeil might be augmented by a data-fused cyber coup d'oeil that supports human decision-making. It will be an increasingly fundamental approach if humans are to retain a full measure of decision authority in an environment of rapidly increasing tempo in military operations.

Now, I propose there are two key theories that are relevant in the application of artificial intelligence to assist human cognition. The first is the foundational theory of the extended mind, which explores how human cognitive processes are extended into the world.
It's a theory that is important to applying artificial intelligence to human decision-making because it proposes that tools outside of human biology can serve as extensions of human cognitive states and processes. The second theory is that of AI extenders. This is a nascent approach that explores how the extended mind thesis can be applied to supporting human decision-making.

Now, in recent decades, the extended mind thesis has gained traction in cognitive science and in the philosophy of mind and knowledge. This thesis denies that cognition is limited to individual minds or brains. While some authors have argued that individual cognitive processes spread well beyond biological boundaries, the provenance of the extended cognition thesis is commonly recognised as the 1998 paper, The Extended Mind, by Andy Clark and David Chalmers. Their thesis describes how the tools that humans use to assist them in completing cognitive tasks can become seamlessly integrated into their biological capacities. The key idea here is that tools and biological intelligence together play an indispensable role in bringing about human cognitive functions. A standard example is the important role that pen and paper may play for a mathematician in solving complex equations. These cognitive tools are more than just tools; they are incorporated as part of our minds.

So the extended mind thesis offers a simple, useful and explainable theory for improving human cognition. It suggests we might be more capable and better at a range of different functions if external technology is appropriately integrated. And if these external technologies are highly accessible, reliable and constantly available, they may even provide humans with extended or novel cognitive capacities. I think the pathway to realising this is the concept of AI extenders.
Now, in a 2019 paper, José Hernández-Orallo and Karina Vold propose that artificial intelligence might allow for the extension of human cognition to new capabilities not conceived in the extended mind thesis when it was published in 1998. This extension of human cognition with artificial intelligence is distinct from fully externalised uses of AI: there is no autonomy for the AI that's involved. It is truly an extension rather than an independent agent.

Now, there's a broad spectrum of functions in which AI may be used to extend cognition and permit the development of an AI-enhanced human intellectual edge. Hernández-Orallo and Vold have proposed a range of different functions of human cognition that might benefit from AI extenders; these augment existing biological cognitive processes to permit humans to think through problems and develop solutions in a way that may not otherwise be possible. Noting the very immature nature of our understanding of how human cognition might be extended with AI, I would propose that our first steps into this new world should be with the most basic of cognitive functions that our people and our leaders apply routinely. And I believe there are five: enhanced memory, attention and search, comprehension and expression, planning and execution, and metacognition.

Now, on enhanced memory: recently, researchers at the University of Pennsylvania have shown how machine learning algorithms might be used to stimulate, decode and enhance memory. In a different approach, Elon Musk's Neuralink is researching high-bandwidth connectivity between the brain and computers to allow a human-AI merger. The rapid advances in this field, as well as in neurotechnology, indicate that the enhancement and augmentation of humans through brain-computer interfaces is possible in the short to medium term. The application of AI extenders for enhanced memory is likely to have wide application in the military and in wider national security circles.
Now, in attention and search, humans frequently ignore or overlook objects or activities. This can include information that is of deeper, longer-term importance but lacks shorter-term context. It's only in hindsight that the importance of some information within a larger picture is recognized. There are many examples that illustrate this, including the failure of imagination discussed in the US 9/11 Commission report. AI extenders might allow individuals or teams to examine large amounts of information through multiple live feeds and databases in order to identify things, or bring focus to issues, that humans, or humans in different-sized teams, might otherwise overlook, discard due to groupthink, or fail to appropriately prioritize. Recent advances in AI fields such as network optimization, facial recognition and synthetic training data are likely to contribute to the development of AI extenders in this area.

Now, with comprehension and expression, these extenders may provide humans and human teams a significantly improved understanding of information. Systems monitoring various activities, events, individuals and groups might be able to report probabilities of events (for example, enemy actions) or quantities (an adversary's size, or its industrial capacity to produce precision munitions, for example) with very short lead times. Contemporary real-time analytics platforms such as IBM Z and Amazon Kinesis show promise, particularly for developing real-time tactical situational awareness.

In planning, deciding and executing activities, military organizations operate across a range of organizational levels and timescales that demand well-honed short-, medium- and long-term planning capabilities.
These processes could be significantly enhanced, and potentially sped up, through the application of AI extenders in developing models of action, testing and modeling various activities against known and projected enemy capabilities, and comparing different courses of action for their capacity to achieve higher-level outcomes. AI extenders may also be able to model the networks, and anticipate the decisions, actions and interests, of other actors outside of deliberate or formal planning activities. The application of AI extenders to support this will be founded on advances in high-capacity computing and the nascent field of generative adversarial networks.

Now, the functions of AI extenders I've proposed here do not comprise an exhaustive list. As institutions begin to apply these AI extenders to a widening array of activities, more functions will be discovered. So these initial functions provide useful first steps only in exploring how an AI-extended intellectual edge might manifest in military and national security affairs. It is therefore worth exploring how the intellectual edge might be improved at the key levels of military activity: the tactical and the strategic.

Now, there are two important military decision-making layers for this edge, as I've just said: the tactical and the strategic. These are not the only levels of decision-making relevant to military activities. Policy-making drives military strategy, yet is largely the realm of civilian leadership. Operational decision-making is another layer, resting between tactics and strategy. However, given that tactics and strategy sit at either end of the extremes of military decision-making, they are worthy of initial attention. I'm sorry for the eye-chart here; that wasn't my intention. But with AI and tactical decision-making, the intellectual edge is about success at the sharp end of military endeavors.
Historically, this has been measured largely by physical actions within a complex context, but it is increasingly shaped by cyber and other influence activities. As AI starts to be applied to tactical activities across the land, sea, air, cyber and space domains, it will start to change the balance of power in tactical military endeavors. Kenneth Payne has recently noted that this will change the utility of force by enhancing lethality and reducing risk to societies possessing AI war-fighting systems, and that a marginal technological advantage in AI is likely to have a disproportionate effect on the battlefield. It may or may not, and we can discuss that in the Q&A session.

Now, I've illustrated on the slide here some areas where an AI-extended intellectual edge might be used in tactical actions. It has multiple possibilities for decision support at the tactical application layer. A capacity to support the alignment of tactical actions with higher aims is just the tip of the iceberg. Using human-in-the-loop and human-on-the-loop systems, forms of AI may be applied for rapid decision making. Other yet-to-be-developed AI might support the integration of joint capabilities and assist tactical planning through rapid simulation of the outcomes of multiple options.

Now, in many respects strategic thought, as well as the development and execution of strategy, represents the ultimate manifestation of the intellectual edge for military professionals. It requires years of dedicated study and experience across a broad range of endeavours to master the art of strategy. Regardless of the type of disruptions that might be witnessed in the strategic environment, strategy will remain a central preoccupation of military institutions and nation states. However, how it is developed, and the speed at which it must evolve, is being disrupted.
Now, as the table on this slide describes, the five key extender functions that I described earlier can be applied to strategy development and execution in the near future. This intellectual edge at the strategic level is a function of best matching purpose to action, as Colin Gray has described. Building on this theme in his book on the future of strategy, Gray has also written that the most enduring function of strategy is the management of potentially lethal dangers: we need to get strategy right enough to enable us to survive the perils of today, ready and possibly able to cope strategically with the crises of tomorrow. So these enhanced functions of AI for strategy are designed to get at that goal.

Now, the tactical and strategic levels of military endeavor are just two examples of how we might use AI extenders. There are a range of other human endeavors across society, government and business which might potentially benefit from the use of AI extenders to provide an extended intellectual edge. I think successfully achieving military and national security objectives in the 21st century will demand that our institutions realise the potential of all our people in a way that nurtures and celebrates this intellectual edge, supported with appropriate AI. But it requires an institutional mindset that doesn't replace humans with machines, but rather replaces some lower-order human cognitive functions with bespoke AI. It should permit humans to apply their cognitive capabilities to generating diverse and imaginative options for complex problems. And it requires disciplined but adaptive institutional leadership to achieve it.

So I'd like to conclude with what this means for our institutions and for developing military leaders. First, we need a plan, and it needs to be multi-phase and adaptive. The development of military leaders through education, training, experience, talent management and other mechanisms provides the essential software of a military institution.
An institutionally endorsed view of how military personnel, especially their leaders, will make AI-supported decisions is required. In developing and executing this plan, there will be human and organisational barriers to overcome. This is just the nature of organisational change. But while always challenging, these institutional changes can be helped by a clear explanation of purpose, of why AI will be used to support decision making. This should form part of an expansive view of future military capability and national security policy. Now, given the potential magnitude of change that's likely, these plans should include a compelling organisational vision. Now, Carl Builder in the early 1990s did some great work on what an organisational vision for a military institution might look like. He believed that such visions must take into account whether a new era is likely to change the essence of the military organisation. For us, the question is whether this new era of human teaming with AI is going to change the essential nature of the organisation in which we serve. We should ensure that our institutional visions for human-machine teaming inspire our people, providing a sense of identity that is attractive to the people in the organisation. These visions need to be realistic and relevant, with a realistic appraisal of the challenges and the opportunities we face. For our leaders, this means they must build institutional strategies for what is a significant shift, and they must act as facilitators of change and innovation. This isn't an easy skill for us to master. Organisational culture and deeply ingrained personal and institutional habits can obstruct even the most creative and energetic innovators. Our leaders at all levels must act as agents of change in a nurturing environment where innovation and creativity are encouraged and incentivised.
Military institutions therefore need to educate future joint officers, service officers and their civilian peers to take on this role of strategic innovation supporter and nurturer. Can I go back one slide please? Thanks, Steve. So second, we need better strategic engagement and scanning. Engagement between like-minded military institutions, between services and between like-minded nations must continue to evolve and embrace a greater sharing of ideas on the application of AI, particularly in educating our people. There is a wide array of ideas in military education being shared online, but few of these relate to the use of AI. And hence sharing best practice in the use of AI and in developing our future intellectual edge must be one of the cornerstones of the future approach to Western military alliances. Strategic engagement with our civilian universities is also essential. One author, in seeking out solutions for nation-level innovation frameworks, has recently described how a technical union might provide a framework for this engagement. This technical union would provide the basis for government and academic collaboration that draws strength from the innovative power of that nation to create technologies best suited for a geopolitical environment of competition and conflict. Engagement with the commercial sector is also vital. It's here that the day-to-day application of AI is being explored and new business models developed. A range of companies, large and small, have developed AI use cases and are already implementing them in the civilian sector. Importantly, they are developing new organisational models that allow for the rapid and high-quality decision making that this new era demands. Given we're still using organisations and processes that are a legacy of the second and third industrial revolutions, we can learn much from these structural innovations as well as from their lessons on the use of AI. Third, we need to experiment.
We need to choose use cases, learn and iterate. Holy cow, this thing hates me. Right, there we go. Military organisations must experiment with different forms of AI extenders. There are sufficient commercial AI products for experiments to begin now. Indeed, the use of AI in shifting from manual to robotic process automation is occurring now across civilian industry. It may well be that the truly revolutionary aspects of AI are not in the highly advanced and very attractive high-end applications. The truly revolutionary aspects, which free humans from drudgery and allow them to better engage their creativity, are likely to be found in the simpler, mundane applications such as automated processing. If the contemporary advances in robotic process automation in the commercial sector continue, I think the revolution will truly be in the mundane. Now, if military institutions are to effectively start using AI in a range of different functions, they will need more than just deep technical experts in the development of algorithms and the design of AI for military systems. A recent UK government report describes how a skilled workforce using new technologies should be a mix of those with a basic understanding, more informed users, and specialists with advanced skills. A more recent publication has described four areas of instruction: leadership, analytics, translator and end user. This means we need to increase technological literacy at all levels. Over the coming years, at almost every rank level, military personnel will require basic literacy in a spectrum of new and disruptive technologies, not just AI. This will include knowledge about application, how to provide a level of assurance, quality control and ethical consideration, and how to creatively combine these new technologies with new concepts and human organisations at every level.
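The "revolution in the mundane" through robotic process automation can be illustrated with a toy sketch: extracting structured fields from the kind of free-text message a clerk would otherwise retype by hand. The message format, field names and routing rule here are invented for illustration, not drawn from any real system.

```python
import re

# Recognise a simple hypothetical request format, e.g.
# "Request: leave from 2024-07-01". Anything else goes to a human.
PATTERN = re.compile(r"request:\s*(?P<kind>\w+)\s+from\s+(?P<start>\d{4}-\d{2}-\d{2})")

def triage(messages):
    """Split messages into auto-processable records and ones needing a person."""
    auto, manual = [], []
    for msg in messages:
        m = PATTERN.search(msg.lower())
        if m:
            auto.append(m.groupdict())   # structured record, no retyping
        else:
            manual.append(msg)           # unrecognised, route to a human
    return auto, manual

inbox = [
    "Request: leave from 2024-07-01, two weeks",
    "Can someone help me with my travel claim?",
]
auto, manual = triage(inbox)
```

Nothing here is high-end AI; the drudgery it removes, and the human judgment it preserves for the hard cases, is exactly the point about the mundane.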
For our leaders, that means, as members of a profession, it is an institutional imperative to build and nurture what Richard Minehart has called a committed learning environment. This must start with our leaders, and they must be powerful advocates for this approach. Leaders that nurture more than a business-as-usual professional education and development program underpin informed change in military institutions. And finally, we must build checkpoints that assure us that we are doing this in line with institutional culture and our national values. The degree to which humans rely on AI as part of these AI extenders will require scrutiny if it is to be applied in line with our national values and community expectations of our military services. As recent incidents have demonstrated, AI is just as capable of misbehaviour as humans. Instances of algorithmic misbehaviour have been identified in Google searches, where search engine auto-completion routines, assessing large numbers of historical user queries, learned to make incorrect, defamatory or bigoted associations about people or groups of people. The journey to extend human cognition with AI will therefore present a variety of ethical challenges that must be addressed in parallel with, not lagging behind, technological issues. There may be many elements of augmenting humans with technology that will challenge our traditional notions of human decision-making. For example, by what mechanism are people chosen to be the beneficiaries of AI extensions? And once having used these AI extensions for a period of time, is it actually ethical to remove them at some point in the future? Or, as Norbert Wiener explored as far back as 1954, how might humans ensure that they don't become so dependent on AI extensions that they're unable to function without them? While over-reliance may represent risk, under-reliance could represent inefficiency and potentially failure in achieving military and national security objectives.
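The auto-completion misbehaviour described above has a simple mechanism worth seeing directly: a completer that ranks suggestions purely by historical query frequency will faithfully reproduce whatever skew exists in its query log. The prefix and the toy log below are invented for illustration.

```python
from collections import Counter

# Invented, deliberately skewed query history.
history = [
    "group x are criminals",
    "group x are criminals",
    "group x are doctors",
]

def complete(prefix, log):
    """Suggest the most frequent historical continuation of the prefix."""
    continuations = Counter(
        q[len(prefix):].strip() for q in log if q.startswith(prefix)
    )
    suggestion, _count = continuations.most_common(1)[0]
    return suggestion

top = complete("group x are", history)
```

No one programmed a defamatory association; the algorithm simply learned the statistics of the data it was given, which is why scrutiny of training data is an ethical issue and not just a technical one.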
So there is some way to go before we place our full trust in the decision-making capacity of machines. Trust that machines will operate in a way that is fair and aligned with the values of their human users is an essential part of an effective human-AI teaming approach. For our leaders, this means it has never been more important to provide the why. Providing the why is an essential responsibility of our leaders. Purpose, or rationale, is more important than the what. No amount of computer-provided analysis or assistance in decision-making will change that. Leaders inspire by giving people meaning and purpose. And remember, our profession is about more than intellect. It's about achieving an optimal blend of intellect and character. There is no artificial intelligence, and probably never will be, that can provide the inspired or even heroic leadership and example that our people demand in the grimmest or even the best of circumstances. Now philosophy professor Daniel Dennett has recently written that the gap between today's systems and the science fiction systems dominating the popular imagination is still huge, though many folks, both lay and expert, manage to underestimate it. The large array of contemporary non-fiction and science fiction that explores the potential of AI is in many ways overwhelming. No single person, no single leader, could possibly expect to keep abreast of all developments in this field. But one does not need to be across all aspects of AI development to recognise its potential for assisting military leaders with our cognitive processes. The capacity of humans to make sense of a world changing at a rapid pace is diminishing. Where we must make sense of information and make decisions that involve life or death, some form of supplementation to our human cognition is required.
Now the application of AI extenders to achieve an extended intellectual edge represents the first steps that military institutions might take to improve the quality and responsiveness of decision-making by individuals and teams. These steps will also provide useful information about the micro-relationships that will form between humans and AI, to inform subsequent generations of human-AI teaming. This is an undertaking that will demand institutional leadership, the development of new visions of organizational purpose, strategic focus, collaboration with industry and academia, and tolerance of risk and failure. But it does offer significant potential advantages to decision-makers in the 21st century. Thank you. Happy to take any questions. Hi, good afternoon, sir. Ron Bess, Marine Corps. So your last quote actually kind of related to my question. You spoke at length about addressing the military software gap, and I was just wondering what sort of impediments have you seen or do you predict, be they ideological, political, or even from military leaders, in terms of creating that whole-of-nation or whole-of-government effort to address the gap? Thank you. Well, I think it's a fact that military institutions are inherently conservative. That's not always a bad thing. In fact, generally it's a pretty good thing, right? We've learned lessons from decades or even centuries of different operations. We've encoded them into our organizations, our processes, our tactics, and the way we develop strategy, because they work and they are capable of giving us a better chance of being successful in the future. But the same means by which we encode these old ideas into our organizations also provide obstacles to future change. So this gets back to leaders providing purpose. Why do we need to do this? It's the most important conversation that our leaders at every single level can have.
Fancy technology is useful and you can demonstrate it, but we need to provide that compelling rationale for why we need to undertake the change that is necessary, whether it's organisational, whether it's different concepts, whether it's different teaming concepts. But you're all here to learn to be broader leaders, to be broader influencers in whatever organization you come from. And you are part of this journey, you are part of the leadership of your institution that will be taking your people on this journey. So have a really good understanding of why you need to do this, not just in human-AI teaming but in everything you do. Provision of purpose, provision of meaning, is the most important thing that a human being can do for those they lead. McBurnett, US Army. So one of the challenges to developing functional AI is large data sets with some known solutions. And you kind of make the supposition that Western liberal democracy is kind of the best form to attack that. And I'd like to agree with that, but developing large data sets with known solutions, especially for people, typically involves some issues of privacy. And I think we've all been talking around the idea that China is kind of who we're going after in this competition. And they don't seem to have some of those concerns. So how do we generate the data sets to do the mundane in order to keep up with the competition from China? And then on the other edge, what do you think about cognitive systems? The real race, I think, is for good old-fashioned AI, the brain that is using humans as the tool, as opposed to the humans that are using AI as a tool. And if they're going after that and we're going after the mundane, are we at risk? So there's a couple of bits to that, I think. My proposition around why Western countries can achieve the full manifestation of the intellectual edge is not around data. It's that in democratic systems, there are no topics that are off the table, right?
Because discussion of certain options doesn't threaten government systems like it does in authoritarian regimes. So I think it's really that being able to discuss anything gives us that advantage in generating the most diverse range of options. When it comes to data, it's being done now. Whether you know it or not, data is being collected about you at all times. It doesn't matter whether it's Facebook or who you shop with, that data is being collected. Now, even as more robust privacy legislation is enacted across different countries, and the European Union now has pretty robust laws on ownership of data, the generation of synthetic training data sets is where a lot of companies are going, not to get around legislation, but to act in a way that's more ethical in the collection of data, and then to get their arms around the data they have and exploit it. Companies such as Splunk are great examples of companies that are able to do this, get their hands around all the data in a certain company and then exploit it for a range of different functions, including business intelligence. But I think the privacy concern is a really important point. In democratic systems, you should have some right of ownership to the data that's generated about you. The Europeans have got at that. I think in the United States and Australia we haven't quite got at that yet, but I think most of the big companies that are involved in this field are anticipating that there will be greater legislative oversight of their data collection and data retention activities in future, as there should be, I think, in a democratic country. Yeah. Good afternoon, sir. Commander Stewart, United States Navy. It's theorised that the values of Christianity may have contributed to the fall of the Roman Empire.
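The synthetic training data mentioned above can be sketched in its simplest form: learn only aggregate statistics from the sensitive records, then sample fresh records from those statistics, so no individual's actual data is shared. The records, the field, and the choice of a normal fit are all invented assumptions for illustration; real synthetic-data pipelines use far more sophisticated generative models.

```python
import random
import statistics

# Hypothetical sensitive records (e.g. ages from a personnel file).
real_ages = [34, 41, 29, 52, 47, 38, 33, 45]

def synthesise(values, n, seed=0):
    """Sample n synthetic values from a normal fit of the real values."""
    mu = statistics.mean(values)       # only aggregates leave the dataset
    sigma = statistics.stdev(values)
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) for _ in range(n)]

synthetic_ages = synthesise(real_ages, n=1000)
```

The synthetic sample preserves the statistical shape useful for training while containing none of the original records, which is the ethical trade the speaker is describing.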
Do you see a possibility that our desire to add an ethical component to technology may actually be our downfall, because we will be competing against people who may not have an ethical framework for employing these as weapons or as a competitive advantage? I think, like the speaker before lunch, I'm actually a big optimist. And while some do see comparisons between the fall of the Roman Empire and the United States, I think that's a really bad misreading of history. I think the prospects, in particular for the United States, are all positive. Your demographics are better than any other country in the world, way better than countries such as China and Japan, where they're seeing declining populations. So your demographics are very good. Despite some of the stuff you see, you actually do have a fairly healthy political system and a democracy. You have a very well-educated populace. So I actually don't see that decline happening. I think that would be a really negative reading of the facts as they are at the moment. When it comes to ethics, I think, once again to use this country as an example, it's a country that has demonstrated over and over again that it highly values human life in a way that other nations may not. That's actually a strength, not a weakness, because your populace responds to that. And when they are called on to make those ultimate sacrifices, they will make them willingly, because they know that's not what they're being asked to do all the time. They're only asked to do that in the direst of emergencies. So I think having a strong values-based approach, having a strong ethic amongst your people and in an institution, is a big strength. It is certainly not a weakness. And as for the argument that those countries who are more willing to use AI or autonomous weapons systems unethically are more likely to be successful, we need to test that out, because I'm not sure it will actually hold.
I think having a values-based approach is what's actually going to be the key to success in the future. We're the good guys, right? That means something, okay? We're the good guys, and that should mean something, because I think it really does. Yes, sir. Matthew Eber, DCMA. I was wondering, how are you going to build trust when you have the black box problem? And how do you see the increase of the attack vector with this new AI? So we already have mechanisms for building trust in military institutions, right? It's called good training, collective training, mission command, those kinds of things. I can't look inside your head if I'm commanding you, can I? So we already have some mechanisms there, but I think it's also about increasing people's literacy in just what AI is. AI is about having an algorithm that has some kind of learning mechanism, machine learning or neural networks, and access to data. That's pretty much it in the most coarse terms, but most people don't even understand that. But it's also about understanding enough to ask hard questions about the left and right of arc of the decisions that that algorithm might be able to make about human beings. Okay, so getting at this technological literacy, making sure all our people understand the basics of these advanced technologies, is a really important part of developing trust, and then we'll wrap the old techniques that we've used over thousands of years of building cohesive military organizations around that technological literacy. There are issues with AI being a black box technology, and things like the Tay chatbot are another great example of what seemed like a good idea at the time: it took a pretty innocent chatbot, over a 24-hour period, to become a fairly racist neo-Nazi. But they learned from that. And scientists are learning not just how to better build those algorithms, but also how to make them more explainable.
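The speaker's coarse definition of AI, an algorithm, a learning mechanism, and access to data, fits in a dozen lines. This is a deliberately minimal sketch with invented data (a linear model learning y = 2x by gradient descent); real systems differ mainly in scale, not in kind.

```python
# Access to data: inputs x with targets y = 2x (invented for illustration).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

# The algorithm: a linear model with a single learnable parameter.
w = 0.0
lr = 0.05  # learning rate, chosen small enough to be stable here

# The learning mechanism: repeatedly measure the error and nudge the
# parameter to reduce it (stochastic gradient descent on squared error).
for _ in range(200):
    for x, y in data:
        error = w * x - y
        w -= lr * error * x
```

After training, `w` has converged very close to 2.0, the relationship hidden in the data. That, coarsely, is all "learning" means here, which is exactly the level of literacy the speaker argues every member of the profession needs.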
And the European legislation over the last couple of years is also about making AI and those kinds of things more explainable. I think you'll see more of a demand for that in this country, as we are seeing in ours. So it's a bit of the old and a bit of the new. All right, thank you, Mike.