In 1997, Artificial Intelligence won. IBM's special-purpose computer Deep Blue defeated our greatest champion, Garry Kasparov, in the storied game of chess, sending shockwaves around the world. Chess has often been viewed as the game of distilled abstract reasoning, a vehicle to exhibit our intellect and foresight. But alas, a set of algorithms had rendered us obsolete. Deep Blue used an algorithm called minimax, which amounts to enumerating the possible board states and moves available in the future, and evaluating the value of each of those moves in the context of victory. The algorithm minimizes the maximum loss the opponent can inflict by looking as far ahead as it needs to on a decision tree. There is obviously much more to the process than what I have outlined here, but you get the point. A computational process could now play at the level of grandmasters, relegating us to a lower division. Fortunately, we had a game more storied and complex than chess that held out against AI, at least for a little while. Go, a game of territorial capture in the vein of chess, is often said to have more possible board configurations than there are atoms in the observable universe. In ancient China, it was viewed as one of the four great arts, showing how performance in a game can itself be artistic. It has a far higher branching factor than chess, making it much harder for AI to master, so in order to conquer us at it, artificial intelligence was equipped with a new method. This method is called Monte Carlo Tree Search, and ironically, it uses randomness to play the game with more precision. Monte Carlo Tree Search chooses paths randomly, plays out those games to completion, and then estimates the probability of success for each path. It then uses this information to play more effectively in the present. In 2016, DeepMind's computer AlphaGo defeated Go champion Lee Sedol 4-1 by combining this method with neural nets that were trained on a repository of prior matches.
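To make the minimax idea concrete, here is a minimal sketch in Python. The game tree is a hand-built toy invented for illustration; Deep Blue's actual search added alpha-beta pruning, handcrafted evaluation functions, and enormous custom hardware on top of this core idea.

```python
def minimax(node, maximizing):
    """Return the best achievable value from `node`.

    `node` is either a number (the value of a leaf/evaluated position)
    or a list of child nodes (positions reachable in one move).
    """
    if isinstance(node, (int, float)):  # leaf: an evaluated board state
        return node
    values = [minimax(child, not maximizing) for child in node]
    # The maximizer picks the child with the highest value; the opponent
    # picks the lowest -- minimizing our maximum loss, hence "minimax".
    return max(values) if maximizing else min(values)

# A tiny two-ply game tree: three moves, each answered by two replies.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))  # -> 3
```

The maximizer avoids the tempting 9 because the opponent would never allow it, settling for the branch whose worst case is best.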
It was a mix of looking to the past and projecting the future that beat us, not to mention absurd computational power. That was it. Go was our final bastion of hope. We have been dethroned. AI and games have been fundamentally intertwined from their inception, pushing one another to new frontiers. In his book Playing Smart, AI researcher Julian Togelius argues that AI and games are intertwined in three distinct ways. First, games have driven the evolution of AI because they provide the precise rules and narrow subdomains that AI conventionally excels at. Second, he argues AI is the future of video games, whether through robust procedural systems or narrative directors that can create dynamic stories on the fly. Finally, he thinks games and AI can give us deeper insight into something even more elusive: intelligence itself. Togelius crafts a powerful argument for using AI in games not only to improve the kinds of games we make, but also to serve the broader purpose of our interrogation into intelligence itself. Given this, understanding AI and its implementation in games is of vital importance going forward, not just for the future of games, but also for ourselves. In the beginning, AI for games was fairly simple. Early game AI was a blend of static patterns and dynamic breaks from them based on player action. With Space Invaders, though, we got a glimpse into how AI ought to be integrated into design: a limitation in the game's hardware made enemies speed up the more of them you killed, creating a dynamic sense of progression akin to what we now call flow. The most common method used by games is the finite state machine, which organizes an AI's behavior into a series of states, each with its own instructions. The AI is then driven into one of these states by a player's action or a trigger. A perfect example of this is Pac-Man. In the neutral state, each of the ghosts follows a different script for how to chase Pac-Man.
One chases him directly, another aims two tiles ahead of him, another combines these two methods, and finally, Clyde actually retreats if he gets too close. However, when Pac-Man eats a power pellet, they all start fleeing from him. They have entered a new state with new behaviors. The AI system just switches between these modes based on which state is currently triggered, and this can be conceptualized fairly easily. Even with this simple set of behaviors, Pac-Man generates deep, engaging gameplay that dynamically adjusts to the player's inputs. This kind of system extends into many games. Stealth games like Metal Gear Solid and the Batman Arkham series have enemies that follow pathfinding scripts until your actions or presence force them into a confrontational state. Pathfinding in games, incidentally, is mostly handled by something called A*. A pathfinding algorithm just tells an entity how to navigate a grid, node system, or mesh using some very simple instructions. A* calculates the shortest way to get from A to B. It works by exploring outward from A, always expanding the most promising position first, ranked by the distance already traveled plus an estimate of the distance remaining to the goal. Most games use variants of finite state machines and A* to configure their AI elements, and often, this is good enough to create compelling gameplay. This leads us to a variant of the finite state machine: the behavior tree. This was popularized by Bungie with Halo 2, and its purpose was to handle the growing complexity of creating believable enemy characters. Halo revolutionized console shooters, and was notable for its wide-open levels and distinct enemy types. This enemy variation borrowed the unit differentiation of Doom's enemies, where different sets of unique enemies create interesting priority puzzles. Because of the wide-open levels, though, Halo couldn't just rely on patterns, and had to create behaviors for idle, combat, and defensive states.
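To illustrate, here is a minimal A* sketch in Python on a small grid, assuming 4-way movement and a Manhattan-distance heuristic. This is a simplification invented for illustration; real games typically run A* over navigation meshes and weighted graphs rather than flat grids.

```python
import heapq

def a_star(grid, start, goal):
    """Shortest path length on a grid of 0 (open) / 1 (wall) cells.

    A* ranks positions by f(n) = g(n) + h(n): actual cost so far
    plus a heuristic estimate of the cost remaining to the goal.
    """
    rows, cols = len(grid), len(grid[0])
    def h(p):  # Manhattan distance: never overestimates for 4-way moves
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]  # (f, g, position)
    best_g = {start: 0}
    while frontier:
        f, g, pos = heapq.heappop(frontier)
        if pos == goal:
            return g
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # -> 6 (the wall forces a detour)
```

Because the heuristic never overestimates, A* is guaranteed to find the shortest route while exploring far fewer positions than a blind search.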
Behavior trees are very similar to finite state machines, except that they are hierarchical, creating prioritization for the behaviors exhibited by the AI. A behavior tree consists of nodes, each of which generally has multiple child nodes. Each node specifies a sequence of actions, or applies logical operations such as conditionals and negations, and the priority for which branch is selected is calculated based on what is happening in the game. Halo 2 also has what are called stimulus behaviors. These behaviors force an impulse on the part of the NPC, giving a particular action priority at a given point in the game. A common squad behavior is for smaller enemies to retreat when a member of a larger class, like an Elite or a Brute, has been defeated. Damian Isla expands on this further in his paper on the topic, but by using this method, Bungie could create more dynamic AI responses in wide-open areas, as well as manage complexity better. So now we have patterns, dynamic state adjustment, and intelligent prioritization as tools for design, and these were all brought to bear in another FPS that introduced a new method for AI: FEAR. FEAR also uses A* and finite state machines, but it modulated these with what is called goal-oriented action planning. As developer Jeff Orkin explains, they wanted FEAR to be an over-the-top action movie experience, with combat as intense as multiplayer games against a team of experienced humans. This is where planning comes in. He states that they decided to move that logic into a planning system rather than embedding it in a finite state machine, as games had typically done. A planning system gives AI the knowledge it needs to make its own decisions about when to transition from one state to another. Fundamentally, a planning system gives the AI an objective and lets it figure out what to do to achieve it.
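The selector/sequence logic at the heart of a behavior tree can be sketched in a few lines of Python. This is an illustrative toy, not Bungie's implementation, and the conditions and actions (a squad member fleeing when its leader dies) are invented for the example.

```python
# A selector tries children in priority order until one succeeds;
# a sequence runs children in order and fails as soon as one fails.

def selector(*children):
    return lambda world: any(child(world) for child in children)

def sequence(*children):
    return lambda world: all(child(world) for child in children)

def condition(key):
    """Succeeds if the named fact is true in the world state."""
    return lambda world: world.get(key, False)

def action(name, log):
    """Records the chosen behavior; always succeeds."""
    def run(world):
        log.append(name)
        return True
    return run

log = []
# The hierarchy encodes priority: flee if the leader is dead,
# otherwise fight if an enemy is visible, otherwise idle.
tree = selector(
    sequence(condition("leader_dead"), action("flee", log)),
    sequence(condition("enemy_visible"), action("fight", log)),
    action("idle", log),
)
tree({"enemy_visible": True})
print(log)  # -> ['fight']
```

Because the tree is re-evaluated from the top each tick, a higher-priority branch (the leader dying) automatically preempts whatever the NPC was doing, which is exactly the prioritization finite state machines struggle to express.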
This was done using the STRIPS formalism, which consists of goals and actions, where goals describe some desired state of the world we want to reach, and actions are defined in terms of preconditions and effects. The benefits of this system are that the developers could layer behaviors, decouple goals from strategies, and create dynamic novelty as well. In essence, there were a bevy of ways to solve problems, and the AI would dynamically shift strategies to meet a specific goal based on the context. The gameplay this creates can be quite intense, with soldiers dynamically adjusting their behaviors, coordinating their actions, and working together to hunt you down. Enemies also use cover intelligently, lay down suppressing fire, and use grenades when you are cornered, giving the impression of intelligent agents. An important thing to note is that the design of AI in games is not necessarily driven by technology, but by the demands of crafting new and engaging forms of play. FEAR's system was built to create dynamic, cinematic battles against human-like opponents, while Halo was crafting battles for wide-open spaces. Bungie also realized that simply by making their enemies tougher, they could fool players into thinking they were smarter, an insight that many game developers highlight in different GDC talks. Metal Gear Solid features finite state machines like Pac-Man's, but the enemies also have specific sensory dimensions along which they can detect players, whether it be vision, sound, or fallen comrades, bringing to bear elements of systemic design. Similarly, enemies in the Batman Arkham games will never turn around abruptly, to create a sense of predatory empowerment, and a game like Mark of the Ninja can create interesting stealth puzzles only by having predictable AI.
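The STRIPS-style formulation described above, goals plus actions with preconditions and effects, can be sketched as a search over world states. The facts and actions below are invented for illustration, and a simple breadth-first search stands in for a production planner.

```python
from collections import deque

ACTIONS = {
    # name: (preconditions, effects) over a set of facts that are true
    "draw_weapon": (set(),                  {"armed"}),
    "take_cover":  (set(),                  {"in_cover"}),
    "attack":      ({"armed", "in_cover"},  {"target_suppressed"}),
}

def plan(state, goal):
    """Breadth-first search for a sequence of actions reaching `goal`."""
    queue = deque([(frozenset(state), [])])
    seen = {frozenset(state)}
    while queue:
        facts, steps = queue.popleft()
        if goal <= facts:          # every goal fact is satisfied
            return steps
        for name, (pre, eff) in ACTIONS.items():
            if pre <= facts:       # preconditions hold, action is legal
                nxt = frozenset(facts | eff)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None  # no plan exists

print(plan(set(), {"target_suppressed"}))
# -> ['draw_weapon', 'take_cover', 'attack']
```

The key property is the decoupling the text describes: nobody scripted that sequence. The AI was handed a goal, and the ordering fell out of the preconditions, so adding one new action automatically yields new plans.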
Depending on the type of game you are making, though, having enemy AI interact with systems like in Zelda: Breath of the Wild or Dwarf Fortress, or having enemies exhibit more predictable pattern behavior, might be the better design decision. For example, the latest Doom had at its core the idea of push-forward combat, where the player would be driven forward instead of being forced into cover. Thus, the developers designed the game around this idea by having health and ammo be doled out through close-range glory kills. However, the AI system in the game was also tailored to this. If you are moving, enemies are actually less likely to hit you, further incentivizing motion, and enemies also draw from a shared token system to see if they are allowed to attack, making sure you don't get overwhelmed. This can also be seen in games like Hellblade and Assassin's Creed, where enemies will never attack you all at once, either to work with the mechanics of the game or to deliver on a particular fantasy. With immersive sims like Deus Ex, Bioshock, and Thief, enemies large and small having a predictable suite of behaviors actually allows improvisation, creative foresight, and planning, leaving players room to experiment with the systems in each of those games. With systemic games, we move into the realm of understanding AI from a higher level of abstraction. For example, Left 4 Dead, a game where you have to fight off legions of the undead with other players, has what is known as an AI director that actually monitors the level of tension felt by the player and adjusts the experience accordingly. Now we are thinking of AI in the context of something akin to an author, creator, or storyteller. Left 4 Dead's director monitors the skill of the player and the number of enemies attacking, then determines where and when to send in more special infected.
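The idea of enemies asking permission to attack is often implemented as a shared pool of attack tokens, and it can be sketched as follows. This is a deliberate simplification with invented names, capturing only the shape of the mechanism described for Doom.

```python
class AttackTokens:
    """Only enemies holding a token may attack, capping simultaneous
    pressure on the player no matter how many enemies are nearby."""

    def __init__(self, max_tokens):
        self.max_tokens = max_tokens
        self.holders = set()

    def request(self, enemy_id):
        """Grant a token if the enemy has one or a slot is free."""
        if enemy_id in self.holders:
            return True
        if len(self.holders) < self.max_tokens:
            self.holders.add(enemy_id)
            return True
        return False  # denied: circle or reposition instead of attacking

    def release(self, enemy_id):
        """Return the token when the attack finishes (or the enemy dies)."""
        self.holders.discard(enemy_id)

tokens = AttackTokens(max_tokens=2)
attacking = [e for e in ["imp_1", "imp_2", "imp_3"] if tokens.request(e)]
print(attacking)  # -> ['imp_1', 'imp_2']: the third must wait its turn
```

The denied enemy still looks threatening, it moves, flanks, and roars, but the design guarantee holds: the player faces a readable, survivable number of attacks at any instant.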
What's fascinating is that we now have the rudiments of a system that attempts to read the player and manage the player's experience, not just program AI behavior. Another game that features an AI director is Alien: Isolation. The game is a lovingly crafted homage to Ridley Scott's movies and tasks you with avoiding the titular alien that is stalking you across a remote space station. Much like Left 4 Dead, the game features a director AI that monitors the tension felt by the player and alters what the alien does accordingly. As explained by Andy Bray in his talk at a 2016 conference, the player can't be scared senseless all the time, otherwise they might simply walk away from the game. Fear is a careful blend of abject horror but also stressful anticipation, and so the Xenomorph's behavior had to be adjusted with this in mind. The director uses what is known as a menace gauge to monitor what is happening to the player and how much stress they are under. The variables used to calculate this include proximity to the alien, but also things like line of sight. This is coupled with an AI job system that gives the alien prioritized positions to search for you. The alien itself is driven by a behavior tree system, but what's interesting is that certain behaviors of this tree are locked until later in the game, giving the alien a sense of adaptability and crafting the game's difficulty progression organically. The game is a harrowing cat and mouse between you and the alien for hours on end, enabled by the AI systems at its heart. The idea that we can leverage AI to generate a certain kind of narrative experience is scarcely something we could have envisioned at the start of gaming, but here we see our first forays into this domain. The game Middle-earth: Shadow of Mordor has what is called a Nemesis system, which creates dynamic stories of vengeance and betrayal.
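A menace gauge of this kind can be sketched very simply. Everything below, the variable names, weights, and thresholds, is invented for illustration and only captures the shape of the idea: stress rises when the alien is close or visible, decays otherwise, and the director backs the alien off before the player burns out.

```python
def update_menace(menace, alien_distance, has_line_of_sight):
    """Raise menace when the alien is visible or near; decay otherwise."""
    if has_line_of_sight:
        menace += 20
    elif alien_distance < 10:
        menace += 5
    else:
        menace -= 10            # out of sight and far: let the player breathe
    return max(0, min(100, menace))  # clamp to the 0-100 gauge

def director_order(menace):
    """Relieve pressure once the player has been stressed too long."""
    return "send_away" if menace > 80 else "keep_hunting"

menace = 70
menace = update_menace(menace, alien_distance=5, has_line_of_sight=True)
print(menace, director_order(menace))  # -> 90 send_away
```

The director never controls the alien directly; it only nudges where the alien is allowed to search, which is why encounters feel organic rather than scripted.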
Enemies that kill you during the game get promoted in a hierarchy and grow stronger over time. However, the developers also made sure to give orcs unique traits to make them memorable, blending dynamic and authored elements in the design. With narrative design, sometimes inconveniencing the player can have the maximum impact. An example of this can be seen in The Last Guardian, a game where you and your mythical creature friend have to escape the confines of an isolated castle. You are given the ability to prompt the beast, but he only responds to you a percentage of the time, creating the impression that he is an independent agent. The game crafts a beautiful and haunting tale that replicates many of the mundane and frustrating elements of taking care of a pet, but this only serves to enhance the connection players feel with the beast. The problems with creating robust interactive worlds that might push us towards the holodeck are numerous though, and have to do with input limitations, recognition, and systems management. In his talk The Future of Storytelling, Jesse Schell outlines how games need to start using richer input verbs, such as text, voice, and speech recognition, if they want more robust avenues for storytelling. Similarly, Ken Levine urges us to understand the structure of modular storytelling so that we actually know how to manage director AIs and systems. The problems of interactive narrative have been written about extensively in books like Hamlet on the Holodeck by Janet Murray and Chris Crawford on Interactive Storytelling, and for now, we are still using very static systems to tell stories in games. One attempt at creating more dynamic narrative can be seen in Façade, a game where you play as a friend of a couple having a dispute. The AI director in the game crafts a narrative by intelligently choosing the next story beat based on your interactions and the need to satisfy an overarching arc.
The game also has a text parser that allows the system to recognize the player's inputs, and a reactive planner that controls the moment-to-moment performance of the characters, dynamically blending multiple simultaneous behaviors. It's a mix of Zork, Alien Isolation, FEAR, and The Last Guardian, and it has multiple outcomes that play out based on your decisions. It's a crude and limited system in some sense, but it shows us the template for what might be done on a larger scale in the future. In his book The Master Algorithm, Pedro Domingos argues that the future of AI is about learning. The idea is that we can't rely on the static AI systems we create; we must enable AI systems to learn for themselves if we are ever going to push towards new horizons. Domingos is talking about a much broader project than just games, but as we have seen before, games can play an instrumental part in realizing that future. According to Domingos, machine learning exists in five tribes, each of which replicates a natural process of the real world. There are the symbolists, who work through logical induction; the connectionists, who replicate the way our brain works with neural nets; the evolutionaries, whose algorithms replicate natural selection; the Bayesians, who deal with probability through Bayes' theorem; and the analogizers, who try to model how we think creatively, by analogy. Games have been and can be used to illustrate these domains, much like how probability theory came out of the study of games, and how poker can show how Bayesian probability works through the updating of prior probabilities. In his book, Togelius states that if we are going to realize the vision of AI-driven games, we will need AI that can adapt its behavior, learn, understand, and even create new games. He uses this to introduce how machine learning is used in games. Evolutionary algorithms replicate evolution by creating variation, a way to store hereditary characteristics, and a selection funnel called a fitness function.
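Those three ingredients, heredity, variation, and a fitness function, can be sketched in a few lines of Python. The toy goal here (evolving a bit-string of all 1s) is invented for illustration; a game like Galactic Arms Race swaps in weapon parameters for the genome and player usage for the fitness function.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

def fitness(genome):
    return sum(genome)  # more 1s = fitter (the "selection funnel")

def mutate(genome, rate=0.1):
    """Variation: each inherited bit has a small chance of flipping."""
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

# Heredity: genomes are stored bit-strings, copied into the next generation.
population = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
for generation in range(50):
    # Selection: keep the fittest half...
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # ...and refill the population with mutated copies of survivors.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print(fitness(best))
```

Because the survivors are carried over unchanged (elitism), the best fitness can never decrease, so over enough generations the population climbs toward the optimum without anyone ever designing a solution by hand.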
An example of this can be seen in the game Galactic Arms Race, which creates weapons using a system that selects for weapons based on how much players actually use them. He also talks about neural nets, which use something called backpropagation to learn, and about reinforcement learning, which teaches AI how to act through reward and punishment. Neural nets can be used to teach AI to recognize faces and cars, and even to compose music. An example of a game that uses these techniques is Black & White by Peter Molyneux. In Black & White, you are a god tasked with getting the denizens of the land to worship you, and you are eventually given a creature. What's fascinating is that you can teach the creature, and it will then think and act based on what you taught it. Creatures can learn facts about the environment, how to do tasks, and what strategies to apply in certain situations. After a creature completes a task, the player can use the mouse to stroke the creature affectionately or give it one or more slaps. Much like how reinforcement works with a child, this system allows you to train the creature to your own desires. The creature also has metrics by which to craft an internal life of beliefs and desires based on the information it has gathered. Machine learning is the new frontier of artificial intelligence research, and if Togelius is right to think that AI and games are fundamentally intertwined, it's interesting to think of the many ways learning can either enhance games or be further studied by using games. Games, however, are built from rules and generate systems, and so an understanding of the systemic structure of games is important to close this loop. Real-time strategy games show some interesting things about managing AI systems at higher levels of resolution. For example, Total War: Shogun 2 has a finite state machine that applies to groups and manages their actions in accordance with precepts from Sun Tzu's The Art of War. For example, if the AI's forces outnumber opponents 10 to 1, they surround them, whereas if it's 2 to 1, they divide them.
There is also unit AI, combat AI, and campaign AI existing at different resolutions, showing how AI needs to be integrated at every level of the game. In his article on the design goals for the system, the researcher Tommy Thompson says that with these three systems, the core tenets of the AI of Total War are in place: to manage individual units whilst retaining cohesive behavior, to group units of troops together in a manner that is responsive, reactive, and challenging, and to create strategic opponents with personality. One designer who appreciates the use of unconventional AI methods in games is Will Wright. The Sims has a system called smart terrain, where the objects in the world signal their affordances, their uses, to the Sims inhabiting the world. This is interesting because it shows how we can conceptualize the play space as a space of narrative and gameplay possibilities that signals things to different agents based on their internal desires. SimCity used a method called influence maps to map out domains of effect, which can be analogized to the functioning of a city. Will Wright's conceptualization of these games came out of books like A Pattern Language by Christopher Alexander and Urban Dynamics by Jay Forrester, showing his fluency in understanding emergence and the dynamics of urban living. SimCity even starts to model rudimentary versions of phenomena like urban decay, illustrating how the patterns of reality can be laid bare in simulation form. If you are wondering what this has to do with the future of AI, the answer is subtle and deeply speculative. In his GDC talk The Nature of Order in Game Narrative, Jesse Schell argues that an understanding and recognition of patterns is what might drive AI to new realms of creativity. He further argues that people like Christopher Alexander, who tried to map the many patterns of reality, will be as instrumental to the coming century as Einstein was to the previous one.
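The influence maps mentioned in connection with SimCity can be sketched simply: each source radiates a value that decays with distance, and the summed grid can then drive decisions, land value, threat avoidance, where to build. This is an invented toy, far simpler than SimCity's actual model.

```python
def influence_map(width, height, sources):
    """Sum each source's influence over a grid.

    `sources` is a list of ((x, y), strength) pairs; influence
    decays with Manhattan distance from each source.
    """
    grid = [[0.0] * width for _ in range(height)]
    for (sx, sy), strength in sources:
        for y in range(height):
            for x in range(width):
                dist = abs(x - sx) + abs(y - sy)
                grid[y][x] += strength / (1 + dist)  # decay with distance
    return grid

# Two "parks" radiating positive influence over a 5x5 district:
grid = influence_map(5, 5, [((0, 0), 10), ((4, 4), 10)])
print(round(grid[0][0], 2), round(grid[2][2], 2))  # -> 11.11 4.0
```

An agent (or a zoning system) can then simply read the cell values: tiles near a park score high, tiles far from both score low, and overlapping influences add up, which is how simple local rules produce the emergent city-wide patterns Wright was after.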
What's interesting is that Schell thinks AI will need a repository of patterns to learn from and a way to dynamically generate new ideas, both true and beautiful. What's also fascinating is that Schell thinks of systems and narrative as one dynamic structure; there is no real distinction. When we look at procedurally generated narrative systems like Dwarf Fortress and RimWorld, we get a better sense of how dynamic narrative and systems converge. We are generating worlds that in turn generate stories. RimWorld gives you a choice of different narrative directors that change how the systemic events of the game transpire. This shows the power of AI tools to manage gameplay and narrative possibilities. Procedural tools in games are now fairly pervasive, with games like Minecraft and Spelunky using these systems to craft randomized levels for dynamic play. This tradition goes back to the game Rogue, our first foray into automating the job of creativity itself. Inspired by the dynamic storytelling of Dungeons and Dragons, the designers decided to use procedural tools to create the same sensation of novelty and adventure. The algorithm first generates a dungeon, then divides the dungeon into segments, then creates rooms and marks them as visited as it proceeds through them, and then creates connections for traversal. It then adds items and monsters to create gameplay in this procedural world. Suffice it to say, this created the roguelike genre, characterized by permadeath and novel strategies for emergent play every time you engage with it, showing how AI can be instrumental in guiding new design. Another game, Elite, used procedural tools to create an entire universe, a method that can also be seen in games like Minecraft and No Man's Sky. Elite used what is called a seed value to dynamically generate areas of the universe without needing to store thousands of star systems.
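Seed-based generation can be sketched in a few lines: the same seed and coordinates always reproduce the same star system, so the universe never needs to be stored, only recomputed on demand. The generator below is invented for illustration, not Elite's actual algorithm.

```python
import random

def generate_system(galaxy_seed, coords):
    """Deterministically generate a star system from a seed + coordinates."""
    # Seeding a private RNG with the same inputs always yields the
    # same sequence, so the "stored" universe is really just this code.
    rng = random.Random(f"{galaxy_seed}:{coords}")
    return {
        "star_class": rng.choice(["M", "K", "G", "F"]),
        "planets": rng.randint(0, 8),
        "has_station": rng.random() < 0.3,
    }

a = generate_system(1981, (4, 7))
b = generate_system(1981, (4, 7))
print(a == b)  # -> True: the universe is recomputed, not stored
```

This is why Elite could fit eight galaxies into a few kilobytes of 1980s memory, and why No Man's Sky can promise quintillions of planets: the seed is the universe, compressed.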
The seed just encodes the emergent space and constructs a new reality in accordance with a simple set of guidelines. More recent forays into procedural generation, like Spelunky, combine random and authored elements to ensure levels themselves have some coherence and flow. In his book on the game, Derek Yu explains how procedural generation in Spelunky works, crafting levels from blocks with connections between them so there is always a clear path to the exit. Like Rogue, it then layers in additional enemies and items in accordance with distribution rules, and the result is something that feels both random and scripted. With procedural generation, we get a glimpse into how AI itself can be used to design games, which raises the question of how we get AI to actually exhibit creativity. Cameron Browne used an automated system called Ludi to create the board game Yavalath, which has a surprising amount of depth to it, by using evolutionary algorithms and a stored database of games like chess and Go. Michael Cook created a different system, ANGELINA, that creates bizarre but fascinating games that show the inkling of some kind of artificial creative spirit. This brings up an interesting question of authorship, though: at what point do humans end and AI begin? There are programs that have fooled people into thinking that computer-composed music is authentically human, and so it may only be a matter of time before an AI itself is nominated for awards in game design. In chapter eight of his book, entitled Designing for AI, Julian Togelius argues that game design today is stuck with principles that developed when AI was at a rudimentary stage. He recounts that at conferences, he would try to convince game designers that their company stood to gain from using new AI methods, only to be told that it was not necessary.
For example, why do we need robust AI in a game like Mario Kart when we can just use rubber-banding techniques to create the illusion of a close race? The problem with this thinking, according to Togelius, is that our design practices today are a vestige of the past and need to adapt. This is referred to as path dependence, where the events of the past have fixed us on a path to a specific future. Because AI was extremely simple, designers built games around simple patterns, whether it be the predictable enemies of a game like Mario or the way enemies in Doom are transient and meant to be gunned down quickly. This can be seen more clearly in certain design patterns. Why are boss fights simple pattern-recognition tasks that require you to do the same thing over and over, rather than more dynamic affairs like in Alien Isolation? Why are dialogue options in games based on canned responses rather than the genuine internal states of different characters? And why is difficulty scaling done on a quantitative basis rather than with more intelligent AI that responds to the player? This position doesn't stand without criticism though, as one could argue that such AI is either unnecessary for creating compelling gameplay, or that this betrays a fundamental misunderstanding of how difficult and expensive it is to design with AI in mind. In the GDC talk What Do Designers Want From AI?, a panel of designers lays out what they want from an AI-driven future. They outline things like characters that recognize players and respond to our actions, that have flaws and vulnerabilities they act on, and that enable meaningful choices and dynamic storytelling. Raph Koster expressed this best when he said that we have reactive props, not actual characters, and that we need to craft dynamic simulated universes that, in the vein of Warren Spector's thoughts on the subject, are rooted in non-combat AI.
This brings us back to the connection between mechanics and meaning, systems and storytelling. What does a future with non-violent verbs and a disavowal of the path dependence of our design sensibilities look like? Often, games that craft deep emotional experiences don't need sophisticated AI. The AI of Yorda in Ico was a very simple set of behaviors, but by having her be helpless and tying our well-being to hers, the game elicits a sense of selfless concern in the player without the need for complex tools. Similarly, Ellie in The Last of Us is invulnerable during gameplay, a contrivance for gameplay's sake, and the connection was instead forged during the cutscenes. A storytelling future that pushes us towards the holodeck might combine tools like these. Elizabeth in BioShock Infinite had a team of designers dedicated to her creation, with animation inspired by the work of Disney. They also had AI techniques that prompted her to do things based on where you looked, lending the game's narrative sequences a dynamic flow. However, what if we combined this with the gameplay of Ico as well, where we actually have to care for her, creating an embedded narrative with meaningful mechanics? Now, what if we added Façade's or Alien Isolation's narrative directors, which recognize your actions in the game, to tailor a narrative experience that is dynamic every time you play? What if, as Schell argued in his talk, Elizabeth recognized our text, voice, and mannerisms, and the story being told changed in accordance with our actions, in the vein of something like Detroit: Become Human? What if AI were smart enough to craft not just dynamic gameplay scenarios for us, but also to script an opera of adventure, hope, and love in the vein of Mass Effect, except in a procedurally generated cosmos with the scale of No Man's Sky? What if we could craft a universe of play and storytelling that reflects back on ourselves in the real world and helps us be better people in our day-to-day lives?
As Togelius argues, AI is the future of games, but games are also the future of AI. His final synthesis is that they can both illustrate something about intelligence itself, but more than that, they can speak to who we are. We may create AI that usurps us or almost entirely replaces us, but if harnessed intelligently, AI can be used to craft deeper games and more meaningful storytelling experiences, and to examine ourselves. The history of AI in games is one of close collaboration, intertwined inspiration, and symbiotic synergy. There is no reason to assume the future will be any different.