Good afternoon. My name is Jean-François Cloutier. I come from Portland, Maine. I organize the Portland, Maine Erlang and Elixir meetup. I also code in Elixir for a living. I'm one of the lucky few, and I wish you all the same luck; it's really a good way of earning a living. It's very fun. But I'm not going to talk to you about my job; I'm going to talk to you about my hobby. And my hobby these days is doing robotics using Elixir. If you want to follow along, there's a PDF of the slides at this URL. The URL will appear at the top of the next few slides so you can catch up.

First of all, let me introduce you to Marv. It's got some nice features. It's got an ultrasonic sensor to sense the proximity of obstacles. It's got a little medium motor that symbolizes its mouth; it activates when it's eating. It has a speaker so that it can emote and say things, say how it feels. It has a color sensor, which the robot uses to detect food, food being a particular color that it sees on the floor. There's a beacon; the beacon simulates the scent of food so the robot can smell, and it smells with the infrared sensor. It also has two large motors, left wheel and right wheel, to move about. A touch sensor is connected to a bumper, which you see in the front, to detect frontal collisions. The infrared sensor is used to sense both the distance to the food and the relative direction of the food. And finally, there are LEDs on top of the robot, which I also use to communicate the robot's emotional state. This is an autonomous robot. And most importantly, this robot is powered by Elixir.

Now, basically, this is the behavior of the robot. Marv is curious, so it roams around, trying to avoid collisions and trying to get unstuck when it gets stuck. But when it gets hungry, and it does get hungry, then it starts foraging, looking for food.
Unless, of course, Marv is scared, in which case he panics and acts like a headless chicken. So that's Marv. The blue paper you see at the top, that's the food. And just above the food you see a little beacon; that's the scent of the food. Marv will say a number of things. He'll say "I'm hungry" when it's hungry, "uh-oh" when it's colliding, "I'm stuck" when it's getting stuck, "I'm scared" when it's getting scared. And when it's eating, he'll say "nom nom nom."

So let's watch Marv in action. Now it's roaming around. It's getting stuck. It knows it's stuck because it's trying to move and it doesn't detect a change in position. So it just said "uh-oh, uh-oh." Now it's getting unstuck, moving about, moving about. And it tries to avoid collisions. So you'll see that it, whoops, tries to avoid collisions. Sometimes it's successful. Now it's hungry and it's just picked up the scent of food. It's moving quite fast. As it gets closer, it slows down and orients itself more precisely. Found the food. So it says "nom nom nom" and it's eating the food; the mouth motor is activating. And it's going to eat until it's full. When it's full, it's going to say, OK, I'm not hungry anymore, let's go back to roaming around. I'm a curious robot, let's see what's out there.

But now it tried to move and it got stuck, so it's scared. Now it's acting really, really panicky and doing very silly things. The red lights are flashing. It's running around like a headless chicken, until it calms down and continues roaming. Now, uh-oh, it collides. So it tries to avoid the collision and keeps moving around. There's some randomness in the avoidance, but it can't avoid that obstacle. So now it's stuck again. It realizes it's stuck, says "I'm stuck, I'm stuck," and tries to get unstuck. There we go. Moves around. Oh, food. I'm hungry, it's been a while, all that exertion. Finds the food by the color. Nom nom nom.
Nom nom nom. Eats. It's really happy. That's a fulfilled robot there. Belly full, it decides, OK, let's do some more roaming. And on and on. So that's the totally autonomous robot, and it's powered by Elixir.

Now, how did I get there? How did I get started? It started a few months ago, in June. I saw this talk by Torben Hoffmann at the Elixir User Conference 2015 about using Elixir to get the fun back into LEGO Mindstorms. He basically installed Elixir on the EV3 and started doing fun things with it. And this is made possible by a group of people called ev3dev. ev3dev is a variant of Debian Linux that you can boot the EV3 with, because the EV3 has a port for a microSD card and you can boot from that card. So you can boot your own operating system on that little machine. It's really cool.

What ev3dev does is expose the motors and the sensors of the robot to you, the programmer. And the way it does that is very simple: it exposes the connected motors and sensors as directories containing ASCII files. So for example, for a sensor, you would read the driver_name file, just ASCII text, and it would give you the name of the sensor, what kind of sensor it is: a color sensor, a touch sensor. The mode file tells you what mode it's in, because some sensors have multiple modes. The light sensor can sense color, reflected light, ambient light. And you can set the mode: you write the name of the mode into the mode file, and that's it, you've changed the mode of the sensor. You can read the measurement from the sensor by just getting the value from the value0 or value1 file, depending on what you're going for. A motor is the same thing. driver_name tells you if it's a medium motor or a large motor. You can tell the motor what to do by writing the command into the command file: run forever, run to a position, run relative. And you can set the parameters ahead of that, to say, what's the speed that I want you to have? That's speed_sp.
And then you can read from that file: what's your current speed? So you have all these very simple ways of interacting with a robot, and you can do that from any programming language that runs on Linux. Which means Elixir. Wow. That's exactly how I felt. So obviously, obviously, I contacted my friends at Amazon, and a kit was on its way. As soon as it arrived, my technical staff ran some acceptance testing while I was busy getting ev3dev onto an SD card, which is very straightforward; follow the instructions at ev3dev.org. Really great work, these guys did. So that went very quickly. Plug in the SD card, reboot the EV3, and ev3dev comes up. A shout-out here to David Lechner, who's the main contributor, and he's very, very helpful. He's there at all ungodly hours of day and night, answering issues. Great guy.

But at this point, I'm still tethered to my EV3: I have a USB cable, so I have a USB network. And I don't want that, of course. You don't want your robot to be tied in, right? So the first thing I try is Bluetooth. But I have an Ubuntu machine, and for the love of God, I cannot get the connection going. It won't speak to me. Eventually I trace the problem to Ubuntu, and there are lots of issues, and you look for answers, and there's no real good answer. So I gave up, dejected. Then I went for Wi-Fi. I got myself a little dongle and plugged it in. Boom, it worked. Wonderful.

So now I have my EV3 brick, which is the brain of the robot, and I can plug in the motors and the sensors and start playing with it, straight from the terminal. I connect over Wi-Fi and go into /sys/class. I can see there's lego-sensor, and then there's lego-tacho-motor. These are the two directories that are really of interest to me. Into lego-sensor I go. I see it detects four connected sensors. I go into sensor0, and I can see all those files that will tell me about sensor0 and allow me to control it. Very nice.
And let's say I go into sensor2 here. I read what's in driver_name: it's the color sensor. I ask, what are the modes that are supported? Reflected light, ambient light, color; the other ones are not really useful to me. OK, what's the current mode? Color. OK, well, what's the color that you're seeing? I put a blue sheet of paper underneath. It says, oh, it's 7. That's blue. I put another color. It says, oh, it's 2. I don't remember which color that is; it's green. And then I say, OK, I'm going to change you to ambient light mode. And what's the ambient light? Well, it's 10%. Good. So it's working. Very nice.

And the same thing for the motors. Go into lego-tacho-motor, into motor0. It's the medium motor. I can see what commands I can run: run forever and so on. I ask, what's the duty cycle setpoint, duty_cycle_sp? It's zero. I'm going to set it to 100, which means work as hard as you can, 100%. I say run forever. What's your speed? Pretty high speed; the speed is basically the number of degrees of rotation per second. And I can then write the stop command and stop the motor. Everything's working. Wonderful.

Now I want Elixir to do that for me. So I need to install Erlang and Elixir on my little Linux residing on that microSD card. It's easy. Relatively easy. For Erlang, you need to do a complete build from sources, because that's the only way you can get release 18. And that works; just follow the instructions. There's a little gotcha: you've got to turn off the in-memory swap and put the swap back on the SD card, so you have enough room to do the build. It's a very simple workaround, and it's on my blog; if you have questions, you can contact me. It will build. It'll take a while, it will build overnight, but it will build. Elixir, of course: download the precompiled zip, install it, done. So where am I? I reboot. I can run. Here we go.
I've got Elixir running on the EV3, and that makes me very, very happy. OK, now what? What do I do with this? Well, I have a bunch of questions I'd like to answer for myself, and that's kind of my program of investigation here. One: can I interact with my robot using pure functions? I'm using Elixir, functional programming. What would a pure functional approach to interacting with my robot, with my motors and sensors, look like? Can I do that? Two: I don't want my robot to be driven by a giant, sequential control loop. Hey, it's Elixir, we've got processes, right? We've got agents. Can my robot be driven by a society of agents? We'll talk a little more about what that means, but if you've seen the Pixar movie Inside Out, you know intuitively what it means for an individual to be a society of agents. Three: how good a fit is Elixir for robotics as a whole? Is it a great tool, or is it a super great tool? I want to answer that question.

Can I interact with my robot using pure functions? Question one, let's see. You all know, of course, what a function is: input, output, no side effects. So what do I have? What does ev3dev give me? It gives me basically global, mutable state with side effects. That's what it gives me. What is a file? It's a global variable, and I write to it, and I read from it. That's not exactly the model I want. So I want to create a barrier, and on this side of it have functions and immutable data. First I want a function that gets me all the connected devices, and each device becomes an immutable piece of data; as you'll see, it's a struct. Then I want a function that takes this device, the device being either a motor or a sensor, changes its parameters, the speed, the mode, and gives me back a new device. Then I want to be able to say: OK, if it's a sensor, go, sense.
Tell me how dark it is. Tell me how far we are. If it's a motor: go, run, activate. And I may even get a new device out of it, because this might change the state of the device. And then you rinse and repeat. That way you get a functional model of interacting with sensors and motors. It's pretty simple. I think one of the key lessons of functional programming is that you have simple data and small, stepwise transformations. And that makes people not so bright, like myself, able to actually do something interesting.

So a struct is used to represent a device, a device being either a motor or a sensor. What do I have? I have these attributes. Class: is it a sensor, a motor, or an LED? Path: where in the file structure can I find my sensor or motor? Port: where is it connected? Port A, B, C, D, 1, 2, 3, 4? Type: what type of motor, sensor, or LED is it? Is it a color sensor? Is it a large motor? Props contains the idiosyncratic properties of this particular sensor or motor. And mock is false by default, because in order to do testing I also have mock sensors and mock motors on my laptop. OK, so that's very straightforward. And reading and writing to the file system is very straightforward as well. Reading an attribute: what's the current color, what's the current value0 for that sensor? Writing: I want to change the mode of that sensor or change the speed setpoint of that motor. Reading is very simple code, with some transformations just to make it clean, and writing as well. We won't go deeply into the code itself; it's available to you. But I just want to impress upon you that it's pretty straightforward, pretty easy.

Now let's talk about discovery. I want to get all the sensors, for example, that are connected to the brick.
If I'm not in testing mode, I essentially scan the right directory, which is the lego-sensor directory, and get all the file names out of it. Then I filter, keeping just the file names that represent actual sensors or actual motors. And for each file that represents a sensor, I create the struct representing that sensor. That's really straightforward as well: I get the port name, I get the driver name, I extract the type of sensor from the driver name using a regex, then I construct my sensor device, set the mode, and I'm done. Straightforward.

When you want to interact, say set the mode, I want to tell my color sensor to read ambient light. Well, it's just a matter of: if it's not already in that mode, set the attribute, and get the new device out with the new mode. And it's that new device that you're going to be using for further calls, as we saw in the diagram. How do I now access my various sensors, color sensor, light sensor, and whatnot? All the sensors implement a behaviour, so they're expected to implement a function that lists what they can sense: color, ambient, and reflected light in this case. And reading the sensor, read color, or read ambient, or read reflected, is just a dispatch. A reading is very straightforward: you just get the attribute, just read from the file system, and translate the result you get into an actual meaningful value. And you return the value and the updated sensor, because in order to read the color, you may have had to set the sensor to color mode. So what I return is the value and the updated sensor, and that updated sensor is going to be the one I interact with going forward. All right, I'm going very fast, but that's essentially the strategy employed to do functional programming on my robot.
And yes, indeed, I can interact with my robot using pure functions, as if there was any doubt. It's very nice. I've created a small domain-specific language, and this becomes the basis for what comes next.

Now, that's the big question. That's really the one that got me interested, because a long, long time ago I was in the world of AI, and I was very interested in the concepts of society of mind and society of agents, and I thought, hey, can I do that on my robot? Can I implement a society of agents? Is it possible? As opposed to this: I didn't want that big control loop. That's not how our mind works. We're not just one big control loop. We have all these things going on in our mind at the same time, sometimes conflicting, sometimes working together, exchanging information, and out of that cacophony emerges who we are. So, on a very, very much simpler scale, can I do that with my robot?

The idea comes from Marvin Minsky. You may have heard of him; he passed away just a few weeks ago. He was probably one of the most influential individuals in AI and cognitive science. A very, very smart man. He wrote The Society of Mind in the mid-80s; that's the one that influenced me. And The Emotion Machine came out in 2005; I'm reading it right now. What I'm doing is a very, very, very pale and simplistic reflection of what he came up with. And what he came up with is a theory that the mind is a society, a society of agents: small, simple processes. And of course we know agents, we know processes, we do Elixir, of course we do. And it's from all of them working together, exchanging information, processing information in different ways, that we get intelligence, or at least interesting behaviors.

All right, so I came up with my homebrew society of mind, society of agents, and this is what it looks like.
In terms of the data, because we look at the data first when we do functional thinking, what are we going to be transforming? We're going to have percepts, and percepts are units of perception. How far am I? What's the color? Am I in danger? Am I hungry? Perceptions. Motives are units of motivation, behavior triggers. Am I afraid? Am I curious? Do I feel hunger? Motives are either turned on or off. Then we have intents, units of action. I want to move forward, I want to move backward, I want to turn right, I want to turn left. These are intents. So that's the data that's going to be flowing around.

And I have a bunch of agents. I have detectors and an internal clock. I've got perceptors, which are a kind of higher-level cognition. I have motivators, which decide whether I'm motivated by hunger or by curiosity or not. I have behaviors: what do I want to do, how am I going to go about it, my strategies, how I'm going to act. Actuators: how am I going to move forward, how am I going to move backward, once I've intended to do so. And a nervous system to connect everything together, plus a memory to remember what happened recently. I'll go through each one in a bit more detail.

But first, let's look at what a percept looks like. Again, it's a very simple struct. About: what is this percept about? Is it about distance? Is it about fear? Value: if it's distance, is it 5, 10, 9? If it's color, is it blue, is it red? Since: when was that perception created, instantiated? Until: how long is it still true, until what time? Source: who produced it, a perceptor or a detector? Time to live: how long are we going to remember this thing? Resolution: how precise is the value? If it's distance, a distance of three and a distance of four are essentially the same, but a distance of three and a distance of six are not. So what's the resolution here?
Transient: do we even want to remember this perception? For example, the perception that time has elapsed, I don't want to remember that, but you do want to remember that you were in danger five seconds ago. Very simple.

Motives, units of motivation. Again: what is the motivation about? Hunger, fear? Value: is it on or off? When? And importantly, what other motives will this inhibit? I'm sure that being afraid will really do a number on my appetite. When I'm afraid, I'm not hungry; it inhibits my hunger, right? So there are inhibitions across motivations. And source: which motivator turned that motive on or off?

Intent, again a simple struct. What is the intent about? Moving forward, moving backward, turning. Value: are you going to move forward fast or slow? Are you going to turn left or right, and by how much? When was that intent generated? By whom? And is this a strong intent? If it's a strong intent, the robot is going to work a little bit harder to make it happen.

All right. So let's look at the different agents in our society of agents a little more closely, but not in too much detail. We have an internal clock. It ticks every second. It says: time has passed, time has passed, time has passed. That's actually quite a crucial part of a robot: if you don't have a notion of time passing, you can't do much. Detectors: for each sensor, and actually for each motor, you have a dedicated detector, which polls that device and reads the current value, the current color, the current distance, the current speed, and produces a percept, which is then sent to the central nervous system, which dispatches it to whoever is interested, including the memory, which memorizes it.

All right, perceptors. A perceptor is higher-level perception. What do we make of what we've perceived so far? What's happening right now, in the context of the recent past, and what do we make of it?
For example, I get a distance percept that says 10. A perceptor which is concerned with collision will ask: in the recent past, was I at less than 10 or at greater than 10? If I was at greater than 10, that means I'm getting closer to an obstacle: collision imminent. The perceptor will also be interested in the touch sensor being pressed. If the touch sensor is being pressed while in the recent past a collision was imminent, that means we're colliding now. So perceptors produce higher-level percepts, and other perceptors produce even higher-level percepts on top of those. That way you create a certain level of awareness of the environment. And of course, these percepts are communicated to the central nervous system, which dispatches them. Good.

Each perceptor has its own concern: light, collision, and whatnot. And there's a perception module which contains a list of all these configurations. For each configuration, a perceptor will be instantiated. Here, the collision one focuses on distance, touch, collision, and time elapsed. Nothing else: it doesn't care about motives, doesn't care about intents. It will remember for 10 seconds, and it will apply the logic in the collision function to decide whether indeed we should produce a collision percept or not. We'll get into the coding of that collision function a little later.

Motivators. A motivator, given the percepts that are flowing in, decides: does this mean that I care? Does this mean that I should feel something about it? Should I be afraid? Should I feel hunger? Or should I stop feeling hunger, stop being afraid? Again, we have a motivation module that contains the definitions for the various motivators, and I have three of them: curiosity, hunger, and fear. Each one focuses on something; fear, for example, focuses on danger. Is there a danger being recognized, perceived?
And it applies a logic, fear, to decide whether to turn the fear motive on or off. Let's look at hunger. This is the logic within the motivation module: when do we turn hunger on or off? It's a simple rule, and what we have here is a function that returns a function with multiple heads. It's nice that Elixir is so expressive. Basically what it says is: when I receive a percept that says, one perceptor told you you're very hungry, fine, look at it in the context of prior percepts. If I was not in any danger in the last five minutes, yes, fine, turn hunger on. But if my stomach is telling me I should be hungry while I'm still remembering that I was in dire danger a few seconds ago, hunger is not going to turn on. I'm not ready to feel the hunger. And we also have here the fact that this hunger motive will inhibit curiosity: once you're hungry, you don't roam around, you look for food. And it says that if your stomach, your perceptors, tell you you're not hungry, well, you just turn hunger off. Nothing very complicated here.

OK, behaviors. Behaviors are finite state machines that are triggered by motives. If hunger is turned on, then the foraging behavior kicks in. And the forager behavior is driven by percepts coming in, because, as we'll see, once it's turned on by hunger, whether we are on track or off track towards the food directs our behavior, right? What do we do? If I'm on track, don't change direction, just keep going. If I'm off track, is it to the left or to the right? If it's to the left, then turn to the left and try to get back on track. So you have all these state transitions, driven by new percepts coming in, and with each transition, some intents are generated.
OK: oh, you're off track and the food seems to be coming from that way. Turn that way and then move forward. Are you still off track? Reorient. Are you on track? Just keep going. All right. And we also have reflexes, which are behaviors driven simply by perceptions. Like: at any point in time, whatever you're doing, if you perceive that you're stuck, just fire the getting-unstuck behavior and then go back to foraging. Same thing for collision: just avoid the collision and then go back to foraging. This finite state machine with its transitions is coded basically as structs. I won't go into more detail, but it's pretty straightforward; that took just a few hours to get done. So again, Elixir is very expressive, and it was very easy for me to get where I wanted.

Finally, actuators. How do we go forward? How do we go backward? Well, if I'm a two-wheeled robot, going forward means moving both wheels at the same speed together. If I want to turn, it's moving one wheel in one direction and the other wheel in the other direction. If I'm a single-wheel robot, going forward is a very different thing. If I have treads, it's a somewhat different thing as well. So actuators translate intents, moving forward, moving backward, turning, into scripts of commands that are just commands to the motors themselves.

And they look like this. Going backward is a function that returns a function. Given the intent, which would say, for example, go backward fast, and all the motors the robot knows of, it says: let me get the speed and the duration out of the intent, and I'm going to create a little script here called going backward. In that script, three steps: first tell the right wheel how fast it's going to go, then the left wheel how fast it's going to go, and the third step is go, activate, run. And when the script is run, the intent is realized. OK.
Finally, there's memory, because a lot of perceptors and motivators depend on context in order to decide whether to be afraid, to be hungry, and whatnot. So there's a memory, which essentially just stores all the percepts, all the motives, and all the intents that are generated, and then forgets about them over time. It's short-term memory. And it can be queried. So it's my little data store, right? And the central nervous system is just one big dispatcher, an event dispatcher. It connects all these agents together. Agents don't know of each other; they only know of the central nervous system and the memory. That's all they know. When a percept is created, it's just sent to the central nervous system, and the agents who need to know about that percept are notified of it through handlers, as we'll see. So that's my little model of the mind, my model of agents, that's driving the robot right now.

Just as a recap: imagine a detector, or the internal clock, generates a percept. It's made available to a perceptor by the central nervous system. The perceptor says: aha! Given the recent past, I recognize that we're on an imminent collision course, for example. This may be sent to a motivator, and the motivator says: ah! I know how to feel about this. I'm going to turn on a motive. A behavior says: ooh! That triggers me as a behavior. The behavior is triggered, and now, as new percepts are generated, it leads to intents being produced that are listened to by the actuators. An actuator says: ah! So you want to move forward. Fine by me, moving forward. As it moves forward, of course, the distance between the robot and the obstacle changes, which leads to a new percept being generated, and on and on. So what we have here is all these control loops. We don't have one control loop; we have any number of control loops happening at the same time. So we have the society of agents.
These agents are interacting together and all at once, basically. Very different from ye olde giant, sequential control loop. That's what I wanted to do, and did. And this is Marv's mind map. You can see all the perceptions it's capable of, either from detectors or from perceptors. The motives that drive it: fear, curiosity, and hunger. All the behaviors it has: getting unstuck, which is a reflex; avoiding collision, which is a reflex; panicking; exploring; and foraging. And all its vocabulary for actuation: it knows how to turn right, turn left, go forward, go backward, and it knows how to eat. It also knows how to say things; that's not in the picture here. So that's the robot's mind. That's Marv's mind.

Now, implementing this. You saw pieces of the implementation, but now I want to concentrate on the OTP architecture. We've covered all of this in various talks: we've talked about GenServers, we've talked about supervisors. I love OTP, but this very problem of a society of agents was an absolutely marvelous fit for OTP. OTP was this perfect tool that fits in your hand and just becomes part of you. It's an amazing feeling.

So, the general architecture. We have the EV3 application and its root supervisor, and a robot supervisor that supervises other supervisors for detectors, perceptors, motivators, behaviors, and actuators. The robot supervisor also supervises three workers: the CNS, the memory, and the internal clock. The detector supervisor is responsible for instantiating the detectors and restarting them if they fail, and the same goes for the perceptor, motivator, behavior, and actuator supervisors. And under the CNS, we have a bunch of event handlers, GenEvent handlers, for detectors, perceptors, motivators, and whatnot. So that's the general architecture.
You can find the source code here if you want to look at it, but we'll just look at a very thin slice; I might be running a little bit over. I just want to give you a sense of the simplicity of the code. We're not going to go through all of it; you can go through it yourself if you want. This is the EV3 application, where you set up the root supervisor, and when it starts, I start execution, I start perception. That's where I start everything. And the robot supervisor, same thing. You can see it just starts all the children, which are either the workers or the other sub-supervisors. Strategy one_for_one, very straightforward.

And this is where we go into more detail about what start perception and start execution are all about. You start the actuators, you start the behaviors, you start the motivators, pretty straightforward. Starting the perceptors, what does that mean? Well, for each perceptor configuration in the perception module, which we looked at earlier, you say: perceptor supervisor, start me a perceptor based on that configuration. Same thing for the other guys. And if we look at the perceptor supervisor starting a perceptor, well, it starts a child, passing the perceptor configuration as the state of that child. The supervisor starts the child, and that's it, basically. And it's a simple_one_for_one, which means you create a perceptor only when asked: I'm being asked to start a perceptor, OK. Remember perception, which is where we defined all the configurations for perceptors. There they are; we looked at one already. It's pretty straightforward once you decode it a little. And the perceptor itself is essentially an Agent. When you start it, it's started with a perceptor configuration, and it's an Agent.
A perceptor essentially says: when you get a percept that's of interest to you, analyze it; out of that comes a new percept or nil, and that's what you return. The analysis just runs that perceptor's logic, the collision logic for example, on the percept, in the context of the recent past the perceptor cares about.

The CNS is a GenServer, and I did that because I wanted to monitor my GenEvent handlers. If one failed, I could fail the CNS and have the supervisor restart it. Otherwise the failure is silent: the failure of a GenEvent handler is silent unless you monitor it, and you want to monitor it from a GenServer so you can catch the message. So the CNS is simple: when it receives a percept, it dispatches it to the perceptor event handler. Straightforward. And since I registered all my handlers as monitored, if one of them fails I get a message, crash myself, and let my supervisor restart me. That's very, very nice when you're debugging, because silent failures are really not nice.

Finally, bringing everything together, the perceptor handler. The CNS has received a percept and dispatched it to all the registered handlers; one of them is the perceptor handler. It processes the percept by asking every perceptor it knows of: is this something of interest to you? Are you focusing on it? Then the perceptor analyzes the percept and gives back nil or a new percept, and a new percept gets shot back to the CNS, so it re-enters the mind, if you will, and keeps going. That's where the loop gets closed.

So, can my robot be driven by a society of agents? Yes. Now, my feelings about how Erlang, Elixir, and OTP worked for me. I found it remarkably easy to implement that society of agents. Even though it's a relatively complex problem, it was just a perfect fit.
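The monitored-handler pattern described here can be sketched like this. Note that `GenEvent` is deprecated in current Elixir (the talk predates that), and the handler body is reduced to a stub; what the sketch shows is the mechanism: `add_mon_handler/3` makes a handler crash come back to the CNS as a message instead of failing silently.

```elixir
defmodule PerceptorHandler do
  use GenEvent

  # In the real system this would ask each perceptor whether the
  # percept interests it, and feed any derived percept back to the
  # CNS. Here it just records percepts, to keep the sketch short.
  def handle_event({:percept, percept}, seen), do: {:ok, [percept | seen]}
end

defmodule CNS do
  use GenServer

  def start_link(_), do: GenServer.start_link(__MODULE__, :ok, name: __MODULE__)
  def notify(percept), do: GenServer.cast(__MODULE__, {:notify, percept})

  def init(:ok) do
    {:ok, manager} = GenEvent.start_link()
    # Monitored registration: if the handler crashes, this process
    # receives {:gen_event_EXIT, handler, reason} instead of silence.
    :ok = GenEvent.add_mon_handler(manager, PerceptorHandler, [])
    {:ok, manager}
  end

  # When a percept arrives, dispatch it to the registered handlers.
  def handle_cast({:notify, percept}, manager) do
    GenEvent.notify(manager, {:percept, percept})
    {:noreply, manager}
  end

  # A handler died: crash the CNS too and let the supervisor restart
  # everything, rather than running half-blind.
  def handle_info({:gen_event_EXIT, handler, reason}, manager) do
    {:stop, {:handler_died, handler, reason}, manager}
  end
end
```

In today's Elixir you would likely reach for `Registry`-based dispatch or one GenServer per handler under a supervisor, but the principle is the same: never let a piece of the mind die silently.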
The BEAM, with its soft real-time guarantees: I didn't have to think about how all these processes were running around. None of them was a hog; none of them took all the time and starved everybody else. None of that. It was simple; it just worked, thanks to the BEAM. Functional programming: my code was manageable, and it was easy to express myself. What I meant was what I wrote, and what I wrote was what I meant. That's an excellent feeling. So how good a fit is Elixir and OTP for robotics? Totally awesome. I knew that coming in, but I was still shocked by how great it was.

Oh wait, there's a little bit more. What else did I learn? I learned about the urgency of now. A robot is not like an ordinary program: time is not elastic, time is now. Time is a harsh mistress. You can't react to an intent that's three seconds old; you can't deal with a percept that's two seconds old, because that's the past, and your robot would be completely out of sync. And you have to deal with that, because it happens: this is a slow processor, and there's a lot going on. My solution was to ignore stale percepts, stale motives, and stale intents, and when there are stale intents, send an alarm message to the central nervous system saying: I'm overwhelmed. The CNS then tells the perceptors and detectors: shut down. Don't see anything; become blind. Faint. I make the robot faint. Everything already in the system washes through, and after a little while, half a second or a second, the CNS revives everything, and now the whole mind is in sync, dealing with the present rather than the past. That worked.

Another thing I learned: debugging a robot is painful. In the real world, the effects of commands are unpredictable. You say move forward, but if the wheel is slipping, you haven't moved forward.
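The stale-percept check could be as simple as the sketch below: stamp every percept with a timestamp and drop anything older than a cutoff before acting on it. The one-second cutoff and the `:at` field name are illustrative assumptions, not the talk's actual values.

```elixir
defmodule Staleness do
  # Illustrative cutoff: anything older than this describes the past.
  @max_age_ms 1_000

  # A percept is stale when its timestamp is too far behind "now".
  def stale?(%{at: at}, now \\ System.monotonic_time(:millisecond)) do
    now - at > @max_age_ms
  end

  # Keep only the percepts that still describe the present.
  def fresh(percepts, now \\ System.monotonic_time(:millisecond)) do
    Enum.reject(percepts, &stale?(&1, now))
  end
end
```

The same test would apply to motives and intents; when too many intents arrive already stale, that's the signal to raise the "I'm overwhelmed" alarm and make the robot faint.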
So you don't know for sure that what you asked for is going to happen. Sensors can lie; they see ghosts sometimes. Overall behavior is very sensitive to tuning. And the Heisenberg principle applies: if you start tracing things, you change the timing, and the problem you saw is gone. That's interesting. Debugging is much harder because of all these things, and also because the edit-deploy cycle is long; it's a slow computer.

I also learned Elm, to build a dashboard. It's really cool. I'm not going to go into the details, but it's a highly reactive dashboard; things happen all the time. If I had three more minutes, I'd show you one. I used Phoenix and Elm, and it's just the best combo, better than peanut butter and jam. But that's another story, and I'm really eager to answer your questions and show you the dashboard when the talks are over.

So, parting thoughts. A robot is a bunch of sensors and actuators working together to achieve shifting goals in a dynamic environment. So it's about more than just the robot we've been looking at here: the same is true of a smart house, and of anything else that will come out of the Internet of Things. It's the same kind of problem. Now, societies of agents have evolved everywhere in nature; everywhere you look, you'll see societies of agents, and there's a reason why. Maybe the Internet of Things needs to form societies of agents as it evolves and tackles more complex problems. And Elixir will be a fantastic enabler.

I have a blog where you can see more of the details, and I'm happy to answer your questions later, after all the talks are done. I want to thank you all. This is where you can find me; I organize the Portland, Maine, Erlang and Elixir Meetup. And again, you can find the slides at this URL. Thank you very much.