Smart things and why it takes Nerves. But first, if you want to follow along with the slides, they're available at this URL, because I know that if we look at some code, it's difficult to read code on the screen from anywhere from the middle of the room back. So if you want to follow with me, there they are; the URL will be on the other slides that follow.

The Internet of Things! It will be amazing. You'll have your stove checking with your fridge and suggesting recipes from the cloud. Your toilet paper dispenser will direct-message you when it's running out. Isn't that cool? There's even the idea of a smart bookmark that knows what page it's at in your book and stores it in the cloud, so you never forget where you are in the book you're reading. It's amazing stuff. And how about a biosensing ring that will tweet your final farewell when you die?

Seriously, these are not the kinds of things that are going to be transformative. The Internet of Things will be transformative, but the one that really matters is the Internet of Things that removes inefficiencies and reduces risks, and it does that by automating, distributing, and integrating information far and wide. So we're talking about not just smart cars but smart traffic; we're talking about smart energy grids, smart houses, and on and on. And that means systems of systems. A smart car is already a system of systems: it has navigation and locomotion systems, it has entertainment, it has energy management. It's a bunch of systems. And that car will integrate with the traffic system in order to optimize flow through a city, going around areas that are congested or where there's an accident and whatnot. So we're talking about systems of systems there. In the world of industrial processes, we'll see coordination among the various tools in the manufacturing process, and these tools are themselves systems; and this whole process will also coordinate with supply chain systems in order to minimize downtime and maximize revenue. So we're talking about, again, complex
systems of systems benefiting from automation and distribution of information.

But this is not the Internet of Things that is being sold right now. There are a lot of companies, and there's a lot of money being invested in things, but what we see on the market, or what I've seen, is a lot of companies fighting amongst themselves to establish their proprietary networking technology as a standard. We're seeing sensors-to-cloud: basically, you have, for example, your Nest thermostat in your house, and in all these houses, sending information to a central hub; the central hub will maybe do some big data analytics and then come back to you with a report on your iPhone, on the web, or whatnot. And that, from my perspective, is not really interesting. This is not where the promise of the Internet of Things lies.

I think the Internet of Things should be communities of smart things. What I mean by a smart thing is an autonomous assembly of sensors and actuators that together make sense of their environment and act purposefully in response to a changing environment. Now, these smart things are going to be part of communities, and in these communities they will coordinate and share information. I think it's smart things all the way down. Personally, I think it's a fractal system of systems of systems. A smart thing, with all its sensors (remember, a smart thing is an assembly of sensors and actuators), is itself a sensor, because it makes sense of the environment; in that sense, it can report on what it has discovered, what it has made sense of, to other smart things in its community or to other communities. A community as a whole can integrate all this meaning, all this information, all these perceptions from all the members that make up that community, and itself act as an integrated sensor, feeding back into its members, providing global views, and also reporting to other communities. So I think a community of smart things is itself a smart thing: it's a fractal thing. Let's talk
about autonomy. What do I mean by that? Well, I think there are five major components to an autonomous thing, a smart thing. There's sensing, taking in sensory inputs; pretty obvious. Then there's perception, which is making sense of what's going on. If you think of a self-driving car, it's taking in some pretty raw input, blips from lidar and radar, and it has to translate that into: there's a motorcycle coming this way, and I think it's going to avoid me, it's fine. Making sense of all that input is perception. Motivation: if you're autonomous, you must have goals. You're trying to accomplish one or many things, and those goals are going to shift over time. You're going to have long-term goals (going to the hospital) and short-term goals (avoiding that pedestrian who just jumped in front of your car); those goals shift. Then there are behaviors, plans: how am I going to achieve these goals? You may have long-term plans (how am I going to get from point A to point B) and short-term plans (how am I going to negotiate this intersection, if we're talking about an autonomous car). And finally, actuation, which is translating those intentions (moving forward, turning right, turning left) into actual physical action: moving the right wheel, moving the left wheel, putting on the brakes, and whatnot.

So smart things are autonomous, but in order to be autonomous they require cognition. Let's talk about the cognitive architecture of a smart thing. That's something I presented last year, and I'm going to repeat some of the same material for those of you who were not here, and then go beyond it. The cognitive architecture I've been working on has been inspired by the work of Marvin Minsky in cognitive science. He wrote The Emotion Machine and, before that, The Society of Mind, which is the book that first made me aware of this approach. In a very, very big nutshell, Minsky's idea is that autonomous, smart behavior emerges from the interactions
of many kinds of agents doing simple things concurrently. Many kinds of agents doing things concurrently: obviously, I think Elixir, I think the BEAM, right? So it was not a very big leap.

Here's a diagram of this very simple cognitive architecture, which I have implemented; let's just go through the big pieces of it and explain very briefly what they do. Let's start with the detectors. Detectors poll sensors; they ask sensors for measurements: how dark is it, how far is the obstacle? Straightforward. There's an internal clock that just ticks: time has passed, time has passed. It's quite surprising how critical that part is. These are generators of percepts. Percepts are units of perception: time has elapsed; it is kind of dark; there's an obstacle 14 centimeters away from me. These percepts, like every other unit of cognition, feed into a central nervous system (and I mean that in a very limited sense, as a router) and are then consumed by other components, other agents of this society of mind.

One kind of agent is the perceptor. You have all sorts of perceptors, and perceptors basically try to take low-level percepts and produce higher-level percepts. So a perceptor that is responsible for detecting whether we are getting closer to an obstacle will depend on percepts of distance and look in its memory: is the latest distance percept smaller or greater than the current one? Am I getting closer, am I getting further away? So, perceptors. Perceptors produce percepts, like "collision is imminent," and these percepts are again sent to the central nervous system to go to the other agents (which we'll cover soon) but also feed back into other perceptors. Now, I may have a perceptor of danger that says: if a collision is imminent and it's really dark out there, I should be afraid. So a fear perceptor will generate a fear percept, which is fed back in again. Now, all this layered perception is being processed, all of it concurrently. Of
course, these percepts are being fed to the central nervous system, and everything goes into memory, short-term memory, so we can use it and analyze what's going on in light of the recent past. Some of these percepts will also hit something called the motivators, and the motivators basically analyze these percepts and ask: should I care, and how should I care? There are motivators for fear (should I be afraid, should I feel panic right now?) or motivators for hunger (should I feel hungry right now?), and these motivators generate motives, which again are fed to the central nervous system and stored in memory. Those motives initiate behaviors. There are behaviors for what to do when you're hungry, what to do when you're curious, what to do when you're about to collide, and these behaviors are essentially little finite state machines that at each step produce intents: what should I do now? I want to go forward, I want to go backward, I want to turn left, I want to scream "mommy!", I want to do something. These intents are again fed to the central nervous system and memorized, but also consumed by actuators. Actuators are the agents that translate those intentions to act into actual actions, depending on the physical nature of your device. In this case, if it's about moving an autonomous vehicle forward and I have two-wheel drive, that means turning both wheels at the same time; if I want to turn left, I'm going to maybe turn the right wheel faster than the left wheel. So those are actuators.

So that's basically the map of the cognitive architecture that was developed, and you can think of the activation of all these agents as creating a lot of OODA loops: observe, orient, decide, and act. Our detectors are the observers; perceptors give you an orientation, what's going on; the motivators decide what's important, what my goals are; and the behaviors act.
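The pipeline just described, with agents that only ever talk through a central nervous system acting as a router, can be sketched in miniature. This is not the actual Marvin code; `CNS`, `subscribe`, and the percept tuples are hypothetical names, and a real implementation would use supervised processes rather than a bare list of handler functions.

```elixir
# Minimal sketch (assumed names, not the Marvin framework's API):
# a central nervous system as a registry of subscriber handlers.
# Agents publish percepts as events and never call each other directly.
defmodule CNS do
  def start_link, do: Agent.start_link(fn -> [] end, name: __MODULE__)

  # A cognitive agent subscribes by registering a handler function.
  def subscribe(handler), do: Agent.update(__MODULE__, &[handler | &1])

  # Publishing dispatches the event to every registered handler.
  def notify(event) do
    Agent.get(__MODULE__, & &1)
    |> Enum.each(fn handler -> handler.(event) end)
  end
end

{:ok, _} = CNS.start_link()

# A tiny "perceptor": turns low-level distance percepts into a reaction.
CNS.subscribe(fn
  {:percept, :distance, cm} when cm < 15 -> IO.puts("collision imminent (#{cm} cm)")
  _ -> :ok
end)

CNS.notify({:percept, :distance, 14})   # prints "collision imminent (14 cm)"
CNS.notify({:percept, :distance, 80})   # no perceptor cares; nothing happens
```

The point of the sketch is the decoupling: adding a new perceptor is just one more `subscribe`, and no existing agent has to know about it.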
Now there's another piece that is new to this architecture, called attention. Attention essentially says: I'm not going to care about detectors that don't matter right now. If I'm hungry and I'm foraging, then I need only certain senses to be alert. I will not poll all my detectors at all times; I will poll only the detectors that could possibly change what I am right now, what I feel, what I do. So that's kind of new, and it's actually very useful when you have a very small device where you cannot fire all your detectors all the time and just waste all those cycles.

Okay, now meet Marv and Rodney. These are my two smart things. They're Lego robots, and they have all sorts of interesting sensors and actuators. First of all, they have LED lights, a speaker, and a wireless adapter. They have ultrasonic sensors for distance, and infrared sensors to detect infrared beams emitted by beacons. They have a color-and-light sensor to detect the color on the floor and also the ambient light, and a touch sensor that can be either pressed or released, for actual collision detection. For actuation, they have two large motors for locomotion and a medium motor that I'm using as a mouth: when the robot eats, it just activates that motor, symbolically. And there's going to be another actor here: their mom. Their mom is another smart thing, because we're going to be establishing two communities, you'll see: the kids and the parents. So that's the mom, and the mom of course is very smart too; essentially, it's a laptop running Elixir on Linux.

Okay, the demo. We have, as I said, two communities, the brood and the parents. I have two puppy rovers in the brood community, and I have a mom laptop in the parent community. Let's see if we can get this thing going. You'll see Marv and Rodney roam around, bump into things, spread panic, look for food, and fight over it; that's their puppy behavior, and motivators and perceptors are going to drive these
autonomous behaviors. Mom, the laptop, is going to keep an eye on them, calm them down when they're panicking, and, you know, get them to share food. So let's see if we can start it up. For some reason, it sounds like that. So they're going around and bumping into things, saying things like "ow!" when they bump, and they're looking for food; food is the white stuff on the ground. So this one is eating right now, and the other one is colliding and trying to get out of that jam. Nom nom nom. Now Marv is hearing Rodney eating, and Marv wants to go to that food, so Marv says, you know, "mine, mine, mine, it's mine." And mommy (the sound is not good right now), mommy says, "Rodney, share your food," and when Rodney hears that he needs to share his food, he stops eating, his hunger goes away, and he can move away and let Marv get access to the food. So now they're panicking: one started panicking, and the other started panicking because he heard the first one panicking. And mom, seeing that the panic was out of control, intervenes and tells them to calm down, and they hear mom, and that's when it stops. Let's stop this and bring this up here. So we had two autonomous smart things acting out, with their mommy overseeing them and trying to manage their behavior a little bit.

So now we're talking about building communities of smart things, and in terms of tools to do that, I found that Elixir is a perfect fit for coding these many kinds of agents that do simple things concurrently. The Nerves project was critical, because I had to fit a rather large amount of capability on a small device, the EV3, and the Nerves project made that not only possible but an actual pleasure. This architecture is implemented as a framework, an Elixir-based, Nerves-enabled framework that I call Marvin, and if you're familiar with The Hitchhiker's Guide to the Galaxy, you'll know where that name came from. Okay, and that's Marvin,
actually. So, in order to implement this cognitive architecture, I made good use of Elixir processes. Every single agent is an independent process, implemented using OTP and relying very much on the BEAM's soft real-time characteristics. The smart things, as you saw, the two robots, actually sense each other, perceive each other: one knows when the other one is eating and gets all greedy, or when one panics, the other one starts panicking just by hearing it. This was achieved through p2p networking with distributed Elixir. And the mom, watching them from another community (we consider that a remote community), kept in touch: she could hear what they were doing and know what they were doing, because they were actually reporting through a REST API, and she could talk back to them through a REST API as well; that was implemented using Phoenix.

Now, I'm not going to turn this into a "let's look at the code" presentation, because there's too much and I don't think that's very productive, but if you get access to the slides you'll be able to look at the code a little more closely. I just want to make a few points. One is that the data structures involved are very simple; they're very simple structs. For example, this is the struct for a percept. I need to know what the percept is about (darker, lighter, whatnot; here it's about light). Its value could be a number, for the ambient light, or atoms for whether it's getting darker or lighter. When was that percept generated (since), and until when is it valid? Because you can have the same measure being returned by a detector, the same distance as five seconds ago, you want to know: have we stayed at, say, 10 centimeters from an obstacle for the last five seconds? That could be important. Where did it come from (the source)? A time to live: how long it's going to stay in memory. And other properties. But that's pretty straightforward. Now, keep the color coding in mind.
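The percept struct just described might look something like this. The field names (about, value, since, until, source, ttl) follow the talk, but the exact shape of the struct in the Marvin codebase may differ; this is an illustrative sketch.

```elixir
# Sketch of the percept struct described above; field names follow the talk,
# but the real framework's struct may differ.
defmodule Percept do
  defstruct about: nil,    # what the percept concerns, e.g. :light or :distance
            value: nil,    # a number (ambient light) or an atom (:darker / :lighter)
            since: nil,    # when this percept was first generated
            until: nil,    # how long the same measurement has held
            source: nil,   # which detector or perceptor produced it
            ttl: 10_000    # how long it stays in memory, in milliseconds

  def new(about, value, source) do
    now = System.monotonic_time(:millisecond)
    %Percept{about: about, value: value, since: now, until: now, source: source}
  end

  # If a detector reports the same value again, extend `until` instead of
  # creating a new percept, so we know how long we've been at, say,
  # 10 centimeters from an obstacle.
  def extend(%Percept{} = p), do: %{p | until: System.monotonic_time(:millisecond)}
end

p = Percept.new(:distance, 10, :ultrasonic)
IO.inspect(p.about)   # :distance
```

The since/until pair is what lets a perceptor ask "has this been true for the last five seconds?" without storing a flood of identical percepts.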
How does that translate into an Elixir application? So this is the Observer; we're going to look at it in the right orientation. I wanted to show you the whole thing, and I want to show you how the various parts of the cognitive architecture map onto processes, supervised processes, in Elixir. So what do we have here? We have detectors, perceptors, motivators, behaviors, and actuators; they're all supervised agents, actual Elixir agents. The memory and attention components are supervised GenServers. The central nervous system is a GenServer wrapping a GenEvent, with handlers for perceptors, detectors, and whatnot. In more detail, you can see how it maps: here we have percepts, which map to the detector supervisor and what comes underneath it, and the internal clock; we have intents under the behavior supervisor, all the various intents that were activated on the puppy rovers (we'll see where these come from very shortly); we have attention here; we have the actuators, all the various actuators that were active on the puppies; the motivators, you know, greed, curiosity, fear, danger; and perception, the various perceptions that the puppies can generate. So this all maps onto supervised GenServers and agents in Elixir.

An important point: all these cognitive agents communicate strictly through the central nervous system, through events. They don't know each other at all. So I can add more components to my architecture, and I don't run into complications from any agent having to know too many others; there's no coupling, it's all through events.

Let's look very quickly at how a percept moves through the system. Let's say a detector detects that ambient light is, say, 10%. This percept will be transmitted to the central nervous system, which will then dispatch it to its perceptors handler, or to the motivators handler, and the other ones as well. But let's
say we go through the route of the perceptors handler. The handler looks at all the perceptors that are active in the system and asks: who's interested in the fact that ambient light has a value? Oh, this one does; that's the perceptor for figuring out whether things are getting lighter or darker. So the percept goes to that perceptor, which analyzes it and maybe comes to the conclusion that things are getting darker, generating a new percept, which is sent back through the perceptors handler and the central nervous system and back into the system. So you can see all these percepts churning through the system, turning motivators on and off, initiating behaviors, activating actuators; and then, as the robot moves into a lighter area of the room, a new percept, and on and on.

So that's the code. We're not going to go through it, but it's a relatively small amount of code, and if we spent some time looking at it, it's quite declarative and quite straightforward. The core of a perceptor is an analysis function that takes in a percept in the context of recent memories. First of all, it checks: is the percept fresh enough? Because if it's an old percept (my brain is falling behind because there's too much to process), then it will simply ignore it. Otherwise, it grabs all the memories that are relevant and then calls the perceptor's logic, which is a function, passing in the new percept and the recent memories; and maybe out of that comes a new percept. That simple.

Now, a smart thing's mind is what I call its profile. A puppy has a profile, the puppy profile, and we have the mommy profile, and they have different perceptors, different motivators, different behaviors, and whatnot. The puppy profile, as we've seen, cares about perceiving whether it's getting lighter or darker, whether we're colliding, whether we're in danger.
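The analysis function just described (freshness check, gather relevant memories, apply the logic function) can be sketched like this. `Perceptor.analyze`, the map shapes, and the one-second freshness window are illustrative assumptions, not the framework's real API.

```elixir
# Hedged sketch of a perceptor's core: drop stale percepts, filter relevant
# memories, then apply a pluggable logic function that may emit a new,
# higher-level percept. All names and shapes here are illustrative.
defmodule Perceptor do
  @max_age_ms 1_000

  def analyze(percept, memories, logic) do
    age = System.monotonic_time(:millisecond) - percept.time

    if age > @max_age_ms do
      # The brain has fallen behind; stale percepts are simply ignored.
      nil
    else
      relevant = Enum.filter(memories, &(&1.about == percept.about))
      logic.(percept, relevant)
    end
  end
end

# A "getting darker or lighter?" logic function: compare the new value with
# the most recent relevant memory.
darker_logic = fn percept, memories ->
  case memories do
    [last | _] when percept.value < last.value -> %{about: :light_change, value: :darker}
    [last | _] when percept.value > last.value -> %{about: :light_change, value: :lighter}
    _ -> nil   # no memories, or no change: throw my hands up
  end
end

now = System.monotonic_time(:millisecond)
previous = %{about: :ambient, value: 40, time: now - 2_000}
current = %{about: :ambient, value: 10, time: now}
IO.inspect(Perceptor.analyze(current, [previous], darker_logic))
# %{about: :light_change, value: :darker}
```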
Whether we're hungry, and why are we hungry: we're hungry because we haven't eaten in a while and we've done some exercise. Is there food? Am I on top of food? Can I eat? A scent: am I getting closer to or further away from food? Am I stuck? Is someone else eating, and do I want their food? The motivations are curiosity, hunger, greed, and fear, and an important thing about motivation is that one motivation can inhibit another. If you're hungry, curiosity is inhibited. If you're greedy, your hunger goes away, because now you're not chasing food, you're chasing someone who's eating food. And if you're afraid, then everything else is irrelevant: you're panicking. So it's kind of a Maslow pyramid of needs implemented here.

For behaviors, we have two kinds in general. We have reflexes, which kick in at any time: you may be foraging, roaming around, tracking, or whatever, but if you're about to collide with something, reflexes kick in and you avoid the obstacle. And then you have motivated behaviors, which are more long-term; they're driven by motives: he's eating my food, I'm going to try to find him and take it away from him. That's a motivated behavior.

These are the puppies' profiles, and the way they relate can be mapped out this way: you have all the perceptions, how one perception feeds into another, how they trigger motivations, how motivations inhibit one another, then how those motives trigger behaviors, and how all those behaviors are expressed through intents, intents to act. We could see a bit more detail, but I don't want to run out of time: perceptions feed into motivation, motivation feeds into behaviors, behaviors into actuation. That's the mind of a puppy. Mommy has a profile too; it's a simpler profile, not because mommies are simpler, but because I had less time. Her perception is: are my puppies panicking out of control, and is one puppy hogging all the food? And her motivation is only one: maternal
instinct (again, because I had little time), and her behaviors are very simple: either calming down panicking puppies or admonishing puppies to share their food.

Now, we've talked about a profile. A smart thing also has a platform. The platform is the smart thing's anatomy: it determines how you talk to sensors, how you talk to actuators, and it defines and implements the rules of actuation. You may have two different platforms trying to execute the same intent, say moving forward: if you are a two-wheeled rover, you turn both wheels in the same direction at the same rate; if you are BB-8, it's a different way of moving forward, but the intent is the same. So the platform translates what could be identical intents into very different modes of actuation. In this demo I showed two types of platforms: the rover (the puppies) and the hub (the mommy). I also have a mock rover, which allows me to do testing entirely on my laptop.

Now, the platform runs on a system. The rover platform runs on the Nerves EV3 system; it could potentially run on another system, a Raspberry Pi, if I had the proper sensors and the proper actuators; the same profile could run, just plugged into another platform. The hub platform, which is the platform of the mommy, is any BEAM-capable system. In order to talk to EV3 sensors and motors, the Nerves project supports the EV3 as a target system, integrating the good work done by ev3dev.org, which, to make a long story short, exposes all the sensors and actuators as readable and writable files, making it very easy to interact with them. And the Nerves project is really fantastic: very little ceremony, a super fast firmware build-and-burn cycle; I cannot say enough good things about it. In short: a smart thing is a platform on a system, plus a profile. That should be clear.

Now there's some code where we can see how a profile is actually defined. Very briefly, I have the
profile here. The perception part of a puppy profile is actually a list of perceptors, or rather, configurations for perceptors, and the important parts of a perceptor configuration are what the perceptor is about, what comes in as input, and the logic by which it will generate new percepts. And this logic is actually a function, a multi-headed function; here, say, for light. Let's go through this one very quickly. The first head says: if a percept comes in and I have no recent memory of anything, I can't tell whether it's getting darker or not, so I return nil; I throw my hands up, I don't know. If I get an ambient percept with some value, I look at the last ambient percept that was in memory: if its value is greater than the new one, it's getting darker; if it's less, it's getting lighter. That's it: I compare with recent memory. That's all it does. Very simple.

Motivation: same idea. Motivator configs, one for curiosity, the other for hunger. Again, same idea: a name, what it's interested in, and the logic for generating a motive. Here, the one for hunger is basically: if an "I am hungry" percept just came in (my stomach rumbled) and I have not been in any danger in the last five seconds, I become motivated by hunger; if I get an "I'm not hungry anymore, my stomach is full" percept, then I turn off hunger as a motivator. Very simple again. Lots of simple agents working together, interacting; it's out of their interactions that something interesting comes out. The parts are simple, the interactions are simple, but there are many of them and they happen concurrently, and interesting stuff happens because of that.

Behaviors: same idea. Each behavior is a finite state machine that says what to do, for example, when panicking.
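The hunger motivator just described can be sketched as a multi-headed function. `HungerMotivator`, the percept shapes, and the `{:on, :hunger}` / `{:off, :hunger}` motive format are my assumptions for illustration, not the actual profile code.

```elixir
# Sketch of a motivator's logic as a multi-headed function, following the
# hunger example above. Names and shapes are assumptions, not Marvin's API.
defmodule HungerMotivator do
  @danger_window_ms 5_000

  # Stomach rumbled and no recent danger: switch hunger on.
  def motive(%{about: :hungry}, memories) do
    if recently?(memories, :danger), do: nil, else: {:on, :hunger}
  end

  # Stomach full: switch hunger off.
  def motive(%{about: :sated}, _memories), do: {:off, :hunger}

  # Any other percept leaves hunger alone.
  def motive(_percept, _memories), do: nil

  defp recently?(memories, about) do
    now = System.monotonic_time(:millisecond)
    Enum.any?(memories, fn m -> m.about == about and now - m.time < @danger_window_ms end)
  end
end

IO.inspect(HungerMotivator.motive(%{about: :hungry}, []))   # {:on, :hunger}
```

Each head handles one kind of incoming percept, which is what keeps these logic functions so declarative: the whole motivator fits on a slide.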
When you start panicking, in this case, you turn the lights red, yelp, and go into the panicking state; while in that state, you do the panic behavior, which is just running around like a headless chicken; and when you go from danger to ended, when you finish panicking, you simply do nothing; you're done. Very simple again. The panic behavior is, as I said: turn on the lights, go backward really fast, turn in some random direction by some random amount, and that's it. Simple.

And the actuation of going backward: the platform of a rover defines the actuators, and there's one for going forward, going backward, turning right, turning left, and whatnot. Let's look at going forward. Very simple again: a function, and it's a script that says which wheel to turn which way, and underneath, the platform knows how to translate "go forward" into writing the right information to the right file, so that the right actuator is activated the right way. Straightforward again.

Then I have all of this come to life. I have the smart thing supervisor, and what it does is, for every detector configuration, every perceptor configuration, every motivator configuration, and so on of the smart thing, it creates an agent: a perceptor with that configuration, a detector with that configuration, a motivator with that configuration. It starts them, and the thing is alive.

Let's talk about communities of smart things. Communities are created by name: a smart thing knows the name of its community, and it knows at least one other peer it can connect to. As soon as it connects to that peer, it connects to the network, and it can broadcast to the entire network; any broadcast received becomes a "heard" percept, which then goes into the whole process. It's done peer-to-peer with distributed Elixir, using pg2. If you go to the code, it's again pretty straightforward. And that allows me to entangle two minds, because they can communicate over the p2p network.
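The "broadcast becomes a heard percept" idea can be sketched like this. The talk does this with pg2 process groups over distributed Elixir; to stay runnable on a single node, this sketch replaces the process group with a plain list of pids, and `Community` plus the message shape are illustrative assumptions.

```elixir
# Sketch of community broadcasting (assumed names; the talk uses pg2 over
# distributed Elixir, here simplified to a list of member pids on one node).
# A received broadcast arrives as a {:percept, :heard, ...} message, which a
# member would feed into its own cognition.
defmodule Community do
  def broadcast(members, from, message) do
    for pid <- members, pid != from do
      send(pid, {:percept, :heard, from, message})
    end
  end
end

# Two "puppies": hearing a panic broadcast triggers their own sense of danger.
listen = fn ->
  receive do
    {:percept, :heard, _from, :panic} -> IO.puts("#{inspect(self())} heard panic!")
  end
end

rodney = spawn(listen)
marv = spawn(listen)
Community.broadcast([rodney, marv], self(), :panic)
Process.sleep(100)   # give the listeners a moment to print
```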
That's basically how they hear of each other, and that feeds into a new percept. When they hear one panicking, it triggers their sense of danger individually; and when they sense that another one is eating and they're not hungry, the greed motivator kicks in and they start the tracking behavior, which means they're tracking the other guy, not the food. And they can actually say things, so they can communicate with each other through p2p: one robot can say "I'm eating," and the other says "aha, I want your food." That, a bit more blown up, is basically the idea. And again, simple perceptions. This is from the mommy's profile: how she perceives out-of-control panicking, how she perceives food hogging; and she has behaviors for calming down her brood and for getting them to share their food. And here's a little example of a new rule that was added to the puppy profile: when I hear mommy saying "calm down," I turn off my fear motive. Straightforward.

Now, connecting communities together: I consider members of a community as being on a local network, and communities as being remote from one another, and the communication is done through REST calls, with Phoenix doing the work. It's pretty straightforward; it's just REST with Phoenix. This is how a percept crosses the boundaries of a community. Straightforward again.

But the Internet of Things is not about puppy rovers and mommies. Actually, I'll make the case that a self-driving robot has all the essential characteristics of a smart thing: it integrates multiple sensors and actuators, and it must observe, orient, decide, and act autonomously. So I think two puppy rovers interacting under the eye of a mom touches many of the key aspects of communities of smart things. Still, it's a demo.

A little bit more about Nerves. The brick is small: 64 megs of RAM, a 300-megahertz single core,
very small, and it's kind of slow. This could not fit on an EV3 without Nerves. Nerves really trims down Linux, and I have 15 megs of free RAM with this code and Linux on an EV3, which I think is pretty amazing. The burn process is amazingly fast; you can actually feel the burn. It takes less than 30 seconds to compile, build the firmware, burn it, and then just pick the device up and plug it in. Thirty seconds: that's really good. So I think that Nerves and Elixir can give IoT coders real superpowers, amazing superpowers. I love the fact that functional programming and pattern matching with Elixir give me declarative code. I love the fact that if I want to go process-crazy in my architecture, that's not a problem at all; it works just fine. And I love the fact that with Nerves I can fit Elixir and this architecture on smallish devices.

So, recapping: if we want to realize the big promises of the Internet of Things, we need to build communities of smart things. A smart thing is autonomous, and autonomy calls for a functional cognitive architecture; if you have access to the slides, I'd encourage you to read this paper from a Scandinavian research institute about architectures for autonomous vehicles, because that's the term they use, functional cognitive architecture. Marvin is a framework I'm building for creating communities of smart things, and Elixir, Phoenix, and Nerves make it an absolute joy to develop. I want to thank Frank Hunleth, who's been invaluable; his help has been amazing in getting this done. And my intern William, for actually building the robots, his robots. Thank you.