So to start our journey into artificial intelligence, the first idea we're going to talk about is an agent and its environment. Think about the world for a second: there are trees, animals (here's my bad attempt at drawing one), and, in our case, people. Each of these things in the environment has to be interacted with, or potentially can be interacted with, by our agent. As we start to think about this, we can treat it as an architecture: the agent is continuously receiving updates about the environment as things happen. From that real-world perspective, my agent now knows the tree is in the area. But the environment can also change. If we're playing a simple game of tic-tac-toe, that's still an environment, and when it changes, say I place an X in the center, those updates need to be received by the agent. To use some super fancy five-dollar words, we call these updates percepts. The idea is that the agent, based on what sensors it has installed or configured, reads in those percepts of the environment and feeds them into something known as the agent function.

The agent function is sometimes also referred to as an objective function; you'll see that term a lot in optimization, for example. The more important thing is that this is the "brain" portion of our agent: it's thinking about what to do. To use the tic-tac-toe example again, if it saw that the X was in the middle, what should it do? Or say you're building an agent for Minecraft and you see a tree: you punch the tree. Each of those decisions, whether it's tic-tac-toe or punching trees, gets fed into what we call the actuators. On the tic-tac-toe board we don't really have a physical representation of the agent, but it says: "Processing the inputs, I see the X in the middle, so on my turn I want to place my O, say, in the corner." If we play the Minecraft card instead, we see that tree, and here's Steve (he would have a square head): punch the tree. Each of those actuators feeds back into the environment, because, as you can imagine, the environment has to update. When the agent placed the O, the environment changed; if Steve keeps punching that tree, the tree is suddenly going to give him wood.

The same kind of thing happens, and here's a little foreshadowing of Problem Set 01: we're going to be working on a self-cleaning robot. The same concepts apply, but rather than a three-dimensional world, it's very similar to the tic-tac-toe drawing I showed a second ago.
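Here's a minimal sketch of that sense-decide-act loop. Everything in it is an illustrative toy, assuming made-up names: the `environment` dictionary, `agent_function`, and the string percepts are mine, not from any framework or from the problem set.

```python
# Toy sense-decide-act loop: percepts in, actions out, environment updated.

def agent_function(percept):
    """The 'brain': map a percept to an action (tic-tac-toe style)."""
    if percept == "X in center":
        return "place O in corner"
    return "wait"

def run(environment, steps):
    """Each step: sense, decide, act; the action feeds back into the environment."""
    history = []
    for _ in range(steps):
        percept = environment["state"]         # sensors read the environment
        action = agent_function(percept)       # agent function decides
        history.append(action)                 # record the action history
        if action == "place O in corner":
            environment["state"] = "O placed"  # actuator changes the environment
    return history

env = {"state": "X in center"}
print(run(env, 2))  # ['place O in corner', 'wait']
```

Notice the feedback: the second time around, the percept has changed because the first action changed the environment.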
The robot is going to work off of a 2D tile screen, and our agent works off of its own sensors. Something like a camera, for example, could be informing our agent. If we think about it from a top-down design (there we are), maybe the agent can see the tile above it, to the left and right of it, and below it. Or, from a different perspective, some of the more modern video-game styles design the agent with a direction associated with it (I'm just trying to draw that in nicely). Since our agent is facing downward, it would have a field of view: instead of seeing things behind it, wherever those eyes are facing is where you'd project out however many tiles it can see ahead. And if we play the thinking game, what else could an agent have? Something like an antenna, for example, if it's communicating with other agents in the environment, sending and receiving signals about what those other agents are perceiving.

From Problem Set 1's perspective, what we're going to focus on is the idea that your agent can move up, down, left, or right, and can perceive up, down, left, and right. So in our case, say the agent sees a dirty tile to its right. When the agent makes the decision to move right, it's the actuators making that action happen: go to that tile. Then the agent also decides, "I need to clean this tile." So if our agent was right here and the dirty tile was there, the agent moved to the dirty tile (which, just to say it, is still dirty at that point), and then, one final step, the agent cleaned it. Each of those actions, "move right" and "clean," can be stored as what we call the complete history of our agent, or the percept history. In our simple example, maybe there was a dirty tile up here as well, and as the environment updated, I want my agent to know: "Hey, there was a dirty tile you haven't touched yet." That helps us decide what to do in our next action.

This is where a little bit of your computer science background starts to bleed in. In essence, you can think of this as a simple else-if chained conditional: if, else if, else do nothing. Thinking about those percepts, I have a conditional: if my agent is at A1 and A1 is currently a clean tile, then what should I do? Very simple: move to the right, as you can see here.
I'm using a very condensed version: this is a two-by-one environment, with only two tiles, A1 and B1. So: if I see that my tile is clean, let me move to the right, just to go see if that one is dirty. If, however, A1 had been dirty (here's that same dirt pile, or oil slick, or whatever we've got in that space), our agent sees it: clean this tile. Same kind of thing once we've moved our agent over to B1: if it sees a dirty tile, clean it; if it's not dirty, move right back over to A1. You can see it's very similar to a finite state machine, just going back and forth between these two tiles.

We can expand this, because as you can imagine, I could record something like the entire history: what the environment looked like one step before, or two steps before, or ten steps before. That lets me make better decisions. Thinking about the Minecraft example again: I punched wood, I tore down the tree, I now have wood, and all of that is the history of my agent's actions. So I can reason: since I have wood, I can make a crafting table, a house, tools, and things of that nature.

That starts to get us into what we call performance measures. Essentially, we're trying to evaluate how good my agent is. As you can see from the slides, there is no universal measure that decides these things. A tic-tac-toe-playing agent versus a self-cleaning robot versus a Minecraft-playing agent: each of those is going to act and behave differently, and they're going to have different goals associated with them. If we're thinking about tic-tac-toe, we would want number of wins.
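The chained conditional above can be written out directly. This is a sketch of a simple reflex agent for the two-tile A1/B1 world; the function name and the percept format are my own, not from the problem set.

```python
# Simple reflex agent for the two-tile vacuum world described above.
# The percept is (location, status); a chained conditional picks the action.

def reflex_vacuum_agent(location, status):
    if status == "dirty":      # dirty tile? clean it, wherever we are
        return "clean"
    elif location == "A1":     # A1 is clean: go check B1
        return "move right"
    else:                      # B1 is clean: go back to A1
        return "move left"

print(reflex_vacuum_agent("A1", "clean"))  # move right
print(reflex_vacuum_agent("B1", "dirty"))  # clean
```

Because the action depends only on the current percept, the agent just bounces between the two tiles, which is exactly why it behaves like a finite state machine.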
We would want to ensure that we are always winning. For the cleaning robot, maybe it's the number of tiles cleaned. And from the Minecraft perspective (as you can see, I do not have an art background), don't you have to kill a dragon in that game? My point is that each of these has different performance measures, so you're not going to have one solid, universal measure.

That's actually where we have to be a little mindful. Think about the fact that I said "number of tiles cleaned" for my agent. What does cleaning mean? It means the agent collects all the waste, maybe into a bin, and has to go unload that stuff later. If my agent happened to have an ability like "drop collection," and our performance measure was simply the number of tiles cleaned, what's stopping it from dropping all of the stuff back onto, in this case, B1, and then going back over and cleaning it again? It can just repeat that process, and it's technically doing a good job by our measure.

That's where we start to get into what I'm calling rationality versus omniscience. When we're designing agents in more real contexts, not just drawings on a PowerPoint, we're only looking to work off of expected outcomes. If I have an agent, say a self-driving car or a food-delivery agent, and I see a crosswalk coming up, I have to make some decisions, and I'm going to plan.
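To make that loophole concrete, here's a toy scoring function, entirely made up for illustration, showing how an agent that dumps its collected dirt back out can beat an honest one under the naive "tiles cleaned" measure.

```python
# Toy illustration of the loophole: if the performance measure only counts
# "clean" actions, an agent that dumps dirt back out and re-cleans it
# scores higher than an honest one. Action strings are illustrative.

def score(actions):
    """Naive performance measure: +1 per 'clean' action, nothing else counts."""
    return sum(1 for a in actions if a == "clean")

honest = ["move right", "clean", "move left", "clean"]
cheater = ["clean", "dump dirt", "clean", "dump dirt", "clean"]

print(score(honest), score(cheater))  # 2 3 -- the cheater "wins"
```

A better measure would penalize the "dump dirt" action, or score the amount of the environment that actually stays clean over time.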
There may be a pedestrian walking through, and thinking about it as a food-delivery agent, it has to make sure it can cross the road before it crosses the road. But what that means is that we don't account for everything. You're not going to see a whale and a bowl of petunias falling out of the sky (and if you get that reference, thank you for reading a good book). You're never going to see that, you're never going to plan for it, and you're never going to have it in your evaluations. What I'm getting at is that you have to limit it: you want to plan for as much as you can, but you don't need to plan for literally everything with minimal probability at stake.

What this turns into, as we start to design agents, is that we want to plan out the agent's PEAS: Performance measure, Environment, Actuators, and Sensors. Thinking about this in terms of the hot topic of autonomous, self-driving vehicles, what I've got on the slide is this: if I'm designing a self-driving vehicle agent, what would be some of the performance measures I'd be looking for? I'd want it to be safe. If we're dealing with a two-lane road and here's my agent, I don't want it veering into the other lane; you can see that's what "legal" is getting at there. And if I'm playing around with, say, a self-driving vehicle for somewhere like Uber, all right:
we're looking to maximize profits. (We'll deal with whether people actually want to get into a self-driving Uber later; that's another matter.) All of these different performance measures are things you can use to evaluate whether or not your agent is a good agent.

Then you have the environment. We're not thinking about the environment as literally the whole world the agent exists in, but rather: what are the different aspects the agent may have to encounter and interact with? For a self-driving car, the environment is the road, but more specifically, from a highway perspective, that's something like lanes, and then there's other traffic. There's a car going this direction, and if this were a four-lane road, maybe a car over here and another over there as well. Each of those is an element our agent has to deal with. Pedestrians too, looking back at the whale-and-petunias example, same kind of idea.

Then we've got the actuators: what methods is the agent going to have to manipulate or interact with the environment? For the self-driving car, gas needs to be fed into the engine (or, if you've got an electric vehicle, a signal to simply go forward). Rotation too: if here's a little turn that our road makes, we need the agent to be able to rotate the vehicle in some direction. Same kind of concept for braking: if a car is coming around or is ahead of it, we need to be able to slow down. And maybe some other actuators on the idea of letting other people know: the horn, for whatever reason.

The final category, as you can see, is how the agent perceives the environment: the sensors. This is where you get into a very big spectrum, because self-driving cars are very dangerous, so you've got almost every sensor out the wazoo: cameras, lidar, odometry, GPS. All of those nuts and bolts are being fed in to make sure this thing doesn't crash into people.

That's where I'll use a different example, rather than the self-driving car, the giant-death-machine-on-wheels kind of thing. Another place we've seen artificial intelligence used is antenna design. This link, "Automated Antenna Design with Evolutionary Algorithms," was work actually done by NASA a little over 15 years ago. The idea was: it's expensive to send satellites into space, so it's really hard to test these things out there. What if we used an AI agent to simulate and make better decisions about what they were calling novel designs: what kinds of antennas can work while still maximizing what they cared about? Taken from the paper, those were things like maximizing beam width and impedance (I'm a computer scientist, so I'm not going to act like I know those words). And since it's going into space, there are a lot of calculations, so things like the antenna's weight and size needed to be accounted for, because exceeding them throws those calculations off. Those were the performance measures for whether the design of a given antenna was a good one.

So what are the elements of the environment that this agent would need to work off of? Again, I'm not a rocket scientist, but you could think:
There are things like radio frequencies that the antenna will have to interact with, and there's another satellite, another antenna, that it needs to communicate with. Those are the things the agent needs to work around.

That brings us to the actuators. In this design, the actuators were the actual antenna design itself: not the satellite, but the process of designing out the antenna. Going through the paper, you can see they were extruding out the antenna wire, and the agent had, depending on how you want to count them, two to four actions: forward, rotate x, rotate y, and rotate z. The purpose of each is that as the wire is going out, the agent asks: do I rotate the direction of the next wire segment? In this case it says, "rotate x, y, and z to this angle, then move forward this many centimeters or millimeters." Then, from that spot, it makes the same kind of decision again: where do I rotate, then move forward; rotate, move forward; rotate, move forward; until, in this case, it ends. Each one of those was an actuator: which motion do you deploy and enact at this stage of the design?
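To make the forward/rotate actuator idea concrete, here's a hedged sketch in two dimensions. The actual NASA work operates in 3D with rotate x, y, and z, and this command encoding and `build_wire` function are my own illustration, not the paper's representation.

```python
# 2D sketch of the "extrude a wire by commands" idea: each command either
# rotates the build direction or extrudes the next segment forward.
import math

def build_wire(commands):
    """Trace the wire from a list of ('rotate', angle) / ('forward', length) commands."""
    x, y, angle = 0.0, 0.0, 0.0
    points = [(x, y)]
    for op, value in commands:
        if op == "rotate":                    # turn the direction of the next segment
            angle += value
        elif op == "forward":                 # extrude a segment of this length
            x += value * math.cos(angle)
            y += value * math.sin(angle)
            points.append((round(x, 6), round(y, 6)))
    return points

genome = [("forward", 1.0), ("rotate", math.pi / 2), ("forward", 1.0)]
print(build_wire(genome))  # [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
```

An evolutionary algorithm, as in the paper, would then mutate and recombine command lists like `genome` and keep the designs that score best on the performance measures.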
Then there's the next bit (let me change my color back, since it's been red): what the paper considered its sensors. You've got things like the voltage standing wave ratio, the gain values of the receive and transmit frequencies, maintaining the uptime of a particular radio frequency, and making sure you have good signal strength between your antenna and the satellite you're connecting to.

With that, here's a little thinking activity. Let's imagine, for our sake, that I have a self-gardening robot (this is the worst drawing), and I have a bunch of fruits on a plant; let's call it a tomato plant. With that in mind, in the comments or just thinking about it by yourself: what would be the performance measures of a self-gardening agent that wants to harvest tomatoes? What kind of environment is it going to be working in, or have to interact with? What are the actuators for harvesting these tomatoes? And what kinds of sensors would that agent need to have? For our sake, don't throw the book at it with sensors; don't just copy and paste what you saw from the self-driving car. You don't need lidar for this (I don't think). But what would be a sensor that you would want on your agent to make sure that you're harvesting tomatoes?
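If you want to work through the activity on your own, here's a blank PEAS worksheet to fill in. The structure just mirrors the self-driving-car breakdown from earlier; the dictionary keys and the helper are my own labels, and the commented hints are suggestions, not answers.

```python
# Blank PEAS worksheet for the tomato-harvesting agent. Fill in each list.

peas = {
    "performance": [],  # e.g. ripe tomatoes harvested without damage?
    "environment": [],  # what does the agent have to interact with?
    "actuators":   [],  # how does it manipulate the plant?
    "sensors":     [],  # how does it perceive ripeness, position, ...?
}

def is_complete(spec):
    """The worksheet counts as filled in once every category has an entry."""
    return all(len(entries) > 0 for entries in spec.values())

print(is_complete(peas))  # False -- your turn
```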