Welcome, everyone. I'm delighted to be here today at the LF Edge Summit, and I'm here with my co-presenter Glen Darling. My name is John Walicki, and we are really delighted you're joining our tutorial today. We've got a whole session on how to build and deploy AI workloads at the edge. We're going to talk about Open Horizon, the open source project that is part of LF Edge, and then we're also going to containerize some actual workloads and deploy them to my Raspberry Pis behind me. The way we're going to structure this: we'll introduce you first to Open Horizon — Glen will take you through that — and then I'll take it away with some hands-on demo. Glen, go for it. Okay, cool. So I was part of the original development team of six engineers that built this software at IBM before we open sourced it, so I'm able to answer your deep technical questions if we have any of those as we get a little farther in. Now, though, I have a business development role for our IBM version of the product. So what is Open Horizon? Well, it's fleet management for containerized software and data files, and it supports most kinds of standalone Linux hosts — 512 megabytes of RAM and up. I actually sometimes deploy it onto a Raspberry Pi 1 that only has 256 megabytes of RAM, and it works just fine there, but the workloads can't be very large. We support x86_64, ARM 32 (v6 and above), and ARM 64, which is the v8 platform, and we even support ppc64le. There's also a group of folks who have made it run on a RISC-V host, but that's not been released yet. And we support most types of Kubernetes clusters, so you can either manage containers on a cluster or manage containers on standalone Linux hosts. Our very small agent — about 30 megabytes of RAM at runtime — is fully autonomous and runs on each of the nodes, driven by the policy that you set for it.
The agent does not bind or listen on any external ports, so you can completely firewall off the agent; it only reaches out to our management hub to find out things, and nothing ever tries to connect directly to the agent. All of the components in the system offer a rich set of REST APIs, and we also provide a CLI for easy access. New with the most recent release, we have secret management, so you can securely deploy credentials to your edge machines using Vault. And we have Secure Device Onboarding, or SDO, which enables truly zero-touch install: all you have to do is connect the power and the network and walk away. You don't need to do any installation on the host yourself at all — if you purchased an SDO device, the SDO software will reach out, get all the right software, and install and configure itself. We also support independent life cycles for your code and your data files. The data files we were thinking about were these very large machine learning model files — neural network model files. Open Horizon enables you to continue running your neural network and update the model without any service downtime: it can use the old model on one inference, then switch over and use the new model on the next inference, without you shutting the container down and bringing it back up or anything like that. And of course Open Horizon is open source, and its governance is under the Linux Foundation's LF Edge project. There's a link there, and here's a QR code that you can use to hit that link. And, you know, IBM pays my salary, so I have to talk about the IBM commercial distribution of Open Horizon. That's called IBM Edge Application Manager, or IEAM for short, and there's a link to it. It supports up to 40,000 nodes per management hub, and a node can either be one of those standalone Linux devices or an entire Kubernetes cluster counted as a single node.
And of course we have 24x7x365 support from IBM, and we also add a graphical web UI to complement the REST APIs and CLI that are available in regular Open Horizon. And there's a QR code you can use to reach the IBM commercial distribution. So, just a quick peek at what the web UI of Edge Application Manager looks like: you can hit nodes, services, patterns, and deployment policies, and interact with them — create them visually in the web client. So let's talk about the architecture. It's based on Linux containers, and it supports any Docker-compatible registries for these containers to live in. The agents are decentralized, fully autonomous, and untrusting — we use a zero-trust model. And when I say decentralized — that word seems to be popping up in a few places now, so I want to clarify it. In fact, the original version that we built used only peer-to-peer technologies: we used the Ethereum blockchain for rendezvous and for agreement, BitTorrent for transferring files, and Whisper for communication between the components. We had no central components at all; it was completely decentralized, and the agents on the edge nodes talked to each other to arrange for software deployment. Now, to get greater scale, we did add a central management hub, and we've kept it minimal: there are many components in the management hub, but they're all very small, and each has a very focused role. Having small scopes of authority minimizes the risk of systemic compromise — even if you were to compromise one of the components of the management hub, you wouldn't be able to do very much with it. So we have a zero-trust model: we use certificates, encryption, and cryptographic signing everywhere. And we really worked to preserve the privacy of the edge nodes — perhaps surprisingly, even the IP addresses of the edge nodes are kept private in the system.
And we allow independent life cycles for code and data, like I mentioned, and to minimize human involvement we use a zero-ops or zero-touch approach wherever possible. We want to be able to automate everything in this environment, because when you have 40,000 nodes it's really difficult to get things right if a human is involved in the process. So you just state your intent, which we call policy, and then the agents collaborate with the management hub components to make it real across your entire fleet of edge nodes. So let's take a little closer look at Open Horizon — how does it work? We have an autonomous agent on each edge machine (or, for clusters, on each node), and it will keep working even when it's disconnected from the network. An example of that: if you are deploying a workload and it keeps failing over and over, the agent can automatically roll back to an older version that you had running on that same node, even if the network is down. You specify how you want that to work by creating rollback policies, and there's quite a flexible language for defining multiple levels of rollback: what does a failure consist of, and how many times does it need to fail within how much time before you consider it a real failure. And let's talk about the management hub a little. It has a bunch of different components, as I mentioned. There's the exchange, which is really just a way to access the shared database of items that have been published into the system. Then we have the agreement robots, which we call agbots, that collaborate with the agents — in that first version we wrote, agents and agbots were really just the same piece of software. We now run the agreement robots in the management hub, but originally they lived on all of the individual nodes themselves. And the switchboard is a blind mailbox service that the agents and agbots use to talk to each other.
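That rollback language lives where you list service versions. As an illustrative sketch — the version numbers and timing values here are made up, and the exact field names should be checked against the Open Horizon documentation for your release — a pattern's serviceVersions section might look like this:

```bash
# Hypothetical fragment of a pattern definition. A lower priority_value
# means "preferred"; retries/retry_durations define what counts as a
# real failure before the agent falls back to the next version.
cat > serviceVersions.fragment.json <<'EOF'
{
  "serviceVersions": [
    {
      "version": "2.0.0",
      "priority": {
        "priority_value": 1,
        "retries": 2,
        "retry_durations": 3600
      }
    },
    {
      "version": "1.0.0",
      "priority": { "priority_value": 2 }
    }
  ]
}
EOF
```

With a fragment like this, a 2.0.0 workload that fails more than twice within an hour would be abandoned in favor of 1.0.0 — and because the agent holds this policy locally, the rollback works even while the node is disconnected.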
And then there's the model management service — this is the thing that enables the independent life cycles for the data files. We have the secrets manager to securely distribute credentials, and a rendezvous server, which is optional; it can be used with Secure Device Onboarding, and I'll talk about that in a moment. And then we have the web UI — the graphical component — if you use the IBM version. So let's see how the pieces fit together. First of all, here's the exchange with all of those pieces I just itemized for you. You need some Docker-compatible container registry — there are multiple ones out there. And then you need some edge nodes, so here's an example edge node that is an SDO computer. If you purchase a computer that has the SDO software installed, it will come with an ownership voucher, and the voucher and the SDO software are matched: they have information about which rendezvous server to use, and they have a unique identifier for this particular host. So when you purchase it, it's already set up like this, and you just get the ownership voucher that goes along with the computer — I'll talk about how that works in a moment. Or you can take any other edge computer and install our agent onto it: there's a single script that you wget down onto your node and run, and it will install the agent and the ESS, which is the part of the model management system that allows for those independent life cycles I was talking about. So let's take a look at a couple of typical onboarding scenarios for these two types of edge nodes. First, Secure Device Onboarding, or SDO. With that, you begin by importing the voucher into the management hub exchange. Once that has happened, the exchange will look inside the voucher at which rendezvous server it specifies.
It may specify the one inside our management hub, but that's usually only useful if you're creating your own SDO computer by installing the SDO software yourself manually, which some of our partners do. Normally, though, you would buy an SDO computer, and it would have its own rendezvous server from the manufacturer. In any case, the exchange will reach out to whatever rendezvous server it is, register that voucher, and say that this exchange is going to manage that node. At that point you're all set — all you need to do in the field is take that SDO computer, plug it in, and connect it to the network. Once it comes up, the onboard SDO software will reach out to the rendezvous server — and as I said before, it could be a different rendezvous server; it doesn't matter, because the exchange will have spoken to the rendezvous server to give it the information. The rendezvous server then says: here is all the software you need to install, here is how to configure it for this particular exchange, and here is the particular role this little computer is supposed to have in this environment. So at that point you have an agent that's registered, and the ESS software as well. The agent will work with the management hub to get the appropriate software installed for the role you designated, using the policy that you designated for this device to come on board with. Now, if you're doing manual onboarding, you need to somehow connect to the edge computer: you can use the console if you connect a keyboard and monitor to the machine, or you can SSH over the network to the machine. Then you configure your credentials in the Linux shell environment, and you download and run the agent-install.sh script — the result will be the same as if you had used the SDO onboarding software.
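The voucher import step at the start of the SDO flow is a single CLI call. A hedged sketch — the filename is a placeholder, and the exact registration options vary by release, so check the command's own help:

```bash
# Import the ownership voucher that shipped with the SDO device into the
# exchange, which then registers it with the voucher's rendezvous server.
# (Options for specifying a pattern or policy at import time vary by
# release -- see `hzn voucher import -h` in your installation.)
hzn voucher import voucher.json
```

After this, the device simply needs power and network; the rendezvous server hands it everything else.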
Some notes: once you have a registered device, you can choose to register it either with a deployment pattern or a deployment policy — there are two different mechanisms in Open Horizon. Policy is the underlying foundation of both, and it's much more powerful and flexible, but patterns are simpler for beginners to use. So when people start out we usually suggest they use patterns, but you'll want to graduate to policy as soon as you're ready. The policies that you set govern the autonomous agent's behavior: the agent does everything it does based on the policies you set when you registered it, and of course you can change those policies over time. So once your autonomous agent is registered and has its policy, how is the software managed on these machines? Well, first of all, note that the agent can be completely firewalled. There's no need to open any ports for access to the agent; the agent installs no listeners on any external ports. It does install a listener on the host loopback, which is convenient when you are executing commands on the local host — if you do connect to the host and want to do something, that's especially useful during development. But in production the agent cannot be reached from the outside, so there isn't even the possibility of somebody connecting to the agent to hack it. The agent, on the other hand, always reaches out: it reaches out to the exchange and to the switchboard — those are the two components it talks to in the management hub, and both are accessed on the same port, 443, the HTTPS port. The agbots — they're the same color as the agent here because, as I mentioned earlier, they're actually the same piece of software — also reach out to the switchboard and the exchange from within the management hub.
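Registration with each of the two mechanisms looks roughly like this — the pattern name and the policy file contents are illustrative, not from this demo:

```bash
# Beginner path: register the node against a published deployment pattern
hzn register -p IBM/pattern-ibm.helloworld

# Policy path: describe the node with properties and constraints instead,
# and let deployment policies in the hub decide what lands here
cat > node.policy.json <<'EOF'
{
  "properties": [ { "name": "hardware", "value": "raspberrypi" } ],
  "constraints": []
}
EOF
hzn register --policy node.policy.json
```

A node runs exactly one pattern, but under the policy model multiple deployment policies can match a node at once — one reason to graduate to policy.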
More notes: the agents make their own decisions about what software to run on their local hosts, and they do that based on the policies you set for them. All of the communications in this system are encrypted, and the agents are known only by their public keys — when they create a mailbox for themselves in the switchboard, they identify themselves only with their public key, and that's how the agbots reach out to them, using their public keys as their address. Also, the most important communications in the system — the agent-agbot communications — have perfect forward secrecy, meaning that each individual message sent is encrypted with a different key. And only the agents and agbots have the key; the switchboard does not have it at any time. So the switchboard cannot read or insert communications between the agents and the agbots — even if you compromised the switchboard, you could disrupt communication, but you could not get the agent to run something it did not think was appropriate. Also note that the agents each independently pull their container images from your set of one or more Docker-compatible registries. And your containers, the data files that you deploy, and your deployment details — things like which ports your container is allowed to bind to, which volumes it's allowed to bind into the container, whether it runs with various capabilities or privileges, whether it's able to access particular devices on the machine — all of those things (containers, files, and deployment information) are cryptographically signed by the authors, and I'll talk about that a bit later too. And each agent independently verifies the checksums of the containers, the cryptographic signatures of the containers and their deployment information, and so on, before it acts to deploy any container.
And the agent, as I mentioned, is able to fall back to old versions of your containers on failure, even if the network connection gets disrupted. When we designed this system, we were imagining an environment where these things are in the field with very unreliable network connections. Okay, I just want to add one more little detail — hopefully you're not falling asleep out there. How do you deploy software in an environment like this? Here's our developer. First of all, she has to containerize her software — we only deal with containerized software — so she uses docker build to build the container. Eventually she has to docker push it to some Docker-compatible registry, and of course these can be secure registries that require a read token to access the image, or a public registry like Docker Hub. Then, when it comes time to put this software into Open Horizon, the developer needs to cryptographically sign it and publish it as a service in Open Horizon. Once the service has been published, you can use either software deployment patterns or software deployment policies to regulate which nodes in your large fleet of edge machines the software is deployed to. And later, if you also want to use the model management system, you can use a similar mechanism to deploy your data files using the CSS, the cloud sync service part of the model management service. "Cloud" is a little bit of a misnomer there, because the management hub, although it can be in any of the public clouds, can also be on premises and disconnected from the internet. In factory environments, for example, they often do not want an internet connection — some large percentage of factories will not allow one — so the management hub can live inside the factory in that environment. Okay, so there's a lot more to Open Horizon, of course — I left out all the cool stuff, really.
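The developer flow just described can be sketched as a few commands — the registry path, image name, key email, and file name are all placeholders:

```bash
# 1. Containerize and push the workload to any Docker-compatible registry
docker build -t myregistry.io/myorg/hello-ai_arm64:1.0.0 .
docker push myregistry.io/myorg/hello-ai_arm64:1.0.0

# 2. Create a signing key pair once; publishes are signed with the
#    private key and agents verify with the public key
hzn key create myorg deployer@example.com

# 3. Cryptographically sign and publish the service into the exchange
hzn exchange service publish -f service.definition.json
```

From there, a pattern or deployment policy referencing the service controls which nodes in the fleet actually run it.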
So learn some more about it so you can see why Open Horizon is better than the rest. Here's a quick summary: agents are autonomous, firewalled, and driven by policies that you set; nothing ever initiates contact with agents — agents always reach out to the management hub; agents are identified only by their public keys, so it's high privacy; all code, all data files, and all deployment details are cryptographically signed. It's highly decentralized, so it scales extremely well, and the agents can continue to function when disconnected from the hub. All of the comms — even between the internal components in the management hub, by the way — are encrypted, and the agent-agbot communications have perfect forward secrecy, as I mentioned. And even if you were able to compromise the management hub, you still could not take control of the agents, because they have the policy that you've set for them on board — if a management hub reaches out and tells them to do something that violates their policy, they simply won't do it. Okay, so now it's about time to hand over to John to give us an actual hands-on demo. So John, are you ready to take it away? Yes, I am, Glen — thank you for the background. So now we're going to dig in. Let me share my screen, Glen, if that's allowed. I'll take control there. And hopefully you see that. Perfect. Right, so what are we going to do? We're going to do an introduction to IEAM — IBM Edge Application Manager. We're going to walk through the actual components that Glen introduced you to. If you want to follow along, these instructions are available in my public GitHub; I just dropped the link into the chat — it's github.com slash johnwalicki slash introduction to IEAM. So let's just dive into that. What we're going to do is actually install IEAM and the Open Horizon agent on two of the Raspberry Pis. Do you see my cluster behind me, Glen?
I've got a little rack here of six different Raspberry Pis. They do a variety of things in my house — control my sprinkler, my back door, and so forth. I'm going to focus in on two of them. From the top of the rack, I've numbered them rack one through rack six; we're going to play with rack one and rack five. They run different versions of Raspberry Pi OS and Fedora 34. What we're going to do is walk through the instructions, actually build a variety of the packages, and install the agent on these systems. All right, let's give that a try. But also, remember when Glen talked about the Open Horizon management hub and the differentiation that IBM adds on top of it: we build a web user experience. I'm going to show you that, so let me just move this out of the way. I'm going to log into the IBM web console — we've got an instance running on Red Hat OpenShift in the IBM Cloud; let me just pick one that I know is interesting. I'm going to very quickly show you that, and then we're going to come back to my actual Raspberry Pis. Okay. All right, so you see here that I've got nodes, services, patterns, and policies. Glen talked about each one of them, and I'm going to dive in a little bit more, so that we can look at an edge node — some of my Raspberry Pis. Those are the services, which are the containers, for ARM, ARM64, Intel x86, PowerPC — RISC-V is coming, I heard, Glen, that's pretty exciting. So those are the services, those are the containers: we're going to put our AI workloads inside of a container and deploy them as services here. Then we define a set of patterns and policies that tell IEAM and the Open Horizon management hub which agents should run them, based on a whole set of constraints and properties. So those are the four big attributes that we're going to talk about today.
So you see here that when I look in my nodes, I've got an ARM64 — that's going to be rack five — and an ARM 32 Raspberry Pi — that's going to be rack one. And at some point I had also installed my x86 laptop into this particular management console. We're going to follow along with my instructions here — we built this for a workshop just a couple of months ago — and it's a quick introduction to some of the commands that you're going to run on your edge nodes to manage and configure them into the console. On architecture, I think Glen did a great job of introducing you to the agbot, the exchange, the management service, and SDO. We're going to push our containers to Docker Hub, but in production I push most of my containers to private container registry services — the IBM Container Registry service, Quay, a whole variety of them out there — and the agent and the exchange will take care of all the credentials for you. Okay. All right, let's get started. So I've already logged in here. I showed you a quick intro to the nodes, and I showed you the patterns and the policies, but now we're going to go explore the edge. I have my terminal here on my Linux laptop. The way you configure this, if you don't use SDO, is that you set a variety of environment variables, and you'll see me use them. I've got a variety of HZN variables — hzn, for Horizon; that's the CLI, and we're going to run a bunch of CLI commands. I've already put them into my .bashrc file, so I don't have to copy credentials around in front of you all. I'm going to use these Raspberry Pis; I've got them configured here. All right, so I've named my edge nodes, and if you look over here in the management console when I click on them: one is called with my initials, JW — pi4 edge one, that's the top one; pi3 edge five is further down in my little cluster.
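The environment variables being referenced are roughly these — the values are placeholders, and an IEAM installation gives you the exact URLs for your hub:

```bash
# Identity: which org and user the hzn CLI acts as
export HZN_ORG_ID='think'
export HZN_EXCHANGE_USER_AUTH='iamapikey:<your-api-key>'

# Endpoints of the management hub components
export HZN_EXCHANGE_URL='https://<your-hub>/edge-exchange/v1'
export HZN_FSS_CSSURL='https://<your-hub>/edge-css'
```

Putting these in .bashrc, as here, means every hzn command picks up the credentials automatically.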
One of them is unregistered, one is registered; they're not running any workloads yet, and we're going to actually deploy some workloads. Okay. All right, the first thing we're going to do — let me put that in my paste buffer, because I do want to uninstall and then install here — is take a look at rack five. You'll notice that it is running Raspbian, or Raspberry Pi OS 64, sort of the beta, and it is an ARM64 image. I've already configured this one into my exchange. I'm going to run the hzn node list command. You see the name — you saw that up in the console. The org is think. It is not running any particular workloads; we can run some workloads here. The important thing is its architecture — we support multiple architectures with IBM Edge Application Manager and Open Horizon. Let's take a look here — hopefully this is done; I was running a test in the background, Glen. I was hoping we could get this going, but if not, let's go ahead and do an hzn unregister. Actually, I'm going to scroll this to the top, and I'm going to do an hzn unregister — I'm going to just whack the whole thing. Okay, so now we're removing this particular edge node from the exchange, and we're going to reinstall it — hopefully I've got my whole environment cleared, and we'll give that a try. The other one is — what's it doing? It was building my edge workload, and it was going slow. It is a Raspberry Pi, let's be honest. All right, so now we're unregistered here. Let's clear that screen again, and — remembering my paste buffer — I'll paste. I'm in the right directory here. So now we're reaching up to the management console. You'll notice here I'm going to pull down a zip file that has all the components in it, and it's going to install them — if I had all the auth tokens right, which I don't have on this particular one. I'm going to just do it here, Glen. Clear. Oh, the fun of live demos.
I know, it's great fun. hzn unregister -vrf. So we're going to do the same thing I had started over on rack five — I'm going to do it on rack one. So now I'm unregistering from the exchange, and then I'm going to step through the registration of an edge node and run some of the commands. There we go. So let's now cd into the right directory, like I was earlier, and do a paste. You'll notice here what I'm going to do: I'm running the agent install, I'm reaching up to the CSS, and by default I'm going to install a little test pattern — the IBM web hello test pattern. You'll see that run in a second. That's why it was running so slow there — I can go set up tokens here. Let's see, I think I've got them in a .bashrc file. Yes, I do. Let's just see if I can source that and register to think. Excellent. Now let's go run the agent — there we go. I'm going to run the pattern and policy set — actually, what I'll do is I won't register the pattern yet. There we go. Both of them are now running, Glen. That's good. All right, so here's what we're going to see next, as it installs the components. There's a new version of — did we talk about the open source project much yet? About Open Horizon? Because that's why we're here at the LF Edge Summit. Yeah, Open Horizon is an LF Edge project, right. And you can see the various components there: anax is the software that's used for both the agent and the agbots, and the exchange API is the software that's used for both the switchboard and the exchange. And there are various other things — the examples repo has a whole bunch of example workloads that we use to show things working, DevOps is our mechanism for doing releases, and there's the edge sync service.
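The agent-install step being run here looks approximately like this — the flags are from IEAM documentation of this vintage, so treat it as a sketch and check agent-install.sh -h in your release:

```bash
# Pull the install script down from the CSS using your hub credentials
curl -u "$HZN_ORG_ID/$HZN_EXCHANGE_USER_AUTH" -k -o agent-install.sh \
  "$HZN_FSS_CSSURL/api/v1/objects/IBM/agent_files/agent-install.sh/data"
chmod +x agent-install.sh

# Install, configure, and register in one shot:
#   -i 'css:' -> fetch the agent packages from the CSS
#   -p ...    -> register with this deployment pattern
#   -w '*'    -> wait for any service to start, up to -T seconds
sudo -s -E ./agent-install.sh -i 'css:' -p IBM/pattern-ibm.helloworld -w '*' -T 120
```

The -E on sudo preserves the HZN_* environment variables set earlier so the script can reach the hub.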
That's the part of the model management system that lives on the edge device with the agent. And the SDO support software is the instructions for how to deploy the SDO core software onto a simulated SDO device, if you wanted to do that for yourself; it also has the rendezvous server that we run in the management hub, in case you want to use it — normally people don't use the rendezvous server in the management hub. Actually, even when I'm making a simulated SDO device, I usually just point at one of the manufacturers' rendezvous servers — like Intel's, as Intel was the original creator of the SDO software. It doesn't matter; any rendezvous server will work. All right, let's take a look at how we're doing on mine. Remember, I'm running the agent install on rack one and on rack five. It looks like we've downloaded the packages — I don't know why everything has gone super slow for me. No one's watching Netflix; I sent everyone out to school. That's true — working from home is always fun. Yeah, so here's what we're going to do next; let's talk through what's going to happen. I just want to introduce you to a couple of things. Throughout the little tutorial, I actually exercise, hands on keyboard, the various commands that I want you to get experience with. hzn version tells you what version of the agent you're running on your edge node. And you saw that I can run hzn unregister, so I can unregister myself from the exchange. Then I can watch it configure a particular agreement: depending on my node and what my constraints, policies, and properties are, the exchange and the agent will determine a binding agreement that says you should run this particular workload. And that's what you can list out with hzn agreement list.
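The day-to-day commands being exercised in the tutorial, gathered in one place:

```bash
hzn version            # agent and CLI versions on this edge node
hzn node list          # this node's registration state and architecture
hzn agreement list     # agreements currently in force with the agbots
hzn eventlog list      # what the agent has been doing lately
hzn unregister -f      # leave the exchange (add -r to also remove
                       # the node entry from the exchange)
```

These all run locally against the agent's loopback listener, so they work even on a fully firewalled node.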
Yeah — just to state that a little more precisely, the agbots inside the management hub will reach out and make a proposal to the agent on the edge node, and the agent is the one that decides whether it runs or not, based upon the policy that exists on the agent. Then you can look at what agreements have been formed and what agreements have been rejected by the agent. But generally speaking, the agbot knows the policy, so the agbot only makes proposals that it's confident the agent should accept. The only thing that might stop the agent from accepting is that it might not have an updated policy yet, and then it would just be a matter of time until the next time it reaches out to see if its policy should be updated. This is what I expected, Glen — we must have had a hiccup there. You'll notice that when I said to install the pattern, it installed the agent, configured it, and started it up, and now it is downloading the pattern — IBM web hello, hello world — and the agbot is proposing an agreement. So there you see we've started to establish that agreement, and there — it's finalized. And now the service is being pulled down — there's actually a container for web hello, and in just a moment it'll get started. It takes a minute for it to check the checksum of the container, check its cryptographic signature, and so on. We'll actually talk about that, because when we build a container later — fingers crossed to the demo gods — we'll actually have to sign our container, because we want to make sure that it's cryptographically the right container, and we're going to put the details of that in the exchange. All right, so there — hopefully it's going to get started. Oh, you all know that you can ask questions right in the chat — you're welcome to do that; we're watching the chat, both Glen and I, off to the side. Yeah, the chat and the Q&A. There we go — it's started.
Let's see how it's doing — mine is still stuck. We're only able to see a portion of your window; it's cut off at the bottom. Yeah — is that better? Yeah. Oh, I guess it's the toolbar at the bottom of Zoom that's covering it up. I thought it was your end, but it's my end. Okay, so here we go — that's a success. Great. So now we can do an hzn agreement list, and you see that the web hello — the hello world pattern — is now running. And if I do a docker ps — I'll do a clear first just so that you can see — the appropriate container for ARM is now running on my Raspberry Pi. All right. You could look at the output from that container if you wanted to. Yeah, we're going to get to that in a moment, Glen. We're going to use hzn node list, and let's take a look: we've got our Raspbian pi4, rack one; it is running the ARM architecture, and we've got ourselves a pattern that's active. So those are the types of things that we can pull out of node list. All right, let's keep taking a look here. There's always hzn help if you get stuck — you can do that. Actually, let's also mention that with any hzn command, even a partial command like the next one John has typed there — hzn exchange service list — you can at any point in the typing, like hzn exchange service, put a -h after it, and it'll give you help that's context sensitive to whatever part of the command you're in. And there's also a man entry for hzn installed, so you can go man hzn to get details there as well. Just useful tricks. So let's drop our next command in here: hzn exchange service list. We want to see all of the containers — all of the services — that are available in my management hub. And you'll notice that there's a mix here: there's x86, there's ppc64le, there are some ARM containers, and there are ARM64 containers.
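The help tricks and the listing command just mentioned, as typed:

```bash
# Context-sensitive help works at any point in a partial command
hzn exchange service -h
man hzn

# List the services published in the IBM org of the management hub
hzn exchange service list IBM/
```

The -h flag describes only the subcommand level you've reached, which is handy when exploring an unfamiliar part of the CLI.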
So we're not confined to a single architecture; our edge can be a whole variety of architectures. Someday in the near future, Glenn, I'm hoping we see RISC-V as well. I should note, though, that this is not all of the services in the management hub; it's just all of the services in the IBM organization in the management hub. Actually, we should probably change that name, but the IBM org is the one where all of the publicly usable services go; everything we publish as an example goes into the IBM org. In Open Horizon, each individual user is part of an organization, so if I were ABC company I might be in org ABC, and there might be 20 users in org ABC, and I could do the exchange list without specifying IBM/ and I'd see all the services that are available in the ABC org. And of course I could also publish my service with public equals true to make it available to other organizations besides the one I'm in, so if a company had multiple organizations that might be a useful thing. It's a multi-tenant system, so there can be many orgs. In one of the management hubs we're using right now, and I think, John, you're using this one as well, there are 50 different organizations currently, for various different companies or various different departments within our company. Correct. All right, so the next command I want to exercise for everyone is hzn exchange pattern list, and I ran that against IBM. These are some of the example patterns you saw in the examples directory. We've got a number of examples: there's the hello world (we're running that one already), one that just exercises the secrets manager, the model management service, event streams. You'll see here different architectures again: Intel x86, ARM, ARM64.
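The org scoping and the public flag described above boil down to a simple visibility rule. A hypothetical sketch with invented data and field names (not the exchange's real schema): a user in org abc sees services in their own org plus services elsewhere that were published with public = true.

```python
# Hypothetical catalog entries; field names are illustrative only.
services = [
    {"org": "IBM", "url": "ibm.helloworld", "public": True},
    {"org": "abc", "url": "abc.sensor",     "public": False},
    {"org": "xyz", "url": "xyz.private",    "public": False},
]

def visible_to(user_org, catalog):
    """A service is visible if it is in the user's org or marked public."""
    return [s["url"] for s in catalog if s["org"] == user_org or s["public"]]

print(visible_to("abc", services))  # ['ibm.helloworld', 'abc.sensor']
```

Note that xyz.private stays invisible to the abc user: multi-tenancy means each org's private services are isolated from the others sharing the hub.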
So, while we're looking at that list, actually, can you bring it back up again for a second? I've mentioned a couple of times in the talk that we also support various Kubernetes clusters as nodes. When you deploy to a Kubernetes cluster, what you have to deploy is an operator, so you use the Operator SDK to create an operator instead of a plain container. Operators are very powerful tools that enable you to exercise any of the features of Kubernetes: scaling, redundancy, failover of the entire service. All of those kinds of features can be built into your operator if, say, you need mobility of your service from one node to the next, that kind of thing. You can deploy an operator as a service if it's a cluster node. So those operator ones we can't deploy on this Raspberry Pi, because it's not a cluster, and by the same token we can't deploy a simple container onto a cluster; you have to deploy operators onto the cluster. Right. All right, we're going to run out of time here, so let's keep going: hzn exchange deployment listpolicy. Let's just talk about that for a second. I'll clear my screen here; in my particular management hub I don't have any deployment policies, so you'll just get an empty array, but if we were to scale this up, we would certainly define a whole set of policies. The interesting thing about policies is that you can run multiple policies on a machine, whereas you can only run one pattern on an edge node. All right, let's jump back over and scroll down. Now, remember you were asking about logs; let's go take a look at the event log. Well, the event log is the log of the agent; I was talking about the log of the container itself, but okay. Yeah, so I could do a docker logs -f on that. Right, but those are just normal Docker commands; I was really focused on the hzn commands. Okay, gotcha.
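The deployment-policy idea, properties advertised by the node matched against constraints in the policy, can be sketched very roughly. This is a toy illustration with an invented, simplified constraint form; real Open Horizon constraint expressions are a richer text language with operators and AND/OR, so treat the shapes below as assumptions:

```python
# Toy node properties, loosely in the spirit of Open Horizon node policy.
node_properties = {"openhorizon.arch": "arm", "sensor": "camera"}

def satisfies(properties, constraints):
    """Toy matcher: every constraint is a required key == value pair.
    Real constraint expressions support comparisons and boolean logic."""
    return all(properties.get(k) == v for k, v in constraints.items())

camera_policy = {"openhorizon.arch": "arm", "sensor": "camera"}
gpu_policy = {"openhorizon.arch": "amd64", "gpu": "true"}

print(satisfies(node_properties, camera_policy))  # True: agbot proposes
print(satisfies(node_properties, gpu_policy))     # False: no proposal
```

Because matching is per-policy rather than per-node, several independent policies can land workloads on the same machine, which is exactly why policies scale differently from the one-pattern-per-node model.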
If we can go there, let's just do it. I'll do a clear so folks can see. Then I can do a docker logs -f on 311 here, and every couple of seconds it echoes web hello world. Yeah, it's not a very exciting example. Yeah, and now we've containerized AI models; we've got a lot of different ones there. All right, so we've got another one that we're going to actually build interactively here. Glenn, you've got one here called web hello. I git cloned it into my repos on my local Raspberry Pi here, and I cd into that directory. Of course I want to do a docker login, because I'm going to push it up to Docker Hub. Okay. Then the next thing I'm going to do is jump over to that directory. Let's do a clear and an ls. So it looks like you've got a Dockerfile that's going to define what the container looks like, a Makefile to go build it, and a Python script that's probably going to execute the little web server, web hello. Let's go take a look at the Dockerfile itself. I was just playing with that; Alpine was giving me trouble earlier, so I switched over to Ubuntu. So I'm going to do an apt-get update and install, then install Flask as our little web server, and then we're going to copy our web hello Python script into it, and that will be our web server. Okay. There's a problem with Alpine right now: your host needs to have libseccomp up to date, or it won't work with Alpine. Yeah, what's with that, man? I just found that out like two hours ago. Yeah, well, that's been a problem with Alpine for about six months now. All right, so our little web hello server: it starts up Flask, and it's going to echo, looks like you've got some HTML here, it says web hello. Then it grabs the IP address of the requester, and it's going to respond with that IP address back to the client. That's what it does.
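The demo's web hello script uses Flask, which we can't reproduce verbatim here; below is a minimal stdlib-only sketch of the same behavior, building an HTML greeting that echoes the requester's IP. In the Flask version, request.remote_addr would supply the address; here it's just a parameter, so this is an illustration of the response, not the actual script from the repo.

```python
def make_greeting(client_ip: str) -> str:
    """Build the tiny HTML page the web hello service responds with.
    In the Flask app, client_ip would come from request.remote_addr."""
    return (
        "<html><body><h1>Web Hello!</h1>"
        f"<p>Your IP address is: {client_ip}</p>"
        "</body></html>"
    )

print(make_greeting("192.168.1.42"))
```

Wrapped in a Flask route and exposed on the container's port, that one function is the entire service.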
Whatever your client IP address is, yeah. It looks pretty simple. Cool. Let's go take a look at the Makefile next. I needed to do a couple of little massages here, because I want to push it to my Docker Hub; those are the credentials, so I dropped in my ID, walicki. And I wanted to make sure the container I build is unique, so I added a little -walicki suffix, and I also made sure I have the right architecture and the right pattern. So I've customized the first five lines of this Makefile, and everything else is the same. Looks like you've got a make build that does a docker build, and you've got a stop target. And this was interesting here: you push to Docker Hub, and then there are a number of hzn commands; let's go explore those. We're going to do an hzn exchange service publish, and that's going to sign the container. We've got our little service JSON file, which has a couple of properties in it, and we're going to make sure the service is then available in our web console. And then we're going to publish the pattern; those are the two big ones. Then at the bottom we're going to actually do an agent run: we're going to register this edge node to run the pattern we just built. All right, that looks cool; thank you for putting that together for us. So let's do it, fingers crossed, because it wasn't working earlier. Let's just do a clear, and we'll do a make build. So, from ubuntu:18.04, I've got the update, the install of Python, install Flask. Yeah, you changed the base image, so you changed what software is in there, so this is probably not a great example. Yeah, not a great example. Let's just walk through here. Remember, when we sign our container, we need signing keys, so we have to generate some signing keys.
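For reference, the service JSON file mentioned above might look roughly like this. This is a hedged sketch based on the general shape of an Open Horizon service definition; the org, image name, and version are placeholders, not the actual values from the demo repo, and the real file may carry additional fields.

```json
{
  "org": "myorg",
  "label": "web-hello",
  "url": "web-hello",
  "version": "1.0.0",
  "arch": "arm",
  "sharable": "singleton",
  "deployment": {
    "services": {
      "web-hello": {
        "image": "docker.io/walicki/web-hello_arm:1.0.0"
      }
    }
  }
}
```

When hzn exchange service publish processes a file like this, it signs the deployment section with your private key and records the result, plus the matching public key, in the exchange, which is what lets the agent verify the container later.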
Now, on a Raspberry Pi that takes a little while; it needs a bit of entropy. Essentially what it does is create two little files, the public key and the private key. Then we're going to do a make build (which was failing earlier), a make push, then a publish service, and then a publish pattern. Then we're going to unregister our node so it's not running that hello world anymore, and then we would run the container on that particular node. All right. And then there's a make test to test it out. Let's go see how we did here; we're still finishing the download on the other one. I don't know why, but we're sort of stuck on that, Glenn, but over in my window here I wanted to encourage everyone: let's do a little tour of the console. Let's jump over to services. Remember I had a whole variety of test services: web hello in a lot of different architectures, ARM, ARM64, Intel, and PowerPC, and I built two of them already for other architectures. That's kind of cool. Let's jump over to patterns; there's event streams and all variety of those. And policies: we don't have very many policies. So we've made the web console nice and easy so you don't have to memorize too many of the open source CLI commands; that was important. If you're planning to learn more about Open Horizon, and I know, Glenn, you've got some closing slides we can turn to pretty soon, you can go to the Open Horizon site, open-horizon.github.io. There we go, that one. We've got some nice documentation and quick starts here, and some nice videos. You can go build the open source project, and you can get the exchange up and running with the all-in-one by just grabbing it; you obviously don't need OpenShift and IBM Cloud if you just want to play with the open source project. It's a really fun way to start; this obviously downloads a number of containers to your system.
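Putting the targets described above together, the Makefile's shape is roughly as follows. This is a hedged reconstruction, not the repo's exact file: the variable names, file names, and pattern name are placeholders, and each recipe follows the steps narrated in the demo.

```makefile
# Placeholders -- the demo customizes roughly these first few lines.
DOCKERHUB_ID    ?= walicki
ARCH            ?= arm
SERVICE_NAME    ?= web-hello
SERVICE_VERSION ?= 1.0.0
IMAGE           := $(DOCKERHUB_ID)/$(SERVICE_NAME)_$(ARCH):$(SERVICE_VERSION)

build:            # docker build for this architecture
	docker build -t $(IMAGE) .

push:             # push the image up to Docker Hub
	docker push $(IMAGE)

publish-service:  # sign the container and record it in the exchange
	hzn exchange service publish -f service.json

publish-pattern:  # publish the pattern that references the service
	hzn exchange pattern publish -f pattern.json

agent-run:        # register this edge node to run the pattern
	hzn register -p "$(HZN_ORG_ID)/pattern-$(SERVICE_NAME)"
```

The build and push targets are plain Docker; everything Open Horizon adds is in the two publish targets (signing plus exchange metadata) and the final register step.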
All right, Glenn, do you want to take it away and do some closing remarks? Yeah, sure. We'll stop sharing. Let's go back over here, out of the way. I really hate the way that happens. How do I start this playing? Oh well, we can just go in like this. Okay, and I'll close these windows. Oh, now it's moved and I can hit play from start. I hate the way that Zoom puts that little toolbar right over the top of the menu of the application so you can't actually click it. I push it off to the side right away. That's not possible for me; there's nothing to grab onto. Maybe I can grab on that. Okay. Now I need to stop that and go back up here to play from the current slide. Okay, so, oh no, that's the wrong slide, sorry; it jumped to the front. I want to go back down here to this slide, and there we go. Okay, so there's an LF Edge channel on YouTube, and there's a playlist there full of technical deep dives on various technical topics, and some more detailed flow animations showing the communication between the different components, so you can see how secure it is. There's a QR code that points to that playlist. There are a dozen videos there, everything from light introductions to lots of hands-on sessions doing similar kinds of things to what John was doing here today, showing you how to actually walk through the steps of doing it yourself. And this is the link that John just showed you, and there's the source code link, and there are two different repositories that contain example programs as well. And if you want to reach out and communicate with the Open Horizon team, you can use the Matrix messaging system; LF Edge just recently switched away from Slack to Matrix, and at that URL there you'll need to create your own account, and then you can sign up and start talking to us there. And of course, you've also got my email address and John's email address on the screen here as well. So, with that, I'll stop sharing.
Yeah, Glenn, you'll notice that at the very end here my make build just finished. So that was kind of awesome. Let's do a make push, and I'm going to push that container up to Docker Hub. So now I've just built that little Python Flask container; as I edited my Dockerfile, I was hacking it to use Ubuntu. All right, so now we've got that published up to Docker Hub. It's not the smallest, because I switched away from Alpine: 180 megabytes. Yeah, as opposed to probably eight megabytes for Alpine. Right. Well, here we go. The next step is going to be make publish service, and we'll run that next. Oh, just so everyone sees, we could take questions while this is running, Glenn. Is anyone interested in asking us about Open Horizon? I don't see any questions. Does anyone want to raise their hand? I don't see any hands raised either. We wish we were doing this in person, for sure. Next time. Yeah, in person is a lot better. Come on, Raspberry Pi. Can't make my upload go any faster. There we go. All right, make publish service is my next command. Cool. Did you already do a key create command? Yes, I had already done that; remember, I talked about it. That takes a while on a Raspberry Pi as well. It does. But you can use an existing key if you have one, or hzn key create will create your cryptographic signing key for you. Yes, if you have an existing one, you can use that instead. Then hzn unregister -v: what I'm going to do is stop the hello world, and I'm going to start this one. I'll just do a clear here; let me clear my screen once that finishes. So it's tearing down the agreement for the old one. I should have picked a faster device, but I moved to the Pi 4, which is going to be a little bit faster. There we go.
All right, now we're going to do a make agent run. So, just some pro tips for you: you can reconfigure the agent to poll more frequently, so it doesn't wait around as long as it does by default. The defaults are conservative because we want to be able to hit 40,000 nodes on a management hub, but if you're doing a smaller number, you can tell the agent to poll, say, every 10 seconds instead of once every minute or whatever it is by default. That speeds up the interactivity. Also, you're using a very large image for the example here, so it takes a certain amount of time to download 300 megabytes. Best practice for work on the edge is to use a smaller base image like BusyBox or Alpine. Yeah, for sure. Or there's even a small version of Ubuntu, what they call Ubuntu Core, and that's a bit smaller as well. A few more things: we see here that we've got ourselves a pattern now. There's our web hello Python walicki pattern, and it's running a service for ARM. If I jump over to the console, off of patterns and over to services, and do the list view, we see that it looks good; there's our container. Now that we know it's running, we can do a docker ps; there it is. Let's do a clear again, and we're going to do a make test. Remember, was it make test? We should have exposed it on port 8000. Okay. Anyway, we were glad to see it running, Glenn, right? All right. If you all want to follow along with us, I'm available on GitHub and also on Twitter, so follow me there; that's kind of fun. I think we're going to close off here. Any other questions before we drop? You can come off mute if you want, or we can unmute you, that's for sure.
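The trade-off behind that polling tip is easy to quantify. A back-of-the-envelope sketch: the 40,000-node figure is from the talk, while the interval values are illustrative, not the agent's actual defaults.

```python
def polls_per_day(nodes: int, interval_seconds: int) -> int:
    """Daily poll requests a hub absorbs if every node checks in
    once per interval (86,400 seconds in a day)."""
    return nodes * (86_400 // interval_seconds)

# At 40,000 nodes, a 60s poll is already ~57.6M requests per day...
print(polls_per_day(40_000, 60))   # 57600000
# ...and dropping to 10s would multiply that load by six.
print(polls_per_day(40_000, 10))   # 345600000
# For a small lab fleet, though, a 10s poll is trivial.
print(polls_per_day(10, 10))       # 86400
```

Which is why the agent ships with conservative defaults but lets a small deployment turn the interval down for snappier demos.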
All right, we want to thank everyone for joining the LF Edge summit and our tutorial on deploying AI workloads with IBM Edge Application Manager. Be safe. Hopefully we see you next year. Bye. Thanks for joining.